Leadership in the Digital Age Starts with the Numbers

Noah Barsky
While often overshadowed by buzzwords and tech trends, financial literacy enables leaders to spot early distress signals, assess business health, and make grounded decisions. This Advisor presents two enduring insights: first, that although earnings drive headlines, it is the balance sheet that determines survivability, underscoring the importance of stewardship over growth at any cost; and second, that sales growth rate serves as a company’s “speed limit,” providing critical context for interpreting operational changes and ensuring strategic alignment. In times of transformation, disciplined financial insight remains a leader’s most dependable tool.

Overcoming Barriers to Tech-Enabled Healthcare Partnerships

Daniel Rees, Roderick Thomas, Victoria Bates, Gareth Davies
This Advisor distills insights from interviews with 48 healthcare and pharmaceutical leaders on the challenges and best practices for cross-sector collaboration using data and technology platforms. Key barriers include a lack of trust, governance complexity, misaligned incentives, and limited scalability. Success hinges on shared agendas, clear accountability, streamlined contracts, and strong leadership, all essential for embedding technology in a way that delivers lasting value to stakeholders.

Disciplining AI, Part II: Looping in Humans, Systems & Accountability — Opening Statement

Eystein Thanisch
Part II of this Amplify series on disciplining AI explores how to rigorously evaluate and govern systems amid rising concerns over safety, performance, and accountability. As GenAI plateaus in both capability and ROI, this issue argues for a shift from benchmark-based assessments to context-specific evaluations emphasizing oversight, adaptability, and explainability. It offers philosophical and technical frameworks for aligning AI outputs with human-defined goals — underscoring that effective AI integration demands continuous learning, human judgment, and strategic foresight.

Beyond the Benchmark: Developing Better AI with Evaluations

Dan North
ADL’s Dan North examines the engineering discipline of AI evaluations, arguing that this is where an LLM’s task-agnostic capabilities, as measured by benchmarks, are translated into setting-specific technology that is ready to deliver success. North emphasizes the ultimately human nature of this discipline: an organization must define the kinds of outputs it is looking for through an inclusive process involving customers and stakeholders alongside engineers.

Accountable AI?

Paul Clermont
Paul Clermont reminds us of a crucial but under-recognized trait of true AI: learning and improvement in response to feedback independent of explicit human design. The implementation of human requirements is thus both highly feasible and dialogic. Clermont stresses the high level of responsibility borne by humans when interacting with AI. Critical thinking about inputs and outputs, and awareness of both objectives and social context, remain firmly human (and sometimes regulatory) responsibilities — no matter how widely terms like “AI accountability” have gained currency.

Why Judgment, Not Accuracy, Will Decide the Future of Agentic AI

Joe Allen
Joe Allen joins the critique of benchmarks as the key means of measuring AI systems, with a focus on agents. Given the open world in which they operate, agents have autonomy over both planning and execution, and Allen argues they have already outgrown even the criteria used to evaluate LLMs. As self-organizing systems, agents should be assessed on their ability to follow their own plans consistently while adapting to unforeseen eventualities — an evaluation that cannot be fully scripted in advance. Drawing on direct experience, Allen details a suite of techniques that can be used to track and improve agent performance in terms of internal coherence, adherence to a context model, and more.
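
To make the idea concrete, the sketch below scores a single agent run against the plan the agent itself produced. It is a hypothetical illustration in Python; the data structures, helper names, and metrics are invented and are not Allen’s actual tooling.

```python
# Hypothetical sketch: scoring one agent run against its own declared plan.
# Structures and metrics are illustrative, not drawn from Allen's article.
from dataclasses import dataclass

@dataclass
class AgentRun:
    plan: list[str]    # steps the agent said it would take
    trace: list[str]   # steps it actually executed

def plan_adherence(run: AgentRun) -> dict:
    """Compare the executed trace against the declared plan."""
    planned, executed = set(run.plan), set(run.trace)
    followed = planned & executed        # planned steps that were carried out
    improvised = executed - planned      # steps taken that were never planned
    return {
        "coverage": len(followed) / max(len(planned), 1),
        "improvisation_rate": len(improvised) / max(len(executed), 1),
        "skipped_steps": sorted(planned - executed),
    }

run = AgentRun(
    plan=["fetch_order", "check_inventory", "schedule_shipment"],
    trace=["fetch_order", "check_inventory", "email_customer"],
)
print(plan_adherence(run))
```

Tracked across many runs, scores like these offer a view of internal coherence and adaptation that no pre-scripted benchmark can capture.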

AI Asset Survival in the Age of Exponential Tech

Chirag Kundalia, V. Kavida
Chirag Kundalia and V. Kavida propose using survival analysis to understand when, and under what circumstances, an AI system might need maintenance or replacement. This approach involves modeling the intrinsic and extrinsic factors that could render a system no longer fit for purpose. It also requires organizations to define the standard of output the system must deliver, monitor that performance, and scan the horizon for relevant externalities (e.g., superior technology). Although survival analysis cannot anticipate every eventuality, modeling the future of an AI system enhances budgeting, compliance reporting, and strategic planning.
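
For readers unfamiliar with the technique, the sketch below suggests what such a model might look like in code, using the open source lifelines library’s Cox proportional hazards model. The dataset, column names, and covariates are invented for illustration and do not reflect the authors’ own specification.

```python
# Illustrative only: fitting a survival model to hypothetical AI asset records.
# The columns stand in for the intrinsic and extrinsic factors the authors
# describe; none of the figures come from the article.
import pandas as pd
from lifelines import CoxPHFitter

assets = pd.DataFrame({
    "months_in_service": [14, 30, 22, 8, 41, 27, 19, 35],   # observed lifetime to date
    "retired": [1, 1, 0, 0, 1, 0, 1, 1],                    # 1 = replaced, 0 = still in use (censored)
    "accuracy_drift": [0.12, 0.30, 0.05, 0.02, 0.25, 0.08, 0.18, 0.22],  # intrinsic factor
    "rival_tech_releases": [1, 3, 0, 0, 4, 1, 2, 3],         # extrinsic factor
})

# A small penalty keeps the fit stable on this tiny illustrative dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(assets, duration_col="months_in_service", event_col="retired")
cph.print_summary()                # hazard ratio for each factor
print(cph.predict_median(assets))  # estimated median time to replacement per asset
```

In practice, the covariates would be the monitored output standards and horizon-scanning signals the authors describe, refreshed as new observations arrive.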

Taco Bell, 18,000 Waters & Why Benchmarks Don’t Matter

Michael Papadopoulos, Olivier Pilot, Eystein Thanisch
ADL’s Michael Papadopoulos, Olivier Pilot, and Eystein Thanisch argue that with model performance plateauing, the critical differentiator for AI in 2025 is how the technology is integrated into the specifics of an organization and the context in which it will operate. Benchmarking, which is concerned with models out of context, is of diminishing applicability. After reviewing some entertaining but troubling examples of theoretically capable AI systems disconnecting disastrously from data, business rules, and basic plausibility, they propose an evaluation framework more attuned to contemporary challenges to prevent similar outcomes.

Building Ethical Boards Through Leader Character

Trevor Hunter
This Advisor explores how leader character can strengthen board governance and support ethical, effective decision-making in the face of growing ESG demands. It argues that traits such as courage, integrity, humility, and judgment are essential for directors to fulfill their fiduciary duties while navigating complex stakeholder expectations. By embracing character-driven leadership, boards can move beyond compliance to shape resilient, purpose-led organizations capable of long-term value creation.

A Simulation-First Case Study: Rethinking AI’s Role in Clinical Decision-Making

Joseph Farrington
This Advisor presents a case study from a major London hospital, showing how a simulation-first approach can guide AI model development in complex clinical settings. By simulating blood bank operations, the team assessed the potential value of predictive models before investing in data acquisition and model training. The simulation revealed that even moderately accurate predictions could reduce waste, especially when combined with optimized ordering policies. Beyond technical insights, the process fostered early stakeholder collaboration, ensuring the final model addressed real-world needs.
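
As a rough illustration of the underlying logic (not the hospital team’s actual model or parameters), a few lines of Python show how a simulation can bound the value of a forecast before any data acquisition or model training begins. The demand figures, accuracy level, and ordering rules are invented, and shortages are ignored to keep the toy example short.

```python
# Toy simulation: how much waste could a moderately accurate demand forecast
# avoid? All numbers are illustrative, not from the hospital case study.
import random

random.seed(42)
DAYS = 1000

def simulate(policy) -> float:
    """Return average units wasted per day under an ordering policy."""
    wasted = 0
    for _ in range(DAYS):
        demand = max(0, int(random.gauss(10, 3)))   # true daily demand
        ordered = policy(demand)
        wasted += max(0, ordered - demand)          # unused units are discarded
    return wasted / DAYS

def fixed(demand):                  # order a generous fixed amount every day
    return 16

def forecast(demand, error_sd=2):   # imperfect prediction plus a small buffer
    predicted = demand + random.gauss(0, error_sd)
    return max(0, int(predicted + 2))

print("fixed policy waste/day:   ", round(simulate(fixed), 2))
print("forecast policy waste/day:", round(simulate(forecast), 2))
```

Even this toy comparison echoes the case study’s point: the value of a moderately accurate forecast depends on the ordering policy wrapped around it, which is precisely what a simulation-first exercise surfaces before development starts.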

Leading the Leadershift: Rethinking Transformation from the Top Down

Jeremy Blain
This Advisor urges leaders to move beyond surface-level digital upgrades and confront the deeper cultural and strategic shifts required for true transformation. By focusing on digital ambition, data-driven decision-making, and organizational alignment, it provides a roadmap for boards and executives to overcome inertia, reengage their teams, and lead with purpose in a rapidly evolving digital landscape.

On-Orbit Data Centers: Mapping the Leaders in Space-Based AI Computing

Curt Hall
This Advisor, Part II in a series that explores the current state of on-orbit data centers, spotlights the countries, companies, and initiatives leading their development. From national space agencies to emerging start-ups, it maps the key players building space-based computing infrastructure — transforming satellites into intelligent, autonomous systems that can process data in real time.

Balancing the Boardroom with Lead Independent Directors

Alessia Falsarone
This Advisor explores the growing importance of lead independent directors (LIDs) in strengthening board oversight, especially in firms with concentrated leadership or weak governance. The LID plays a key role in balancing power, fostering accountability, and enhancing investor trust, but only when backed by real authority and engagement.

Governing the Quantum Frontier

Guido Peterssen Nodarse, Jose Luis Hevia
This Advisor argues that to unlock the full potential of private quantum hubs (PQHs), organizations must implement centralized governance systems tailored to the unique demands of quantum ecosystems. These systems must coordinate diverse technologies, manage hybrid architectures, and ensure secure, scalable access for a broad spectrum of users. Without such purpose-built governance, PQHs risk inefficiency and fragmentation — undermining their role as key enablers of enterprise quantum innovation.

Bridging the Board–Management Gap with AI

David Larcker, Amit Seru, Brian Tayan, Laurie Yoler
As this Advisor explores, AI is transforming corporate governance by narrowing the information gap between boards and management. With greater access to data and analysis, directors can exercise more informed oversight, but they also face higher expectations, new legal questions, and cybersecurity risks.

Disciplining AI, Part I: Evaluation Through Industry Lenses — Opening Statement

Eystein Thanisch
In Part I of this two-part Amplify series on AI evaluation, we explore the impetus toward AI accountability that arises from tackling real problems in real-world settings. Understanding how AI can contribute, at what cost, and with what nth-order effects in a given context requires rigorous socio-technical systems thinking.

Explain Yourself: The Legal Requirements Governing Explainability

Rosie Nance, Marcus Evans, Lisa Fitzgerald, Lily Hands
As Marcus Evans, Rosie Nance, Lisa Fitzgerald, and Lily Hands explore, AI explainability is a legal requirement as well as a scientific challenge. Despite the EU’s and UK’s differing approaches to other aspects of AI regulation, both the EU and UK GDPR continue to uphold individuals’ right to an explanation of automated or semiautomated decisions that affect them significantly. The EU AI Act also provides a right to an explanation for individuals or organizations.

Measuring AI’s Impact Across the Fashion Value Chain

Kitty Yeung
Kitty Yeung urges us to elevate our thinking and consider what we are trying to achieve — via AI or otherwise. She argues that the fashion industry has long failed to appreciate the imaginative journeys consumers are taking, journeys that weave together self, situations, social circles, and eclectic wearables. Destructive practices like fast fashion represent flawed attempts to address human complexity with incomplete information, cumbersome supply chains, and a narrow anthropology that undervalues consumers’ creative agency.

A Simulation-First Approach to AI Development

Joseph Farrington
Joseph Farrington emphasizes the importance of evaluating AI systems against their end goals. In healthcare, where developing and deploying AI models is especially challenging, he argues for first modeling the business context and processes the AI will interact with — before moving ahead with development or deployment. This approach can be used to assess, in advance, whether a plausible AI model will provide the intended benefit. It can also be used to run alternative scenarios to identify what else might need to change for the AI to really work or what else might work better if the AI were in place.

AI’s Impact on Expertise

Joseph Byrum
Joseph Byrum introduces a framework to help organizations plan rationally and prudently for AI adoption. One element is defining performance thresholds beyond which emerging technologies become economically viable. Another is assessing how AI and humans should interact across different business functions: some tasks can be commoditized and handled by AI, while others remain critical differentiators under human responsibility, with hybrid possibilities in between.

AI in B2B Publishing: The Promise & the Peril

Daniel Flatt
Daniel Flatt contends that an evaluation framework that promotes accuracy and objectivity is a commercial necessity. He points to B2B publishing as a sector of particular note: an industry built on credibility and accountability that AI could undermine, even as it offers opportunities to accelerate time to output. In response, new tools for detecting inaccuracy and bias in AI-generated copy are emerging, alongside collaborations across the publishing workflow and the wider industry.

On-Orbit Data Centers: Enabling Technologies & Emerging Capabilities

Curt Hall
This Advisor series explores the emergence of on-orbit data centers — space-based platforms that enable real-time, AI-powered data processing and analysis directly in orbit. Here in Part I, we examine the enabling technologies, key benefits, and transformative potential of these systems for autonomous space operations, Earth observation, defense, manufacturing, and beyond.

Stop Wasting Board Expertise

Siah Hwee Ang
Most organizations underutilize their boards, treating directors primarily as oversight agents rather than strategic partners. This Advisor recommends a shift toward deeper, ongoing engagement between directors and management to unlock the full value of board expertise. By institutionalizing both hard and soft skills — through regular interaction, strategic involvement, and better succession planning — organizations can ensure board knowledge becomes a lasting asset, not a transient one.

Powering Real-World Innovation with Quantum Algorithms

Michal Baczyk
This Advisor explores the accelerating convergence of academic research and industry investment in quantum algorithms, marking a shift from theoretical promise to practical application. From simulating molecules in drug discovery to optimizing logistics and advancing financial modeling, quantum algorithms are beginning to deliver domain-specific value. As industries invest in identifying where quantum advantage matters most, the focus is turning to integrating these tools into real-world workflows and architectures.

The Complexity Paradox: How CEO Overconfidence Shapes Strategy

Shuhui Wang, Hirindu Kawshala
This Advisor explores how CEO overconfidence influences firm complexity, drawing on analysis of over 14,000 earnings call transcripts. The findings show a 3.1% reduction in complexity among firms led by overconfident CEOs; while confident leaders may frame this as streamlining operations, it raises concerns about oversimplification. Boards and investors should carefully evaluate whether reduced complexity reflects strategic clarity or signals deeper organizational blind spots.