Strategic advice on leveraging new technologies

Technology is at the heart of nearly every enterprise, enabling new business models and strategies and acting as a catalyst for industry convergence. Leveraging the right technology can improve business outcomes, providing the intelligence and insights that help you make more informed, accurate decisions. From finding patterns in data through data science, to curating relevant insights with data analytics, to the predictive abilities and innumerable applications of AI, to solving challenging business problems with ML, NLP, and knowledge graphs, technology has raised decision-making to a more intelligent level. Keep pace with the technology trends, opportunities, applications, and real-world use cases that will move your organization closer to its transformation and business goals.


Recently Published

Part II of this Amplify series on disciplining AI explores how to rigorously evaluate and govern systems amid rising concerns over safety, performance, and accountability. As GenAI plateaus in both capability and ROI, this issue argues for a shift from benchmark-based assessments to context-specific evaluations emphasizing oversight, adaptability, and explainability. It offers philosophical and technical frameworks for aligning AI outputs with human-defined goals — underscoring that effective AI integration demands continuous learning, human judgment, and strategic foresight.
ADL’s Dan North examines the engineering discipline of AI evaluations, arguing that this is where an LLM’s task-agnostic capabilities, as measured by benchmarks, are translated into setting-specific technology that is ready to deliver success. North emphasizes the ultimately human nature of this discipline: an organization must define the kinds of outputs it is looking for through an inclusive process that involves customers and stakeholders alongside engineers.
Paul Clermont reminds us of a crucial but under-recognized trait of true AI: it learns and improves in response to feedback, independently of explicit human design. Implementing human requirements is thus both highly feasible and dialogic. Clermont stresses the high level of responsibility humans bear when interacting with AI. Critical thinking about inputs and outputs, and awareness of both objectives and social context, remain firmly human (and sometimes regulatory) responsibilities, no matter how widely terms like “AI accountability” have gained currency.
Joe Allen joins the critique of benchmarks as the key means of measuring AI systems, with a focus on agents. Given the open world in which they operate, agents have autonomy over both planning and execution, and Allen argues they have already outgrown even the criteria used to evaluate LLMs. As self-organizing systems, agents should be assessed on their ability to follow their own plans consistently while adapting to unforeseen eventualities — an evaluation that cannot be fully scripted in advance. Drawing on direct experience, Allen details a suite of techniques that can be used to track and improve agent performance in terms of internal coherence, adherence to a context model, and more.
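Allen’s own techniques are not reproduced in this summary; purely as an illustration of the idea, one simple way to track plan adherence from an agent trace might be to log the plan the agent commits to and the actions it actually takes, then score coverage and detours. All names and numbers in the sketch below are hypothetical.

```python
# Illustrative only: Allen's techniques are not reproduced here. One assumed
# way to track plan adherence is to log the agent's own plan and its executed
# actions, then score how consistently the trace follows that plan.
from dataclasses import dataclass

@dataclass
class AgentTrace:
    plan: list[str]       # steps the agent committed to up front
    actions: list[str]    # steps it actually executed, in order

def plan_adherence(trace: AgentTrace) -> dict:
    """Score how much of the plan was followed, in order, and how many
    unplanned detours occurred (detours may be legitimate adaptations)."""
    followed, i, detours = 0, 0, 0
    for action in trace.actions:
        if i < len(trace.plan) and action == trace.plan[i]:
            followed += 1
            i += 1
        else:
            detours += 1          # out-of-order step or step the plan never mentioned
    return {
        "plan_coverage": followed / len(trace.plan),
        "detour_rate": detours / max(len(trace.actions), 1),
    }

trace = AgentTrace(
    plan=["fetch_order", "check_stock", "reserve_items", "confirm"],
    actions=["fetch_order", "check_stock", "ask_user_clarification",
             "reserve_items", "confirm"],
)
print(plan_adherence(trace))   # {'plan_coverage': 1.0, 'detour_rate': 0.2}
```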
Chirag Kundalia and V. Kavida propose using survival analysis to understand when, and under what circumstances, an AI system might need maintenance or replacement. This approach involves modeling the intrinsic and extrinsic factors that could render a system no longer fit for purpose. It also requires organizations to define the standard of output the system must deliver, monitor that performance, and scan the horizon for relevant externalities (e.g., superior technology). Although survival analysis cannot anticipate every eventuality, modeling the future of an AI system enhances budgeting, compliance reporting, and strategic planning.
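The summary does not include the authors’ model, but a minimal sketch of the idea, using Python’s lifelines library over entirely hypothetical deployment records, might look like the following: a Cox proportional-hazards fit over assumed intrinsic and extrinsic factors, with the “event” defined as a model falling below its agreed output standard.

```python
# Minimal sketch (not the authors' model): survival analysis over hypothetical
# model-deployment records, where the "event" is a model dropping below its
# required output standard and needing retraining or replacement.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per deployed model.
# months_live  = observed lifetime in months
# degraded     = 1 if the model fell below the agreed standard, 0 if still fine (censored)
# drift_rate   = extrinsic factor: monthly input-data drift score
# retrain_freq = intrinsic factor: scheduled retrainings per year
deployments = pd.DataFrame({
    "months_live":  [6, 14, 9, 24, 18, 3, 30, 12],
    "degraded":     [1, 1, 1, 0, 1, 1, 0, 1],
    "drift_rate":   [0.8, 0.3, 0.6, 0.1, 0.2, 0.9, 0.1, 0.5],
    "retrain_freq": [1, 4, 2, 12, 6, 0, 12, 2],
})

cph = CoxPHFitter()
cph.fit(deployments, duration_col="months_live", event_col="degraded")
cph.print_summary()  # hazard ratios for the intrinsic/extrinsic factors

# Expected remaining useful life for a planned deployment profile feeds
# budgeting, compliance reporting, and replacement planning.
new_model = pd.DataFrame({"drift_rate": [0.4], "retrain_freq": [4]})
print(cph.predict_median(new_model))
```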
ADL’s Michael Papadopoulos, Olivier Pilot, and Eystein Thanish argue that with model performance plateauing, the critical differentiator for AI in 2025 is how the technology is integrated into the specifics of an organization — and the context in which it will operate. Benchmarking, which is concerned with models out of context, is of diminishing applicability. After reviewing some entertaining but troubling examples of theoretically capable AI systems disconnecting disastrously from data, business rules, and basic plausibility, they propose an evaluation framework more attuned to contemporary challenges to prevent similar outcomes.
This Advisor presents a case study from a major London hospital, showing how a simulation-first approach can guide AI model development in complex clinical settings. By simulating blood bank operations, the team assessed the potential value of predictive models before investing in data acquisition and model training. The simulation revealed that even moderately accurate predictions could reduce waste, especially when combined with optimized ordering policies. Beyond technical insights, the process fostered early stakeholder collaboration, ensuring the final model addressed real-world needs.
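The hospital’s actual simulation is not reproduced in the Advisor; the toy Monte Carlo sketch below, built on assumed demand, shelf-life, and ordering parameters, simply illustrates how a simulation-first question (“would a forecast of this accuracy reduce waste?”) can be asked before any data acquisition or model training.

```python
# Toy sketch only (assumed numbers, not the hospital's model): Monte Carlo
# simulation of a blood bank to test whether a demand forecast of a given
# accuracy would reduce waste before any real predictive model is built.
import numpy as np

rng = np.random.default_rng(42)
DAYS, SHELF_LIFE = 365, 35                 # assumed 35-day shelf life
true_demand = rng.poisson(lam=20, size=DAYS)

def simulate_waste(order_policy, forecast_noise_sd):
    """Return units wasted per year under a daily ordering policy."""
    stock, wasted = [], 0                  # stock holds the age of each unit
    for day in range(DAYS):
        forecast = max(0, true_demand[day] + rng.normal(0, forecast_noise_sd))
        stock += [0] * order_policy(forecast)       # receive today's order
        for _ in range(true_demand[day]):           # issue oldest units first
            if stock:
                stock.pop(stock.index(max(stock)))
        stock = [age + 1 for age in stock]
        wasted += sum(age >= SHELF_LIFE for age in stock)
        stock = [age for age in stock if age < SHELF_LIFE]
    return wasted

fixed = lambda f: 25                          # order a fixed buffer every day
forecast_led = lambda f: int(round(f * 1.1))  # order forecast + 10% safety margin

print("fixed ordering, no forecast: ", simulate_waste(fixed, 0))
print("moderately accurate forecast:", simulate_waste(forecast_led, 3))
```

Comparing policies this way, before committing to data acquisition, is the kind of low-cost evidence the simulation-first approach is meant to generate.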
This Advisor, Part II in a series that explores the current state of on-orbit data centers, spotlights the countries, companies, and initiatives leading their development. From national space agencies to emerging start-ups, it maps the key players building space-based computing infrastructure — transforming satellites into intelligent, autonomous systems that can process data in real time.