Strategic advice for leveraging new technologies

Technology is at the heart of nearly every enterprise, enabling new business models and strategies and serving as a catalyst for industry convergence. Leveraging the right technology can improve business outcomes, providing intelligence and insights that help you make more informed and accurate decisions. From finding patterns in data through data science, to curating relevant insights with data analytics, to the predictive abilities and innumerable applications of AI, to solving challenging business problems with ML, NLP, and knowledge graphs, technology has brought decision-making to a more intelligent level. Keep pace with the technology trends, opportunities, applications, and real-world use cases that will move your organization closer to its transformation and business goals.

Subscribe to Arthur D. Little's Technology Newsletters

Insight

Cutter Expert Curt Hall explores how neuro-symbolic AI — a fusion of data-driven neural networks and rule-based symbolic reasoning — can address key limitations in today’s agentic AI systems. By combining pattern recognition and adaptability with logic, structure, and explainability, neuro-symbolic architectures enable autonomous agents to operate with greater reliability, transparency, and compliance. He highlights practical use cases where this hybrid approach mitigates risks such as hallucinations, planning errors, and opaque decision-making, laying the groundwork for trustworthy, real-world autonomy.
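To make the hybrid pattern concrete, the sketch below shows one minimal way such an architecture might be wired: a neural (or LLM-backed) component proposes an action, and an explicit symbolic rule layer validates or blocks it with a human-readable reason. The function names, rules, and thresholds are hypothetical illustrations, not drawn from Hall's article.

```python
# Illustrative sketch only: a hypothetical neuro-symbolic guardrail in which a
# neural/LLM component proposes an action and a symbolic rule layer checks it.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ProposedAction:
    name: str
    params: dict


# Hypothetical symbolic rules: explicit, inspectable constraints that return a
# human-readable violation message, or None when the action is acceptable.
RULES: list[Callable[[ProposedAction], Optional[str]]] = [
    lambda a: "refunds over 500 require human approval"
    if a.name == "issue_refund" and a.params.get("amount", 0) > 500
    else None,
    lambda a: "unknown action" if a.name not in {"issue_refund", "send_email"} else None,
]


def neural_propose(user_request: str) -> ProposedAction:
    """Stand-in for the neural/LLM side: maps a request to a proposed action."""
    return ProposedAction("issue_refund", {"amount": 800})


def symbolic_check(action: ProposedAction) -> list[str]:
    """Symbolic side: applies every rule and collects explainable violations."""
    return [msg for rule in RULES if (msg := rule(action)) is not None]


if __name__ == "__main__":
    action = neural_propose("Please refund my order")
    violations = symbolic_check(action)
    if violations:
        print("Escalating to a human:", violations)  # transparent, auditable refusal
    else:
        print("Executing:", action)
```

Keeping the constraints as explicit rules, rather than buried in model weights, is what makes the refusal above explainable and auditable.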
In this Advisor, ADL’s Michael Papadopoulos, Olivier Pilot, and Eystein Thanisch explore how AI’s greatest challenges in 2025 aren’t about intelligence, but context. From viral drive-through debacles to high-stakes corporate missteps, the problem isn’t capability — it’s integration. As top models plateau on benchmarks, their real-world performance often falters when they’re not tightly aligned with human workflows, norms, and systems.
This Advisor explores agentic AI — the next phase beyond generative AI — where systems not only generate outputs but also autonomously plan and execute tasks across enterprise environments. By combining LLMs with orchestration, tool integration, and feedback mechanisms, agentic AI enables end-to-end workflow automation with minimal human oversight. While the potential is substantial, technical, organizational, and governance challenges still limit deployment to early-stage pilots in most enterprises.
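The Advisor describes this architecture at a conceptual level; as a rough illustration of the loop it implies, the sketch below pairs a stubbed LLM planner with a small tool registry and feeds each tool result back into the next planning step. The tool names, planner logic, and stopping condition are invented for illustration.

```python
# Illustrative sketch only: a hypothetical agent loop combining an LLM planner,
# tool integration, and a feedback mechanism.
from typing import Callable

# Hypothetical tool registry: in a real system these would call enterprise APIs.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda arg: f"order {arg}: shipped, arriving Friday",
    "draft_email": lambda arg: f"draft email based on: {arg}",
}


def llm_plan(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM call that chooses the next tool and its argument."""
    if not history:                        # first step: gather information
        return "lookup_order", "12345"
    return "draft_email", history[-1]      # then act on what was learned


def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = llm_plan(goal, history)
        result = TOOLS[tool](arg)          # tool integration
        history.append(result)             # feedback loop into the next plan
        if tool == "draft_email":          # crude stopping condition
            break
    return history


print(run_agent("Update the customer on order 12345"))
```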
This Advisor explores how agentic AI — autonomous systems capable of reasoning and acting across multi-step workflows — is rapidly reshaping enterprise operations. Drawing on field research and market signals, it distinguishes between firms embedding agents into core processes and those stalled in pilot purgatory due to fragmented data and governance. Early winners aren’t just using better models; they’re better prepared — with mature data pipelines, governed toolchains, and institutionalized knowledge.
This Advisor distills insights from interviews with 48 healthcare and pharmaceutical leaders on the challenges and best practices for cross-sector collaboration using data and technology platforms. Key barriers include trust, governance complexity, misaligned incentives, and limited scalability. Success hinges on shared agendas, clear accountability, streamlined contracts, and strong leadership — all essential for embedding technology in a way that delivers lasting value to stakeholders.
Part II of this Amplify series on disciplining AI explores how to rigorously evaluate and govern systems amid rising concerns over safety, performance, and accountability. As GenAI plateaus in both capability and ROI, this issue argues for a shift from benchmark-based assessments to context-specific evaluations emphasizing oversight, adaptability, and explainability. It offers philosophical and technical frameworks for aligning AI outputs with human-defined goals — underscoring that effective AI integration demands continuous learning, human judgment, and strategic foresight.
ADL’s Dan North examines the engineering discipline of AI evaluations, arguing that this is where an LLM’s task-agnostic capabilities, as measured by benchmarks, are translated into technology that is setting-specific and ready to deliver success. North emphasizes the ultimately human nature of this discipline: an organization must define the kinds of outputs it is looking for through an inclusive process involving customers and stakeholders alongside engineers.
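North's piece is about the discipline rather than any particular tooling, but an organization-specific evaluation harness of the kind he gestures at might look like the minimal sketch below, where the success criteria are authored by stakeholders rather than taken from a public benchmark. The model stub, prompts, and checks are hypothetical.

```python
# Illustrative sketch only: a context-specific eval harness with
# stakeholder-defined success criteria.
from typing import Callable


def model_answer(prompt: str) -> str:
    """Stand-in for the LLM-backed feature being evaluated."""
    return "Items can be returned within 30 days with a receipt."


# Stakeholder-authored cases: realistic prompts paired with human-defined
# checks, not leaderboard metrics.
EVAL_CASES: list[tuple[str, Callable[[str], bool]]] = [
    ("What is the returns window?", lambda out: "30 days" in out),
    ("Do I need a receipt to return something?", lambda out: "receipt" in out.lower()),
]


def run_evals() -> float:
    passed = sum(check(model_answer(prompt)) for prompt, check in EVAL_CASES)
    return passed / len(EVAL_CASES)


print(f"pass rate: {run_evals():.0%}")
```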
Paul Clermont reminds us of a crucial but under-recognized trait of true AI: learning and improvement in response to feedback, independent of explicit human design. Implementing human requirements is therefore both highly feasible and inherently dialogic. Clermont stresses the high level of responsibility borne by humans when interacting with AI. Critical thinking about inputs and outputs, and awareness of both objectives and social context, remain firmly human (and sometimes regulatory) responsibilities — no matter how widely terms like “AI accountability” have gained currency.