Digital Strategy, Operating Models & Technology Implementation Insight
Boost business success with insights on emerging trends in digital transformation and IT strategy; practical frameworks you can apply; and guidance from leading experts in leadership, IaaS, investment prioritization, operational excellence, sustainable innovation, change management, enterprise agility, and the application of emerging technologies.
While the world enjoys the benefits of the fourth industrial revolution, the cyber threats facing businesses are growing in both sophistication and frequency. What can business leaders do to strengthen their resilience to these threats? Leaders must first recognize that risks in the digital space threaten the success of the business just as seriously as the more familiar risks of the physical world. To build effective digital resilience, leaders must then mount a C-suite response that combines robust technology with a supportive organizational culture.
Cutter Consortium Senior Consultant Paul Clermont describes some of the impact AI has had at the boundaries of commercial organizations and public policy in an article aptly entitled, “Who Knew THAT Would Happen?” Those of us who have experienced the unintended consequences of other technologies will want to answer “anybody” but should remember that not everyone shares those memories, and that hindsight is perfect. Clermont explores how to identify possible unintended consequences in advance and proposes countermeasures to the negative ones in the form of design principles and public policies.
Cutter Consortium Fellow Lynne Ellyn recounts her experiences with AI technology in the real world, surveys the current landscape, and identifies key nontechnical issues that companies are likely to face when deploying AI-based systems.
As AI becomes more visible as a strategic tool, organizations will have to incorporate the issues surrounding AI into corporate strategy. Pavankumar Mulgund and Sam Marrazzo provide a framework for developing an AI strategy. The authors discuss the “minimum viable model” approach to developing the underlying AI/ML models, the platform on which these models run, and the inevitable tradeoffs. They conclude by examining some best practices for the successful implementation of AI initiatives.
One way of getting an off-course system (or person) back on track is by nudging. This concept can be particularly useful in goal-directed systems, but nudges, too, can go wrong. In his article, Richard Veryard describes technologically mediated nudging; its possible unintended consequences; and the need to consider the planning, design and testing, and operation of the system for robust and responsible nudging.
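To make the idea concrete (this sketch is not drawn from Veryard’s article; all names and numbers are hypothetical), a nudge in a goal-directed system can be modeled as a small corrective signal applied only when the system drifts beyond a threshold, and capped so it remains gentle rather than coercive:

```python
def nudge(position, goal, threshold=1.0, strength=0.2):
    """Return a small corrective step only when drift exceeds the threshold.

    Hypothetical illustration: a gentle pull toward the goal, never a hard
    override -- the defining property of a nudge.
    """
    drift = goal - position
    if abs(drift) <= threshold:
        return 0.0  # on course: do not intervene
    # cap the correction so the nudge stays gentle
    return max(-strength, min(strength, drift))

# Simulate a system that starts off course and is nudged toward goal = 0.0
position = 5.0
for _ in range(40):
    position += nudge(position, goal=0.0)

print(position)  # settles at 1.0, the edge of the threshold band
```

Note that this naive design never reaches the goal exactly; it stops at the edge of the threshold band, a small example of the kind of unintended consequence that makes careful design and testing of nudging systems worthwhile.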
Experienced IT practitioners know that errors will occur. A big part of building and managing complex systems is risk management: identifying risks and developing mitigation strategies. This is hard enough when documentation and source code exist. But the current state of ML-based AI tends to produce opaque black boxes, which makes this activity, um, challenging. David Biros, Madhav Sharma, and Jacob Biros explore the implications for organizations and their processes.
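As a rough illustration of one mitigation for opacity (a hypothetical sketch, not code from the article), a black-box model can be probed from the outside by perturbing each input and observing how the output changes, the intuition behind local-explanation techniques such as LIME:

```python
def opaque_model(features):
    """Stand-in for a black-box ML model: the caller cannot see this logic."""
    income, age, debt = features
    return 1 if (income - 2 * debt > 50 and age > 21) else 0

def sensitivity(model, features, delta=1.0):
    """Estimate each feature's local influence by perturbing it slightly."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        scores.append(model(perturbed) - base)
    return scores

applicant = [52.0, 30.0, 1.0]  # income, age, debt (hypothetical units)
print(sensitivity(opaque_model, applicant))  # prints [1, 0, 0]
```

For this applicant, the probe reveals that the decision is locally dominated by income, the kind of insight a risk assessment needs even when the model’s internals are unavailable.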
William Jolitz’s article takes us to outer space (well, low Earth orbit, actually) to examine the issues around AI (in its ML incarnation) employed in a NASA system that tracks orbital debris. Jolitz, creator of 386BSD (an open source Berkeley Software Distribution), makes the case for organization-wide awareness and alignment around ML and argues that, like security, transparency cannot be bolted on later; it must be addressed at a project’s origin.
The contributions in this issue of CBTJ will help us get up to speed with the current state of AI and think about some of the issues raised when we look beyond systems that appear to work as intended. Our contributors span industry and academia, and their commentary provides a good overview of the problem.