Executive Update

Information Superiority from Operational Excellence

Posted November 12, 2020 in Business Technology & Digital Transformation Strategies, Data Analytics & Digital Technologies

DA & DT EXECUTIVE UPDATE VOL. 20, NO. 13

This is the third and final article that explores how information superiority can support organizations in achieving the three strategic value disciplines that emerged in the early 1990s. The first article, in Cutter Business Technology Journal, described how information superiority can deliver customer centricity, while the second article looked at how it can deliver product leadership. In this Executive Update, we turn our attention to operational excellence.

Recap: Information Superiority

Information superiority is based on the idea that the ability to collect, process, and disseminate an uninterrupted flow of information will give you operational and strategic advantage. The advantage comes not only from the quantity and quality of information at your disposal, but also from processing this information faster than your competitors and/or fast enough for your customers.

Information superiority involves the following capabilities:

  • Rich data gathering

  • Sense making (situation awareness, model building)

  • Decision making (evidence-based policy)

  • Rapid feedback (adaptive response and anticipation)

  • Organizational learning (knowledge and culture)

  • Effective collaboration

What Is Operational Excellence?

After Steve Jobs died, when asked what he thought was Jobs’s greatest strength, his former partner Steve Wozniak is said to have responded, “Everyone else will say vision, and gosh darn that’s important, but that doesn’t go anywhere without operational discipline.... [Steve] organized the company to have good, tight controls. Watching everything he could — that is operational excellence.”

Many executives see operational excellence in terms of control — especially cost control and risk management. Traditional management accounting provides an entry-level version of cost control, and some organizations never rise above this level. The abbreviation OPEX can mean either operational excellence or operating expenses, and the two are often (wrongly) conflated; for the purposes of this Update, OPEX refers to operational excellence. Later, I’ll come back to the question of how organizations can go beyond basic management accounting and leverage advanced data and analytics for control purposes.

But there is an even more strategic view of operational excellence, which is about operating effectively as well as efficiently. Operational processes need to be repeatable, fault-tolerant, scalable, and so on. These are not merely internally focused goals. In their 1993 Harvard Business Review article, Michael Treacy and Fred Wiersema described operational excellence in terms of delivering price and convenience to the customer. Twenty years later, in his Executive Report “Achieving Operational Excellence,” Cutter Consortium Senior Consultant Andrew Spanyi focused on continuous improvement and exceeding customer expectations, as well as on cost reduction.

Controlling Cost

The accountant — or “bean-counter” — has always been an important figure within a large organization; as Accounting Professor Prem Sikka once noted, “accounting technologies are at the heart of contemporary business practices.” So, one starting point for understanding and controlling costs is through an accountancy viewpoint. If we can identify where all the costs are incurred, then we can identify opportunities to reduce costs. But cost control is no longer dominated by this viewpoint; Spanyi indicates several important considerations:

  1. Value. “The emphasis on creating value for customers trumps the emphasis on cost reduction.”

  2. IT. “IT is now an essential ingredient in optimizing [operational excellence] performance.”

  3. Process. “An end-to-end business process must show a shared understanding of what is done, by whom, and how.”

  4. Customer perspective. “In measuring what matters to customers, it's important to view the business from the ‘outside-in.’ ”

To understand costs properly, we need a detailed tracking of how processes operate. In the old days, someone might have worked out that a given activity (e.g., the customer service desk) consumed X percent of the total cost-to-serve, decided (based on comparison with industry benchmarks or just gut feel) that it ought to cost at most Y percent, and then reduced the resources allocated to this activity — without understanding whether this would have a knock-on effect on other activities, let alone calculating the negative effect on customer satisfaction. H. Thomas Johnson criticizes the use of management accounting targets (or “levers”) to control or motivate operations as a form of what he calls “mechanistic thinking.” He goes on to argue that “if we assume that financial results emerge from complex interactions and nonlinear feedback loops in the interrelated parts of a natural living system, then attempting to control those results with linear accounting information is not only erroneous, but possibly destructive to the system’s operations in the long run.”

With more data and the ability to analyze it properly, it becomes possible to optimize the allocation of resources across different activities and calculate the most efficient and effective way of delivering a given level of quality and customer satisfaction.

One side effect of automating processes is that it should become easier to collect much more fine-grained and specific data — not only about cost but also about other dimensions, including time and error rates. From an accountancy viewpoint, we can then calculate costs (and, therefore, profit margins) against specific products and/or customer segments, and these calculations may prompt commercial decisions, such as changing the product mix or incentivizing the sales team to concentrate on the more profitable lines.
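
As a minimal illustration of that kind of calculation (the activity names, unit costs, and record layout below are invented, not drawn from any particular system), fine-grained activity records can be rolled up into cost-to-serve and margin per customer segment:

```python
# Hypothetical illustration: roll fine-grained activity records up into
# cost-to-serve and margin per customer segment. All names and rates are invented.
from collections import defaultdict

# Unit cost per activity (e.g., derived from time-driven activity-based costing)
activity_unit_cost = {"pick_and_pack": 1.80, "delivery": 6.50, "service_call": 12.00}

# Fine-grained operational records: (customer segment, activity, count)
activity_log = [
    ("small_retail", "pick_and_pack", 420), ("small_retail", "delivery", 150),
    ("small_retail", "service_call", 35), ("key_accounts", "pick_and_pack", 900),
    ("key_accounts", "delivery", 220), ("key_accounts", "service_call", 12),
]
revenue = {"small_retail": 9_500.0, "key_accounts": 28_000.0}

# Accumulate activity costs per segment
cost_to_serve = defaultdict(float)
for segment, activity, count in activity_log:
    cost_to_serve[segment] += count * activity_unit_cost[activity]

# Report cost-to-serve and margin for each segment
for segment, cost in cost_to_serve.items():
    margin = revenue[segment] - cost
    print(f"{segment}: cost-to-serve={cost:,.2f}  margin={margin:,.2f} "
          f"({margin / revenue[segment]:.0%} of revenue)")
```

The same roll-up can be cut by product line instead of segment; the point is that the commercial and operational decisions rest on the granularity of the underlying records.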

But what we’re interested in here is how these calculations can drive operational improvements. To improve processes, we don’t just need to look at the total or average costs. Process improvement techniques such as Lean and Six Sigma rely heavily on statistical process control, which involves (1) monitoring and analyzing the variation in process and (2) diagnosing and possibly eliminating the special causes of variation. This generally means looking at nonfinancial as well as financial data.
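
For example, the core of statistical process control can be expressed in a few lines: establish control limits from a stable baseline period and flag later observations that fall outside them as candidates for special-cause investigation. The sketch below uses invented data.

```python
# Minimal statistical-process-control sketch with invented data:
# establish three-sigma control limits from a baseline period, then
# flag new observations outside those limits as possible special causes.
import statistics

# Phase I: a stable baseline (e.g., daily error rates or cycle times)
baseline = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0, 4.2, 3.9, 4.1, 4.0]
centre = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = centre + 3 * sigma, centre - 3 * sigma

# Phase II: monitor new observations against the established limits
new_observations = [4.0, 4.3, 3.9, 5.2, 4.1]
for day, value in enumerate(new_observations, start=1):
    in_control = lower <= value <= upper
    status = "ok" if in_control else "OUT OF CONTROL -- investigate special cause"
    print(f"Day {day}: {value:.1f} (limits {lower:.2f}-{upper:.2f}) {status}")
```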

Therein lies the challenge for some organizations. For various reasons, financial (e.g., billing) data is taken more seriously than nonfinancial data. There is a positive spiral here: financial data is more valuable; therefore, we take more care over it; therefore, it is more accurate; therefore, it is more valuable. But this positive valuation of financial data can lead to nonfinancial data being discounted and disregarded, especially from external sources, because it is considered less reliable/trustworthy. As C. West Churchman warns:

Never allow the temptation to be clear, or to use reliable data, or to “come up to the standards of excellence,” divert you from the relevant, even though the relevant may be elusive, weakly supported by data, and requiring loose methods.

Meanwhile, as Consider Solutions CEO Dan French points out, if your IT systems are too tightly controlled, this may simply result in workarounds that are not visible to the IT system. For example, if the ERP system insists that you can’t have a delivery until you have a purchase order, then the practical workaround may simply be not to record the delivery in the system until the purchase order has been (retrospectively) raised. To get some visibility into this shadow process work, you may need to introduce effective continuous monitoring into key risk areas, as well as monitor the controls themselves.
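
As a minimal illustration of such continuous monitoring (the record layout and dates below are hypothetical), a routine check can cross-reference delivery and purchase-order timestamps to surface deliveries that were only covered by a purchase order after the fact:

```python
# Hypothetical continuous-monitoring check: flag goods receipts whose purchase
# order was raised only after the delivery (a common "retrospective PO" workaround).
from datetime import date

# Simplified records as they might be extracted from an ERP system
deliveries = [
    {"delivery_id": "D-1001", "po_id": "PO-501", "received": date(2020, 10, 2)},
    {"delivery_id": "D-1002", "po_id": "PO-502", "received": date(2020, 10, 5)},
]
purchase_orders = {
    "PO-501": date(2020, 9, 28),   # raised before delivery: normal
    "PO-502": date(2020, 10, 9),   # raised after delivery: likely workaround
}

# Flag any delivery whose purchase order is missing or raised retrospectively
for d in deliveries:
    po_raised = purchase_orders.get(d["po_id"])
    if po_raised is None or po_raised > d["received"]:
        print(f"{d['delivery_id']}: purchase order {d['po_id']} raised after "
              f"goods were received -- review this control exception")
```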

Controlling Systems

For people in the IT world, operational excellence starts at home — with managing the data, applications, platforms, and technology. Technology platform vendors have long understood that as computerized systems-of-systems progressively increase in complexity, it is necessary to provide a degree of automatic system self-management. Early developments included HP OpenView (later relabeled BTO, for Business Technology Optimization) and IBM’s Autonomic Computing initiative.

Operational excellence is one of the five pillars of the Amazon Web Services (AWS) Well-Architected Framework (along with security, reliability, performance efficiency, and cost optimization — although many experts might think that all these should be included under OPEX anyway). In the context of cloud systems, Amazon sees OPEX as including “the ability to support development and run workloads effectively, gain insight into their operations, and to continuously improve supporting processes and procedures to deliver business value.”

Amazon’s recommendations for cloud systems (which can easily be extended to other OPEX contexts) include:

  • Design for operations — capture a broad set of information to enable situational awareness

  • Test and rehearse new systems and processes before rolling them out across the organization

  • Use scripting and automation to reduce human error

  • Monitor the “health” of operations (see the monitoring sketch after this list)

  • Anticipate planned and unplanned events

  • Learn from experience, and share learnings across the organization
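
As a small sketch of the monitoring and automation recommendations above (the service names, URLs, and thresholds are invented; this is not an AWS API), a scheduled script can check the health of key operational endpoints and feed the results into an alerting pipeline:

```python
# Illustrative health-check script with invented endpoints: gather a few
# operational signals and report anything unhealthy or unreachable.
import urllib.request

CHECKS = [
    # (service name, health-check URL, timeout in seconds)
    ("order-api", "https://example.internal/orders/health", 2.0),
    ("billing-api", "https://example.internal/billing/health", 2.0),
]

def check(url: str, timeout: float) -> str:
    """Return a short status string for one endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "ok" if resp.status == 200 else f"unhealthy (HTTP {resp.status})"
    except Exception as exc:  # timeout, DNS failure, connection refused, ...
        return f"unreachable ({exc.__class__.__name__})"

if __name__ == "__main__":
    for name, url, timeout in CHECKS:
        # In practice these results would feed an alerting/automation pipeline
        # rather than simply being printed.
        print(f"{name}: {check(url, timeout)}")
```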

Anticipation

There is usually an operational advantage in dealing with incidents proactively rather than reactively. It may be much more efficient to replace a component before it reaches the end of its life, rather than waiting for it to fail and then requiring an urgent repair. This is known as predictive or preventive maintenance. For example, an organization that operates a large number of refrigerators can attach Internet of Things (IoT) sensors to each fridge to monitor temperature, energy consumption, and the like. If a fridge starts to produce irregular readings, then the motor can be scheduled for replacement.
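
A minimal sketch of that fridge example (readings and thresholds are invented for illustration): compare each unit’s recent sensor readings against a normal operating band and flag units whose drift or variability suggests the motor should be scheduled for replacement.

```python
# Illustrative predictive-maintenance check for a fleet of fridges.
# Sensor readings and thresholds are invented for this sketch.
import statistics

NORMAL_TEMP_RANGE = (2.0, 5.0)   # degrees C considered healthy
MAX_TEMP_VARIABILITY = 1.0       # high spread suggests a struggling motor

recent_readings = {              # recent hourly temperature readings per fridge
    "fridge-17": [3.1, 3.0, 3.2, 3.1, 2.9, 3.0, 3.1, 3.2],
    "fridge-42": [3.5, 4.9, 2.1, 5.8, 3.0, 6.2, 2.4, 5.5],  # irregular
}

for fridge_id, temps in recent_readings.items():
    mean_temp = statistics.mean(temps)
    spread = statistics.stdev(temps)
    out_of_range = not (NORMAL_TEMP_RANGE[0] <= mean_temp <= NORMAL_TEMP_RANGE[1])
    if out_of_range or spread > MAX_TEMP_VARIABILITY:
        print(f"{fridge_id}: mean={mean_temp:.1f}C, spread={spread:.2f} "
              f"-- schedule motor inspection/replacement")
```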

The scale of business improvement is often dependent on the scale of effort and investment. For example, an engineering company may see some potential benefit from predictive maintenance — repairing or replacing components in response to small variations in performance (which could be regarded as a warning signal that they may need attention or may even be reaching the end of their useful life) rather than waiting until the component actually fails. Table 1 shows a hypothetical example, with different cost-benefit equations for different levels of accuracy — higher accuracy costs more and may take longer to achieve, but also delivers greater benefits in terms of the efficiency and cost-effectiveness of overall maintenance activity.

Table 1 — Hypothetical example of progressive improvement: predictive maintenance.

There are three important points to make about this example in relation to operational excellence. The first is that 80% accuracy is better than nothing; it’s better to implement something imperfect quickly, rather than spend ages on the perfect solution. The second is that 90% is better than 80%, so keep looking for further improvement. And the third is that reaching 95% accuracy may only be achievable once you have the feedback and calibration that come from operating this anticipatory model.

Managing Complexity

The more a business differentiates its operations to address different market niches, the more difficult it becomes to integrate the subsystems back together in a consistent and effective way. Traditional operational managers often believe their primary task is to achieve supply-side productivity by reducing variation, and techniques such as statistical process control are deployed to this end. This view of the primary task encourages them to suppress demand-side variation and to ignore demand-side signals that would complicate or contradict it.

This is where some of the elements of information superiority come into play. The organization needs to pick up relevant weak signals, not only from inside the organization but also from the customer ecosystem — and respond in an agile manner.

A common criticism of Lean is that while it helps to make predictable systems more efficient, there is a risk of removing those elements that will enable the system to adapt to unforeseen future needs.

Continuous Improvement

Continuous improvement is often described as a staircase, or even a moving staircase. Many people talk about “running up the down escalator”; in other words, you must run upwards simply to stay in the same place (some people call this the “Red Queen Effect”). Cutter Consortium Fellow Robert Charette envisions this metaphor as a rotating spiral staircase with the acceleration control at the top, so those who are most successful at continuous improvement get the chance to speed up the escalator for everyone else, thus increasing the gap between themselves and their competitors. This is where operational excellence becomes a source of competitive advantage and not just internal control.

Kevin Kelly relates the example of Fairchild Semiconductor, whose newly developed (at the time) transistor would replace a valve then used in a range of military equipment. The only problem was that the valves cost US $1.05 each. Based on initial production runs, each transistor was going to cost $100. Even when economists predict that production will get cheaper over time, it takes courage (or desperation) for a business to do what this company did next: it sold the transistor for $1.05, taking a huge financial loss. History is written by the victors. The transistor triumphed, and so did Fairchild Semiconductor. Within two years, the company had a 90% market share and was selling the transistor for 50¢ — and making a profit.

In economics, the concept of learning by doing (introduced by Kenneth Arrow) explains how such improvements in productivity are to be expected in a well-managed system. Following Arrow, other economists have studied the factors that influence the speed with which these improvements can be disseminated throughout an organization. This is where advanced data analysis comes into play. If an organization can monitor and analyze these factors directly, in near real time, then it becomes possible to accelerate the processes of continuous improvement and get to the top of Charette’s helical staircase before your competitors.
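
One common way to model learning by doing is the experience curve (Wright’s law), in which unit cost falls by a constant percentage each time cumulative output doubles. The sketch below uses an assumed learning rate purely for illustration; the figures are not those of the Fairchild case.

```python
# Experience-curve sketch: unit_cost(N) = first_unit_cost * N ** -b, where b is
# chosen so that each doubling of cumulative output multiplies cost by the
# progress ratio. Parameters are assumed for illustration only.
import math

first_unit_cost = 100.0    # cost of the first unit produced
progress_ratio = 0.75      # 25% cost reduction per doubling of cumulative output
b = -math.log(progress_ratio, 2)   # cost elasticity with respect to volume

for cumulative_units in (1, 10, 100, 1_000, 10_000, 100_000):
    unit_cost = first_unit_cost * cumulative_units ** -b
    print(f"after {cumulative_units:>7,} units: unit cost = {unit_cost:7.2f}")
```

The faster an organization can observe and act on the factors driving this curve, the sooner it reaches the lower-cost end of it, which is the point of analyzing them in near real time.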

Organizational Learning

In his Report on achieving operational excellence, Spanyi quotes an unnamed CIO saying, “operational excellence is in our DNA.” He goes on to criticize this CIO’s version of operational excellence, which was based on limited and inadequate tracking of customer interaction as well as old-fashioned change management.

But, if you understand DNA, what else would you expect? One thing that distinguishes humans from other species is how little of our knowledge and skill comes directly from our DNA. Some animals can forage for food almost as soon as they are born, and some require only a short period of parental support, whereas a human baby must learn nearly everything from scratch. Our DNA gives us very little directly useful knowledge and skill, but what it does give us is the ability to learn.

Very few cats and dogs reach the age of 20, but at this age many humans are still undergoing full-time education, while others have only recently started to attain financial independence. Either way, those young adults have by that age accumulated an impressive quantity of knowledge and skill. But only foolish humans would think that this accumulated knowledge and skill is enough to last the rest of their lives. What is in our DNA, more than anything else and more than is the case with other animals, is learning.

There are, of course, different kinds of learning. First, there is the stuff that the grownups already know. Ducks teach their young to swim, and adults teach kids to do sums and write history essays as well as some rather more important skills. In the world of organizational learning, consultants often play this role: coaching organizations to adopt “best practice.”

But then there is the learning that extends beyond this existing knowledge. Intelligent kids learn to question both the content and the method of what they've been taught, as well as the underlying assumptions — and some of them never stop reflecting on such things. Innovation depends on developing and implementing new ideas, not just adopting existing ones.

Similarly, operational excellence doesn’t mean simply adopting the ideas of the OPEX gurus — statistical process control, Six Sigma, Lean, or whatever — but collectively reflecting on the most effective and efficient ways to make radical as well as incremental improvements. In other words, applying OPEX to itself. As noted earlier, this means that your OPEX program itself must be instrumented, monitored, analyzed, and controlled.

Operational Intelligence

Let me round up these concepts with an example showing how information superiority can deliver operational excellence in healthcare. It involves a loop of decision/planning, monitoring, analysis and sense making, and back to decision/planning:

  1. Forward scheduling and resource allocation, based on standard healthcare procedures, using historical data and advanced analytics tools for simulation and optimization.

  2. Real-time tracking of what is going on, using a range of devices (sensors, cameras, etc.). This includes tracking the activity and behavior of healthcare professionals and patients, monitoring the status of relevant equipment, and logging the start and end of various healthcare procedures.

  3. Detecting variations and deviations from the schedule and making appropriate adjustments in real time.

  4. Monitoring patient outcomes (including forward indicators where possible) to identify where the most efficient procedure is not always the most effective, and monitoring how a given procedure fits into a broader care pathway for a given patient.

  5. Understanding variations and deviations from the standard procedures, which might call for the standards to be questioned and revised (not in real time).

  6. Developing more effective (both patient-friendly and cost-effective) healthcare pathways.

Note how this example combines a rapid goal-directed feedback loop with a series of deeper sense-making and learning loops driven by analytics. Real-time monitoring is then useful not only for near-real-time operational intelligence but also for longer-term innovation.
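
As a minimal sketch of step 3 in that loop (the procedure names, timings, and threshold below are hypothetical), real-time events can be compared against the schedule to flag delays that require downstream adjustment:

```python
# Hypothetical real-time check: compare actual procedure start times against
# the schedule and flag delays large enough to require rescheduling downstream slots.
from datetime import datetime, timedelta

DELAY_THRESHOLD = timedelta(minutes=15)

schedule = {       # procedure id -> planned start time
    "proc-A": datetime(2020, 11, 12, 9, 0),
    "proc-B": datetime(2020, 11, 12, 9, 45),
    "proc-C": datetime(2020, 11, 12, 10, 30),
}
actual_starts = {  # events streamed in from room sensors and equipment logs
    "proc-A": datetime(2020, 11, 12, 9, 5),
    "proc-B": datetime(2020, 11, 12, 10, 10),   # 25 minutes late
}

for proc_id, planned in schedule.items():
    actual = actual_starts.get(proc_id)
    if actual is None:
        continue                      # not started yet; nothing to compare
    delay = actual - planned
    if delay > DELAY_THRESHOLD:
        print(f"{proc_id}: started {delay} late -- adjust downstream schedule")
```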

Conclusion

Of the three value disciplines, operational excellence is the one where real-time data streaming is most critical. While product development should operate at a reasonably fast and agile tempo, and customer engagement assumes a reasonably up-to-date picture of the customer, some of the most interesting benefits of real-time data collection and analytics sit in operations.

Yet, although many of the approaches for optimization and improvement discussed in this Update have existed for years, their use in large organizations tends to be highly piecemeal and mostly retrospective. To drive OPEX as a strategic program, you need to join up multiple strands of data and analytics with real-time monitoring and control, combining best-in-class automation with agile human-in-the-loop intervention.

About The Author
Richard Veryard
Richard Veryard is a data architect with a keen interest in technology ethics. Currently, he is developing the data strategy for a global organization. Mr. Veryard's books include Component-Based Business and Building Organizational Intelligence, and he is writing a book on data strategy. He can be reached at richard@veryard.com.