
27 April 2004

IT PROJECT FAILURES OR BLUNDERS?

I have argued that most organizations do not have enough IT project failures. The reason I say this is that, in my experience, most project cancellations (or escalations, for that matter) are not true failures but instead represent blunders. There is a big difference. A project failure is one in which most project decisions and actions were correct at the time, but for some reason the project didn't work out. It is a professional project -- the project risks were assessed, managed, and accepted where required; the assumptions were checked; success criteria were defined; the plan was well estimated and adequately funded; the stakeholders participated; and so on. On such projects, the answers to Keil's 10 questions are all "no" (see Keil, Mark. "Software Project Escalation and De-escalation: What Do We Know?" Cutter IT Journal, Vol. 16, No. 12, December 2003, pp. 5-11).

Project blunders, which I contend account for most project overruns and cancellations, arise from Dilbert-like approaches to project management and implementation. There is little or no risk management, the project plan is a fantasy, stakeholder concerns are given short shrift, and on and on. The answers to some -- or more likely all -- of Keil's 10 questions are "yes."

In a recent study of US Department of Defense software-intensive system programs of large size and complexity, my coauthors and I found that risk management was poorly performed when it was performed at all (Charette, Robert, and Laura Dwinnell. "Whatever Happened to Process Capability?" Presentation to the 2003 PSM Users' Group Conference, Keystone, Colorado, USA, 16 July 2003). Furthermore, most risk management programs were pro forma in nature and rarely influenced program decision making. You would think that on large, complex, unprecedented system developments, an organization would consider risk management a vitally important process to use -- and you'd be wrong. I ask you: when programs like these get cancelled or have massive overruns, are they failures or blunders?

Take a look at Capers Jones's article in the December 2003 issue of Cutter IT Journal ("Why Flawed Software Projects Are Not Cancelled in Time," Cutter IT Journal, Vol. 16, No. 12, December 2003, pp. 12-17). Jones identifies three major root causes for large software project delays and disasters: inaccurate estimation, inaccurate status reporting, and inaccurate historical data. As he also points out, each of these three causes can be eliminated -- so why aren't they?

Why do we have inaccurate estimates? Because of the way most IT programs are sold. Tell the truth in many organizations, and you don't get funded. So you lie instead. Why do we have inaccurate status reporting? The same reason we don't do risk management on projects -- no one wants to report bad news. How often has bad news identified at the working level gotten massaged into good news on the way to the top? Why do we have a lack of historical data? Because we move very quickly to bury our dead projects before an autopsy shows who the true murderers are. Jones's root causes aren't really causes at all but symptoms of deeper organizational problems -- as are Keil's.

As I see it, the term "project failure" should be reserved only for projects that have a fair chance of success and are professionally managed -- everything else should be labeled a blunder.[1] In other words, the organization and the stakeholders know what they are getting into and have done everything in their power to increase their chances of success given the constraints. There are defined contingencies if problems arise and defined review points if certain (negative) criteria are met.

This approach doesn't mean always playing it safe -- far from it. An organization that knows how to manage its risk better than anyone else is in a far better position to take on more risk, thereby succeeding where few competitors can. However, this implies an understanding that the project can -- and very well might -- fail. Everyone's eyes are wide open. When projects do fail, it's worthwhile learning why they failed. Project blunders provide very little opportunity for useful learning since you cannot really know what worked and what did not. You may be able to identify in a post-project review that you had inaccurate project cost and schedule estimates for the fifteenth time, but you won't ever be able to say why.

[1] Success here means attaining your desired ends. A project may be on time, within budget, and meet its technical requirements but still fail.

-- Robert Charette, Director, Risk Management Intelligence Network, Cutter Consortium
