Domain
IT strategy
Assertion 181:
Syllabus
Over the past two decades, the topic of risk management -- particularly at the systems portfolio and project development levels -- has been hailed as an indication of IT's coming of age. The present economic meltdown has proved that the finance industry's practice of risk management was more about covering up risk than managing it. The factors that made financial managers want to game the system are all too similar to the factors working on IT managers. We would be fools indeed to believe that risk management in our organizations is working better than its prototype in banking and insurance.
Contents
- Opinion by Tom DeMarco
- Concurrence by Lynne Ellyn
- Concurrence by Mark Seiden
- Partial Concurrence by Ken Orr
- Concurrence by Christine Davis
- Concurrence by Tim Lister
OPINION BY TOM DEMARCO
A New York Times op-ed column by Thomas Friedman told of risk managers using a model to assess financial companies' net positions under different assumptions about mortgage interest rates and housing market factors. [1] One of the parameters the managers were allowed to enter was year-over-year percentage growth in single-family home value. The model was built to accept only positive numbers for this variable. This is risk management as practiced in the 21st century.
At the time of Barry Boehm's landmark article, "Software Risk Management: Principles and Practices" [2], there were only a few voices in the wilderness even talking about the subject: Boehm himself, Cutter Fellow Bob Charette, and Marvin Carr of the SEI. I attended Boehm's tutorial on risk management sometime in 1990 and listened to his eminently reasonable plea for risk management on software projects: insurance companies, Boehm told a rapt audience, lay off their risks by trading, for example, some of their Florida coastal homeowner property insurance against some other company's California liability insurance; their net portfolio is thus protected from any one catastrophic event. Similarly, companies that trade securities are careful to have broadly diversified holdings so that they are relatively immune to market gyrations. We software managers need to learn to assess our risks so we can insulate ourselves from the effect of our own catastrophes.
Risk management for IT was largely patterned on risk management as practiced in the financial and insurance industries. What we've learned painfully over the past year is that risk management was not working very well in those very industries whose example we were trying to emulate. There were highly paid risk managers and risk programs at work in banks, insurance companies, and brokerage firms. At the buzzword level, no one could fault them. If you needed to point to someplace in the policy manuals or on the org chart where risks were being carefully contained and controlled, you'd have no trouble finding the right places to point. The only problem was that risk management as practiced was nothing more than PR. They talked the talk, but meanwhile were inflating risk to a level that no prudent manager would consider.
Consider the following amazing fact: at Bear Stearns, Merrill Lynch, Lehman Brothers, and AIG, the ratio of assets to capital was more than 30:1. They were betting on moves in the financial market with 3,000% leverage! With that kind of leverage, even a small adverse move in a market can wipe a company out. Sure enough, that is exactly what happened.
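To make the arithmetic concrete, here is a minimal sketch (with illustrative round numbers, not any firm's actual balance sheet) of how little the market has to move against a 30:1 balance sheet before the capital is gone:

```python
# Illustrative 30:1 balance sheet (hypothetical numbers).
capital = 1.0                    # the firm's own capital
assets = 30.0                    # assets held at a 30:1 assets-to-capital ratio
liabilities = assets - capital   # 29.0 funded by borrowing

# The asset decline that erases all capital is just capital/assets.
print(f"wipeout threshold: {capital / assets:.1%}")   # ~3.3%

for drop in (0.01, 0.02, 0.04):
    equity = assets * (1 - drop) - liabilities
    print(f"asset drop {drop:.0%} -> remaining capital {equity:+.2f}")
```

At 30:1, a decline of just 1/30 -- about 3.3% -- in asset values leaves the firm with zero equity, and a 4% decline leaves it insolvent.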
Why would anyone run a company in such a way that even normal market fluctuation could drive it to bankruptcy? The reason is pretty clear: the rewards of 3,000% leveraging enriched the managers of those companies wildly as long as the market was going up, while the risks when it turned down didn't accrue to them at all. Managers at Citi et al. were managing risks just fine; however, it was only their own personal risks they were considering. With a few years of $100 million bonuses in their pockets, they hardly minded that someday the jig would be up for their companies.
Whenever the personal risk profile for a manager is very different from the risk profile for the company, you can count on the manager managing his or her own risks first. An obvious remedy for this problem is to align the two profiles. If any one of the companies had had a bonus system that kept all bonus monies in escrow for, say, 10 years, and such monies were forfeited in the event of gross decline of the company's fortunes, you can be sure that nobody would have been willing to consider 3,000% leverage.
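A back-of-the-envelope sketch of the alignment argument (all probabilities and dollar amounts here are hypothetical, chosen only to illustrate the mechanism):

```python
# Hypothetical: a manager earns big bonuses while a leveraged bet pays off,
# but the bet has some chance of eventually sinking the company.
p_blowup = 0.30        # assumed chance of a gross decline within 10 years
annual_bonus = 100e6   # the "$100 million bonuses" of the good years
good_years = 3

# Pay-as-you-go: bonuses are banked before any blowup, so the downside
# never reaches the manager.
print(f"immediate payout, expected keep: ${good_years * annual_bonus:,.0f}")

# Ten-year escrow with forfeiture: the same bonuses are kept only if no
# gross decline occurs, so the manager now shares the company's risk.
expected_escrow = (1 - p_blowup) * good_years * annual_bonus
print(f"escrowed payout, expected keep:  ${expected_escrow:,.0f}")
```

Under escrow, every increase in the probability of a blowup directly reduces the manager's expected take, which is exactly the alignment of the two risk profiles described above.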
While such alignment of top executives' remuneration is possible, the scheme is less workable at lower levels. A software risk manager, for example, is not paid at the million-dollars-a-moment rate so common among executives, so the amount of at-risk premium would be much smaller. Even if he or she had some bonus money in escrow that would potentially be forfeited should risk assessments prove unreasonable, other factors might come to be more important than what was at risk. It's these "other factors" in the individual's risk profile that make risk management so problematic. Chief among them are:
- The threat of job loss. Blowing the whistle on too much risk is likely to make the risk manager look like a poor team player. If he or she is enough of a bother, it might prove simpler to fire the risk manager and hope that the replacement will be better tuned in to the sincere wishes of upper management. Note that the decision to get rid of a risk manager always happens long before the actual outcome of the project is known.
- Loss of prestige. The statement that "they can't make that deadline," for example, is sure to be equated to "I couldn't make that deadline if I were in their position."
- Loss of power. Nobody looks good when things go awry, not even the risk manager who correctly predicted the problem. On the other hand, a risk manager who assures management that an iffy deadline can be met looks like a terrific hero -- an enabler -- if and when it is met. The likely rewards for understating a risk are greater for the individual than the penalties for overstating it. This amounts to a kind of power "leverage."
- The near-term advantage of telling management what it wants to hear. By the time bad results are known, the risk manager might have been long since promoted or retired.
What does this mean for you?
- Don't be too confident that your risk management organization or risk officer is in fact managing risks.
- Manage your own risks. The mere fact that someone else has that role doesn't assure that he or she is managing the risks to your project and your career, so you have to stay on top of those. Treat the risk management function in your company as a compiler of at least some of the information you need to correctly assess your own situation.
- Look out for totally unmanaged risks. Your major exposure is risks that aren't on the official risk radar. There is a terrific temptation to disregard certain kinds of risks because they are so unpalatable to upper management. This is particularly true of schedule risk.
- Expect some risks to be exempted from management by politics. Every time a positive outcome has been declared by upper management fiat ("The January date will be met, or heads will roll."), expect entire classes of risks to be eliminated from the risk census. They're made effectively exempt from risk management, but they still have the potential to sink you.
[1] Friedman, Thomas L. "All Fall Down." New York Times, 25 November 2008 (www.nytimes.com/2008/11/26/opinion/26friedman.html).
[2] Boehm, Barry W. "Software Risk Management: Principles and Practices." IEEE Software, Vol. 8, No. 1, January 1991, pp. 32-41.
CONCURRENCE BY LYNNE ELLYN
Tom asserts that the state of risk management in IT is probably no better than the sham of risk management in the finance and banking industry. I wholeheartedly agree with that assessment. I generally agree with Tom's rationale for this -- that the incentives working in IT (and banking) promote poor or phony risk management. However, the concept that one could use a linear, mathematical approach to control risks inherent in activities that are social and cultural in nature seems flawed from the get-go. Businesses are social organizations. They are composed of humans who are acting together for shared benefit. In many cases, the sharing of benefits extends to other stakeholders, but the people devoting their working lives to a given corporation are doing so because it benefits them. Most often, it is the usual self-interest in feeding the kids, paying the mortgage, and affording the summer vacation. This relationship between the company and the person is a civil arrangement, and both parties share the benefit inherent in the relationship. As recent events in the financial industry have demonstrated, there are cases when an individual seeks self-benefit that is akin to rapacious greed. Risk management methodologies failed to prevent the catastrophic meltdown at Morgan Stanley, AIG, Lehman Brothers, and so on, because the risky behavior and psychology of the people involved did not trigger risk alarms that would have resulted in the necessary interventions. The group psychology supported and legitimized the suspension of ethical behavior.
In the IT world, most of the folks involved are in IT because they like technology, they can make a living doing technology, and the job affords some or most of the basics of a good life. IT almost never involves the huge financial incentives that operated in the mortgage derivatives business. There are benefits to the individual who has an IT career, and in order to keep the job and its attendant benefits, IT people can fall victim to real or perceived incentives to skip steps, to take ridiculous risks, or to dismiss risks by holding a parochial view of the project or activity. Often IT people have strong personal motivation to do the project -- even if the project makes no sense. I have seen IT people infused with enthusiasm for a new technology or a novel project ignore critical risks. IT managers might never ask whether the organization has the capacity to do the project well or whether the project is cost-justified. The cost of a failed IT project can be huge -- even catastrophic for some companies -- but a failed project rarely has the capacity to take down an entire industry. The consequences for the IT individuals associated with a failed project are typically smaller than the impact on the corporation. As a result, IT people may not take risk management as seriously as they should.
I think the biggest flaw in all risk management approaches is that there is no accounting for social pathology, only quantitative and analytical assignments of value or probabilities for outcomes. There is no discounting for the presence of sociopaths in the company or on an IT project team. There is no mechanism to discount the assumptions, beliefs, or targets; no way to approximate the veracity of the beliefs held about the activity or project. There is no "bad actor" calculation in risk approaches. Identifying bad actors or destructive social pathologies within a project team is a management responsibility, and there are times when the manager is a bad actor or just inept at organizational psychology.
I am befuddled when I think about how to counteract the risk inherent in such complex human activities as IT projects. Perhaps we should experiment with adding psychologists to project teams. Assessing the personalities and motivations of the members of a team might be a better indicator of the real risk than all the analytics that have failed so catastrophically in recent times. Regular social and group dynamics analysis might provide insight into groupthink, parochial behavior, and gaps in truthfulness. This would be a totally different, human behavior-based assessment. It could also be used to intervene in group dynamics to ensure truthfulness in management reporting.
For companies that want to use a traditional quantitative risk management approach, it would be best if the risk management processes were conducted independently of the project team or management structure. The risk manager and his or her assessment staff could facilitate the risk management process, provide independent assessments as the project unfolds, and coach and counsel senior leaders to make sure they do not inadvertently encourage project teams to take risks or suppress the truth regarding project difficulties.
CONCURRENCE BY MARK SEIDEN
Tom argues convincingly that companies aren't any better at managing IT risk than they are at managing financial risk. Here are just a few comments to elaborate a bit on some of the other underlying reasons why that might be true.
Groupthink, self-deception, and denial are common human traits and not difficult to understand. These are reality-distorting characteristics. But, separately from those, successful people are not intrinsically good at the sort of pessimistic thinking and bad-case scenario construction that risk managers need to do. Some studies show that "dispositional optimism" (a generally positive state of mind) is correlated fairly highly with personal characteristics such as mastery (.55) and self-esteem (.54). [3] (There is also some sparse literature, based on studies, on the relationship between where an individual lies on the situational optimism axis and socioeconomic status or health. [4])
A separate problem is that the business context puts additional roadblocks in the way. Businesspeople are not equipped with accurate (often, with any) measures useful for judging the long-term risk of IT projects or of operations. Middle managers are often put in a position of having to shade the truth or spin bad news positively (aka lie), both to their troops and to their management, acting more as cheerleaders than decision makers. Some top managers (such as Steve Jobs and Mike Bloomberg) are famous for the strength of the "reality distortion field" they can create -- and for their lack of interest in hearing negative news.
Businesspeople are not equipped or encouraged to communicate about risk. One lawyer, who I thought was on my side, once told me that my e-mails about some particular risks made for enjoyable reading, but they were too colorful, and many of those expressions should not be put even in attorney-client e-mail, lest they become the subject of discovery by somebody powerful and litigious. Forcing communication about risk to be uttered only in face-to-face conversations, so that no record exists that the risk was ever recognized, means that potentially bad news is difficult to disseminate and recognize within the organization, while rosy news spreads unimpeded.
As Tom points out, whistle-blowers are often at risk from the people they snitched on. Luckily, Sarbanes-Oxley provides broad protection to whistle-blowers who disclose any conduct that they reasonably believe violates "any provision of federal law relating to fraud against shareholders." So if a deception rises to the level of fraud that would affect the numbers, a whistle-blower should have protection. One source on this subject speculates that "the SOX whistle-blower laws may well have as much effect on business practices, in the 21st century, as did the civil rights laws in the 20th." [5]
Another problem arises when management creates short-term incentives for "team players" who take "big bets" or have "laser-sharp focus" on goals, ignoring external realities that might get in the way. Where did this "big bet" mentality come from? These are the same managers who create diversified stock portfolios in their 401(k)s but are willing to risk their companies' futures on investments in only a few areas. But now, strangely, companies reward them (with bonuses) for short-term returns, with little long-term obligation.
Top managers often ask for the "top three things they can do this quarter" to improve the state of some problem. I resist answering that kind of question, reply with the top "larger n" important things, and talk about the long-term need for investment in those problem areas that can't be "solutioned" in a quarter.
Big companies developing products would sometimes (at least in parts of the last century) develop competing products in different divisions of the company, products they knew would conflict. Let several genetically diverse flowers bloom, and kill off the ones that end up looking not so pretty. There is often a lot of politics and, of course, some engineering and competitive analysis in deciding what survives. We could stand to do this more often in IT, at least to have a design competition for important components.
Another difficulty for a practitioner of risk management goes deeper: Nobody likes a Cassandra (or an Eeyore, or a short seller), constantly predicting doom and gloom. Cassandra, you may remember, was given the gift of perfect prophecy by Apollo, but because she spurned him as a lover, he caused all listeners to believe her prophecies were lies. For example, nobody believed Cassandra's prediction that accepting the Trojan Horse would be a very bad idea. Meanwhile, Eeyore was less accurate, but no more popular. The short sellers seem to be doing pretty well this year (measured by all standards other than popularity).
Mythology reflects human nature fairly accurately, in my experience.
[3] Scheier, M.F., C.S. Carver, and M.W. Bridges. "Distinguishing Optimism from Neuroticism (and Trait Anxiety, Self-Mastery, and Self-Esteem): A Reevaluation of the Life Orientation Test." Journal of Personality and Social Psychology, Vol. 67, No. 6, December 1994, pp. 1063-1078.
[4] More information can be found at www.macses.ucsf.edu/Research/Psychosocial/notebook/optimism.html.
[5] Berkowitz, Philip M., Esq. "New Sarbanes-Oxley Whistle-Blower Regulations: Their Impact on Business." National Law Journal, 20 September 2004.
PARTIAL CONCURRENCE BY KEN ORR
In the run-up to the Iraq War, a systems dynamics model was published on the High Performance Systems (HPS) Web site (HPS is now known as isee Systems). [6] The model clearly showed a number of feedback loops that were likely to produce bad rather than good outcomes from the invasion. As it turned out, the model was amazingly prescient, since it predicted: (1) the backlash of the Muslim world, (2) the deterioration of US/European relationships, and (3) increased rather than decreased polarization in the Middle East. The model did not predict the pushback of the Iraqis against the foreign radicals or the surge, but you can't get everything right.
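As a toy illustration of the kind of reinforcing feedback loop such models capture (the coefficients below are hypothetical and are not taken from the HPS model), consider two stocks that feed each other:

```python
# Two reinforcing flows: intervention breeds backlash, and backlash invites
# more intervention. All coefficients are hypothetical.
hostility, intervention = 1.0, 1.0
for month in range(1, 13):
    hostility += 0.20 * intervention
    intervention += 0.15 * hostility
    if month % 4 == 0:
        print(f"month {month:2d}: hostility={hostility:5.2f}, "
              f"intervention={intervention:5.2f}")
# A straight-line forecast drawn from the first month badly underestimates
# the month-12 state, because the two flows compound each other.
```

This is the essence of nonlinear thinking: the loop, not either variable alone, drives the outcome.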
Traditional risk management, like so many things in the 21st century, is a relic of linear thinking in an increasingly nonlinear age. A recent Cutter IT Journal (CITJ) contained several very insightful articles about the failure of risk management in the wake of our first postindustrial economic panic. [7] The articles in that CITJ issue on risk management did explore the role of human nature in managing risks, but they didn't adequately address the problem that true risk management means hiring someone with the guts to speak truth (or probability) to power.
In a discussion I had when the US economy started to tank, a friend of mine suggested that because the world economy had become so much larger and more distributed over the last few decades, the failure of the US market would not have such a deleterious effect, because it would be dampened by emerging economies such as China and India. Always the critic, I suggested that, to the contrary, because everything everywhere -- especially financial trading -- was now done electronically, the effect could be much worse, since it could be instantaneous. We were both right in our own way, but there were no models that took both points into account.
Tom's opinion treats software project risk as a reflection of risk management in the financial and insurance industries. But for the most part, risk management in the form of mathematical risk models, like all mathematical models, works by assuming that individual risks are statistically independent of one another. This may be true of hurricanes on the East Coast and earthquakes on the West Coast, but in a great many cases -- especially if the risk domain is part of, or contains, one or more feedback systems -- one cannot simply multiply the probability of one event by the probability of another to get the true risk to be covered (hedged).
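A minimal Monte Carlo sketch of this point (the probabilities are hypothetical, chosen only to show the effect): two project risks that each occur 10% of the time, but that share a common cause, occur together far more often than the independence assumption predicts.

```python
import random

P_NAIVE = 0.10 * 0.10                 # independence: multiply the risks
print(f"naive independent estimate of both: {P_NAIVE:.3f}")        # 0.010

# Couple the two risks through a shared driver (a common cause or feedback
# loop). The conditional probabilities below are calibrated so that each
# risk still occurs ~10% of the time on its own.
random.seed(1)
trials, both = 100_000, 0
for _ in range(trials):
    shared = random.random() < 0.10          # common-cause event fires
    p_each = 0.60 if shared else 0.044
    a = random.random() < p_each
    b = random.random() < p_each
    both += a and b
print(f"simulated probability of both:      {both / trials:.3f}")  # ~0.038
```

The joint risk comes out nearly four times what naive multiplication suggests -- the same error that made "statistically independent" mortgage defaults look safe.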
In managing the risks of software projects -- especially very large software projects -- the risks are greater because, as Tom suggests, there are multiple classes of risk involved. My three big categories are: (1) political risk, (2) knowledge/experience risk, and (3) technological risk. My favorites are political and knowledge/experience [8], since in all domains and in all ages, there is a great tendency toward lemminglike behavior when things are going well. But my guess is that if we are to do a better job of risk management, we will have to do the following:
- Document software project failures carefully, much as the US National Transportation Safety Board documents airplane and railroad crashes.
- Break big projects into several smaller, "doable" pieces (e.g., nothing bigger than $XXX).
- Create "tiger teams" that simulate failure at weak spots.
- Designate a "Cassandra" to report on what may go wrong every couple of weeks.
- Set up a "project market," where team members, including consultants and vendors, can "buy" futures in whether the project will succeed or fail (see the sketch following this list).
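One simple way to run such a project market is with a logarithmic market scoring rule, a mechanism used by many prediction markets. The sketch below (the liquidity parameter and trading scenario are hypothetical) prices a YES/NO contract on "the project ships on schedule"; the YES price can be read as the team's aggregated probability estimate:

```python
import math

b = 50.0             # liquidity: higher b means prices move more slowly
q_yes = q_no = 0.0   # shares outstanding on each side

def cost(qy, qn):
    # LMSR cost function; a trade costs cost(after) - cost(before).
    return b * math.log(math.exp(qy / b) + math.exp(qn / b))

def price_yes():
    # Current YES price, interpretable as the market's probability estimate.
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

def buy(side, shares):
    global q_yes, q_no
    before = cost(q_yes, q_no)
    if side == "yes":
        q_yes += shares
    else:
        q_no += shares
    return cost(q_yes, q_no) - before

print(f"opening YES price: {price_yes():.2f}")   # 0.50
paid = buy("no", 30)                             # a skeptic bets against the date
print(f"skeptic pays {paid:.2f} for 30 NO shares")
print(f"YES price now: {price_yes():.2f}")       # ~0.35
```

Because traders profit only by being right, the price tends to surface what team members actually believe about the schedule, not what they are willing to say in a status meeting.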
It is interesting that the sainted founding fathers installed one piece of nonlinear thinking in the US Constitution that has stood up rather better than any of our financial regulatory structures: the idea that one can only truly balance power with power. If we're serious about any issue that may actually be fatal to the organization, we need to think about organizational solutions.
[6] HPS developed the systems feedback modeling tools Stella (Mac) and iThink (PC).
[7] Charette, Robert N. (ed.). "Managing Enterprise Risk in a Failing Economy: Is It Time to Rethink Risk Management?" Cutter IT Journal, Vol. 22, No. 2, 2009.
[8] For more information, see Orr, Ken. "Pushing the Envelope: Managing Very Large Projects." Cutter Consortium Agile Product & Project Management Executive Report, Vol. 5, No. 7, 2004.
CONCURRENCE BY CHRISTINE DAVIS
Ineffective risk management is a symptom of a disease that has been spreading through corporations over the last two to three decades, leaving tremendous devastation in its path. The disease has been difficult to detect and, in many cases, the symptoms are masked. However, over time this disease has wreaked havoc on employees, businesses, and even entire industries. It has been something of a silent killer, yet it has been well fed by those at the top. The disease has tainted so many and spread so widely that few corporate cultures have been spared. The disease is called greed.
Upper management in progressive corporations created the right conditions for greed to thrive when it designed an executive compensation system that predominantly rewarded short-term financial performance. Companies quickly learned from each other, and in order to stay competitive, many publicly held corporations designed executive compensation packages rich in stock options that were exercisable within a short time after being awarded. Of course, upper management has justified this approach by the need to increase shareholder value through increasing market valuation, which takes us to the other system that has been fueled by greed: the overall stock market.
The focus continues to be on the near term, with a reward system that reinforces behavior that will try to bring, or even force, early success. A mindset has developed that some have called a "conspiracy of hope." It is a conspiracy of hope because those who need to know and should know about possible risks act irresponsibly as they avoid dealing with the ugly realities. They demand that "it" be "managed," which sometimes means the risk is simply ignored or rationalized away. Those in their organizations who openly verbalize their concerns are not seen as "team players," or they are labeled as not being "tough enough" for the job. In this kind of corporate culture, management wants to keep and reward those who will "make it happen" and takes pride in replacing the naysayer as quickly as possible. The practice of risk management can become merely an exercise where everyone goes through the motions, but no one is really trying to understand the threats. And remember that publicly held companies are required to disclose any known material risk that could have a negative impact on the company's stock performance (i.e., possible stock option degradation for the boss). It is not beneficial to know and understand the true risk situation if you have to report it.
The basic underlying value system of a company is reflected in the actions and decisions made by management. Employees at the bottom can wave all the red flags they want, but until those who control the company are motivated to care about long-term results, many of these red flags may be ignored. People are very clever and very adaptable. We learn quickly how to survive and thrive in a particular company by knowing and understanding both the written and unwritten rules and practices. If there is not a basic value system that places a high importance on operating with integrity and high ethical behavior, risk management just becomes a game.
Albert Einstein said, "Three great forces rule the world: stupidity, fear, and greed." When it comes to risk management, I think we have all three of these forces coming together in an ugly way.
CONCURRENCE BY TIM LISTER
I'm with Tom on this one. I have been doing a fair share of software risk management consulting since Tom and I wrote Waltzing with Bears [9] together back in 2003. I try not to be skeptical going into any consult, but when people tell me they are already practicing risk management on their projects, more often than not my skepticism is justified: all they are actually accomplishing is a cursory job of risk identification. The typical half-hearted risk management effort usually goes like this: the standard software management process calls for risk management, so early on in the project a risk list is spun out, the risk management task is checked off as done, and everyone goes back to work on the project.
There is a simple test to determine whether risk management is actually having any impact at all. Ask a project manager what has changed on the project as a result of the risk management activities. Something must have changed: the definition of the product, the sequence in which the product's functionality is developed, the team composition, the team size -- something. If nothing was reworked in the product definition, the project staffing plan, or the project plan itself, then you know you have been given a risk management placebo.
Why is this so common? I believe that most people find serious risk management an uncomfortable emotional conflict with their project roles. Most project team members want to get their jobs done as well as they can, and thus maintain their sense of competence at work. Most project managers want to do as well as they can, given the hand that was dealt them. Most managers cannot handpick their team, nor do they have a controlling influence on the project goals and constraints. Voicing serious doubts, especially early, without any "hard evidence" to justify those doubts, raises questions as to one's ability to handle tough situations -- that hand you were dealt. All this leads to placebo risk management.
How do you get the boost of real risk management? First, until you change the unwritten rules of your organization's culture, you need to circumvent the culture that can't see and deal with risk. I like Ken's use of the term "Cassandra" (as a follow-up to Mark), but I prefer the term "Outsider." You need to find a person outside the project who has software project experience and who can lead the risk identification and assessment sessions. The operative word here is "lead." This person needs to say the unsayable out loud. That leader needs to force the issue by getting ugly right at the start.
I point you to a technique I first saw Tom use in a risk identification session. Tom started with disasters and worked back to contributing risks. He would say something like, "I can look into the future, and I can clearly see the fourth quarter of 2011, and the system you are starting on now, that everybody needs fully functional by the end of 2010, is not there at all. Nothing has been deployed. Nothing. How could that happen?" Slowly, people started to come up with scenarios as to how that could possibly occur. As each scenario was described, Tom would say, "No, that's not what happened. What else could make you so late?" He would work a vein until it was dry, then go to another disaster: "The system was deployed at the very end of 2010, on schedule, but early in 2011 management decided to roll it back out and roll back in the old system. How could that have happened?" From the disasters he got the scenarios, and from the scenarios he could find the root risks. In this case, going backward seems to defeat the culture.
There are two key ingredients here. One is the Outsider, the person with no stake whatsoever in the project itself, but with lots of experience with projects and their fates. The other is the disaster description. All organizations have had their trials and tribulations. It takes a true blind optimist to say "that could never happen on this project." Whenever it is said, the Outsider does not have to say a word, as the rest of the group will all start hooting.
You and your people can probably come up with a list of possible disasters based on past experience within the organization. Can you come up with an Outsider? If not, with all humility, I recommend you call Cutter right away. There are several great Outsiders right here!
[9] DeMarco, Tom, and Timothy Lister. Waltzing with Bears: Managing Risk on Software Projects. Dorset House Publishing Company, 2003.