Managing Complexity

by Jim Highsmith

Posted July 31, 1998

Whither Software Process Management?

I recently exchanged a series of e-mails with a development manager in a mid-sized software company. It seems the company understood the need for some additional organization and process, but the words CMM (the Software Engineering Institute's Capability Maturity Model) and ISO (the International Organization for Standardization) sent developers scrambling for garlic and crosses to ward off the evil spirits.

A spring 1998 conference for IT professionals offered five presentation tracks, one of which was for software development management. There were six sessions on ActiveX and Java, two on testing, two on UML, and a couple on other topics. Nothing on process management, project management, software development lifecycles, requirements management, CASE, risk management, or any of the other topics we have come to associate with improving software delivery.

In a "Dilbert" cartoon, Wally bemoans the fact that he has no impact on results, but takes solace in having "process pride." As Wally says, "Everything I do is still pointless, but I'm very proud of the way I do it."

What is the message in these disparate events? There seem to be three possible answers to this question.

  • Everyone already knows about these software process management practices because they have been around for such a long time and are already thoroughly utilized.
  • In today's complicated Internet, client-server, data warehouse environment these process management practices are irrelevant.
  • Software process management practices are still relevant, but they have not been adapted very well to today's more complex conditions.

I believe the last position is the appropriate one. Yes, people have gotten lost in waves of technology change in the 1990s (first client-server and then the Internet, with all their accompanying issues) and distracted by year-2000 work, but there are deeper issues that contributed first to the demise of CASE (relatively speaking) and then to the reduced interest in software process management in general.

Management literature is exploding with the themes of complexity and change. From Tom Peters, Thriving on Chaos, to Stan Davis and Christopher Meyer, Blur: The Speed of Change in the Connected Economy, to Fast Company magazine with its up-tempo look at ultra-contemporary management practices, the emphasis is on speed, change, and uncertainty.

In this new era, software process management (SPM) needs a new paradigm (paradigm -- an overused word I regretfully overuse again). While the individual practices of SPM are still useful, the framework itself, geared to a belief in optimization, is in need of change if it is to regain relevance in today's world. Speed, change, and uncertainty don't succumb to optimizing practices. In previous issues of this newsletter, Ed Yourdon used the phrase RAD/prototyping lifecycle to illustrate the shift from the more traditional waterfall lifecycle. But it goes beyond a shift from a linear to an iterative lifecycle -- it goes to the heart of the difference between a fundamental belief in adaptation and a fundamental belief in optimization.

Complicated vs. Complex Problems

A problem needs to be correctly identified before it can be solved. As all recent college graduates learn early in their careers, much of their education focused on how to solve problems, while their careers usually hinge on identifying the correct problem to solve. In IT, there is a need to rethink the kinds of problems we are trying to solve. Traditional SPM was oriented toward solving complicated problems. Complicated problems were often difficult, intricate, and detailed, and involved new technology. The software delivery problem was (and is) partially a function of size, as Howard Rubin and Capers Jones's metrics studies point out so vividly. Most of the rigor inherent in traditional SPM was based on the metaphor of other engineering disciplines, especially building construction. No one would argue that developing architectural plans and engineering designs for a modern skyscraper, and managing its construction, isn't a complicated engineering problem. But the building analogy has gotten us into trouble. Although fast-tracking design and construction has become more prevalent in the construction business, once the foundation is poured, buildings do not change much. And they surely change very little once they top out.

Complex Adaptive Systems Corner

Question: Complex adaptive systems (CAS) concepts are

(1) a newer version of general systems thinking

(2) related in some way to chaos theory

(3) an integrated, interdisciplinary science devoted to the exploration and understanding of complex phenomena.

The answer is all of the above. The study of CAS is providing new insights into such seemingly divergent fields as biological evolution and economics.

The scientific work is spearheaded by the Santa Fe Institute in New Mexico, while the application of the concepts to organizations and organizational management is newer and more diverse. At a basic level, a complex adaptive system is an ensemble of independent agents:

  • Whose interaction with each other creates an ecosystem
  • Whose interaction is defined by the exchange of information
  • Whose individual actions are based on some system of internal rules
  • Whose combined actions interact (self-organize) in nonlinear ways to produce emergent results
  • Whose environment exhibits characteristics of both order and chaos.

All of these evolve over time. The independent agents could be individual insects in a hive, species in an ecosystem, developers on a project team, or companies in an economy. While it is hotly debated whether these concepts apply to managing organizations through a direct causal link or merely an interesting metaphorical relationship, a growing cadre of organizations and management specialists is making the linkages. CAS forms the conceptual base for adaptive software development.
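For readers who would rather see self-organization than read about it, here is a minimal simulation sketch (mine, not the Santa Fe Institute's): sixty agents on a ring, each following one simple internal rule and exchanging information only with its immediate neighbors. Large ordered clusters emerge without any central control.

```python
import random

# Independent agents on a ring, each holding a 0/1 state. The internal
# rule is purely local: adopt the majority state among yourself and your
# two neighbors. No agent sees the global picture.

def step(states):
    n = len(states)
    new = []
    for i, s in enumerate(states):
        neighborhood = [states[(i - 1) % n], s, states[(i + 1) % n]]
        new.append(1 if sum(neighborhood) >= 2 else 0)  # local majority
    return new

random.seed(42)
states = [random.randint(0, 1) for _ in range(60)]
for _ in range(20):
    states = step(states)

# The printed line shows large, stable clusters -- an emergent result no
# single agent's rule mentions.
print("".join("#" if s else "." for s in states))
```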

While software is constrained by its initial architecture and existing user base, it is nearly infinitely malleable, as are modern businesses if they hope to survive. During design and construction, software systems are buffeted by technology, business process, and competitive and organizational changes. Faster business demands faster delivery of support applications. As the end of the 1990s approaches, businesses are more dependent on their software applications because information is often part of the product, in contrast to the past, when information supported business processes. Businesses are blurring, as Davis and Meyer contend. If application delivery organizations are to support their blurring businesses, they must adapt practices and processes to solve a new class of problems -- high speed, high change, and high uncertainty.

The first challenge is recognizing this new class of problems. Traditional SPM adherents address the issue of complexity in two ways:

  • If our processes were just more repeatable, measurable, etc., then we could solve the complexity problem. In the Software Engineering Institute's parlance, if we were all just at CMM level 5 (well, maybe 4 would do), we could adapt to the vagaries thrust upon us.
  • We need to find a balance -- some process, but not too much. We need to administer a judicious amount of process geared to the problem at hand.

The first approach suffers from a fundamental belief in optimization, with statistical process control as the ultimate goal. Proponents believe processes can first be defined and then improved until they are measurable and repeatable within prescribed limits.

Deep down, practitioners of optimization want to remove the messiness of the human condition from the mix. They believe in getting superb results from average talent by surrounding it with process (I must admit that the implication from the CMM that I am both average and immature is somewhat offensive). At the recent Cutter Consortium Summit '98 conference, someone asked how to make the requirements process repeatable. For projects undergoing high change and high uncertainty, the requirements process is not repeatable, at least in the typical sense of measurability and process control. Arriving at requirements is a nondeterministic process; we really don't know when we are finished, or how correct the results might be. Rob Arnold, CEO of the testing firm STLabs in Seattle, uses the word sustainable -- a better word for most processes.

The second approach, while more useful and practical, still suffers from process-itis, a central belief in linearity, causality, efficiency, and optimization. The second group usually acquiesces to developers' lack of enthusiasm rather than holding a fundamentally different belief from the first.

Dee Hock, the former CEO of VISA International who presided over VISA's growth to 7.2 billion transactions and $650 billion annually, says it simply and effectively:

Simple, clear purpose and principles give rise to complex, intelligent behavior. Complex rules and regulations give rise to simple, stupid behavior. --Fast Company supplement

Most software process management approaches suffer from far too many rules, standards, methods, checklists, forms, and approvals. Solving complex problems cries out for innovation, for rich information flows and quality relationships, and for loosely bounded space in which to explore. Traditional software process management has not worked hard enough on creating this kind of environment.

Peter DeGrace and Leslie Hulet Stahl address the issue of what they refer to as wicked problems, those with characteristics similar to complex problems. In discussing the waterfall lifecycle (described below), they point out that,

It would bring systemization and orderliness, but these are not sufficient to manage size or complexity. --Wicked Problems, Righteous Solutions

Solving complex problems does not mean abandoning good software process management, but it does mean adopting a new perspective on its use. It means understanding that applications delivery is not a mechanical process, but often an organic, nonlinear, nondeterministic one. It means altering some basic principles of how organizations work to solve complex problems, and instituting some new practices geared to those beliefs.

Lifecycles -- An Embodiment of Concept

An Evolution of Lifecycles

Since the early 1980s, a large number of lifecycle types have been identified. Some have been truly different types, while others have had new names but have basically been variations on a theme. Other than ad hoc categories (hacking, code and fix), which won't be described in more detail (there isn't much detail, actually), the major categories are (1) waterfall, (2) evolutionary, (3) RAD/prototyping, and (4) adaptive. A brief review of the first three, which are more widely known and used, provides a framework for discussing the last.

Waterfall Lifecycle

The traditional waterfall lifecycle (see Figure 1) has been around for many years and has been the lifecycle type for a large percentage of existing software applications. While it is currently in vogue to denigrate the waterfall approach, it has been successful in the past and should remain in the toolbox of good development managers.


Figure 1: A Waterfall Lifecycle

The waterfall works best on low-risk, predictable projects where the expectations of external change are low. It is scalable to larger, complicated projects as long as additional complexity is not introduced. When business changes were less frequent and less drastic, when the available technology was well defined, and when the applications being developed were simpler (basic accounting, payroll, etc.), the waterfall lifecycle was adequate. The fundamental conceptual basis of the waterfall was optimization, a belief in developing a plan and executing it.

Evolutionary Lifecycle

There are many variations of the evolutionary lifecycle (see Figure 2). Tom Gilb used the term evolutionary in the mid-1980s to describe an iterative approach with small, well-planned development cycles. Barry Boehm's spiral model explicitly incorporates the concept of driving development based on risk.


Figure 2: An Evolutionary Lifecycle

Evolutionary lifecycles are iterative in nature. They address uncertainty by taking smaller steps and testing the results. While an overall project plan is developed in the beginning, it is usually revised at each subsequent iteration. How much architectural and design work is done "up front," and whether each iteration involves an actual implementation of a certain feature set, determine the extent of variations in the evolutionary lifecycle. Some variations call for much of the definition and design work to be completed before the work is divided into modules that can then be built in parallel. Those modules are implemented, or not, depending on the variation being used.

Other evolutionary variations call for some overall architectural work that divides the work into modules. Each module then goes through its own cycle of definition, design, and development (again with the option of actual implementation).

Evolutionary approaches arose to help manage risk and uncertainty rather than to increase delivery speed. They tend to involve significant planning and documentation, at least in the form articulated by Gilb and Boehm. Evolutionary approaches have tended to incorporate "mini-waterfalls" within each development cycle. The fundamental concept is still one of optimization, just optimization in smaller chunks.

A major consideration in implementing an evolutionary lifecycle on a project is the additional project management expertise and experience needed.

RAD/Prototyping Lifecycle

While evolutionary lifecycles arose to combat risk, RAD/prototyping lifecycles arose to accelerate development. At first, RAD became almost a synonym for ad hoc approaches, but sophisticated practitioners soon learned the limits of ad hoc approaches. As RAD approaches were used on larger projects, they borrowed heavily from certain evolutionary practices. The second important aspect of the RAD/prototyping approach was its focus on customer involvement. Prototyping arose in the early 1980s, as better tools became available to enhance the developer's interactions with customers by providing something more meaningful (to the customers at least) than paper software engineering models.

RAD/prototyping lifecycles were tagged (probably incorrectly) as appropriate for smaller projects, usually topping out at around 1,500 function points in size. They were usually considered ill-planned, the product simply emerging from the interactions of developers and customers. Limits were placed on the projects, not in terms of requirements but in terms of time. Control was exercised by timeboxing -- setting a fixed delivery date and working to it. Some advocates set arbitrary limits on the time frame of a RAD project, saying anything over six months was not a RAD project.

The best RAD projects accelerated development measurably. They also furthered the concepts of dedicated, co-located teams, war rooms, Joint Applications Development (JAD) sessions, and use of high-powered development tools. Several RAD approaches also advocated formal product reviews.

RAD/prototyping lifecycles were limited in many IT organizations because the fundamental conceptual perspective did not conform to the organization's optimizing one. To become more agile, RAD/prototyping lifecycles took on a more adaptive philosophy, although it was not usually articulated as such. The breach between mainframe legacy application and client-server developers was substantial and often bitter in many organizations. The developers usually did not consider that they were working on different classes of problems, and therefore failed to understand the root cause of their differences.

Adaptive Lifecycle

Both the evolutionary and RAD/prototyping lifecycles provide a heritage for the adaptive lifecycle. The adaptive lifecycle, illustrated in Figure 3, combines aspects of these two lifecycle types with two additional explicit features.


Figure 3: The Adaptive Lifecycle

First, the adaptive lifecycle has a very explicit, adaptive, conceptual base. This is reflected in the iterative phase names -- Speculate, Collaborate, and Learn. While the RAD/prototype lifecycle has an adaptive flavor in many of its incarnations, the underlying conceptual differences are not often articulated.

Second, the adaptive lifecycle is also explicitly a component-based, rather than a task-based, approach. This is crucial to scaling up to larger projects while maintaining an environment conducive to the innovation needed for complex problems. Since the adaptive lifecycle is new, it will be described in more detail in the following three sections -- the conceptual base, the basic lifecycle, and the advanced lifecycle.

Adaptive Lifecycle -- The Concepts

One of the most difficult concepts for engineers and managers to examine thoughtfully is planning. The idea that there is no way to do it right the first time is foreign to many raised on the science of reductionism (reducing everything to its component parts) and the nearly religious belief that careful planning, followed by engineering execution, produces the desired results ("we are in control"). There was an inherent belief that the only way plans failed was through poor execution. In an interview in Fast Company (June/July 1998), Douglas Adams (author of The Hitchhiker's Guide to the Galaxy) discusses reductionism and creativity:

Up until now, we've pursued knowledge by the scientific method of taking things apart to see how they work. This has led us to the fundamental forces: atoms and quarks. The limitation is that once you take things apart, they don't work anymore. If you take a cat apart to see how it works, you have a nonworking cat on your hands.

Speculation is a somewhat audacious word (especially for my banking clients), but it captures the essence of succeeding in a complex situation. As we will see in subsequent sections, speculating doesn't mean abandoning planning; it merely acknowledges the reality of uncertainty. On complex projects, there is a significant block of the unknown and sometimes unknowable. Speculating is more about anticipating the future than trying to control it. Planning, as it is usually practiced, carries the assumption of knowledge and control.

Planning leaves little room for learning. For example, in nearly every quality assurance program in IT organizations, there is an explicit objective to "do it right the first time." This embodies the idea that we know what "right" actually is. We know what is right, because we have planned it. Unfortunately, in complex environments, this mantra sends the wrong message. In essence, it says:

  • We can't be uncertain.
  • We can't experiment.
  • We can't learn from mistakes (because we shouldn't make any).
  • We can't deviate from plan.

Deviations from plans are considered mistakes to be corrected, rather than opportunities for learning something important. Certainly good project managers have instinctively taken a more flexible approach to the word planning, but they often get caught up in constraints from customers and upper-level IT management, which keep them from exploiting their brief forays into flexibility.

Speculation recognizes explicitly the uncertain nature of complex problems and encourages exploration and experimentation. Speculation encourages looking at deviations as opportunities for learning. For example, feedback from customers that a feature is not what they wanted is as valuable as getting it right. Sometimes deviations offer advances over what customers thought possible, or did not think of at all. The question often comes back, "Do you mean mistakes are encouraged?" No, we deliver to the best of our knowledge and ability. However, the concept explicitly recognizes the inevitability of mistakes. Another question may be, "What about someone who makes the same mistake over and over?" The answer to this second question is the same as it would be using a more conventional lifecycle.

Learning is another of the adaptive concepts illustrated in Figure 3. If the anticipated results are speculative, then good learning practices are paramount for success. Although organizational learning has been a hot topic in management literature in this decade, the penetration of the concepts into IT organizations has been slow. One reason has been the difficulty in transitioning from the conceptual information about learning to more useful practices. A second is differentiating between kinds of learning.

A typical response to the charge of poor learning in IT organizations is, "Why are we spending tons on educating our staff on new technology?" True, the exceptional pace of new technology in the 1990s has created an enormous amount of learning about "things." But the typical organization is still poor at learning about "itself" -- its beliefs, assumptions, mindset, or mental models. Learning about oneself -- whether personally, at a project team level, or at an organizational level -- is key to agility and the ability to adapt to changing conditions.

An example of a practice directed at the second type of learning is the project retrospective or, as most people continue to refer to it, the project postmortem. Simple in concept -- asking periodically about what went right, what went wrong, and what changes need to be made in the future -- postmortems are difficult for most organizations because they are very political. They become vehicles for "who do we blame," rather than "how do we learn." To become an adaptive, agile organization, one must first become a learning organization.

The final conceptual component of an adaptive lifecycle is collaboration. Complex applications are not built, they evolve. Complex applications require that a large volume of information be collected, analyzed, and applied to the problem -- a much larger volume than any individual can handle alone.

Michael Schrage defines collaboration as "an act of shared creation and/or discovery." Schrage goes on, "The issue isn't teams; it's what kind of relationships organizations need to create and maintain if they want to deliver unique value to their customers and clients" (No More Teams: Mastering the Dynamics of Creative Collaboration, Currency Doubleday, 1989).

Shared creation crosses traditional project team boundaries. It encompasses the development team, customers, outside consultants, and vendors. While many IT organizations have installed collaboration tools (groupware) for other parts of their organization, their own use has been much more limited. It doesn't usually take long to establish that collaboration is not very advanced within an IT organization.

A waterfall lifecycle project can be subdivided into tasks, with each task assigned to an individual responsible for its execution. While communication is necessary and collaboration helpful, these projects can succeed on individual efforts. Complex projects succeed through group interaction and innovation -- the creation of emergent results through the group's collaborative efforts.

A Comparison

Table 1 -- Lifecycle Comparison Table

Rating Criteria                        | Waterfall    | RAD/Prototype | Evolutionary | Adaptive
Encourages customer interaction        | Fair         | Excellent     | Good         | Good
Promotes feedback and learning         | Poor         | Good          | Good         | Excellent
Ability to handle risk and uncertainty | Poor         | Good          | Good         | Excellent
Development speed                      | Poor         | Excellent     | Good         | Excellent
Scalability                            | Good         | Poor          | Good         | Good
Predictability and control             | Good         | Fair          | Good         | Fair
Project management expertise needed    | Low          | Medium        | Medium/High  | High
Client management expertise needed     | Low          | Low           | Medium       | High
Task or component based                | Task         | Component     | Task         | Component
Mindset (optimization or adaptation)   | Optimization | Adaptation    | Optimization | Adaptation
Amenable to timeboxing                 | Poor         | Good          | Fair         | Good
Amenable to concurrent development     | Poor         | Good          | Good         | Good
Amenable to requirements changes       | Poor         | Good          | Good         | Good

Before continuing with the discussion of adaptive lifecycles, Table 1 outlines some of the rating criteria for lifecycles and how well each type addresses these criteria. How well a lifecycle type addresses each criterion is indicated by poor, fair, good, or excellent, except in a few cases more meaningfully measured by different responses. While the table provides relative information on each of the lifecycle types, readers should be careful in their evaluation. Even though the waterfall lifecycle rates "poor" in several categories, it may be the best alternative for certain projects. Similarly, the evolutionary lifecycle may rate "good" in several categories, but be inappropriate for particular projects.

[Editor's note: The evaluations of the adaptive column in Table 1 inevitably carry the editor's distinctive bias. The first three lifecycle types have more history, and the evaluations are based on both my opinions and those of others in the field. The adaptive lifecycle, which is newer and therefore has fewer outside opinions available, was first promulgated by the author and is more susceptible to bias.]

Even more disconcerting is that ratings have to be geared, sometimes at least, to the appropriate use of the lifecycle. So, for example, the waterfall lifecycle is rated "good" on scalability, which is true only for projects appropriate for waterfall methods. In a more uncertain environment, the waterfall approach would rate "poor" in scalability.

While most of the criteria are self-explanatory, a few additional comments will help clarify their meaning.

Encouraging customer interaction is key to a lifecycle's success. Many of the waterfall approach's problems stem from poor interaction with customers during the lifecycle, particularly because its extensive documentation is not geared to the customer's context. The RAD/prototyping lifecycle gets the only "excellent" rating based primarily on size. As projects get bigger, even with good interaction practices, the ability to maintain good customer relationships becomes much more difficult.

While the evolutionary model, especially the spiral variation, handles some risk well, it does not handle wider uncertainty as well as the adaptive model does. With a fundamental belief in optimization, just in smaller chunks, the evolutionary model is less agile.

Development speed is meant to compare projects of similar size, so an adaptive project of 15 months might still be "fast," if comparable waterfall projects took 25 months.

One of the problems, or at least perceived problems, with RAD/prototyping and adaptive approaches is their poorer performance with respect to predictability and control. Both these approaches use timeboxing to improve performance with respect to these criteria, but the more fundamental problem is inappropriate expectations. Why would we think high-change, high-speed, and high-uncertainty projects should also be predictable and controllable? They are boundable and manageable, but not predictable and controllable. Is the reason many software applications miss delivery dates due to poor management or to the nature of complex projects?

Probably the biggest drawback to evolutionary and adaptive approaches is the higher level of expertise and experience needed to be successful. With the technology world expanding rapidly, just finding staff with basic project work-breakdown structure, estimating, and scheduling skills is difficult enough. Evolutionary and adaptive lifecycles live or die on good decisions and judgment by the project team and the project leader. Traditional software process management has attempted to reduce variability in results by instituting more repeatable processes that are less influenced by individuals. Success and failure on complex projects are much more dependent on individuals, and therefore more susceptible to lack of experience and expertise.

But again, it is as much a problem of expectation as actuality. Just as we should not expect a complex project to be as predictable, we should not expect one to be successfully managed by a novice project manager.

Another aspect of experience and expertise is that of the customer, customer management, and upper IT management. Projects with customers or with IT managers who do not understand the nature of complex projects, and those with customers who are reluctant to participate to the degree necessary in the project, will lead to disappointing results.

One key difference between lifecycle types is their emphasis on tasks or components (results). The waterfall model is purely task based, even to the point of confusing the lifecycle phases (requirements, design, etc.) with activities (analysis, design, etc.). In a waterfall approach, the analysis or specification activity results in a requirements document, which in turn delineates the end of the requirements phase. Therefore, analysis activity done in downstream phases has no place.

Both RAD/prototyping and adaptive lifecycles specifically plan components rather than tasks. They emphasize identification and outline specification of the components early, and then assign the components to development cycles. The detailed "how to" tasks are left to the discretion of the development team. Appropriate reviews and component completion criteria maintain quality.

One of the key determinants of accelerating project-delivery speed is the ability to manage concurrent development. In smaller projects, concurrency can be managed informally. As project size increases, concurrent development is dependent on good management of partial information about each component and intensive collaboration between component teams.

When comparing lifecycle types, the standard recommendation is to match the approach with the project. While a good general recommendation, the fundamental mindset differences between optimizing and adapting make this mix-and-match strategy very difficult to implement in practice.

The Basic Adaptive Lifecycle

While the iterative, Speculate-Collaborate-Learn cycle is useful for outlining overall concepts, more detail is necessary to actually deliver a software application. Figure 4 shows the basic adaptive lifecycle and extends the explanation of each of the components. The basic adaptive lifecycle would be very similar to one for a well-planned RAD/prototyping lifecycle. The relevant phrase is "well planned" since many RAD/prototyping projects suffer from too large a dose of ad hoc-ism. This basic lifecycle has been successfully used on projects up to the 5,000 to 7,500 function point range.


Figure 4: The Basic Adaptive Lifecycle

Project initiation is similar to that in any competent project management approach. It involves setting the project's mission and objectives, understanding and documenting constraints, establishing the project organization and key players, identifying and outlining requirements, making initial size and scope estimates, identifying key project risks, and performing other activities as appropriate for the type of organization and project. Since speed is usually a major consideration in using an adaptive lifecycle, much of the project initiation data should be gathered in preliminary JAD sessions.

As the size of the project increases above the 1,000 function point level, the project initiation work may also involve activities such as initial training, facilities procurement, and most importantly, the initial development of appropriate application architecture components. As with other deliverable components, the architectural components continue to evolve over the life of the project, although in a "well-behaved" project the changes to architectural components will decrease over the development cycles.

The project initiation activity needs to be detailed enough to provide a reasonable initial estimate of the project's size and enable creation of a component-based cycle plan.

Adaptive Cycle Planning

The second stage of "speculating" is the detailed, component-based cycle plan. Delivery is broken into three iterative levels -- Version, Cycle, and Build. The overall objective of each cycle is to force product visibility. Without visibility, deviations and mistakes remain hidden. Without visibility, learning suffers. At the Version level a product can be implemented by the customer. This definition follows the traditional understanding of versions of products. At the Cycle level a demonstrable set of components is delivered to a customer review process -- it makes the product visible to the customer. At the Build level a working component enters an integration process -- it makes the product visible to the development team.

The first step is to set the timebox for the entire project, decide on the number of cycles, and assign a timebox to each individual delivery cycle. The timebox should not be arbitrary, but based on reasonable estimating and knowledge of the resources assigned to the project. For a small to medium-size application of less than 5,000 function points, cycles usually vary from four to eight weeks.
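As a back-of-the-envelope illustration (every number below is an assumption for illustration, not a recommendation), the arithmetic of dividing a fixed project timebox into cycles looks like this:

```python
from datetime import date, timedelta

# The overall timebox is fixed first, then divided into cycles that fall
# within the typical four-to-eight-week range.
project_start = date(1998, 6, 1)
project_weeks = 24          # overall project timebox (assumed)
target_cycle_weeks = 6      # desired cycle length, within 4-8 weeks

num_cycles = max(1, round(project_weeks / target_cycle_weeks))
cycle_weeks = project_weeks / num_cycles

for c in range(1, num_cycles + 1):
    end = project_start + timedelta(weeks=c * cycle_weeks)
    print(f"Cycle {c} ends {end.isoformat()}")
```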

Timeboxing forces difficult trade-off decisions throughout the project so they don't accumulate until it is too late. It also provides checkpoints to evaluate progress. A third, and very important, function of a timebox is to force the team to deliver partially completed results. Waiting until the team is comfortable with its results means it has gone too far without meaningful feedback. For example, there is little point in developing the details of a component that has a basic flaw. Timeboxing helps force the flaws to the surface early so they can be corrected. It is not easy. In a culture that lauds "high-quality" work the first time, it is difficult to release partially completed results with defects. But it is essential to quick adaptation to an ever-changing landscape.

After establishing the number and time frame of each cycle, the next step is developing a theme or objective for each cycle. Just as it is important to establish an overall project objective, each cycle should have its own theme to focus the delivery team.

The third step is to assign components to cycles. Components should be categorized for more effective planning. The first and most important category comprises components that deliver a visible, tangible result to an end user. Whether expressed as use cases or as more traditional business scenarios and features, this establishes the most important components to be delivered and reviewed each cycle. One of the primary tenets of adaptive delivery is to deliver something useful to the end user, the customer, each and every cycle. Other components might be categorized as technology or support components on which primary customer components are dependent.

Assigning components to cycles is a multidimensional exercise; a sketch of one possible balancing heuristic follows the list below. Factors in the decision include:

  • Making sure each cycle delivers something useful to the customer
  • Identifying and managing high-risk items early in the project
  • Scheduling components to accommodate natural dependencies
  • Balancing resource utilization.
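The following sketch makes the balancing concrete. It is hypothetical: the component names, risk scores, and placement rules are invented for illustration, and real cycle planning remains a team judgment call rather than an algorithm. It places high-risk components first, honors dependencies, spreads the load, and flags any cycle that would deliver nothing visible to the customer.

```python
# A hypothetical balancing heuristic (all data invented for illustration).
components = [
    # (name, risk 1-5, customer_visible, depends_on)
    ("Order Entry",          4, True,  []),
    ("Order Pricing",        3, True,  ["Order Entry"]),
    ("Warehouse Picking",    3, True,  []),
    ("Install Visual Basic", 2, False, []),
    ("Conversion Plan",      1, False, ["Order Entry"]),
]

num_cycles = 4
plan = {c: [] for c in range(1, num_cycles + 1)}
placed = {}  # component name -> cycle it was assigned to

# High-risk components first, so surprises surface early in the project.
for name, risk, visible, deps in sorted(components, key=lambda x: -x[1]):
    # A component starts no earlier than the cycle after its dependencies
    # (a simplification -- same-cycle dependencies are possible in reality).
    earliest = 1 + max([placed.get(d, 0) for d in deps], default=0)
    # Balance resource utilization crudely: pick the lightest eligible cycle.
    cycle = min(range(earliest, num_cycles + 1), key=lambda c: len(plan[c]))
    plan[cycle].append(name)
    placed[name] = cycle

# Sanity check: every cycle should deliver something the customer can see.
visible_by_name = {name: vis for name, _, vis, _ in components}
for c in range(1, num_cycles + 1):
    warn = "" if any(visible_by_name[n] for n in plan[c]) \
        else "  <- nothing visible to the customer!"
    print(f"C{c}: {plan[c]}{warn}")
```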

The most effective tool for this cycle planning is usually a spreadsheet. The first column contains all the identified components. There is then a column for each cycle. (Cycle 0 can be used to accommodate project initiation and architecture activity.) As decisions are made on the scheduling of each component, either they are moved to the appropriate cycle or a check is made in the cycle column. Table 2 shows an example.

Table 2 -- A Sample Cycle Plan

Component                | C1    | C2    | C3    | C4
Cycle Delivery Dates     | 1-Jun | 1-Jul | 1-Aug | 1-Sep
Primary Components
  Order Entry            | x     |       |       |
  Order Pricing          |       | x     |       |
  Warehouse Picking      |       | x     |       |
  Partial Order Shipping |       |       | x     |
  Calculate Reorders     |       |       | x     |
  System Interfaces      |       |       | x     |
  Pricing Error Handling |       |       | x     |
  Security and Control   |       | x     |       |
Technology Components
  Install Visual Basic   | x     |       |       |
  Install Comm. Lines    |       |       | x     |
Support Components
  Client-Server Arch.    |       | x     |       |
  Conversion Plan        |       |       | x     |
  High-Level Data Model  | x     |       |       |

Experience has shown that this type of planning, done as a team rather than exclusively by the project manager, provides a much more solid foundation for understanding the project than a traditional task-based approach. It is too easy to copy a task template from a previous project (or a standard template), fill in a new set of estimated hours, and deliver a completed plan.

Component-based planning reflects the uniqueness of each project. While there may be, and should be, best practices for activities such as analysis, object modeling, and design, what makes these activities unique to a project are the components, which ultimately reflect the deliverables to the customer. There may be certain tasks added to the project plan in addition to the components, but the bulk of the plan is a "component breakdown structure," rather than a "work breakdown structure."

Component-based cycle plans tend to be both more useful and much shorter than traditional task-based plans. I've been part of planning exercises for 4,000- to 6,000-function-point projects where the plans fit on two or three spreadsheet pages. More importantly, clients invariably respond that these few pages provide a much better view of the project than their pages and pages of task-based work breakdown structures printed out of Microsoft Project. As might be expected, managing a component-based project becomes a very different experience.

Concurrent Component Engineering

Concurrent component engineering delivers the identified components. From a management perspective, this phase is more about collaboration and dealing with concurrency than it is about the details of component software engineering. For smaller projects, especially where the team is co-located, concurrency is usually handled on an informal basis. Dependencies and utilization of partial information are handled by hallway chats and whiteboard scribbling. Managing concurrency in larger projects will be covered in the section describing the advanced adaptive lifecycle. Collaboration will be the topic of the next issue of ADS.

Quality Review

The basic adaptive management approach is to maintain a focus on the project scope and objective, assign components to the delivery team, stand back and let the team figure out how to deliver the components, and maintain accountability through quality reviews. Rather than manage the how, quality is maintained by establishing appropriate exit criteria and review processes. These feedback processes are key to learning.
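As a small illustration of accountability through exit criteria (the criteria themselves are invented here; each organization will define its own), a component's exit from a cycle can be treated as a simple gate:

```python
# Hypothetical exit criteria for one component: accountability attaches
# to the result, not to how the team produced it.
exit_criteria = {
    "customer focus group review held":        True,
    "technical review completed":              True,
    "no open severity-1 defects":              False,
    "interfaces published to dependent teams": True,
}

unmet = [c for c, met in exit_criteria.items() if not met]
if unmet:
    print("Component does not exit this cycle. Unmet criteria:")
    for c in unmet:
        print(" -", c)
else:
    print("Component passes its quality gate for this cycle.")
```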

There are four general categories of things to learn at the end of each development cycle:

  • The product component's quality from the customer's perspective
  • The product component's quality from a technical perspective
  • The functioning of the delivery team and the practices they are utilizing
  • The project's status.

The first priority in most projects is providing visibility and feedback from the customers. One vehicle for this is a customer focus group. Derived from the concept of marketing focus groups, these group sessions are designed to explore a working model of the application and record customer change requests. They are facilitated sessions, similar to JAD sessions, but rather than generating requirements or defining project plans, focus groups are designed to review the application itself. In the end, the only artifact of application delivery the customer actually can relate to is the working application. Where prototyping sessions involve individuals or small groups in defining an application, focus groups are more formal cycle milestones.

A second key to delivering quality products is reviewing technical quality. A standard vehicle for technical quality assessment is the technical review. (I won't differentiate between reviews, walk-throughs, and inspections here.) An application architectural review might occur at the end of a cycle, whereas a code review or test plan review might occur at any point during the cycle. In a component-based approach, where the delivery team is assigned a component and is responsible for how it is implemented, good quality criteria and reviews to monitor accountability against those criteria are important.

The third feedback process is to monitor the team's performance. This might be called the "people and process" review. While project postmortems are done in some organizations (not nearly enough are serious about them), mini-postmortems are needed at major milestones. (With very short cycles of four to six weeks, these reviews might be done every other cycle.) While the questions asked in postmortems can be lengthy, the basics are "What's working? What's not working? What do we need to do more of? What do we need to do less of?" Postmortems tell more about an organization's ability to learn than nearly any other practice. Good postmortems force us to learn about ourselves and how we work. Poor postmortems are hatchet jobs and are quickly discontinued, either in fact or in substance.

The fourth category of review is not strictly related to quality. It is the review of the project's overall status. This review is fed directly into a replanning effort at the beginning of each subsequent cycle.

The basic questions are:

  • Where is the project?
  • Where is it versus our plans?
  • Where should it be?

Determining a project's status is different in a component-based approach. In a waterfall lifecycle, completed deliverables mark the end of each major phase. A complete requirements document marks the end of the specification phase, for example. In a component-based approach, rather than having a single completed document or other deliverable, the status often reflects multiple components in a variety of states of completion. In reality, determining progress is not harder, just different.
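A small sketch of what such a status report might look like (the component names follow Table 2; the "actual" results are invented for illustration):

```python
# Cycle-end status in a component-based project: status is a list of
# components and what happened to each, not a single completed document
# or a percent-complete figure.
planned_for_c2 = {"Order Pricing", "Warehouse Picking", "Client-Server Arch."}
delivered_in_c2 = {"Warehouse Picking", "Client-Server Arch."}

print("Delivered as planned:    ", sorted(planned_for_c2 & delivered_in_c2))
print("Slipped to a later cycle:", sorted(planned_for_c2 - delivered_in_c2))
print("Pulled in early:         ", sorted(delivered_in_c2 - planned_for_c2))
```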

The last statement needs clarification. If the same noncomplex project were undertaken twice, once using a waterfall approach and again using an adaptive approach, measuring progress would be comparable between the two approaches, just different. However, the more unpredictable nature of complex projects causes comparison problems. Our historical data provides a basis on which to evaluate progress on noncomplex projects. When the first complex projects are undertaken, the comparison of predictability is often made against this historical base. The conclusion, therefore, is that the component-based approach is inferior for prediction and control. The more correct interpretation is that complex projects are different in kind, and predictability and control shouldn't be viewed in the same way.

Many managers experienced in traditional approaches feel a distinctive loss of control and influence. Their "gut feelings" and rules of thumb about progress have to be reinitialized. It is not easy, but the days of checking off completed tasks, or worse, calculating percentage completions to measure progress, are fading rapidly.

The last of the status questions is particularly important -- where should the project be? Since the plans are understood to be speculative, measurement against them is insufficient to establish progress. The project team, IT, and customer sponsors need to ask continually, "What have we learned so far, and does it change our perspective on where we need to be?"

The Advanced Adaptive Lifecycle

As mentioned earlier, the conventional wisdom on RAD/prototyping lifecycles is that they are restricted to smaller projects. Once these projects grow much beyond 1,500 function points, conventional wisdom dictates a more formal, rigorous, and optimized approach. There is a definite place for additional rigor in larger projects. But if the key to success on complex projects lies in flexibility and group innovation, how can these conflicting concepts be incorporated into a solution?

There are other problems as size increases. Uncertainty creates a high level of change. The need for accelerating delivery schedules demands increased levels of concurrency. Concurrency means operating on partial information, which creates more change and rework. So acceleration in uncertain environments is at best a difficult situation to manage, and potentially an unmanageable one.

While there are several components to addressing these problems, this discussion will be limited to lifecycle issues. The solution involves scaling the basic component lifecycle to handle increased concurrency and partial information, while maintaining an overall emphasis on group innovation to solve the thorny problems. Rather than increasing the rigor of the processes, the advanced component-based lifecycle increases the rigor of managing the results, or components.

The fundamental concepts for this approach to lifecycles come from nonsoftware product development. In automobile, electronics, and technology equipment product development companies, this approach is called a "phase and gate" lifecycle. It focuses on managing the what, not the how. It focuses on identifying and planning components, determining the dependencies and interrelationships between components, monitoring the evolution of each component through defined completion "states," and evaluating progress by evaluating and monitoring both fully and partially completed components at the end of each cycle. The advanced adaptive, or component, lifecycle is a "grown-up" version of the basic adaptive lifecycle, which in many respects is a more sophisticated version of the RAD/prototyping lifecycle.

In the earlier basic cycle plan shown in Table 2, the development of each component was marked in the cycle where the majority of the effort was scheduled to occur. In reality, work on each component spans several cycles. A menu item and preliminary input screen might be done in cycle 1, the main features delivered in cycle 2, requested changes and error correction incorporated in cycle 3, and final changes finished in cycle 4. The feature evolves over several cycles. Similarly, a requirements document might be outlined in cycle 1, details added in cycles 2 and 3, and finally approved in cycle 4.

In smaller projects, this feature evolution can be managed informally. Knowing the cycle where most of the work will occur is sufficient. Concurrency and interdependencies can be managed informally also. As size increases and more feature teams are needed, the number of cross-feature team dependencies also increases. Referring to the refined cycle plan in Table 3, the order pricing component due to start in cycle 2 is dependent on some basic functionality of the warehouse picking component being complete. These components are being done by different feature teams, located in two different buildings. The picking component team creates multiple builds of its component during the first cycle. At some point the team declares its component "good enough" -- that is, in state 1 (the declaration may involve the other team's approval). The order pricing feature team now has more than just a random build of a predecessor component. It has a partially completed component with a predefined, preplanned completion state. The second team therefore has a much better idea about the limits of its own work. If the team goes too far, it risks significant rework. Not far enough, and it loses time.

Table 3 -- Refined Delivery States

Component                | C1    | C2    | C3    | C4
Cycle Delivery Dates     | 1-Jun | 1-Jul | 1-Aug | 1-Sep
Primary Components
  Order Entry            | S2    |       | S4    |
  Order Pricing          |       | S1    | S2    | S4
  Warehouse Picking      | S1    | S2    | S3    | S4
  Partial Order Shipping |       |       | S1    | S4

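A minimal sketch of this state-gated handoff, using the components from Table 3 (the five-state ladder and the declaration mechanics are my illustration, not a prescribed implementation):

```python
# A downstream team starts work only when a predecessor component has
# been *declared* to be in an agreed completion state -- not when some
# interim build merely looks good enough.
STATES = ["S0", "S1", "S2", "S3", "S4"]   # not started -> final
declared = {"Warehouse Picking": "S0"}     # states declared so far

def can_start(predecessor, required_state):
    have = STATES.index(declared.get(predecessor, "S0"))
    return have >= STATES.index(required_state)

# Order pricing needs basic picking functionality (state S1) to start.
print(can_start("Warehouse Picking", "S1"))   # False -- pricing team waits

declared["Warehouse Picking"] = "S1"          # picking team declares state 1
print(can_start("Warehouse Picking", "S1"))   # True -- pricing team proceeds
```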
Cycle milestones are now defined in more detail, not only by components, but also by the state of each component. For example, cycle 1 completion might be defined by:

  • Order entry in a detail state
  • Warehouse picking in an outline state
  • Overall data model in an outline state
  • Requirements document in an outline state
  • Technical architecture in a reviewed state.

One of the problems with concurrent development is the high rate of information flow between interdependent groups. If everyone gets everything, it becomes information clutter rather than information flow. Information states help filter the clutter in two ways. First, the downstream team doesn't look at a lot of interim work (unless it wants to). When the state changes, the team knows it is time to process the information. Second, at cycle milestones, access rules to information can change. For example, the partial order shipping feature team doesn't want to see any information on ordering until the beginning of cycle 3, since it can't do anything with it.
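A sketch of this filtering idea (the team names and notification mechanics are assumptions for illustration): teams subscribe only to the components they depend on, and only declared state changes cross team boundaries.

```python
from collections import defaultdict

subscriptions = defaultdict(list)   # component -> teams that care about it

def subscribe(team, component):
    subscriptions[component].append(team)

def declare_state(component, new_state):
    # Only a declared state change crosses feature-team boundaries;
    # interim builds generate no cross-team traffic.
    for team in subscriptions[component]:
        print(f"notify {team}: {component} has reached {new_state}")

subscribe("order pricing team", "Warehouse Picking")
subscribe("partial order shipping team", "Order Entry")

declare_state("Warehouse Picking", "S1")  # pricing team knows it can proceed
# The many interim builds of Warehouse Picking trigger no notifications.
```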

With a smaller project, all the project information is made available to the team from beginning to end. The team feels involved. As size increases, this involvement becomes inundation. Phases and gates, cycles and milestones, act as filters to regulate the flow of information. They increase the rigor of managing the results, but still leave the how to individual feature teams.

The concept of an advanced component-based lifecycle then helps solve the two key issues as complex projects get larger. First, by applying rigor to results rather than the process, it is less bureaucratic and less stifling to group innovation. Second, creating intermediate component completion states helps manage the high rates of information flow necessary for successful concurrent delivery.

Adaptive Delivery in Practice

While some of the concepts of adaptive software development are new, the lifecycle practices have evolved from well-tested evolutionary and RAD/prototyping lifecycles. Responses from clients, workshop participants, and conference attendees range from "That's really how we work, but it now has a name and more complete definition" to "This is really weird stuff."

About two years ago, an adaptive approach was used to manage a six-month project at Nike to define a corporate-wide product data architecture (this project is described in more detail in "Just Doing It," Warren Keuffel's Tools of the Trade column, Software Development, November 1997). The team consisted of a half-dozen core project team members and another 15 part-time members. In this case, the deliverable components were technical and business-oriented documents. Although the goals of the project were well established, the specific contents of the deliverables and how to accomplish the results were much less certain.

The Nike project evolved in three cycles. Almost no time, from a project management perspective, was spent defining or managing tasks. Daily discussions were focused on deliverables and their content, not tasks. Jerry Gordon, chief architect of the project, was pleased with both the results and the approach. He attributed part of the success to Nike's culture, which he felt was amenable to adaptive ideas.

A second example is the PhORCE project currently in development by Phoenix International Life Sciences in Montreal, Canada. Phases I and II of the project are estimated to be in the 5,000-10,000 function point range and involve more than 20 developers. Phoenix is making extensive use of an adaptive approach on this project. Technical Project Manager Marc Gagnon comments, "The use of component-based planning has really helped the whole team understand the project, much more so than with our original task-based plan. While our customers were somewhat reluctant at first, their involvement has dramatically increased with the use of JAD sessions and customer focus groups. We've also noticed how the short cycles with specific components due for each has really helped keep us focused, especially the developers."

Gagnon continues, "Our original attraction to adaptive development was the need to speed delivery and improve customer involvement. We wanted something that was flexible, that guided our efforts without imposing stringent processes."

Summary

Not every project is a complex one. In fact, in many IT organizations complex projects may account for less than 20% of the total. But it is usually a very critical 20%. Again, it is not that optimizing practices are not needed -- no large project of any flavor can succeed without some measure of configuration management, for example -- but that they are insufficient. They need to be utilized within a new framework. A new set of practices and, more important, a different mindset are needed on complex projects.

Resources and References

Sam Bayer and Jim Highsmith, "RADical Software Development," American Programmer, June 1994.

Barry Boehm, "A Spiral Model of Software Development and Enhancement," IEEE Computer, May 1988.

Stan Davis and Christopher Meyer, Blur: The Speed of Change in the Connected Economy, Addison-Wesley, 1998. www.blursight.com/lowcap/home/index.asp.

Peter DeGrace and Leslie Hulet Stahl, Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms, Yourdon Press, 1990.

Fast Company magazine, www.fastcompany.com (supplement).

Tom Gilb, Principles of Software Engineering Management, Addison-Wesley, 1988.

James A. Highsmith, Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, Dorset House, 1998 (forthcoming).

James A. Highsmith, "Messy, Exciting, and Anxiety-Ridden: Adaptive Software Development," American Programmer, April 1997.

Steve McConnell, Rapid Development, Microsoft Press, 1996.

Santa Fe Center for Emergent Strategies, www.santafe-strategy.com.

Santa Fe Institute, www.santafe.edu.

Michael Schrage, No More Teams: Mastering the Dynamics of Creative Collaboration, Currency Doubleday, 1989.

M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.

M. Mitchell Waldrop, "The Trillion-Dollar Vision of Dee Hock," Fast Company, 1997.

Margaret J. Wheatley, Leadership and the New Science: Learning about Organizations from an Orderly Universe, Berrett-Koehler, 1992.


Editor's Musings

In this, my first issue of ADS, it seemed appropriate to provide some perspective on the direction and focus that I would like to bring to the newsletter.

First, I want to focus attention on the issues of coping with environments characterized by complexity. In this era of speed and uncertainty, our more traditional management tools are often either insufficient or inappropriate. It will not be an easy task for either me or you, the reader. Some of the ideas may be controversial, some may be wrong, but hopefully they will be thought provoking.

The adaptive lifecycle presented in this edition is part of a larger framework for managing as the next millennium approaches. There are many pieces, but I use the term Leadership-Collaboration model to distinguish the amalgamation of these newer practices and theories from the better-known and better-articulated Command-and-Control model. As future issues of ADS address specific issues such as collaboration or knowledge management, I will attempt to relate them both to this new framework and to practical application delivery problems.

In looking to the future, application development issues will continue to be international and interdisciplinary in scope. One of my fascinations with complex adaptive systems is the field's broad interdisciplinary perspective. Bringing in perspectives from other disciplines often provides fresh, new ideas. The Complex Adaptive Systems Corner will be a regular feature of the newsletter. From working with consulting clients in different parts of the world, I also hope to bring more than a US perspective to common problems.

Another perspective I hope to bring to the newsletter is that of software companies. For much of my career, I was part of, or consulted with, IT organizations. However, in the past 10 years at least one-half of my work has been with software companies. They bring a different perspective to delivery issues -- sometimes appropriate for IT organizations and sometimes not, but usually food for thought.

Along with coverage of concepts and practices, the newsletter will also continue to cover appropriate software tools. My intent will not be exhaustive coverage of tools in any category, but coverage designed to illustrate how practices are being supported by tools. In keeping with this objective, I will also look for smaller, less well-known tools that reflect interesting new ideas.

I will also take an opportunity periodically to address a pet issue of mine. Just as it is currently fashionable to bash Microsoft, it is always fashionable to bash application development management. No conference is complete without appropriate speaker- or audience-bashing of management. There seem to be two alternatives -- either software management is uniformly bad or it is increasingly difficult. I prefer the latter perspective. Sure, there are bad managers, just like there are bad developers, but taking the wrong perspective hinders learning. Instead of asking, "Why is software management so bad?" we should be asking, "Why is software management so hard?"

The focus of ADS will also be on delivery as much as development. For many IT organizations the amount of actual code development is going down while the amount of component assembly and work with diverse service contractors is going up. The focus is on delivering solutions, not just developing code. While I was working with ASB Bank in New Zealand recently, Steve Watt, Chief Manager of Software Services, made just this point: "We will need more generalists and fewer coders. We need to act more like general contractors and contract for the specialists."

There will also be more of a focus on management and less on technology. Issues will be more along the lines of how to assemble and manage a team to deliver a component-based application, rather than a discourse on the technology of DCOM or CORBA.

A large part of thriving and surviving in an increasingly complex world is constant learning. It is not a one-way street. I welcome your comments, input, observations, thoughts, and criticism. Let me know what you think about the newsletter content and topics you are interested in for the future.

--Jim Highsmith, jimh@adaptivesd.com



© Copyright 1998 Cutter Information Corp. All rights reserved. No part of this document may be reproduced in any manner without express written permission from Cutter Information Corp.
 
