Executive Report

Methods for Defining and Analyzing Key EA Performance Metrics

Posted February 24, 2015 | Leadership |

B & EA Vol. 18, No. 2


With the increasing adoption of enterprise architecture (EA), organizations face many challenges in how to measure and demonstrate the value that EA provides to an enterprise. This Executive Report describes a process for deriving EA value metrics that align with the value drivers particular to an organization -- those important to both the organization's core business capabilities and its key stakeholders.

Enterprise architecture (EA) is increasingly utilized by many types of organizations. Despite this growing popularity, many organizations struggle to measure and demonstrate evidence of the value that EA provides to an enterprise. To gain some insight into this problem, this Executive Report provides an overview of the means used to measure the value of EA within organizations. The report seeks to accomplish four tasks. First, it demonstrates that EA value measurement is a challenge that organizations must address. Second, it highlights the variety of methods and measures that organizations currently use in their attempts to measure the value of EA. Third, it provides insight into the reported challenges facing organizations involved in measuring the value of EA. Finally, it proposes a conceptual model for EA value measurement that organizations that have implemented EA can utilize.

Leadership within organizations understands the need to strategically align business needs with the IT functions that support an organization's business units. Aligning business needs with IT can mean managing complex issues such as processing vast quantities of data to analyze products and services, avoiding redundancy in IT solutions, and working toward an efficient flow of information throughout an organization. These are just a few of the business and IT needs that organizations must address.

Organizations also realize that they cannot manage their activities if they don't measure them properly. Metrics are often gathered and used as a means to clearly and consistently report on the performance of business activities. A series of related metrics can be combined into "measures" that allow for accurate communication about performance trends and progress toward strategic goals. This report describes a process for deriving long-term EA value metrics that are aligned with the value drivers of the organization. This process ensures that an organization will select the optimal set of metrics -- those that have the greatest likelihood of demonstrating the value that EA has produced for the organization. The process also ensures that the metrics selected align with both the value measures utilized by the organization's core capabilities and those utilized by its key stakeholders. Aligning the EA value measurement program this way allows the EA team to directly show how it positively impacts measures that matter to the rest of the organization.


There is no single agreed-upon definition for enterprise architecture. As a result, how to define EA remains highly debated by those who research and practice it in both academia and industry. In this report, we use the definition of EA supplied by the Federation of Enterprise Architecture Professional Organizations:

Enterprise architecture is a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a holistic approach at all times, for the successful development and execution of strategy. Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies.1

Organizations continue to focus on becoming more agile in an effort to respond to the dynamic environments created by ever-changing markets, customers, and competitors. In order to remain competitive, organizations need to develop and utilize architectures that address the enterprise in a holistic fashion and that go beyond just architectures developed solely for IT transformation. Many large organizations are realizing the strategic impact and business value that EA generates.

Organizations expect many benefits from EA. Studies claim that EA brings about seamless integration of software services (both homogeneous and heterogeneous) and planning procedures. Organizations anticipate that EA will help them reduce complexity and achieve better efficiency, which in turn will aid managers in making decisions and scoping projects. Research also claims that EA can help stakeholders make better decisions by assisting management with prioritizing goals and communicating them effectively across an enterprise.2

Achieving strategic alignment between IT and business is another major expectation of adopting EA. The rise in business mergers and acquisitions, the continued economic crisis, and unstable governance are just some of the contemporary situations that lead to a dynamic business environment. Strategic alignment plays a significant role in improving the performance of an organization. It allows organizations to be more productive by achieving potential returns from IT investments and competing even under dynamic market conditions.

The effective use of IT is now an accepted organizational imperative -- and the primary motivation is improved communications and effectiveness. The swift pace of change in technologies has consigned many formerly established best practice approaches to the past. Today's EA decision makers and business managers face uncertainty, characterized by a lack of relevant and practical advice and standards to guide them through this new business revolution.

For organizational investment in EA to deliver full value, EA must fully align to organizational strategies and direction, key risks must be identified and controlled, and legislative and regulatory compliance must be demonstrated. One of the greatest challenges faced by those trying to manage EA in today's fast-moving economy and complex technical environment is knowing whether the "ship is on course" and being able to predict and anticipate failures before it is too late. As when driving a car or steering a ship, good instruments are essential.


Measurement is not a new concept. We deal with measurement in some form or another in our daily lives. When talking about metrics for business processes, we can sum up the concept with this well-known statement: "You cannot manage what you cannot measure. And you cannot improve something that you cannot manage properly."3 The term "metrics" refers to a series of measurement steps: defining the measure, how the measurement will be carried out (which may involve mathematical calculations), who will be involved in the measuring process, and the source/origin of the data.
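The elements of a metric definition listed above can be captured in a small data structure. The sketch below is illustrative only; the class and field names (`Metric`, `calculate`, `data_source`) are assumptions for this example, not part of any EA standard:

```python
from dataclasses import dataclass
from typing import Callable

# A metric bundles what is measured, how the measurement is carried out,
# who is involved in measuring, and where the data comes from.
@dataclass
class Metric:
    name: str
    description: str                     # the measure being defined
    calculate: Callable[[dict], float]   # how measurement is carried out
    owner: str                           # who is involved in measuring
    data_source: str                     # source/origin of the data

# Hypothetical example: percentage of projects aligned with the architecture.
alignment = Metric(
    name="Project architectural alignment",
    description="Share of active projects reviewed and aligned with EA standards",
    calculate=lambda d: 100.0 * d["aligned_projects"] / d["total_projects"],
    owner="EA governance board",
    data_source="Project portfolio repository",
)

print(alignment.calculate({"aligned_projects": 18, "total_projects": 24}))  # 75.0
```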

A systematic and solid corporate performance management process is of paramount importance for organizations to survive in this Information Age4 and a vital part of performance management is metrics. Corporate performance metrics provide a picture of the health of an area under scrutiny within an organization, identify strengths and weaknesses, and enable detecting processes that deviate from normal behavior (e.g., fault detection, risk analysis). Organizations rely on metrics to assess and improve customer satisfaction, time-to-market factors, and other innovative processes for improving their performance. Numerous research studies have shown the impact of using metrics within organizations (e.g., customer metrics,5 financial performance metrics, and organizational performance metrics6). Defining and utilizing the right set of metrics is critical for organizations.

Accounting for tangible assets is a common and straightforward process that every organization performs through financial reporting; for intangible assets, however, there is no single standard or comprehensive reporting mechanism. Many research studies have identified constructs of customer metrics, including observable or tangible constructs as well as unobservable or intangible constructs, and how they affect organizational performance. Experts believe that the easiest way of collecting data to measure unobservable customer metrics is through customer surveys. In addition, numerous studies discuss the need for evaluating the business value generated by IT services and propose methods for doing this.7 However, EA value evaluation should be based on more than just the business value analysis of IT services. It requires a much more comprehensive analysis encompassing constructs across an entire organization.

The process of measurement for EA is so complex that there isn't a single model that serves as a best fit for all measurements. Every organization has its own customized measurement process depending on its needs and management decisions. While numerous studies illustrate the ongoing necessity for measuring IT efforts using value metrics, very little research has been done on evaluating a metrics program itself. One of the major steps in defining a measurement process for an organization is effectively identifying the business goals of an organization.

When compared to EA value metrics, research on software metrics is more mature, as the tools to gather and analyze software metrics have been developed and used for a longer period of time. However, organizations lack robust measurement techniques to carry out measurement for EA. While many organizations focus predominantly on EA frameworks and tools, there is typically very limited attention given to assessing or evaluating the value of EA itself through the use of metrics or measures. In addition, there has been very little research conducted in academia related to measurement processes of value in EA. Popular measurement approaches such as the balanced scorecard have been applied in the context of EA value measurement.8 Another study involves techniques to measure the corporate value from EA.9 Probability models have also been used to analyze the impact of EA in organizations.10 Although such methods that discuss value measurement for EA certainly exist, most assume that a common set of metrics will work for all organizations and fail to consider that different organizations have different value sets that require different measures.


For many years, the use of measures to help steer the EA function has been a challenge that few appear to have successfully addressed -- hence the oft-used expression "It's like driving a car with a blacked-out windscreen and no instruments." If steering EA is difficult for those who are literate in technology and relatively close to the IT function, it is even worse for end customers, who find technical jargon a smokescreen and the lack of information relevant to their business a major headache.

There is no doubt that a practical and effective way to measure EA's contribution to the organization is an essential part of any IT program. EA, as a key resource for the enterprise, begins in the realm of enterprise planning at the strategic level. As shown in Figure 1, the value creation for an enterprise starts with the development of a business strategy based on the organization's key business drivers. The resulting approach then guides the inception of IT strategy with IT strategy influencing enterprise business strategy over time. Together, these two strategies drive the formation of modern EA -- a combination of business and IT architectures that exist symbiotically. Once established, EA supports the decision makers in the underlying architecture domains at a tactical level. Figure 1 demonstrates the EA value-creation process going from business drivers to business value delivery, with EA value measurement as an overarching theme throughout this process.

Figure 1 -- EA value creation.


EA teams can use the steps at the bottom of Figure 1 to develop and maintain EA metrics. Metrics development is integrated throughout the EA value-creation process.


The use of EA in decision making enables practitioners to focus on achieving desirable levels of IT quality and performance, while ensuring that appropriate infrastructure support is in place to deliver the right business solutions. For most large commercial companies or government agencies, primary uses of EA include the following:

  • Effective value creation and delivery. This involves creating and delivering business value by effectively managing the EA activities that impact organizational value.

  • Facilitation. This encompasses establishing and promoting standards, best practices, and guidelines for technology adoption and support across organizations in order to advance business and IT strategies, as well as preparing the process for change management to adopt new technologies.

  • Guidance. This involves defining and enforcing ways to use architecture patterns and information during the entire lifecycle of major application development, integration, and deployment.

Most practitioners agree that at the strategic level, EA can influence and guide IT planning, spending, and investment priorities, as well as the alignment of IT resources in pursuit of the desired business goals. At the tactical level, EA supports the ability of an organization to mitigate development risks through guidance and the use of standards. From both perspectives, it is essential for practitioners to recognize the significance of a measurement program in making informed decisions.


An effective EA measurement program typically begins by identifying a set of precise and easy-to-use metrics recognized by both business and IT organizations across the enterprise. At a strategic level, EA metrics establish a number of quantifiable parameters that enable practitioners to assess and evaluate the EA program, the IT assets employed, and their relevance to delivering business value for the enterprise. At a tactical level, EA metrics include parameters that impact the EA and its effectiveness across the organization -- both directly and indirectly.

EA Measurement Program and Metrics

By leveraging EA metrics in a measurement program, IT practitioners and business stakeholders can evaluate:

  • The benefits delivered as a result of applying or following architecture processes, models, frameworks, and technology standards

  • The alignment (or lack of alignment) between projects and programs and the business strategies they support

  • The ability of each individual project to overcome architecturally significant risks, constraints, and challenges

  • The common architectural risks inherent in the overall architecture planning for business transformation, application rationalization, or legacy modernization initiatives

  • The use of EA information, such as patterns, standards, and registries

Six Steps for Leveraging EA Metrics

For practitioners, leveraging the metrics for EA governance involves six key steps (see Figure 1):

  1. Identify. Define the business capabilities that can meet strategic business goals and recognize the enterprise-wide stakeholders, business sponsors, and EA governance body members.
  2. Establish. Create an inventory of IT assets that includes existing resources, infrastructure investments, and architecturally significant technology components.
  3. Assess. Prioritize the business capabilities and match them with available IT assets to identify any gaps that EA must ensure are addressed by ongoing IT projects.

In most cases, these first three steps are performed periodically (often on an annual basis). Parameters for EA metrics may include the number of architecturally significant components, services, or infrastructure elements reused or shared across the enterprise. EA value determination is often based on metric parameters such as cost of IT projects to meet specific business goals, savings from the return on existing investments or the reuse of patterns, rationalization or consolidation of IT resources, and cost reduction from technology refreshment or retirement.

The next three steps support value measurement and the leveraging of architecture policies and principles:

  4. Measure. Establish value measures as well as measures of architectural usage. Select metrics that demonstrate the value EA produces for the organization.
  5. Monitor. Create a scorecard or dashboard to capture the information related to EA use and application, such as reuse of technology standards, architectural patterns, best practices, common or shared services, or architectural frameworks adopted across the enterprise.
  6. Manage. Evaluate measures to ensure continued relevance. Perform an annual review of selected metrics to ensure continued alignment with changes in the organization.

These last three steps play a vital role in assessing the effectiveness of EA as well as the maturity of the overall EA. These steps are performed on an ongoing basis and are closely interconnected. EA metrics in these steps ensure that value delivery is occurring and will assist the organization in delivering the targeted business value. EA staff should conduct reviews on a regular basis to determine how EA provides value.
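The Measure/Monitor/Manage cycle can be sketched as a simple scorecard plus an annual review that prunes metrics no one still maintains. All names and figures below are hypothetical:

```python
# Minimal scorecard sketch for the Measure/Monitor/Manage steps.
# Metric names and values are illustrative, not a standard.
scorecard = {
    "patterns_reused": 14,
    "shared_services_adopted": 6,
    "standards_waivers_granted": 2,
}

# Manage: track when each metric was last refreshed so the annual
# review can drop metrics the organization no longer maintains.
last_updated_year = {
    "patterns_reused": 2015,
    "shared_services_adopted": 2015,
    "standards_waivers_granted": 2012,
}

def annual_review(scorecard, last_updated_year, current_year, max_age=1):
    """Keep only metrics refreshed within the last max_age years."""
    return {name: value for name, value in scorecard.items()
            if current_year - last_updated_year[name] <= max_age}

print(annual_review(scorecard, last_updated_year, 2015))
# drops "standards_waivers_granted", which has gone stale
```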

There are many things to consider when developing an EA value measurement program. Figure 2 highlights the top seven pitfalls typically encountered.

Figure 2 -- Seven pitfalls of EA value metrics.


One of the most difficult parts of developing a value measurement program for EA is selecting and developing well-thought-out metrics. Figure 3 highlights five attributes of an effective metric.

Figure 3 -- Five attributes of an effective metric.


There are hundreds of possible metrics for adoption that measure different aspects of the EA function, process, and use. EA teams should adopt no more than eight to 10 metrics initially.

Approaches for Calculating EA Value

Table 1 identifies common financial metrics for measuring EA value. The two most common financial metrics for measuring the value of EA are:11

  1. Return on investment (ROI) -- a performance measure to evaluate the efficiency of EA investment

  2. Benefit-to-cost ratio (B/CR) -- ratio of EA benefits to EA costs
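Under the conventional definitions (ROI as net benefit relative to cost; B/CR as gross benefits relative to cost), the two metrics can be computed as follows. The dollar figures are hypothetical:

```python
def roi(benefits, costs):
    """Return on investment: net benefit relative to cost."""
    return (benefits - costs) / costs

def bcr(benefits, costs):
    """Benefit-to-cost ratio: gross benefits relative to cost."""
    return benefits / costs

# Hypothetical example: $1.5M of quantified EA benefits
# against $1.0M of EA program costs.
print(f"ROI:  {roi(1_500_000, 1_000_000):.0%}")   # 50%
print(f"B/CR: {bcr(1_500_000, 1_000_000):.2f}")   # 1.50
```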


Table 1 -- Common Financial Metrics
ROI and Benefits Analysis

Of the two metrics listed above, ROI is by far the more commonly used. However, for ROI to be meaningful for EA, it should be calculated using the following five steps over a multiyear period:

  1. Select the expected benefits from the benefits framework.

  2. Identify the applicable cost scenario.

  3. Calculate the initial, simple return.

  4. Assess and select the benefits and cost scenario for the second and subsequent years.

  5. Calculate the returns for the second and subsequent years.

Figure 4 provides a sample EA benefits framework. (Note: this is to be modified to meet the needs of a particular organization; not all benefits listed below will apply to a given organization.) Some benefits are easily quantified and others are more qualitative in nature. For qualitative benefits, quantitative estimates are calculated and included in the benefits framework.

Figure 4 -- Sample EA benefits framework.


The challenge with the benefits framework is determining which quantitative and qualitative measures are appropriate for a given organization. The qualitative measures are particularly challenging because they require a certain amount of estimation in order to assign quantitative values to qualitative factors -- a difficulty that is amplified in government. In the government context, some quantitative and qualitative benefits that are sometimes considered for the value framework include:

  • Direct user (customer) value. These are benefits directly realized by users or multiple user groups. Users or customers will vary based on the type of initiative assessed. Users may include, but are not limited to, government employees, other government organizations, and citizens. Quantitative and qualitative values might be used.

  • Nondirect user/public value. These are benefits not related to direct users (e.g., society as a whole). An example might be a well-functioning court system that provides value to society. This is typically a qualitative measure that is difficult to accurately estimate, and it may be determined that this value measure is not appropriate for use in the value framework.

  • Government operational/foundational value. These are order-of-magnitude improvements realized in current government operations and processes and in laying the groundwork for future initiatives. Quantitative and qualitative values might be used.

  • Government financial value. This is the financial benefit (e.g., cost savings, cost avoidance) realized by the government, including financial benefits received by the managing or sponsor agency as well as other federal agencies. Quantitative values are typically utilized.

  • Strategic value. These are benefits that move an organization closer to achieving its strategic goals. This is typically a qualitative measure that is difficult to accurately estimate, and it may be determined that this value measure is not appropriate for use in the value framework.

ROI Formula

The formula for calculating ROI is ROI = (benefits - costs)/costs. The benefits are derived from the benefits framework discussed previously; the costs component comprises the total annual costs for the EA effort. The ROI calculation must be done over a multiyear period (three to five years) to be meaningful. EA requires multiyear ROI because, as Figure 5 shows, there is a significant lag time between the EA effort and the measurement of value -- often up to 18-24 months, or longer in some cases.
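A minimal sketch of the multiyear calculation, assuming hypothetical cost and benefit streams in which benefits lag the initial investment:

```python
# Multiyear ROI sketch: benefits lag the EA investment,
# so single-year ROI understates the program's return.
# All figures are hypothetical.
annual_costs    = [1_000_000, 600_000, 600_000]    # years 1-3 of the EA effort
annual_benefits = [  100_000, 900_000, 1_800_000]  # value arrives 18+ months later

def cumulative_roi(benefits, costs, through_year):
    """Cumulative net benefit relative to cumulative cost through a given year."""
    total_benefits = sum(benefits[:through_year])
    total_costs = sum(costs[:through_year])
    return (total_benefits - total_costs) / total_costs

for year in (1, 2, 3):
    r = cumulative_roi(annual_benefits, annual_costs, year)
    print(f"Year {year}: cumulative ROI = {r:+.1%}")
```

With these assumed streams, ROI is deeply negative in year 1 and only turns positive in year 3, which is why a single-year snapshot can make a sound EA program look like a failure.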

Figure 5 -- The value lag of EA initiatives.


Tips for Using ROI

Some good practices for using ROI include the following:

  • Use ROI as a success factor. An organization can use ROI to drive EA. By definition, the goal of EA is to align an organization's strategy with its IT initiatives and projects. The organization cannot realize its strategy without this alignment. An organization can measure the costs and benefits of using EA for this alignment.

  • Etch the desired benefits in stone. It is important to identify a core set of benefits for the organization to realize from EA. The establishment of measurable goals for operating efficiency, cost reductions, staff reductions, customer satisfaction, computing budgets, and economic growth is also important.

  • Operationalize a core set of metrics. The core set of metrics should be a set of clear, measurable, and quantitative economic benefits for EA. Common metrics mistakes include failing to define metrics at all, defining only qualitative ones, or defining far too many.


This section describes common metrics utilized by private and public organizations to measure EA.12 This is not a list of all possible metrics but rather those more commonly utilized. This section provides the reader with an idea of the variety of metrics used in different organizations. The metrics are listed by the following four categories: (1) IT metrics, (2) customer metrics, (3) business/strategy metrics, and (4) compliance metrics. Additional metrics that have been used to measure aspects of EA value can be found in the Appendix.

IT Metrics

Common IT-based metrics used to measure EA value include the following:

  • EA outcomes, including cost savings and cost avoidance due to process efficiency, technology standardization, retirement, and consolidation

  • The extent to which the organization decreases the number of technology products that duplicate existing capabilities and the extent to which it decreases the number of obsolete systems in its IT inventory using EA

  • The percentage of cost reduction associated with adopting enterprise-wide standards

  • The percentage of reuse of architectural components (especially services)

  • The percentage of reuse of common designs

  • The percentage of projects architecturally aligned

  • The percentage of successful projects by the EA team

  • The number of projects that leverage EA repository for future-state designs

  • Cost savings through reuse of software components and standardized purchase agreements

  • IT total cost of ownership (TCO); that is, the overall TCO for technology owned, operated, or supported by the organization

  • The number of projects exempt from the architecture-assurance process

  • The number of patterns and standards developed and utilized

  • The percentage of reduction of repetitive data entry

  • The percentage of increase in sharing of data via Web services

  • The percentage of reduction in application development time

  • The number of application and technology registry hits per month

  • The number of projects that leverage architectural patterns

  • The percentage of reduction of development costs

  • The number of projects that utilize approved designs

  • The number of times an EA product or guidance is downloaded from EA site

  • The number of projects using EA guidance

  • EA used to inform IT investment decisions

  • Total cost savings to investment

  • Total cost avoidance to investment

Customer Metrics

Common customer-focused metrics used to measure the value of EA include the following:

  • Average customer wait time (could be call wait time or in-person wait time)

  • Average handling time and average wrap-up time (could apply to customer service time or sales cycle time)

  • Labor cost per call

  • The number of calls handled per agent per hour

Business/Strategy Metrics

Common business/strategy metrics used to measure the value of EA include the following:

  • The number of to-be architectures defined

  • The percentage of applications used by more than one business/product mix offering

  • The number of local innovations that led to common shared solutions

  • The number of business processes (or business capabilities) mapped

  • The number of business processes (or business capabilities) reused

  • The number of business capabilities mapped to business strategies

Compliance Metrics

Common compliance metrics used to measure the value of EA include the following:

  • The number of designs/projects that are 100% compliant with EA standards

  • Of those designs/projects that are not 100% compliant, the number that achieve compliance at or above a given threshold

  • The number of new regulations implemented within the permitted time frame

  • The number of overdue regulatory filings


Value measurement is consistently cited as a top area of importance in EA, but it is also consistently cited as one of the areas of lowest maturity. A value proposition is a promise of value delivery and a belief from the customer that he or she will experience that value. A value proposition can apply to an entire organization, parts thereof, customer accounts, products, or services. Creating a value proposition is a part of business/organizational strategy. Robert Kaplan and David Norton state that "strategy is based on a differentiated customer value proposition. Satisfying customers is the source of sustainable value creation."13 However, as Figure 6 illustrates, there are many levels of metrics that have been developed to measure value for different parts and layers of the organization.

Figure 6 -- Many value metrics for different purposes and organizational layers.


The key to a successful value measurement program is to identify metrics that correlate to business key performance indicators (KPIs). While this makes sense to most people, correctly identifying a reasonable number of high-value metrics that are meaningful to the greatest number of areas of the organization is a very difficult endeavor.

As seen in the examples in the Appendix, the problem is not a lack of metrics; it is knowing which ones make sense for your organization and will provide the most "value" for the effort required. According to Michael Porter, there are three components of stakeholder value:14

  1. Economic value. The financial impact of the investment. What will be your ROI? What will it do for the company financially?

  2. Business value. The primary consideration for the user of the solution. What can the investment do for the organization, and how can it increase productivity, efficiencies, and effectiveness for those who use the solution?

  3. Personal value. This is more of an intangible value that answers the following questions: What will the decision do for me personally? Will I get a promotion, a bonus, peace of mind, or improve my influence with other organizations?

There are many metrics that measure aspects of economic and business value. Personal value, while an important consideration, is not easily quantifiable.

As Figure 7 implies, EA can be an effective linchpin to bridge strategy to execution. In addition, organizations can use EA to analyze and address strategy execution priorities if those priorities are understood, which includes understanding how those strategic priorities are valued and measured.

Figure 7 -- EA as the bridge between strategy and execution.


There are many stakeholders and value propositions to consider, and it is not feasible to align the EA value measurement program to each and every one. So how do we select those stakeholders and value propositions with which to align our EA efforts?

There is a process for deriving long-term EA value metrics that are aligned with the value drivers of the organization. This process provides guidance on where to focus value measurement efforts for EA. The process consists of the following six steps:

  1. Stakeholder analysis/value mapping. This determines what value measures are the most important and most frequently cited. This process answers questions such as:
    • Who are our stakeholders and which ones are key stakeholders?

    • What do the key stakeholders value?

    • What value measures do we use?

  2. Business capability analysis/value mapping. This helps in the understanding, categorizing, and prioritizing of business capabilities and then determining what value measures are needed for identified high-value business capabilities.
  3. Stakeholder and business capability value measures mapping and analysis. This process determines how much of an intersection exists between identified stakeholder value measures and core business capability value measures.
  4. Metrics selection. This process helps select those metrics that are of importance to key stakeholders and core business capabilities.
  5. Performance improvement considerations. This process ensures that the EA value measurement set continually aligns with changes in the composition and value sets of the key stakeholders and core business capabilities.
  6. Communications considerations. This process ensures that effective EA value communications plans are developed for different key stakeholder groups.
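Steps 3 and 4 above reduce to a set intersection followed by a frequency-based selection. The sketch below uses invented stakeholder names and value measures to illustrate the mechanics:

```python
from collections import Counter

# Hypothetical value measures cited by key stakeholders (step 1 output)
# and by core business capabilities (step 2 output).
stakeholder_measures = {
    "CIO": {"IT total cost of ownership", "project alignment", "reuse of services"},
    "CFO": {"IT total cost of ownership", "cost avoidance"},
    "COO": {"process cycle time", "cost avoidance"},
}
capability_measures = {
    "Order fulfillment":   {"process cycle time", "cost avoidance"},
    "IT service delivery": {"IT total cost of ownership", "reuse of services"},
}

# Step 3: the intersection identifies measures that matter to both
# key stakeholders and core business capabilities.
all_stakeholder = set().union(*stakeholder_measures.values())
all_capability = set().union(*capability_measures.values())
candidates = all_stakeholder & all_capability

# Step 4: prefer candidates cited by the most stakeholders.
citations = Counter(m for ms in stakeholder_measures.values() for m in ms)
selected = sorted(candidates, key=lambda m: -citations[m])
print(selected)  # most widely cited measures first
```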

Let's explore these processes in greater detail.

Stakeholder Analysis/Value Mapping

First, the EA team must define and identify the stakeholders. A stakeholder is anyone who can affect or is affected by an organization, strategy, or project. Mindmapping is a useful way of visually identifying stakeholders. There are many methods of stakeholder analysis, and any method can be utilized as long as the method analyzes two major characteristics of stakeholders:

  1. Interest/impact of stakeholders

  2. Influence/power of stakeholders

Once the interests of the stakeholders have been mapped, they can then be prioritized in order of importance. Different methodologies suggest different ways of analyzing stakeholders -- some complex and some very simple. A common approach is to map the interest and power or influence of each stakeholder group on a four-quadrant graph.
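A minimal sketch of the four-quadrant mapping, assuming interest and influence are scored on a simple 1-10 scale (the scale, threshold, and quadrant labels are assumptions, not a standard):

```python
# Hypothetical stakeholder scores on a 1-10 scale.
stakeholders = {
    "CIO":                {"interest": 9, "influence": 9},
    "Business unit lead": {"interest": 8, "influence": 4},
    "PMO":                {"interest": 4, "influence": 8},
    "External auditor":   {"interest": 2, "influence": 3},
}

def quadrant(interest, influence, threshold=5):
    """Classify a stakeholder into one of the four quadrants."""
    if interest >= threshold and influence >= threshold:
        return "key stakeholder: manage closely"
    if influence >= threshold:
        return "keep satisfied"
    if interest >= threshold:
        return "keep informed"
    return "monitor"

for name, s in stakeholders.items():
    print(f"{name}: {quadrant(s['interest'], s['influence'])}")
```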

Key stakeholders are identified as those stakeholders who have high interest/impact on the EA program and who also have high influence/power in the organization. This process identifies which of the many stakeholders in a given organization are key stakeholders. Value measurement efforts should focus on the value measures that are important to the key stakeholders.
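As a rough sketch, this quadrant classification can be automated once interest and influence have been scored. The stakeholder names, the 1-5 scores, and the quadrant labels below are hypothetical illustrations, not prescriptions:

```python
# Classify stakeholders into the four quadrants of an interest/influence grid.
# Names, scores (1-5), and quadrant labels are hypothetical examples.

def quadrant(interest, influence, threshold=3):
    """Return the quadrant label for a stakeholder's scores."""
    if interest >= threshold and influence >= threshold:
        return "key stakeholder (high interest, high influence)"
    if interest >= threshold:
        return "keep informed (high interest, low influence)"
    if influence >= threshold:
        return "keep satisfied (low interest, high influence)"
    return "monitor (low interest, low influence)"

stakeholders = {
    "CIO": (5, 5),
    "Business unit leads": (4, 3),
    "Help desk": (4, 2),
    "External auditors": (2, 4),
}

# Key stakeholders are those in the high-interest/high-influence quadrant.
key_stakeholders = [
    name for name, (interest, influence) in stakeholders.items()
    if quadrant(interest, influence).startswith("key")
]
print(key_stakeholders)  # ['CIO', 'Business unit leads']
```

The threshold is a judgment call; a team using a 1-10 scale would simply move the cutoff to the midpoint of that scale.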

The process then analyzes the economic and business values of the identified key stakeholders and determines which of these value measures EA can most directly impact. Table 2 lists some examples of key stakeholders and their financial, economic, and personal value considerations. Quantitative metrics, such as ROI, are used for the financial and economic value considerations. Personal value considerations are typically nonquantifiable but should still be tracked to provide an overall picture of the value considerations for each key stakeholder.


Table 2
Table 2 -- Key Stakeholder Value Analysis


Next, list each financial and economic value consideration and how often each occurs across the different key stakeholders. A frequency diagram similar to Table 3 is produced for the financial and economic value considerations. A frequency diagram could also be produced for the personal value considerations to determine which personal/political value considerations occur most frequently across the different key stakeholders. The frequency is a count of the number of times each consideration is cited in the analysis.


Table 3
Table 3 -- Frequency of Key Stakeholder Value Considerations for Financial Values
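The tally behind a frequency diagram such as Table 3 amounts to a simple count. The stakeholder names and value considerations below are hypothetical placeholders, not data from the report:

```python
from collections import Counter

# Hypothetical value considerations cited by each key stakeholder during analysis.
stakeholder_values = {
    "CIO": ["ROI", "cost reduction", "time to market"],
    "CFO": ["ROI", "cost reduction"],
    "Business unit lead": ["time to market", "customer satisfaction", "ROI"],
}

# Count how often each consideration is cited across the key stakeholders.
frequency = Counter(
    value for values in stakeholder_values.values() for value in values
)

for value, count in frequency.most_common():
    print(f"{value}: {count}")
# ROI: 3
# cost reduction: 2
# time to market: 2
# customer satisfaction: 1
```

The same tally, run over the personal/political considerations, yields the companion frequency diagram described above.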


Business Capability Analysis/Value Mapping

Next, identify key business capabilities by conducting a capability mapping exercise to understand, categorize, and prioritize business capabilities. The goal in this step is to identify the economic and business values associated with each high-value capability and identify which of these values EA can most directly impact.

A business capability defines the organization's capacity to successfully perform a unique business activity. Capabilities do the following:

  • Function as the building blocks of the business.

  • Represent stable business functions.

  • Are unique and independent from each other.

  • Are abstracted from the organizational model.

  • Capture the business's interests.

Figure 8 illustrates how capabilities comprise processes that in turn produce a service or product, using an example of producing a research paper. The research-related capabilities comprise the process of research paper production, which in turn produces the product: a research report.

Figure 8
Figure 8 -- The capability/process relationship.


Another way to think about Figure 8 is that processes implement capabilities. An understanding of core capabilities and the processes that implement those capabilities is very important for all organizations.

A business capability map is a conceptual model of all of the organization's capabilities along with the details (i.e., people, process, and technology) that enable them. In many capability-mapping exercises, capabilities are divided into categories to better understand the nature and importance of a capability to the organization. Typical capability categories and their functions are:

  • Advantage capabilities that directly contribute to the customer value proposition and have a high impact on company financials. Companies keep these capabilities in-house to protect intellectual property. Improved methods for data analysis would be a government-related example of an advantage capability. By performing faster and/or more accurate data analysis, a government agency may be able to serve its constituents better, cheaper, or faster and may develop this into an advantage capability offered to other government agencies.

  • Strategic support capabilities that have high contribution in direct support of advantage capabilities. These are the operational capabilities that the organization performs on a daily basis. Examples of strategic support capabilities include processing cases and handling inquiries from constituents.

  • Essential capabilities that may not be visible to the customer but contribute to the organization's business focus and have a big impact on the bottom line. In this area, the focus is on efficiency improvement, especially in high-volume work. Essential capabilities are needed to operate the business, but are also often candidates for outsourcing. Examples might include some accounting functions, HR functions, and IT functions. Figure 9 illustrates a capability map with the capabilities color-coded by category.

Figure 9
Figure 9 -- Sample capability map with categories.


Once business capabilities are categorized, they are then prioritized by their value to the business and by the complexity of assessing that value. Value to the business is determined by having the leadership of the organization rank the relative value of the identified capabilities. Complexity is determined by assessing the amount, type, and location of the data needed to assess the value of a given capability; a 1-3 or 1-5 scale could be used. For example, if the data needed to assess the value of a capability resides in three different departments of the organization, and two of these departments are difficult to access on a regular basis, then this capability would receive a higher complexity rating. These ratings are somewhat subjective and should be done by a team of three to five people from the EA organization in order to achieve consensus.

Figure 10 shows a sample four-quadrant graph. The graph plots the value of each capability to the organization against the complexity of calculating/assessing the value measures for that capability. In this case, it is recommended that an EA value measurement initiative begin by exploring the capabilities in the upper-left quadrant -- those that have high value to the organization and relatively easy value assessments. As the EA value program matures, capabilities in the upper-right quadrant can also be explored.
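A minimal sketch of this prioritization, assuming the ratings have already been agreed on by the EA team; the capability names and 1-5 scores below are hypothetical:

```python
# Prioritize capabilities by business value (leadership ranking, 1-5) and by
# the complexity of assessing that value (1-5). Names and scores are hypothetical.
capabilities = {
    "Case processing": {"value": 5, "complexity": 2},
    "Data analysis": {"value": 5, "complexity": 4},
    "HR administration": {"value": 2, "complexity": 1},
    "Inquiry handling": {"value": 4, "complexity": 2},
}

def start_here(cap, value_threshold=4, complexity_threshold=3):
    """Upper-left quadrant: high value, relatively easy value assessment."""
    return cap["value"] >= value_threshold and cap["complexity"] < complexity_threshold

# Where a new EA value measurement initiative should begin.
initial_focus = [name for name, cap in capabilities.items() if start_here(cap)]
print(initial_focus)  # ['Case processing', 'Inquiry handling']
```

The thresholds simply mark the quadrant boundaries; adjust them to match whichever rating scale the team adopts.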

Figure 10
Figure 10 -- Sample capability quadrant graph.


Next, key capabilities with high business value and reasonable value complexity should be assessed for their financial, economic, and personal value considerations. Table 4 lists some sample key business capabilities and their financial, economic, and personal value considerations. As with key stakeholder value analysis (see Table 2), key business capability value analysis includes metrics, such as ROI, for financial and economic value considerations. Personal value considerations are typically nonquantifiable but should still be tracked to provide an overall picture of the value considerations for each key capability.


Table 4
Table 4 -- Key Business Capability Value Analysis

Next, just as with the key stakeholder value analysis, list each financial and economic value consideration and how often each occurs across the different key capabilities. A frequency diagram should be produced for the financial and economic value considerations. A frequency diagram could also be produced for the personal value considerations to determine which personal/political value considerations occur most frequently across the different key capabilities.

Stakeholder and Business Capability Value Measures Mapping and Analysis

Next, the identified key stakeholder value metrics in Table 2 are compared with the identified key business capability value metrics in Table 4 (above). Those metrics that appear in both areas are the business value metrics that have the greatest potential for impact on the largest portion of the organization. In most organizations, there will be considerable overlap between the value measures of the key stakeholders and the value measures of the key capabilities.
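Once both metric lists are in hand, the overlap analysis reduces to a set intersection, with the frequency counts then used to rank the overlapping metrics. The metric names and citation counts below are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical value metrics identified for the key stakeholders (per the
# Table 2 analysis) and for the key business capabilities (per Table 4).
stakeholder_metrics = {"ROI", "cost reduction", "time to market", "staff satisfaction"}
capability_metrics = {"ROI", "cost reduction", "time to market", "process cycle time"}

# Metrics appearing in both sets have the greatest potential impact.
high_impact = stakeholder_metrics & capability_metrics

# If more than 10 metrics qualify, rank by citation frequency and keep the top 10.
citation_counts = Counter({"ROI": 9, "cost reduction": 6, "time to market": 4})
selected = [m for m, _ in citation_counts.most_common(10) if m in high_impact]
print(selected)  # ['ROI', 'cost reduction', 'time to market']
```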

Metrics Selection

If more than 10 metrics fall into the area of high-impact metrics, then a discussion is needed to determine which 10 metrics EA can most positively impact and thus which to include in the initial value measurement program (see Figure 11). Those metrics that occur most frequently have the greatest potential to impact the organization. If the EA organization can demonstrate an ability to positively impact these metrics, its value to the organization will be apparent.

Figure 11
Figure 11 -- Area of high-impact EA value measures.


Metrics Governance

The metrics review process includes representatives from key stakeholder groups as well as representatives from the core business capabilities. These groups are included to ensure that the set of selected metrics is in alignment with any changes in the values of the key stakeholders and core business capabilities. It is important to periodically review metrics for their continued relevance. As Figure 12 suggests, the metrics review process annually evaluates the selected value measures.

Figure 12
Figure 12 -- Metrics governance process.


Performance Improvement Considerations

The metrics identification process should be performed to examine any changes in the key stakeholders or key business capabilities of the organization, and the metrics selected for EA value measurement should be updated accordingly. This process ensures that time is not wasted in calculating and tracking metrics no longer valued by the organization, and that the EA value metrics set reflects changes in the composition and value sets of the key stakeholders and key business capabilities.

Communications Considerations

Enterprise architects often confuse deliverables with value. Successful enterprise architects focus on the value received and communicate in terms of value to the stakeholder. The recommended process includes the development of a communications strategy/plan for each of the key stakeholders and owners of core business capabilities -- one size does not fit all. This communications plan should be updated annually to reflect any changes to the target audiences. It is important to present information in the manner most appropriate to the data represented and in the presentation format the audience is used to. It is also important to adjust reporting frequency, presentation format, and level of detail to the needs of different stakeholder groups. The underlying data should remain consistent, but the presentation format doesn't have to be the same.

Communicating Value and Performance

Effective communication of EA value is a critical (and often overlooked) component of an effective EA value measurement program. There are many cases of EA teams that were producing great value for their organizations but, due to poor communications, key stakeholder groups did not understand or appreciate the value created by the EA function. In most organizations, perception is reality, and without effective communications, incorrect perceptions are often formed. Figure 13 illustrates the need to know your key stakeholders and what form of data representation and communication is most effective for each target group.

Figure 13
Figure 13 -- Matching media to audience type.


Metrics Aggregation

Aggregating metrics of different kinds is a common value measurement challenge. The most common aggregation problem is the use of nonquantitative (or qualitative) metrics alongside quantitative ones. Quantitative metrics are the numbers that are readily measured and are often financial in nature; they are what allow an organization to crunch the numbers and determine ROI.

While quantitative metrics focus on the hard numbers, qualitative metrics are based on the quality of responses derived from documents, interviews, and surveys. Qualitative metrics are often "fuzzier" and harder to quantify and, as a result, are often not used in EA value measurement. However, some of the biggest impacts made by EA initiatives are in qualitative areas.

Deriving Quantitative Measures and Results from Qualitative Data

Qualitative research is important because it generates data that can provide in-depth insight into a question or topic. However, in order to draw conclusions from qualitative data, it is essential to quantify the data. Qualitative researchers may criticize the quantification of qualitative data, suggesting that such an inversion sublimates the very qualities that make qualitative data distinctive: narrative layering and textual meaning. But assessment (and the policy implications that flow from it) demands that the data be presented within a scientific construct. In addition, until we know more about how, why, to what degree, and under what circumstances certain types of qualitative research can usefully or reliably be quantified, it is unlikely that program planners or policy makers will base decisions on studies generally regarded as "qualitative."

Therefore, it is important to quantify the data obtained from qualitative research. Quantitative analysis of qualitative data "involves turning the data from words or images into numbers"; this can be done by coding ethnographic or other data and looking for emerging patterns.15 If qualitative data is in the form of responses to standardized questionnaire surveys, this data may also be quantified.

Simple frequencies and relationships between variables can be calculated either manually or by using qualitative software, such as EZ-Text. For example, as shown below, a researcher studying smoking habits utilized a frequency table to describe the smoking that occurred in specific contexts. The definitions of these "contexts" were derived from interview data generated from in-depth interviews with youth.

There are three main steps to conducting quantitative analysis of qualitative data: organizing the data, reading and coding it, and presenting and interpreting it.

1. Organizing Data

First, the value measurement professional should organize the data. The data can be organized in groups that relate to particular areas of interest. For example, a study on tobacco farmers might group data into the following sections: history of tobacco farming, other crops grown, role of women in tobacco farming, reasons for tobacco farming, and environmental consequences of tobacco farming.

An example related to EA might entail a study on how enterprise architecture assists decision makers with understanding, managing, and improving operations. Data might be grouped into the following sections: types of decision makers, issues associated with understanding operations for each decision maker, issues associated with managing operations for each decision maker, issues associated with improving operations for each decision maker, the impact of EA on understanding operations for each decision maker, the impact of EA on managing operations for each decision maker, and the impact of EA on improving operations for each decision maker.

2. Reading and Coding Data

The next step is to read all the data carefully and construct a category system that allows all the data to be categorized systematically. The categories should be internally homogeneous and externally heterogeneous. Everything in one category must hold together in some meaningful way, and the differences between categories need to be bold and clear. If there is a lot of data that does not fit into the category system, it usually means that there is a flaw that requires the system to be reorganized. To convert qualitative data to quantitative data for good metrics, there must be a label for all data, and every attempt should be made so that each segment fits in only one category. Lastly, the classification system should be meaningful and relevant to the study. Once a system is created for organizing the data, each category should be assigned a number, and then transcriptions of interviews or survey results can be coded (see sidebar).


First, assign each category a number; for example:

1. The smoking period
1.1 When started and stopped
1.2 Reasons for starting smoking and continuing to smoke
2. Circumstances in which smoking takes place
3. Influences on smoking behavior
3.1 Home environment
3.2 Peers
3.3 Work environment
4. Reasons for stopping
4.1 Health
4.2 Cultural or religious
4.3 Financial
4.4 "Significant other" pressure
4.5 Consideration for others
5. Ways to stop
5.1 Based on experience of the respondent
5.2 Based on opinion of the respondent


Next, assign the categories to the interview data; for example:

Interviewer (I): How did you start smoking?

Subject (S): I started smoking out of peer pressure (1.2). Before, my father used to smoke, but he stopped because he was scolded by my mom (4.4). In school, my classmates are smoking so they invited me to join them; I tried it out of curiosity, for the taste (1.2).

I: Why do you smoke?

S: Every time you drink wine or beer, it's nice to smoke. After eating you feel like a smoke (2.0). It's very nice to smoke. The aroma is nice (1.2).

I: Do you smoke at specific times?

S: Walking together with a group or when I am out drinking (2.0). When I'm studying and the times I have to concentrate, I smoke. I'm usually alone in my room (2.0). But I realize that it doesn't make sense because it dries my brain and it lessens my memory, my comprehension, and I'm beginning to lose weight. I don't want to eat anymore (4.1).

I: What would help people to stop smoking?

S: Once you smoke, you have to be aware that there are passive smokers who are mostly affected (4.5).

I: So people would stop more if they thought about the consequences to others?

S: Yes, but also there is the cost. You can put the money instead of buying cigars into something worthwhile. If you're with a girl, why don't you put it into something fun for the future (4.3)?

I: How do you get people to think about quitting?

S: Increase the tax. Discourage buyers from buying cigarettes. And have a campaign of anti-smoking so people are really discouraged to smoke. And tell your kids. Once you feel an obligation to the children, you stop smoking because you are conscious that it will affect their lungs. And then for the smokers, tell them to consider your "stick" as a nail to your coffin. Bring them to hospitals and let them see videos or actual operations with those who have lung cancer in order to get scared of the effect of smoking (5.2).

I: Do you think that creating a challenge for people is a way to encourage people to quit smoking? What would you say to them?

S: You tell them this is how I started. I started smoking just like you. You'd tell them, I was able to overcome the habit and I know you can do it, too (5.1). When you're smoking you're dirtying your temple (4.2).

The most common way to present qualitative data is by using a frequency analysis and reporting the frequency of coded items as shown in Table 5.


Table 5
Table 5 -- Sample Frequency Report
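A frequency report like Table 5 can be generated mechanically once the transcript has been coded. The fragment below reuses codes from the interview example above; the regular expression is one possible way to extract the parenthesized codes:

```python
import re
from collections import Counter

# A fragment of coded transcript; codes such as (1.2) follow each coded segment.
coded_text = """
I started smoking out of peer pressure (1.2). My father stopped because he was
scolded by my mom (4.4). I tried it out of curiosity, for the taste (1.2).
After eating you feel like a smoke (2.0).
"""

# Extract codes of the form (n) or (n.m) and tally them.
codes = re.findall(r"\((\d+(?:\.\d+)?)\)", coded_text)
frequency = Counter(codes)

for code, count in sorted(frequency.items()):
    print(f"{code}: {count}")
# 1.2: 2
# 2.0: 1
# 4.4: 1
```

For small studies this can be done by hand; the point is that once every segment carries exactly one code, the frequency report falls out of a simple count.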

The following are two examples of possible categories and coding systems for an EA initiative:

• Example 1: Outcome of improving collaboration and cooperation across the enterprise. Here is a possible coding system for types of collaboration to be improved:
  1. Collaboration between management

  2. Collaboration between IT staff

  3. Collaboration between support staff

  4. Other collaboration types

As an interviewer reviews the interview transcripts, each reference to improvements in these types of collaboration is recorded with the corresponding code: 1 for improvements to collaboration between management, 2 for improvements to collaboration between IT staff, 3 for improvements to collaboration between support staff, and 4 for other types of collaboration improvement.

(Note: If many codes of 4 are found, the categories of collaboration should be reviewed for whether additional categories should be added to the coding system.)

After coding, a frequency analysis can be performed to determine which type of collaboration was referred to the most and to establish a rank order for the codes. Other statistics are possible, but often the frequency analysis is sufficient to establish a pattern of improvement.
• Example 2: Outcome of improving decision making and analysis based on information captured in the EA body of knowledge. This goal should be broken down into two subgoals: (A) use of the EA body of knowledge and (B) use of EA to improve decision making. For subgoal A, here is a possible coding system for types of uses of the EA body of knowledge:
  1. Software architecture

  2. Data architecture

  3. Technical architecture

  4. Security architecture

  5. Business architecture

  6. Solution architecture

  7. Other uses of EA body of knowledge

As the interviewer reviews the interview transcripts, each reference to a use of the EA body of knowledge is recorded with a code of 1 for software architecture, a code of 2 for data architecture, a code of 3 for technical architecture, a code of 4 for security architecture, a code of 5 for business architecture, a code of 6 for solution architecture, and a code of 7 for other uses of the EA body of knowledge.

(Note: If many codes of 7 are found, the categories of uses for the EA body of knowledge should be reviewed to determine whether additional categories should be added to the coding system.)

After coding, a frequency analysis can be performed to determine which use of the EA body of knowledge was referred to the most and to establish a rank order for the codes. Other statistics are possible but often the frequency analysis is sufficient to establish a pattern of improvement.

For subgoal B, here is a possible coding system for uses of EA to improve decision making:

1. Improved IT analysis and decision making

  • 1.1 Major software systems selection

  • 1.2 Data consolidation

  • 1.3 Infrastructure planning

  • 1.4 Other improved IT analysis and decision making

2. Improved business architecture analysis and decision making

  • 2.1 Capability planning

  • 2.2 Business process planning

  • 2.3 Linking tactical projects with strategic planning

  • 2.4 Other improved business architecture analysis and decision making

3. Improved strategy analysis and decision making

  • 3.1 Staffing

  • 3.2 Capacity planning

  • 3.3 Office consolidation

  • 3.4 Decision making based on project portfolio management

  • 3.5 Other improved strategy analysis and decision making

As the interviewer reviews the interview transcripts, each reference to a use of EA to improve IT analysis and decision making is recorded with a code of 1.1 for improved IT analysis and decision making for major software systems, a code of 1.2 for improved IT analysis and decision making for data consolidation, and so on.

(Note: If many codes in the "other" option are found, the need for additional categories should be reviewed.)

After coding, a frequency analysis can be performed to determine which codes were referred to the most and to establish a rank order for the codes. Other statistics are possible but often the frequency analysis is sufficient to establish a pattern of improvement.

While more advanced qualitative data analysis techniques exist, those new to qualitative data analysis should initially focus on the methods described above; the more advanced techniques require more formal education in qualitative research than this report can provide.
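The coding and rank-order step shared by both examples can be sketched as follows. The category labels follow Example 1; the sequence of recorded codes is a hypothetical result of reviewing the transcripts:

```python
from collections import Counter

# Hypothetical codes recorded while reviewing interview transcripts:
# 1 = management, 2 = IT staff, 3 = support staff, 4 = other collaboration.
recorded_codes = [1, 2, 2, 1, 3, 2, 4, 2, 1]

labels = {1: "management", 2: "IT staff", 3: "support staff", 4: "other"}
frequency = Counter(recorded_codes)

# Rank order by frequency: which collaboration improvement is cited most?
for rank, (code, count) in enumerate(frequency.most_common(), start=1):
    print(f"{rank}. {labels[code]}: {count}")
# 1. IT staff: 4
# 2. management: 3
# 3. support staff: 1
# 4. other: 1
```

A large share of counts in the "other" category would signal, as noted above, that the coding system needs additional categories.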

3. Presenting and Interpreting Data

After coding the data, the data should be displayed and organized so that it can be interpreted. Often, simple matrices or charts can be used to compile interview data so that patterns can be determined among respondents. Causal network diagrams and flowcharts are also often helpful for assessing the cause-and-effect relationships that appear in the data. To analyze the data, the use of a computer-assisted qualitative data-analysis program is suggested. Such programs link codes with text in order to perform complex model building and help with data management.

For example, EZ-Text is a useful software program when working with responses to open-ended questions in a standardized survey to quantify the results of an analysis, indicating the frequency of particular responses to each question. Many other software programs are available, and the options depend on the exact nature of the research and the size of the database. The coding and analysis of data in qualitative research is done differently for each study and depends on the research design as well as the researcher's skill and experience. Regardless of the study, it is always essential to clearly document how the data was coded and interpreted, and it is important to quantify it in order to draw conclusions.

Table 6 builds on the smoking example and illustrates how to organize and display data so that it is easier to interpret. Simple matrices may be used to compile interview data so that patterns among different types of respondents can be detected. This is an example of presenting descriptive data (what the data says).


Table 6
Table 6 -- Example of Data Organization and Display


Causal network diagrams can also be constructed to help make sense of the cause-and-effect relationships that appear in the data. For example, the causal network in Figure 14 illustrates how linkages between smoking behavior and smoking outcomes might occur at different stages of a man's life. This is an example of presenting interpretive data (what the data means).

Figure 14
Figure 14 -- Sample causal network diagram.



This Executive Report described a process for deriving long-term EA value metrics -- including deriving quantitative measures and results from qualitative data -- that are aligned with the value drivers of the organization. This process ensures that an organization will select the optimal set of metrics -- those with the greatest likelihood of demonstrating the value that EA has produced for the organization. This process also helps select metrics that align with the value measures utilized by the core capabilities of the organization as well as those utilized by key stakeholders. Aligning the EA value measurement program with these measures allows the EA team to show directly how it positively impacts measures that matter to the rest of the organization. Once this task is accomplished, the value of EA to the enterprise will be not only understood but assured.



1 "A Common Perspective on Enterprise Architecture." The Federation of Enterprise Architecture Professional Organizations, 2013.

2 Sousa, Pedro, Carla Marques Pereira, and Jose Alves Marques. "Enterprise Architecture: Business and IT Alignment." Proceedings of the 20th Annual ACM Symposium on Applied Computing, Santa Fe, New Mexico, USA, 13-17 March 2005.

3 Harrington, H. James. Business Process Improvement: The Breakthrough Strategy for Total Quality, Productivity, and Competitiveness. McGraw-Hill, 1991.

4 Kaplan, Robert S., and David P. Norton. The Balanced Scorecard: Translating Strategy into Action. Harvard Business Review Press, 1996.

5 Lehmann, Donald R. "Metrics for Making Marketing Matter." The Journal of Marketing, Vol. 68, No. 4, 2004, pp. 73-75.

6 Gupta, Sunil, and Valerie Zeithaml. "Customer Metrics and Their Impact on Financial Performance." Marketing Science, Vol. 25, No. 6, 2006.

7 Tian, Chunhua, RongZeng Cao, Wei Ding, Hao Zhang, and Juhnyoung Lee. "Business Value Analysis of IT Services." Proceedings of the IEEE International Conference on Services Computing, IEEE, 2007.

8 Schelp, Joachim, and Matthias Stutz. "A Balanced Scorecard Approach to Measure the Value of Enterprise Architecture." Proceedings of Trends in Enterprise Architecture Research, St. Gallen, Switzerland, June 2007.

9 Dietzsch, Andreas, Christian Kluge, and Michael Rosemann. "How to Realise Corporate Value from Enterprise Architecture." Proceedings of the 14th European Conference on Information Systems, Goteborg, Sweden, 12-14 June 2006.

10 Johnson, Pontus, Erik Johansson, Teodor Sommestad, and Johan Ullberg. "A Tool for Enterprise Architecture Analysis." Proceedings of the 11th IEEE International Enterprise Distributed Object Computing Conference. IEEE, 2007.

11 Cameron, Brian, and Eric McMillan. "Enterprise Architecture Valuation and Metrics: A Survey-Based Research Study." Journal of Enterprise Architecture, Vol. 9, No. 1, 2013, pp. 39-59.

12 Cameron and McMillan (see 11).

13 Kaplan and Norton (see 4).

14 Porter, Michael E. "The Five Competitive Forces That Shape Strategy." Harvard Business Review, January 2008, pp. 79-93.

15 "How to Quantify Qualitative Data." Unite for Sight, 27 December 2014.

About The Author
Brian H. Cameron is a Senior Consultant with Cutter Consortium and a member of Arthur D. Little's AMP open consulting network. He is the Associate Dean for Professional Graduate Programs and Clinical Professor of Information Systems in the Smeal College of Business at the Pennsylvania State University. Dr. Cameron is also Founding Director of the Center for Enterprise Architecture in the College of Information Sciences and Technology at Penn…