
The economics of quality

Posted by malcolmdavis on March 10, 2006 at 8:54 PM PST

The economics of defects:

There is a cost to software that goes beyond the purchase price: the cost of installation, configuration, training, and defects. Determining the impact of defects on that cost is the focus of this writing.

Defect cost:

The following is the formula to determine the cost of defects in code:

Product Defect Cost = Q*S*C*P


  • Q - The quality of the product. Quality is defined in terms of defect density (DD): the number of defects found per line of code after the code is released (DD = Number of Defects Found / Size). In the equation, Q is the defect density per line of code, so 100 defects per 1K LOC gives Q = 0.1.
  • S - The size of the product, counted in lines of code or Function Points (or some other size measure).
  • C - The cost per defect, based on installation and post-purchase problems.
  • P - The proportion of defects that the customer finds: the fraction of all of the defects in the software that any single customer is likely to find. For example, a value of 0.02 for P means that any single customer is likely to experience 2% of all of the detectable defects in the software.
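The formula and the four terms above can be expressed directly in code. This is a minimal sketch using the article's present-day numbers; the function name is my own.

```python
def product_defect_cost(q, s, c, p):
    """Product Defect Cost = Q * S * C * P.

    q: defect density (defects per line of code after release)
    s: product size in lines of code
    c: cost per defect, in dollars
    p: proportion of all defects a customer is likely to find
    """
    return q * s * c * p

# DD of .1 over 350K LOC, $400 per defect, customers find 2% of defects.
cost = product_defect_cost(0.1, 350_000, 400, 0.02)
print(f"${cost:,.0f}")  # → $280,000
```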

Real-world example:

While consulting for an Application Service Provider (ASP) several years ago, I noticed that the development staff dedicated to daily problems was larger than the staff for new development. I decided to conduct a product analysis. I randomly pulled 10 modules from source control and performed a code review. I found roughly 1 defect for every 10 lines of code, or a defect density of .1.

I continued the investigation by looking at factors such as the number of lines of code (LOC), running tools to search for cloned (copy & pasted) code, how the code was segmented, and the general architecture of the application. There were 350K lines of code, hundreds of instances of cloned code, 100K lines of the code base lived in stored procedures, and the application was, in general, poorly architected.

I finally wrote up a report that is summarized in the following chart:

Scenario                  Defects/KLOC  Q      S        C     P     Annual cost  Diff from present
Present                   100           0.1    350,000  $400  0.02  $280,000     -
Future - 20               20            0.02   350,000  $400  0.02  $56,000      $224,000
Future - 10               10            0.01   350,000  $400  0.02  $28,000      $252,000
Future - 1                1             0.001  350,000  $400  0.02  $2,800       $277,200
Future - Outsourcing      100           0.1    350,000  $100  0.02  $70,000      $210,000
Future - 20, Reduce code  20            0.02   175,000  $400  0.02  $28,000      $252,000
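A quick way to sanity-check the chart is to recompute each scenario from the Q*S*C*P formula. This sketch uses the article's numbers; the function and dictionary names are my own.

```python
def product_defect_cost(q, s, c, p):
    # Product Defect Cost = Q (defects/LOC) * S (LOC) * C ($/defect) * P (proportion found)
    return q * s * c * p

# (Q, S, C, P) per scenario, taken from the chart above.
scenarios = {
    "Present":                  (0.1,   350_000, 400, 0.02),
    "Future - 20":              (0.02,  350_000, 400, 0.02),
    "Future - 10":              (0.01,  350_000, 400, 0.02),
    "Future - 1":               (0.001, 350_000, 400, 0.02),
    "Future - Outsourcing":     (0.1,   350_000, 100, 0.02),
    "Future - 20, Reduce code": (0.02,  175_000, 400, 0.02),
}

present = product_defect_cost(*scenarios["Present"])
for name, args in scenarios.items():
    annual = product_defect_cost(*args)
    print(f"{name:26} annual ${annual:>9,.0f}   saves ${present - annual:>9,.0f}")
```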

Real-world notes:

  • C - Cost: The values I used were based on industry standards. C will vary by organization. Your organization should already know its Product Defect Cost, from which you can back out the cost of a single defect.
  • P - proportion: As we know, not all defects are visible, or found. The percentage of defects a customer actually finds can never really be known. However, 2% is a steady value I found used over and over again.
  • Future - 20: Shifting from a defect density of .1 to .02 has a dramatic impact on the economics of the software.
  • Future - Outsourcing: Keeping the defect density constant at .1 and simply outsourcing the support work to India had less impact than improving the present product. There is also the problem of customers who are unhappy with outsourced support or a lower-quality product.
  • Future - 20, Reduce code: Decreasing defects to .02 and reducing the code base by half has the same impact as reducing the defects to .01.
  • Defining a defect: I was liberal with defect counts, counting everything from unused variables to incorrect method documentation as an error.
  • $280,000 was a calculated value. When I submitted the report to the company, it was very close to their actual cost.
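The first note mentions backing out the cost of a single defect from a known Product Defect Cost. That is just the formula rearranged, C = Annual Cost / (Q * S * P); a hedged sketch (the function name is my own):

```python
def cost_per_defect(annual_cost, q, s, p):
    # Rearranged from Product Defect Cost = Q * S * C * P
    return annual_cost / (q * s * p)

# Using the article's present-day figures:
c = cost_per_defect(280_000, 0.1, 350_000, 0.02)
print(f"${c:,.0f}")  # → $400
```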


"You cannot control what you cannot measure." - Tom DeMarco (Software Engineer).

I've heard all types of aversions to measuring. The following are common excuses:

  • "We deliver defect free code."
    My response: There is no such thing as defect-free code. There are defects in every code base; people just don't track the defects, don't report them, or lie about the defect count. It's not about defect-free code, it's about limiting defects to increase the company's profitability.
  • "We practice extreme programming, therefore we deliver better solutions, quicker."
    I start asking questions as soon as I hear the phrase "Extreme Programming", and I eventually determine that the group is practicing only a subset of the extreme practices. Further, how can anyone determine that they are developing better, faster solutions if measurements are not taken? I'm also a big believer that no single process fits every situation.
  • "We have processes that work."
    If your organization is delivering high-quality software on schedule, great. However, if there are no measurements, how can anyone know the quality of the software?

Note: The cornerstone of science is measurement. While I was in school, measurement was involved in every laboratory experiment I had in physics, chemistry, and engineering.

Steve McConnell has a good discussion of defect measurement at

Higher quality, faster software delivery

A focus on quality has the added benefit of reducing the software delivery schedule.

Defect culture

In corporate IT development, defect density is normally about .1, or 100 defects per 1K LOC. Commercial software is about .05, and some high-end open source projects, like Apache and Linux, are around .001 (1 defect per 1K LOC).

You can google for quality numbers for Linux, Apache, and commercial software. There are some conflicting studies for Apache. The consensus is Linux's quality numbers are about .001, and .05 is consistent for commercial software.

I have not read studies that address IT organizations. I think this is partly because IT organizations do not open up. The defect density of .1 is based on my experiences consulting for 3 Fortune 500 companies over a 7-year period.

The difference in defects between IT software and Commercial is explained in
"Why the little fish eat the big".

Defect density is not a panacea

"Not everything that can be counted counts, and not everything that counts can be counted." - Albert Einstein

In Vietnam, a horrifying practice came about called 'body counts'. The concept of the 'body count' was to convey the status of the conflict by the ratio of enemy kills to friendly casualties. Just because the defect density is .001 doesn't mean that the software is marketable, meets the customer needs, or is well designed. As demonstrated previously, well-designed code that reduces the code base results in lower cost.


The company discussed in this blog had its customer base double over a 12-month period. As the formula predicted, the company's support cost doubled.

Developers are hired to make their employer profitable, not just to sit around creating cool gadgets. The more developers take an economic perspective on creating software, the more profitable they will make their employer, and the more valuable they will make themselves.


Further reading:

  • Return on Software by Steve Tockey
  • The ROI from Software Quality: An Executive Briefing by Khaled El Emam (Ph.D.)
