Software Quality Metrics — What to Measure When for Competitive Advantage | Part 3


This series of blog posts originally appeared as a white paper I wrote a number of years ago.  Nonetheless, the content is still relevant today, and useful to anyone in the software engineering business, whether commercial software producers or in-house development shops.

Reducing Defect Introduction and Repair Costs and Consequences

To know which engineering processes to target, and how, so that we avoid introducing defects in the first place and escape the costs of poor software quality entirely, we need to understand where defects come from, how costly they are to repair, and why they are introduced.  In the formulas that follow, the subscripted capital letter “I” denotes “Introduction”.

Rate of defect-introduction by SDLC stage S, R_I(S)

The number of defects introduced into the software, n_I, divided by the size/volume v of the software, shown for each stage S of the SDLC where the defects were introduced.

 R_I(S)=\frac{n_I(S)}{v},\ \forall\ \text{SDLC stages}\ S

The symbol \forall, here and in the formulas that follow, means “for all”.

While this metric has intuitive appeal, as discussed earlier it is relatively useless by itself.  The cost-rate of defect-introduction by SDLC stage (given next) provides much more insight into what is not working properly.
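That said, the computation itself is trivial once defects are logged with their stage of introduction.  Here is a minimal Python sketch; the stage labels and the size measure v (KLOC, function points, or whatever your shop uses) are illustrative assumptions.

```python
# Minimal sketch of R_I(S): defect counts per SDLC stage of introduction,
# divided by software size v. Stage labels and the size unit are assumed.
from collections import Counter

def rate_by_stage(stages_introduced: list[str], size: float) -> dict[str, float]:
    """R_I(S) = n_I(S) / v for every SDLC stage S that introduced defects."""
    counts = Counter(stages_introduced)
    return {stage: n / size for stage, n in counts.items()}

# e.g. rate_by_stage(["Design", "Design", "Build"], size=40.0)
#      -> {"Design": 0.05, "Build": 0.025}
```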

Cost-rate of defect-introduction by SDLC stage S, C_I(S)

Total costs of fix f [1] (including retesting, documentation changes, etc.), plus any penalty or other identifiable costs p to remediate defects, divided by the size/volume v of the software, shown for each stage S of the SDLC where the defects were introduced.

 C_I(S)=\frac{\sum_{\forall\ \text{costs}\ f(S),\,p(S)}\left(f(S)+p(S)\right)}{v},\ \forall\ \text{SDLC stages}\ S

From this metric, we can see how much ineffective engineering processes are costing us per unit-size for each stage in the SDLC, and can start to compare those over time as we effect improvements to see what impact those improvements are having.
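A sketch of how C_I(S) might be computed follows.  The defect record's field names, cost breakdown, and sample figures are all assumptions for illustration; the key-function parameter is there so the same code can later produce the cause- and component-based variants.

```python
# Sketch of C_I: total remediation cost (fix f plus penalties p) per unit
# size, grouped by an arbitrary key. Field names and figures are assumed.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Hashable

@dataclass
class Defect:
    stage_introduced: str   # SDLC stage S in which the defect was introduced
    cause: str              # root cause U
    component: str          # software component containing the defect
    fix_cost: float         # f: avoidable rework -- fix, retest, doc changes
    penalty_cost: float     # p: penalties and other identifiable costs
    severity: int           # severity score t_I, on whatever scale you use

def cost_rate(defects: list[Defect], size: float,
              key: Callable[[Defect], Hashable]) -> dict:
    """C_I = sum(f + p) / v, grouped by the supplied key."""
    totals: dict = defaultdict(float)
    for d in defects:
        totals[key(d)] += d.fix_cost + d.penalty_cost
    return {k: total / size for k, total in totals.items()}

# Illustrative data only
defects = [
    Defect("Design", "unclear requirement", "pricing", 12.0, 0.0, 3),
    Defect("Build", "coding slip", "reporting", 2.5, 0.0, 1),
]
size = 40.0  # v, e.g. KLOC or function points

c_by_stage = cost_rate(defects, size, key=lambda d: d.stage_introduced)  # C_I(S)
```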

Cost-rate of defect-introduction by cause U, C_I(U)

Total costs of fix f (including retesting, documentation changes, etc.), plus any penalty or other identifiable costs p to remediate defects, divided by the size/volume v of the software, shown for each cause U of the defects being introduced.

 C_I(U)=\frac{\sum_{\forall\ \text{costs}\ f(U),\,p(U)}\left(f(U)+p(U)\right)}{v},\ \forall\ \text{causes}\ U

Again, from this metric we can see how much ineffective engineering processes are costing us per unit-size for a variety of root causes, and can compare those over time as we effect improvements to address those root causes and see what impact the improvements are having.
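Using the cost_rate() sketch above, the cause-based variant is the same computation keyed on root cause rather than stage:

```python
# C_I(U): identical computation, grouped by root cause instead of SDLC stage.
c_by_cause = cost_rate(defects, size, key=lambda d: d.cause)
```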

Severity rate of defect-introduction by cause U, T_I(U)

Total severity scores t_I of defects divided by the size/volume v of the software, shown for each cause U of the defects being introduced.

 T_I(U)=\frac{\sum_{\forall\ \text{severities}\ t_I(U)} t_I(U)}{v},\ \forall\ \text{causes}\ U

Here too we have the base metrics pointing to causes of defects and showing their severity to our customers.
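Building on the defect record sketched earlier, T_I(U) might look like the following; the numeric severity scale is an assumption, and any consistent ordinal scoring will do.

```python
# Sketch of T_I(U): total severity scores per unit size, grouped by cause.
from collections import defaultdict

def severity_rate(defects, size, key=lambda d: d.cause):
    """T_I(U) = sum of severity scores t_I(U) / v, for each cause U."""
    totals = defaultdict(float)
    for d in defects:
        totals[key(d)] += d.severity
    return {k: total / size for k, total in totals.items()}

t_by_cause = severity_rate(defects, size)
```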

Ranked Lists

With the above four preliminary sets of computations, we can then create lists showing in which stages, and from which causes, defects cost the most and are of the worst severities.  That gives us somewhere to focus our attention on reducing defect introduction and its associated costs, and on improving customer and business-partner satisfaction.

Defect introduction cause ranked by cost rates

An ordered list with the associated cost rates of the causes of defects, also showing the stage introduced, ranked highest cost-rate to lowest cost-rate.  This starts to give us an idea of what to tackle first.

Defect introduction cause ranked by severity

An ordered list with the associated severities of defects, also showing the stage introduced, ranked highest severity to lowest severity.  This generally should correlate to the prior list, and again gives us an idea of what to tackle first.
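Both lists fall out of the earlier sketches with a sort; keying on the (cause, stage-introduced) pair is one reasonable choice of granularity, not the only one.

```python
# Ranked lists: (cause, stage introduced) pairs ordered highest rate first,
# reusing cost_rate() and severity_rate() from the sketches above.
def ranked(rates: dict) -> list[tuple]:
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

def by_cause_and_stage(d): return (d.cause, d.stage_introduced)

cost_ranking = ranked(cost_rate(defects, size, key=by_cause_and_stage))
severity_ranking = ranked(severity_rate(defects, size, key=by_cause_and_stage))
```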

Other Metrics

Two other metrics that should be compiled and reviewed are the rate and cost-rate of defect-introduction by software component.  These metrics will indicate areas of the system that are likely overly complex, of a legacy design, poorly understood, or otherwise ‘fragile’ and in need of attention.  Rate and cost-rate of defect-introduction by software component by SDLC stage will further pinpoint the nature of the underlying problem.
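With the grouping-key approach sketched earlier, these views require no new machinery; the assumed component field on each defect record does the work:

```python
# Per-component views via the same key mechanism ('component' field assumed).
from collections import Counter

r_by_component = {k: n / size for k, n in
                  Counter(d.component for d in defects).items()}      # rate
c_by_component = cost_rate(defects, size, key=lambda d: d.component)  # cost-rate
c_by_component_stage = cost_rate(
    defects, size, key=lambda d: (d.component, d.stage_introduced))
```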

For example, suppose there is an inordinate rate and cost-rate of defect-introduction for a specific module, with the defects consistently introduced in Design.  Clearly the design of that component should be reviewed.  Good design documentation may not exist, or the design may be overly complex and cumbersome.  In the former case, the costs captured and associated with defects introduced for that component during design should justify the effort to create the needed documentation.  In the latter case, those same costs should justify a re-design.

Conversely, if defects were consistently introduced in Design across all modules, then it is our design disciplines and practices in general that need attention, not the design issues of one module.  If we see a pattern of defects in the design of additions to only the more complex modules, while the simpler ones pass this SDLC stage unscathed, then again we need to look at what is wrong with our design discipline and techniques in general that fail for larger and more complex modules.  It may be that we in fact do not have a design discipline, and that our programmer/analysts can “wing it” for the simpler components and modifications, but fall down on the more complex elements and aspects.

As a second example, suppose there is an inordinate rate and cost-rate of defect-introduction for a set of modules that combined perform a specific function, with the source stage consistently in business and software requirements development and discovery in UAT or, worse, production.  If the same pattern is not detected consistently across all areas of the software product, but just this particular one, then the issue is a poor understanding by the business (and consequently by the software engineering team) of what it actually needs to achieve.  Nonetheless, the failing is still with the modelling of the business and its specific processes, as solid software engineering processes would have driven out the correct models.  Focus needs to be given to two fronts:  (1) why has the business chronically not understood this element of its activity, and (2) what failed in particular in our modelling and development of requirements here that may fail elsewhere in future, and how do we fix it?

Read Part 4 – Improving Defect Detection


[1] Note that this amount f is the “avoidable rework” we noted before.  Clearly, in your time-tracking system, there must be the ability to flag hours as being of this type of work, as well as the organisational discipline to actually do that flagging.  That means that, at least initially when implementing such practice, there must not be negative ramifications for doing so.
