Software Quality Metrics — What to Measure When for Competitive Advantage | Part 4

This series of blog posts originally appeared as a white paper I wrote a number of years ago.  Nonetheless, the content is still relevant today, and useful to anyone in the software engineering business, be they commercial software producers or in-house development shops.

Improving Defect Detection

No software engineering shop is perfect, as human beings are by nature prone to error.  So we want to determine not only what we can do to avoid defects altogether, but also, failing that, what we can do to detect them earlier and “nip them in the bud” before they explode in cost and damage to the organisation's reputation.  In the formulas that follow, the subscripted capital letter “I” denotes “Introduction”, and the subscripted capital letter “D” denotes “Detection”.

Rate of defect-detection per SDLC stage S, R_D(S)

The number of defects detected in the software, n_D, divided by the size/volume v of the software, shown for each stage S of the SDLC where the defects were detected.

R_D(S) = \frac{n_D(S)}{v}, \quad \forall \text{ SDLC stages } S
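
As a minimal sketch of how this could be computed (the stage names, defect records, and volume figure below are hypothetical, purely for illustration):

```python
from collections import Counter

# Hypothetical defect records: the SDLC stage in which each defect was detected.
detected_stages = ["Design", "Unit Test", "Unit Test", "UAT"]

volume_kloc = 12.5  # hypothetical software size/volume v, in KLOC

# R_D(S) = n_D(S) / v for every SDLC stage S in which defects were detected
rate_by_stage = {stage: count / volume_kloc
                 for stage, count in Counter(detected_stages).items()}
print(rate_by_stage)  # {'Design': 0.08, 'Unit Test': 0.16, 'UAT': 0.08}
```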

Again, while this metric has some nostalgic appeal, it has little practical value for the reasons already discussed.  The rate metrics that follow provide better information.

Rate of defect-detection per SDLC stage defect detected S_D, per SDLC stage defect introduced S_I, R(S_I, S_D)

This is a key measure to show how well our detection methods are working to minimise cost.  The number of defects n detected in the software divided by the size/volume v of the software, shown for each stage S_D of the SDLC where the defects were detected, and for each stage S_I of the SDLC where the defects were introduced.

R(S_I, S_D) = \frac{n(S_I, S_D)}{v}, \quad \forall \text{ SDLC stages } S_I \text{ and } S_D
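
A minimal sketch of computing these per-cell rates, assuming each defect record carries the stage in which it was introduced, the stage in which it was detected, and its cost (all records and the volume figure below are hypothetical):

```python
from collections import defaultdict

# Hypothetical defect records: (stage introduced, stage detected, cost to fix)
defects = [
    ("Business Requirements", "UAT", 20_000),
    ("Software Requirements", "Software Requirements", 300),
    ("Build", "Unit Test", 300),
]
volume_kloc = 12.5  # hypothetical software size/volume v, in KLOC

counts = defaultdict(int)    # n(S_I, S_D): defects per (introduced, detected) cell
costs = defaultdict(float)   # total cost per (introduced, detected) cell
for introduced, detected, cost in defects:
    counts[(introduced, detected)] += 1
    costs[(introduced, detected)] += cost

# R(S_I, S_D) = n(S_I, S_D) / v for every introduction/detection stage pair
rates = {cell: n / volume_kloc for cell, n in counts.items()}
```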

These metrics are best shown in a matrix form, such as:

| Detected \ Introduced | Conceptualise | Business Requirements | Software Requirements | Design | Build | Unit Test | Build and Smoke Test | Performance Test | System and Regression Test | UAT | Deployment | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Conceptualise | 2, $150 |  |  |  |  |  |  |  |  |  |  | 2, $150 |
| Business Requirements |  | 1, $100 |  |  |  |  |  |  |  |  |  | 1, $100 |
| Software Requirements |  | 1, $300 | 3, $300 |  |  |  |  |  |  |  |  | 4, $600 |
| Design |  |  |  | 1, $50 |  |  |  |  |  |  |  | 1, $50 |
| Build |  |  |  |  | 1, $50 |  |  |  |  |  |  | 1, $50 |
| Unit Test |  |  |  |  | 3, $300 | 1, $40 |  |  |  |  |  | 4, $340 |
| Build and Smoke Test |  |  |  |  |  |  |  |  |  |  |  |  |
| Performance Test |  |  |  |  |  |  |  |  |  |  |  |  |
| System and Regression Test |  |  | 1, $3,000 |  |  |  |  |  |  |  |  | 1, $3,000 |
| UAT |  | 1, $20,000 | 3, $21,000 |  |  |  |  |  |  |  |  | 4, $41,000 |
| Deployment |  |  |  |  |  |  |  |  |  |  |  |  |
| Production Warranty |  |  | 1, $14,000 |  |  |  |  |  |  |  |  | 1, $14,000 |
| Production Post-Warranty |  |  |  |  |  |  |  |  |  |  |  |  |
| TOTAL | 2, $150 | 3, $20,400 | 8, $38,300 | 1, $50 | 4, $350 | 1, $40 |  |  |  |  |  | 19, $59,290 |
| % Cost | 0.3 | 34.4 | 64.6 | 0.1 | 0.6 | 0.1 |  |  |  |  |  | 100 |

Each populated cell shows the number of defects and their associated cost, by stage introduced (column) and stage detected (row).

Cells in which detection would occur in a stage before the defect was introduced contain no data (grey in the original colour-coded matrix), as one cannot detect a defect before it is created.  Cells on the diagonal (pale green in the original) show defect containment within the SDLC stage / activity-type in which the defect was introduced.  The remaining cells indicate a lack of stage- / activity-type containment, with the original colouring moving toward “white hot” as the cost consequences worsen.  The matrix above indicates that anything found in Production has the worst cost consequences, as there are also reputation ramifications.  Ideally, the cost-rate of defect introduction would be included in each cell along with the counts, providing a more holistic view (as illustrated).  The matrix clearly shows where process improvement for prevention and detection needs to occur.  In the example above, this fictitious software development group is in worse shape than most: fully 99% of the cost of its defects is introduced in requirements (business and software) and not detected until far too late in the SDLC.
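
For instance, the 99% figure follows directly from the TOTAL and % Cost rows of the matrix above:

 \frac{\$20{,}400 + \$38{,}300}{\$59{,}290} \approx 34.4\% + 64.6\% = 99.0\%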

Defect-detection methods ranked by rate

An ordered list of the various defect-detection methods, sorted from the method that uncovers the most defects to the one that uncovers the least, with the associated cost-rates of detection and cost-rates to fix.  This should be examined to ensure that the methods best for stage-containment (and thus lowest cost-to-fix) are the most effective.  For example, “System Functional and Regression Testing” should not appear at the top of the list, because that would indicate that the majority of defects are found at this very late stage in the SDLC.
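
As an illustrative sketch (the method names, defect counts, and per-defect cost-rates below are invented for the example, not taken from the matrix above):

```python
# Hypothetical totals per defect-detection method:
# (defects uncovered, cost-rate of detection per defect, cost-rate to fix per defect)
methods = {
    "Requirements peer review": (12, 50, 150),
    "Unit testing": (8, 40, 300),
    "System functional and regression testing": (15, 200, 3_000),
}

# Rank the methods from most defects uncovered to least.
for name, (found, detect_rate, fix_rate) in sorted(
        methods.items(), key=lambda item: item[1][0], reverse=True):
    print(f"{name}: {found} defects, ${detect_rate}/defect to detect, ${fix_rate}/defect to fix")
```

In this invented example the ranking itself would be a warning sign, since the late-stage system testing tops the list.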

Cost-rate of defect-detection per SDLC stage, C_D(S)

An ordered list of the various defect-detection methods grouped by SDLC stage, sorted within each stage from the method that uncovers the most defects to the one that uncovers the least, with the associated cost-rates of detection and cost-rates to fix.  Again, this should be examined to ensure that the methods best for stage-containment (and thus lowest cost-to-fix) are the most effective, and that a late-stage method such as “System Functional and Regression Testing” does not account for the majority of defects found.
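
A sketch of the same ranking grouped by SDLC stage, using hypothetical records that tag each detected defect with its stage, detection method, and costs:

```python
from collections import defaultdict

# Hypothetical records: (SDLC stage detected, detection method, detection cost, cost to fix)
records = [
    ("Software Requirements", "Peer review", 50, 100),
    ("Software Requirements", "Peer review", 50, 120),
    ("Build", "Static analysis", 10, 60),
    ("System and Regression Test", "System functional testing", 200, 3_000),
]

by_stage = defaultdict(lambda: defaultdict(lambda: [0, 0.0, 0.0]))
for stage, method, detect_cost, fix_cost in records:
    totals = by_stage[stage][method]
    totals[0] += 1            # defects uncovered by this method in this stage
    totals[1] += detect_cost  # total cost of detection
    totals[2] += fix_cost     # total cost to fix

for stage, stage_methods in by_stage.items():
    print(stage)
    for method, (n, detect, fix) in sorted(
            stage_methods.items(), key=lambda m: m[1][0], reverse=True):
        print(f"  {method}: {n} defects, ${detect:,.0f} to detect, ${fix:,.0f} to fix")
```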

Conclusion

Coming full circle back to one of our premises: you cannot manage what you cannot measure.  This series of posts has given you a set of metrics that you can action, and that will point you in the direction you need to go to improve your software engineering processes and produce a better-quality product.  That better-quality product will, in turn, improve your organisation's profitability and customer relations.  And that, of course, is what we are all accountable for.
