Understanding the Sources of Variation in Software Inspections


Files

CS-TR-3762.ps (566.49 KB)
No. of downloads: 374
CS-TR-3762.pdf (376.58 KB)
No. of downloads: 848

Date

1998-10-15

Abstract

In a previous experiment, we determined how various changes to three structural elements of the software inspection process (team size, and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval.

We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they did not bias our earlier results. Also, identifying these other factors might suggest additional ways to improve inspection efficiency.

Acting on the hypothesis that the "inputs" into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in defect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness.
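
The variance modeling itself is not reproduced in this abstract. As a rough illustration only, the sketch below (hypothetical data and factor names, not from the study) shows how an ANOVA-style decomposition in Python with statsmodels can compare how much of the variation in defects found is attributable to process-structure factors versus process-input factors.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one row per inspection, with process-structure factors
# (team size, number of sessions) and a process-input factor (code author)
# as predictors, and the number of defects found as the response.
inspections = pd.DataFrame({
    "defects":   [4, 7, 2, 9, 5, 3, 8, 6],
    "team_size": [1, 2, 1, 2, 1, 2, 1, 2],
    "sessions":  [1, 1, 2, 2, 1, 1, 2, 2],
    "author":    ["a1", "a2", "a1", "a2", "a3", "a3", "a1", "a2"],
})

# Fit a fixed-effects linear model and partition the variance with an
# ANOVA table; comparing the sums of squares of the structure factors
# against the input factor indicates which group explains more variation.
model = ols("defects ~ C(team_size) + C(sessions) + C(author)",
            data=inspections).fit()
print(sm.stats.anova_lm(model, typ=2))

The study itself used richer data and models, but the same idea of comparing the share of variance explained by structure and by inputs underlies the conclusion above.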

The combined effects of process inputs and process structure accounted for only a small percentage of the variance in the inspection interval. Therefore, other factors that influence the interval still remain to be identified. (Also cross-referenced as UMIACS-TR-97-22)
