Porter, Adam A.; Toman, C. A.; Siy, Harvey; Votta, Lawrence G. An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development. Technical Report (also cross-referenced as UMIACS-TR-97-20).

Abstract: We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project while maximizing our ability to gather useful data. This article has several goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it, (2) to present our results and discuss their implications for both software practitioners and researchers, and (3) to discuss several new questions raised by our findings. For each inspection we randomly assigned 3 independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4), (2) the number of teams inspecting the code unit (1 or 2), and (3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results are based on the observation of 88 inspections. They challenge certain long-held beliefs about the most cost-effective ways to conduct inspections and raise some questions about the benefits of recently proposed methods.
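The randomized design described in the abstract can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' actual assignment procedure: the pool names, the rule that repair-between-inspections only applies to two-team treatments, and the per-inspection sampling are assumptions for the sketch.

```python
import random

# Hypothetical stand-ins for the 11 experienced developers in the pool.
REVIEWER_POOL = [f"dev{i}" for i in range(1, 12)]

def assign_inspection(rng: random.Random) -> dict:
    """Randomly assign the three treatment variables for one inspection."""
    team_size = rng.choice([1, 2, 4])      # reviewers per inspection team
    num_teams = rng.choice([1, 2])         # teams inspecting the code unit
    # Assumed: the repair-between-inspections treatment is only meaningful
    # when a second team inspects after the first.
    repair_between = rng.choice([True, False]) if num_teams == 2 else False
    # Reviewers are drawn without replacement, so no developer serves twice
    # on the same inspection (at most 2 * 4 = 8 of the 11 are needed).
    reviewers = rng.sample(REVIEWER_POOL, team_size * num_teams)
    teams = [reviewers[i * team_size:(i + 1) * team_size]
             for i in range(num_teams)]
    return {"team_size": team_size, "num_teams": num_teams,
            "repair_between": repair_between, "teams": teams}

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(assign_inspection(rng))
```

Sampling without replacement (`random.sample`) is what guarantees the teams for a given inspection are disjoint, matching the selection rule stated in the abstract.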