Browsing by Author "Votta, Lawrence G."
Now showing 1 - 9 of 9
Item Anywhere, Anytime Code Inspections: Using the Web to Remove Inspection Bottlenecks in Large-Scale Software Development (1998-10-15) Perpich, James; Perry, Dewayne E.; Porter, Adam A.; Votta, Lawrence G.; Wade, Michael W.
The dissemination of critical information and the synchronization of coordinated activities are critical problems in geographically separated, large-scale software development. While these problems are not insurmountable, their solutions have varying trade-offs in terms of time, cost, and effectiveness. Our previous studies have shown that the inspection interval is typically lengthened by schedule conflicts among inspectors, which delay the (usually) required inspection collection meeting. We present and justify a solution using an intranet web that is both timely in its dissemination of information and effective in its coordination of distributed inspectors. First, exploiting a naturally occurring experiment (reported here), we conclude that the asynchronous collection of inspection results is at least as effective as the synchronous collection of those results. Second, exploiting the information dissemination qualities of the web, the on-demand nature of its information retrieval, and the platform independence of browsers, we built an inexpensive tool that integrates seamlessly into the current development process. By seamless we mean an identical paper flow that results in an almost identical inspection process. The acceptance of the inspection tool has been excellent. The cost savings from the reduction in paperwork and the time savings from the reduction in the distribution interval of the inspection package (which sometimes involved international mailings) have been substantial. These savings, together with the seamless integration into the existing environment, are the major factors in this acceptance. From our viewpoint as experimentalists, the acceptance came too readily.
Therefore we lost our opportunity to explore this tool using a series of controlled experiments to isolate the underlying factors of its effectiveness. Nevertheless, using historical data, we can show that the new process is less expensive in terms of cost and at least as effective in terms of quality (defect detection effectiveness). (Also cross-referenced as UMIACS-TR-97-17)

Item An Experiment to Assess Cost-Benefits of Inspection Meetings and their Alternatives (1998-10-15) McCarthy, Patricia; Porter, Adam; Siy, Harvey; Votta, Lawrence G.
We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: (1) the inspection method used (two methods involve meetings, one does not), (2) the requirements specification to be inspected (there are two), (3) the inspection round (each team participates in two inspections), and (4) the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: (1) the individual fault detection rate, (2) the team fault detection rate, and (3) the percentage of faults originally discovered after the initial inspection phase (during which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 computer science graduate students at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions.
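As a rough illustration, the crossed factors this design manipulates can be enumerated programmatically. This is a hypothetical sketch; the abstract names the factors but not these labels, which are invented here:

```python
from itertools import product

# Invented labels for the four manipulated factors described in the abstract.
methods = ["meeting_A", "meeting_B", "no_meeting"]   # two with meetings, one without
orders  = ["spec_1_first", "spec_2_first"]           # presentation order of the two specs

# Each team inspects both specifications across its two rounds, so a
# team-level condition is a (method, order) pair; the specification seen
# in a given round follows from the order.
team_conditions = list(product(methods, orders))

def spec_for_round(order, rnd):
    """Return which specification a team inspects in round 1 or 2 under a given order."""
    first, second = ("spec_1", "spec_2") if order == "spec_1_first" else ("spec_2", "spec_1")
    return first if rnd == 1 else second
```

Crossing the three methods with the two presentation orders yields six team-level conditions, with round and specification determined by the assigned order.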
Rather than presenting preliminary conclusions, this article (1) describes the experiment's design and the provocative hypotheses we are evaluating, (2) summarizes our observations from the experiment's initial run, and (3) discusses how we are using these observations to verify our data collection instruments and to refine future experimental runs. (Also cross-referenced as UMIACS-TR-95-89)

Item An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development (1998-10-15) Porter, Adam A.; Toman, C. A.; Siy, Harvey; Votta, Lawrence G.
We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project while maximizing our ability to gather useful data. This article has several goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it, (2) to present our results and discuss their implications for both software practitioners and researchers, and (3) to discuss several new questions raised by our findings. For each inspection we randomly assigned three independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4), (2) the number of teams inspecting the code unit (1 or 2), and (3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate.
Our results are based on the observation of 88 inspections; they challenge certain long-held beliefs about the most cost-effective ways to conduct inspections and raise some questions about the benefits of recently proposed methods. (Also cross-referenced as UMIACS-TR-97-20)

Item An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development (1998-10-15) Porter, Adam A.; Toman, C. A.; Siy, Harvey; Votta, Lawrence G.
We are conducting a long-term experiment (in progress) to compare the costs and benefits of several different software inspection methods. These methods are being applied by professional developers to a commercial software product they are currently writing. Because the laboratory for this experiment is a live development effort, we took special care to minimize cost and risk to the project while maximizing our ability to gather useful data. This article has several goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it, (2) to present our preliminary results and discuss their implications for both software practitioners and researchers, and (3) to discuss how we expect to modify the experiment in order to reduce potential risks to the project. For each inspection we randomly assign three independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4), (2) the number of teams inspecting the code unit (1 or 2), and (3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection are randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection include inspection interval (elapsed time), total effort, and the defect detection rate. To date we have completed 34 of the planned 64 inspections.
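A minimal sketch of this style of randomized assignment, under assumed names (the papers do not publish their assignment procedure, so everything below is illustrative):

```python
import random

def plan_inspection(pool, rng):
    """Randomly assign the three manipulated variables for one inspection,
    then draw reviewers from the pool without replacement.
    Hypothetical sketch; not the authors' actual procedure."""
    teams = rng.choice([1, 2])             # number of teams inspecting the code unit
    team_size = rng.choice([1, 2, 4])      # reviewers per team
    # The repair-between-teams condition only applies when there are two teams.
    repair_between = (teams == 2) and rng.choice([True, False])
    needed = teams * team_size             # at most 2 * 4 = 8, within the pool of 11
    reviewers = rng.sample(pool, needed)   # without replacement within this inspection
    return {"teams": teams, "team_size": team_size,
            "repair_between": repair_between, "reviewers": reviewers}

rng = random.Random(0)
pool = [f"dev_{i}" for i in range(11)]     # the pool of 11 experienced developers
plan = plan_inspection(pool, rng)
```

Sampling without replacement within an inspection guarantees no reviewer sits on both teams of a two-team inspection.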
Our preliminary results challenge certain long-held beliefs about the most cost-effective ways to conduct inspections and raise some questions about the feasibility of recently proposed methods. (Also cross-referenced as UMIACS-TR-95-14)

Item A Review of Software Inspections (1998-10-15) Porter, Adam A.; Siy, Harvey; Votta, Lawrence G.
For two decades, software inspections have proven effective for detecting defects in software. We have reviewed the different ways software inspections are done, created a taxonomy of inspection methods, and examined claims about the cost-effectiveness of different methods. We detect a disturbing pattern in the evaluation of inspection methods. Although there is universal agreement on the effectiveness of software inspections, their economics are uncertain. Our examination of several empirical studies leads us to conclude that the benefits of inspections are often overstated and the costs (especially for large software developments) are understated. Furthermore, some of the most influential studies establishing these costs and benefits are now 20 years old, which leads us to question their relevance to today's software development processes. Extensive work is needed to determine exactly how, why, and when software inspections work, and whether some defect detection techniques might be more cost-effective than others. In this article we ask some questions about measuring the effectiveness of software inspections and determining how much they really cost when their effect on the rest of the development process is considered. Finding answers to these questions will enable us to improve the efficiency of software development. (Also cross-referenced as UMIACS-TR-95-104)

Item Specification-based Testing of Reactive Software: A Case Study in Technology Transfer (1998-10-15) Jangadeesan, Lalita; Porter, Adam A.; Puchol, Carlos; Ramming, J. Christopher; Votta, Lawrence G.
We describe a case study in which we tried to transfer a specification-based testing system from research to practice. We did the case study in two steps: first we conducted a feasibility study in a laboratory setting to estimate the potential costs and benefits of using the system; next we conducted a usability study, in an industrial setting, to determine whether it would be effective in practice. The case study illustrates that technology transfer efforts can benefit from a greater focus on practitioners' needs, and that this focus helps identify some of the open problems that limit formal methods technology transfer. We also found that there is often a tension between the scope of the problem to be solved and the specificity of the solution: the greater the scope of the problem, the more general the formal method solution and, thus, the more customization that must be done to use it in a particular environment. We suggest that researchers limit the scope of the problems they try to solve to minimize the risk of technology transfer failure. (Also cross-referenced as UMIACS-TR-97-16)

Item Specification-based Testing of Reactive Software: Tools and Experiments (1998-10-15) Jangadeesan, Lalita Jategaonkar; Porter, Adam A.; Puchol, Carlos; Ramming, J. Christopher; Votta, Lawrence G.
Testing commercial software is expensive and time consuming. Automated testing methods promise to save a great deal of time and money throughout the software industry. One approach that is well suited to the reactive systems found in telephone switching systems is specification-based testing. We have built a set of tools to automatically test software applications for violations of safety properties expressed in temporal logic. Our testing system automatically constructs finite state machine oracles corresponding to safety properties, builds test harnesses, and integrates them with the application.
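To illustrate the idea of a finite-state oracle for a temporal-logic safety property, here is a toy monitor for an invented invariant ("the working and protection channels are never both active"); the actual properties and the tools' internals are not described in the abstract:

```python
def make_oracle():
    """Return a step function that consumes events, tracks channel state,
    and latches a failure once the invented safety invariant is violated."""
    state = {"working": False, "protect": False, "ok": True}
    def step(event):
        if event == "working_up":
            state["working"] = True
        elif event == "working_down":
            state["working"] = False
        elif event == "protect_up":
            state["protect"] = True
        elif event == "protect_down":
            state["protect"] = False
        if state["working"] and state["protect"]:
            state["ok"] = False          # safety violated; the failure latches
        return state["ok"]
    return step

# A trace that respects the invariant, and one that violates it.
oracle = make_oracle()
trace_ok = all(oracle(e) for e in ["working_up", "working_down", "protect_up"])
bad = make_oracle()
trace_bad = all(bad(e) for e in ["working_up", "protect_up"])
```

Because safety properties assert that "nothing bad ever happens," a finite trace suffices to witness a violation, which is what makes this style of runtime oracle possible.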
The test harness then generates inputs automatically to test the application. We describe a study examining the feasibility of this approach for testing industrial applications. To conduct this study we formally modeled an Automatic Protection Switching (APS) system, an application common to many telephony systems. We then asked a number of computer science graduate students to develop several versions of the APS and to use our tools to test them. We found that the tools are very effective, save significant amounts of human effort (at the expense of machine resources), and are easy to use. We also discuss improvements that are needed before we can use the tools with professional developers building commercial products. (Also cross-referenced as UMIACS-TR-97-18)

Item Understanding the Effects of Developer Activities on Inspection Interval (1998-10-15) Porter, Adam A.; Siy, Harvey; Votta, Lawrence G.
We have conducted an industrial experiment to assess the cost-benefit tradeoffs of several software inspection processes. Our results to date explain the variation in observed effectiveness very well, but are unable to satisfactorily explain variation in inspection interval. In this article we examine the effect of a new factor, process environment, on inspection interval (the calendar time needed to complete the inspection). Our analysis suggests that process environment does indeed influence inspection interval. In particular, we found that non-uniform work priorities, time-varying workloads, and deadlines have significant effects. Moreover, these experiences suggest that regression models are inherently inadequate for interval modeling, and that queueing models may be more effective.
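A toy single-server queueing simulation, under assumed arrival and service rates, shows why calendar interval can far exceed inspection effort once an inspector's other work queues ahead of it. This is an illustrative sketch, not the paper's model:

```python
import random

def simulate_intervals(n_jobs, arrival_rate, service_rate, rng):
    """Single-server FIFO queue: each job's interval is its waiting time
    plus its service time (a toy M/M/1-style sketch)."""
    t = 0.0              # simulation clock
    server_free = 0.0    # when the inspector next becomes idle
    intervals = []
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)       # next inspection request arrives
        start = max(t, server_free)              # wait if the inspector is busy
        service = rng.expovariate(service_rate)  # the inspection effort itself
        server_free = start + service
        intervals.append(server_free - t)        # calendar interval, not just effort
    return intervals

rng = random.Random(1)
intervals = simulate_intervals(1000, arrival_rate=0.8, service_rate=1.0, rng=rng)
mean_interval = sum(intervals) / len(intervals)  # well above the mean effort of 1.0
```

At 80% utilization the mean time in such a system is roughly five times the mean service time, an inflation driven entirely by queueing, which a regression on effort alone cannot capture.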
(Also cross-referenced as UMIACS-TR-97-19)

Item Understanding the Sources of Variation in Software Inspections (1998-10-15) Porter, Adam A.; Siy, Harvey; Mockus, Audris; Votta, Lawrence G.
In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size, and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they have not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspection. Acting on the hypothesis that the "inputs" to the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in defect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure accounted for only a small percentage of the variance in inspection interval. Therefore, there still remain other factors which need to be identified. (Also cross-referenced as UMIACS-TR-97-22)
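The kind of attribution described in the last abstract can be illustrated with a toy between/within sum-of-squares decomposition. The grouping and the numbers below are invented for the sketch, not the paper's data:

```python
def variance_decomposition(groups):
    """Split total sum of squares into between-group and within-group parts
    (one-way decomposition; the two parts sum exactly to the total)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    return ss_total, ss_between, ss_within

# Invented defect-detection rates grouped by a hypothetical process input
# (e.g. the code unit inspected):
groups = [[0.20, 0.25, 0.22], [0.50, 0.55, 0.48], [0.30, 0.35, 0.33]]
ss_total, ss_between, ss_within = variance_decomposition(groups)
explained = ss_between / ss_total   # share of variation attributable to the grouping
```

A large between-group share relative to the within-group share is what it means for a process input, rather than process structure, to dominate the variation in detection rates.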