DISTRIBUTED CONTINUOUS QUALITY ASSURANCE
Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on developer-generated workloads and regression suites. The shortcomings of in-house QA efforts are well-known and severe, including (1) increased QA cost and (2) misleading results when the test cases, input workloads, and software platforms at the developer's site differ from those in the field. Consequently, tools and processes have been developed to improve software quality by increasing user participation in the QA process. A limitation of these approaches is that they focus on isolated mechanisms, not on the coordination and control policies and tools needed to make the global QA process efficient, effective, and scalable.

To address these issues, we have initiated the Skoll project, which is developing and validating novel software QA processes and tools that leverage the extensive computing resources of worldwide user communities in a distributed, continuous manner to significantly and rapidly improve software quality. We call this distributed continuous quality assurance (DCQA). We envision a QA process conducted around-the-world, around-the-clock on a powerful computing grid provided by thousands of user machines during off-peak hours. Skoll processes are distributed, opportunistic, and adaptive. They are distributed: given a QA task, we divide it into several subtasks, each of which can be performed on a single user machine. They are opportunistic: when a user machine becomes available, we allocate one or more subtasks to it, collect the results when they are available, and fuse them together at central collection sites to complete the overall QA process. Finally, they are adaptive: we use earlier subtask results to schedule and coordinate subsequent subtask allocation. In this thesis, we build infrastructure, algorithms, and tools for developing and executing thorough, transparent, managed, and adaptive DCQA processes.
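The distributed, opportunistic, and adaptive steps above can be sketched as a simple scheduling loop. This is an illustrative sketch only: the function names, the "fail means prioritize related subtasks" policy, and the in-process simulation of machine availability are assumptions for exposition, not Skoll's actual implementation.

```python
import random
from collections import deque

def run_dcqa(subtasks, machines, run_subtask):
    """Opportunistically allocate QA subtasks to user machines, fuse the
    results at a central site, and adapt scheduling to earlier results.
    Hypothetical sketch; not the Skoll infrastructure itself."""
    pending = deque(subtasks)   # distributed: the QA task, pre-divided
    results = {}                # central collection site
    while pending:
        machine = random.choice(machines)  # a machine becomes available
        task = pending.popleft()           # opportunistic: allocate to it
        outcome = run_subtask(machine, task)
        results[task] = outcome            # fuse result centrally
        if outcome == "fail":
            # adaptive: move subtasks related to the failure (here, those
            # sharing the same configuration option) to the front
            related = [t for t in pending if t[0] == task[0]]
            for t in related:
                pending.remove(t)
            pending.extendleft(reversed(related))
    return results
```

In a real deployment the loop body would run remotely on volunteer machines and results would arrive asynchronously; the sketch keeps everything in-process to show the control policy only.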
We then develop several novel DCQA processes and empirically evaluate them, with a special focus on the cost efficiency and applicability of these processes to real-life, highly configurable software systems. Our results strongly suggest that these new processes are an effective and efficient way to conduct QA tasks such as evaluating performance characteristics and testing the functional correctness of evolving software systems.