Show simple item record

dc.contributor.advisor: Hollingsworth, Jeffrey K (en_US)
dc.contributor.author: Stoker, Geoffrey Melvin (en_US)
dc.date.accessioned: 2014-10-11T05:46:06Z
dc.date.available: 2014-10-11T05:46:06Z
dc.date.issued: 2014 (en_US)
dc.identifier: https://doi.org/10.13016/M2J59R
dc.identifier.uri: http://hdl.handle.net/1903/15752
dc.description.abstract: Dynamic performance analysis of executing programs commonly relies on statistical profiling techniques to provide performance measurement results. When a program execution is sampled we learn something about the examined program, but we also change, to some extent, the program's interaction with the underlying system and thus its behavior. The amount we learn diminishes (statistically) with each sample taken, while the change we effect with the intrusive sampling risks growing larger. Effectively sampling programs is challenging largely because of the opposing effects of the decreasing sampling error and the increasing perturbation error. Achieving the highest overall level of confidence in measurement results requires striking an appropriate balance between the tensions inherent in these two types of errors. Despite the popularity of statistical profiling, published material typically explains only in general qualitative terms the motivation for the sampling rates used. Given the importance of sampling, we argue in favor of the general principle of deliberate sample size selection and have developed and tested a technique for doing so. We present our idea of sample rate selection based on abstract and mathematical performance measurement models we developed that incorporate the effect of sampling on both measurement accuracy and perturbation. Our mathematical model predicts the sample size at which the combination of the residual measurement error and the accumulating perturbation error is minimized. Our evaluation of the model with simulation, calibration programs, and selected programs from the SPEC CPU 2006 and SPEC OMP 2001 benchmark suites indicates that this idea has promise. Our results show that the predicted sample size is generally close to the best sampling rate and effectively avoids bad choices. Most importantly, adaptive sample rate selection is shown to perform better than a single selected rate in most cases. (en_US)
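The trade-off the abstract describes can be sketched numerically. This is a minimal illustration, not the dissertation's actual model: it assumes sampling (measurement) error shrinks like 1/sqrt(n) with sample count n while perturbation error accumulates roughly linearly in n, with made-up constants A and B, and finds the sample size where the combined error is smallest.

```python
import math

# Illustrative constants (assumptions, not values from the dissertation):
A = 10.0   # scale of the statistical sampling error term
B = 0.001  # per-sample perturbation cost

def total_error(n: int) -> float:
    """Combined error under the assumed model: A/sqrt(n) + B*n."""
    sampling_error = A / math.sqrt(n)
    perturbation_error = B * n
    return sampling_error + perturbation_error

# Brute-force scan for the sample count minimizing the combined error.
best_n = min(range(1, 10_001), key=total_error)

# Closed form for this particular model:
# d/dn [A/sqrt(n) + B*n] = -A/(2*n**1.5) + B = 0  =>  n* = (A/(2*B))**(2/3)
analytic_n = (A / (2 * B)) ** (2 / 3)

print(best_n, analytic_n)
```

The qualitative shape is the point: below the minimizer, one more sample buys more accuracy than it costs in perturbation; above it, the reverse, which is why a single fixed rate cannot be best for all programs.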
dc.language.iso: en (en_US)
dc.title: Analyzing the Combined Effects of Measurement Error and Perturbation Error on Performance Measurement (en_US)
dc.type: Dissertation (en_US)
dc.contributor.publisher: Digital Repository at the University of Maryland (en_US)
dc.contributor.publisher: University of Maryland (College Park, Md.) (en_US)
dc.contributor.department: Computer Science (en_US)
dc.subject.pqcontrolled: Computer science (en_US)
dc.subject.pquncontrolled: Performance Measurement (en_US)
dc.subject.pquncontrolled: Perturbation (en_US)

