Author: Lam, Michael Oneil
Title: Automated Floating-Point Precision Analysis
Type: Dissertation
Language: en
Subjects: Computer science; autotuning; binary editing; floating-point; high-performance computing; precision; program analysis

Abstract: As scientific computation continues to scale upward, correct and efficient use of floating-point arithmetic is increasingly important. Users of floating-point arithmetic encounter many problems, including rounding error, cancellation, and a tradeoff between performance and accuracy. This dissertation addresses these issues by introducing techniques for automated floating-point precision analysis. The contributions include a software framework that enables floating-point program analysis at the binary level, as well as specific techniques for cancellation detection, mixed-precision configuration, and reduced-precision sensitivity analysis. This work demonstrates that automated, practical techniques can provide insights into floating-point behavior as well as guidance toward acceptable reductions in precision. The tools and techniques in this dissertation represent novel contributions to the fields of high-performance computing and program analysis, and serve as the first major step toward the larger vision of automated floating-point precision and performance tuning.