As the role of computational models in engineering has grown, the accuracy of computational results has become a major concern for engineering decision-makers. To address this growing concern about the predictive capability of computational models, this dissertation proposes a generic model validation framework with four research objectives: Objective 1, to develop a hierarchical framework for statistical model validation that is applicable to various computational models of engineered products (or systems); Objective 2, to advance a model calibration technique that improves the predictive capability of computational models in a statistical manner; Objective 3, to build a validity-check engine for a computational model with limited experimental data; and Objective 4, to demonstrate the feasibility and effectiveness of the proposed validation framework on five engineering problems requiring different experimental resources and predictive computational models: (a) a cellular phone, (b) a tire tread block, (c) the thermal challenge problem, (d) a constrained-layer damping structure, and (e) an energy harvesting device.

The validation framework consists of three activities: validation planning (top-down), validation execution (bottom-up), and virtual qualification. Validation planning requires knowledge of the physics-of-failure (PoF) mechanisms and/or system performances of interest. This knowledge makes it possible to decompose an engineered system into subsystems and/or components so that the PoF mechanisms or system performances of interest can be decomposed accordingly. Taking a top-down approach, the planning activity identifies vital tests and predictive computational models, each of which contains both known and unknown model input variables. Validation execution, in contrast, takes a bottom-up approach, improving the predictive capability of the computational models from the lowest hierarchical level to the highest using a statistical calibration technique. This technique compares experimental results with those predicted by the computational model and determines the statistical distributions of the unknown random variables that maximize the likelihood function. As the predictive capability of a computational model at a lower hierarchical level improves, the enhanced model can be fused into the model at the next higher level, where the validation execution activity then continues. After statistical model calibration, the validity of the calibrated model must be assessed; a hypothesis-testing method for validity checking was therefore developed to measure and evaluate the degree of mismatch between predicted and observed results while accounting for the uncertainty caused by limited experimental data. Once the model is deemed valid, virtual qualification can be carried out in a statistical sense for new product development.
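The calibration and validity-check steps described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the dissertation's implementation: it treats the unknown model input as a normal random variable E ~ N(mu, sigma), chooses (mu, sigma) to maximize the likelihood of a small set of observations under the Monte Carlo-propagated model output, and then performs a simple two-sample hypothesis test (here a Kolmogorov-Smirnov test, one possible stand-in for the dissertation's validity-check method) between calibrated predictions and the limited experimental data. The toy model and all numbers are invented for illustration.

```python
# Hypothetical sketch of statistical model calibration by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, ks_2samp

rng = np.random.default_rng(42)

def model(E):
    # Toy computational model: response inversely proportional to a
    # stiffness-like input E (stand-in for a real simulation code).
    return 1000.0 / E

# Synthetic "experimental" data standing in for scarce physical tests.
E_true = rng.normal(200.0, 10.0, size=15)
observed = model(E_true) + rng.normal(0.0, 0.05, size=15)

# Common random numbers keep the Monte Carlo objective deterministic/smooth.
z = rng.standard_normal(5000)

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                 # enforce sigma > 0
    E = np.maximum(mu + sigma * z, 1e-3)      # propagate input uncertainty
    y = model(E)
    m, s = y.mean(), y.std(ddof=1)            # Gaussian approx. of output
    return -norm.logpdf(observed, m, s).sum()

res = minimize(neg_log_likelihood, x0=[150.0, np.log(30.0)],
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Validity check in the spirit of the hypothesis test: compare the
# calibrated prediction sample against the limited observations.
y_cal = model(np.maximum(mu_hat + sigma_hat * z, 1e-3))
stat, p_value = ks_2samp(y_cal, observed)
print(mu_hat, sigma_hat, p_value)
```

In this sketch the likelihood is evaluated under a Gaussian approximation of the propagated output; the framework's actual calibration and validity-check formulations are more general, but the structure, propagate, compare, maximize likelihood, then test the calibrated model against held-out data, mirrors the bottom-up execution activity.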
Through five case studies, this dissertation demonstrates that the validation framework is applicable to diverse classes of engineering problems: it improves the predictive capability of computational models, assesses their fidelity, and assists rational decision-making on new design alternatives in the product development process.