Integrated Field Inversion and Machine Learning With Embedded Neural Network Training for Turbulence Modeling

Date

2019

Abstract

A rich set of experimental and high-fidelity simulation data is available to improve Reynolds-Averaged Navier-Stokes (RANS) models of turbulent flow. In practice, using this data is difficult, because measured quantities cannot be used to improve the models directly. The Field Inversion and Machine Learning (FIML) approach addresses this challenge through an inference step, posed as an inverse problem, which treats inconsistencies between the models and the data in a consistent manner. However, a separate learning algorithm cannot always learn accurately from the data generated by the inverse problem. This thesis proposes and applies, for the first time, two new methods of incorporating higher-fidelity data into RANS turbulence models via machine learning. Both build on the FIML framework by performing the learning during the inference step, rather than treating the inference and learning steps separately as in the classic FIML approach.
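For context, the classic two-step FIML procedure is often written in the generic form below; the notation (corrective field beta, observation operator h, features eta, network f_NN, weight lambda) is a standard hedged sketch, not necessarily the thesis's exact formulation.

```latex
% Step 1 (field inversion): for each case, infer a spatially varying
% corrective field \beta(x) scaling a model term by solving
\[
  \beta^{*} \;=\; \arg\min_{\beta}\;
    \big\lVert d - h\!\big(q(\beta)\big) \big\rVert^{2}
    \;+\; \lambda\,\big\lVert \beta - 1 \big\rVert^{2},
\]
% where q(\beta) is the RANS state, h(\cdot) maps the state to the
% measured quantities, d is the higher-fidelity data, and \lambda is a
% regularization weight.
%
% Step 2 (machine learning): fit \beta^{*} as a function of local flow
% features \eta(q), e.g. with a neural network of weights w:
\[
  \beta(x) \;\approx\; f_{\mathrm{NN}}\!\big(\eta(q(x));\, w\big).
\]
% Because the two steps are decoupled, \beta^{*} need not be
% representable by f_{\mathrm{NN}}, which is the gap noted above.
```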

The first new approach embeds neural-network learning into the RANS solver, and the second trains the weights of the neural network directly. Additionally, for the first time, the inverse problem can incorporate higher-fidelity data from multiple cases simultaneously, promising to improve the generalization of the augmented model. The two new methods and the classic approach are demonstrated on a simple model problem as well as a number of challenging RANS cases. For a 2D airfoil case, all three FIML augmentations are shown to improve predictions, with the new methods demonstrating increased regularization. Additionally, a model augmentation is generated by considering seven angles of attack of an airfoil in the inference step, and this augmentation is shown to improve predictions for a different airfoil. Further cases are considered, including a transonic shock-wave/boundary-layer interaction and the NASA wall-mounted hump; in all cases, the inference is shown to improve predictions. For the first time, the inverse problem accounts for the limitations of the learning procedure, guaranteeing that the model discrepancy is optimal for the chosen learning algorithm. The results in this thesis demonstrate that learning during the inference step provides additional regularization and guarantees that the inference produces a learnable model discrepancy.
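To make the "learning during inference" idea concrete, the following is a minimal sketch of optimizing the network weights directly against data from multiple cases through a differentiable stand-in for the flow solver. It is written in JAX under stated assumptions: the toy one-dimensional "solver", the function names (beta_net, toy_solver), the scalar feature eta, and the synthetic data are all illustrative, not the thesis's implementation.

```python
# Embedded-training sketch: instead of inferring a corrective field beta(x)
# and fitting a network to it afterwards, the network weights are optimized
# directly inside the inverse problem, summed over several cases.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 8, 1)):
    """Small fully connected network mapping a feature eta to beta."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:]))]

def beta_net(params, eta):
    """Corrective multiplier beta = 1 + NN(eta)."""
    x = eta
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return 1.0 + (x @ W + b).squeeze(-1)

def toy_solver(params, eta):
    """Stand-in for a RANS solve: an imperfect baseline model scaled by beta."""
    baseline = jnp.sin(eta.squeeze(-1))
    return beta_net(params, eta) * baseline

def loss(params, cases):
    """Data mismatch summed over all cases (multi-case inference)."""
    return sum(jnp.mean((toy_solver(params, eta) - d) ** 2) for eta, d in cases)

# Synthetic "higher-fidelity data" for two cases with a hidden correction.
etas = [jnp.linspace(0.1, 3.0, 50)[:, None], jnp.linspace(0.5, 2.5, 40)[:, None]]
cases = [(eta, (1.0 + 0.3 * eta.squeeze(-1)) * jnp.sin(eta.squeeze(-1)))
         for eta in etas]

params = init_params(jax.random.PRNGKey(0))
grad_fn = jax.jit(jax.grad(loss))
for _ in range(2000):  # plain gradient descent on the network weights
    grads = grad_fn(params, cases)
    params = [(W - 1e-2 * gW, b - 1e-2 * gb)
              for (W, b), (gW, gb) in zip(params, grads)]

print("final mismatch:", loss(params, cases))
```

Because the weights themselves are the inversion variables, any inferred correction is by construction representable by the network, which is the learnability guarantee described above.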
