Exploring Uncertainty Quantification Integrated with Neural Networks for Physical Modelling
The intersection of machine learning and physical modeling has witnessed significant growth in recent years [1]. Simultaneously, Uncertainty Quantification (UQ) has emerged as a powerful tool for modeling uncertainties, enabling probabilistic predictions and quantifying confidence in both experimental and theoretical data. By integrating UQ with neural networks, we can enhance the reliability and interpretability of predictions, particularly in complex physical systems: the computations become more trustworthy, and the uncertainties inherent in them are made explicit.
This internship aims to explore the application of UQ methods, such as Bayesian Neural Networks [2,3], Gaussian Processes, and Mixture Density Networks, to numerical computations in complex physics modeling. The primary objective is to ensure that machine learning models not only provide accurate representations of the system but also quantify uncertainties, enabling the correction of potential errors. Using these UQ methods, is it possible to predict the bias introduced in a numerical computation of a physical system when an exact computation is replaced by a machine learning approximation?
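To make the question concrete, the sketch below (a minimal illustration, not the internship's prescribed setup) fits a Gaussian Process to the discrepancy between a toy "exact" observable and a cheap surrogate, then uses the GP's predictive standard deviation to attach an uncertainty to the corrected prediction. The functions exact_observable and surrogate are illustrative placeholders for an expensive physics computation and its machine learning approximation.

```python
# Minimal sketch: model the bias between an "exact" computation and a cheap
# surrogate with a Gaussian Process, so the surrogate's output can be corrected
# with a quantified uncertainty. The toy observable and surrogate below are
# illustrative placeholders, not part of the actual project.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def exact_observable(x):          # stand-in for an expensive exact computation
    return np.sin(3.0 * x) + 0.3 * x**2

def surrogate(x):                 # stand-in for a trained ML approximation
    return np.sin(3.0 * x)        # systematically misses the quadratic term

# A few points where both the exact result and the surrogate are available
x_train = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
bias_train = exact_observable(x_train).ravel() - surrogate(x_train).ravel()

# GP regression on the bias; the WhiteKernel term absorbs residual noise
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(x_train, bias_train)

# Correct the surrogate at new points and report a predictive uncertainty
x_new = np.linspace(-2.5, 2.5, 5).reshape(-1, 1)
bias_mean, bias_std = gp.predict(x_new, return_std=True)
corrected = surrogate(x_new).ravel() + bias_mean
for xi, ci, si in zip(x_new.ravel(), corrected, bias_std):
    print(f"x = {xi:+.2f}  corrected prediction = {ci:+.3f}  +/- {si:.3f}")
```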
The student will help optimize and apply this hybrid approach, with the goal of providing deeper insights into complex physical models. More specifically, he/she will generate data from toy model systems (e.g., the Ising model or a scalar field theory) using MCMC sampling techniques; design and train machine learning models to predict physical variables of these statistical systems; and apply UQ techniques, such as Bayesian inference and Gaussian Processes, to quantify the uncertainties in the models' predictions.
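As a rough illustration of the data-generation step, the sketch below samples a 2D Ising model with single-spin-flip Metropolis updates and stores spin configurations together with their magnetization, yielding a toy dataset on which a neural network and UQ methods could later be trained. The lattice size, temperature, and sweep counts are arbitrary choices made for the example.

```python
# Minimal sketch of the data-generation step: Metropolis MCMC sampling of a
# 2D Ising model, recording spin configurations and their magnetization as a
# toy dataset for a downstream regression model.
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.4                      # lattice size and inverse temperature (illustrative)
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins, beta):
    """One Metropolis sweep: L*L single-spin-flip proposals."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        # Sum of the four nearest neighbours with periodic boundaries
        nn = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nn    # energy change if spin (i, j) is flipped
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

# Thermalize, then collect roughly decorrelated samples with their magnetization
for _ in range(200):
    metropolis_sweep(spins, beta)

configs, magnetizations = [], []
for _ in range(100):
    for _ in range(10):                # a few sweeps between stored samples
        metropolis_sweep(spins, beta)
    configs.append(spins.copy())
    magnetizations.append(np.abs(spins.mean()))

X = np.array(configs, dtype=np.float32)          # inputs for an ML model
y = np.array(magnetizations, dtype=np.float32)   # target physical variable
print(X.shape, y.shape, y.mean())
```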
By the end of the internship, the intern will contribute to the development of more robust and reliable machine learning models for complex physical systems, advancing the state of the art in this field. These results could benefit fields such as materials science, where costly computations are replaced by machine learning approximations [4].
[1] M. J. Karcz, E. Kawasaki et al., “Semi-supervised generative approach to chemical disorder: application to point-defect formation in uranium–plutonium mixed oxides,” Phys. Chem. Chem. Phys., vol. 25, no. 34, pp. 23069–23080, Aug. 2023, doi: 10.1039/D3CP02790B
[2] E. Kawasaki, M. Holzmann, and L. Adu-Gyamfi, “Data Subsampling for Bayesian Neural Networks,” Sep. 2024, arXiv:2210.09141
[3] H. Wang, E. Kawasaki, G. Damblin, and G. Daniel, “Multivariate Bayesian Last Layer for Regression: Uncertainty Quantification and Disentanglement,” May 2024, arXiv:2405.01761
[4] S. Watanabe et al., “High-dimensional neural network atomic potentials for examining energy materials: some recent simulations,” J. Phys. Energy, vol. 3, no. 1, p. 012003, Jan. 2021, doi: 10.1088/2515-7655/abc7f3
The candidate should have a strong theoretical and numerical interest and take particular care with software development. An educational background in statistics, machine learning (in particular neural networks), and/or statistical physics is required. Programming experience in Python is a distinct advantage. Knowledge of the PyTorch library is a plus but not compulsory. An ideal candidate would be eager to explore Uncertainty Quantification techniques in depth and understand their role in physical modeling.