Researcher in Deep Learning for Scientific Simulations.
Qiang Liu | 刘 强
I am currently a Ph.D. student at the Technical University of Munich, supervised by Prof. Nils Thuerey. My research focuses on deep learning methods for partial differential equations (PDEs) and scientific simulation.
A central theme of my work is exploring the potential of deep generative models—such as Denoising Diffusion Probabilistic Models (DDPMs) and flow matching—for physical applications. In particular, I am interested in incorporating physical priors and domain knowledge into the training of generative models to improve their reliability, generalization, and fidelity when modeling complex physical systems.
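As a minimal illustration of one of the generative approaches mentioned above, the conditional flow-matching objective can be sketched as follows. This is a simplified, generic formulation (the interpolation path, time sampling, and pairing strategy are assumptions, not tied to any specific paper of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x1, rng):
    """Conditional flow-matching loss on a batch of data samples x1."""
    x0 = rng.standard_normal(x1.shape)      # noise samples
    t = rng.uniform(size=(x1.shape[0], 1))  # random times in [0, 1]
    xt = (1 - t) * x0 + t * x1              # linear interpolation path
    target_v = x1 - x0                      # conditional velocity target
    pred_v = model(xt, t)
    return np.mean((pred_v - target_v) ** 2)

# trivial stand-in "model" that predicts zero velocity, just to run the loss
x1 = rng.standard_normal((32, 8))
loss = flow_matching_loss(lambda xt, t: np.zeros_like(xt), x1, rng)
assert np.isfinite(loss) and loss > 0.0
```

In practice `model` would be a neural network trained by gradient descent on this loss; new samples are then generated by integrating the learned velocity field from noise at t=0 to data at t=1.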
More recently, I have been investigating foundation models for PDEs, aiming to develop large-scale pretrained models trained across diverse families of equations that can be efficiently adapted to downstream tasks.
In addition to my research, I am the main developer of TorchFSM, a PyTorch-based library that implements Fourier spectral methods for solving PDEs.
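The core idea behind a Fourier spectral solver can be shown in a few lines. The sketch below is a hypothetical illustration of the method, not TorchFSM's actual API: on a periodic domain, spatial derivatives become multiplications in Fourier space, so the 1D heat equation u_t = ν u_xx can be integrated exactly mode by mode:

```python
import numpy as np

def spectral_heat_step(u, dt, nu, L=2 * np.pi):
    """Advance the periodic 1D heat equation u_t = nu * u_xx by dt."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # integer wavenumbers on [0, L)
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(-nu * k**2 * dt)            # each mode decays analytically
    return np.fft.ifft(u_hat).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u0 = np.sin(x)
u1 = spectral_heat_step(u0, dt=0.1, nu=1.0)
# the exact solution for this initial condition is exp(-nu*t) * sin(x)
assert np.allclose(u1, np.exp(-0.1) * np.sin(x), atol=1e-10)
```

For nonlinear PDEs, spectral methods of this kind typically evaluate nonlinear terms in physical space and transform back, combined with a time integrator; the linear diffusive part can still be treated exactly as above.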
Leveraging neural networks as surrogate models for turbulence simulation is a topic of growing interest. At the same time, embodying the inherent uncertainty of simulations in the predictions of surrogate models remains very challenging. The present study makes a first attempt to use denoising diffusion probabilistic models (DDPMs) to train an uncertainty-aware surrogate model for turbulence simulations. Due to its prevalence, the simulation of flows around airfoils with various shapes, Reynolds numbers, and angles of attack is chosen as the learning objective. Our results show that DDPMs can successfully capture the whole distribution of solutions and, as a consequence, accurately estimate the uncertainty of the simulations. The performance of DDPMs is also compared with various baselines in the form of Bayesian neural networks and heteroscedastic models. Experiments demonstrate that DDPMs outperform the other methods regarding a variety of accuracy metrics. In addition, DDPMs offer the advantage of providing access to the complete distribution of uncertainties rather than a single set of parameters. As such, they can yield realistic and detailed samples from the distribution of solutions.
@article{airfoilDDPM2023,
  journal       = {AIAA Journal},
  title         = {Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models},
  author        = {Liu, Qiang and Thuerey, Nils},
  year          = {2024},
  eprint        = {2312.05320},
  archiveprefix = {arXiv},
  primaryclass  = {physics.flu-dyn},
  arxiv         = {https://arxiv.org/abs/2312.05320},
  url           = {https://arc.aiaa.org/doi/10.2514/1.J063440},
  doi           = {10.2514/1.J063440},
  volume        = {62},
  issue         = {8},
  pages         = {2192--2933},
}
ICLR
ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks
The loss functions of many learning problems contain multiple additive terms that can disagree and yield conflicting update directions. For Physics-Informed Neural Networks (PINNs), loss terms on initial/boundary conditions and physics equations are particularly interesting as they are well-established as highly difficult tasks. To improve learning the challenging multi-objective task posed by PINNs, we propose the ConFIG method, which provides conflict-free updates by ensuring a positive dot product between the final update and each loss-specific gradient. It also maintains consistent optimization rates for all loss terms and dynamically adjusts gradient magnitudes based on conflict levels. We additionally leverage momentum to accelerate optimizations by alternating the back-propagation of different loss terms. The proposed method is evaluated across a range of challenging PINN scenarios, consistently showing superior performance and runtime compared to baseline methods. We also test the proposed method in a classic multi-task benchmark, where the ConFIG method likewise exhibits a highly promising performance.
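The conflict-free update idea can be sketched as follows. This is a simplified rendition of the approach described above, not the paper's exact formulation: given per-loss gradients, find a direction whose dot product with every normalized gradient is equal and positive, then rescale it by the summed projections of the original gradients.

```python
import numpy as np

def config_update(grads):
    """Combine loss-specific gradients into a conflict-free update (sketch)."""
    G = np.stack(grads)                                  # (m, d) gradients
    G_unit = G / np.linalg.norm(G, axis=1, keepdims=True)
    # direction with equal alignment to all unit gradients: solve G_unit @ d = 1
    d = np.linalg.pinv(G_unit) @ np.ones(len(grads))
    d_unit = d / np.linalg.norm(d)
    # rescale by the summed projections of the original gradients
    return (G @ d_unit).sum() * d_unit

g1 = np.array([1.0, 0.0])
g2 = np.array([-0.5, 1.0])        # conflicts with g1 (negative dot product)
g = config_update([g1, g2])
# the combined update has a positive dot product with both loss gradients
assert g @ g1 > 0 and g @ g2 > 0
```

By contrast, naively summing `g1 + g2` can still point against one of the loss gradients when conflicts are strong; enforcing positive dot products guarantees every loss decreases to first order.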
@article{ConFIG2024,
  journal       = {ICLR 2025 (Spotlight)},
  title         = {ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks},
  author        = {Liu, Qiang and Chu, Mengyu and Thuerey, Nils},
  year          = {2025},
  eprint        = {2408.11104},
  archiveprefix = {arXiv},
  primaryclass  = {cs.LG},
  arxiv         = {https://arxiv.org/abs/2408.11104},
  url           = {https://arxiv.org/abs/2408.11104},
  doi           = {10.48550/arXiv.2408.11104},
}
ICML
PDE-Transformer: Efficient and Versatile Transformers for Physics Simulations
Benjamin Holzschuh, Qiang Liu, Georg Kohl, and Nils Thuerey
We introduce PDE-Transformer, an improved transformer-based architecture for surrogate modeling of physics simulations on regular grids. We combine recent architectural improvements of diffusion transformers with adjustments specific for large-scale simulations to yield a more scalable and versatile general-purpose transformer architecture, which can be used as the backbone for building large-scale foundation models in physical sciences. We demonstrate that our proposed architecture outperforms state-of-the-art transformer architectures for computer vision on a large dataset of 16 different types of PDEs. We propose to embed different physical channels individually as spatio-temporal tokens, which interact via channel-wise self-attention. This helps to maintain a consistent information density of tokens when learning multiple types of PDEs simultaneously. We demonstrate that our pre-trained models achieve improved performance on several challenging downstream tasks compared to training from scratch and also beat other foundation model architectures for physics simulations.
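The channel-wise tokenization described above can be sketched as follows. The shapes and attention layout here are illustrative assumptions, not the paper's exact architecture: each physical channel is embedded into its own set of spatio-temporal tokens, and self-attention then mixes information across channels at each token position.

```python
import numpy as np

def channelwise_attention(tokens):
    """Self-attention over the channel axis; tokens: (batch, channels, num_tokens, dim)."""
    b, c, n, d = tokens.shape
    q = k = v = tokens.transpose(0, 2, 1, 3)         # (b, n, c, d): attend over channels
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(d)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)                 # softmax over channels
    out = w @ v                                      # (b, n, c, d)
    return out.transpose(0, 2, 1, 3)                 # back to (b, c, n, d)

x = np.random.default_rng(0).standard_normal((2, 3, 16, 8))
y = channelwise_attention(x)
assert y.shape == x.shape
```

Keeping one token stream per channel means the per-token information density stays roughly constant regardless of how many physical fields a given PDE has, which is what makes joint training across heterogeneous equation types practical.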
@article{PDETransformer2025,
  journal       = {ICML 2025},
  title         = {PDE-Transformer: Efficient and Versatile Transformers for Physics Simulations},
  author        = {Holzschuh, Benjamin and Liu, Qiang and Kohl, Georg and Thuerey, Nils},
  year          = {2025},
  eprint        = {2505.24717},
  archiveprefix = {arXiv},
  primaryclass  = {cs.LG},
  arxiv         = {https://arxiv.org/abs/2505.24717},
  url           = {https://arxiv.org/abs/2505.24717},
  doi           = {10.48550/arXiv.2505.24717},
}