3.3.2. Choose Integrator
In TorchFSM, the time integration relies on the Exponential Time Differencing Runge-Kutta (ETDRK) method [1]. To use the ETDRK integrator, you can use the set_integrator method of the Operator class:
from torchfsm.operator import Laplacian
from torchfsm.integrator import ETDRKIntegrator
# Build the operator and attach the second-order ETDRK integrator.
operator = Laplacian()
operator.set_integrator(ETDRKIntegrator.ETDRK2)
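As a brief sketch of the idea (see [1] for the full derivation; this is the generic textbook form, not necessarily the exact formulation used internally): for a semi-discretized PDE $\partial_t u = \mathcal{L}u + \mathcal{N}(u)$ with linear operator $\mathcal{L}$ and nonlinear term $\mathcal{N}$, the first-order ETD scheme integrates the linear part exactly and freezes the nonlinear term over the step,

$$u_{n+1} = e^{\Delta t\,\mathcal{L}}\,u_n + \mathcal{L}^{-1}\left(e^{\Delta t\,\mathcal{L}} - I\right)\mathcal{N}(u_n),$$

while higher-order ETDRK variants replace the frozen nonlinear term with Runge-Kutta-style stages.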
TorchFSM provides integrators from ETDRK0 to ETDRK2, where the number indicates the order of the method. Higher-order methods increase the stability of the simulation (allowing larger time steps), but the computational cost also increases. Note that the ETDRK0 method only supports linear systems and is the default integrator for them; for nonlinear systems, however, the default integrator is neither ETDRK1 nor ETDRK2 (see below).
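You can also pick the order explicitly; a minimal sketch using the members named above (the comments restate the rule of thumb from this section):
from torchfsm.integrator import ETDRKIntegrator
# ETDRK0 integrates the linear part exactly and is only valid for purely linear systems.
operator.set_integrator(ETDRKIntegrator.ETDRK0)
# ETDRK1 and ETDRK2 handle nonlinear terms; the higher order tolerates larger
# time steps at a higher per-step cost.
operator.set_integrator(ETDRKIntegrator.ETDRK1)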
In our ETDRK implementation, the highest-order method is ETDRK2, because ETDRK schemes of order higher than 2 are not numerically stable. To overcome this limitation, we implement the "stable ETDRK" method from [2]. In TorchFSM, the default integrator for nonlinear systems is the fourth-order stable ETDRK method, i.e., SETDRKIntegrator.SETDRK4.
However, there is no free lunch in numerical methods. The stable ETDRK method is more computationally expensive, especially when initializing the integrator. The SETDRKIntegrator takes an integer parameter n_integration_points; during the initialization of the integrator, temporary tensors roughly n_integration_points times the size of the variable are created. The default value of n_integration_points is 16, which can easily exhaust GPU memory, especially when running large 3D simulations. You can reduce n_integration_points to lower the memory usage, but this will also reduce the stability of the simulation:
from torchfsm.integrator import SETDRKIntegrator
# Fewer integration points means smaller temporary tensors but less stability headroom.
operator.set_integrator(SETDRKIntegrator.SETDRK4, n_integration_points=8)
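To get a rough feeling for the numbers, here is a back-of-the-envelope estimate; the grid size and the complex64 dtype (8 bytes per value) are illustrative assumptions, only the n_integration_points scaling comes from the description above:
# Hypothetical example: one 256^3 field stored as complex64 (dtype is an assumption).
n_integration_points = 16
field_numel = 256 ** 3
bytes_per_value = 8
# Temporary memory during initialization scales with n_integration_points times the variable size.
approx_gib = n_integration_points * field_numel * bytes_per_value / 1024 ** 3
print(f"~{approx_gib:.1f} GiB of temporary memory")  # ~2.0 GiB in this example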
In TorchFSM, we provide another way to avoid the memory issue of the stable ETDRK method: the temporary tensors can be constructed on the CPU instead of the GPU and moved to the GPU afterwards. Since the temporary tensors are only used during the initialization of the integrator, this does not affect the performance of the simulation. You can set the cpu_cached parameter to True when setting the SETDRKIntegrator:
# Build the temporary tensors on the CPU first; they are moved to the GPU after initialization.
operator.set_integrator(SETDRKIntegrator.SETDRK4, cpu_cached=True)
Since the integrator is initialized on the CPU when cpu_cached is set to True, the initialization may be slow, and it can still run out of memory on the CPU side if the host memory is insufficient.
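If both GPU and CPU memory are tight, the two options can be combined; this simply passes both parameters shown above together (whether the reduced number of integration points is still stable enough depends on your problem):
# Combine fewer integration points with CPU-side initialization to reduce peak memory.
operator.set_integrator(SETDRKIntegrator.SETDRK4, n_integration_points=8, cpu_cached=True)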
In TorchFSM, we also provide RKIntegrator for the classical Runge-Kutta methods. However, we do not recommend them for the time integration of PDEs. If you really face memory issues but still want to run the simulation, you may try the simplest Euler method with a small time step:
from torchfsm.integrator import RKIntegrator
# Explicit Euler: lowest memory footprint, but requires a small time step for stability.
operator.set_integrator(RKIntegrator.Euler)
References
We will not discuss the details of the ETDRK method in this notebook. If you are interested in this topic, you can refer to the following papers:
[1] Cox, Steven M., and Paul C. Matthews. "Exponential time differencing for stiff systems." Journal of Computational Physics 176.2 (2002): 430-455.
[2] Kassam, Aly-Khan, and Lloyd N. Trefethen. "Fourth-order time-stepping for stiff PDEs." SIAM Journal on Scientific Computing 26.4 (2005): 1214-1233.