Chapter 1
dipole_charge modifier with multiple backends
When training a DPLR model, the dipole_charge modifier should be set. However, the calculation of the Ewald reciprocal interaction is implemented only on CPU, which can cost considerable time in GPU-based training tasks. Here we implement two new calculators (i.e., jax and torch), based on the dmff and torch-admp packages respectively, which allow GPU-accelerated calculation of the Ewald reciprocal interaction. The usage of these calculators is shown below:
"modifier": {
"type": "dipole_charge_beta",
"model_name": "dw.pb",
"model_charge_map": [
-8
],
"sys_charge_map": [
6,
1
],
"ewald_h": 1.00,
"ewald_beta": 0.40,
"ewald_calculator": "torch"
},
The type should be set to dipole_charge_beta to use the new calculators. The ewald_calculator option can be set to naive, jax, or torch to select the original CPU-based calculator, the jax-based calculator, or the torch-based calculator, respectively.
Note: virial calculation has not been implemented in the torch-/jax-based calculators yet.
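All three backends evaluate the same quantity: the reciprocal-space part of the Ewald sum. As a point of reference, here is a minimal NumPy sketch for a cubic box in Gaussian units. It is illustrative only: the function name, unit convention, and brute-force k-space loop are our own, and the actual calculators use optimized (GPU) kernels; `beta` plays the role of the ewald_beta splitting parameter above.

```python
import numpy as np

def ewald_reciprocal_energy(positions, charges, box, beta, kmax=8):
    """Brute-force reciprocal-space Ewald energy for a cubic box of side `box`.

    E_rec = (2*pi/V) * sum_{k != 0} exp(-|k|^2 / (4*beta^2)) / |k|^2 * |S(k)|^2,
    where S(k) = sum_i q_i * exp(i k . r_i) is the structure factor.
    """
    volume = box ** 3
    energy = 0.0
    for nx in range(-kmax, kmax + 1):
        for ny in range(-kmax, kmax + 1):
            for nz in range(-kmax, kmax + 1):
                if nx == 0 and ny == 0 and nz == 0:
                    continue  # the k = 0 term is excluded for a neutral cell
                k = (2.0 * np.pi / box) * np.array([nx, ny, nz])
                k2 = k @ k
                # structure factor S(k) over all point charges
                s_k = np.sum(charges * np.exp(1j * positions @ k))
                energy += np.exp(-k2 / (4.0 * beta ** 2)) / k2 * np.abs(s_k) ** 2
    return 2.0 * np.pi / volume * energy
```

A quick sanity check: since only |S(k)|^2 enters the sum, the energy is invariant under a uniform translation of all positions, and it scales quadratically when all charges are scaled.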
dipole_charge_electrode modifier for polarizable electrode
verlet/split for MLP
Introduction
[placeholder]
Usage
Lammps input:
# use original kspace_style (compatible with verlet/split) rather than pppm/dplr
kspace_style pppm 1e-5
# ...
# add and use run_style verlet/split/dplr before the run command
add_run_style verlet/split/dplr
run_style verlet/split/dplr
timestep 0.0005
run 100
Run Lammps with two partitions:
# add the plugin path to the environment variable
export LAMMPS_PLUGIN_PATH=/path/to/plugin:$LAMMPS_PLUGIN_PATH
# mpirun -np [total np] lmp_mpi -p [np for rspace] [np for kspace] -i input.lmp
mpirun -np 5 lmp_mpi -p 1 4 -i input.lmp
Note: the number of processors in the kspace partition should be equal to, or an integer multiple of, the number of processors in the rspace partition.
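The partition constraint above can be checked with a small helper before launching a job. This is a hypothetical convenience script, not part of the plugin; the function name and error message are our own.

```python
def split_partition(total_np, np_rspace):
    """Split `total_np` MPI ranks into (rspace, kspace) partitions.

    Validates that the kspace partition size equals, or is an integer
    multiple of, the rspace partition size, as required above.
    """
    np_kspace = total_np - np_rspace
    if np_kspace < np_rspace or np_kspace % np_rspace != 0:
        raise ValueError(
            f"kspace partition ({np_kspace}) must be equal to or an integer "
            f"multiple of the rspace partition ({np_rspace})"
        )
    return np_rspace, np_kspace
```

For example, 5 total ranks with 1 rspace rank gives a 1/4 split, matching the mpirun example above, while 5 total ranks with 2 rspace ranks is rejected (3 is not a multiple of 2).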
Code structure
- Red color indicates new functions in verlet/split/dplr.
- Blue color highlights GPU-intensive processes and loop flow.