
n2p2 Usage Guide

Short Introduction

This repository provides ready-to-use software for high-dimensional neural network potentials in computational physics and chemistry.

The following link is for your information:

Warning

This page reflects the author's own experience and understanding. If you find any mistakes or unclear parts, please report an issue.

Basic Principle

The n2p2 software is based on neural network fitting. For details on neural networks (NN), please refer to [here].

The key contribution of Behler and Parrinello is to build a link between the potential energy surface (PES) and the NN.

First, they decompose the total energy into atomic energies \(E^{atom}\). \(E^{atom}\) is not the energy of a neutral atom in vacuum, as seen in quantum chemistry textbooks; it is simply the decomposition of the total energy into the contribution of every atom, as expressed by the following equation: $$ E_{tot}=\sum_i E_i^{atom} $$ where \(i\) runs over the atoms in the system.
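In practice, each \(E_i^{atom}\) is predicted by an element-specific feed-forward NN from symmetry-function descriptors \(\mathbf{G}_i\) that encode the local environment of atom \(i\) within a cutoff radius, so schematically $$ E_i^{atom} = \mathrm{NN}_{Z_i}(\mathbf{G}_i), \qquad E_{tot} = \sum_i \mathrm{NN}_{Z_i}(\mathbf{G}_i) $$ where \(Z_i\) is the element of atom \(i\). This is the functional form that nnp-train fits.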

Usage on the Cluster

n2p2 is installed on Cluster51. Use the command module load n2p2/2.0.0 to load it. After that, all executable binaries of n2p2 are available. An LSF script is provided at /share/base/script/n2p2.lsf. An explanation of the LSF script is given here.

Training Procedure

Overview

The core training tool in n2p2 is nnp-train. This command is available after loading the module n2p2/2.0.0. Enter the directory with the prepared files and type nnp-train; that is all it takes. For an MPI run, type mpirun nnp-train instead (a short workflow sketch is given below the list). The input files for nnp-train include:

  • input.nn: input setup for training
  • input.data: the training data set.
  • scaling.data: scaling data from data set (you will obtain this from nnp-scaling)

Example input files are in the GitHub repository under <n2p2 root>/examples/nnp-train.
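For orientation, here is a minimal Python sketch of the order of operations, assuming the n2p2 binaries are on the PATH (e.g. after module load n2p2/2.0.0). The directory name, process count, and the bin-count argument passed to nnp-scaling are illustrative assumptions; check the n2p2 documentation for the exact command-line arguments.

 import subprocess

 # hypothetical directory that already contains input.nn and input.data
 workdir = "./training_run"

 # step 1: generate scaling.data from the data set
 # (the trailing number is an assumed symmetry-function histogram bin count)
 subprocess.run(["mpirun", "-np", "4", "nnp-scaling", "100"], cwd=workdir, check=True)

 # step 2: train; reads input.nn, input.data and the scaling.data produced above
 subprocess.run(["mpirun", "-np", "4", "nnp-train"], cwd=workdir, check=True)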

File: input.data

See input.data format here
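For quick reference, one configuration block in input.data looks roughly as follows (keyword lines with whitespace-separated columns; see the linked documentation for the authoritative description and units):

 begin
 comment  optional description of this configuration
 lattice  ax  ay  az
 lattice  bx  by  bz
 lattice  cx  cy  cz
 atom     x  y  z  element  charge  n  fx  fy  fz
 ...      (one atom line per atom)
 energy   Etot
 charge   Qtot
 end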

Python script for conversion from CP2K xyz output to input.data:

 from ase.io import read
 import os

 # data_path: directory containing coords.xyz and forces.xyz from CP2K
 data_path = "./test_data"
 data_path = os.path.abspath(data_path)

 # input cell parameters here, a 3x3 list of lattice vectors
 cell = [[10., 0., 0.], [0., 10., 0.], [0., 0., 10.]]

 # read coordinates and forces; forces.xyz stores the force components in the
 # coordinate columns, so ASE exposes them through .position as well
 pos_path = os.path.join(data_path, "coords.xyz")
 frc_path = os.path.join(data_path, "forces.xyz")
 pos = read(pos_path, index=":")
 frc = read(frc_path, index=":")

 out_path = os.path.join(data_path, "input.data")
 fw = open(out_path, "w")
 for frame_idx in range(len(pos)):
     fw.write("begin\n")
     # three lattice vectors
     for i in range(3):
         fw.write("lattice{:10.4f}{:10.4f}{:10.4f}\n".format(cell[i][0], cell[i][1], cell[i][2]))
     # one atom line per atom: x y z element charge n fx fy fz
     for p_atom, f_atom in zip(pos[frame_idx], frc[frame_idx]):
         fw.write("atom{:12.5f}{:12.5f}{:12.5f}".format(p_atom.position[0], p_atom.position[1], p_atom.position[2]))
         fw.write("{:>4}".format(p_atom.symbol))
         fw.write("{:10.4f}{:10.4f}".format(0.0, 0.0))  # charge and n columns (written as 0.0 here)
         fw.write("{:12.5f}{:12.5f}{:12.5f}\n".format(f_atom.position[0], f_atom.position[1], f_atom.position[2]))
     # total energy, taken from the "E" field ASE parses out of the xyz comment line
     fw.write("energy{:20.4f}\n".format(pos[frame_idx].info['E']))
     fw.write("charge{:20.4f}\n".format(0.0))
     fw.write("end\n")
 fw.close()

nnp-scaling

nnp-scaling should be executed before nnp-train in order to obtain the file scaling.data. There are only two files you need:

  • input.nn
  • input.data

Example input files are in the GitHub repository under <n2p2 root>/examples/nnp-scaling. One point is worth noting: the random_seed keyword in input.nn is followed by a number. This number initializes the pseudo-random number generator. As the name implies, the generated numbers are not truly random: they are fully determined by the seed (more precisely, the same seed always produces the same sequence of numbers). Therefore, if you want a different random initialization of the NN parameters, set a different random_seed.
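As a quick illustration of this determinism (using Python's standard generator here, not n2p2's internal one):

 import random

 random.seed(12345)
 first = [random.random() for _ in range(3)]

 random.seed(12345)
 second = [random.random() for _ in range(3)]

 # the same seed reproduces exactly the same sequence
 print(first == second)  # True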
