Third-Party Interfaces: installation and usage
MLatom also provides interfaces to some third-party software. To use third-party software, MLprog should be set to the corresponding program.
The currently implemented programs and the default choices made when only MLmodelType or only MLprog is defined are listed below:
+-------------+----------------+
| MLmodelType | default MLprog |
+-------------+----------------+
| KREG        | MLatomF        |
+-------------+----------------+
| sGDML       | sGDML          |
+-------------+----------------+
| GAP-SOAP    | GAP            |
+-------------+----------------+
| PhysNet     | PhysNet        |
+-------------+----------------+
| DeepPot-SE  | DeePMD-kit     |
+-------------+----------------+
| ANI         | TorchANI       |
+-------------+----------------+

+------------+----------------------+
| MLprog     | MLmodelType          |
+------------+----------------------+
| MLatomF    | KREG [default]       |
|            | see                  |
|            | MLatom.py KRR help   |
+------------+----------------------+
| sGDML      | sGDML [default]      |
|            | GDML                 |
+------------+----------------------+
| GAP        | GAP-SOAP             |
+------------+----------------------+
| PhysNet    | PhysNet              |
+------------+----------------------+
| DeePMD-kit | DeepPot-SE [default] |
|            | DPMD                 |
+------------+----------------------+
| TorchANI   | ANI [default]        |
+------------+----------------------+
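For example, an input like the following trains an ANI-type model and, because MLprog is not specified, uses the default interface for ANI, which is TorchANI. This is only a minimal sketch: the createMLmodel task and the XYZfile, Yfile and MLmodelOut keywords are assumed to be available as elsewhere in the MLatom manual, and the file names are placeholders.

    createMLmodel MLmodelType=ANI
    XYZfile=xyz.dat Yfile=en.dat
    MLmodelOut=ani.pt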
DeePMD-kit

usage
Set MLprog=DeePMD-kit to enable the interface.

options
Options of the form deepmd.xxx.xxx=X specify arguments for DeePMD-kit and follow the structure of DeePMD-kit's JSON input file.
For example:
deepmd.training.stop_batch=N is equivalent to

    { ... "training": { ... "stop_batch": N ... } ... }

in DeePMD-kit's JSON input.

In addition, the option deepmd.input=S takes an input JSON file S as a template. The final input file is generated based on this template, with any deepmd.xxx.xxx=X options applied on top of it. Check the default template file bin/interfaces/DeePMDkit/template.json for the default values.
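For instance, the following sketch trains a DeepPot-SE model (the default MLmodelType for this interface) and overrides one training setting. The task and data-file keywords, including YgradXYZfile for gradients, are assumed as in the sketch above; the file names, the template name, and the numerical value are placeholders.

    createMLmodel MLprog=DeePMD-kit
    XYZfile=xyz.dat Yfile=en.dat YgradXYZfile=grad.dat
    MLmodelOut=deeppot.zip
    deepmd.input=mytemplate.json
    deepmd.training.stop_batch=400000

Here deepmd.input supplies a custom template and deepmd.training.stop_batch overrides the corresponding entry in it.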
GAP and QUIP
usage
Set MLprog=GAP to enable the interface.

options
gapfit.xxx=x
    xxx can be any option of gap_fit (e.g., default_sigma). Note that there is no need to set at_file and gp_file.
gapfit.gap.xxx=x
    xxx can be any option of gap.
Arguments with their default values:

gapfit.default_sigma={0.0005,0.001,0,0}
    hyperparameter sigmas for energies, forces, virials and Hessians
gapfit.e0_method=average
    method for determining e0
gapfit.gap.type=soap
    descriptor type
gapfit.gap.l_max=6
    max number of angular basis functions
gapfit.gap.n_max=6
    max number of radial basis functions
gapfit.gap.atom_sigma=0.5
    hyperparameter for Gaussian smearing of atom density
gapfit.gap.zeta=4
    hyperparameter for kernel sensitivity
gapfit.gap.cutoff=6.0
    cutoff radius of local environment
gapfit.gap.cutoff_transition_width=0.5
    cutoff transition width
gapfit.gap.delta=1
    hyperparameter delta for kernel scaling
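A sketch of an input that trains a GAP-SOAP model (the default MLmodelType for this interface) with a tighter SOAP cutoff and a smaller sigma for energies. The task and data-file keywords are assumed as in the earlier sketches; the file names and values are placeholders.

    createMLmodel MLprog=GAP
    XYZfile=xyz.dat Yfile=en.dat YgradXYZfile=grad.dat
    MLmodelOut=gap.xml
    gapfit.gap.cutoff=4.0
    gapfit.default_sigma={0.0001,0.001,0,0}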
TorchANI

usage
Set MLprog=TorchANI to enable the interface.

options
Arguments with their default values:
ani.batch_size=8
    batch size
ani.max_epochs=10000000
    max epochs
ani.early_stopping_learning_rate=0.00001
    learning rate that triggers early stopping
ani.force_coefficient=0.1
    weight for force
ani.Rcr=5.2
    radial cutoff radius
ani.Rca=3.5
    angular cutoff radius
ani.EtaR=1.6
    radial smoothness in radial part
ani.ShfR=0.9,1.16875,1.4375,1.70625,1.975,2.24375,2.5125,2.78125,3.05,3.31875,3.5875,3.85625,4.125,4.39375,4.6625,4.93125
    radial shifts in radial part
ani.Zeta=32
    angular smoothness
ani.ShfZ=0.19634954,0.58904862,0.9817477,1.3744468,1.7671459,2.1598449,2.552544,2.9452431
    angular shifts
ani.EtaA=8
    radial smoothness in angular part
ani.ShfA=0.9,1.55,2.2,2.85
    radial shifts in angular part
ani.Neuron_l1=160
    number of neurons in layer 1
ani.Neuron_l2=128
    number of neurons in layer 2
ani.Neuron_l3=96
    number of neurons in layer 3
ani.AF1='CELU'
    activation function for layer 1
ani.AF2='CELU'
    activation function for layer 2
ani.AF3='CELU'
    activation function for layer 3
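A sketch of an input that trains an ANI model (the default MLmodelType for this interface) with a larger batch size and a stronger force term. The task and data-file keywords are assumed as in the earlier sketches; the file names and values are placeholders.

    createMLmodel MLprog=TorchANI
    XYZfile=xyz.dat Yfile=en.dat YgradXYZfile=grad.dat
    MLmodelOut=ani.pt
    ani.batch_size=16
    ani.force_coefficient=0.5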
PhysNet

usage
Set MLprog=PhysNet to enable the interface.

options
Arguments with their default values:
physnet.num_features=128
    number of input features
physnet.num_basis=64
    number of radial basis functions
physnet.num_blocks=5
    number of stacked modular building blocks
physnet.num_residual_atomic=2
    number of residual blocks for atom-wise refinements
physnet.num_residual_interaction=3
    number of residual blocks for refinements of proto-message
physnet.num_residual_output=1
    number of residual blocks in output blocks
physnet.cutoff=10.0
    cutoff radius for interactions in the neural network
physnet.seed=42
    random seed
physnet.learning_rate=0.0008
    starting learning rate
physnet.decay_steps=10000000
    decay steps
physnet.decay_rate=0.1
    decay rate for learning rate
physnet.batch_size=12
    training batch size
physnet.valid_batch_size=2
    validation batch size
physnet.force_weight=52.91772105638412
    weight for force
physnet.summary_interval=5
    interval for summary
physnet.validation_interval=5
    interval for validation
physnet.save_interval=10
    interval for model saving
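A sketch of an input that trains a shallower PhysNet model with a shorter interaction cutoff. The task and data-file keywords are assumed as in the earlier sketches; the file names and values are placeholders.

    createMLmodel MLprog=PhysNet
    XYZfile=xyz.dat Yfile=en.dat YgradXYZfile=grad.dat
    MLmodelOut=physnet.ckpt
    physnet.num_blocks=3
    physnet.cutoff=8.0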
sGDML

usage
Set MLprog=sGDML to enable the interface.

options
Arguments with their default values:
sgdml.gdml=False
    use GDML instead of sGDML
sgdml.cprsn=False
    compress kernel matrix along symmetric degrees of freedom
sgdml.no_E=False
    do not predict energies
sgdml.E_cstr=False
    include the energy constraints in the kernel
sgdml.s=<s1>[,<s2>[,...]] or <start>:[<step>:]<stop>
    set hyperparameter sigma, see sgdml create -h for details
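A sketch of an input that trains an sGDML model (the default MLmodelType for this interface) while scanning several sigma values. The task and data-file keywords are assumed as in the earlier sketches; the file names and the sigma range are placeholders.

    createMLmodel MLprog=sGDML
    XYZfile=xyz.dat Yfile=en.dat YgradXYZfile=grad.dat
    MLmodelOut=sgdml.npz
    sgdml.s=10:10:100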