Features of MLatom, version 0.92, revision 102
Tasks Performed by MLatom
A brief overview of MLatom capabilities. See sections below for more details.
ML Operations
- Estimating the accuracy of ML models.
- Creating an ML model and saving it to a file.
- Loading an existing ML model from a file and performing ML calculations with it.
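As an illustration of this create/save/load workflow (a sketch only, not MLatom's interface; scikit-learn's KernelRidge is used purely as a stand-in model and all file names are hypothetical):

```python
# Illustrative sketch: MLatom is driven by its own input keywords;
# scikit-learn's KernelRidge stands in for an ML model here.
import pickle
import numpy as np
from sklearn.kernel_ridge import KernelRidge

X_train = np.random.rand(100, 10)   # input vectors (molecular descriptors)
y_train = np.random.rand(100)       # reference values, e.g. energies

# Create an ML model and save it to a file
model = KernelRidge(kernel='rbf', alpha=1e-8, gamma=0.1)
model.fit(X_train, y_train)
with open('mlmodel.pkl', 'wb') as f:
    pickle.dump(model, f)

# Load the existing ML model and perform ML calculations with it
with open('mlmodel.pkl', 'rb') as f:
    model = pickle.load(f)
y_est = model.predict(np.random.rand(5, 10))
```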
Data Set Operations
- Converting XYZ coordinates into an input vector (molecular descriptor) for ML.
- Sampling subsets from a data set.
Sampling
- none: simply splits the data set into the training and test sets and, if necessary, the training set into the subtraining and validation sets (in this order), without changing the order of indices.
- random sampling.
- user-defined: MLatom reads the indices for the training and test sets and, if necessary, for the subtraining and validation sets from files.
- structure-based sampling:
  - farthest-point traversal, an iterative procedure that starts from the two points farthest apart (sketched below).
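A minimal sketch of the farthest-point traversal idea (illustrative only, not MLatom's implementation):

```python
import numpy as np

def farthest_point_sample(X, n_samples):
    """Iteratively pick points, starting from the two farthest apart."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(D), D.shape)              # two farthest points
    selected = [int(i), int(j)]
    while len(selected) < n_samples:
        # distance of every point to its nearest already-selected point
        d_min = D[:, selected].min(axis=1)
        selected.append(int(np.argmax(d_min)))                  # farthest from the set
    return selected

X = np.random.rand(50, 3)
train_idx = farthest_point_sample(X, 10)
```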
ML Algorithm
Kernel ridge regression with the following kernels:
- Gaussian;
- Laplacian;
- Matérn.
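Schematically (an illustrative sketch, not MLatom's code), kernel ridge regression obtains regression coefficients α by solving (K + λI)α = y and predicts f(x) = Σ_i α_i k(x, x_i); the Gaussian and Laplacian kernels below follow the common definitions exp(-‖x-x'‖²/2σ²) and exp(-‖x-x'‖₁/σ):

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def laplacian_kernel(A, B, sigma):
    d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)    # Manhattan distances
    return np.exp(-d1 / sigma)

def krr_train(X, y, sigma, lam, kernel=gaussian_kernel):
    K = kernel(X, X, sigma)
    # regression coefficients alpha from (K + lambda*I) alpha = y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(X_new, X_train, alpha, sigma, kernel=gaussian_kernel):
    return kernel(X_new, X_train, sigma) @ alpha
```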
Molecular Descriptors
- Coulomb matrix:
  - sorted by norms of its rows;
  - unsorted.
- Unsorted vector {r_eq/r}, where r is an internuclear distance in the current molecule and r_eq is the same internuclear distance in the equilibrium (or other reference) structure.
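For illustration (a simplified sketch following the standard definition of the Coulomb matrix, not MLatom's code), the matrix has diagonal elements 0.5 Z_i^2.4 and off-diagonal elements Z_i Z_j / |R_i - R_j|, optionally sorted by row norms:

```python
import numpy as np

def coulomb_matrix(Z, R, sort=True):
    """Z: nuclear charges, shape (n,); R: coordinates in bohr, shape (n, 3)."""
    n = len(Z)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4   # diagonal: fit of free-atom energies
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    if sort:                                   # sort rows/columns by row norms
        order = np.argsort(-np.linalg.norm(M, axis=1))
        M = M[order][:, order]
    return M[np.triu_indices(n)]               # flatten to an input vector
```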
Model Validation
An ML model can be validated (i.e., its generalization error estimated) in several ways:
- on a hold-out test set not used for training. Both training and test sets can be sampled in one of the ways described above;
- by performing N-fold cross-validation. The user can define the number of folds N; if N is equal to the number of data points, leave-one-out cross-validation is performed. Only random or no sampling can be used for cross-validation.
MLatom prints out the mean absolute error (MAE), mean signed error (MSE), root-mean-squared error (RMSE), the mean values of the reference and estimated data, the largest positive and negative outliers, the correlation coefficient and its square R^2, as well as the linear-regression coefficients and their standard deviations.
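Schematically (an illustrative sketch, not MLatom's code), N-fold cross-validation holds out each fold once and accumulates the errors; with n_folds equal to the number of data points this reduces to leave-one-out:

```python
import numpy as np

def cross_validation_rmse(X, y, train_fn, predict_fn, n_folds=5):
    """Split the data into n_folds parts; each serves once as the hold-out fold."""
    idx = np.arange(len(y))      # no sampling; use np.random.permutation for random
    folds = np.array_split(idx, n_folds)
    sq_errors = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[m] for m in range(n_folds) if m != k])
        model = train_fn(X[train], y[train])
        sq_errors.extend((predict_fn(model, X[test]) - y[test]) ** 2)
    return np.sqrt(np.mean(sq_errors))
```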
Hyperparameter Tuning
Kernel ridge regression with the Gaussian, Laplacian, or Matérn kernel has two tunable hyperparameters: the kernel width σ and the regularization parameter λ. MLatom can determine them by performing a user-defined number of iterations of hyperparameter optimization on a logarithmic grid. The user can adjust the number of grid points as well as the starting and finishing points of the grid. Hyperparameters are tuned to minimize either the mean absolute error or the root-mean-square error, as chosen by the user. They can be tuned to minimize
- the error, on a hold-out validation set, of the ML model trained on the subtraining set. Both the subtraining and validation sets are parts of the training set, which can at the end be used in full, with the optimal hyperparameters, for training the final ML model. These sets ideally should not overlap; they can be sampled from the training set in one of the ways described above;
- the N-fold cross-validation error. The user can define the number of folds N; if N is equal to the number of data points, leave-one-out cross-validation is performed. Only random or no sampling can be used for cross-validation.
Note that hyperparameter tuning can be performed together with model validation: for example, one can run the outer loop of a cross-validation for model validation and tune the hyperparameters in an inner cross-validation loop.
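A minimal sketch of one iteration of such a logarithmic grid search (illustrative only; `validation_error` is a placeholder for either of the two schemes above, and the grid bounds are arbitrary):

```python
import numpy as np

def grid_search(validation_error, lg_sigma=(-2, 4), lg_lam=(-16, 0), n_points=7):
    """One iteration of a logarithmic grid search over sigma and lambda."""
    best = (np.inf, None, None)
    for sigma in np.logspace(*lg_sigma, num=n_points, base=2.0):
        for lam in np.logspace(*lg_lam, num=n_points, base=2.0):
            err = validation_error(sigma, lam)   # MAE or RMSE, as requested
            if err < best[0]:
                best = (err, sigma, lam)
    return best   # further iterations would zoom the grid around (sigma, lambda)
```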
First Derivatives
MLatom can also be used to estimate first derivatives from an ML model. This is useful, for example, when an ML model has been created for energies and one wants to estimate forces. Two scenarios are possible:
- partial derivatives are calculated for each dimension of the given input vectors (analytical derivatives for the Gaussian kernel, numerical derivatives for the other kernels);
- first derivatives are calculated in XYZ coordinates for input files containing molecular XYZ coordinates (only numerical derivatives; see the sketch below).
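A central-difference sketch of the numerical case (illustrative, not MLatom's code; `f` stands for any trained model returning a scalar, e.g. an energy):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-4):
    """Central differences of a scalar model f at input x (descriptor or XYZ array)."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp.flat[i] += h
        xm.flat[i] -= h
        grad.flat[i] = (f(xp) - f(xm)) / (2.0 * h)
    return grad   # e.g., forces = -numerical_gradient(energy_model, xyz)
```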