Amer El-Samman

Physics-Backed ML Models in Chemistry

Compressing First-Principles Physics with Machine Learning

Quantum mechanics provides the rules that shape our universe. But first-principles physics is enormous in scale: even a modest molecule like caffeine requires an exact quantum calculation whose full solution is too large to store. Modern machine learning, trained on quantum-mechanical data, offers a practical way through. It compresses memory-heavy quantum information into high-quality representations that remain faithful to structure, reflect the language of reactions, and transfer to new tasks with small datasets.

The Hidden Language of AI in Chemistry: Unifying Graph and Language Models of Chemistry

Graph neural networks learn to describe quantum data in a language that mirrors reaction formulae, without ever training on reactions. Much as natural language models capture the meanings of words through vector arithmetic, where “King” minus “Man” plus “Woman” yields “Queen,” graph-based models appear to learn an analogous language for chemistry: “Hydrogen” plus “Oxygen” yields “Water,” revealing fundamental chemical transformations that emerge naturally within the GNN’s learned representation.
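The word-analogy idea can be sketched in a few lines. This is a toy illustration, not output from any real model: the 3-D embeddings below are hand-picked so the analogy holds, whereas trained models learn hundreds of dimensions from data.

```python
import numpy as np

# Hand-picked toy embeddings chosen so the classic analogy works;
# real language or chemistry models learn these vectors from data.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

# "King" - "Man" + "Woman" ...
query = emb["king"] - emb["man"] + emb["woman"]

def nearest(vec, table):
    # return the stored word whose embedding has the highest cosine similarity
    return max(table, key=lambda w: vec @ table[w]
               / (np.linalg.norm(vec) * np.linalg.norm(table[w])))

print(nearest(query, emb))  # prints "queen"
```

The chemistry version works the same way, with atom or fragment embeddings in place of word embeddings.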

Learn more

How Graph Neural Networks Learn Molecular Representations from Quantum Mechanics

Graph neural networks are among the most sophisticated probabilistic models used in chemistry, yet the principles guiding their internal decision making often remain obscure. Read on to find out how GNNs build a compressed description of quantum data that connects to familiar concepts of molecular similarity, chemical fingerprinting, and molecular structure.
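For contrast with learned representations, classical molecular similarity is often computed from hand-built binary fingerprints with the Tanimoto coefficient. A minimal sketch, with made-up fingerprint bits purely for illustration:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints:
    shared on-bits divided by total on-bits in either."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return inter / union if union else 1.0  # two empty fingerprints count as identical

# Made-up 8-bit fingerprints for illustration (real ones span thousands of bits)
fp_ethanol  = [1, 0, 1, 1, 0, 0, 1, 0]
fp_methanol = [1, 0, 1, 0, 0, 0, 1, 0]

print(tanimoto(fp_ethanol, fp_methanol))  # prints 0.75
```

A GNN's compressed description plays a similar role to the fingerprint, but its bits are learned from quantum data rather than enumerated by hand.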

Learn more

Highlighted Projects

Physics Informed Machine Learning

Scientific laws are immutable: they conserve energy, respect symmetry, follow causality, and vary smoothly under change. Standard machine learning models do not. Physics-Informed Machine Learning shows how embedding these principles into model design lets ML scale scientific reasoning rather than ignore it.
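One way to embed such a principle is to make symmetry hold by construction. A minimal sketch, assuming a toy harmonic pair "energy" (not a real force field): because the model only ever sees interatomic distances, never raw coordinates, its output is automatically invariant to rigid translations.

```python
import numpy as np

def toy_energy(coords, k=1.0, r0=1.0):
    """Toy energy: harmonic terms over all atom pairs (illustrative only).
    Depends only on pairwise distances, so translational and rotational
    symmetry are respected by construction, not learned from data."""
    n = len(coords)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            e += 0.5 * k * (r - r0) ** 2
    return e

coords = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
shifted = coords + np.array([5.0, -3.0, 2.0])  # rigid translation

# The symmetry holds exactly, with no training required
assert np.isclose(toy_energy(coords), toy_energy(shifted))
```

The same design principle carries over to learned models: build the architecture so that the law cannot be violated, rather than hoping the data teaches it.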

Learn More

Global and Local Explainability of Graph Neural Network Chemistry

How does message passing in graph neural networks construct a description of an atomistic system that is transferable across chemical tasks? By tracing how each atom’s contribution propagates through the molecular graph, we learn how graph models develop a latent space that captures electronic structure, functional groups, and reaction-relevant patterns. This structure explains why these models can transfer to new tasks, enabling accurate prediction of diverse chemical properties and even recovery of reaction formulas.
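The propagation step can be sketched in a few lines of NumPy. This is a minimal, illustrative message-passing scheme, not any specific published GNN: each round, every atom aggregates its bonded neighbors' feature vectors and mixes them with its own state.

```python
import numpy as np

def message_pass(h, adj, W, rounds=2):
    """Toy message passing: atoms sum neighbor features via the adjacency
    matrix, then update their hidden state through a shared weight matrix."""
    for _ in range(rounds):
        msgs = adj @ h                   # each atom sums messages from its neighbors
        h = np.tanh(h @ W + msgs @ W)    # nonlinear update of every atom's state
    return h

# Water as a toy molecular graph: O (atom 0) bonded to two H atoms
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
h0 = np.array([[8.0], [1.0], [1.0]])     # initial features: atomic numbers

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 1))              # stand-in for learned weights

h = message_pass(h0, adj, W)             # per-atom latent states after 2 rounds
```

After a few rounds, each atom's state encodes its local chemical environment; the two equivalent hydrogens end up with identical representations, and a molecule-level readout can sum the atom states.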

Learn More

Transfer Learning From Graph Neural Networks Trained on Quantum Chemistry

GNNs learn their own descriptors for quantum chemistry. Can these learned descriptors transfer to new chemical problems? By probing transfer learning to properties such as electron occupancy, NMR shifts, pKa, and solubility, we assess whether a GNN’s learned compressed description of quantum chemistry can serve as a general-purpose engine for chemical prediction and design.
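The transfer-learning recipe can be sketched under an assumed workflow: freeze the pretrained encoder and fit only a small head on the new task's limited data. Here a fixed random projection stands in for the frozen GNN, and the target is synthetic; both are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_encoder(x, W):
    # Stand-in for the frozen, pretrained GNN layers: weights W are never updated
    return np.tanh(x @ W)

W = rng.normal(size=(5, 16))          # "pretrained" weights, held fixed
X_new = rng.normal(size=(40, 5))      # small dataset for the new property
y_new = X_new @ rng.normal(size=5)    # synthetic target (e.g. a pKa proxy)

# Encode the new molecules with the frozen descriptors
Z = frozen_encoder(X_new, W)

# Ridge-regression head: the only trainable part, cheap to fit on 40 examples
lam = 1e-2
head = np.linalg.solve(Z.T @ Z + lam * np.eye(16), Z.T @ y_new)
pred = Z @ head
```

Because only the small head is fit, a few dozen labeled examples can suffice, which is the practical payoff of transferring a pretrained quantum-chemistry representation.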

Learn More