What is Machine Learning and How is it Changing Physical Chemistry and Materials Science?

When I talk about artificial intelligence (AI), the usual images that come to mind are from fiction: HAL from 2001: A Space Odyssey, the cyborg from The Terminator, or perhaps the gloomy world of The Matrix. Since March 2016, however, the poster child for real-world AI has been AlphaGo, the computer program developed by Google DeepMind that beat professional Go player Lee Sedol in a highly publicized five-game match. The victory was hailed as a milestone in AI because the mathematical complexity of Go is far greater than that of other board games such as chess, at which IBM's Deep Blue computer beat the legendary Garry Kasparov almost 20 years ago. Indeed, some AI experts have stated that “…board games are more or less done and it’s time to move on.”1

Go
A game of Go like the one AlphaGo won against Lee Sedol earlier this year. (photo by Kendrick)

The basic algorithm of AlphaGo was published in Nature in 20162 and consists mainly of machine learning and Monte Carlo tree search. In other words, AlphaGo uses an efficient probability-based (Monte Carlo) strategy to sample a large number of possible moves, and then evaluates each move’s chances of winning using models that it established (or “learned”) during its training process.
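AlphaGo’s actual search is vastly more sophisticated, but the core Monte Carlo idea, estimating each candidate move’s chance of winning by sampling many random continuations, can be sketched on a toy game. Here is a minimal illustration of my own (not AlphaGo’s algorithm) using one-pile Nim, where players alternately remove 1–3 stones and whoever takes the last stone wins:

```python
import random

def playout(stones, rng):
    """Play one-pile Nim to the end with uniformly random moves.
    Returns True if the player who moves FIRST from this position wins."""
    player = 0  # 0 = the player about to move
    while stones > 0:
        stones -= rng.randint(1, min(3, stones))
        if stones == 0:
            return player == 0  # whoever took the last stone wins
        player = 1 - player

def best_move(stones, n_playouts=2000, seed=0):
    """Monte Carlo move selection: estimate each candidate move's win rate
    by random playouts, then pick the move with the highest estimate."""
    rng = random.Random(seed)
    scores = {}
    for move in range(1, min(3, stones) + 1):
        remaining = stones - move
        if remaining == 0:       # taking the last stone wins outright
            return move
        # After our move the opponent moves first in the playout, so we
        # win whenever the playout's first player (the opponent) loses.
        wins = sum(not playout(remaining, rng) for _ in range(n_playouts))
        scores[move] = wins / n_playouts
    return max(scores, key=scores.get)
```

For a pile of 5 stones, the sampled win rates favor removing one stone (leaving a multiple of 4), which also happens to be the optimal play. AlphaGo replaces the random playouts and simple win-rate bookkeeping with neural-network-guided search, but the sampling-and-evaluating structure is the same.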

Why was it such a big deal for AlphaGo to win at Go? Chess, although very complicated, has a search space that computers can explore very effectively. For example, there are only 20 possible opening moves in chess: 16 for the pawns and 4 for the knights.3 Working systematically through huge numbers of possible move sequences in a chess game is the kind of thing that computers are very good at and humans often find challenging.

chess
Chess is very complex, but easier for a computer to learn than Go. (image from Pixabay)

In contrast to chess, Go has so many possible moves that enumerating the plays by brute force is impossible (the number of legal positions is greater than the number of atoms in the observable universe, and was only computed exactly by computer scientists in early 2016!4). Therefore, AlphaGo had to take a very different approach. It first trained a deep neural network (explained more below) on a large number of recorded games (about 30 million moves), and then it reinforced that training by playing a large number of games against other computer programs, human players, and itself. The computations required for this process were certainly not “cheap”, and were reported to have used about 1,920 CPUs and 280 graphics processing units (GPUs).5

With all this excitement, you might wonder: what is machine learning anyway, and does it have anything to do with physical sciences, such as chemistry and materials research? My goal with this blog post is to provide some basic background and point you to places where you can learn more!

What is Machine Learning?

Machine learning is a subfield of computer science that develops algorithms able to learn from data and make predictions or decisions based on it. A much-quoted description, dating back to Arthur Samuel in 1959, calls machine learning the field of study “that gives computers the ability to learn without being explicitly programmed.”6 A more technical definition is credited to Tom Mitchell in 1997: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”7

The “task T” in Mitchell’s definition generally falls into three categories: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the task is to learn the relationship between a set of inputs and their known outputs. The simplest example is linear regression, where the output is modeled as a linear function of the input, although the problems that machine learning tackles are usually much more complicated. In unsupervised learning, the task is to discover hidden patterns or features in data, without any information about what those patterns are expected to be. Examples include identifying groups of close friends on social networks or detecting behavioral anomalies for credit card fraud protection. Finally, reinforcement learning involves learning to perform a task through repeated interactions with a dynamic environment, as when AlphaGo played a series of games to improve its performance.

unsupervised learning
An unsupervised learning approach can be used to train computer programs to identify faces (image from Pixabay)
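To make the supervised case concrete, here is a minimal sketch of “learning” a linear regression in plain Python (the data values are made up for illustration). The training data are known input/output pairs, and the fitted line is the model that can then predict outputs for inputs it has never seen:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept.
    This is supervised learning at its simplest: the 'experience' is a
    set of (input, output) pairs, and 'learning' means choosing the line
    that best reproduces the known outputs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": noisy measurements that roughly follow y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]
slope, intercept = fit_line(xs, ys)

# "Prediction" for an input the model has never seen
y_new = slope * 5 + intercept
```

Real machine-learning models replace the straight line with far more flexible functions, but the train-then-predict workflow is exactly the same.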

There are many approaches to machine learning, but two popular examples are the support vector machine (SVM) and the artificial neural network (ANN) (see figure below). SVMs are commonly used for classification and regression problems; in classification, the goal is to separate data into groups with distinct features. ANNs were inspired by the way the brain solves problems using large numbers of interconnected neurons. The key parameters in an ANN are the numerical weights on the connections between artificial neurons; their values are optimized by “training”, during which a large set of example inputs and outputs is fed to the network. Deep neural networks, like the ones in AlphaGo, are ANNs that feature multiple hidden layers connected by non-linear operations, which makes them capable of performing highly complex mappings between input data and output results.

machine learning
Two examples of machine learning approaches.8 Left: a support vector machine (SVM), widely used for classifying data into groups with specific features – here the red line was calculated to divide the two groups with the widest possible margin between them (image by ZackWeinberg). Right: an artificial neural network, which performs highly non-linear mapping between the input (yellow) and output (orange) through a large number of interconnected hidden-layer neurons (blue) that have different weighted influences on the outcome (image adapted from Zufzzi).
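To make the ANN idea concrete, here is a minimal feed-forward network in plain Python. The weights are hand-picked for illustration rather than learned by training, and the network computes XOR, a mapping that no single linear layer can represent; it shows how connection weights and a non-linear activation combine to produce the output:

```python
def step(x):
    """A simple non-linear activation: the neuron 'fires' (1) if its
    total weighted input exceeds zero."""
    return 1 if x > 0 else 0

def forward(inputs, hidden_weights, output_weights):
    """One pass through a tiny network: inputs -> hidden layer -> output.
    Each weight says how strongly one neuron influences the next; the
    first weight of each neuron is a bias, paired with a constant 1."""
    hidden = [step(sum(w * x for w, x in zip(neuron, [1] + inputs)))
              for neuron in hidden_weights]
    return step(sum(w * h for w, h in zip(output_weights, [1] + hidden)))

# Hand-picked (not trained!) weights that make the network compute XOR.
hidden_w = [[-0.5, 1, 1],   # hidden neuron 1: fires if x1 OR x2
            [-1.5, 1, 1]]   # hidden neuron 2: fires if x1 AND x2
output_w = [-0.5, 1, -1]    # output: fires if (OR) but not (AND)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", forward([x1, x2], hidden_w, output_w))
```

In a real ANN these weights are not set by hand: training algorithms adjust them automatically until the network’s outputs match the examples it is shown.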

The predictive power of a machine learning model depends on both its mathematical architecture and the data used to train it. Going back to the AlphaGo example, Lee did win Game 4 after playing a surprising move that apparently fell outside the scope of AlphaGo’s training. This “Hand of God” move led AlphaGo to make several poor sequences of moves and ultimately lose the game.

Machine Learning in Biomedical Research

Machine learning is already widely used for challenging tasks such as speech and image recognition. It also has a long history in biomedical research, for example in the detection and diagnosis of cancers and in drug design. In recent years, machine learning has become especially active in the drug design area.9 For example, a team led by George Dahl won the “Merck Molecular Activity Challenge” in 2012 by using deep neural networks to predict the biomolecular targets of specific chemical compounds.10 Sepp Hochreiter’s group won the “Tox21 Data Challenge” in 2014 by using deep neural networks to detect off-target and toxic effects of environmental chemicals in nutrients, household products, and drugs.11

In a very recent study, Lee and co-workers trained an SVM on a large number (~1,100) of known antimicrobial peptides.12 The SVM was then used successfully to search for peptides with high predicted antimicrobial activity that do not resemble the antimicrobial peptides already known. Interestingly, the search identified an unexpectedly diverse set of sequences with the potential to be just as effective, but with characteristics quite different from those of familiar antimicrobial peptides. This is only a first step toward identifying possible future antimicrobial drugs, but it shows clearly that machine learning has great potential to complement other scientific techniques in this type of research.

Machine Learning in Physical Sciences and Materials Research

How is machine learning being used in the physical sciences and materials research? Although the “big data” revolution is often associated with the biomedical field thanks to the availability of various high-throughput “-omics” techniques (like those discussed in last week’s blog post), researchers in the physical and materials sciences have also started to tap into the power of machine learning, using data collected either from experimental measurements or from high-quality computations. One approach, traditionally referred to as “data mining,” explores huge data sets to identify interesting patterns, such as correlations between materials’ structures and their properties. Identifying such connections can certainly help in developing new materials.

An arguably more ambitious approach is to closely integrate physics-based computations (e.g., quantum mechanics) and machine learning techniques in the search for novel materials. This is illustrated in the infographic below from a recent Nature article,13 which reported on the current status of using high-performance computing for materials discovery. (This is an important thrust for the Materials Genome Project that I wrote about in an earlier post.) A particularly compelling aspect of the strategy is that even failed experiments can be highly informative.14 Nevertheless, even modern quantum chemistry methods are not yet reliable enough to predict everything about complex materials, so it remains important for scientists to integrate rounds of experimentation with any machine learning technique.15

Intelligent Search
Combining physical computations (e.g., density functional theory) and machine learning for novel materials design.13 (Reprinted by permission from Macmillan Publishers Ltd: Nature, copyright 2016)

Why does the Center for Sustainable Nanotechnology care about machine learning?

Although we don’t use machine learning directly in CSN research, this topic is relevant to our work in several ways. First, with increasing amounts of data available from toxicity studies done by the CSN and other researchers, machine learning techniques can potentially help us understand the relationship between nanomaterials’ properties and their biological impact, similar to what has been done in the biomedical field.9,12 Second, by combining machine learning with physical computations, we can potentially establish new strategies for designing nanomaterials that have the functions we want but minimal negative impact on the environment and human health.

So the next time you sit down to play a game of chess or Go, take a moment to think about the amazing computational power that goes into training a machine – or your human brain! – to learn such a complicated activity. The key to a lot of cutting-edge research is to combine the strengths of human and computer problem-solving skills to tackle big scientific questions.


REFERENCES

  1. Borowiec, S. & Lien, T. AlphaGo beats human Go champ in milestone for artificial intelligence. Los Angeles Times, March 12, 2016. Retrieved from http://www.latimes.com/world/asia/la-fg-korea-alphago-20160312-story.html
  2. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search, Nature, 2016, 529, 484-489. doi: 10.1038/nature16961
  3. Shapiro, G. Mathematics and Chess. Website, 2007. Retrieved from https://www.chess.com/chessopedia/view/mathematics-and-chess
  4. Tromp, J. et al. Number of Legal Go Positions. Website [n.d.] Retrieved from http://tromp.github.io/go/legal.html
  5. Wikipedia. AlphaGo. Retrieved from https://en.wikipedia.org/wiki/AlphaGo
  6. As quoted in Wikipedia. Machine Learning. Retrieved from https://en.wikipedia.org/wiki/Machine_learning
  7. Mitchell, T. Machine Learning. 1997, McGraw Hill. p. 2. ISBN 9780070428072
  8. Behler, J. Perspective: Machine learning potentials for atomistic simulations, Journal of Chemical Physics, 2016, 145, 170901. doi: 10.1063/1.4966192
  9. Wikipedia. Deep Learning. Retrieved from https://en.wikipedia.org/wiki/Deep_learning
  10. Merck. Merck Molecular Activity Challenge winners list. 2012, Retrieved from https://www.kaggle.com/c/MerckActivity/details/winners
  11. NCATS Announces Tox21 Data Challenge Winners [press release]. 2015, Retrieved from https://ncats.nih.gov/news/releases/2015/tox21-challenge-2014-winners
  12. Lee, E. et al. Mapping membrane activity in undiscovered peptide sequence space using machine learning, Proceedings of the National Academy of Sciences of the U.S.A. 2016, 113(48) 13588-13593. doi: 10.1073/pnas.1609893113
  13. Nosengo, N. Can artificial intelligence create the next wonder material? Nature, 2016, 533, 22-25. doi: 10.1038/533022a
  14. Raccuglia, P. et al. Machine-learning-assisted materials discovery using failed experiments, Nature, 2016, 533, 73-76. doi: 10.1038/nature17439
  15. Xue, D. et al. Accelerated search for materials with targeted properties by adaptive design, Nature Communications, 2016, 7, 11241. doi: 10.1038/ncomms11241