Tutorial: Optimal Learning in the Laboratory Sciences

New (and still in beta - comments are appreciated)

This webpage provides a series of tutorials (in PowerPoint) covering the general principles of optimal learning in the context of problems that arise in the laboratory sciences. The tutorials arose out of a project funded by the Air Force Office of Scientific Research.

The tutorial has been broken into a series of short presentations. Please use the brief descriptions below to prioritize the components that are likely to be most interesting to you.

These tutorials are quite new - if you have questions or comments, please feel free to add notes to the notes section of each slide and send your annotated slides to Warren Powell <powell@princeton.edu>.

Overview - Why is optimal learning important in the laboratory sciences? This presentation also highlights the fundamental elements of a learning problem.

Case application - Carbon nanotubes - This is a sample application at AFRL where a robotic "scientist" has to quickly run experiments (guided by an algorithm) to find the best settings of a set of tunable parameters. We use this application to help motivate the elements of a learning problem.

A richer belief model - We started by using a simple lookup table with independent beliefs. In this tutorial, we introduce the concept of correlated beliefs, and the use of a linear belief model.
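
To make the idea of correlated beliefs concrete, here is a minimal sketch (not taken from the slides) of the standard Bayesian update for a multivariate normal belief over a small set of candidate settings; the numbers and variable names are purely illustrative.

```python
import numpy as np

# Illustrative correlated-beliefs update for a lookup-table model (hypothetical numbers).
mu = np.array([1.0, 1.2, 0.8])             # prior means for three candidate settings
Sigma = np.array([[0.25, 0.15, 0.05],      # prior covariance: nearby settings are correlated
                  [0.15, 0.25, 0.15],
                  [0.05, 0.15, 0.25]])
noise_var = 0.10                           # variance of the experimental noise

def update(mu, Sigma, x, y, noise_var):
    """Standard conjugate-normal update of a multivariate normal belief
    after observing outcome y from an experiment at alternative x."""
    e = np.zeros(len(mu)); e[x] = 1.0
    denom = Sigma[x, x] + noise_var
    mu_new = mu + (y - mu[x]) / denom * (Sigma @ e)
    Sigma_new = Sigma - np.outer(Sigma @ e, Sigma @ e) / denom
    return mu_new, Sigma_new

mu, Sigma = update(mu, Sigma, x=1, y=1.6, noise_var=noise_var)
print(mu)   # a good result at setting 1 also raises the beliefs for settings 0 and 2
```

Because the prior covariance links neighboring settings, a single experiment updates the beliefs about alternatives that were never measured, which is what makes correlated beliefs so valuable when experiments are expensive.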

Understanding the objective function - If we want to make good decisions, we need some way to evaluate how well we are doing. This means designing a set of metrics that we can use to evaluate our performance.
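
As one concrete and commonly used choice of metric (illustrative here, not necessarily the exact objective used in the slides), an experimentation policy $\pi$ with a budget of $N$ experiments can be scored by the true quality of the design it finally recommends:

$$\max_\pi \; \mathbb{E}\big[\mu_{x^{\pi,N}}\big], \qquad x^{\pi,N} = \arg\max_x \bar{\mu}^N_x,$$

where $\bar{\mu}^N_x$ is the estimated value of alternative $x$ after the $N$ experiments chosen by policy $\pi$, and $\mu_x$ is its true (unknown) value.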

The value of information - A central idea in optimal learning involves understanding the value of information. It is always the case that more information is better, but in some settings the marginal value of information decreases with the number of experiments, while in others it may actually increase before decreasing.

The knowledge gradient - This is a method that calculates the expected value of information from an experiment, which we have found to be a useful input to the decision of what experiment to run next.
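
For readers who want a concrete anchor, below is a minimal sketch (illustrative, not the tutorial's own code) of the knowledge-gradient calculation for a lookup-table model with independent normal beliefs, using the standard closed-form formula.

```python
import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma2, noise_var):
    """Knowledge gradient for a lookup table with independent normal beliefs.
    mu, sigma2: current means and variances for each alternative;
    noise_var: variance of the experimental noise.  Returns, for each alternative,
    the expected improvement in the best estimated value from one more experiment,
    using kg_x = sigma_tilde_x * (zeta * Phi(zeta) + phi(zeta))."""
    mu = np.asarray(mu, float); sigma2 = np.asarray(sigma2, float)
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise_var)   # predictive change in the mean
    kg = np.zeros(len(mu))
    for x in range(len(mu)):
        best_other = np.max(np.delete(mu, x))
        zeta = -abs(mu[x] - best_other) / sigma_tilde[x]
        kg[x] = sigma_tilde[x] * (zeta * norm.cdf(zeta) + norm.pdf(zeta))
    return kg

kg = knowledge_gradient(mu=[1.0, 1.2, 0.8], sigma2=[0.25, 0.25, 0.25], noise_var=0.10)
print(kg.argmax())   # the alternative with the highest expected value of information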

Forming a decision set - Scientists sometimes make choices without fully articulating the range of possible decisions. Here, we talk about the importance of first identifying the full set of choices, even when it includes far more alternatives than you could ever test in the lab.

Building a belief model - At the heart of optimal learning is capturing the expertise of a scientist in a mathematical construct known as a belief model. These can come in different forms, depending on the setting. Here we illustrate a variety of belief models that we have encountered in our work up to now.

Searching a two-dimensional surface - If we are searching over just one or two dimensions, relatively simple belief models can provide surprisingly good results with a small number of experiments, without requiring much time spent building a more structured belief model.

Nonlinear belief models - If we are searching over more than two parameters, then it helps if we can build an analytical model that captures, at least approximately, the relationships between control parameters and the behavior of the system. Typically these models depend on several unknown parameters and are nonlinear in those parameters, which creates computational challenges when calculating the value of information. Here we describe a method that gets around this problem.
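
The method itself is described in the slides; as a rough illustration of one standard way to sidestep the nonlinearity, the sketch below maintains a discrete probability distribution over a sampled set of candidate parameter vectors, so that Bayes' rule reduces to simple reweighting. The response function, sampling ranges, and noise level are all hypothetical.

```python
import numpy as np

def response(theta, x):
    # Hypothetical nonlinear response model, e.g. a saturating yield curve.
    a, b = theta
    return a * x / (b + x)

K = 200
rng = np.random.default_rng(0)
thetas = np.column_stack([rng.uniform(0.5, 2.0, K),    # sampled candidates for a
                          rng.uniform(0.1, 1.0, K)])   # sampled candidates for b
probs = np.full(K, 1.0 / K)                            # prior: all candidates equally likely
noise_sd = 0.05

def bayes_update(probs, x, y):
    """Reweight the sampled parameter vectors after observing outcome y at input x."""
    predicted = np.array([response(t, x) for t in thetas])
    lik = np.exp(-0.5 * ((y - predicted) / noise_sd) ** 2)
    post = probs * lik
    return post / post.sum()

probs = bayes_update(probs, x=1.0, y=0.9)
print(probs @ thetas)   # posterior mean of the unknown parameters (a, b)
```

Because the belief is a finite set of weights, the value of information can be computed by averaging over the sampled candidates rather than integrating over a nonlinear posterior.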

Interacting with the scientist - Choosing the next experiment to run requires striking a balance between maximizing the likelihood of success (everyone wants to hit the home run), collecting information, minimizing the complexity of an experiment, and running experiments as quickly as possible at the least cost. Some of these considerations can be quantified in our models, but the rest are known only to the scientist. For this reason, we favor an approach where the scientist is shown the likelihood of success and the value of information, and then makes a subjective judgment about which experiment to run.

Evaluating risk - A challenge faced by research managers is assessing the likelihood that an experimental program might be successful given the noise inherent in experiments and the nature of the underlying problem. We show that we can use the knowledge gradient as a black-box policy to run hundreds of simulations, capturing both the noise of experiments and the range of possible beliefs.
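
The sketch below illustrates the general idea of this kind of simulation, with purely illustrative numbers: sample many possible "truths" from the prior, run a fixed experimental budget under the knowledge-gradient policy for independent normal beliefs, and record how often the final recommended design meets a target.

```python
import numpy as np
from scipy.stats import norm

def kg_policy(mu, sigma2, noise_var):
    """Pick the alternative with the largest knowledge gradient (independent normal beliefs)."""
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise_var)
    kg = np.empty(len(mu))
    for x in range(len(mu)):
        zeta = -abs(mu[x] - np.max(np.delete(mu, x))) / sigma_tilde[x]
        kg[x] = sigma_tilde[x] * (zeta * norm.cdf(zeta) + norm.pdf(zeta))
    return int(np.argmax(kg))

rng = np.random.default_rng(1)
n_truths, n_experiments, n_alternatives = 500, 20, 10
noise_sd, target = 0.5, 0.9                  # experimental noise and a hypothetical success threshold

successes = 0
for _ in range(n_truths):
    truth = rng.normal(0.0, 1.0, n_alternatives)          # one possible state of the world
    mu, sigma2 = np.zeros(n_alternatives), np.ones(n_alternatives)
    for _ in range(n_experiments):
        x = kg_policy(mu, sigma2, noise_sd**2)            # policy used as a black box
        y = truth[x] + rng.normal(0.0, noise_sd)
        sigma2[x] = 1.0 / (1.0 / sigma2[x] + 1.0 / noise_sd**2)
        mu[x] += (sigma2[x] / noise_sd**2) * (y - mu[x])
    successes += truth[int(np.argmax(mu))] >= target      # did we end up choosing a good design?

print(successes / n_truths)   # estimated probability the program hits its target
```

Repeating the simulation across many sampled truths produces a distribution of outcomes, which gives a research manager an estimate of the risk of the experimental program rather than a single point forecast.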

Closing notes