ORF 418 Optimal Learning
Spring, 2012
Warren B. Powell
There is a wide range of problems where you have to make a decision without knowing the outcome (how much time it will take, how much profit you will make, what it will cost, whether the treatment will work, whether it will be a good restaurant). You may not even have a probability distribution (or, if you do, you are not sure it is the correct one). You can collect data, but this takes time and money. Sometimes you have to learn on the job, which means you have to live with the mistakes you make while you are still collecting information.
This course addresses the problem of collecting information efficiently. Sometimes we have to collect data using a predefined budget, after which we have to use what we learned to solve a problem. In other cases, we have to make decisions using what we know, but we can learn from these decisions to make better decisions in the future. We have to balance the cost of making the wrong decision now against the value of the information we gain to make better decisions in the future.
Prerequisites:
Statistics (ORF 245 or equivalent)
Probability (ORF 309 or equivalent)
Readings:
W. B. Powell and I. O. Ryzhov, Optimal Learning. A pre-publication copy can be purchased from Pequod in the U-Store.
Teaching assistant:
Daniel Salas
Format:
Two lectures per week
Weekly problem sets up to the midterm
Midterm
Final project: teams of students pick a problem requiring the efficient collection of information.
All readings are from Powell and Ryzhov, Optimal Learning. Sections marked with an * are not required.
February 6 - Introduction
Examples and illustrations
Elements of a learning problem
Course projects from previous years
Read: Chapter 1, sections 1.1-1.5
February 8 - Learning using decision trees
Use basic decision tree example
Demonstrate use of Bayes theorem when collecting information
Read: Chapter 1, section 1.6
February 13 - Adaptive learning I
Frequentist estimation
Bayesian updating
Conjugate priors
Read: Chapter 2, sections 2.1-2.2
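To illustrate the kind of computation this lecture covers, here is a minimal sketch of the normal-normal conjugate update with known measurement precision (the function name and interface are illustrative, not from the text):

```python
def bayes_update(mu, beta, W, beta_W):
    """Normal-normal conjugate Bayesian update.

    mu, beta  : prior mean and precision (1/variance) of the unknown truth
    W, beta_W : observed sample and its measurement precision
    Returns the posterior mean and precision.
    """
    beta_post = beta + beta_W          # precisions add
    mu_post = (beta * mu + beta_W * W) / beta_post  # precision-weighted average
    return mu_post, beta_post

# Example: with equal prior and measurement precision, the posterior
# mean lands halfway between the prior mean and the observation.
mu1, beta1 = bayes_update(mu=0.0, beta=1.0, W=10.0, beta_W=1.0)
```

The posterior mean is a precision-weighted average of prior and data, which is why a vague prior (small beta) is quickly overwhelmed by observations.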
February 15 - Adaptive learning II
Examples of learning
Value of information
Bayesian learning with correlated beliefs
Monte Carlo simulation
Read: Chapter 2, sections 2.2 (contd)-2.4
February 20 - The ranking and selection problem
Problem definition and examples
Deterministic vs. sequential learning policies
Overview of different heuristic learning policies
Read: Chapter 4
February 22 - Evaluating learning policies
Simulating a policy
Elementary Monte Carlo sampling
Estimating a confidence interval for a policy
Read: Chapter 4
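As a preview of simulating a policy, the sketch below evaluates a simple pure-exploration policy by Monte Carlo and reports a 95% confidence interval on its value. The policy and sampler are illustrative stand-ins, not from the text:

```python
import random
import statistics

def simulate_policy(policy, truths_sampler, budget, n_reps=1000, seed=0):
    """Estimate the value of a learning policy by Monte Carlo.

    Each replication samples a truth vector, lets the policy spend its
    measurement budget, then scores the truth of the alternative the
    policy finally recommends.  Returns (mean value, 95% CI half-width).
    """
    rng = random.Random(seed)
    values = []
    for _ in range(n_reps):
        truths = truths_sampler(rng)
        choice = policy(truths, budget, rng)
        values.append(truths[choice])
    mean = statistics.fmean(values)
    half = 1.96 * statistics.stdev(values) / len(values) ** 0.5
    return mean, half

def explore_policy(truths, budget, rng, noise=1.0):
    """Pure exploration: measure alternatives round-robin, pick best estimate."""
    k = len(truths)
    sums, counts = [0.0] * k, [0] * k
    for n in range(budget):
        x = n % k
        sums[x] += rng.gauss(truths[x], noise)
        counts[x] += 1
    est = [s / c for s, c in zip(sums, counts)]
    return max(range(k), key=est.__getitem__)

mean, half = simulate_policy(
    explore_policy,
    truths_sampler=lambda rng: [rng.gauss(0, 1) for _ in range(5)],
    budget=50,
)
```

With five alternatives and ten noisy measurements each, the recommended alternative's truth averages close to the expected maximum of five standard normals (about 1.16).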
February 27 - The knowledge gradient for ranking and selection
Derivation of the KG formula
Theoretical properties
Numerical illustrations
Read: Chapter 5, section 5.1
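The knowledge gradient formula for independent normal beliefs can be computed in a few lines. This is a sketch of the standard formula nu_x = sigma_tilde_x * f(zeta_x) with f(z) = z*Phi(z) + phi(z); the function names are illustrative:

```python
import math

def kg_values(mu, sigma, sigma_W):
    """Knowledge gradient for ranking and selection, independent normal beliefs.

    mu, sigma : prior means and standard deviations for each alternative
    sigma_W   : standard deviation of the measurement noise
    """
    def phi(z):   # standard normal density
        return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

    def Phi(z):   # standard normal cdf
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    kg = []
    for x in range(len(mu)):
        # std dev of the change in the posterior mean from one measurement
        sig_tilde = sigma[x] ** 2 / math.sqrt(sigma[x] ** 2 + sigma_W ** 2)
        best_other = max(mu[i] for i in range(len(mu)) if i != x)
        zeta = -abs(mu[x] - best_other) / sig_tilde
        kg.append(sig_tilde * (zeta * Phi(zeta) + phi(zeta)))
    return kg
```

Alternatives that are far behind the leader get an exponentially small KG value, which is how the policy avoids wasting measurements on clear losers.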
February 29 - The S-curve effect and the marginal value of information
The marginal value of information
The KG(*) algorithm
The economics of too many choices
Read: Chapter 3, section 3.2 and Chapter 5, section 5.2
March 5 - The knowledge gradient for correlated beliefs
Examples of correlated beliefs
Bayesian updating with correlated beliefs
Computing the KG formula
Read: Chapter 5, section 5.3.
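The correlated Bayesian update has a simple closed form: observing one alternative shifts the beliefs about every correlated alternative. A pure-Python sketch (illustrative interface, no linear-algebra library):

```python
def correlated_update(mu, Sigma, x, W, lam):
    """Update a multivariate normal belief after observing alternative x.

    mu    : list of prior means
    Sigma : prior covariance matrix (list of lists)
    x, W  : index measured and the observed value
    lam   : variance of the measurement noise
    """
    n = len(mu)
    denom = lam + Sigma[x][x]
    gain = (W - mu[x]) / denom
    # every alternative correlated with x moves in proportion to Sigma[i][x]
    mu_new = [mu[i] + gain * Sigma[i][x] for i in range(n)]
    Sigma_new = [[Sigma[i][j] - Sigma[i][x] * Sigma[x][j] / denom
                  for j in range(n)] for i in range(n)]
    return mu_new, Sigma_new
```

Note that a single measurement reduces the variance of *all* correlated alternatives, which is what makes correlated beliefs so powerful when the number of alternatives is large.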
March 7 - The knowledge gradient for correlated beliefs (contd)
Derivation of the KG formula for correlated beliefs
Relatives of the knowledge gradient
The problem of priors
Read: Chapter 5, section 5.4; skim 5.5; read 5.6-5.8
March 12 - The multiarmed bandit problem and Gittins indices
The online learning objective function
An optimal (but uncomputable) policy for online learning
Gittins indices for normally distributed random variables
Read: Chapter 6, sections 6.1-6.2
March 14 - Policies for online problems
Upper confidence bounding
The knowledge gradient for online learning
Read: Chapter 6, section 6.4
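Upper confidence bounding is easy to sketch: play the arm whose sample mean plus an exploration bonus is largest. Below is a minimal UCB1-style loop on simulated Gaussian rewards (the reward means and horizon are made up for illustration):

```python
import math
import random

def ucb1(rewards_fn, n_arms, horizon, rng):
    """UCB1 sketch: play each arm once, then always play the arm with the
    highest sample mean plus exploration bonus sqrt(2 ln t / N_x)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(horizon):
        if t < n_arms:
            x = t  # initialization: play each arm once
        else:
            x = max(range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = rewards_fn(x, rng)
        counts[x] += 1
        sums[x] += r
        total += r
    return total, counts

rng = random.Random(1)
means = [0.2, 0.5, 0.8]
total, counts = ucb1(lambda x, rng: rng.gauss(means[x], 0.1), 3, 500, rng)
```

The bonus shrinks like sqrt(ln t / N_x), so suboptimal arms are sampled only logarithmically often while the best arm collects most of the plays.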
Spring break
March 26 - Overview of learning problems and midterm review
Brief review of what we have covered
Fundamental elements of a learning problem
Potential class projects
Read: Chapter 7
March 28 - Midterm
April 2-4 - Knowledge gradient with parametric belief models
Linear regression review
Recursive updating equations
Compact derivation of correlation matrix
KG updating equations
Illustrations
Read: Chapter 8
April 9 - Subset selection problems
Applications
Computing the correlation matrix
Monte Carlo methods for large subsets
Read: Chapter 9
April 11 - Optimizing unimodal functions
Bisection search
Fibonacci search
Noisy bisection search
Read: Chapter 10, section 10.1
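For a unimodal function, each pair of interior evaluations lets us discard part of the interval. A noise-free bisection-style sketch (interval-reduction constants are one illustrative choice, not the text's):

```python
def bisection_max(f, a, b, tol=1e-6):
    """Locate the maximizer of a unimodal function on [a, b] by comparing
    two interior points and discarding the outer quarter each iteration."""
    while b - a > tol:
        m = 0.5 * (a + b)
        h = 0.25 * (b - a)
        if f(m - h) < f(m + h):
            a = m - h   # by unimodality, the maximizer lies right of m - h
        else:
            b = m + h   # otherwise it lies left of m + h
    return 0.5 * (a + b)
```

Each iteration shrinks the interval by a factor of 3/4, so the search converges geometrically; Fibonacci search achieves the best possible reduction per evaluation, and the noisy variants covered in lecture replace the comparison f(m-h) < f(m+h) with a statistical test.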
April 16 - Optimal learning in bidding
The Priceline problem
Bidding for goods and services
The logistics curve
Updating and learning
Read: Chapter 11, sections 11.1-11.3.
April 18 - Optimal stopping
The secretary problem
Sequential probability ratio test
Read: Chapter 12
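The classical secretary problem rule (observe roughly the first n/e candidates, then take the first one better than everything seen) is easy to verify by simulation. A minimal sketch with illustrative parameter choices:

```python
import math
import random

def secretary_rule(values, cutoff):
    """Observe the first `cutoff` candidates without stopping, then accept
    the first one that beats everything seen so far (else take the last)."""
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

def success_rate(n=20, reps=20000, seed=0):
    """Estimate P(select the overall best) under the ~n/e cutoff rule."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)
    wins = 0
    for _ in range(reps):
        vals = [rng.random() for _ in range(n)]
        wins += secretary_rule(vals, cutoff) == max(vals)
    return wins / reps
```

The simulated success probability should sit near the asymptotic value 1/e (about 0.37), even for modest n.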
April 23 to May 2 - Project presentations
May 2 - Closing notes
Statistical learning when we can choose what to observe
Overview of methods