Let’s say I have a set of students and a set of learning materials for an upcoming test. My goal is to run an experiment to see, via multiple linear regression, which learning materials are associated with better scores on the test. I’m also going to make the simplifying assumption that the effects of the learning materials are independent.
I’m looking for an experimental protocol with the following conditions:
I want to be able to give each student as many learning materials as possible. I don’t want a simple RCT but a factorial experiment, where students get many materials at once and the regression teases apart their individual effects.
I have a prior about which learning materials will do better; I’d like to use this prior by initially distributing those materials to more students.
(Bonus) Students are constantly entering this class; I’d love to be able to do some multi-armed-bandit thingy where, as I get more data, I continually update this prior.
I’ve looked at most of the links going from https://en.wikipedia.org/wiki/Optimal_design but they mostly give the mathematical interpretation of each method, not a clear explanation of the conditions under which you’d use it.

Thanks!
You want some sort of adaptive or sequential design (right?), so it’s not surprising that the optimal-design literature isn’t terribly helpful: those methods are intended for fixed, up-front design of experiments. They also tend to be oriented towards overall information or variance reduction, which doesn’t necessarily correspond to your loss function. Having priors affects the optimal design somewhat (usually, you can spend fewer datapoints on the variables you already have prior information about). For a Bayesian experimental design, you can simulate a set of parameters from your priors, simulate drawing n datapoints under a particular experimental design, fit the model, compute your loss (or entropy/variance), record the loss/design pair, and repeat many times; then pick the design with the best average loss.
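For concreteness, here is a minimal sketch of that simulation loop, assuming two learning materials with independent additive effects, normal priors, and squared error on the estimated coefficients as the loss; every number, prior, and candidate design in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def average_loss(design, n_sims=2000, sigma=5.0):
    """Estimate the average loss of one candidate design by simulation.

    design: (n_students, n_materials) 0/1 matrix saying who gets which material.
    The loss here is squared error of the OLS estimates of the coefficients;
    substitute whatever loss you actually care about.
    """
    X = np.column_stack([np.ones(len(design)), design])       # intercept + material indicators
    losses = []
    for _ in range(n_sims):
        beta = rng.normal([50.0, 5.0, 2.0], [5.0, 3.0, 3.0])  # draw parameters from the prior
        y = X @ beta + rng.normal(0.0, sigma, len(X))         # simulate the students' test scores
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)      # fit the linear model
        losses.append(np.sum((beta_hat - beta) ** 2))         # record this run's loss
    return float(np.mean(losses))

# Try a handful of candidate designs and keep the one with the best average loss.
n = 40
candidates = {
    "balanced": rng.integers(0, 2, size=(n, 2)),
    "favor_material_1": np.column_stack([rng.random(n) < 0.8,
                                         rng.random(n) < 0.3]).astype(int),
}
best = min(candidates, key=lambda name: average_loss(candidates[name]))
print("best design:", best)
```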
If you are running the learning-material experiment indefinitely and want to maximize cumulative test scores, then it’s a multi-armed bandit, and Thompson sampling on a factorial Bayesian model will work well & handle your 3 desiderata: you set your informative priors on each learning material, model the scores as a linear model (with interactions?), and Thompson sample from the model + data.

If you want to identify the optimal set of learning materials as fast as possible by the end of your experiment, then that’s the ‘best-arm identification’ multi-armed bandit problem. You can do a kind of Thompson sampling there too, best-arm Thompson sampling: http://imagine.enpc.fr/publications/papers/COLT10.pdf https://www.escholar.manchester.ac.uk/api/datastream?publicationPid=uk-ac-man-scw:227658&datastreamId=FULL-TEXT.PDF http://nowak.ece.wisc.edu/bestArmSurvey.pdf http://arxiv.org/pdf/1407.4443v1.pdf https://papers.nips.cc/paper/4478-multi-bandit-best-arm-identification.pdf One version goes: with the full posteriors, find the action A with the best expected loss; for all the other actions B..Z, Thompson sample their possible values; take the action with the best loss out of A..Z. This explores the other arms in proportion to their remaining chance of being the best arm (better than A), while firming up the estimate of A’s value.
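Here is a minimal sketch of what Thompson sampling on such a factorial linear model could look like, using conjugate normal priors and an assumed-known noise variance so the posterior stays closed-form; the priors, effect sizes, and two-material setup are all invented for illustration, and a comment marks the one place that changes for the best-arm variant:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 25.0                                    # assumed-known noise variance of test scores

# Conjugate normal priors on the coefficients: intercept, material 1, material 2.
prior_mean = np.array([50.0, 5.0, 2.0])          # informative priors: material 1 favored
prior_var = np.array([25.0, 9.0, 9.0])

arms = [(0, 0), (0, 1), (1, 0), (1, 1)]          # every combination of the two materials

def posterior(X, y):
    """Closed-form normal posterior over coefficients (diagonal prior, known noise)."""
    prec = np.diag(1.0 / prior_var) + X.T @ X / sigma2
    cov = np.linalg.inv(prec)
    mean = cov @ (prior_mean / prior_var + X.T @ y / sigma2)
    return mean, cov

X_hist = np.empty((0, 3))
y_hist = np.empty(0)
for student in range(200):
    mean, cov = posterior(X_hist, y_hist)        # with no data yet, this is just the prior
    beta = rng.multivariate_normal(mean, cov)    # Thompson step: sample one plausible world
    # Ordinary Thompson sampling: give this student the arm best in the sampled world.
    # (Best-arm variant: score the posterior-mean-best arm by its posterior *mean*,
    # the challengers by their Thompson-sampled values, and take the overall max.)
    scores = [beta[0] + beta[1] * a + beta[2] * b for a, b in arms]
    chosen = arms[int(np.argmax(scores))]
    x = np.array([1.0, *chosen])
    true_beta = np.array([50.0, 6.0, 1.0])       # invented ground truth for the simulation
    y_obs = x @ true_beta + rng.normal(0.0, np.sqrt(sigma2))
    X_hist = np.vstack([X_hist, x])
    y_hist = np.append(y_hist, y_obs)

print("posterior means after 200 students:", posterior(X_hist, y_hist)[0].round(2))
```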
You want some sort of adaptive or sequential design (right?), so it’s not surprising that the optimal-design literature isn’t terribly helpful: those methods are intended for fixed, up-front design of experiments.
So after looking at the problem I’m actually working on, I realize an adaptive/sequential design isn’t really what I’m after.
What I really want is a fractional factorial design that takes a prior (and trades off information learned against cumulative score). It seems like the goal of a multi-armed bandit is to do exactly that, but I only want to do it once, assuming a fixed prior which doesn’t update over time.
Do you think your Monte Carlo Bayesian experimental design is the best way to do this, or can I use some of the insights from Thompson sampling to make this process a bit less computationally expensive (which is important for my particular use case)?
but I only want to do it once, assuming a fixed prior which doesn’t update over time.
I still don’t understand what you’re trying to do. If you’re trying to maximize test scores by picking textbooks, and this is done many times, you want a multi-armed bandit to help you find the best textbook over the many students exposed to different combinations. If you are throwing out the information from each batch and assuming the interventions are totally different each time, then your decision is made before you do any learning, and your optimal choice is simply whatever your prior says: the value of information lies in the subsequent decisions it affects, but since you’re not updating your prior, the information can’t change any decision after the first one and so is worthless. (Concretely: if your prior says material A beats material B, a frozen prior means every batch gets A no matter what the data from earlier batches said, so collecting that data gains you nothing.)
Do you think your Monte Carlo Bayesian experimental design is the best way to do this, or can I use some of the insights from Thompson sampling to make this process a bit less computationally expensive (which is important for my particular use case)?
Dunno. Simulation is the most general way of tackling the problem, which will work for just about anything, but it can be extremely computationally expensive. There are many special cases which can reuse computations or have closed-form solutions, but they must be considered on a case-by-case basis.
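As one example of such a special case (under the same conjugate normal linear model sketched above, with known noise variance; both are assumptions, not facts about your problem): the posterior has a closed form and can be folded forward one student at a time with a rank-one Sherman-Morrison update, so each step costs O(d²) operations rather than a full O(d³) refit:

```python
import numpy as np

def posterior_step(cov, eta, x, y, sigma2):
    """Fold one new observation (x, y) into a normal posterior over coefficients.

    State: posterior covariance `cov` and precision-weighted mean `eta`
    (so the posterior mean is cov @ eta).  One observation adds x x'/sigma2
    to the precision matrix; Sherman-Morrison inverts that rank-one change
    in O(d^2), so nothing is ever refit from scratch.
    """
    cx = cov @ x
    cov = cov - np.outer(cx, cx) / (sigma2 + x @ cx)   # rank-one covariance update
    eta = eta + x * y / sigma2                         # accumulate sufficient statistics
    return cov, eta

# Usage: start from the prior, fold in students as they arrive.
sigma2 = 25.0
prior_mean = np.array([50.0, 5.0, 2.0])               # illustrative numbers, as before
cov = np.diag([25.0, 9.0, 9.0])
eta = np.linalg.solve(cov, prior_mean)                # prior precision times prior mean
for x, y in [(np.array([1.0, 1.0, 0.0]), 58.0),
             (np.array([1.0, 0.0, 1.0]), 49.0)]:
    cov, eta = posterior_step(cov, eta, x, y, sigma2)
print("posterior mean:", (cov @ eta).round(2))
```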