Background: lukeprog wrote this post about articles he wouldn’t have the time to write, and the first one on the list was something I was confident about, and so I decided to write a post on it. (As a grad student in operations research, practical decision theory is what I spend most of my time thinking about.)
Amusingly enough, I had the most trouble working in his ‘classic example.’ Decision analysis tends to hinge on Bayesian assumptions often referred to as “small world”: that is, your model is complete and unbiased (if you knew there was a bias in your model, you’d incorporate that into your model and it would be unbiased!). Choosing a career is more of a search problem, though: specifying what options you have is probably more difficult than picking from them. You can still use the value of information (VoI) concept, but mostly for deciding when to stop accumulating new information. Before you’ve done your first research, you can’t predict the results of your research very well, and so it’s rather hard to put a number on how valuable looking into potential careers is.
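For concreteness, here’s a minimal sketch of the small-world calculation in Python. The two states, two actions, and all the payoffs and probabilities are invented for illustration; it computes the expected value of perfect information (EVPI), an upper bound on what any amount of research could be worth:

```python
# Minimal sketch of expected value of perfect information (EVPI).
# States, actions, payoffs, and probabilities are all invented.

p = {"state_A": 0.7, "state_B": 0.3}        # current beliefs about the world
payoff = {
    "act_1": {"state_A": 100, "state_B": 20},
    "act_2": {"state_A": 40,  "state_B": 90},
}

# Best expected payoff if we must act now, before learning anything:
value_now = max(sum(p[s] * payoff[a][s] for s in p) for a in payoff)

# Expected payoff if an oracle revealed the true state before we chose:
value_informed = sum(p[s] * max(payoff[a][s] for a in payoff) for s in p)

evpi = value_informed - value_now
print(value_now, value_informed, evpi)      # 76.0 97.0 21.0
```

Note that the calculation only goes through because the tables of states and actions are assumed complete; that completeness is exactly what fails in a career search, where enumerating the options is the hard part.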
There seems to be a lot of interest in abstract decision theory, but is there interest in more practical decision analysis? That’s the sort of thing I suspect I could write a useful primer on, whereas I find it hard to care about, say, Sleeping Beauty.
Please do! This is exactly the sort of topic that should be LessWrong’s specialty.
Agree.
My reaction while reading this post was “whoa, this seems really valuable, and the sort of thing that should have been discussed on LW years ago”. So yes, please write more.
Please write an article about “practical decision analysis”. I tried to learn about this briefly before, but didn’t learn anything useful. I must be missing the right keywords and phrases that are used in the field, so I would definitely appreciate an overview, or anything that helps improve everyday decision making.
I, too, find it hard to care about Sleeping Beauty, which is perhaps why this post marks the first time in years of reading LW that I’ve actually dusted off my math spectacles and tried to rigorously understand what some of this decision theory notation actually means.
So count me in for a rousing endorsement of interest in more practical decision theory.
Side question: why are these called “small world” assumptions? I’ve heard the term before but didn’t understand it then either.
I was introduced to the term by Binmore’s Rational Decisions. Amusingly, he asks what small worlds are on page 2 but doesn’t get around to answering the question until page 117.
Essentially, a “small world” is one in which you can “look before you leap.” When playing chess by the rules, you could in theory determine every possible position which could be legally reached from the current position. If you have a sufficiently good model of your opponent and know your own decision strategy, you could even assign a probability to every terminal board position in that tree. (This world may not seem very small, because there are combinatorially many states!)
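As a toy illustration of that kind of exhaustive expansion, here is a sketch in Python. A trivial invented countdown game (each move removes one or two tokens; the game ends at zero) stands in for chess, and a uniformly random policy stands in for both your strategy and the opponent model:

```python
# "Look before you leap": expand every position reachable from the current
# one and push probability all the way down to the terminal leaves.
# The game and the policies are invented stand-ins; chess works the same
# way in principle, just with combinatorially many states.

def legal_moves(n):
    """Successor states: remove 1 or 2 tokens, never going below zero."""
    return [n - k for k in (1, 2) if n - k >= 0]

def uniform_policy(moves):
    """Simplest possible player model: all legal moves equally likely."""
    return [(m, 1.0 / len(moves)) for m in moves]

def terminal_distribution(state, prob=1.0, path=()):
    """Yield (path_to_terminal, probability) for every leaf of the tree."""
    path = path + (state,)
    moves = legal_moves(state)
    if not moves:                            # no tokens left: terminal position
        yield path, prob
        return
    for nxt, p in uniform_policy(moves):
        yield from terminal_distribution(nxt, prob * p, path)

for leaf, p in terminal_distribution(4):
    print(leaf, p)
print(sum(p for _, p in terminal_distribution(4)))   # sums to 1.0, as it must
```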
A large world is one in which you cannot cross some bridges until you get to them. The example given by Binmore is that, at one time, people thought the world was flat; now, they think it’s round. That’s a process that could be described by Bayesian updating, but it’s not clear that’s the best way to do things. When I think the world is flat, does it make much sense to enumerate every possible way for the world to be non-flat and parcel out a bit of belief to each? I would argue against such an approach. Wait until you discover that the Earth is roughly spherical, then work from there. That is, parcel out some probability to “world is not flat” and then, when you get evidence for that, expand on it. In a “small world,” everything is expanded from the beginning.
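A minimal sketch of that lumped-hypothesis style of updating, with all the numbers invented for illustration:

```python
# Large-world updating: keep a lumped catch-all hypothesis and only expand
# it into specific alternatives once evidence forces the issue.
# All probabilities and likelihoods here are invented.

beliefs = {"flat": 0.99, "not flat": 0.01}   # catch-all, not enumerated

# Evidence arrives (say, ships vanishing hull-first over the horizon):
likelihood = {"flat": 0.05, "not flat": 0.95}
unnorm = {h: beliefs[h] * likelihood[h] for h in beliefs}
z = sum(unnorm.values())
beliefs = {h: v / z for h, v in unnorm.items()}

# Only now expand the catch-all into concrete shapes, splitting its
# probability among them (the split itself is a fresh modeling choice):
p_not_flat = beliefs.pop("not flat")
beliefs["sphere"] = 0.8 * p_not_flat
beliefs["other shape"] = 0.2 * p_not_flat
print(beliefs)
```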
This happens in many numerical optimization problems. Someone in my department (who defended their PhD yesterday, actually) was working on a decision model for Brazilian hydroelectric plants. The operators have to decide how much of the water stored in dams to use each month, and they face stochastic water inflows. The model looks ahead four years to help determine how much water to use this month, but it only tells you how much water to use this month. There’s no point in computing a lookup table for next month, because next month you can take the actual measurements for the most recent month (which you have probability ~0 of predicting exactly) and solve the model again, looking ahead four years from the most recent data.
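This re-solve-every-period pattern is often called a rolling (or receding) horizon. Here is a minimal sketch with a single toy reservoir and invented numbers standing in for the real optimization; the actual Brazilian models are vastly richer:

```python
# Rolling-horizon sketch: solve a lookahead model, commit only to the first
# period's decision, observe reality, then re-solve from the measured state.
# The one-dam "model" and every number here are invented stand-ins.
import random

HORIZON = 48          # look ahead 48 months (four years)
CAPACITY = 100.0      # maximum stored water

def plan_release(level, inflow_forecast):
    """Toy stand-in for the optimizer: avoid spilling given forecast
    inflows while keeping some water in reserve. A real model would
    solve a stochastic program over the whole horizon."""
    expected = sum(inflow_forecast) / len(inflow_forecast)
    surplus = max(level + expected - CAPACITY, 0.0)
    return min(level, surplus + 0.5 * expected)

level = 60.0
for month in range(12):
    forecast = [20.0] * HORIZON                 # forecast of future inflows
    release = plan_release(level, forecast)     # commit ONLY to this month
    actual = random.gauss(20.0, 5.0)            # reality differs from forecast
    level = min(max(level - release + actual, 0.0), CAPACITY)
    # Next iteration re-solves from the measured level: no lookup table needed.
    print(f"month {month:2d}: released {release:5.1f}, level now {level:5.1f}")
```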
I presume it’s because having a complete model of a problem requires the problem to be small enough that you can actually know all the relevant factors. This is in contrast to, e.g., problems in the social sciences, where the number of things that might possibly affect the result (the size of the world) is large enough that you can never have a complete model.
As another example, many classic AI systems like SHRDLU fared great in small, limited domains where you could hand-craft rules for everything. They proved pretty much useless in larger, more complex domains, where you ran into a combinatorial explosion of needed rules and variables.
I had assumed that the term was related to small-world networks (the mathematical concept), though it doesn’t seem to have quite the same application.
The start of my decision analysis sequence is here.
Seconding Steven. You write well and this is an interesting and useful topic that has not been sufficiently explored.