Side question: Why are these called “small world” assumptions? I’ve heard the term before but didn’t understand it there either.
I was introduced to the term by Binmore’s Rational Decisions. Amusingly, he asks what small worlds are on page 2 but doesn’t get around to answering the question until page 117.
Essentially, a “small world” is one in which you can “look before you leap.” When playing chess by the rules, you could in theory determine every possible position that could be legally reached from the current position. If you have a sufficiently good model of your opponent and know your own decision strategy, you could even assign a probability to every terminal board position in that tree. (This world may not seem very small, because there are combinatorially many states!)
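To make the “look before you leap” picture concrete, here is a minimal sketch in Python. Chess itself is far too large to enumerate this way, so the sketch uses tic-tac-toe, and it stands in a uniformly random player for the “sufficiently good model of your opponent”; both simplifications are mine, not Binmore’s.

```python
from functools import lru_cache

# The eight winning lines of a tic-tac-toe board, cells indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def terminal_distribution(board, to_move):
    """Probability of reaching each terminal position from `board`,
    assuming both players pick uniformly among their legal moves."""
    if winner(board) is not None or "." not in board:
        return {board: 1.0}
    moves = [i for i, cell in enumerate(board) if cell == "."]
    nxt = "O" if to_move == "X" else "X"
    dist = {}
    for i in moves:
        child = board[:i] + to_move + board[i + 1:]
        for terminal, p in terminal_distribution(child, nxt).items():
            dist[terminal] = dist.get(terminal, 0.0) + p / len(moves)
    return dist

dist = terminal_distribution("." * 9, "X")
print(len(dist), "distinct terminal positions")
print("P(X wins) =", round(sum(p for b, p in dist.items() if winner(b) == "X"), 3))
```

With a better opponent model only the move probabilities would change; the structure stays the same: a full distribution over terminal positions, computed before you ever move.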
A large world is one in which you cannot cross some bridges until you get to them. The example given by Binmore is that, at one time, people thought the world was flat; now, they think it’s round. That’s a process that could be described by Bayesian updating, but it’s not clear that’s the best way to do things. When I think the world is flat, does it make much sense to enumerate every possible way for the world to be non-flat and parcel out a bit of belief to each? I would argue against such an approach. Wait until you discover that the Earth is roughly spherical, then work from there. That is, parcel out some probability to “world is not flat” and then, when you get evidence for that, expand on it. In a “small world,” everything is expanded from the beginning.
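A toy numeric version of that “expand lazily” idea, with every number invented purely for illustration:

```python
# Coarse prior: one lumped "not flat" alternative instead of a full
# enumeration of every possible non-flat shape.
prior = {"flat": 0.9, "not flat": 0.1}

# Made-up likelihoods for one piece of evidence, say ships disappearing
# hull-first over the horizon.
likelihood = {"flat": 0.05, "not flat": 0.8}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnormalized.values())
posterior = {h: p / z for h, p in unnormalized.items()}
print(posterior)  # the lumped "not flat" bucket now dominates

# Only now expand "not flat" into finer hypotheses, splitting its mass
# (the 70/30 split here is just as arbitrary as the other numbers).
expanded = {
    "flat": posterior["flat"],
    "roughly spherical": posterior["not flat"] * 0.7,
    "some other non-flat shape": posterior["not flat"] * 0.3,
}
print(expanded)
```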
This happens in many numerical optimization problems. Someone in my department (who defended their PhD yesterday, actually) was working on a decision model for Brazilian hydroelectric plants. They have to decide how much of the water stored in their dams to use each month, and they face stochastic water inflows. The model looks ahead by four years to help determine how much water to use this month, but it only tells you how much water to use this month. There’s no point in computing a lookup table for next month, because next month you can take the actual measurements for the most recent month (which you had essentially no chance of predicting exactly) and solve the model again, looking ahead four years from the most recent data.
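That re-solving pattern is often called a rolling (or receding) horizon. The sketch below is emphatically not the actual hydro model, just the skeleton of the loop; the placeholder heuristic inside solve_lookahead, the 48-month horizon, and the inflow numbers are all made up.

```python
import random

def solve_lookahead(storage, horizon_months=48):
    """Stand-in for the real optimization: returns a planned release for
    every month of the horizon, but only plan[0] will ever be used."""
    plan = []
    s = storage
    for _ in range(horizon_months):
        release = 0.1 * s          # placeholder policy: release 10% of storage
        s = s - release + 50.0     # assume an average inflow of 50 units
        plan.append(release)
    return plan

storage = 1000.0
for month in range(12):
    plan = solve_lookahead(storage)       # plan four years ahead...
    release = plan[0]                     # ...but act only on this month
    inflow = random.uniform(20.0, 80.0)   # the actual, unpredictable inflow
    storage = storage - release + inflow
    print(f"month {month}: released {release:.1f}, storage now {storage:.1f}")
```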
I presume it’s because having a complete model of a problem requires looking at a problem that is small enough that you can actually know all the relevant factors. This is in contrast to, e.g., problems in the social sciences, where the number of things that might possibly affect the result (the size of the world) is large enough that you can never have a complete model.
As another example, many classic AI systems like SHRDLU fared well in small, limited domains where you could hand-craft rules for everything. They proved pretty much useless in larger, more complex domains, where you ran into a combinatorial explosion of needed rules and variables.
I had assumed that the term was related to small-world networks (the mathematical concept), though it doesn’t seem to have quite the same application.