LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update your beliefs when their predictions fail.
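To make the updating maxim concrete, here is a minimal sketch of Bayes' rule in action. The coin-flipping hypotheses, the priors, and the likelihood numbers below are a made-up toy example of my own, not anything specific to LW's formulation:

```python
# Toy illustration of "update on the evidence": two hypotheses about a coin.
# The hypotheses, priors, and likelihoods here are invented for the example.
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.9}   # P(heads | hypothesis)

def update(beliefs, likelihoods):
    # Bayes' rule: posterior is proportional to prior * likelihood, then renormalize.
    unnormalized = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = dict(priors)
for _ in range(5):                        # observe five heads in a row
    beliefs = update(beliefs, p_heads)

print(beliefs)   # "biased" now dominates, so future predictions (and actions) change
```

The only point of the sketch is that evidence your current favourite hypothesis predicts poorly shifts weight toward hypotheses that predict it well, which is what "updating on the evidence of things unpredicted by your current model" cashes out to.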
Stating the advice this baldly makes me wonder about alternatives. What if we deny each of these premises and see what we get? Other than Bayes’ world, which other worlds might we be living in?
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra’s world, the world of tragedy — in which the people who know best what the future will bring are the least capable of doing anything about it. In the world of heroic myth, it is not oracles but rather heroes and villains who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Heroes and villains defy oracles, and come to their predicted triumphs or fates not through prediction, but in spite of it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the world are relatively close to our priors, but our goals are not known to us initially, and are in fact very difficult to discover. We might consider this to be Buddha’s world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. When we choose actions that cause bad effects, we aren’t so much acting on faulty beliefs about the world as pursuing goals that are illusory or empty of satisfaction.
There are other models as well that could be extrapolated by denying other premises (explicit or implicit) of Bayes’ world. Each of these models would relate prediction, action, and goals in different ways. We might imagine Lovecraft’s world, Qoheleth’s world, or Nietzsche’s world.
Each of these models of the world — Bayes’ world, Cassandra’s world, Buddha’s world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes’ world, what evidence might suggest that we are in Cassandra’s or Buddha’s world?
Edited lightly — In the first couple of paragraphs, I’ve clarified that I’m talking about epistemic and instrumental rationality as advice for humans, not about whether we live in a world where Bayesian math works. The latter seems obviously true.
I don’t see these as alternatives, more like complements.
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra’s world
It’s a memorable name, but it does not need to be called anything so dramatic, given that we live in this world already. For example, most of us make a likely correct prediction that if we procrastinate less then we will be better off, yet we still waste time and regret it later.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals.
Why this AIXIsm? We are a part of the world, and the most important part of it for many people, so updating your model of self is very Bayesian. Lacking this self-update is what leads to a “Cassandra’s world”.
That’s an interesting post. Let me throw in some comments.
I am not sure about the Cassandra’s world. Here’s why:
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not “enable goal-accomplishing actions” for him—in the Bayes’ world as well. Is the Cassandra’s world defined by being powerless?
Heroes in myth defy predictions essentially by taking a wider view—by getting out of the box (or by smashing the box altogether, or by altering the box, etc.). Almost all predictions are conditional and by messing with conditions you can affect predictions—what will come to pass and what will not. That is not a low-level world property, that’s just a function of how wide your framework is. Kobayashi Maru and all that.
As to the Buddha’s world, it seems to be mostly about goals and values—things on the subject of which the Bayes’ world is notably silent.
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not “enable goal-accomplishing actions” for him—in the Bayes’ world as well. Is the Cassandra’s world defined by being powerless?
Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put in other terms, “the world is divided into players and NPCs and your beliefs are irrelevant to which of those categories you are in.”
I don’t particularly think either of these is likely, but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for “Cassandra World” reasons.
So then Cassandra’s world is essentially a predetermined world where fate rules and you can’t change anything. None of your choices matter.
Alternately, in such a world, it could be that improving your predictive capacity necessarily decreases your ability to achieve your goals.
Hence the classical example of Cassandra, who was given the power of foretelling the future, but with the curse that nobody would ever believe her. To paraphrase Aladdin’s genie: “Phenomenal cosmic predictive capacity … itty bitty evidential status.”
Yes, a Zelazny or Smullyan character could find ways to subvert the curse, depending on just how literal-minded Apollo’s “install prophecy” code was. If Cassandra took a lesson in lying from Epimenides, she mightn’t have had any problems.
You’re right about the prisoner. (Which also reminds me of Locke’s locked-room example regarding voluntariness.) That particular situation doesn’t distinguish those worlds.
(I should clarify that in each of these “worlds”, I’m talking about situations that humans find themselves in, specifically. For instance, Bayes math clearly works for abstract agents with predefined goals. What I want to ask is: to what extent does this provide humans with good advice as to how they should explicitly think about their beliefs and goals? What System-2 meta-beliefs should we adopt, and what System-1 habits should we cultivate?)
Heroes in myth defy predictions essentially by taking a wider view—by getting out of the box (or by smashing the box altogether, or by altering the box, etc.).
I think we’re thinking about different myths. I’m thinking mostly of tragic heroes and anti-heroes who intentionally attempt to avoid their fate, only to be caught by it anyway — Oedipus, Agamemnon, or Achilles, say; or Macbeth. With hints of Dr. Manhattan and maybe Morpheus from Sandman. If we think we’re in Bayes’ world, we expect to be in situations where getting better predictions gives us more control over outcomes, to drive them towards our goals. If we think we’re in Cassandra’s world, we expect to be in situations where that doesn’t work.
As to the Buddha’s world, it seems to be mostly about goals and values—things on the subject of which the Bayes’ world is notably silent.
That’s pretty much exactly one of my concerns with the Bayes-world view. If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
If we think we’re in Bayes’ world, we expect to be in situations where getting better predictions gives us more control over outcomes
No, not really. Bayes gives you information, but doesn’t give you capabilities. A perfect Bayesian will find the optimal place/path within the constraints of his capabilities, but no more. Someone with worse predictions but better abilities might (or might not) do better.
If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
Um, Bayes doesn’t give you any promises, never mind guarantees, about your satisfaction. It’s basically like classical logic—it tells you the correct way to manipulate certain kinds of statements. “Satisfaction” is nowhere near its vocabulary.
Um, Bayes doesn’t give you any promises, never mind guarantees, about your satisfaction. It’s basically like classical logic—it tells you the correct way to manipulate certain kinds of statements. “Satisfaction” is nowhere near its vocabulary.
Exactly! That’s why I asked: “To what extent does [Bayes] provide humans with good advice as to how they should explicitly think about their beliefs and goals?”
We clearly do live in a world where Bayes math works. But that’s a different question from whether it represents good advice for human beings’ explicit, trained thinking about their goals.
Edit: I’ve updated the post above to make this more clear.
Other than Bayes’ world, which other worlds might we be living in?
A world with causes and effects. (Bayes’ world as described is Cassandra’s world, for the usual reasons of “prediction” not being what you want for choosing actions).
[ There was something else here, having to do with how it is hard to use causal info in a Bayesian way, but I deleted it for now in order to think about it more. You can ask me about it if interested. The moral is, it’s not so easy to just be Bayesian with arbitrary types of information. ]
Hmm. I think I know what you’re referring to — aside from prediction, you also need to be able to factor out irrelevant information, consider hypotheticals, and construct causal networks. A world where cause and effect didn’t work a good deal of the time might still be predictable, but choosing actions wouldn’t work very effectively.
(I suspect that if I’d read more of Pearl’s Causality I’d be able to express this more precisely.)
Is that what you’re getting at, at all?
Well, when you use Bayes theorem, you are updating based on a conditioning event. But with causal info, it is not a conditioning event anymore. I don’t think it is literally impossible to be Bayesian with causal info, but it sounds hard. I am still thinking about it.
So I am not sure how practical this “be more Bayesian” advice really is. In practice we should be able to use information of the form “aspirin does not cause cancer”, right?
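To make the conditioning-versus-intervention gap concrete, here is a toy simulation of my own; the hidden confounder, the probabilities, and the aspirin/cancer setup are all invented for illustration. Observing that someone takes aspirin should change your expectation, because aspirin use is correlated with a hidden condition; forcing aspirin on someone (the intervention reading of the causal claim) changes nothing, matching "aspirin does not cause cancer":

```python
# Toy model: a hidden confounder drives both aspirin use and cancer;
# aspirin itself has zero causal effect. All numbers are made up.
import random

random.seed(0)

def draw(intervene_aspirin=None):
    confounder = random.random() < 0.3          # e.g. a chronic illness
    if intervene_aspirin is None:
        # Observational world: aspirin use depends on the confounder.
        aspirin = random.random() < (0.8 if confounder else 0.2)
    else:
        # Intervention: set aspirin directly, cutting its dependence on the confounder.
        aspirin = intervene_aspirin
    # Cancer depends only on the confounder; aspirin plays no causal role.
    cancer = random.random() < (0.2 if confounder else 0.05)
    return aspirin, cancer

observational = [draw() for _ in range(100_000)]
p_cancer_given_aspirin = (
    sum(cancer for aspirin, cancer in observational if aspirin)
    / sum(aspirin for aspirin, _ in observational)
)

interventional = [draw(intervene_aspirin=True)[1] for _ in range(100_000)]
p_cancer_do_aspirin = sum(interventional) / len(interventional)

print(p_cancer_given_aspirin)  # ~0.14: aspirin-takers get cancer more often (they are sicker)
print(p_cancer_do_aspirin)     # ~0.095: forcing aspirin leaves the cancer rate at baseline
```

Plain Bayesian conditioning gives you the first number; acting on the causal claim requires something more like the second, which is roughly the difficulty being pointed at here.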
For one thing, we already have strong evidence rationality is a useful idea: it’s called science & technology.
Cassandra’s world: Mythical predictions seem to be unconditional, whereas Bayesian predictions are conditional on your own actions and thus can be acted upon.
Buddha’s world: Well, understanding your own values and understanding how to maximize them are two tasks, neither of which is redundant. I think rationality is useful in understanding your own values as well, for example by analyzing them through evolutionary psychology or cognitive neuroscience. Moreover, empirically, our understanding of our own values also improves as we learn epistemic facts and analyze hypothetical scenarios. Without rationality it is difficult to create sufficiently precise language for formulating those values.
If not rationality, then what?
Replace religion with this dilemma and you have NS’s Microkernel religion.
I’d tell you what method I would use to evaluate the evidence to decide which world we are in, but it seems like you denied it in the premise. ;)