I only recently got involved with LessWrong, and I’d like to explicitly point out that this is a tangent. I made this account to make an observation about the following passage:
Some clever fellow is bound to say, “Ah, but since I have hope, I’ll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility. So the two events are related after all.” At this point, we have to drag in Bayes’s Theorem and measure the charge of entanglement quantitatively. Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only carries a very tiny charge of entanglement, will still mess up your mapping.
First, let me say that I agree with your dismissal of that instance, but I think the idea suggests another argument that is interesting and somewhat related. While the accuracy of an estimate depends very little on any one individual's beliefs or actions, much as in discussions of the Prisoner's Dilemma or of why an individual should bother to vote, I can see a reasonable argument that one person's beliefs can stand in for those of a whole class of individuals, and that class can actually affect the probabilities.
Arguing that hope makes the world better and so staves off war still seems silly, since the effect would likely be very small; instead I argue from the perspective of the “reasonableness” of actions. I read “pure hope” as revealing a kind of desperation, an unwillingness to consider nuclear war a reasonable action under nearly any circumstance. A widespread belief that nuclear war is an unreasonable action would certainly affect the probability of a nuclear war occurring, both for political reasons (the fallout of starting such a war) and statistical ones (government officials are drawn from the population), and so such a belief could have a noticeable effect on the possibility of nuclear war. Furthermore, it can be argued that, for a flesh-and-blood emotional being with a flawed lens, viewing an outcome as likely can make it seem less unreasonable (more reasonable). One possible argument for why nuclear war may happen later rather than sooner would then look like: nuclear war is widely regarded as an unreasonable action to take, and the clear potential danger of nuclear war makes this view unlikely to change in the foreseeable future.
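To make the “officials are drawn from the population” clause a bit more concrete, here is a minimal toy sketch, purely illustrative and not part of the original argument: assume (hypothetically) that a war can only be initiated by a deciding official who personally regards nuclear war as a reasonable option, and that officials hold that view at roughly the same rate $q$ as the general population. Then

\[
P(\text{war}) \;=\; P(\text{war}\mid\text{official deems it reasonable})\cdot q \;\le\; q,
\]

so a widespread belief that nuclear war is unreasonable (a smaller $q$) directly tightens that bound, even before counting any of the political-cost effects. The single-gatekeeper assumption is of course a gross simplification, but it shows the direction of the effect.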
Following this, an argument that it is beneficial to believe nuclear war will happen later: believing that nuclear war is likely could erode the seeming “unreasonableness” of the action, which would in turn increase the likelihood of that outcome. As a representative of the class of individuals thus affected, I should therefore believe nuclear war is unlikely, so as to make it less likely.
I am not claiming I believe the conclusions of this argument, only that I found it interesting and wanted to share it. Note also that the second argument is not an argument that nuclear war is unlikely; it is an argument for believing it is unlikely, independent of the actual likelihood, which is obviously something a perfect rationalist should never endorse (and is why the argument relies on not being a perfect rationalist). I’d like to hear any rebuttals, if anyone is interested in making them. Personally, I find the “belief erodes unreasonableness” step the most suspect, but I can’t quite figure out how to argue against it without essentially saying “you should be a better rationalist, then”.