Someone who reacts to a gap in the sky with “it’s most likely a hallucination” may, with incredibly low probability, encounter the described hypothetical where it is not a hallucination, and lose out. Yet this person would perform far better if their drink were spiked with LSD or if they naturally developed an equivalent fault.
And of course the issue is that the maximum, or even typical, impact of the faulty belief processing described here could be far larger than $5: the hypothesis could have required you to give away everything, to work harder than you normally would and give away the income, or, worse, to kill someone. And if it is processed with disregard for the probability of a fault, such dangerous failure modes become more likely.
One of the points in the post was a dramatically non-Bayesian dismissal of updates on the possibility of hallucination. An agent of finite reliability faces a tradeoff between its behaviour under failure and its behaviour in unlikely circumstances.
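As a rough illustration of that tradeoff, here is a minimal sketch; the fault rate, event probability and losses below are made-up assumptions of mine, not anything from the post or the thread.

```python
# Toy comparison of two fixed policies for an agent of finite reliability.
# All numbers are illustrative assumptions.

def expected_loss(policy_trusts_observation: bool,
                  p_fault: float = 1e-3,        # assumed chance the observation is an internal fault/hallucination
                  p_event: float = 1e-12,       # assumed chance the exotic observation is actually real
                  loss_if_fooled: float = 1e6,  # cost of acting on a hallucination (giving everything away, etc.)
                  loss_if_missed: float = 1e3): # cost of dismissing a genuinely real event
    """Expected loss of a fixed policy over the two failure modes."""
    if policy_trusts_observation:
        # The trusting policy pays the large cost whenever the observation was a fault.
        return p_fault * loss_if_fooled
    # The dismissive policy only loses in the far rarer case that the event was real.
    return p_event * loss_if_missed

print(expected_loss(True))   # dominated by the fault term
print(expected_loss(False))  # tiny expected loss
```

With these assumed numbers the dismissive policy loses almost nothing in expectation, while the trusting policy’s expected loss is driven entirely by the agent’s own failure rate; the ranking only flips if the exotic event is much more probable than an internal fault.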
With regard to fixing up the probabilities, there is the issue that early in its life an agent is uniquely positioned to influence its future. Every elderly agent goes through early life; while the probability of finding your atheist variation on the theme of an immaterial soul in an early-age agent is low, the probability that an agent will be making decisions at an early age is 1, and it’s not quite clear that we can use this low probability. (It may be more reasonable to assign low probability to an incredibly long lifespan, though, in a manner similar to the speed prior.)
> Someone who reacts to a gap in the sky with “it’s most likely a hallucination” may, with incredibly low probability, encounter the described hypothetical where it is not a hallucination, and lose out. Yet this person would perform far better if their drink were spiked with LSD or if they naturally developed an equivalent fault.
What Eliezer is actually saying about this kind of hallucination:
> I mean, in practice, I would tend to try and take certain actions intended to do something about the rather high posterior probability that I was hallucinating and be particularly wary of actions that sound like the sort of thing psychotic patients hallucinate, but this is an artifact of the odd construction of the scenario and wouldn’t apply to the more realistic and likely-to-be-actually-encountered case of the physics theory which implied we could use dark energy for computation or whatever.
The kind of ‘hallucination’ discussed in the posts is more about the issue of being forced to believe you are a Boltzmann brain, or a descendant human seamlessly hallucinating being an ‘ancestor’, before being able to believe that there will likely be many humans in the future. This is an entirely different kind of issue.
> Someone who reacts to a gap in the sky with “it’s most likely a hallucination” may, with incredibly low probability, encounter the described hypothetical where it is not a hallucination, and lose out. Yet this person would perform far better if their drink were spiked with LSD or if they naturally developed an equivalent fault.
>
> And of course the issue is that the maximum, or even typical, impact of the faulty belief processing described here could be far larger than $5: the hypothesis could have required you to give away everything, to work harder than you normally would and give away the income, or, worse, to kill someone. And if it is processed with disregard for the probability of a fault, such dangerous failure modes become more likely.
This is true, but the real question here is how to fix a non-convergent utility calculation.
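As a toy picture of what “non-convergent” means here: assume a prior that shrinks geometrically while the promised payoff grows faster, in the Pascal’s-mugging pattern. The prior, payoff schedule and utility cap below are assumptions of mine; the cap is just one candidate fix, not a claim about the right one.

```python
# Partial sums of an expected-utility calculation where the payoff grows
# faster than the prior decays, so the unbounded sum never converges.

def partial_expected_utility(n_terms: int, bounded: bool = False, utility_cap: float = 1e9) -> float:
    """Return sum over n of prior(n) * utility(n) for the first n_terms hypotheses."""
    total = 0.0
    for n in range(1, n_terms + 1):
        prior = 2.0 ** -n      # prior shrinks geometrically
        utility = 10.0 ** n    # promised payoff grows faster than the prior shrinks
        if bounded:
            utility = min(utility, utility_cap)  # one possible fix: bound the utility
        total += prior * utility
    return total

for k in (10, 20, 30):
    print(k, partial_expected_utility(k), partial_expected_utility(k, bounded=True))
```

The unbounded partial sums keep growing without limit as more hypotheses are included, while the capped version settles to a finite value; whether bounding utilities (or penalising the prior) is the right fix is exactly the open question.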