I think you’re right to conclude you’re insane in the case of Omega. It closely parallels traditional delusions, and it doesn’t make sense even if what you see and hear is true (if this super-intelligence exists, why is it offering me weird bets to test game-theory paradoxes?). In any case, the thought experiments just assume Omega is honest; in real life you would require proof before you started handing him money.
The psi thing seems altogether different to me. People don’t hallucinate scientific studies; it just doesn’t fit any known pattern of delusion. Moreover, the hypothesis, while contrary to our current understanding of the world, isn’t nearly as a priori implausible as an honest and omniscient AI appearing at your front door.
I think you’re right to conclude you’re insane in the case of Omega.
Ouch. Once LW members start going insane at an elevated rate, we pretty much know what we’re going to hallucinate, so all that decision theory stuff is going to become really useful.