Be careful of this sort of argument, any time you find yourself defining the “winner” as someone other than the agent who is currently smiling from on top of a giant heap.
This made me laugh. Well said!
There’s only one question about this scenario for me—is it possible for a sufficiently intelligent being to fully, fully model an individual human brain? If so (and I think it’s tough to argue ‘no’ unless you think there’s a serious glass ceiling for intelligence), choose box B. If you try to second-guess (or, hell, googolth-guess) Omega, you’re taking the risk that Omega is not smart enough to have modelled your consciousness sufficiently well. How big is this risk? 100 times out of 100 speaks for itself. Omega is cleverer than we can understand. Box B.
(Time travel? No thanks. I find the probability that Omega is simulating people’s minds a hell of a lot more likely than that he’s time travelling, destroying the universe etc. And even if he were, Box B!)
If you can have your brain modelled exactly—to the point where there is an identical simulation of your entire conscious mind and what it perceives—then a lot of weird stuff can go on. However, none of it will violate causality. (Quantum effects messing up the simulation or changing the original? I guess if the model could be regularly updated based on the original...but I don’t know what I’m talking about now ;) )