I accidentally asked this of wedrifid above, but it was intended for you:
Your confidence in a simulation universe has shaded many of your responses in this thread. You’ve stated you’re unwilling to expend the time to elaborate on your certainty, so instead I’ll ask: does your certainty affect decisions in your actual life?
(About the unwillingness to expend time to elaborate: I really am sorry about that.)
Decisions? …Kind of. In some cases, the answer is trivially yes, because I decide to spend a lot of time thinking about the implications of being in the computation of an agent whose utility function I’m not sure of. But that’s not what you mean, I know.
It doesn’t really change my decisions, but I think that’s because I’m the kind of person who’d be put in a simulation. Or, in other words, if I weren’t already doing incredibly interesting things, I wouldn’t have heard of Tegmark or the simulation argument, and I would have significantly less anthropic evidence to make me really pay attention to it. (The anthropic evidence is in no way a good source of argument or belief, but it forces me to pay attention to hypotheses that explain it.)

If by some weird counterfactual miracle I’d determined I was in a simulation before I was trying to do awesome things, then I’d switch to trying to do awesome things, since people doing awesome things probably have more measure, and more measure lets me better achieve my goals. But it’s not really possible to do that, because you only have lots of measure (observer moments) in the first place if you’re doing simulation-worthy things. (That’s the part where anthropics comes in and mucks things up, and probably where most people would flat-out disagree with me; nonetheless, it’s not that important for establishing >95% certainty in non-negligible simulation measure.)

This is the point where outside-view hypotheses of structural uncertainty like “I’m a crazy narcissist” and “everything I know is wrong” are most convincing. (Though I still haven’t presented the real arguments these are counterarguments against.)
So to answer your question: no, but for weird self-fulfilling reasons.
Interesting. Your reasoning in the counterfactual miracle is very reminiscent of UDT reasoning on Newcomb’s problem.
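The parallel can be made concrete with a toy payoff calculation. Here’s a minimal Python sketch (mine, not from the thread; the 0.99 predictor accuracy is an assumed parameter): the one-boxing policy comes out ahead because the predictor responds to what kind of agent you are, fixed before the boxes are filled, just as simulation measure responds to the kind of person you already are rather than to a decision you could make after the fact.

```python
# Toy Newcomb's problem: the predictor fills the opaque box ($1,000,000)
# only if it models you as a one-boxer. Your policy, not your later act,
# determines the payoff -- analogous to measure depending on already being
# the kind of person who does simulation-worthy things.

ACCURACY = 0.99  # assumed predictor accuracy; any value near 1 gives the same ranking


def expected_payoff(one_boxes: bool) -> float:
    """Expected dollars for an agent whose policy the predictor models."""
    if one_boxes:
        # Predictor almost certainly filled the opaque box.
        return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    # Two-boxing: predictor almost certainly left the opaque box empty,
    # so the agent usually gets only the transparent $1,000.
    return ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000


print(f"one-box: ${expected_payoff(True):,.0f}")   # $990,000
print(f"two-box: ${expected_payoff(False):,.0f}")  # $11,000
```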
Thanks for sharing. If you ever take the time to lay out all your reasons for having >95% certainty in a simulation universe, I’d love to read it.