You’re looking at it all wrong: “you” are not “in” any simulation or universe. There exist instantiations of the algorithm that is you, including the fact that it remembers winning the lottery, in various universes and simulations and Boltzmann brains and other things, with certainty (for our purposes), and what you need to do depends on what you want ALL of those instances to do. It doesn’t matter how many simulations of you are run, or what measure they have, or anything else like that, if your decisions within them don’t matter for the multiverse at large.
None of the evolved concepts and heuristics, which you have been wired to assume so deeply that alternatives may be literally unthinkable, are applicable in this kind of situation. These concepts include the self, anticipation, and reality. Anthropics is a heuristic as well, and a rather crappy one at that.
So ask yourself: what is your objective, non-local utility function over the entirety of the Tegmark Level IV multiverse, and which action, if output by every algorithm similar to yours, would logically imply the largest value of that function?
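Schematically (just a sketch of the question, in my own notation: $U$ for that utility function, $\mathcal{M}$ for the multiverse, $a$ ranging over possible outputs, and the conditioning read as logical rather than causal):

$$a^{*} \;=\; \arg\max_{a}\; U\!\big(\mathcal{M} \;\big|\; \text{every algorithm similar to yours outputs } a\big)$$

Note that the number of instances never shows up in that expression; only the logical consequences of all of them outputting $a$ do.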
Yes, I really despise non-decision-theoretic approaches to anthropics. I know how to write a beautiful post that explains where almost all anthropic theories go wrong (the key point is a combination of double-counting evidence and only ever considering counterfactual experiences that logically couldn’t be factual), but it’d take a while, and it’s easier to just point people at UDT. Might give me some philosophy cred, which is cred I’d be okay with.
Actually, it goes wrong on a much deeper and earlier level than that, and also you don’t grok UDT as well as you think you do, or you wouldn’t have thought the lottery question worth even considering.
More precisely, though, I thought the subject was worth your consideration because I hadn’t seen you in decision theory discussions. (Sorry, I don’t mean to be or come across as defensive here. I’m a little surprised your model of me doesn’t predict me asking those as trick questions. But only a little.)
Re deeper problems: there are metaphysical problems that are deeper and should be obvious, but the tack I wanted to take was purely epistemological, so that there’s less wiggle room. Many people reject UDT because “values shouldn’t affect anticipation”, and I think I can neatly argue against anthropics without running up against that objection, which would be necessary to convince the philosophers, I think.
Compensating for duplicitous behavior in models can tend to clog up simulations and bring processing to a halt.
I generally take all statements as reflecting exactly what someone means, if at all possible.
It’s also great fun to short-circuit sarcasm in a similar way.
I’d be very interested in seeing such a post.
I should at least write a few paragraphs of summary: I’ve referenced the idea like three times now, I’ve never written it down, and if it ends up being wrong I’m going to feel pretty dumb. I’ll try to respond to your comment in the next few days with said paragraphs.
Blerghhhh okay I’ll just write down the thoughts as they come to me, then use the mess later at some point. Maybe that’ll interest you.
Pretty sure the conclusion was something like “anthropic explanations make sense, but not anthropic updates”. E.g., anthropic pseudo-explanations of why my lane is so much busier than the one next to me make sense, but only because they summarize knowledge I already have for other reasons: I already know the lane I’m in has more cars in it; that was presupposed in asking why my lane is so busy.
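In symbols (my notation, nothing deep): if $E$ = “my lane has more cars” is already part of what I’m conditioning on when I pose the question, then restating $E$ as an anthropic “explanation” licenses no further shift in any hypothesis $H$, since

$$P(H \mid E, E) \;=\; P(H \mid E).$$

Whatever update there was happened when I looked at the lane; counting it again is exactly the double-counting.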
Okay this is a different line of reasoning but I’ll just go on about it till I remember the other one. They share themes.
Okay, so, in lots of anthropics problems I’m given some hypothetical person in some hypothetical scenario, told to pretend I’m them, and then asked how I should update on finding myself in that scenario.
But I’m not actually them—I’m actually me.
I can explain how I ended up as me using decision-theoretic reasoning (and meta-level concerns, naturally). The reasoning goes: I expect to find myself in important scenarios. But that decision-theoretic reasoning simply wouldn’t explain how the vast majority of people, who are not in important scenarios, find themselves as themselves.
I simply can’t explain how I would counterfactually find myself as someone not in an important scenario. It’s like a blue tentacle thing. Luckily I don’t have to. There’s no improbability left.
If I counterfactually did find myself as someone seemingly in an unimportant scenario, I would be very confused. I would be compelled to update in favor of hidden interestingness?
Luckily such scenarios will always be counterfactual. It’s a law of metaphysics. No one should ever have to anthropically update. I’m “lucky” in some not-improbable sense.
I shouldn’t know how to update in counterfactual scenarios that are impossibly unlikely because they’re unimportant, for the same reason I shouldn’t be able to explain a blue tentacle.
You can come up with thought experiments where the choice is stipulated to be extremely important. Still counterfactual, still not actually important.
This theory of anthropics is totally useless for people who aren’t me. As it should be. Anthropics shouldn’t provide updating procedures for non-existent people. Or something.
...This wasn’t the line of reasoning I wanted to elucidate, and I still don’t remember how that one went. This line of reasoning does have the double-counting theme, but it also brings in a controversial conclusion from decision theory, and it’s solipsistic, which makes it useless for group epistemology. Maybe there was some generalization or something… Bleh, probably wrote something down somewhere.