Don’t be shallow, don’t just consider the obvious points. Consider that I’ve thought about this for many, many hours, and that you don’t have any privileged information.
Whence our disagreement, if one exists?
So? You say crazy (and wrong) shit a lot and have no credibility.
Try explaining your reasoning and we might see. The whole “I have mysterious reasons why this crazy idea is true” thing is just annoying. (Whether done by you or Eliezer.)
I don’t know about “no credibility”; Will knew some.
Wait, I thought in my case those were, like, really tied into each other, barely two different things. Also I have tons of credibility with the people who matter. …Which is in some respects a problem, you see.
It’s more fun this way. Don’t you want to live by your own strength sometimes?
Well, duh.
If that’s the reason, shouldn’t you try to maximize credibility with reliable, high-credibility people who understand those aspects of fun theory (especially those who are themselves credible), keep it neutral with mental-health professionals who might lock you up, and minimize it with everyone else?
In other words: credibility is a two-place function, and your question is a false dichotomy.
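(A minimal sketch of the two-place point, with invented values: credibility attaches to a (speaker, audience) pair rather than to a speaker alone, so “maximize credibility” is underspecified until an audience is picked.)

    # Toy restatement of "credibility is a two-place function": it takes a
    # (speaker, audience) pair rather than a speaker alone. Values are invented.
    credibility = {
        ("Will", "people who grok fun theory"): 0.9,
        ("Will", "mental health professionals"): 0.5,
        ("Will", "everyone else"): 0.1,
    }

    # "How much credibility does Will have?" has no single answer;
    # an audience has to be supplied first.
    print(credibility[("Will", "everyone else")])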
You’re the closest I’ve seen to understanding this post. You grok at least 20% of it.
I only commented on 33% of it, so I’d say that’s a pretty decent result.
So now we’re in a situation mildly close to an interesting epistemic one: winning the lottery. Winning the lottery or some better-optimized event provides a lot of incommunicable evidence that you’re in a simulation. The typical anthropic problem in group epistemology—your winning tells me nothing. I have a question for you: How serious a problem do you think this is in practice? If it’s a common problem and has been one throughout history, what social institutions would have evolved to help solve the problem? Or is solving the problem impossible? Only try to answer these if you’re interested in the questions themselves, of course.
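(To pin down what “incommunicable” means here, a toy Bayes sketch with invented numbers: the winner gets to condition on “I won”, while a bystander only gets to condition on “someone won”.)

    # Toy Bayes sketch: a lottery win is strong evidence to the winner but not to
    # an observer. All numbers are invented for illustration.
    prior_sim = 0.1        # assumed prior on "my life is a simulation that favors interesting events"
    p_win_if_sim = 1e-3    # assumed: simulators hand out wins far more often
    p_win_if_real = 1e-8   # roughly real lottery odds

    # The winner conditions on "I won":
    posterior_winner = (p_win_if_sim * prior_sim) / (
        p_win_if_sim * prior_sim + p_win_if_real * (1 - prior_sim))
    print(posterior_winner)  # ~0.9999, a huge but private update

    # A bystander only learns "someone won", which is about equally likely either
    # way, so their likelihood ratio is ~1 and their posterior stays near 0.1.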
You’re looking at it all wrong: “you” are not “in” any simulation or universe. There exist instantiations of the algorithm (including the fact that it remembers winning the lottery) which are you, in various universes and simulations and Boltzmann brains and other things, with certainty (for our purposes), and what you need to do depends on what you want ALL instances to do. It doesn’t matter how many simulations of you are run, or what measure they have, or anything else like that, if your decisions within them don’t matter for the multiverse at large.
None of the evolved concepts and heuristics, which you have been wired to assume so deeply that alternatives may be literally unthinkable, are applicable in this kind of situation. These concepts include the self, anticipation, and reality. Anthropics is a heuristic as well, and a rather crappy one at that.
So ask yourself: what is your objective, non-local utility function over the entirety of the Tegmark-4 multiverse, and for which action would it be logically implied to be the largest if all algorithms similar to yours output that action?
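(A minimal sketch of that way of choosing, over a made-up toy world: every copy of the algorithm outputs the same action, so candidate actions are scored by their summed effect across all instantiations rather than by first asking which copy you are.)

    # Toy sketch: score each candidate output by total utility over every
    # instantiation of the algorithm. World model and payoffs are invented.
    instantiations = ["base reality", "simulation A", "simulation B"]

    def utility(where, action):
        # Assumed payoff table: only the base-reality copy's action affects the
        # multiverse at large; the simulated copies' actions change nothing.
        if where == "base reality":
            return {"act as if real": 10, "act as if simulated": 0}[action]
        return 0

    def choose(actions):
        # Every copy runs the same algorithm, hence outputs the same action.
        return max(actions, key=lambda a: sum(utility(w, a) for w in instantiations))

    print(choose(["act as if real", "act as if simulated"]))  # -> act as if real

Adding more simulated copies to the list changes nothing, which is the point about measure above.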
Yes, I really despise non-decision-theoretic approaches to anthropics. I know how to write a beautiful post that explains where almost all anthropic theories go wrong—the key point is a combination of double-counting evidence and only ever considering counterfactual experiences that logically couldn’t be factual—but it’d take a while, and it’s easier to just point people at UDT. Might give me some philosophy cred, which is cred I’d be okay with.
Actually, it goes wrong on a much deeper and earlier level than that, and also you don’t grok UDT as well as you think you do, or you wouldn’t have considered the lottery question worth even considering.
More precisely, though, I thought the subject was worth your consideration because I hadn’t seen you in decision theory discussions. (Sorry, I don’t mean to be or come across as defensive here. I’m a little surprised your model of me doesn’t predict me asking those as trick questions. But only a little.)
Re deeper problems, there are metaphysical problems that are deeper and should be obvious, but the tack I wanted to take was purely epistemological, such that there’s less wiggle room. Many people reject UDT because “values shouldn’t affect anticipation”, and I think I can neatly argue against anthropics without hitting up against that objection. Which would be necessary to convince the philosophers, I think.
Compensating for duplicitous behavior in models can tend to clog up simulations and lead to processing halting.
I generally would take all statements as reflective of exactly what someone means, if at all possible.
It’s also great fun to short-circuit sarcasm in a similar way.
I’d be very interested in seeing such a post.
I should at least make a few paragraphs of summary, because I’ve referenced the idea like three times now, I’ve never written it down, and if it ends up being wrong I’m going to feel pretty dumb. I’ll try to respond to your comment in the next few days with said paragraphs.
Blerghhhh okay I’ll just write down the thoughts as they come to me, then use the mess later at some point. Maybe that’ll interest you.
Pretty sure the conclusion was like “anthropic explanations make sense, but not anthropic updates”. E.g. anthropic pseudo-explanations of why my lane is so much busier than the one next to me make sense. That’s because they only summarize knowledge I already have for other reasons—I already know the lane I’m in has more people in it; that was presupposed in asking the question of why my lane has so many cars.
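(A toy version of the no-update point, with an invented hypothesis about why the lane is jammed: the anthropic story only re-conditions on evidence that is already conditioned on.)

    # Toy sketch: once E = "my lane is jammed" is in the evidence, replaying the
    # anthropic story ("most drivers are in the busy lane, so of course I am")
    # multiplies the odds by P(E|E)/P(E|E) = 1. Numbers and hypothesis invented.
    prior_accident = 0.2    # assumed explanation for the jam
    p_jam_if_accident = 0.9
    p_jam_if_not = 0.3

    posterior = (p_jam_if_accident * prior_accident) / (
        p_jam_if_accident * prior_accident + p_jam_if_not * (1 - prior_accident))

    posterior_after_anthropic_step = posterior * 1.0  # the "update" that isn't one
    print(posterior, posterior_after_anthropic_step)  # identical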
Okay this is a different line of reasoning but I’ll just go on about it till I remember the other one. They share themes.
Okay, so, in lots of anthropics problems I’m given some hypothetical person in some hypothetical scenario and told to pretend I’m them, and then I’m asked how I should update upon finding myself in that scenario.
But I’m not actually them—I’m actually me.
I can explain how I ended up as me using decision-theoretic reasoning (and meta-level concerns, naturally). The reasoning goes: I expect to find myself in important scenarios. But that decision-theoretic reasoning simply wouldn’t explain the vast majority of people, who are not in important scenarios, finding themselves as them.
I simply can’t explain how I would counterfactually find myself as someone not in an important scenario. It’s like a blue tentacle thing. Luckily I don’t have to. There’s no improbability left.
If I counterfactually did find myself as someone seemingly in an unimportant scenario I would be very confused. I would be compelled to update in favor of hidden interestingness?
Luckily such scenarios will always be counterfactual. It’s a law of metaphysics. No one should ever have to anthropically update. I’m “lucky” in some not-improbable sense.
I shouldn’t know how to update in counterfactual scenarios that are impossibly unlikely because they’re unimportant, for the same reason I shouldn’t be able to explain a blue tentacle.
You can come up with thought experiments where the choice is stipulated to be extremely important. Still counterfactual, still not actually important.
This theory of anthropics is totally useless for people who aren’t me. As it should be. Anthropics shouldn’t provide updating procedures for non-existent people. Or something.
...This wasn’t the line of reasoning I wanted to elucidate and I still don’t remember how that one went. This line of reasoning does have the double-counting theme, but it also brings in a controversial conclusion from decision theory, and is also solipsistic, which is useless for group epistemology. Maybe there was some generalization or something… Bleh, probably wrote something down somewhere.
Nope, you say some crazy-sounding things that are actually right too. There are just other people who manage to say the crazy-sounding-but-right things, and not say the just-plain-crazy things, a hell of a lot better than you are capable of.
For what it’s worth (nothing, right?), I disagree. I’m the best I know of when it comes to crazy-sounding-but-right, but the position could also go to Nick Tarleton, maybe Michael Vassar.
Disagreement with what I was trying to convey would actually imply that you are the “best at not-crazy-sounding-but-wrong despite satisficed crazy-sounding-but-right”. Michael Vassar cannot claim that role either (I wouldn’t expect him to try). He speculates a lot, and that inevitably leads to being wrong a portion of the time.
(And yes, implicitly you should rate yourself highly there too.)
Thanks, Will, I’m starting to get it.