A tension that keeps recurring when I think about philosophy is between the “view from nowhere” and the “view from somewhere”, i.e. a third-person versus first-person perspective—especially when thinking about anthropics.
One version of the view from nowhere says that there’s some “objective” way of assigning measure to universes (or people within those universes, or person-moments). You should expect to end up in different possible situations in proportion to how much measure your instances in those situations have. For example, UDASSA ascribes measure based on the simplicity of the computation that outputs your experience.
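(Roughly, and as I understand the standard presentation: UDASSA assigns an observer-moment $x$ measure under the universal prior,

$$m(x) \;\propto\; \sum_{p\,:\,U(p)=x} 2^{-|p|},$$

where $U$ is a universal Turing machine. The sum is dominated by the shortest program outputting $x$, which is usually pictured as a short specification of the laws of physics plus the extra bits needed to locate your experience within that universe’s computation.)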
One version of the view from somewhere says that the way you assign measure across different instances should depend on your values. You should act as if you expect to end up in different possible future situations in proportion to how much power to implement your values the instances in each of those situations have. I’ll call this the ADT approach, because that seems like the core insight of Anthropic Decision Theory. Wei Dai also discusses it here.
In some sense each of these views makes a prediction. UDASSA predicts that we live in a universe with laws of physics that are very simple to specify (even if they’re computationally expensive to run), which seems to be true. Meanwhile the ADT approach “predicts” that we find ourselves at an unusually pivotal point in history, which also seems true.
Intuitively I want to say “yeah, but if I keep predicting that I will end up in more and more pivotal places, eventually that will be falsified”. But… on a personal level, this hasn’t actually been falsified yet. And more generally, acting on those predictions can still be positive in expectation even if they almost surely end up being falsified. It’s a St Petersburg paradox, basically.
Very speculatively, then, maybe a way to reconcile the view from somewhere and the view from nowhere is via something like geometric rationality, which avoids St Petersburg paradoxes. And more generally, it feels like there’s some kind of multi-agent perspective which says I shouldn’t model all these copies of myself as acting in unison, but rather as optimizing for some compromise between all their different goals (which can differ even if they’re identical, because of indexicality). No strong conclusions here but I want to keep playing around with some of these ideas (which were inspired by a call with @zhukeepa).
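To make the St Petersburg structure concrete, here’s a toy sketch (the numbers are mine and purely illustrative): each round of betting on “I’ll be even more pivotal” triples your influence with probability 1/2 and otherwise collapses it to 1% of what it was. Taking every bet is positive in arithmetic expectation even though the winning streak almost surely ends, whereas a geometric/log-weighted average says to stop:

```python
import math

# Toy model of repeated "pivotality bets" (illustrative numbers, not from the post):
# each round, with probability 1/2 your influence triples; otherwise it collapses to 1% of itself.
p_win, up, down = 0.5, 3.0, 0.01

# Arithmetic-mean growth per round: 0.5*3 + 0.5*0.01 = 1.505 > 1, so keep betting forever.
arith = p_win * up + (1 - p_win) * down

# Geometric-mean growth per round: exp(E[log multiplier]) ~= 0.17 < 1, so decline the sequence.
geom = math.exp(p_win * math.log(up) + (1 - p_win) * math.log(down))

for n in (1, 10, 50):
    not_yet_falsified = p_win ** n  # chance the "I keep ending up pivotal" prediction has survived n rounds
    print(n, arith ** n, geom ** n, not_yet_falsified)

# Arithmetic value explodes while both the geometric value and the survival probability go to zero:
# positive in expectation, yet almost surely falsified.
```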
This was all kinda rambly but I think I can summarize it as “Isn’t it weird that ADT tells us that we should act as if we’ll end up in unusually important places, and also we do seem to be in an incredibly unusually important place in the universe? I don’t have a story for why these things are related but it does seem like a suspicious coincidence.”
Very interesting. It sounds like your “third-person view from nowhere” vs the “first-person view from somewhere” is very similar to something I was thinking about recently. I called them “objectively distinct situations” in contrast with “subjectively distinct situations”. My view is that most of the anthropic arguments that “feel wrong” to me are built on trying to make me assign equal probability to all subjectively distinct scenarios, rather than objective ones. E.g. a replication machine makes it so there are two of me; then “I” could be either of them, leaving two subjectively distinct cases, even though on the object level there is actually no distinction between “me” being clone A or clone B. [1]
I am very sceptical of this ADT. If you think the time/place you have ended up in is unusually important, I think that is more likely explained by something like “people decide what is important based on what is going on around them”.
[1] My thoughts are here: https://www.lesswrong.com/posts/v9mdyNBfEE8tsTNLb/subjective-questions-require-subjective-information
This was all kinda rambly but I think I can summarize it as “Isn’t it weird that ADT tells us that we should act as if we’ll end up in unusually important places, and also we do seem to be in an incredibly unusually important place in the universe? I don’t have a story for why these things are related but it does seem like a suspicious coincidence.”
I’m not sure this is a valid interpretation of ADT. Can you say more about why you interpret ADT this way, maybe with an example? My own interpretation of how UDT deals with anthropics (and I’m assuming ADT is similar) is “Don’t think about indexical probabilities or subjective anticipation. Just think about measures of things you (considered as an algorithm with certain inputs) have influence over.”
This seems to “work” but anthropics still feels mysterious, i.e., we want an explanation of “why are we who we are / where we’re at” and it’s unsatisfying to “just don’t think about it”. UDASSA does give an explanation of that (but is also unsatisfying because it doesn’t deal with anticipations, and also is disconnected from decision theory).
I would say that under UDASSA, it’s perhaps not super surprising to be when/where we are, because this seems likely to be a highly simulated time/scenario for a number of reasons (curiosity about ancestors, acausal games, getting philosophical ideas from other civilizations).
My own interpretation of how UDT deals with anthropics (and I’m assuming ADT is similar) is “Don’t think about indexical probabilities or subjective anticipation. Just think about measures of things you (considered as an algorithm with certain inputs) have influence over.”
(Speculative paragraph, quite plausibly this is just nonsense.) Suppose you have copies A and B who are both offered the same bet on whether they’re A. One way you could make this decision is to assign measure to A and B, then figure out what the marginal utility of money is for each of A and B, then maximize measure-weighted utility. Another way you could make this decision, though, is just to say “the indexical probability I assign to ending up as each of A and B is proportional to their marginal utility of money” and then maximize your expected money. Intuitively this feels super weird and unjustified, but it does make the “prediction” that we’d find ourselves in a place with high marginal utility of money, as we currently do.
(Of course “money” is not crucial here, you could have the same bet with “time” or any other resource that can be compared across worlds.)
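Here’s a minimal numerical sketch of the two procedures (my numbers; it assumes equal measure on the two copies and stakes small enough that marginal utilities are roughly constant):

```python
# Toy version of the A/B bet above, with made-up numbers.
# Both copies are offered the same small bet on "am I A?": win `odds` units per unit staked if A,
# lose the stake if B.

m_A, m_B = 0.5, 0.5      # measure assigned to each copy (equal, since they're exact copies)
mu_A, mu_B = 3.0, 1.0    # marginal utility of money for each copy (say A is in a poorer world)
odds = 0.5               # payout ratio if you turn out to be A

def accept_measure_weighted(stake=0.01):
    # Procedure 1: maximize measure-weighted utility (first-order approximation in the stake).
    return m_A * mu_A * odds * stake > m_B * mu_B * stake

def accept_indexical(stake=0.01):
    # Procedure 2: set indexical probabilities proportional to marginal utility of money,
    # then just maximize expected money.
    p_A, p_B = mu_A / (mu_A + mu_B), mu_B / (mu_A + mu_B)
    return p_A * odds * stake > p_B * stake

print(accept_measure_weighted(), accept_indexical())  # both True: the procedures agree on this bet
```

With equal measures the two acceptance conditions reduce to the same inequality (mu_A * odds > mu_B), which is why reading the marginal utilities as indexical probabilities reproduces the same bets.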
I would say that under UDASSA, it’s perhaps not super surprising to be when/where we are, because this seems likely to be a highly simulated time/scenario for a number of reasons (curiosity about ancestors, acausal games, getting philosophical ideas from other civilizations).
Fair point. By “acausal games” do you mean a generalization of acausal trade? (Acausal trade is the main reason I’d expect us to be simulated a lot.)
Intuitively this feels super weird and unjustified, but it does make the “prediction” that we’d find ourselves in a place with high marginal utility of money, as we currently do.
This is particularly weird because your indexical probability then depends on what kind of bet you’re offered. In other words, our marginal utility of money differs from our marginal utility of other things, and which one do you use to set your indexical probability? So this seems like a non-starter to me… (ETA: Maybe it changes moment by moment as we consider different decisions, or something like that? But what about when we’re just contemplating a philosophical problem and not trying to make any specific decisions?)
By “acausal games” do you mean a generalization of acausal trade?
Yes, didn’t want to just say “acausal trade” in case threats/war is also a big thing.
This is particularly weird because your indexical probability then depends on what kind of bet you’re offered. In other words, our marginal utility of money differs from our marginal utility of other things, and which one do you use to set your indexical probability? So this seems like a non-starter to me...
It seems pretty weird to me too, but to steelman: why shouldn’t it depend on the type of bet you’re offered? Your indexical probabilities can depend on any other type of observation you have when you open your eyes. E.g. maybe you see blue carpets, and you know that world A is 2x more likely to have blue carpets. And hearing someone say “and the bet is denominated in money not time” could maybe update you in an analogous way.
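(For concreteness, with equal priors on worlds A and B and the 2x likelihood ratio from the carpet example:

$$P(A \mid \text{blue}) = \frac{P(\text{blue}\mid A)\,P(A)}{P(\text{blue}\mid A)\,P(A) + P(\text{blue}\mid B)\,P(B)} = \frac{2q\cdot\tfrac12}{2q\cdot\tfrac12 + q\cdot\tfrac12} = \tfrac{2}{3},$$

where $q$ is world B’s chance of blue carpets; the equal priors and the specific numbers are just for illustration. The steelman is that hearing “the bet is denominated in money” would shift your indexical credences by the same mechanism.)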
I mostly offer this in the spirit of “here’s the only way I can see to reconcile subjective anticipation with UDT at all”, not “here’s something which makes any sense mechanistically or which I can justify on intuitive grounds”.
I added this to my comment just before I saw your reply: Maybe it changes moment by moment as we consider different decisions, or something like that? But what about when we’re just contemplating a philosophical problem and not trying to make any specific decisions?
I mostly offer this in the spirit of “here’s the only way I can see to reconcile subjective anticipation with UDT at all”, not “here’s something which makes any sense mechanistically or which I can justify on intuitive grounds”.
Ah I see. I think this is incomplete even for that purpose, because “subjective anticipation” to me also includes “I currently see X, what should I expect to see in the future?” and not just “What should I expect to see, unconditionally?” (See the link earlier about UDASSA not dealing with subjective anticipation.)
ETA: Currently I’m basically thinking: use UDT for making decisions, use UDASSA for unconditional subjective anticipation, am confused about conditional subjective anticipation as well as how UDT and UDASSA are disconnected from each other (i.e., the subjective anticipation from UDASSA not feeding into decision making). Would love to improve upon this, but your idea currently feels worse than this...