I agree that the Born rule is just the poster child for the key remaining confusions (eg, I would have found it similarly natural to use the moniker “Hilbert space confusions”).
I disagree about whether UDASSA contains much of the answer here. For instance, I have some probability on “physics is deeper than logic” being more-true-than-the-opposite in a way that ends up tossing UDASSA out the window somehow. For another instance, I weakly suspect that “running an emulation on a computer with 2x-as-thick wires does not make them twice-as-happening” is closer to the truth than the opposite, in apparent contradiction with UDASSA. More generally, I’m suspicious of the whole framework, and the “physics gives us hints about the UTM that meta-reality uses” line of attack feels to me like it has gone astray somewhere. (I have a bunch more model here, but don’t want to go into it at the moment.)
I agree that these questions likely go to the heart of population ethics as well as anthropics :-)
> For another instance, I weakly suspect that “running an emulation on a computer with 2x-as-thick wires does not make them twice-as-happening” is closer to the truth than the opposite, in apparent contradiction with UDASSA.
I feel like I would be shocked if running a simulation on twice-as-thick wires made it twice as easy to specify you, according to whatever the “correct” UTM is. It seems to me like the effect there shouldn’t be nearly that large.
This is precisely the thought that caused me to put the word ‘apparent’ in that quote :-p. (In particular, I recalled the original UDASSA post asserting that it took that horn, and this seeming both damning-to-me and not-obviously-true-for-the-reason-you-state, and I didn’t want to bog my comment down, so I threw in a hedge word and moved on.) FWIW I have decent odds on “a thicker computer (and, indeed, any number of additional copies of exactly the same em) has no effect”, and that’s more obviously in contradiction with UDASSA.
Although, that isn’t the name of my true objection. The name of my true objection is something more like “UDASSA leaves me no less confused, gives me no sense of ‘aha!’, or enlightenment, or a-mystery-unraveled, about the questions at hand”. Like, I continue to have the dueling intuitions “obviously more copies = more happening” and “obviously, setting aside how it’s nice for friends to have backup copies in case of catastrophe, adding an identical em of my bud doesn’t make the world better, nor make their experiences different (never mind stronger)”. And, while UDASSA is a simple idea that picks a horse in that race, it doesn’t… reveal to each intuition why it was confused, and bring them into unison, or something?
Like, perhaps UDASSA is the answer and I simply have not yet figured out how to operate it in a way that reveals its secrets? But I also haven’t seen anyone else operate it in a way that reveals the sort of things that seem-like-deconfusion-to-me, and my guess is that it’s a red herring.
> FWIW I have decent odds on “a thicker computer (and, indeed, any number of additional copies of exactly the same em) has no effect”, and that’s more obviously in contradiction with UDASSA.
Absolutely no effect does seem pretty counterintuitive to me, especially given that we know from QM that different levels of happeningness are at least possible.
> Like, I continue to have the dueling intuitions “obviously more copies = more happening” and “obviously, setting aside how it’s nice for friends to have backup copies in case of catastrophe, adding an identical em of my bud doesn’t make the world better, nor make their experiences different (never mind stronger)”. And, while UDASSA is a simple idea that picks a horse in that race, it doesn’t… reveal to each intuition why it was confused, and bring them into unison, or something?
I think my answer here would be something like: the reason that UDASSA doesn’t fully resolve the confusion here is that UDASSA doesn’t exactly pick a horse in the race as much as it enumerates the space of possible horses, since it doesn’t specify what UTM you’re supposed to be using. For any (computable) tradeoff between “more copies = more happening” and “more copies = no impact” that you want, you should be able to find a UTM which implements that tradeoff. Thus, neither intuition really leaves satisfied, since UDASSA doesn’t actually take a stance on how much each is right, instead just deferring that problem to figuring out what UTM is “correct.”
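(To make that concrete, here’s a toy sketch of how the UTM choice dials the copies-to-happeningness tradeoff. Purely illustrative: the measure model, the names, and the bit-costs below are all invented for the example, not taken from any particular UDASSA formulation.)

```python
# Toy model, not the "real" UDASSA math: treat the measure of an
# observer-moment as the summed weight 2^-(program length) over the
# programs that locate it. The "UTM choice" shows up here as the cost,
# in bits, of pointing at the i-th identical copy.
import math

BASE_BITS = 20  # bits to specify the simulation itself (arbitrary)

def total_measure(k: int, index_bits) -> float:
    """Summed toy measure of k identical copies."""
    return sum(2.0 ** -(BASE_BITS + index_bits(i)) for i in range(1, k + 1))

flat = lambda i: 0.0                 # every copy equally cheap to locate
log_sq = lambda i: 2 * math.log2(i)  # copy i costs ~2*log2(i) extra bits

for k in (1, 2, 10, 100):
    print(k,
          total_measure(k, flat) / total_measure(1, flat),      # grows like k
          total_measure(k, log_sq) / total_measure(1, log_sq))  # stays bounded
```

(Under the first encoding, k copies get about k times the measure of one copy; under the second, the total converges to a constant multiple of a single copy’s measure, so further copies barely matter. Any computable interpolation between those extremes corresponds to some index-encoding, ie some UTM.)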
> Absolutely no effect does seem pretty counterintuitive to me, especially given that we know from QM that different levels of happeningness are at least possible.
I also have that counterintuition, fwiw :-p
I have the sense that you missed my point wrt UDASSA, fwiw. Having failed once, I don’t expect I can transmit it rapidly via the medium of text, but I’ll give it another attempt.
This is not going to be a particularly tight analogy, but:
Alice is confused about metaethics. Alice has questions like “but why are good things good?” and “why should we care about goodness?” and “if goodness is not objective, can I render murder good by deciding it’s good?”.
Bob is not confused about ethics. Bob can correctly answer many of Alice’s questions: “good things are good b/c they result in good things such as, eg, human flourishing”, and “because we like good consequences, such as human flourishing”, and “no, because murder is not in fact good”. (...I’m only subtweeting Sam Harris a little bit, here.)
The problem with these answers is not that they are incorrect. The problem with these answers is that they are not deconfusing; they are not identifying the box that Alice is trapped in and freeing her from it.
Claire is not confused about metaethics. Claire can state correct answers to the questions that Alice did not know she was asking, such as “Alice!goodness is more-or-less a fixed logical function; Alice!goodness is perhaps slightly different from Claire!goodness but they are close enough as to make no difference against the space of values; this fixed logical function was etched into your genes by eons of sex and death; it is however good, and other logical functions in its place would not be.”
The problem with these answers is not that they are incorrect, as answers to the questions that Alice would have been asking were she freed from her box (although, once she’s glimpsed the heretofore hidden degree of freedom, she’s unlikely to need to actually ask those questions). The problem with these answers is that they are not meeting Alice at the point of her confusion. To her, they sound sort of odd, and do not yet have a distinguishable ring of truth.
What Alice needs in this hypothetical is a bunch of thought-experiments, observations, and considerations that cause her to perceive the dimension along which her hypotheses aren’t yet freed, so that the correct hypothesis can enter her view / so that her mind can undergo a subtle shift-in-how-she-frames-the-question such that the answers Claire gives suddenly become intuitively clear. She’s probably going to need to do a lot of the walking herself. She needs questions and nudges, not answers. Or something. (This is hard to articulate.)
I claim that my state wrt various anthropic questions—such as the ol’ trilemma—is analogous to that of Alice. I expect becoming deconfused about the trilemma to feel like a bunch of changes to my viewpoint that cause the correct hypothesis to enter my view / that cause my mind to undergo a shift-in-how-I-frame-the-question such that the correct answer snaps into focus. (This is still hard to articulate. I don’t think my words have captured the core. Hopefully they have waved in the right direction.) More generally, I claim to know what deconfusion looks like, and I can confidently assert that UDASSA hasn’t done it for me yet.
Like, for all I know, the odd shit UDASSA says to me is like the phrases Claire says to Alice—correct, topical, but odd-seeming and foreign from my current state of confusion. Perhaps there’s a pathway through the valley of my confusion that causes me to shift my understanding of (eg) the trilemma, such that the problem falls away, and I start emitting UDASSA-like sentences on the other side, but if so I have not yet found it.
And, as someone in the Claire-state wrt the problem of metaethics, I claim that I would be able to go back and walk Alice through the valley, to the point where she was happily emitting Claire-statements all her own. (Or, at least, I’d have a pretty good hit-rate among particularly sharp friends.) And I have not been able to cause any UDASSA-er to walk me through the valley. And also a number of the UDASSA-moves smell to me like missteps—perhaps b/c I’m bad at requesting it, but also perhaps b/c UDASSA doesn’t do the thing. All told, my guess is that it’s making about as much progress at resolving the core confusions as it looks like it’s making—ie, not much.
(To be clear, I have managed to get UDASSA to tell me why I shouldn’t be confused about the trilemma. But this is not the currency I seek, alas.)
Yeah—I think I agree with what you’re saying here. I certainly think that UDASSA still leaves a lot of things unanswered and seems confused about a lot of important questions (embeddedness, uncomputable universes, what UTM to use, how to specify an input stream, etc.). But it also feels like it gets a lot of things right in a way that I don’t expect a future, better theory to get rid of—that is, UDASSA feels akin to something like Newtonian gravity here, where I expect it to be wrong, but still right enough that the actual solution doesn’t look too different.
Neat! I’d bet against that if I knew how :-) I expect UDASSA to look more like a red herring from the perspective of the future, with most of its answers revealed as wrong or not-even-wrong or otherwise rendered inapplicable by deep viewpoint shifts. Off the top of my head, a bet I might take is “the question of which UTM meta-reality uses to determine the simplicity of various realities was quite off-base” (as judged by, say, agreement of both EY and PC or their surrogates in 1000 subjective years).
In fact, I’m curious for examples of things that UDASSA seems to get right, that you think better theories must improve upon. (None spring to my own mind. Though, one hypothesis I have is that I’ve so-deeply-internalized all the aspects of UDASSA that seem obviously-true to me (or that I got from some ancestor-theory), that the only things I can perceive under that label are the controversial things, such that I am not attributing to it some credit that it is due. For instance, perhaps you include various pieces of the updateless perspective under that umbrella while I do not.)
I don’t think I would take that bet—I think the specific question of what UTM to use does feel more likely to be off-base than other insights I associate with UDASSA. For example, some things that I feel UDASSA gets right: a smooth continuum of happeningness that scales with number of clones/amount of simulation compute/etc., and simpler things being more highly weighted.
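(For concreteness, the usual gloss—my phrasing, and the details vary across formulations—is that an observer-moment $x$ gets its universal-prior weight,

$$m(x) \;=\; \sum_{p \,:\, U(p)\ \text{locates}\ x} 2^{-|p|},$$

so simpler-to-locate experiences weigh more, and each extra copy or extra slice of simulation compute can contribute additional locating programs, adding measure smoothly rather than all-or-nothing.)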
Cool, thanks. Yeah, I don’t have >50% on either of those two things holding up to philosophical progress (and thus, eg, I disagree that future theories need to agree with UDASSA on those fronts). Rather, happeningness-as-it-relates-to-multiple-simulations and happeningness-as-it-relates-to-the-simplicity-of-reality are precisely the sort of things where I claim Alice-style confusion, and where it seems to me like UDASSA is alleging answers while being unable to dissolve my confusions, and where I suspect UDASSA is not-even-wrong.
(In fact, you listing those two things causes me to believe that I failed to convey the intended point in my analogy above. I lean towards just calling this ‘progress’ and dropping the thread here, though I’d be willing to give a round of feedback if you wanna try paraphrasing or otherwise falsifying my model instead. Regardless, hooray for a more precise articulation of a disagreement!)