if you have a sufficient density of superintelligences you can reverse the computation done by the universe itself which is pretty crazy isnt it yeah it is anyway this has practical consequences for ai cooperation problems so your decision theory should be able to handle it if your decision theory doesnt suck the end result can end up looking like qm with a nonrandom collapse postulate if superintelligences reverse computations in a way that is not identical to the born rule ie they prefer worlds where magical unicorns exist or whatever
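(For reference, the Born rule mentioned above is presumably the standard one: for a system in state $|\psi\rangle$ measured in an orthonormal basis $\{|i\rangle\}$, outcome $i$ occurs with probability

$$P(i) = |\langle i \mid \psi \rangle|^2.$$

A “nonrandom collapse postulate” in the sense above would then be any rule that weights branches differently, e.g. $P(i) \propto |\langle i \mid \psi \rangle|^2 \, w_i$ for preference-dependent weights $w_i$; the $w_i$ here are purely illustrative.)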
Let me try to parse and respond:
Yes, it seems possible that a superintelligence embedded in the world can exert enough control to reverse the universe’s computation. I don’t think density is the relevant notion, though, and there are confusing issues going on here (which have certainly been touched on at LW); it is not clear that this sort of reversal is possible at all.
Yes, it seems like reversing the universe’s computation may be useful under a broad range of circumstances, though AI cooperation is not high on the list for me: recovering negentropy the universe has already burned seems much more likely, and incomprehensible superintelligence-activities seem more likely still. I agree that recovering as much negentropy as possible may require cooperation between non-identical AIs.
I agree that if you have some sort of causal/decision-theoretic-significance account of consciousness, the possibility of these interactions might give you a Born rule. But it seems like an account of consciousness has already done most of the work of explaining the Born rule, and this is sort of an afterthought. I think it is possible in principle that consciousness in our universe only exists because of some convergence of this form, but again most of the mystery is coming from the account of consciousness (and similar strangeness is possible without QM).
I don’t have a good model for why you write the things you write (though I rarely vote them down). LessWrong seems like the only place on earth where people care about some of the things you think about, but your behavior suggests you don’t care about it. I accept that this is due to facts about you I don’t understand, and that comments like this one may just make the issue worse, but it does seem to me that the stuff you write about isn’t at much inferential distance from the zeitgeist here, that if you wanted to talk about it uncryptically you could, and that if you don’t want to talk about it, it is perplexing that you bother.
Density is important because entanglements run away at light speed: if you can’t get those entanglements back then you can’t reverse the past; you can only reverse back to the point where the superintelligence originated, which isn’t that neat. The only way to recohere the system is if there’s a boundary condition or, more understandably, if another superintelligence catches your lost information and rushes it back to you (and presumably you would do the same for it, so it’s a very clear-cut trade scenario). The problem is that it might take a very high density of superintelligences throughout the universe to pull this off all the way back to the big bang; even so, it seems likely that you could get perfect information for at least the last few thousand years, enough to, say, resurrect every human who’d ever died. Even cooler than cryonics. (ETA: Insofar as these ideas are correct they’re Steve Rayhawk’s; insofar as they’re retarded they’re mine.)
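A minimal toy model of this point (my own illustration, not anything from the thread): a discrete “universe” whose update rule radiates one bit per step. You can reverse it exactly as far back as you have collected the radiated bits, which is the analogue of needing to catch the entanglements that ran away at light speed.

```python
def step(state, t):
    """One forward step: invertibly mix the state, then radiate its low bit away."""
    mixed = state ^ (0x9E3779B9 + t)   # deterministic, invertible mixing
    return mixed >> 1, mixed & 1       # (new state, bit that leaves the system)

def unstep(state, radiated, t):
    """Invert one step; only possible if the radiated bit was recovered."""
    mixed = (state << 1) | radiated
    return mixed ^ (0x9E3779B9 + t)

initial = 123456789
state, radiated_log = initial, []
for t in range(40):                    # run the toy universe forward 40 steps
    state, bit = step(state, t)
    radiated_log.append(bit)

# An observer that caught every radiated bit can rewind all the way to the start.
s = state
for t in reversed(range(40)):
    s = unstep(s, radiated_log[t], t)
assert s == initial

# An observer that only caught the last 10 bits can only rewind 10 steps;
# everything earlier stays unrecoverable without outside help.
s = state
for t in reversed(range(30, 40)):
    s = unstep(s, radiated_log[t], t)
print("state 10 steps back:", s)
```

The trade scenario above corresponds to another agent handing you the portion of the radiated log you missed.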
I don’t quite follow what consciousness has to do with it. I think I see what you’re getting at, but your use of the word “consciousness” kinda threw me off, and I’d rather not misinterpret you.
The only way to recohere the system is if there’s a boundary condition
Or if spacetime is compact in any way, or in fact if there is only finitely much negentropy. In any of these cases, your abstraction of a bunch of distinct but potentially interfering branches will break down, and you can pick up all of your old waste heat even if it is “running away” at light speed.
In the other universes, where somehow things can continue expanding at light speed indefinitely, you can recover perfect info by exploring possible physical theories until you find yourself.
But it seems like all of these considerations are far too frail to have much impact in themselves; they just serve to put a lower bound on how weird we can expect the far future to be.
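One way to make the compact / finitely-much-negentropy case above precise (this is my gloss, and it assumes the dynamics can be idealized as a measure-preserving map on a finite phase-space volume) is Poincaré recurrence: for a measure-preserving map $T$ on a finite measure space $(X, \mu)$ and any set $A$ with $\mu(A) > 0$,

$$\mu\big(\{x \in A : T^n x \in A \text{ for infinitely many } n \ge 1\}\big) = \mu(A),$$

i.e. almost every state in $A$ eventually returns to $A$, so “lost” waste heat cannot stay out of reach forever.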
In the other universes, where somehow things can continue expanding at light speed indefinitely, you can recover perfect info by exploring possible physical theories until you find yourself.
I don’t immediately see how this gets around the problem; I’m probably just being stupid, but aren’t you still left with a bunch of possible histories consistent with your current state/decisions, only an unknown subset of which are real? (“Real” in the usual sense, i.e. can reliably be used to coordinate with other agents.)
I agree re the lower bound on weirdness; I’d add that it also serves as a lower bound on how competent your decision theory has to be (which shouldn’t be a problem).
(“Real” in the usual sense, i.e. can reliably be used to coordinate with other agents.)
With respect to, e.g., bringing back all of the dead, at least, it doesn’t seem to matter: there are lots of histories consistent with your memories, and some of them aren’t consistent with your “real” history, but (at least if we have appropriate philosophical views towards our prior and so on) each of those histories also leads to an agent in your current situation, so if each one of them guesses the same distribution, you end up with the same guesses as if each had been informed of its “real” history. With respect to uncomputing the universe, I agree that you can’t recover all of the negentropy, but you do seem to recover perfect info in the relevant sense, and in such universes you have infinitely much stuff anyway.
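A minimal sketch of that point, on the purely illustrative assumptions of a uniform prior over 8-bit histories and an agent whose “memory” is just the last 3 bits: every history consistent with the memory contains an agent in exactly this situation, so the posterior computed from the memory alone is the same guess each of them would make, whether or not it is told which history is “real”.

```python
from itertools import product
from fractions import Fraction

MEMORY = (1, 0, 1)   # what the agent remembers: the last 3 bits of its history

# Enumerate every possible 8-bit history and keep the ones consistent with memory.
histories = list(product([0, 1], repeat=8))
consistent = [h for h in histories if h[-3:] == MEMORY]

# Uniform prior over histories, hence a uniform posterior over the consistent ones.
posterior = {h: Fraction(1, len(consistent)) for h in consistent}

# Example query: probability that the very first bit of the history was 1.
# Every agent with this memory, in every consistent history, computes the same number.
p_first_bit_one = sum(p for h, p in posterior.items() if h[0] == 1)
print(len(consistent), "consistent histories;",
      "P(first bit = 1 | memory) =", p_first_bit_one)
```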
Okay, I understand now; I was thinking about the problem of reversing the past. Your arguments make sense if you just want to resurrect folk; it’s possible (as you seem to think?) that there’s no particularly good reason to reverse the past as long as you have tons of computing power and all the information about the past that you’d need in practice. It’s definitely true that the latter strategy is applicable in more possible universes.
New hobby: expounding profound ideas in lolcat or without any punctuation or capitalization.