There seems to be rather a lot of information lost beyond the chance of recovery.
Not if there are AIs out there in the universe who can catch the information and run it back to your FAI at lightspeed. And since our FAI can catch information about the causal past of other AIs that they’d otherwise never have been able to get back, it’s even a clear-cut trade scenario. I see no reason not to expect this by default. (Steve’s idea; I think it’s pretty epic, especially if the part works where the AIs collectively catch all of each other’s quantum entanglements, which would enable them to coordinate to reverse the past. That’d be freakin’ awesome. And either way, I think hearing this idea was the first time I thought, “wow, if a mere human can think of that, imagine what ideas a freaking superintelligence could come up with”.)
I’m not even confident there are other AIs out there in the universe. At least, not in our Everett-Branch-Future-And-Relevant-History-Light-Cone.
I’m not that confident, especially as I have a sneaking suspicion that something really weird is going on cosmologically speaking, but isn’t it the default assumption ’round these parts?
The default assumption is that there are (or will be) many other FAIs that light from our world-history will directly interact with? I didn’t know that. It’s certainly not mine. I thought it was more likely that we were for practical purposes alone. If you’ll pardon the shorthand reasoning:
Fermi-like considerations… it’s epically unlikely that life emerges and makes it all the way to superintelligence takeoff.
Anthropics, self-indication and so forth… most Everett branches in which one superintelligence emerges will not be branches in which more than one emerges.
There are heaps of other superintelligences, but with high probability all the other ones are in parts of the (broadly construed) Universe that are causally inaccessible.
The physics is complicated, often speculative (by folks like Tegmark) and beyond me, but it all adds up to “we’re probably effectively alone in the universe as we see it”.
So you think there are other FAIs out there that our civilization would encounter if we got that far? How much does this depend on the probability that we are in a simulation or under the benevolent (or otherwise) control of a powerful agent, and how likely would you consider it to be conditional on us not being simulated/overseen?
So it’s possible that spacetime is infinitely dense and that, if you’re a superintelligence, there’s no reason to expand. Dunno how likely that is, though black holes do creep me out. Abiogenesis really doesn’t seem all that impossible, and anyway I think anthropic explanations are fundamentally confused. If your AI never expands then it can’t get precise info about its past, but maybe there are non-physical computational ways to do that, so the costs might not be worth the benefits. It seems like I might’ve been wrong in that LessWrong folk might prefer anthropic solutions to Fermi, but I’m not sure how much evidence that is, especially as anthropics is confusing and possibly confused. So yeah… maybe 25% or so, but that’s only factoring in some structural uncertainty. Meh.
’Course, my primary hypothesis is that we are being overseen, and brains sometimes have trouble reasoning about hypothetical scenarios which aren’t already the default expectation. It’s at times like this when advanced rationality skills would be helpful.
I don’t follow. Do you think intelligences would loudly announce their existence over a long enough time period that we would know about it? It always struck me as more likely that AGIs were quiet than that they didn’t exist. Remember, all those stars you see at night don’t necessarily exist; they could just as easily be an illusion. All it’d take is for one superintelligence to show up somewhere and decide that we weren’t worth killing but that we shouldn’t get to see what’s actually going on as it gobbles up all the unoccupied planets. There are various reasons it would want to do this. [ETA: The alternative, that abiogenesis is really difficult, strikes me as unlikely, and I’m very skeptical of anthropic “explanations”.]
Hm, this might be a difference of perspective; I’m not very confident in the simulation argument as it’s usually put forth. (I tried to explain some of my reasons elsewhere in this thread.)
(They don’t have to be Friendly, they just have to be willing to trade.) I don’t have a strong opinion either way. If we’re being overseen then it seems true by definition that we’ll run into other AGIs if we build an FAI, so I was focusing on the scenario where we’re not being overseen/simulated/fucked-with. In such a scenario I don’t know what probability to put on it… I’ll think about it more.
This seems likely but is not true by definition. In fact, if I were designing an overseer I can see reasons why I might prefer to design one that keeps itself hidden except where intervention is required. Such an overseer, upon detecting that the overseen have created an AI with an acceptable goal system, might actively destroy all evidence of its existence.
True, mea culpa. I swear, there’s something about the words “by definition” that makes you misuse them even if you’re already aware of how often they’re misused. I almost never say “by definition” and yet it still screwed me over.
I keep running into people who think anthropic reasoning doesn’t explain anything, or who have it entirely backwards. One prominent physicist whose name eludes me commented, in an editorial published in Physics Today, that anthropic reasoning was worthless unless the life-compatible section of the probability distribution of universal laws was especially likely. This so utterly misses the point that he clearly didn’t understand the basic argument.
I’ve never encountered anyone who’s willing to admit to buying anything stronger than the weak anthropic principle, which seems utterly obviously true:
1) If the universe didn’t enable the formation of sapient life, sapient life wouldn’t exist. If the universe made the formation of such life fantastically unlikely in any one location but the extent of the universe is larger than the reciprocal of that probability density, such life would likely exist somewhere. (See the quick numerical sketch below.)
2) Our existence thus doesn’t indicate much about the general hospitality of the rules of the universe to the formation of sapient life, because the universe is awfully large, possibly infinite.
3) In the event that the rules of the universe we observe are consequences of more fundamental laws, and those fundamental laws are quantum mechanical in nature so that multiple variants get a nonzero component, the probability of life forming in this universe is the OR taken over all of those variants.
That’s really all there is to it...
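To make point 1 concrete, here’s a minimal numerical sketch. The per-site probability and the number of sites are made-up assumptions, not estimates of anything real; the point is only the shape of the calculation — once the number of sites exceeds the reciprocal of the per-site probability, life somewhere becomes near certain even though life anywhere in particular stays wildly unlikely.

```python
import math

# Made-up illustrative numbers, not estimates of anything real.
p_per_site = 1e-20   # assumed chance of sapient life arising at any one site
n_sites = 1e22       # assumed number of candidate sites; note n_sites > 1 / p_per_site

# P(at least one site develops sapient life) = 1 - (1 - p)^N ~= 1 - exp(-N * p)
p_somewhere = 1 - math.exp(-n_sites * p_per_site)

print(f"expected occurrences: {n_sites * p_per_site:.0f}")   # 100
print(f"P(life somewhere):    {p_somewhere:.6f}")            # ~1.0
```

With these invented numbers the expected count is about 100, so the observation “we exist” tells you almost nothing about how hostile the per-site odds are — which is point 2.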
Is there a typo in there? The simulation argument doesn’t seem to fit.
Oh, I was assuming… never mind, it’s probably not worth untangling.
Not as far as I can tell. What do you mean?
Sorry, my sentence was unclear; “it” was referencing the belief that at least one intelligence besides us has already shown up or will show up somewhere in the universe at some point. It seems to me that most people, including most people on LessWrong, think this is likely.
Is there any reason to think that such detailed information as would be needed to recreate people wouldn’t get lost in noise?
An expanding superintelligence sphere acts as a light-years-wide optical lens, providing extremely redundant observations of far-off objects. This can be combined with superintelligent error correction and image reconstruction. If you have multiple such superintelligences then you get even more angles. But yeah, I haven’t done the actual calculations; it’d be super cool if someone else did them.
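For intuition on why the redundancy helps (this is only a toy sketch, not a claim about what a superintelligence would actually do): if N widely separated collectors each record the same signal corrupted by independent noise, even a plain average shrinks the error roughly as 1/sqrt(N). The signal, noise level, and collector counts below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one "true" signal seen by N independent collectors,
# each with its own Gaussian noise (all numbers invented).
true_signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
noise_sigma = 1.0

for n_collectors in (1, 100, 10_000):
    noisy_views = true_signal + rng.normal(0.0, noise_sigma,
                                           size=(n_collectors, true_signal.size))
    reconstruction = noisy_views.mean(axis=0)   # crudest possible "error correction"
    rms_error = np.sqrt(np.mean((reconstruction - true_signal) ** 2))
    print(f"{n_collectors:6d} collectors -> RMS error ~ {rms_error:.3f}")
    # The error falls roughly as noise_sigma / sqrt(n_collectors).
```

A real reconstruction would exploit far more structure than a plain average, but the sqrt(N) scaling is the basic reason redundant observations buy you anything at all.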
On another note, about six months ago I spent a few days looking at the quantum information theory literature trying to figure out if AIs could coordinate to reverse the past; I think I have enough knowledge to pose it as a coherent question to someone with a lot of knowledge of reversible computing and QIT. I’d like to do that someday.
But the “error correction and image reconstruction” itself is not lossless. There is inevitable distortion caused by scattering off the unknown distribution of interstellar dust particles and by gravitational lensing between the AI and its target. Not to mention all the truly random crap happening in the interstellar void as the photons interact with the quantum foam. The inversion methods you suggest do not yield a true image, merely a consistent one.
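A toy way to see the “consistent, not true” distinction: if the forward process destroys information (below, a blur that sums neighbouring cells stands in for scattering and lensing — an invented model, not physics), two different source states can produce exactly the same observation, and no inversion can tell them apart without extra prior information.

```python
import numpy as np

def lossy_forward_model(source):
    """Toy stand-in for scattering/lensing: each detector reads
    the sum of two neighbouring source cells, losing information."""
    return source[:-1] + source[1:]

# Two different "true" source states...
source_a = np.array([1.0, 2.0, 3.0, 4.0])
source_b = np.array([0.0, 3.0, 2.0, 5.0])

# ...that produce identical observations, so any inversion consistent
# with the data cannot distinguish them.
print(lossy_forward_model(source_a))   # [3. 5. 7.]
print(lossy_forward_model(source_b))   # [3. 5. 7.]
print(np.array_equal(lossy_forward_model(source_a),
                     lossy_forward_model(source_b)))  # True
```

Any reconstruction consistent with the observed [3, 5, 7] is as good as any other unless prior information is brought in, which is exactly the gap between a consistent image and a true one.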
It could still be enough to resurrect a person, if the difference from truth is on the order of the difference between me right now and me after sleeping for a few years. (Hint: the two me’s are very different, but they’re still recognizable as me.)
I’m not a total expert, but try me.
I’ll have to spend a few hours reloading the concepts into my brain. When I do that I’ll post it to Discussion.
Does anyone have an estimate of how many actually different humans there can be (i.e., the size of brain-space measured in units such that someone about one unit away from me would seem like the same person to someone who knows me)?
It might be possible to simply create all humans that could have existed; those who actually did would be a subset, we just couldn’t tell which ones.
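Nobody in the thread puts numbers on this, but here is the shape a back-of-envelope answer might take, with every parameter an explicit, loudly-labelled assumption: model a brain as N roughly binary degrees of freedom and count configurations with a sphere-packing style estimate, treating anything within some “same person” radius as one person.

```python
import math

# Loudly-labelled assumptions; the real values are anyone's guess.
N_BITS = 1e14        # assumed binary-ish degrees of freedom per brain (~synapse count)
SAME_PERSON = 0.01   # assumed fraction of them that can flip while still seeming
                     # like the same person to someone who knows you

def binary_entropy(p):
    """Shannon entropy of a biased coin, in bits."""
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Sphere-packing style estimate: ~2^N total configurations, divided by
# ~2^(N * H(SAME_PERSON)) configurations inside each "same person" ball.
log2_distinct_people = N_BITS * (1.0 - binary_entropy(SAME_PERSON))

print(f"log10(distinct possible people) ~ {log2_distinct_people * math.log10(2):.2e}")
```

Under these particular made-up numbers the count comes out around ten to the power of a few times 10^13 — absurdly beyond anything physically instantiable — though different assumptions about N and the radius shift the answer by enormous factors.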
What is the recommended literature related to the ideas both you and wedrifid have been discussing in this thread? I googled but I figure it wouldn’t hurt to ask either. Thanks.
Which aspects? I think the only field relevant to what we were discussing is information theory. The stuff about superintelligences coordinating doesn’t have any existing literature, but similar ideas are discussed on LessWrong in the context of decision theory.