Would you want to live in such a utopia?
Not particularly, no. I care about communicating with my actual friends and family, not shadows of them.
I believe I’d still prefer this scenario over our current world, assuming those two—or destroying the world—are the only options. That’s not very likely, though.
I would very much prefer CelestAI's utopia over this one, aliens and all.
I’d take the status quo over this, and would only accept it if the odds of intelligent life existing elsewhere or elsewhen in the universe were extremely low and the alternative were destruction.
Mm, well.
How about the alternative being probably destruction? I’m not optimistic about our future. I do think we’re likely to be alone within this Hubble volume, though.
Hmmm. What specific X-risks are you worried about? UFAI beating MUFAI (what I consider this to be) to the punch?
Not sure about ‘probably destruction’, or about no life ever arising in the universe (Hubble volume? Does it matter?). But I think the choice is unrealistic, given the possibility of making another, less terrible AI in another few years.
-A lot of this probably depends on my views on the Singularity and the like: I have never had a particularly high estimation of either the promise or the peril of FOOMing AI.
If the AI would allow me to create people inside my subjective universe, and they were allowed to be actual people rather than P-zombie imitations, my acceptance of this would go a lot higher, but I would still shut the project down.
-Hubble volume? Really? I mean, we are possibly the only technological civilization of our level within the galaxy, but the Hubble volume is really, really big (~10^10 galaxies?). And it extends temporally as well.
It matters. There’s a good chance our universe is infinite, but there’s also a good chance it’s physically impossible to escape the (effectively shrinking) Hubble volume, superintelligence or not.
I’m inclined to think that if there were intelligence in there, we’d probably see it, though. UFAI is a highly probable way for a civilization like ours to end, but that wouldn’t stop the offspring from spreading. Yes, the volume is really big, but I expect a UFAI to spread at ~lightspeed.
UFAI’s the big one, but there are a couple of others: biotech-powered script kiddies, nanotech-driven war, etc. Suffice it to say I’m not optimistic, and I consider death to be very, very bad. It’s not at all clear to me that this scenario is worse than the status quo, let alone death.
Do we care whether another intelligence is inside or outside of the Hubble volume?
My estimation of the risk from UFAI is lower than (what seems to be) the LW average. I also don’t see why limiting the unfriendliness of an AI to this MUFAI should be easier than a) an AI which obeys the commands of an individual human on a short time scale, without massive optimization or abstraction, or b) an AI which only defends us against X-risks.
If there are no other intelligences inside the Hubble volume, then a MUFAI would be unable to interfere with any that exist; it can’t reach outside it.
a) is perhaps possible, but long-range optimization is so much more useful that it won’t last. You might use an AI like that while creating a better one, if the stars are right. If you don’t, you can expect that someone else will.
I like to call this variation (among others) LAI. Limited, that is. It’s on a continuum from what we’ve got now; Google might count.
b) might be possible, at the risk of getting stuck like that. Ideally you’d want the option of upgrading to a better one sooner or later, without letting just anyone who claims theirs is an FAI override it; but if you knew how to make an AI recognize an FAI, you’d be most of the way to having an FAI already. This one’s a hard problem, due mostly to human factors.