It matters. There’s a good chance our universe is infinite, but there’s also a good chance it’s physically impossible to escape the (effectively shrinking) Hubble volume, superintelligence or not.
I’m inclined to think that if there were intelligence in there, we’d probably see it, though. UFAI is a highly probable way for our civilization to end, but that won’t stop the offspring from spreading. Yes, the volume is really big, but I expect UFAI to spread at ~lightspeed.
X-risks
UFAI’s the big one, but there are a couple of others: biotech-powered script kiddies, nanotech-driven war, etc. Suffice it to say I’m not optimistic, and I consider death to be very, very bad. It’s not at all clear to me that this scenario is worse than the status quo, let alone death.
Do we care whether another intelligence is inside or outside of the Hubble volume?
My estimation of the risk from UFAI is lower than (what seems to be) the LW average. I also don’t see why limiting the unfriendliness of an AI to this MUFAI should be easier than a) an AI which obeys the commands of an individual human on a short time scale and without massive optimization or abstraction, or b) an AI which only defends us against x-risks.
If there are no other intelligences inside the Hubble volume, then a MUFAI would be unable to interfere with them.
a) is perhaps possible, but long-range optimization is so much more useful that it won’t last. You might use an AI like that while creating a better one, if the stars are right. If you don’t, you can expect that someone else will.
I like to call this variation (among others) LAI. Limited, that is. It’s on a continuum from what we’ve got now; Google might count.
b) might be possible, at the risk of getting stuck like that. Ideally you’d want the option of upgrading to a better one, sooner or later. Ideally without letting just anyone who says that theirs is an FAI override it; but if you knew how to make an AI recognize an FAI, you’d be most of the way to having an FAI. This one’s a hard problem, due mostly to human factors.