Do we care whether another intelligence is inside or outside of the Hubble volume?
My estimation of the risk from UFAI is lower than (what seems to be) the LW average. I also don’t see why limiting the unfriendliness of an AI to this MUFAI should be easier than a) an AI which obeys the commands of an individual human on a short time scale and without massive optimization or abstraction, or b) an AI which only defends us against X-risks.
If there are no other intelligences inside the Hubble volume, then a MUFAI would be unable to interfere with them.
a) is perhaps possible, but long-range optimization is so much more useful that it won’t last. You might use an AI like that while creating a better one, if the stars are right. If you don’t, you can expect that someone else will.
I like to call this variation (among others) LAI. Limited, that is. It’s on a continuum from what we’ve got now; Google might count.
b) might be possible, at the risk of getting stuck like that. Ideally you’d want the option of upgrading to a better one, sooner or later — ideally without letting just anyone who claims theirs is an FAI override it. But if you knew how to make an AI recognize an FAI, you’d be most of the way to having an FAI. This one’s a hard problem, due mostly to human factors.