Aliens won’t produce an FAI; their successful AI project would have alien values, not ours (complexity of value). It would probably eat us. I suspect even our own humane FAI would eat us, or at the very least get rid of the ridiculously resource-hungry robot-body substrate. The opportunity cost of just leaving dumb matter around seems too enormous to be outweighed by whatever arguments there might be for not touching things, under most preferences except those specifically contrived to favor leaving things alone.
UFAI and FAI are probably about the same kind of thing for the purposes of powerful optimization (after initial steps toward reflective equilibrium normalize away flaws of the initial design, especially for “scruffy” AGI). FAI is just an AGI that happens to be designed to hold our values in particular. UFAI is not characterized by having “simple” values (if that characterization even makes sense in this context; it’s not clear why optimization should care about the absolute difficulty of the problem, as opposed to the relative merits of alternative plans). It might even turn out to be likely for a poorly designed AGI to have arbitrarily complicated “random noise” values. (It might also turn out to be relatively simple to make an AI with values so opaque that it would need to turn the whole universe into a merely instrumentally valuable computer in order to obtain a tiny chance of figuring out where to move a single atom, the only action it ever takes for terminal reasons. Make it solve a puzzle of high computational complexity, say.)
There doesn’t appear to be a reason to expect values to influence the speed of expansion to any significant extent: delay is astronomical waste for almost all values, which gives an instrumental drive to start optimizing the matter as soon as possible.
Relating to your first point, I’ve read several stories that treat that scenario in reverse: AIs (whether F or UF is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem that’s solvable with a wider definition of the sort of stuff it’s supposed to be Friendly to, and I’d hope aliens would think of that, but it’s certainly possible.
(Terminological nitpick: You can’t usually solve problems by using different definitions.)
“sort of stuff it’s supposed to be Friendly to”
Goals are not up for grabs. FAI follows your goals. If you change something, the result differs from your goals, with consequences that are worse according to your goals. So you shouldn’t decide on the object level what’s “Friendly”. See also Complex Value Systems are Required to Realize Valuable Futures.