Relating to your first point, I’ve read several stories that describe the reverse: AIs (whether F or UF is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem that’s solvable with a wider definition of the sort of stuff it’s supposed to be Friendly to, and I’d hope aliens would think of that, but it’s certainly possible.
(Terminological nitpick: You can’t usually solve problems by using different definitions.)
sort of stuff it’s supposed to be Friendly to
Goals are not up for grabs. An FAI follows your goals. If you change something, the result differs from your goals, with consequences that are worse according to your goals. So you shouldn’t decide on the object level what’s “Friendly”. See also Complex Value Systems are Required to Realize Valuable Futures.