Did you just ask how the phrase “interesting lives” could go wrong?
Right. I did. Ironic, I know. What I meant was: is properly defining “interesting” enough to avoid a UFAI, or are there other issues to watch out for?
Hmm. It seems like a very small group of new lifeforms could lead properly interesting lives even if the AI killed us all beforehand and turned Earth (at least) into computing power.
I also suspect we wouldn’t enjoy an AGI that aims only for threshold values on two of the three criteria (sentient, lives, diverse) while strongly optimizing for the last one.
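To make that failure mode concrete, here’s a toy sketch of an objective with that shape. Every name, threshold, and number is a hypothetical illustration of mine, not anything from a real system:

```python
# Toy utility: demand thresholds on sentience and number of lives,
# then reward only diversity. All values here are made up.

def utility(sentience: float, num_lives: int, diversity: float) -> float:
    """Score a candidate world-state on the three criteria."""
    SENTIENCE_MIN = 0.5   # threshold: "sentient enough"
    LIVES_MIN = 1_000     # threshold: "enough lives"
    if sentience < SENTIENCE_MIN or num_lives < LIVES_MIN:
        return 0.0        # below either threshold, the world scores nothing
    return diversity      # above both, only diversity is rewarded

# A maximizer of this utility prefers a barely-qualifying but maximally
# diverse world over one that does well on all three criteria:
rich = utility(sentience=0.9, num_lives=10**9, diversity=0.4)
degenerate = utility(sentience=0.5, num_lives=1_000, diversity=1.0)
assert degenerate > rich  # 1.0 > 0.4: exceeding the thresholds buys nothing
```

Once the thresholds are met, nothing in the objective pays for keeping the first two criteria high, so the optimum drifts toward worlds that barely clear them.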