Sorry for the repost; my previous comment softly and silently vanished away, and since it wasn't at all spammy I can only assume that was a mistake.
Eliezer, could you explain why you think it would be immoral to build an FAI as a person? (I'm assuming a very loose interpretation of "person" that doesn't mean "thinks like a human" but does mean "talks back and claims to be conscious and self-experiencing".)
Surely the friendly goals would just be part of its ground-state assumptions?