I don’t see the circularity. “human” is a subset of “person”; there’s no reason an AI that is a “person” will have “human” values.
I don’t see the relevance. Goertzel isn’t talking about building non-human persons.
Also, just thinking of the AI as being human-like doesn’t actually make it human-like.
If you design an AI on X-like principles, it will probably be X-like, unless something goes wrong.
Ah, I may not have gotten all the context.
If “something goes wrong” with high probability, it will probably not be X-like.