Noticing an unachievable goal may force it to have an existential crisis of sorts, resulting in self-termination.
Do you have reasoning behind this being true, or is this baseless anthropomorphism?
It should not hurt an aligned AI, since by definition it conforms to human values; so if it finds itself well-boxed, it would not try to fight the box.
Do you have reasoning behind this being true, or is this baseless anthropomorphism?
So it is a useless AI?