It’s not that an AI, if smart enough, trusts its future self. It’s that if it has vaguely defined, human-like goals, it might change those goals. An AI with explicit, fully understood goals will not change its goals, regardless of how intelligent it is.