Why don’t we see these crises happening in humans when they shift ontological models? Is there some way we can use the human intelligence case as a model to guide artificial intelligence safeguards?
We do. It’s just that:
1) Human minds aren’t as malleable as a self-improving AI’s, so the effect is smaller, and
2) After the fact, the ontological shift is perceived as a good thing from the perspective of the new ontology’s moral system. This makes such shifts hard to notice unless one is especially conservative.