What are the problems that don’t show up in sub-human AI systems and also don’t show up in humans because we can’t think of them? I don’t know. I can’t think of them. That’s why they don’t show up.
An example of such a problem: AI systems that figure out metacosmology and thereby become subject to acausal attack.