These are interesting ideas. I’m not sure I understand what you mean by the first; causal structure can be arbitrarily complex, so I’m unsure how to mitigate across the plausible structures. (It seems to be an AIXI-like problem.)
Points 2 and 3, however, require that humans understand the domain, and too often in existing systems we do not. A superhuman AI might be better than us at this, but if causal understanding scales more slowly than capability, it would still fail.