Also, MIRI seems for some reason to threat-model its AGIs as perfectly rational alien utility-maximizers, whereas real AGIs are implemented with all sorts of heuristic tricks that actually do a better job of emulating the quirky way humans think.
This is extremely important, and I hope you will write a post about it.