Moreover, what evidence can you offer that researchers of von Neumann's intelligence face a significantly smaller difficulty gap between UFAI and FAI than researchers of merely high intelligence? For concreteness, let "significantly smaller difficulty gap" mean that von Neumann-level intelligence gives at least twice the probability of FAI, conditional on GAI.
If it's the case that even researchers of von Neumann's intelligence cannot attempt to build FAI without creating unacceptable risk, then I expect they would realize that (assuming they are not less rational than we are) and find even more indirect routes to FAI (or to optimizing the universe for humane values in general), such as building an MSI-2.
I had a post about this.