What if, as I suspect, UFAI is much easier than IA, where IA is at the level you’re hoping for? Moreover, what evidence can you offer that researchers of von Neumann’s intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of merely high intelligence? To make this concrete, let “significantly smaller difficulty gap” mean that von Neumann-level intelligence gives at least twice the probability of FAI, conditional on GAI.
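The condition above can be stated formally (a sketch in my own notation; “vN” and “high” label the two researcher populations, neither symbol appears in the original):

```latex
P(\mathrm{FAI} \mid \mathrm{GAI},\ \text{vN-level researchers})
\;\ge\; 2 \cdot P(\mathrm{FAI} \mid \mathrm{GAI},\ \text{high-intelligence researchers})
```

That is, conditioning on GAI being built at all, the von Neumann-level team must at least double the chance that the result is Friendly.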
Basically, I think you overestimate the value of intelligence.
Which is not to say that a parallel track of IA might not be worth a try.
If even researchers of von Neumann’s intelligence cannot attempt to build FAI without creating unacceptable risk, then I expect they would realize that (assuming they are not less rational than we are) and find still more indirect routes to FAI (or to optimizing the universe for humane values in general), for example by building an MSI-2.
I had a post about this.