In the scenario described, what you want most of all is a way to turn UFAI projects into FAI projects. Fortunately, this should be possible.
Think of it this way: you could postulate a universe where the dangerous stuff is pure magic, where one particular incantation summons Azathoth while every similar sequence of symbols does nothing more than you would expect, so that the incantation can be known only by divine revelation. But that’s not what you’re postulating, right? We are talking about a universe where this stuff is science, not magic. In that case it should be scientifically knowable.
So what I would try to do is figure out:
What should we expect to see if the “superintelligent AI is easier than most experts think, so FAI is important” theory is true? What predictions does the theory make about what we should observe that we would not observe if mainstream science is correct?
What should we do in response? How exactly do you aim a project at FAI instead of UFAI? (For example, should researchers move away from brute-force techniques like genetic programming, in favor of greater emphasis on techniques like logical reasoning? A toy sketch of that contrast follows below.)
And I’d write these up in an ongoing dialogue aimed at making surprising and successful predictions and thereby convincing relevant parties to take the correct actions for building FAI.
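To make the parenthetical above a bit more concrete, here is a toy sketch of the distinction between an opaque brute-force search, which you can only judge by its score, and an explicit rule whose justification you can read and audit. Everything in it (`target_behaviour`, `random_search`, `RuleBasedAgent`) is a hypothetical illustration invented for this comment, not code from any real AI project.

```python
# Toy illustration only: an opaque search judged by a score vs. an explicit,
# inspectable rule. All names here are hypothetical, invented for this sketch.
import random

def target_behaviour(x):
    """The behaviour we want a system to exhibit: double its input."""
    return 2 * x

# "Brute-force" style: try random candidates and keep whichever scores best.
# We end up with a number that works, but no stated reason why it works.
def random_search(trials=10_000):
    best_coeff, best_err = None, float("inf")
    for _ in range(trials):
        coeff = random.uniform(-10.0, 10.0)  # candidate "program": y = coeff * x
        err = sum(abs(coeff * x - target_behaviour(x)) for x in range(10))
        if err < best_err:
            best_coeff, best_err = coeff, err
    return best_coeff

# "Reasoning" style: the rule is written down explicitly, so the justification
# travels with the code and can be checked directly.
class RuleBasedAgent:
    RULE = "output is exactly twice the input"

    def act(self, x):
        return 2 * x

if __name__ == "__main__":
    print("search found coefficient of roughly", round(random_search(), 3))
    print("rule-based agent acts by the stated rule:", RuleBasedAgent.RULE)
```

The point of the contrast is only that the second style leaves an audit trail a would-be FAI project could inspect, which is the kind of property the second question above is asking about.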