I’m not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. My current thinking is that it probably isn’t, but only because we’re forced to live in a third-best world:
First best: Do AI research until just before we’re ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until Friendliness is solved.
Second best: Friendliness looks a lot harder than AGI, and we can’t expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI is staring them in the face. So stop or slow down AI research now.
Third best: Don’t try to stop or slow down AI research because we don’t know how to do it effectively, and doing it ineffectively will just antagonize AI researchers and create PR problems.
There are some people who honestly think Friendliness researchers at MIRI and other places actually discourage AI research. That sounds ridiculous to me; I’ve never seen such an attitude from Friendliness researchers, nor can I even imagine it.
Why is this so ridiculous as to be unimaginable? Isn’t the second-best world above actually better than the third-best, if only it was feasible?