But if the current paradigm is not the final form of existentially dangerous AI, such research may not be particularly valuable.
I think we should figure out how to train puppies before we try to train wolves. It might turn out that very few principles carry over, but if they do, we’ll wish we had delayed.
The only drawback I see to delaying is that it might cause people to take the issue less seriously than they would if powerful AIs appeared in their lives very suddenly.
I endorse attempts to deliberately engineer a slow takeoff. I am less enthused about attempts to freeze AI development at a particular level.