Your argument seems to be saying, “Look, everyone said that the Y2K bug would cause terrible problems, but nothing happened.”
Nothing happened precisely because everybody was predicting terrible problems, and so they fixed the bugs in advance. If people had been following your idea, they wouldn’t have bothered to predict any problems or to fix the bugs, and it could therefore easily have caused terrible problems.
It’s more like saying, “Look, everyone said pricing sulfur dioxide would cause great problems, but nothing really bad happened, because people and the market adapted naturally to the change.”
So the Y2K bug is not an argument for “do nothing if you’re heavily involved with computers”, but it is an argument for “do nothing if you have no connection with the computer industry (including funding, etc.), because it seems to have a decent track record of sorting out its own problems”.
I agree that “taking down this fence is going to cause society to collapse” is almost always false, at least when there is any real danger of the fence being taken down.
The same thing likely applies to statements like “programming an AGI without a tremendous amount of care about its exact goals is going to destroy the world.”
I’d argue we have rather more experience of taking down fences that people cling to than of programming AGI goals...