You can’t get such a proof: a) the future is complex and hard to predict, and b) there are many plausible scenarios (not very likely, but plausible) in which it doesn’t happen.
The best argument we can make is that it is reasonable to assign a high probability (say 70-90%) that it will happen via x and y, and that such a probability is far too high for this not to be the most important issue of today.
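To make the implicit arithmetic explicit, here is a rough expected-value sketch; the 70-90% range is the assumption stated above, not an established figure:

```latex
% Expected-value sketch (illustrative; p is the probability assumed above).
% Let p = probability the scenario occurs, and C = its cost, measured in
% the same units as the stakes of any competing priority.
\[
\mathbb{E}[\text{cost}] = p \cdot C \;\ge\; 0.7\,C
\]
% Even at the low end of the assumed range, the expected cost carries at
% least 70% of the full catastrophe's weight, so it dominates any priority
% whose stakes are much smaller than C.
```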
Imo the best route is this: AlphaCode can already compete with human programmers in some areas. Will it take long before AI becomes better than humans at programming, and can therefore drive AI development on its own and rewrite its own code?
I’ve found that people often aren’t persuaded by AI x-risk arguments because they haven’t been exposed to key concepts like recursive self-improvement and basic AI drives. I believe that once you understand those concepts it’s hard not to change your mind, because they paint such a clear picture.
Indeed, I don’t want a formally verified, watertight Proof of Doom. I’d actually be a little surprised if we were doomed to mathematical standards of certainty.
I want a viral memetic Proof of Doom. A compelling argument that will convince everyone clever enough to be dangerous. The sort of thing that a marketing AI might come up with if it were constrained to only speak the truth. The sort of thing that might start a religion. A religion with a prohibition against creating God.