There’s no proof that superintelligence is even possible. The idea of a self-updating AI that rewrites itself into godlike intelligence isn’t supported by evidence.
There is just so much hand-wavey magical thinking going on in regard to the supposed superintelligence AI takeover.
The fact is that manufacturing networks are damn fragile. Power networks too. Some bad AI is still limited by these physical things. Oh, it’s going to start making its own drones? Cool, so it is running thirty mines, various machine shops, an oil refinery, and all the rest of the supply network required just to make a spark plug?
One tsunami in the RAM manufacturing district and that AI is crippled. Not to mention that so much of the necessary information does not exist online: many processes are unpatented, undocumented, and opaque.
We do in fact have multiple tries to get AI “right”.
We need to stop giving future AI magical powers. It cannot suddenly crack all cryptography. It’s not mathematically possible.
Eh, I agree it is not mathematically possible to break a one-time pad (though it is worth remembering that the NSA broke VENONA: mathematical cryptosystems are not the same as their real-world implementations), but most of our cryptographic proofs are conditional and rely on unproven assumptions. For example, I don’t see what is mathematically impossible about breaking AES.
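For concreteness, here is why the one-time pad is unconditionally secure in a way AES is not: with a truly random key as long as the message, every same-length plaintext is an equally valid decryption of the ciphertext, so the ciphertext carries no information about which one was actually sent. A minimal Python sketch (the messages are just illustrative):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# One-time pad: a truly random key as long as the message.
msg = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(msg))
ct = xor_bytes(msg, key)

# Perfect secrecy: for ANY candidate plaintext of the same length,
# some key "decrypts" the ciphertext to it, so an attacker holding
# only `ct` cannot tell which plaintext is real.
decoy = b"RETREAT AT TEN"
decoy_key = xor_bytes(ct, decoy)

assert xor_bytes(ct, key) == msg          # the real key gives the real message
assert xor_bytes(ct, decoy_key) == decoy  # a different key gives the decoy
```

AES has no analogue of this property: its security rests on the conjecture that no efficient key-recovery attack exists, which is exactly the kind of assumption the post is pointing at.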
Meaning it hasn’t happened, or it isn’t possible?
If it offers to improve them, we may well see that as a benevolent act...
There are three things to address, here. (1) That it can’t update or improve itself. (2) That doing so will lead to godlike power. (3) Whether such power is malevolent.
Of 1, it does that now. Last year, I started to get a bit nervous noticing the synergy between converging AI fields. In other words, Technology X (e.g. Stable Diffusion) could be used to improve the function of Technology Y (e.g. Tesla self-driving) for an increasingly large pool of X and Y. This is one of the early warning signs that you are about to enter a paradigm shift or geometric progression of discovery. Suddenly, people saying AGI was 50 years away started to sound laughable to me. If it is possible on silicon transistors, it is happening in the next 2 years. Here is an experiment testing the self-reflection and self-improvement (loosely “self-training,” but not quite there) of GPT-4 (published last week).
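The self-reflection idea those experiments test can be caricatured as a generate-critique-retry loop. This is an illustrative toy sketch only, not the actual experimental setup; `model` and `check` are hypothetical stand-ins for an LLM call and a task-specific critic:

```python
def model(prompt: str, feedback: str = "") -> str:
    # Toy stand-in: a real system would call an LLM here. This fake
    # "model" only answers correctly once told about its error.
    return "4" if "wrong" in feedback else "5"

def check(answer: str) -> str:
    # Task-specific critic: returns feedback text, or "" if the answer passes.
    return "" if answer == "4" else "wrong: 2 + 2 != " + answer

def reflect_loop(prompt: str, max_tries: int = 3) -> str:
    """Generate an answer, critique it, and feed the critique back in."""
    feedback = ""
    answer = ""
    for _ in range(max_tries):
        answer = model(prompt, feedback)
        feedback = check(answer)
        if not feedback:      # critic is satisfied; stop early
            return answer
    return answer             # best effort after max_tries

print(reflect_loop("What is 2 + 2?"))  # first try fails, second succeeds
```

The point of the caricature: nothing in the loop retrains the model's weights, which is why "self-reflection" is only loosely "self-training."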
Of 2, there is some merit to the argument that “superintelligence” will not be vastly more capable because of the hard universal limits of things like “causality.” That said, we don’t know how regular intelligence “works,” much less how much more super a super-intelligence would or could be. If we are saved from AI, then it is these computational and informational speed limits of physics that have saved us out of sheer dumb luck, not because of anything we broadly understood as a limit to intelligence, proper. Given the observational nature of the universe (think quantum mechanics), for all we know, the simple act of being able to observe things faster could mean that a superintelligence would have higher speed limits than our chemical-reaction brains could ever hope to achieve. The not knowing is what causes people to be alarmist. Because a lot of incredibly important things are still very, very unknown …
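On the speed-limit point, some rough arithmetic shows why “chemical-reaction brains” are likely nowhere near the physical ceiling: fast nerve signals propagate on the order of 100 m/s, while signals in silicon move at a substantial fraction of light speed. A back-of-the-envelope comparison, using rough textbook figures rather than measurements:

```python
C = 3.0e8          # speed of light in vacuum, m/s
NEURON = 100.0     # fast myelinated axon conduction velocity, m/s (roughly)

brain = 0.1        # ~10 cm across a human brain, in meters
chip = 0.01        # ~1 cm across a processor die, in meters

t_brain = brain / NEURON       # time for one signal to cross the brain: ~1 ms
t_chip = chip / (0.5 * C)      # signal in silicon at ~half light speed: well under 1 ns

print(f"brain traversal: {t_brain * 1e6:.0f} microseconds")
print(f"chip traversal:  {t_chip * 1e9:.3f} nanoseconds")
print(f"silicon advantage: over {t_brain / t_chip:,.0f}x")
```

Even granting the brain enormous parallelism, a raw signalling gap of millions-fold is the kind of physical headroom the paragraph above is gesturing at.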
Of 3, on principle, I refuse to believe that stirring the entire contents of Twitter and Reddit and 4Chan into a cake mix makes for a tasty cake. We often refer to such places as “sewers,” and oddly, I don’t recall eating many tasty things using raw sewage as a main ingredient. No, I don’t really have a research paper, here. It weirdly seems like the thing that least requires new and urgent research given everything else.