For x-risk prevention, we should assume that the risk of a quick creation of AI is lower than all other x-risks combined, and this estimate is highly uncertain in both directions. For example, I think biorisks are underestimated in the long run.
But to solve many x-risks we probably don't need full-blown superintelligence; we just need a good global control system, something that combines ubiquitous surveillance and image recognition.
“But to solve many x-risks we probably don't need full-blown superintelligence; we just need a good global control system, something that combines ubiquitous surveillance and image recognition”—unlikely to happen in the foreseeable future.
Not everywhere, but China is surprisingly close to it. However, the most difficult question is how to put such a system in every corner of the Earth without starting a world war. Oops, I forgot about Facebook.