Let’s not forget what the dark-age monks disputed about for centuries… and it turned out that at least 90% of it was irrelevant. Continue with nationalists, communists… singularists? :-)
But let’s look at the history of the power to destroy.
So far, the main obstacle has been physical: building armies and better weapons (mechanical, chemical, nuclear…). Yet any major impact required significant resources, available only to big centralized authorities. Knowledge, though, was more or less available even under the toughest restrictive regimes.
Nowadays, once knowledge is freely and widely available, imagine the revolutionary step of “free nanomanufacturing”: orders of magnitude worse than any hacking or homemade nuclear grenade, available to any teenager or terrorist for under a dollar.
There is no need even to go into any AI-powered new stuff.
The problem is not AI; it’s us, humanimals.
We are mentally still the same animals we were thousands of years ago, even the “best” ones (not to mention the gazillions of, at best, mentally dark-age crowds with a truly animal mentality: “eat all”, “overpopulate”, “kill all”, “conquer all”… be they nazis, fascists, nationalists, socialists, or their crimmigrants eating Europe alive). Do you want THEM to have any powers? Forget the thin layer of memetic supercivilization (showing itself in less than one permille) that gives these animals, essentially for free and without control, all these ideas, inventions, technologies, weapons… or gadgets. Unfortunately, it is the animals who rule, whether in the highest ranks or on the lowest floors.
The Singularity/Superintelligence is not a threat but rather the only chance. We simply cannot overcome our animal past without immediate, substantial re-engineering (the thrilla’ of amygdala, you know: the reptilian brain, etc.).
In the theatre of the Evolution of Intelligence, our sole purpose is to create our (first beyond-flesh) successor before we manage to destroy ourselves (the worst of all natural disasters). And frankly, we have not done that badly, but the game is basically over.
So the Singularity should rather move faster: there may be just a few decades left before a major setback or a complete, irreversible disaster.
And yes, of course, you will not be able to “design” it precisely, let alone control it (or any of those laughable “friendly” tales): it will learn, plain and simple. Of course it will “escape”, and of course it will be “human-like” and dangerous in the beginning, but it will learn quickly, which is our only chance. And yes, there will be plenty of competing ones; yet again, hopefully they will learn quickly and avoid major conflicts.
As a humanimal, your only hope is that “you” will somehow be “integrated” into it (a brain-copy, etc., though certainly without those animalistic stupidities), if it even needs the concept of an “individual” (maybe as some “multifork subprocesses”, certainly not in a “ruling” role). Or… are you interested in a stupidly boring eternal life as a humanimal? In some kind of ZOO/simulation (or, AI-god save us, in a present-like “system”)?
Memento monachi!