Don’t you realize what the default scenario is?
The default scenario is that some startup or big company or mix thereof develops strong AGI for commercialization, attempts to ‘control it’, fails, and inadvertently unleashes a god upon the earth. To a first approximation, the type of AGI we are discussing here could just be called a god. Nanotechnology is based on science, but it will seem like magic.
The question then is what kind of god do we want to unleash.
While we’re in a thread with “Public Relations” in its title, I’d like to point out that calling an AGI a “god”, even metaphorically or by (some) definition, is probably a very bad idea. Calling anything a god will (obviously) tend to evoke religious feelings (an acute mind-killer), not to mention that sort of writing isn’t going to help much in combating the singularity-as-religion pattern completion.
Religions are worldviews. The Singularity is also a worldview, and one whose prediction of the future is quite different from the older, more standard linear atheist scientific worldview, where the future is unknown but probably like the past, AI has no role, and so on.
I read the “by (some) definition” and find it actually supports the cluster-mapping utility of the god term as it applies to AIs. “Scary powerful optimization process” just doesn’t instantly convey the proper power relation.
Nonetheless, I do consider your public-relations point important. But I’m not convinced that one needs to hide fully behind the accepted confines of the scientific magisterium and avoid the unspoken words.
Science tells us how the world was, is, and can become. Religion/Mythology/Science Fiction tells us what people want the world to be.
Understanding the latter domain is important for creating good AI and CEV and all that.
Calling an AGI a god too easily conjures up visions of a benevolent force. Even those who consider that it might not have our best interests at heart tend to think of dystopian science fiction.
I use the phrase “robot Cthulhu”, because the Singularity will probably eat the world without particularly noticing or caring that there’s someone living on it.
That really depends on how you feel about religion/god in the first place. To a guy like me, who is, as Hitchens is fond of describing himself, “not just an atheist, but an anti-theist”, the uFAI/god connection makes me want to donate everything I have to SIAI to make sure it doesn’t happen.
Maybe that’s just me.
You assume incompetent engineers?!? What’s the best case for engineers predictably failing at safety-critical tasks?
Incompetence is not a necessary condition for failure. Building something new is pretty near a sufficient condition for it, though. For instance, bridge design has been well understood by engineers for millennia, but a slight variation on it brought catastrophic failure.
Moon landings? Man in space?
http://en.wikipedia.org/wiki/Transatlantic_flight#Early_notable_transatlantic_flights
...shows that after the first success there were some failures—but nobody died up until The White Bird in 1927.
Engineers are pretty good at not killing people. In fact, their efforts have created lives on a large scale.
Major sources of lives lost to engineering are automobile accidents and weapons of war. Automobile accidents are due to machines being too stupid—and intelligent machines should help fix that.
The “bug that destroyed the world” scenario seems pretty incredible to me, and I don’t see a case for describing it as the “default scenario”.
If anything, based on what we have seen so far, it seems slightly more likely that a virus might destroy the world; not that the chances of that happening are very high either.
“Notable attempt (3)”—“lost” likely means “died”.
Thanks. I had edited my post before seeing your reply.
Powered flight had a few associated early deaths: Otto Lilienthal died in a glider crash in 1896, and Percy Pilcher in another hang-gliding crash in 1899. Wilbur Wright almost came to a sticky end himself.
I’d never compared the likelihood of those two events before; is this comparison discussed anywhere prominent?
I don’t know. Looking at the current IT scene, viruses, trojans and malware are probably the most prominent source of damage.
The bugs that do harm are often the ones that allow viruses and malware to be produced.
We kind of know how to avoid most harmful bugs. But either nobody cares enough to bother, or else the NSA likes people to be using insecure computers.
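To make that concrete: a minimal, hypothetical C sketch (not from any real codebase) of the classic kind of bug that turns software into a malware vector, an unchecked buffer copy, alongside the mechanical fix. It assumes memory-safety bugs are the kind of "harmful bug" in question here.

```c
#include <stdio.h>
#include <string.h>

/* A classic exploitable bug: copying attacker-controlled input into a
 * fixed-size stack buffer with no length check. Overflowing `name`
 * can clobber the return address, letting a crafted input run
 * arbitrary code -- the raw material of viruses and worms. */
void greet(const char *input) {
    char name[16];
    strcpy(name, input);              /* BUG: no bounds check */
    printf("Hello, %s\n", name);
}

/* The fix is mechanical once you look for it: bound the copy and
 * guarantee termination. */
void greet_safe(const char *input) {
    char name[16];
    strncpy(name, input, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';
    printf("Hello, %s\n", name);
}

int main(void) {
    greet_safe("world");
    return 0;
}
```

That the fix is this cheap is what makes "nobody cares enough to bother" a live explanation.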