Provided that strong artificial intelligence is possible, and that hard takeoff is possible, there is a very real chance, well over 50%, that we are all going to get killed by an unFriendly singularity.
Where is this (conditional) probability estimate coming from?
Here’s the way I look at it:
There is a lot of money in intelligence research. It’s valuable, powerful, and interesting. We’re just starting to get back to the point where there’s some substance behind the hype. I consider IBM’s Watson system a huge step forward, both in terms of PR and actual advancement. A lot of groups of smart people are working on increasingly broad artificial intelligence systems. They’re not talking about building full agents yet, because that’s a low-status idea due to the over-promising of the ’70s, but it’s just a matter of time.
Eliezer’s put a lot of blog posts into the idea that the construction of a powerful optimization process that isn’t explicitly Friendly is an incredibly bad scenario (see: paperclip maximizers). I tend to agree with him here.
Most AI researchers out there are either totally unaware of, or dismissive of, Friendly AI concerns, due at least partially to their somewhat long-term and low-status nature.
This is not a good recipe. Avoiding disaster requires that a Friendly AI researcher develop a bootstrapping AI first, which is unlikely, and then successfully make it Friendly, which is also unlikely. I put the joint probability at less than 50%, based on the information currently available to me. It’s not a good position to be in.
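To make the arithmetic behind that joint estimate concrete, here is a minimal sketch; the two component probabilities are purely illustrative assumptions, not numbers I’m committing to:

```python
# Illustrative sketch of the joint-probability reasoning above.
# Both component probabilities are placeholder assumptions for the sake
# of the arithmetic, not estimates anyone in this thread has committed to.

p_fai_team_first = 0.5        # assumed: P(a Friendliness-aware team builds the bootstrapping AI first)
p_friendly_given_first = 0.5  # assumed: P(they actually get Friendliness right, given they are first)

p_good_outcome = p_fai_team_first * p_friendly_given_first
print(f"P(good outcome) ~ {p_good_outcome:.2f}")      # 0.25
print(f"P(bad outcome)  ~ {1 - p_good_outcome:.2f}")  # 0.75
```

Even with fairly generous values for both steps, the product falls below 50%, which is all the original claim needs.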
You’re assuming that people who don’t understand Friendly AI have enough competence to actually build a functioning agent AI in the first place.
Either friendliness is a hard problem compared to an entire AGI project, or it isn’t.
In the first case, ignoring friendliness gives a significant speed advantage, and someone can cut enough corners to win the race and build an uFAI.
In the second case, trying to create a mostly-friendly AI and making any partial progress wins some reputation in the AI community. This can help sell friendliness considerations to other researchers.