I’m willing to believe that if AI-roughly-as-described-by-Eliezer gets developed, it will be able to exterminate humanity, because we apparently have already invented weapons that can exterminate humanity. As for the chance of such AI getting developed at all, why not apply the usual reference classes of futuristic technology?
Judging from people's previous predictions about when we will get futuristic software, I am quite happy to push the likelihood of them being on the right track down to quite low levels*. That is why I am interested in ways of ruling out approaches experimentally, if at all possible.
However, even if we eliminate their approaches, we still don't know the chances of non-Eliezer-like (or Goertzelian, etc.) futuristic software wiping out humanity. So we are back to square one.
*Any work on possible AIs that I want to explore, I mainly view as trying to rule out a possible angle of implementation. And I consider AI part of a multi-generational, humanity-wide effort to understand the human brain.
Hmm, I wonder what would be appropriate outside views to give a good estimate of the dangers of AI?
The number of species made extinct by competition rather than natural disaster? (Assuming AI is something like a new species.)
How well humans can control and predict technologies?
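The outside-view idea above can be made concrete as a back-of-the-envelope calculation: take a base rate from each candidate reference class, weight each class by how analogous we judge it to be to AI, and average. A minimal sketch, where every number and class name is an invented placeholder purely for illustration, not real data:

```python
# Toy outside-view estimate: combine base rates from several candidate
# reference classes, weighting each by its judged relevance to AI.
# All rates and weights below are made-up placeholders.

def outside_view_estimate(reference_classes):
    """Weighted average of per-class base rates.

    reference_classes: list of (base_rate, weight) pairs, where base_rate
    is the frequency of the bad outcome in that class and weight is our
    judged relevance of the class to AI.
    """
    total_weight = sum(w for _, w in reference_classes)
    return sum(rate * w for rate, w in reference_classes) / total_weight

# Hypothetical reference classes for "new technology causes catastrophe":
classes = [
    (0.01, 0.5),  # futuristic-technology predictions that panned out badly
    (0.30, 0.3),  # species driven extinct by a competing species
    (0.05, 0.2),  # technologies humans failed to control as intended
]

print(round(outside_view_estimate(classes), 3))  # 0.105 with these placeholders
```

The interesting (and contestable) part is entirely in the choice of classes and weights; the arithmetic itself is trivial.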
ETA: or, more specifically, the usual reference classes for futuristic software.