Judging from people's previous predictions about when we will get futuristic software, I am quite happy to push the likelihood of them being on the right track down to quite low levels*. That is why I am interested in ways of ruling out approaches experimentally, if at all possible.
However, even if we eliminate their approaches, we still don't know the chances of non-Eliezer-like (or Goertzelian, etc.) futuristic software wiping out humanity. So we are back to square one.
*Any work on possible AIs that I want to explore, I mainly view as trying to rule out a possible angle of implementation. And I consider AI part of a multi-generational, humanity-wide effort to understand the human brain.