I’m not sure what you’re objecting to—the idea of superhuman intelligence? the idea that superhuman intelligence would determine the fate of the world? the idea that “unaligned” superhuman intelligence would produce a world inhospitable to humanity?
I am objecting, on some level, to all of it. Certainly some ideas (or their associated principles) seem clearer than others, but none of it feels solid from top to bottom. It is clear from the other responses that this is because a Popperian reading is doomed to fail.
An example of human-level AI from the book (p. 52):
It is also possible that a push toward emulation technology would lead to the creation of some kind of neuromorphic AI that would adapt some neurocomputational principles discovered during emulation efforts and hybridize them with synthetic methods, and that this would happen before the completion of a fully functional whole brain emulation.
You cannot disprove neurocomputational principles (e.g. Rosenblatt’s perceptron), and “a push toward emulation technology” is too vague a claim to engage with productively.
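To make “neurocomputational principles (e.g. Rosenblatt’s perceptron)” concrete, here is a minimal sketch of the perceptron learning rule on a made-up, linearly separable toy dataset. The function name and the data are purely illustrative assumptions of mine, not anything from the book or the emulation literature; the point is only that such a principle is a positive engineering recipe rather than the kind of empirical claim one could set out to refute.

```python
# Hypothetical toy example: Rosenblatt's perceptron learning rule on made-up,
# linearly separable data. Names and data are illustrative only.
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # The core principle: adjust weights only when an example is misclassified.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Four toy points, labeled +1 above the line x1 + x2 = 1 and -1 below it.
X = np.array([[2.0, 2.0], [1.5, 1.0], [0.0, 0.5], [-1.0, 0.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # reproduces y: [ 1.  1. -1. -1.]
```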
I feel that both the paths and the dangers have an ‘ever-branchingness’ to them, such that a Popperian approach of disproving a single path toward superintelligence is like chopping off one head of a hydra.
the idea that “unaligned” superhuman intelligence would produce a world inhospitable to humanity?
I think this part is the most clear: the orthogonality thesis, together with the concept of a singleton and an unaligned superintelligence, points toward extinction.
So if I am understanding you… You think the doomsday scenario (an unaligned, all-powerful AI creating a risk of extinction for humanity) is internally consistent, but you want to know whether it is actually possible or likely. And you want to make this judgment in a Popperian way.
Since you undoubtedly know more than I do about Popperian methods, can I first ask how a Popperian would approach a proposition like “a nuclear war in which hundreds of cities were bombed would be a disaster”? Like certain other big risks, it is a proposition that we would like to evaluate in some way without just letting the event happen and seeing how bad it is… In short, can you clarify for me how falsificationism is applied to claims that a certain event is possible but must never be allowed to happen?
Quick thought. It is easy to test a nuclear weapon with relatively little harm done (in some desert), note its effects, and show (though a bit less convincingly) that if many of these weapons were used on cities and the like, we would have a disaster on our hands. The case for superintelligence is not analogous. We cannot first build it and test it safely to see its destructive capabilities; we cannot even test whether we can build it at all, since if we succeeded it would already be too late.
I cannot clarify how falsificationism is applied to claims like that. In addition, I am unsure whether it is even possible. I do think that if it is not possible, that undermines the theory in some ways. E.g. classical Marxists still think it is only a matter of time until their global revolution.
I think there are ways to set up a falsifiable argument the other way, e.g. we will not reach AGI because 1. the human mind processes information above the Turing Limit and 2. all AI is within the Turing Limit. To disprove this we do not even need to reach AGI: we can try to show that human minds are within the Turing Limit, or that AI is (or can be) above it.
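For what it is worth, the logical form of that argument, as I read it, can be written out explicitly. This is my own formalization of the sketch above, not a quotation of anyone:

```latex
% Sketch of the argument's logical form (my own formalization; assumes amsmath).
\begin{align*}
P_1 &:\ \text{the human mind processes information above the Turing Limit} \\
P_2 &:\ \text{all AI operates within the Turing Limit} \\
C   &:\ \text{therefore, no AI reaches human-level intelligence (no AGI)}
\end{align*}
% Falsifying the argument does not require building AGI: it suffices to refute
% either premise, i.e. to show $\neg P_1$ (human minds are within the Turing
% Limit) or $\neg P_2$ (some AI is, or can be, above it).
```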