I am still leaning towards the general reservations that I voiced in my first submission here, although I have learnt a lot since then. I find the tendency to base decisions solely on logical implications questionable. I do not doubt that everything we know hints at the possibility that artificial intelligence could undergo explosive self-improvement and reach superhuman capabilities, but I do doubt that we know enough to justify the kind of commitment given within this community. I am not able to formalize my doubts right now or explain what exactly is wrong, but I feel the need to voice my skepticism nonetheless. And my recent discovery of some of the admitted problems reinforces that skepticism.
Take, for example, the risk from asteroids. There is a lot of empirical evidence that asteroids have caused damage before and that they pose an existential risk in the future. One can use probability theory, the expected utility formula and other heuristics to determine how reasonable it would be to support the mitigation of that risk. Then there are risks like those posed by particle accelerators. There is some evidence that high-energy physics might pose an existential risk, but more evidence that it does not. The risks and benefits are still not based on sheer speculation. Yet I don't think that we could just factor in the expected utility of the whole Earth and our possible future as a galactic civilization to conclude that we shouldn't do high-energy physics.

In the case of risks from AI, we have a scenario that is based solely on extrapolation, on sheer speculation. The only reason to currently care strongly about risks from AI is the expected utility of success or, respectively, the disutility of a negative outcome. And that is where I become very skeptical: there can be no empirical criticism. Such scenarios are the ones I compare to 'Pascal's Mugging', because they are susceptible to the same problem of expected utility outweighing any amount of reasonable doubt. I feel that such scenarios can lead us astray by removing themselves from other kinds of evidence and empirical criticism, justifying themselves solely by making up huge numbers when the available data and understanding do not allow any such conclusions.

Take, for example, MWI (the many-worlds interpretation of quantum mechanics), itself a logical implication of the available data and our current understanding. But is it enough to commit quantum suicide? I don't feel that such a conclusion is reasonable. I believe that logical implications and extrapolations are not enough to outweigh any risk. If there is no empirical evidence, if there can be no empirical criticism, then all bets are off.
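To make the worry concrete, here is a toy calculation, purely a sketch with invented numbers, of how an ungrounded claim can always "win" on naive expected utility: whatever doubt we assign, the claimed payoff can be inflated faster than the probability shrinks.

```python
# Toy illustration of expected utility swamping reasonable doubt.
# All probabilities and utilities below are invented for the example,
# not real estimates of anything.

def expected_utility(p_success: float, utility_if_success: float) -> float:
    """Naive single-outcome expected utility, ignoring costs."""
    return p_success * utility_if_success

# Asteroid mitigation: the probability is grounded in empirical impact records.
asteroid = expected_utility(p_success=1e-4, utility_if_success=1e10)

# A Pascal's-Mugging-style offer: no empirical grounding, but the claimed
# payoff is simply asserted to be astronomically large.
mugging = expected_utility(p_success=1e-20, utility_if_success=1e30)

print(asteroid)  # 1e6
print(mugging)   # 1e10 -- the purely speculative offer dominates
```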
As you mentioned, there are many uses, like ballistic calculations, where mere extrapolation works and is the best we can do. But since there are problems like 'Pascal's Mugging' that we perceive to be undesirable and that lead to an endless hunt for ever larger expected utility, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics. We agree that we are not going to stop pursuing whatever terminal goal we have chosen just because someone promises us even more utility if we do what that person wants. We also agree that we are not going to stop loving our girlfriend just because there are many people who do not approve of our relationship and who would be happy if we divorced. We have therefore already informally established some upper and lower bounds. But when do we start to take our heuristics seriously and do whatever they identify as the optimal decision? That is an important question, and I think that one of the answers, as mentioned above, is that we shouldn't trust our heuristics without enough empirical evidence as their fuel.
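One way such an "upper bound" is sometimes formalized is a bounded utility function. The sketch below assumes an arbitrary cap and an arbitrary squashing function (tanh); it is one proposed response to Pascal's Mugging, not a settled solution, but it shows how a bound stops ever-larger promised payoffs from dominating the calculation.

```python
import math

# Sketch of one informal "upper bound": a bounded utility function.
# U_MAX and the tanh squashing are arbitrary illustrative choices.

U_MAX = 1e6  # hypothetical cap on achievable utility

def bounded_utility(raw_utility: float) -> float:
    """Squash raw utility into the interval (-U_MAX, U_MAX)."""
    return U_MAX * math.tanh(raw_utility / U_MAX)

def expected_utility(p: float, raw_utility: float) -> float:
    return p * bounded_utility(raw_utility)

# The mugger's claimed payoff grows without limit, but the bounded
# expected utility plateaus near p * U_MAX = 1e-14:
for claimed in (1e6, 1e12, 1e30):
    print(claimed, expected_utility(1e-20, claimed))
```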
My usual example of the IT path being important is Microsoft. IT improvements have been responsible for much of recent human progress. For many years, Microsoft played the role of the evil emperor of IT, with nasty business practices and shoddy, insecure software. They screwed humanity, and it was a nightmare: a serious setback for the whole planet. Machine intelligence could be like that, but worse.
(nods) I endorse drawing inferences from evidence, and being skeptical about applying heuristics that were developed for one reference class to a different reference class.