That is among the reasons why I keep telling SIAI people to never reply to “AI risks are small” with “but a small probability is still worth addressing”. Reason 2: It sounds like an attempt to shut down debate over probability. Reason 3: It sounds like the sort of thing people are more likely to say when defending a weak argument than a strong one. Listeners may instinctively recognize that as well.
Existential risks from AI are not under 5%. If anyone claims they are, that is, in emotional practice, an instant-win knockdown argument unless countered; it should be countered directly and aggressively, not weakly deflected.
To deal with people making that claim more easily, I’d like to see a post by you or someone else involved with SIAI summarizing the evidence for existential risks from AI, including the arguments for a hard takeoff and for why the AI’s goals must hit a narrow target of Friendliness.
If you talk about the probability of a coin coming up heads, that is a question that well-informed people can be expected to agree on—since it can be experimentally determined.
However, the probability of civilisation being terminally obliterated isn’t a probability that can easily be measured by us. Either all earth-sentients will be obliterated, or they won’t be. We can’t assign probabilities and then check them afterwards using frequency analysis. We can’t have a betting market on the probability—since one side never pays out. From the perspective of a human, the probability is just not meaningful—there’s no way for a human to measure it.
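To make that contrast concrete, here is a minimal Python sketch (purely illustrative, with made-up numbers): a repeatable event like a coin flip lets us check a probability estimate against observed frequencies, while the one-shot event under discussion offers nothing analogous to check against.

```python
import random

# Minimal sketch, made-up numbers: a repeatable event can be checked
# against observed frequency; a one-shot event cannot.
random.seed(0)

true_p = 0.5
flips = [random.random() < true_p for _ in range(10_000)]
print(f"estimated P(heads) = {sum(flips) / len(flips):.3f}")  # converges toward 0.5

# For "civilisation is terminally obliterated" there is exactly one trial,
# observed at most once, so no analogous frequency check is available to us.
```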
Possibly our distant descendants will figure out a reasonable estimate of the chances of oblivion (to a sufficiently well-informed agent), e.g. by recreating the Earth many times and repeatedly running the experiment. I think that claims to know what the results of that experiment would be represent overconfidence. The fraction of Earths obliterated by disasters at the hands of machines could be very low, very high, or somewhere in between—we just don’t know with very much confidence.
Well, and of course “we don’t know with very much confidence” is a statement about the standard deviation, not about the mean. The standard deviation may impact a legal decision or human argument, but not the probability estimate itself.
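As a worked illustration of that mean-versus-spread distinction (the Beta parameters below are hypothetical, chosen only for the example), one can model the unknown risk as a Beta-distributed quantity: two states of knowledge can share the same mean while differing greatly in standard deviation.

```python
import math

# Illustrative sketch: model the unknown risk p as Beta(a, b).
# The mean is the point estimate; the standard deviation measures how
# uncertain we are about that estimate, not the estimate itself.
def beta_mean_std(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Two hypothetical states of knowledge with the same 10% mean but very
# different confidence (parameters are made up for illustration).
for name, (a, b) in [("confident", (10, 90)), ("ignorant", (0.5, 4.5))]:
    mean, std = beta_mean_std(a, b)
    print(f"{name}: mean = {mean:.3f}, std = {std:.3f}")
```

On this picture, a decision made on expected value uses the mean either way; the large spread mainly says how much further evidence could move the estimate.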
The issue is not really about standard deviations; it is that probability is subjective. Humans are in a very bad position to determine this probability—we have little relevant experience, we can’t usefully bet on it, and if there are differences or disagreement, it is very difficult to tell who is right. The “human probability” seems practically worthless—a reflection of our ignorance, not anything with much to do with the event. We need that probability to guide our actions—but we can hardly expect two people to agree on it.
The nearest thing I can think of which is well defined is the probability that our descendants put on the event retrospectively: a probability estimate by wiser and better-informed creatures of the chances of a world like our own making it. That estimate could—quite plausibly—be very low or very high.
Given a certain chunk of information, the evidence in it isn’t subjective. Priors may be subjective, although there is a class of cases where they’re objective too. “It is difficult to tell who is right” is an informative statement about the human decision process, but not really informative about probability.
Well, two agents with the same priors can easily come to different conclusions as a result of observing the same evidence. Different cognitive limitations can result in that happening.
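As a toy sketch of that point (the likelihoods below are made up, purely for illustration): two agents share the same prior and see the same evidence stream, but one is too limited to process all of it, so their posteriors diverge.

```python
# Toy sketch, hypothetical numbers: same prior, same evidence,
# different conclusions once one agent is computationally limited.
def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One step of Bayes' rule for a binary hypothesis H."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# Each observation: (P(obs | H), P(obs | not H)) -- made-up values.
evidence = [(0.8, 0.3), (0.6, 0.5), (0.9, 0.2), (0.4, 0.7)]

prior = 0.5
exact = bounded = prior
for i, (lh, lnh) in enumerate(evidence):
    exact = update(exact, lh, lnh)
    if i % 2 == 0:  # the bounded agent only manages to process half the data
        bounded = update(bounded, lh, lnh)

print(f"exact reasoner:   P(H) = {exact:.3f}")    # ~0.89
print(f"bounded reasoner: P(H) = {bounded:.3f}")  # ~0.92
```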
If the fear of thinking different really is stronger than the fear of death, is it possible that people just aren’t that bothered about the end of the world, whether the probability is high or low?
On the emotional level, the end of the world doesn’t bother me that much, because everyone dies with me. Furthermore, there is nobody left to mourn. Losing half the Earth’s population, on the other hand, feels a lot scarier.
I enjoyed reading this comment rather a lot, since it allowed me to find myself in the not-too-common circumstance of noticing that I disagree with Eliezer to a significant (for me) degree.
Insofar as I’m able to put a number on my estimation of existential risks from AI, I also think that they’re not under 5%. But I’m not really in the habit of getting into debates on this matter with anyone. The case that I make for myself (or others) for supporting SIAI is rather of the following kind:
If there are any noticeable existential risks, it’s extremely important to spend resources on addressing them.
When looking at the various existential risks, most are somewhat simple to understand (at least after one has expended some effort on them), and are either already receiving a somewhat satisfactory amount of attention, or are likely to receive such attention before too long. (This doesn’t necessarily mean that they would be of a small probability, but rather that what can be done already seems like it’s mostly going to get done.)
AI risks stand out as a special case that seems really difficult to understand. There’s an exceptionally high degree of uncertainty in the estimates I’m able to make of their probability; in fact, I find it very difficult to make any satisfactorily rigorous estimates at all. Such a lack of understanding is a potentially very dangerous thing. I want to support more research into this.
The key point in my attitude that I would emphasize is the interest in existential risks in general. I wouldn’t try to seriously talk about AI risks to anyone who couldn’t first be stimulated to find within themselves such a more general, serious interest. And then, if people have that general interest, they’re interested in going over the various existential risks there are, and it seems to me that sufficiently smart ones realize that the AI risks are a more difficult topic than others (at least after reading e.g. SIAI stuff; things might seem deceptively simple before one has a minimum threshold level of understanding).
So, my disagreement is that I would indeed, to a degree, avoid debates over probability. Once a general interest in existential risks is present, I would argue not about probabilities but about the difficulty of the AI topic, and about how such a lack of understanding is a very dangerous thing.
(I’m not really expressing a view on whether my approach is better or worse, though. I haven’t reflected on the matter sufficiently to form a real opinion on that, but for the time being I do continue to cling to my view rather than to what Eliezer advocated.)