To me the claim that human-level AI → superhuman AI in at most a matter of years seems quite likely. It might not happen, but I think the arguments about FOOMing are pretty straightforward, even if not airtight. The specific timeline depends on how far along Moore’s law we are (so if I thought that AI was a large source of existential risk, I would be trying to develop AGI as quickly as possible, so that the first AGI would be slow enough to stop if something bad happened; i.e. waiting longer → faster computers → FOOM happens on a shorter timescale).
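A back-of-the-envelope sketch of that last step, with entirely made-up numbers for the doubling time and the baseline takeoff duration (neither figure is from the comment above), just to show the direction of the effect:

```python
# Hypothetical numbers only: if hardware performance doubles roughly every
# 18 months, an AGI built N years later starts on correspondingly faster
# hardware, so a hardware-bound takeoff would run correspondingly faster.
doubling_time_years = 1.5
baseline_takeoff_years = 4.0   # made-up takeoff duration if built today

for delay_years in (0, 3, 6, 9):
    speedup = 2 ** (delay_years / doubling_time_years)
    print(f"built {delay_years}y later: ~{speedup:4.0f}x hardware, "
          f"takeoff ~{baseline_takeoff_years / speedup:.2f}y")
```

The only point of the sketch is that a later start inherits faster hardware; the particular durations mean nothing.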
The argument I am far more skeptical of is the likelihood of a UFAI arising without any warning. While I place some non-negligible probability on a UFAI occurring, it seems that right now we know so little about AI that it is hard to judge whether an AI would actually pose a significant danger of being unfriendly. By the time we are in any position to build an AGI, it should be much more obvious whether or not that is a problem.
Depending on what you meant this might not be relevant, but: many arguments about AGI and FOOM are antipredictions. “Argument length” as jsteinhardt used it assumes that the argument is a conjunctive one. If an argument is disjunctive, then its length instead implies an increased likelihood of correctness. Eliezer’s “Hard Takeoff” article on OB was pretty long, but the words were used to make an antiprediction.
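To make the length point concrete, here is a toy calculation with a hypothetical per-premise probability: adding premises weakens a conjunctive argument but strengthens a disjunctive one.

```python
# Toy illustration with hypothetical numbers: each premise independently
# holds with probability p. A conjunctive argument needs all premises to be
# true; a disjunctive argument needs at least one.
p = 0.9
for n in (1, 3, 10):
    conjunctive = p ** n
    disjunctive = 1 - (1 - p) ** n
    print(f"{n:2d} premises: conjunctive={conjunctive:.3f}, "
          f"disjunctive={disjunctive:.4f}")
```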
It is not clear to me that there are well-defined boundaries between what you call a conjunctive and a disjunctive argument. I am also not sure how two opposing predictions are not both antipredictions.
I see that some predictions are more disjunctive than others, i.e. only some of their premises need to be true. But most of the time this seems to be a result of vagueness. It doesn’t necessarily speak in favor of a prediction that it is strongly disjunctive: if you were to pin it down, it would turn out to be conjunctive, requiring all of its details to be true.
All predictions are conjunctive:
If you predict that Mary is going to buy one of a thousand products in the supermarket 1.) if she is hungry, 2.) if she is thirsty, or 3.) if she needs a new coffee machine, then you are seemingly making a disjunctive prediction. But someone else might be less vague and make a conjunctive antiprediction: Mary is not going to buy one of a thousand products in the supermarket, because for her to do so 1.) she needs money, 2.) she has to have some need, and 3.) the supermarket has to be open. Sure, if the latter prediction had been made first, then the former would become the antiprediction, which happens to be disjunctive. But being disjunctive does not speak in favor of a prediction in and of itself.
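Here is the same example with purely hypothetical numbers, as a sketch of why the disjunctive framing is not automatically stronger: once the vague prediction is spelled out, the disjunction of motives sits inside a conjunction of preconditions, and the antiprediction is simply its complement.

```python
# Hypothetical numbers for the Mary example. The disjunctive framing
# (she buys if at least one motive applies) silently rests on a conjunction
# of preconditions (money, some need, open supermarket); the antiprediction
# is just the complement of the fully spelled-out prediction.
p_hungry, p_thirsty, p_needs_machine = 0.3, 0.2, 0.05
p_some_motive = 1 - (1 - p_hungry) * (1 - p_thirsty) * (1 - p_needs_machine)

p_has_money, p_store_open = 0.9, 0.95          # remaining preconditions
p_buys = p_some_motive * p_has_money * p_store_open

print(f"P(at least one motive) = {p_some_motive:.3f}")
print(f"P(buys something)      = {p_buys:.3f}")
print(f"P(antiprediction)      = {1 - p_buys:.3f}")
```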
All predictions are antipredictions:
Now you might argue that the first prediction could not be an antiprediction, as it does predict that something will happen. But opposing predictions always predict the negation of each other: if you predict that Mary is going shopping, then you predict that she is not not going shopping.
Does that apply to AI going FOOM?
Might you clarify your question?