Some conclusions from this, assuming it holds for AGI and ASI, are:
A: Eliezer was wrong about Foom being plausible, and that is probably the single biggest reduction in x-risk relative to MIRI's view. On AI x-risk, I'd update from a 90-99% chance down to a maximum of 30%, and often the estimate would be 1-10%, due to uncertainties beyond the Singularity/Foom hypothesis.
B: We will get AGI; it's really just a matter of time. This means we still need to do AI safety.
This is an ambiguous result for everyone in AI safety. On the one hand, we can probably get by with partial failures, since there is no need to one-shot everything, so it probably won't mean the end of human civilization. On the other hand, it does mean that we still have to do AI safety.
I would not update much on Foom from this. The paper’s results are only relevant to one branch of AI development (I would call it “enormous self-supervised DL”). There may be other branches where Foom is the default mode (e.g. some practical AIXI implementation) which are under the radar for now.
But I agree, we can now be certain that AGI is indeed a matter of time. I also agree that it gives us a chance to experiment with a non-scary AGI first (e.g. some transformer descendant that beats humans on almost everything, but remains a one-way text-processing mincer).
Moreover, BIG-bench shortens the path to AGI, as one can now measure progress towards it, and maybe even apply RL to directly maximize the score.
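For concreteness, here is what "apply RL to directly maximize the score" could look like in the simplest possible form: a toy REINFORCE loop whose only training signal is an aggregate benchmark-style score. Everything below is a made-up stand-in for illustration (the five-task "benchmark", the per-task softmax "policy", and the `benchmark_score` function); it is not the real BIG-bench API or a realistic training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_TASKS = 5       # toy stand-in "benchmark" with 5 tasks (hypothetical)
NUM_CANDIDATES = 4  # 4 candidate answers per task
ANSWER_KEY = rng.integers(0, NUM_CANDIDATES, size=NUM_TASKS)  # hidden answer key

def benchmark_score(answers):
    """Hypothetical stand-in for an aggregate benchmark score in [0, 1]."""
    return float(np.mean(answers == ANSWER_KEY))

# "Policy": one softmax over candidate answers per task.
logits = np.zeros((NUM_TASKS, NUM_CANDIDATES))

def sample_answers(logits):
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    answers = np.array([rng.choice(NUM_CANDIDATES, p=p) for p in probs])
    return answers, probs

baseline, lr = 0.0, 0.2
for step in range(3000):
    answers, probs = sample_answers(logits)
    reward = benchmark_score(answers)          # the only training signal
    advantage = reward - baseline              # simple moving baseline
    baseline = 0.9 * baseline + 0.1 * reward
    # REINFORCE: gradient of log-prob of the sampled answer is (one-hot - probs).
    grad = -probs
    grad[np.arange(NUM_TASKS), answers] += 1.0
    logits += lr * advantage * grad

greedy = logits.argmax(axis=1)
print("greedy score after training:", benchmark_score(greedy))
```

Because the reward here is a single global score, credit assignment across tasks is noisy; this is only meant to show the shape of "score as reward", not a workable recipe.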
Could you explain point A?
Basically, a FOOM scenario in AI means that once an AI reaches a certain level of intelligence, it crosses a criticality threshold where one improvement generates, on average, one or more further improvements, shortening the time it takes to become superintelligent.
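To make that threshold concrete (my own illustration, not anything from the paper): if each improvement yields on average r further improvements, the expected size of the cascade started by one seed improvement is the geometric sum 1 + r + r^2 + ...

```python
# Toy illustration of the criticality idea: each improvement yields on
# average r further improvements, so the expected cascade from one seed
# improvement is the geometric sum 1 + r + r^2 + ...
def expected_cascade(r, generations=200):
    """Expected total number of improvements after `generations` rounds."""
    return sum(r ** k for k in range(generations))

for r in (0.5, 0.9, 0.99, 1.0, 1.1):
    print(f"r = {r}: expected total improvements ~ {expected_cascade(r):,.1f}")
```

For r < 1 the total stays below 1/(1 - r) no matter how many generations you run; at r >= 1 it keeps growing, and that qualitative switch at r = 1 is the criticality threshold.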
Sorry, I should have been more clear. I know about FOOM; I was curious as to why you believe EY was wrong on FOOM and why you suggest the update on x-risk.
Basically, assuming this trend continues, there is no criticality threshold to produce a discontinuity, and the most severe issues in AI alignment arise in the FOOM scenario, where we only get one chance to do it right. This trend line shows no discontinuity, just continuous improvement, so the criticality condition FOOM needs doesn't hold.