Could you explain point A?
A FOOM scenario in AI means that once a system reaches a certain level of intelligence, it crosses a criticality threshold where one improvement on average generates one or more further improvements, shortening the time it takes to become superintelligent.
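To make the criticality threshold concrete, here is a minimal toy sketch (my own illustration, not from the thread): treat self-improvement as a branching process where each improvement spawns, on average, r further improvements. The constant ratio r is an assumption of the sketch.

```python
# Toy model of "criticality": each improvement yields, on average, r further
# improvements. If r >= 1 the process is supercritical and the cumulative
# number of improvements keeps compounding (the FOOM regime); if r < 1 it
# converges to a finite total. A constant r is an assumption of this sketch.

def total_improvements(r: float, generations: int) -> float:
    """Cumulative improvements after `generations` rounds, starting from one seed improvement."""
    total, current = 0.0, 1.0
    for _ in range(generations):
        total += current
        current *= r  # each improvement spawns r new ones on average
    return total

if __name__ == "__main__":
    for r in (0.5, 1.0, 1.5):
        print(f"r={r}: after 20 generations, ~{total_improvements(r, 20):.1f} improvements")
```

With r < 1 the total converges (a geometric series); r >= 1 is the criticality regime where improvements never stop compounding.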
Sorry, I should have been clearer. I know about FOOM; I was curious why you believe EY was wrong about FOOM and why you suggest the update on x-risk.
Assuming this trend continues, there is no criticality threshold and therefore no discontinuity, and the most severe AI alignment problems live in the FOOM scenario, where we only get one chance to do it right. The trend line shows no discontinuity, just continuous improvement, so the criticality condition that FOOM requires never arrives.