Yeah, it’s sort of awkward that there are two distinct things one might want to talk about with FOOM: the idea of recursive self-improvement in the typical I.J. Good sense, and the idea that “the human threshold isn’t special and can be blown past quickly.” AlphaZero being able to reach superhuman level at Go after 3 days of training, and doing so only a year or two after any professional Go player was first defeated by a computer, feels relevant to the second thing but not the first (and is connected to the ‘fleets of cars will learn very differently’ thing Peterson is pointing at).
[And the two really are distinct: RSI is an argument for ‘blowing past humans is possible,’ but many ‘slow takeoff’ views look more like “RSI pulls humans along with it” than “things look slow to a Martian,” and there are ways to quickly blow past humans that don’t involve RSI.]