My understanding is that it means "the AI reaches a point where software improvements allow it to outpace us and trick us into doing anything it wants, and to understand nanotechnology well enough that it soon has effectively unlimited material power."
Instead of 1e-4, I'd probably put that somewhere between 1e-6 and 1e-9, though I have little experience accurately estimating very low probabilities.
(The sticking point of my interpretation is something that seems glossed over in what I've read about it: the AI only has complete access to software improvements. If it's running on silicon chips, all it can do is tell us about better chip designs (unless it has hacked a factory and can somehow assemble itself). Even if it's as intelligent as EY imagines it could be, I don't see how it could derive GR from a webcam-quality picture; massive intelligence is no substitute for scant evidence. These problems can be worked around (if it has access to the internet, it has a lot of evidence and a lot of power), but they suggest that in some limited cases FOOM is very improbable.)
I am pretty sure that the "FOOM" term is an attempt to say something about the timescale of the growth of machine intelligence, so I am sceptical of definitions that involve the concept of trickery. Surely rapid growth need not involve trickery; my FOOM sources don't seem to mention it. Do you have any references on that point?
The bit about "trickery" was probably just a reference to the weaknesses of AI boxing. You are correct that it's not essential to the idea of hard takeoff.