I’m calling it “zero-shot” in analogy with zero-shot learning, which refers to the ability to perform a task after zero demonstrations.
I see. Given this, I think “zero-shot learning” makes sense but “zero-shot reasoning” still doesn’t. In the former, “zero” refers to “zero demonstrations”: you’re learning something without a learning process targeted at that specific thing. In the latter, “zero” isn’t referring to anything; you’re trying to get the reasoning correct in one attempt, so “one-shot” would be a more sensible description.
I used “substantial progress” to mean “real and useful progress”, rather than “a substantial fraction of the necessary progress”. Most of my examples happened in the early to mid-1900s, suggesting that if we continue at that rate we might need at least another century.
Ok, I don’t think we have a substantive disagreement here then. My complaint was that providing only positive examples of progress in that paragraph without tempering them with negative ones is liable to give an overly optimistic impression to people who aren’t familiar with the field.
I’d feel much better about delegating the problem to a post-AGI society, because I’d expect such a society to be far more stable if takeoff is slow, and far more capable of taking its merry time to solve the full problem in earnest. (I think it would be more stable because it would be much harder for a single actor to attain a decisive strategic advantage over the rest of the world.)
Are you saying that in the slow-takeoff world, we will be able to coordinate to stop AI progress after reaching AGI and then solve the full alignment problem at leisure? If so, what’s your conditional probability P(successful coordination to stop AI progress | slow takeoff)?
I see. Given this, I think “zero-shot learning” makes sense but “zero-shot reasoning” still doesn’t. In the former, “zero” refers to “zero demonstrations”: you’re learning something without a learning process targeted at that specific thing. In the latter, “zero” isn’t referring to anything; you’re trying to get the reasoning correct in one attempt, so “one-shot” would be a more sensible description.
I was imagining something like “zero failed attempts”, where each failed attempt approximately corresponds to a demonstration.
Are you saying that in the slow-takeoff world, we will be able to coordinate to stop AI progress after reaching AGI and then solve the full alignment problem at leisure? If so, what’s your conditional probability P(successful coordination to stop AI progress | slow takeoff)?
More like: conditioning on getting international coordination after our first AGI, P(safe intelligence explosion | slow takeoff, coordination) is a lot higher, like 80%. I don’t think slow takeoff does very much to help international coordination.
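To spell out the decomposition I have in mind (the factorization itself is just bookkeeping; the 80% above is the only number I’m committing to):

\[
P(\text{safe outcome} \mid \text{slow takeoff}) \;\ge\; P(\text{coordination} \mid \text{slow takeoff}) \cdot P(\text{safe intelligence explosion} \mid \text{coordination},\ \text{slow takeoff})
\]

where I’d put the second factor at roughly 0.8, and my claim is that slow takeoff mostly raises that second factor rather than the first.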