I feel like “ML exhibits easy marginal intelligence improvements” is maybe not exactly hitting the nail on the head, in terms of the bubbles that feed into it. Maybe it should be something like:
“Is There One Big Breakthrough Insight that leads to HLMI and beyond?” (or a handful of insights, but not 10,000 insights).
Has that One Big Breakthrough Insight happened yet?
If you think that there’s an insight and it’s already happened, then you would think today’s ML systems exhibit easy marginal intelligence improvements (scaling hypothesis). If you think there’s an insight but it hasn’t happened yet, then you would think today’s ML systems do not exhibit easy marginal intelligence improvements, but are rather a dead end, like trying to get to the moon by climbing ever-bigger trees, and we’ll need to wait for that big breakthrough. (I’m closer to the second camp than most people around here.) But either way, you would be more likely to believe in fast takeoff.
For example see here for how I was interpreting and responding to Hanson’s citation statistics argument.
I mostly agree, but we get into the details of how we expect improvements can occur much more in the upcoming posts on paths to HLMI and takeoff speeds.
The ‘one big breakthrough’ idea is definitely a way that you could have easy marginal intelligence improvements at HLMI, but we didn’t call the node ‘one big breakthrough/few key insights needed’ because that’s not the only way it’s been characterised. E.g. some people talk about a ‘missing gear for intelligence’, where some minor change that isn’t really a breakthrough (like tweaking a hyperparameter in a model training procedure) produces massive jumps in capability. Like David said, there’s a subsequent post where we go through the different ways the jump to HLMI could play out, and One Big Breakthrough (we call it ‘few key breakthroughs for intelligence’) is just one of them.
I guess I’d just suggest that in “ML exhibits easy marginal intelligence improvements”, you should specify whether the “ML” is referring to “today’s ML algorithms” vs “Whatever ML algorithms we’re using in HLMI” vs “All ML algorithms” vs something else (or maybe you already did say which it is but I missed it).
Looking forward to the future posts :)