My claim at the start had a typo in it. I am claiming that you can’t make a human seriously superhuman with a good education. Much like you can’t get a chimp up to human level with lots of education and “self improvement”. Serious genetic modification is another story, but at that point, you’re building an AI out of protein.
It does depend where you draw the line, but for a wide range of performance levels, we went from no algorithm at that level to a fast algorithm at that level. You couldn’t get much better results just by throwing more compute at it.
I am claiming that you can’t make a human seriously superhuman with a good education.
Is the claim that δo/δr for humans goes down over time so that o eventually hits an asymptote? If so, why won’t that apply to AI?
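For concreteness, here is one toy curve with that property (the exponential decay form and the ceiling A are my illustrative assumptions, not anything either commenter specified): if the marginal return of performance o on resources r decays exponentially, integrating gives a performance curve that saturates.

```latex
% Toy diminishing-returns curve: marginal return decays exponentially
% in cumulative resources r, so performance o saturates at a ceiling A.
\[
  \frac{\delta o}{\delta r} = \frac{A}{\tau}\, e^{-r/\tau}
  \quad\Longrightarrow\quad
  o(r) = A\bigl(1 - e^{-r/\tau}\bigr) \;\longrightarrow\; A
  \quad \text{as } r \to \infty .
\]
```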
Serious genetic modification is another story, but at that point, you’re building an AI out of protein.
But it seems quite relevant that we haven’t successfully done that yet.
You couldn’t get much better results just by throwing more compute at it.
Okay, so my new story for this argument is:
1. For every task T, there are bottlenecks that limit its performance, which could be compute, data, algorithms, etc.
2. For the task of “AI research”, compute will not be the bottleneck.
3. So, once we get human-level performance on “AI research”, we can apply more compute to get exponential recursive self-improvement.
Is that your argument? If so, my question would be: point 2 says compute is not the bottleneck, so something else (data or algorithms) must be. Why does that non-compute bottleneck vanish by point 3, so that throwing more compute at the problem is all it takes? I think the only way that works is if the bottleneck was algorithms, and there was a discontinuous jump in the capability of algorithms. I agree that in that world you would see a hard/fast/discontinuous takeoff, but I don’t see why we should expect that (again, the arguments in the linked posts argue against that premise).
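To make the bottleneck logic concrete, here is a minimal sketch (my construction: the min() rule, the growth rates, and the numbers are all illustrative assumptions, not a model from the thread) in which progress per step is capped by the scarcest input, so growing compute only pays off once the algorithmic constraint is lifted:

```python
# Toy "law of the minimum" model of research progress. Each step,
# progress is capped by whichever input is scarcer; compute grows
# while the algorithmic level stays fixed.

def progress_rate(compute: float, algorithms: float) -> float:
    """Progress per step is limited by the scarcer of the two inputs."""
    return min(compute, algorithms)

def total_progress(steps: int, compute_growth: float, algo_level: float) -> float:
    """Accumulate progress while compute grows and algorithms stay fixed."""
    compute, total = 1.0, 0.0
    for _ in range(steps):
        total += progress_rate(compute, algo_level)
        compute *= compute_growth  # compute keeps getting cheaper/faster
    return total

# Algorithm-bound: 10x more compute per step barely helps.
print(total_progress(10, compute_growth=10.0, algo_level=2.0))    # 19.0
# Compute-bound: with the algorithmic ceiling lifted, the same compute
# growth compounds into ~1.1e9 units of progress.
print(total_progress(10, compute_growth=10.0, algo_level=1e12))   # ~1.1e9
```

Note that min() encodes “bottleneck” in the strictest possible sense; a smoother aggregator (e.g. a Cobb-Douglas product) would give compute some marginal return even while algorithms lag, which softens the discontinuity but not the basic point about where the binding constraint sits.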