Me: Wonders how much disagreement we have.
Me: Plots it.
I’d be down to chat with you :) although, having plotted it, I wonder whether I actually have that much to say.
I think the main differences are that (a) you’re assigning much more probability to the next 10 years, and (b) you’re assigning way less probability to worlds where it’s just harder and takes more effort, but we’re still on the right path overall.
My strawman is that my forecast is yours plus the planning fallacy. I feel like the crux between us is something like “even though we’re on the right track, I put a bunch of probability on it having a lot of details and requiring human coordination of big projects, which we’re not great at right now”, but that sounds very vague and uncompelling.
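As a concrete sketch of the kind of comparison I mean by “plotting it” (the distributions and numbers below are made-up placeholders, not either of our actual forecasts), something like:

```python
# Hypothetical illustration only: compare two made-up AGI-timeline distributions
# and the probability mass each assigns to the next 10 years.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder stand-ins: "yours" has an earlier median; "mine" is a similar shape
# shifted later, i.e. roughly "yours plus planning fallacy".
yours = stats.lognorm(s=0.5, scale=12)  # median ~12 years from now (made up)
mine = stats.lognorm(s=0.5, scale=22)   # median ~22 years from now (made up)

years = np.arange(2021, 2101)
horizon = years - 2020
plt.plot(years, yours.cdf(horizon), label="your forecast (placeholder)")
plt.plot(years, mine.cdf(horizon), label="my forecast (placeholder)")
plt.axvline(2030, linestyle="--", color="grey")
plt.xlabel("year")
plt.ylabel("P(AGI by year)")
plt.legend()
plt.show()

print("P(AGI within 10 years), yours:", round(yours.cdf(10), 2))
print("P(AGI within 10 years), mine:", round(mine.cdf(10), 2))
```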
Added: I think you’re mostly focused on worlds where the scaling hypothesis is true. Given that, how do you feel about this story:
Over the next five years we scale up AI compute and use up all the existing overhang. This doesn’t get us all the way there, but it provides strong evidence that we’re nearly there: it makes us confident that scaling up another three orders of magnitude would get us AGI. We then figure out how to get there faster using lots of fine-grained optimisation, so we raise funds and organise a Manhattan Project with thousands of scientists, which takes 5 years to set up and 10 years to execute. Oh, and there’s also a war over control of the compute, and so on, which makes things difficult.
In a world like this, I feel like the bottleneck is our ability to organise large-scale projects, over which my uncertainty is smooth rather than spiky. (Like, how long did the Large Hadron Collider take to build?) Does that sound plausible to you, that it’s bottlenecked by large-scale human projects?
Oh, whoa, OK, I guess I was looking at your first forecast, not your second; your second is substantially different. Yep, let’s talk then. Want to video chat sometime?
I tried to account for the planning fallacy in my forecast, but yeah I admit I probably didn’t account for it enough. Idk.
My response to your story is that yeah, that’s a possible scenario, but it’s a “knife-edge” result. It might take <5 OOMs more compute, in which case it’ll happen with the existing overhang. Or it might take >7 OOMs more, in which case it won’t happen until new insights/paradigms are invented. If it takes 5-7 OOMs more, then yeah, we’ll first burn through the overhang and then need to launch some huge project in order to reach AGI. But that’s less likely than the other two scenarios.
(I mean, it’s not literally a knife’s edge. It’s probably about as likely as the we-get-AGI-real-soon scenario. But then again I have plenty of probability mass around 2030, and I think 10 years from now is plenty of time for more Manhattan projects.)
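To make the shape of that argument concrete (the distribution below is a made-up placeholder, not my actual numbers), the point is just how the probability splits across the <5 / 5-7 / >7 OOM partition for a given distribution over how much more compute is required:

```python
# Hypothetical illustration of the three regimes above, using a made-up distribution
# over how many more OOMs of compute AGI requires (not anyone's actual forecast).
from scipy import stats

ooms_needed = stats.norm(loc=6, scale=4)  # placeholder: mean 6 OOMs, sd 4 OOMs

p_overhang_enough = ooms_needed.cdf(5)                   # <5 OOMs: existing overhang suffices
p_megaproject = ooms_needed.cdf(7) - ooms_needed.cdf(5)  # 5-7 OOMs: burn overhang, then a huge project
p_new_paradigm = 1 - ooms_needed.cdf(7)                  # >7 OOMs: need new insights/paradigms

print(f"P(<5 OOMs)  = {p_overhang_enough:.2f}")   # ~0.40
print(f"P(5-7 OOMs) = {p_megaproject:.2f}")       # ~0.20
print(f"P(>7 OOMs)  = {p_new_paradigm:.2f}")      # ~0.40
```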
Let’s do it. I’m super duper busy, so please ping me if I haven’t replied here within a week.
Ping?
Sounds good. Also, check out the new image I added to my answer! It summarizes the model that carries the most weight in my mind.