Whether or not this is the safest path, the fact that OpenAI, one of the leading AI labs, believes it is makes it a path we’re likely to take. Humanity successfully navigating the transition to extremely powerful AI might therefore require successfully navigating a scenario with short timelines and slow, continuous takeoff.
You can’t just choose “slow takeoff”. Takeoff speeds are mostly a function of the technology, not company choices. If we could just choose to have a slow takeoff, everything would be much easier! Unfortunately, OpenAI can’t just make their preferred timelines & “takeoff” happen. (Though I agree they have some influence, mostly in that they can somewhat accelerate timelines).
You need to think about your real options and the expected value of each. If we’re in a world where the technology allows for fast takeoff and alignment is hard (EY World), I imagine the odds of survival with company acceleration are 0% and the odds without are 1%.
But if we live in a world where compute, capital, and other overhangs are a significant influence on AI capabilities and alignment is merely tricky, company acceleration could improve the chances of survival pretty significantly, maybe from 5% to 50%.
These obviously aren’t the only two possible worlds, but if they were and both seemed equally likely, I would strongly prefer a policy of company acceleration, because the expected value works out much better across the probabilities.
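To spell out the arithmetic: here is a minimal sketch of the expected-value comparison, using the illustrative (not calibrated) survival odds stated above and assuming the two worlds are equally likely.

```python
# Expected survival odds under two equally likely worlds.
# The per-world survival probabilities are the illustrative
# figures from the comment above, not calibrated estimates.
p_world = 0.5  # each world assumed equally likely

# Survival odds under each policy, in each world:
accelerate = {"ey_world": 0.00, "overhang_world": 0.50}
dont_accelerate = {"ey_world": 0.01, "overhang_world": 0.05}

ev_accelerate = sum(p_world * p for p in accelerate.values())
ev_dont_accelerate = sum(p_world * p for p in dont_accelerate.values())

print(round(ev_accelerate, 3))       # 0.25
print(round(ev_dont_accelerate, 3))  # 0.03
```

Under these (made-up) numbers, acceleration gives a 25% expected survival chance versus 3% without, which is the asymmetry driving the argument.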
I guess ‘company acceleration’ doesn’t convey as much information or sell as well, which is why people don’t use that phrase, but that’s the policy they’re advocating for, not ‘hoping really hard that we’re in a slow takeoff world.’
Yeah, good point. I guess the truer thing here is ‘whether or not this is the safest path, important actors seem likely to act as though it is’. Those actors probably have more direct control over timelines than takeoff speed, so I do think that this fact is informative about what sort of world we’re likely to live in—but agree that no one can just choose slow takeoff straightforwardly.
whether or not this is the safest path, important actors seem likely to act as though it is
It’s not clear to me that this is true, and it strikes me as maybe overly cynical. I get the sense that people at OpenAI and other labs are receptive to evidence and argument, and I expect us to get a bunch more evidence about takeoff speeds before it’s too late. I expect people’s takes on AGI safety plans to evolve a lot, including at OpenAI. Though TBC I’m pretty uncertain about all of this; it’s definitely possible that you’re right here.