whether or not this is the safest path, important actors seem likely to act as though it is
It’s not clear to me that this is true, and it strikes me as maybe overly cynical. I get the sense that people at OpenAI and other labs are receptive to evidence and argument, and I expect us to get a bunch more evidence about takeoff speeds before it’s too late. I expect people’s takes on AGI safety plans to evolve a lot, including at OpenAI. Though TBC I’m pretty uncertain about all of this; it's definitely possible that you’re right here.