So there was a lot of competitive pressure to keep pushing to make it work. A good chunk of the Superalignment team stayed on in the hope that they could win the race and use OpenAI’s lead to align the first AGI, but many of the safety people at OpenAI quit in June. We were left with a new alignment lab, Embedded Intent, and an OpenAI newly pruned of the people most intent on slowing it down.”
“And that’s when we first started learning about all this?”
“Publicly, yes. The OpenAI defectors were initially cagey about their reasons for leaving, citing only deep disagreements over company direction. But then some memos were leaked, SF scientists began talking, and all the attention of AI Twitter turned to speculating about what had happened. Before long they had pieced together pretty much the full story, but soon that didn’t matter. What did matter was that the AI world became convinced there was a powerful new technology inside OpenAI.”
Yarden hesitated. “You’re saying that speculation, that summer hype, it led to the cyberattack in July?”
“Well, we can’t say for certain,” I began. “But my hunch is yes. Governments had already been thinking seriously about AI for the better part of a year, and their national plans were crystallizing, for better or worse. But AI lab security was nowhere near ready for that kind of heat.
Wow. So many of the sociological predictions came true even though afaict the technical predictions are (thankfully) lagging behind.