If Gemini is on track to overtake GPT-4, then we should want to understand Google’s alignment strategy, insofar as it has one.
My criterion for whether an advanced AI company or organization knows what it’s doing is that it has a plan for “civilizational alignment” of a “superintelligence”. By civilizational alignment, I mean imparting to the AI a set of goals or values, with sufficient breadth and detail, that they are enough to govern a civilization… At the very least, the people involved need to understand that the stakes are nothing less than this.
We know that OpenAI has prioritized “superalignment”—so they appreciate that there are alignment challenges specific to superintelligence—and their head of alignment knows about CEV, which is a proposal on the level of civilizational alignment. So they more or less satisfy my criterion.
I have no corresponding understanding of Google’s alignment philosophy or chain of responsibility. Because Google has invested heavily in Anthropic, I thought of Anthropic as Google’s alignment think-tank. But Anthropic is developing Claude. Someone else is developing Gemini.
DeepMind has a concept of “scalable alignment”, which might be their version of “superalignment”, but DeepMind has since been merged with Google Brain. I don’t know who’s in charge of Gemini, who is in charge of AI safety for Gemini, or how they think about things. Google evidently does have a policy of infusing its AI products with certain values (tentatively, I’ll identify its corporate value system as democratic progressivism), but do they dare to think that a Google AI might actually end up in charge of life on Earth, and prepare accordingly?
I also wonder how the thinking at the Frontier Model Forum (where Microsoft, Google, OpenAI, and Anthropic all liaise) rates according to my criterion.
Google doesn’t have an alignment philosophy, as far as I can tell. Google DeepMind’s safety team has good thinking; see Some high-level thoughts on the DeepMind alignment team’s strategy and the rest of that sequence. I don’t know how much influence they have over Gemini or Google’s AI in general.