There’s a subtlety here around the term risk.
Google has been, IMO, very unwilling to take product risk, or to risk a PR backlash of the kind that BlenderBot or Sydney got. Google has also been very nervous about perceived and actual bias in deployed models.
When people talk about red tape, it's not the kind of red tape that might be useful for AGI alignment; it's the kind aimed at minimizing product risk. And when Google says it is willing to take on more risk, it means product and reputational risk.
Maybe the same processes that would help with product risk would also help with AGI alignment risk, but frankly I’m skeptical. I think the problems are different enough that they need a different kind of thinking.
I think Google is potentially better on the big risks than others, since it has some practice at understanding non-obvious secondary effects, e.g. in search or YouTube ranking.
Note that I’m at Google, but opinions here are mine, not Google’s.
Hi Dave, thanks for the great input from the insider perspective.
Do you have any thoughts on whether risk aversion (around either product risk or misalignment risk) might be contributing to a migration of talent toward lower-governance zones?
If so, are there any effective ways to combat this that don’t translate to accepting higher levels of risk?
Product risk aversion definitely leads people to move to where they can have some impact, at least those who don't want pure research roles. I think this is basically fine; I don't think product risk is all that concerning, at least for now.
Misalignment risk would be a different story, but I'm not aware of cases where people have moved because of it. (I might not have heard, of course.)