To bypass the XKCD problem: maybe we have marketing people who know a lot about AI relative to the average person, but only very little compared to the average AI researcher?
That’s going to happen anyway; it’s unlikely the marketing team will ever know as much as the researchers. But researchers communicating the importance of alignment in terms not of x-risk but of ‘client risk’ will go a long way toward equipping marketing teams to communicate it as a priority and a competitive advantage, and common, agreed-upon foundations of model complexity are the jumping-off point for those kinds of discussions.
If alignment is Archimedes’ “lever long enough,” then agreed-upon foundations and definitions are the place to stand; with both together, you can move the world.