Basically every company eventually becomes a moral maze.
Agreed, but Silicon Valley wisdom says founder-led and -controlled companies are exceptionally dynamic, which matters here because the company that deploys AGI is reasonably likely to be one of those. For such companies, the personality and ideological commitments of the founder(s) are likely more predictive of external behavior than properties of moral mazes.
Facebook’s pivot to the “metaverse”, for instance, likely could not have been executed by a moral maze. If we believed that Facebook / Meta was overwhelmingly likely to deploy one of the first AGIs, I expect Mark Zuckerberg’s beliefs about AGI safety would be more important to understand than the general dynamics of moral mazes. (Facebook example deliberately chosen to avoid taking stances on the more likely AGI players, but I think it’s relatively clear which ones are moral mazes).
So my strategic corollary to this is that it’s probably weakly better for AI alignment for founders to be in charge of companies longer, and to get replaced less often.
In the case of Facebook, even given its whole history of actions, I think on the margin I’d prefer the founder over the median replacement to be leading the company.
(Edit: I don’t think founders remaining at the head of a company is evidence that the company isn’t a moral maze. Also, I’m not certain I agree that Facebook’s pivot couldn’t have been done by a moral maze.)
Agreed on all points! One clarification is that large founder-led companies, including Facebook, are all moral mazes internally (i.e. from the perspective of the typical employee); but their founders often have so much legitimacy that their external actions are only weakly influenced by moral maze dynamics.
I guess that means that if AGI deployment is very incremental—a sequence of small changes to many different AI systems, that only in retrospect add up to AGI—moral maze dynamics will still be paramount, even in founder-led companies.
I think that’s right, but the moral maze will also be mediating the information and decision-making support available to the leadership, so they’re not totally immune from its influences.
Agree that founders are a bit of an exception. Actually, that’s a point in the longer version of this that I make when I talk about it in person.
Basically: “The only people at the very top of large tech companies are either founders or those who were able to climb to the tops of moral mazes.”