My favored version of this project would involve >50% of the work going into the econ literature and models on investor incentives, with attention to:
- Principal-agent problems
- Information asymmetry
- Risk preferences
- Time discounting
A smaller fraction of the work would then go into looking at AI labs specifically. I’m curious whether this matches your intentions for the project, or whether you think there are important lessons about the labs that will not be found in the existing econ literature.
I expect that basic econ models and their implications for investor motivations are already mostly known in the AI safety community, even if only through vague statements like “VCs are more risk-tolerant than pension funds”.
My main point in this post is that AI labs may have successfully removed themselves from the influence of their investors, so that what those investors want or do actually matters very little. I think determining whether this is the case is important, because if it is, our general intuitions about how companies work would not apply to AI labs.