Re: the Bay Area vs. other places. At this point, there’s a fair amount of (messy) empirical evidence about how much being in the Bay Area impacts performance relative to being in other places. You could match organizations by area of research and compare the Bay against London/Oxford/Cambridge: e.g. OpenAI and Anthropic vs. DeepMind, OpenPhil (long-termist research) vs. FHI-GPI-CSER, CHAI vs. Oxford and DeepMind. While people are not randomly assigned to these organizations, there is enough overlap of personnel that the observational evidence is likely to be meaningful. This kind of comparison seems preferable to general arguments like “the Bay Area is expensive and has bad epistemics.”
(In terms of general arguments, I’d also mention that the Bay Area has the best track record in the world by a huge margin for producing technology companies and is among the top 5 regions in the world for cutting-edge scientific research.)
ETA: I tried to clarify my thoughts in the reply to Larks.
Is your argument about personnel overlap that one could do some sort of mixed-effects regression, with location as the primary independent variable and controls for individual productivity? If so, I’m somewhat skeptical about the tractability: the sample size is not that big, the data seems messy, and I’m not sure it would necessarily capture the fundamental thing we care about. I’d be interested in the results if you wanted to give it a go, though!
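For concreteness, here is a minimal sketch of what such a regression might look like, assuming a hand-assembled panel with one row per researcher-org-year (the file name and column names below are hypothetical, not real data): location enters as the fixed effect, and a per-researcher random intercept stands in for individual productivity, identified only because some researchers appear at both Bay and non-Bay organizations.

```python
# Minimal sketch, not a worked analysis: assumes a hypothetical CSV
# "researcher_org_years.csv" with columns researcher, org, bay_area (0/1),
# and output_score (some measure of research output).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("researcher_org_years.csv")

# Fixed effect: whether the organization is in the Bay Area.
# Random intercept per researcher to absorb individual productivity;
# the location effect is identified by researchers who appear at
# both Bay and non-Bay organizations.
model = smf.mixedlm("output_score ~ bay_area", data=df, groups=df["researcher"])
result = model.fit()
print(result.summary())
```

With only a handful of organizations and heavy selection into them, the confidence intervals would presumably be wide, which is part of the tractability worry above.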
More importantly, I’m not sure this analysis would be that useful. Geography-based priors only really seem useful for factors we can’t directly observe; for an organization like CHAI our direct observations will almost entirely screen off this prior. The prior only really matters for factors where direct measurement is difficult, and hence where we can’t update away from the prior, but for those we can’t do the regression either. (Though I guess we could do the regression on known firms/researchers and extrapolate to new, unknown orgs/individuals.)
The way this plays out here is that we’ve already spent the vast majority of the article examining the research productivity of the organizations; geography-based priors only matter insofar as you think they proxy for something else that is not captured in this.
As befits this being a somewhat secondary factor, it’s worth noting that I think (though I haven’t explicitly checked) that in the past I have supported Bay Area organisations more than non-Bay-Area ones.
I agree with most of this—and my original comment should have been clearer. I’m wondering whether the past five years of direct observations lead you to update the geography-based prior (which has been included in your alignment reviews since 2018). How much do you expect the quality of alignment work to differ for a new organization based in the Bay vs. somewhere else? (No need to answer: I realize this is probably a small consideration and I don’t want to start an unproductive thread on this topic.)