Hello! Sorry for missing this comment the first time around :)
> I will push back on democratic in the sense I think Linch is pushing the term being actually all that good a property for cosmically important orgs. See Bryan Caplan's *The Myth of the Rational Voter*, and the literature around [social-desirability bias](https://en.wikipedia.org/wiki/Social-desirability_bias) for reasons why, which I'm sure Linch is familiar with, but I notice is not mentioned.
I definitely think this is a reasonable criticism. I think my overall response is the fairly trite Churchill quote: "Democracy is the worst form of government, except all the others that have been tried." I think, broadly:
a) monopoly on force is good and a historical advancement in many ways
b) (liberal, democratic) governments have done an okay job with the enormous responsibility that we have handed them.
c) I don’t think corporate actors should be given this much power
d) I think I want to separate out considerations of individuals’ moral goodness from what the incentives and institutions point someone towards.
di) I do think the typical OpenAI employee's values are closer to mine, and that they're more competent than typical Americans or typical gov't bureaucrats.
dii) OTOH I think the US gov’t has many checks and balances that private companies do not have (I think Leopold made a similar point in his most recent podcast).
Relatedly:
> The fact is we usually hold our companies to much higher standards than our governments
To the extent this is true, I think it's because companies have many external checks on them (e.g. customers, competition, the government). I don't think I'd be comfortable with corporate actors' internal checks and balances (employees, their boards, etc.) being nearly as strong as gov'ts' internal checks.
e) I agree with you that democratic governments are heavily flawed. I just think it's hard (far from impossible!) to do better, and I'm very skeptical that cosmically important organizations ought to be at what I facetiously refer to as the "forefront of corporate governance innovation." While experiments in policy/governance innovation are very useful and necessary, I think we want to minimize the number of variables that could go wrong on our first few critical tries at doing something both cosmically important and very difficult. Governments in general, and the USG in particular, have been much more battle-tested re: handling important life-and-death situations, in a way that AI companies very much have not been.

---
> I note too the America-centric bias with all of these examples & comparisons. Maybe the American government is just too incompetent compared to others, and we should instead embed the project within France or Norway.
I think my own preferred option is an intergovernmental operation like CERN, governed by the UN Security Council or NATO or something. I have relatively little hope that the USG will let this happen, however. And I have even less hope—vanishingly little—that the USG will be okay with a non-US governmental project in a more "competent" country like Norway or Singapore.
But if we wave aside the impracticality concerns, I'd also be worried about whether it's strategically wise to locate an AGI project in a smaller/more "competent" government that's less battle-tested than the US. On the object level, I'd be very worried about information security, where most of the smaller/more peacetime-competent governments might just not be robust to targeted hacks and co-option attempts (social and otherwise). On the meta level, the lack of past experience with extreme outside pressure means we should be wary of counting on them to repeat their peacetime success "when shit hits the ceiling," even if we can't trace an exact causal mechanism for why.