I do definitely expect different institutional failure in the case of Soft Takeoff. But it sort of depends on what level of abstraction you’re looking at the institutional failure through. Like, the FDA won’t be involved. But there’s a decent chance that some other regulatory body will be involved, which is following the underlying FDA impulse of “Wield the one hammer we know how to wield to justify our jobs.” (In a large company, it’s possible that regulatory body could be a department inside the org, rather than a government agency)
In reasonably good outcomes, the decisions are mostly being made by tech companies full of specialists who understand the problem well. In that case the institutional failures will look more like “what ways do tech companies normally screw up due to internal politics?”
There’s a decent chance the military or someone will try to commandeer the project, in which case more typical government institutional failures will become more relevant.
One thing that seems significant is that 2 years prior to The Big Transition, you’ll have multiple companies with similar-ish tech. And some of them will be appropriately cautious (like New Zealand and Singapore were with covid), and others will not have the political wherewithal to slow down, think carefully, figure out what inconvenient things they need to do, and do them (like many other countries during covid)
Yeah, these sorts of stories seem possible, and it also seems possible that institutions try some terrible policies, notice that they’re terrible, and then fix them. Like, this description:
But there’s a decent chance that some other regulatory body will be involved, which is following the underlying FDA impulse of “Wield the one hammer we know how to wield to justify our jobs.” (In a large company, it’s possible that regulatory body could be a department inside the org, rather than a government agency)
just doesn’t seem to match my impression of non-EAs-or-rationalists working on AI governance. It’s possible that people in government are much less competent than people at think tanks, but this would be fairly surprising to me. In addition, while I can’t explain FDA decisions, I still pretty strongly penalize views that ascribe huge very-consequential-by-their-goals irrationality to small groups of humans working full time on something.
(Note I would defend the claim that institutions work well enough that in a slow takeoff world the probability of extinction is < 80%, and probably < 50%, just on the basis that if AI alignment turned out to be impossible, we can coordinate not to build powerful AI.)
Are you saying you think that wasn’t a fair characterization of the FDA, or that the hypothetical AI Governance bodies would be different from the FDA?
(The statement was certainly not very fair to the FDA, and I do expect there was more going on under the hood than that motivation. But, I do broadly think governing bodies do what they are incentivized to do, which includes justifying themselves, especially after being around a couple decades and gradually being infiltrated by careerists)
I am mostly confused, but I expect that if I learned more I would say that it wasn’t a fair characterization of the FDA.