Thank you for your elaboration, I appreciate it a lot, and upvoted for the effort. Here are your clearest points paraphrased as I understand them (sometimes just using your words), and my replies:
1. The FDA is net negative for health; therefore, creating an FDA-for-AI would likely be net negative for the AI challenges.
I don’t think you can reach this conclusion even if I agree with the premise, because the counterfactuals are very different. With drugs, the counterfactual of no FDA might be: some people get more treatments, some die but many don’t, they were sick anyway and needed to try something, and maybe fewer die than do with the FDA around, so maybe the existence of the FDA is net bad compared to the counterfactual. I won’t dispute this; I don’t know enough about it. The counterfactual in AI is different, though. If unregulated, AI progress steams ahead, competition over the high rewards is intense, and if we don’t have a good safety plan (which we don’t) then maybe we all die at some point, who knows when. If instead an FDA-for-AI creates bad regulation (as long as it’s not bad enough to cause an AI regulation winter), it at least starts slowing that progress down. Maybe that’s bad for, idk, the diseases that could have been cured during the ten years between when AI would otherwise have solved cancer and when it actually does, that kind of thing, but it’s nowhere near as bad as the counterfactual! These scenarios are not comparable, because the counterfactual of no FDA is not as bad as the counterfactual of no AI regulator.
2. Enough errors would almost certainly occur in AI regulation to make it net negative.
You gave a bunch of examples of bad regulation from outside AI (I’m not going to bother working out whether I agree they’re bad regulation, as it’s not cruxy), but you didn’t explain how exactly those errors would make AI regulation net negative. As with the previous claim, I think the counterfactuals mean this doesn’t hold.
3. ...a field where there is bound to be vastly more misunderstanding should be at least as prone to regulation backfiring
That is an interesting claim, but I’m not sure what makes you think it’s obviously true, as it depends on what your goal is. My understanding of the OP is that the goal of the kind of regulation they advocate is simply to slow down AI development, nothing more, nothing less. If the goal is to do good regulation of AI, that’s a totally different matter. Is there a specific way in which you imagine it backfiring for the goal of simply slowing down AI progress?
4. ...an [oppressive] regime gaining controllable AI would produce an astronomical suffering risk.
I am unsure what point you were making in the paragraph about evil. Was it that another regime might get there first and might not do safety? For a response, see Objection 4 in the OP, which I share and to which I added a further reason why this isn’t a real worry in this world.
5. ...unwise to think that people who take blatant actions to kill innocents for political convenience would be safe custodians of AI...
I don’t think it’s fair to say regulators would be custodians. They have one particular lever, “slow things down”, and that lever does not mean they can, for example, seize the AI and start operating it. That is not in their legal power, nor do they have the capability to do anything with it. We are talking here about slowing things down before AGI, not after it.
6. the electorate does not understand AI
My answer is the same as my answer to 3., and also similar to Objection 1 in the OP.
And finally, to reply to this: “hopefully this should clarify to a degree why I anticipate both severe X risks and S risks from most attempts at AI regulation”
Basically, no, it doesn’t really clarify it. You started from a premise I agreed with, or at least don’t know enough to refute (that the FDA may be net negative), then drew a conclusion I disagree with (see 1. above), and all your other points assumed that conclusion, so I couldn’t really follow them. I tried to pick out what seemed like the key points and reply, but yes, I think you’re pretty confused.
What do you think of my reply to 1., about the counterfactuals being different? I think that’s the best way to move the conversation forward.