For reference classes, you might discuss why you don’t think “power / influence of different biological species” should count.
For multiple copies of the same AI, I guess my very brief discussion of “zombie dynamic” here could be a foil that you might respond to, if you want.
For things like “the potential harms will be noticeable before getting too extreme, and we can take measures to pull back”, you might discuss the possibility that the harms are noticeable but effective “measures to pull back” do not exist or are not taken. E.g. the harms of climate change have been noticeable for a long time, but mitigation is hard and expensive, and many people (including the previous POTUS) are outright opposed to mitigating it anyway, partly because it got culture-war-y; the harms of COVID-19 were noticeable in January 2020, but the USA effectively banned testing and the whole thing turned culture-war-y; the harms of nuclear war and launch-on-warning are obvious, but they’re still around; the ransomware and deepfake-porn problems are obvious but kinda unsolvable (partly because of unbannable open-source software); gain-of-function research is still legal in the USA (and maybe in every country on Earth?) despite a decades-long track record of lab leaks, despite COVID-19, and despite there being no powerful interest groups in its favor and no culture-war dynamics in the way; etc. Anyway, my modal assumption has been that the development of (what I consider) “real” dangerous AGI will “gradually” unfold over a few years, and those few years will mostly be squandered.
For “we aren’t really a threat to its power”, I’m sure you’ve heard the classic response that humans are an indirect threat as long as they’re able to spin up new AGIs with different goals.
For “war is wasteful”, it’s relevant how big that waste is compared to the prize for winning the war. For an AI that could autonomously (in coordination with its copies) build Dyson spheres etc., the costs of fighting a war on Earth may look like a rounding error compared to what’s at stake. Even if the war sets the AI back 50 years because it has to rebuild whatever got destroyed, that might still seem like no problem.
For “a system of compromise, trade, and law”, I hope you’ll also discuss who has hard power in that system. Historically, it’s very common for the parties with hard power to just decide to start expropriating stuff (or, less extremely, to impose high taxes). And then the parties with the stuff might decide they need their own hard power to prevent that.
Looking forward to this! Feel free to ignore any or all of these.