When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.
We (the AI Safety community / generally alignment-concerned people / EAs) almost definitely can’t choose what type of regulations are adopted. If we’re very lucky or dedicated, we might be able to get a place at the table. Everyone else at the table will be members of slightly, or very, misaligned interest groups with whom we have to compromise.
Various stripes of “Neo-Luddite” and AI-x-risk people have different concerns, but this is how political alliances work. You get to the table and work out what you have in common. We can try to take a leadership role in this alliance, with safety/alignment as our bottom line. We’ll probably be a smaller interest group than the growing ranks of newly unemployed creatives, but we could be more professionalised and more aware of how to enact political change.
If we could persuade an important neo-Luddite ‘KOL’ to share our concerns about x-risk and alignment, this could make them a really valuable ally. This isn’t too unrealistic: I suspect that, once you start feeling critical towards AI for taking your livelihood, it’s much easier to see it as an existential menace.
Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal because it’s “better than nothing” unless it’s also literally the only chance we get to delay AI.
Expecting anything close to optimal regulation in the current national/international order on the first shot is surely folly. We should endorse any proposal that is “better than nothing” while factoring potential suboptimal regime shifts into our equations.
Neither can you choose what follows after your plan to kill 7.6 billion people succeeds.
Well, inducing a mass societal collapse is perhaps one of the few ways that a small group of people with no political power or allies would be able to significantly influence AI policy. But, as I stressed in my post, that is probably a bad idea, so you shouldn’t do it.
I’m confused about what this response is arguing against. As far as I can tell, no major alignment organization is arguing that collapse is desirable, so I do not understand what you’re arguing for.
No organisation, but certainly the individual known as Dzoldzaya, who wrote both the article I linked and the comment I was replying to. By Dzoldzaya’s use of “We”, they place themselves within the AI Safety/etc. community, if not any particular organisation.
Here, Dzoldzaya recommends teaming up with neo-Luddites to stop AI, by unspecified means that I would like to see elaborated on. (What does ‘KOL’ mean in this context?) There, Dzoldzaya considers trying to delay AI by destroying society, by means such as engineering a pandemic that kills 95% of the population (i.e. 7.6 billion people). The only problem Dzoldzaya sees with such proposals is that they might not work, e.g. if the pandemic kills 99.9% instead and makes recovery impossible. But collapse would delay catastrophic AI and give us a better chance at the far future where (Dzoldzaya says) most value lies, against which 7.6 billion people are nothing. Dzoldzaya claims to be only 40% sure that such plans would lead to a desirable outcome, but given the vast future at stake, a 40% shot at it is a loud call to action.
ETA: Might “KOL” mean “voice”, as in “bat kol”?
KOL = Key Opinion Leaders, as in a small group of influential people within the neo-Luddite space. My argument here was simply that people concerned about AI alignment need to be politically astute, and more willing to find allies with whom they may be less aligned.
I think it’s probably a problem that those interested in AI alignment are far more aligned with techno-optimists, whom I see as pretty dangerous allies, than with more cautious, less technologically sophisticated groups (bureaucrats or neo-Luddites).
I don’t know why you feel the need to use my unrelated post to attempt to discredit my comment here; it strikes me as pretty bad form on your part. But, to state the obvious, a 40% shot at a desirable outcome is obviously not a call to action if the other 60% is very undesirable (I mention that negative outcomes involve either extinction or worse).