Have you considered emphasizing this part of your position:
“We want to shut down AGI research including governments, military, and spies in all countries”.
I think this is an important point that is missed in current regulation, which focuses on slowing down only the private sector. It’s hard to achieve because policymakers often favor their own institutions, but it’s absolutely needed, so it needs to be said early and often. This will actually win you points with the many people who are cynical about institutions: not just libertarians, but a growing portion of the public.
I don’t think anyone is saying this, but it fits your honest and confrontational communication strategy.
I am not sure which way you intended that sentence. Did you mean:
A. We want to shut down all AGI research everywhere by everyone, or
B. We want to shut down AGI research, and we also want to shut down governments, militaries, and spies
I assume you meant the first thing, but want to be sure!
We support A. Eliezer has been very clear about that in his tweets. In broader MIRI communications, it depends on how many words we have to express our ideas, but when we have room we spell out that idea.
I agree that current / proposed regulation is mostly not aimed at A.
Definitely A, and while it’s clear MIRI means well, I’m suggesting a focus on preventing military and intelligence arms races in AI, because that seems like a likely failure mode that no one is focusing on. It seems like a place where a bunch of blunt people can expand the Overton window to everyone’s advantage.
MIRI has used nuclear non-proliferation as an example (and has gotten a lot of pushback for it). But non-proliferation did not stop new countries from getting the bomb, and it certainly did not stop existing countries from scaling up their nuclear arsenals. Global de-escalation after the end of the Cold War is what caused arsenals to shrink. For example, look at this graph: it doesn’t go down after the 1968 treaty, it goes down after the Cold War winds down (after 1985).
We would not want to see a similar situation with AI, where existing countries race to scale up their efforts and research.
This is in no way a criticism, MIRI is probably already doing the most here, and facing criticism for it. I’m just suggesting the idea.