I think rationalists generally agree that speeding up the development of AGI (that doesn’t kill all of us) is extremely important
Didn’t Eli want a worldwide moratorium on AI development, with data center airstrikes if necessary?
Granted, I understood this to be on the grounds that we had reached the point where AGI killing us was a serious concern. But still, being in favor of “speeding up AGI that doesn’t kill us” is kind of misleading if you think the plan should be:
1. Slow down AGI to 0.
2. Figure out all of the alignment stuff.
3. Develop AGI with alignment as fast as possible.
I mean, sure, you want all three steps to happen as fast as possible, but that’s not where the disagreement lies. There’s a reason e/acc refer to the other side as “decels”, and it’s not unwarranted, IMO.
I would be more worried about getting kicked out of parties because you think “the NRC is a good thing”.
Let’s say “An NRC would be a good thing (at least on the assumption that we don’t intend to be 100% libertarian in the short run)”. I’m not going to die on the hill of whatever they may have done recently.
I don’t think there’s anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
You could imagine a similar situation in medicine: I think if we could engineer a virus that spreads rapidly among humans and rewrites our DNA to solve all of our health issues and make us smarter, that would be really good, and I might think it’s the most important thing for the world to be working on; but at the same time, I think the number of engineered super-pandemics should remain at zero until we’re very, very confident.
It’s worth noting that MIRI has been working on AI safety research (trying to speed up safe AI) for decades and only recently got into politics.
You could argue that Eliezer and some other rationalists are slowing down AGI and that’s bad because they’re wrong about the risks, but that’s not a particularly controversial argument here (for example, see this recent highly-upvoted post). There are fewer (recent) posts about how great safe AGI would be, but I assume that’s because it’s really obvious.
I don’t think there’s anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
I didn’t say it wasn’t sensible. I said describing it that way was misleading.
If your short-term goal is in fact to decelerate the development of AI, describing this as “accelerating the development of Friendly AI” is misleading, or at least confused. What you’re actually doing is trying to mitigate X-risk. In part you are doing this in the hopes that you survive to build Friendly AI. This makes sense except for the part where you call it “acceleration.”
Incidentally, people don’t seem to say “Friendly AI” anymore. What’s up with that?