I don’t think there’s anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
You could imagine a similar situation in medicine: I think engineering a virus that spreads rapidly among humans and rewrites our DNA to solve all of our health issues and make us smarter would be really good, and I might even think it's the most important thing for the world to be working on; but at the same time, I think the number of engineered super-pandemics should remain at zero until we're very, very confident it's safe.
It's worth noting that MIRI has been working on AI safety research (trying to speed up safe AI) for decades and only recently got into politics.
You could argue that Eliezer and some other rationalists are slowing down AGI and that this is bad because they're wrong about the risks, but that's not a particularly controversial argument here (for example, see this recent highly-upvoted post). There are fewer (recent) posts about how great safe AGI would be, but I assume that's because it's really obvious.
I didn’t say it wasn’t sensible. I said describing it that way was misleading.
If your short-term goal is in fact to decelerate the development of AI, describing this as “accelerating the development of Friendly AI” is misleading, or at least confused. What you’re actually doing is trying to mitigate X-risk. In part you are doing this in the hopes that you survive to build Friendly AI. This makes sense except for the part where you call it “acceleration.”
Incidentally, people don’t seem to say “Friendly AI” anymore. What’s up with that?