I don’t think there’s anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
I didn’t say it wasn’t sensible. I said describing it that way was misleading.
If your short-term goal is in fact to decelerate the development of AI, describing this as “accelerating the development of Friendly AI” is misleading, or at least confused. What you’re actually doing is trying to mitigate X-risk. In part you are doing this in the hopes that you survive to build Friendly AI. This makes sense except for the part where you call it “acceleration.”
Incidentally, people don’t seem to say “Friendly AI” anymore. What’s up with that?