I want to step in here as a moderator. We’re getting a substantial wave of new people joining the site who aren’t caught up on all the basic arguments for why AI is likely to be dangerous.
I do want people with novel critiques of AI to be able to present them. But LessWrong is a site focused on progressing the cutting edge of thinking, and that means we can’t rehash every debate endlessly. This comment makes a lot of arguments that have been dealt with extensively on this forum, in the AI box experiment, Cold Takes, That Alien Message, So It Looks Like You’re Trying to Take Over The World, and many other places.
If you want to critique this sort of claim, the place to do it is on another thread. (By default, you can bring it up in the periodic All AGI Safety questions welcome threads.) And if you want to engage significantly with this topic on LessWrong, you should focus on understanding why AI is commonly regarded as dangerous here, and make specific arguments about where you expect those assumptions to be wrong. You can also check out https://ui.stampy.ai, which is an FAQ site optimized for answering many common questions.
The LessWrong moderation team is generally shifting to moderate more aggressively as a large wave of people start engaging. John Kluge has made a few comments in this reference class, so for now I’m rate-limiting them to one comment per three days.
I put together a bunch of the standard links for the topic of “how can software act in the world and kill you” in this comment.
There’s no particular need to: there’s a technology that lets you store succinct pre-written answers to Frequently Asked Questions.
And it turns out there is an FAQ:
https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq
although it is old (and not prominently displayed).