Most AI researchers have not done any research into the topic of AI [safety], so their opinions are irrelevant.
(I assume my edit is correct?)
One could also say: most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Lastly, this is not an outlier or ‘extremist’ view on this website. It is the majority opinion here, has been discussed to death in the past, and I think it’s as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren’t adding anything at all. There is literally no argument in your comment, just an appeal to authority.
Really? There are a lot of frequent posters here who don’t hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Considering that you’re posting this comment from an anonymous account, the statement above carries much less weight than it normally would.
most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Because that statement is simply false. Researchers do deal with real-world problems and datasets; there is a huge overlap between research and practice. By contrast, there is little or no overlap between AI risk/safety research and current machine learning research. The only connection I can think of is that people familiar with reinforcement learning might have a better understanding of AI motivation.
Really? There are a lot of frequent posters here who don’t hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
I didn’t say there wasn’t dissent. I said it wasn’t an outlier view, and that it seems to be the majority opinion.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Look, I’m sorry if I came across as overly hostile. I certainly welcome debate and discussion on this issue; if you have anything to say, feel free to say it. But your comment above didn’t really add anything. There was no argument, just an appeal to authority, and calling GP “extremist” for something that’s a common view on this site. At the very least, read some of the previous discussions first. You don’t need to read everything, but there is a list of posts here.
There was no argument, just an appeal to authority, and calling GP “extremist” for something that’s a common view on this site.
A view can be extreme within the wider AI community and normal within Less Wrong. The disconnect between LW and everyone else is part of the problem.