I downvoted the comment. We want a culture where people who choose to take on good, ambitious projects get social approval, instead of being told: “What you want to do isn’t very advanced; advanced people should do X.”
The fact that I was basically serious, and in no way attempting to discourage @elriggs, and yet the comment is (after 12 hours) at −17, suggests that LW now has a problem with people who *do* want to do advanced things.
I don’t think it should be at −17, but I don’t think its low score indicates that LW has a problem with people who want to do advanced things.
Your suggestion, taken as a serious one, is obviously absurdly overambitious. Its premise is that the experts in the relevant fields have nearly enough knowledge to (1) create a superhuman AI and (2) arrange for it to behave in ways that are good rather than bad for us. But so far as I can tell, those same experts are pretty much universally agreed that those are super-hard problems. (There are some people who are arguably experts on #1 and think #2 might be easy. There are some people who are arguably experts on #2 and think #1 might happen much sooner than we’d guess. But I don’t think I know of anyone who’s an expert on #1 and thinks #1 is feasible within, say, a year, or of anyone who’s an expert on #2 and thinks #2 is feasible on that timescale.)
So you are simultaneously arguing that the state of the art in these things is really far advanced and that the people whose work makes it so are hopelessly incompetent to evaluate how close we are.
For sure, it could turn out that you’re right. But it seems staggeringly unlikely, and in any case anyone actually in a position to solve those problems within 80 days is surely already working on them.
Also: if I try to think about the possible worlds most similar to the one I think we’re actually in, in which at least one of those problems does get solved within 80 days, it seems to me that a substantial fraction are worlds in which only one of them does. The most likely way for that to happen is some sort of rapidly, recursively self-improving AI (“FOOM”), and if that happens without the other problem getting solved, there’s a substantial danger that we’re all screwed. In that possible world, advising people to rush to solve those problems seems like rather a bad idea.
(I don’t think FOOM+doom is a terribly likely outcome. But I think it’s quite likely conditional on any part of your proposal turning out to be feasible.)
Just a reminder that karma works slightly differently on LW 2.0, so karma −17 today means less than karma −17 would have meant on LW 1.0.
The problem isn’t conscious intent but the social effect of a statement. Being bad at social skills, and thus unaware that you’re making status moves, is no good justification for those moves being proper.
If you seriously meant to propose that trial, there was no good reason to do so in this thread; you could have written your own post for it.