We’ve had about 5-6 years of active discussion on LessWrong about AI risk estimation and the shape of intelligence explosions. If you haven’t read Superintelligence or Rationality: A-Z yet, I would recommend reading those, since most of the writing on LW assumes you have and takes knowing the arguments in them mostly as given.
Note: I actually think your question is legitimate and valid; it’s just a topic that makes up something like 40% of the content on this site, and I think it’s important that we aren’t forced to have the same conversations over and over again. A lot of that discussion is now a few years old, so it isn’t really active anymore, but that does mean I usually want someone to credibly signal that they’ve read the old discussions before I engage in a long debate rehashing the old arguments.
I’m vaguely aware there used to be discussion, though I wasn’t there for it. So what happened? I’m not suggesting that you replay the same arguments (though we might; surely I’m not the only new person here). I’m suggesting that you have new arguments. Time has passed, new developments have happened, presumably progress has been made on some things and new problems have been discovered, and alignment remains unsolved.
Raemon suggests that you should spend time thinking about AI. If you agree that it’s important, you probably should. And if you’re going to think, you might as well publish what you figure out. And if you’re going to publish, you might as well do it on LW, right?
Regarding reading the old arguments, is there some way to find the good ones? A lot of the arguments I see are kind of weak, long, and intuitive, and don’t even assume the reader knows CS 101. Rationality: A-Z falls into this category, I think (though I haven’t read it in a long time). Is “Superintelligence” better?
By the way, you saw my recent post on AI. Was its point also previously discussed? Do you have links?