That’s a lot of text, and it leaves me confused in several ways.
The claim about AI timelines is, I think, not contentious. But what about “AI will be dangerous”? Surely that’s Claim 4? Is it so obvious that it’s not even worth listing?
Regarding AI risk estimation, this seems a weirdly rare topic of discussion on LW. The few recent AI-related posts feel like summaries of stuff like the Superintelligence FAQ without much new substance. Are those discussions happening elsewhere? Maybe there aren’t any new discussions to be had?
You have a section called “what to think about”, but I didn’t actually understand what to think about. Is this about solving alignment? Or about updating our personal plans to account for the coming space communism/apocalypse? You sort of hinted that there is a list of things that need to be done, but haven’t really explained what they would be.
Regarding “thinking”, would you have told a farmer in the 1700s to stop working, learn engineering, and think about what the industrial revolution was going to look like? Does that really make sense? I think “wait and see” is also a good strategy.
We’ve had about 5-6 years of active discussion on AI risk estimation on LessWrong, as well as on the shape of intelligence explosions. If you haven’t read Superintelligence or Rationality: A-Z yet, then I would recommend reading those, since most of the writing on LW assumes you’ve read them and takes knowing the arguments in them mostly as given.
Note: I actually think your question is legitimate and valid; it’s just something we’ve literally spent about 40% of the content on this site discussing, and I think it’s important that we aren’t forced to have the same conversations over and over again. A lot of that discussion is now a few years old, which does mean it isn’t really actively discussed anymore, but it also means that I usually want someone to credibly signal that they’ve read the old discussions before I engage in a super long debate with them rehashing the old arguments.
I’m vaguely aware there used to be discussion, though I wasn’t there for it. So what happened? I’m not suggesting that you should replay the same arguments (though we might; surely I’m not the only new person here). I’m suggesting that you should have new arguments. Time has passed, new developments have happened, presumably progress has been made on something and new problems have been discovered, and alignment remains unsolved.
Raemon suggests that you should spend time thinking about AI. If you agree about how important it is, you probably should. And if you’re going to think, you might as well publish what you figure out. And if you’re going to publish, you might as well do it on LW, right?
Regarding reading the old arguments, is there some way to find the good ones? A lot of the arguments I see are kind of weak: long, intuitive, assuming the reader doesn’t know even CS 101, etc. Rationality: A-Z falls into this category, I think (though I haven’t read it in a long time). Is “Superintelligence” better?
By the way, you saw my recent post on AI. Was its point also something previously talked about? Do you have links?
This isn’t intended to argue about AI safety from the ground up; it’s targeted towards people who are familiar with (and buy into) the arguments, but aren’t taking action on them. (Scott Alexander’s Superintelligence FAQ is the summary I point people to if they aren’t buying the basic paradigm. If you’ve read that, and either aren’t fully convinced or feel like you want more context, and you haven’t read the Sequences and Superintelligence, I do literally suggest doing that first, as Habryka suggests.)
So, “AI timelines have shifted sooner, even among people who were taking them seriously” is the bit of new information for people who’ve been sort of following things but not religiously keeping track of a lot of in-person and FB discussions.
Just pointing out that I’m still waiting on a response to my comment here asking a similar question. I read the Sequences and Superintelligence, but I still don’t see how an AI would proliferate and advance faster than our ability to kill it; a year to get from baby to Einstein-level intelligence is plenty long enough to react.
This post of mine is (among other things) one piece of a reply to this comment.
https://www.lesserwrong.com/posts/RHurATLtM7S5JWe9v/factorio-accelerando-empathizing-with-empires-and-moderate
React to what? Such an AI might appear perfectly safe and increasingly useful right up to the point where it can no longer be turned off.
Why do you think people wouldn’t shut down an AI when they see it developing the ability to resist being shut down, regardless of how useful it currently is?
Does section 5.2 of Disjunctive Scenarios answer your question? There are plenty of reasons why various groups would set an AI free, and just something like it having unlimited Internet access may allow it to copy itself to be run somewhere else, preventing any future shutdown attempts.
Also, “we should shut this thing down because it’s dangerous, regardless of how useful it currently is” is something that humans are empirically terrible at. Most people know that modern operating systems are probably full of undiscovered security holes that someone may be exploiting even as we speak, but nobody’s seriously proposing that we take down all computers while we rebuild operating systems from the ground up to be more secure.
More generally, how many times have you read a report of some accident that includes phrasing to the effect of “everyone knew it was a disaster just waiting to happen”? Eliezer also had a long article just recently about all kinds of situations where a lot of people know that the situation is fucked up, but can’t really do anything about it despite wanting to.
OK, that answers my first two points. So, if I bought into the arguments, it would be clear to me what I should be thinking about? And the value of this thinking would also be clear to me?
That seems dubious, but anyway, since I don’t yet fully buy into the arguments, would you explain what exactly to think about and why that would be a good idea?