This isn’t intended to argue about AI safety from the ground up; it’s targeted at people who are familiar with (and buy into) the arguments but aren’t taking action on them. (Scott Alexander’s Superintelligence FAQ is the summary I point people to if they aren’t buying the basic paradigm. If you’ve read that, and either aren’t fully convinced or feel like you want more context, and you haven’t read the Sequences and Superintelligence, I do literally suggest doing that first, as Habryka suggests.)
So, “AI timelines have shifted sooner, even among people who were already taking them seriously” is the main piece of new information for people who’ve been loosely following things but not religiously keeping track of the many in-person and FB discussions.
Just pointing out that I’m still waiting on a response to my comment asking a similar question here. I read the Sequences and Superintelligence, but I still don’t see how an AI would proliferate and advance faster than our ability to kill it; a year to get from baby-level to Einstein-level intelligence is plenty of time to react.
This post of mine is (among other things) one piece of a reply to this comment.
https://www.lesserwrong.com/posts/RHurATLtM7S5JWe9v/factorio-accelerando-empathizing-with-empires-and-moderate
React to what? Such an AI might appear perfectly safe and increasingly useful right up to the point where it can no longer be turned off.
Why do you think people wouldn’t shut down an AI when they see it developing the capability to resist being shut down, regardless of how useful it currently is?
Does section 5.2 of Disjunctive Scenarios answer your question? There are plenty of reasons why various groups would set an AI free, and something as simple as unlimited Internet access may allow it to copy itself to be run somewhere else, preventing any future shutdown attempts.
Also, “we should shut this thing down because it’s dangerous, regardless of how useful it currently is” is something that humans are empirically terrible at. Most people know that modern operating systems are probably full of undiscovered security holes that someone may be exploiting even as we speak, but nobody’s seriously proposing that we take down all computers while we rebuild operating systems from the ground up to be more secure.
More generally, how many times have you read a report of some accident that includes phrasing to the effect of “everyone knew it was a disaster just waiting to happen”? Eliezer also had a long article just recently about all kinds of situations where a lot of people know that the situation is fucked up, but can’t really do anything about it despite wanting to.
OK, that answers my first two points. So, if I bought into the arguments, it would be clear to me what I should be thinking about? And the value of this thinking would also be clear to me?
That seems dubious but, anyway, since I don’t yet fully buy into the arguments, would you explain what exactly to think about and why that would be a good idea?