I think The Sequences spend a lot of words making these arguments, not to mention the enormous quantity of more recent content on LessWrong. Much of Holden’s recent writing has been dedicated to making this exact argument. The case for AGI being singularly impactful does feel pretty overdetermined to me based on the current arguments, so my view is that the ball is in the other court, for proactively arguing against the current set of arguments in favor.
Let’s address the sources, one by one:
I think The Sequences spend a lot of words making these arguments,
To be a little blunt, the talk about AGI is probably the weakest point of the Sequences, primarily because it gets a lot of things flat-out wrong. To be fair, Eliezer was writing before the endgame, before there was massive, successful investment in AI, so he was bound to get some things wrong.
Some examples of where he was wrong on AI:
It ultimately turned out that AI boxing does work, and Eliezer was flat wrong.
He was wrong in thinking that deep learning could never scale to AGI, and his dismissal of neural networks is the strongest example of this wrongness I’ve seen in the Sequences, primarily because the human brain, which acts like a neural network, is far more efficient than his dismissal implied, and is arguably close to an optimal design, at least for classical, non-exotic computers. At most, you’d get about 1 OOM of improvement in the efficiency of the design.
To be blunt, Eliezer is severely unreliable as a source on AGI.
Next, I’ll address this:
not to mention the enormous quantity of more recent content on LessWrong.
Mostly, this content is premised on the assumption that AGI is a huge deal. Little content on LW actually tries to show why AGI would be a huge deal without assuming it upfront.
Lastly, I’ll deal with this source:
Much of Holden’s recent writing has been dedicated to making this exact argument.
This is much better as an actual source, and indeed it’s probably the closest any writing on LW comes to asking whether AGI is a huge deal without assuming it.
So I have one good source, one irrelevant source and one bad to terrible source on the question of whether AGI is a huge deal. The good source is probably enough to at least take LW arguments for AI seriously, though without at least a fragment of the assumption that AGI is a huge deal, one probably can’t get very certain, as in, say, over 90% probability.
It ultimately turned out that AI boxing does work, and Eliezer was flat wrong.
This is so wrong that I suspect you mean something completely different from the common understanding of the concept.
He was wrong in thinking that deep learning could never scale to AGI, and his dismissal of neural networks is the strongest example of this wrongness I’ve seen in the Sequences, primarily because the human brain, which acts like a neural network, is far more efficient than his dismissal implied, and is arguably close to an optimal design, at least for classical, non-exotic computers. At most, you’d get about 1 OOM of improvement in the efficiency of the design.
This is not a substantial part of his model of AGI, or of why one might expect it to be impactful.
Mostly, this content is premised on the assumption that AGI is a huge deal. Little content on LW actually tries to show why AGI would be a huge deal without assuming it upfront.
Of course plenty of more recent content on LessWrong operates on the background assumption that AGI is going to be a big deal, in large part because the arguments to that effect are quite strong and the arguments against are not. It is at the same time untrue that those arguments don’t exist on LessWrong.
So I have one good source, one irrelevant source and one bad to terrible source on the question of whether AGI is a huge deal.
There are many other sources of such arguments on LessWrong; that’s just the one that came to mind in the first five seconds. If you are going to make strong, confident claims about core subjects on LessWrong, you have a duty to have done the necessary background reading to understand at least the high-level outline of the existing arguments on the subject (including the fact that such arguments exist).
While I still have issues with some of the evidence shown, I’m persuaded enough that I’ll take it seriously and retract my earlier comment on the subject.
I don’t think this comment is rigorous enough for Noosphere89 to retract his comment that this one responds to, but that’s up to him.
I think claims of the form “Yudkowsky was wrong about things like mind-design space, the architecture of neural networks (specifically how he thought making large generalizations about the structure of the human brain wouldn’t work for designing neural architectures), and in general, probably his tendency to assume that certain abstractions just don’t apply whenever intelligence or capability is scaled way up” have by now been argued well enough that they have at least some merit to them.
The claim about AI boxing I’m not sure about, but my understanding is that it’s currently being debated (somewhat hotly). [Fill in the necessary details where this comment leaves a void, but I think this is mainly about GPT-4’s API and it being embedded into apps where it can execute code on its own and things like that.]
Claims of the form “Yudkowsky was wrong about things like mind-design space, the architecture of neural networks (specifically how he thought making large generalizations about the structure of the human brain wouldn’t work for designing neural architectures), and in general, probably his tendency to assume that certain abstractions just don’t apply whenever intelligence or capability is scaled way up.”
This is what I was gesturing at in my comments.
The claim about AI boxing I’m not sure about, but my understanding is that it’s currently being debated (somewhat hotly).
I’m talking about simboxing, which was shown to work by Jacob Cannell here:
https://www.lesswrong.com/posts/WKGZBCYAbZ6WGsKHc/love-in-a-simbox-is-all-you-need
Basically, as long as we can manipulate the AI’s perception of reality, which is trivial to do in offline learning, it’s easy to recreate a finite-time Cartesian agent: data passes only through approved channels, the AI updates its state to learn new things, and this repeats until the end of offline learning.
Thus simboxing is achieved.
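To make that loop concrete, here is a minimal sketch in Python of the finite-time Cartesian setup described above. Everything in it (the SimWorld, Agent, and run_simbox names) is a hypothetical illustration of the idea, not anything taken from Cannell’s post.

```python
# Hypothetical sketch of the finite-time Cartesian ("simbox") loop described
# above. The agent never observes the real world: every observation is
# generated inside the simulation, and only approved channels carry data in.

class SimWorld:
    """Stand-in simulation that fabricates the agent's entire 'reality'."""
    def render(self, step):
        return {"vision": f"simulated frame {step}", "debug": "not approved"}

class Agent:
    """Stand-in learner that simply accumulates whatever it is shown."""
    def __init__(self):
        self.state = []
    def update(self, observations):
        self.state.append(observations)

def run_simbox(agent, world, approved_channels, num_steps):
    """Offline learning for a fixed, finite number of steps, then stop."""
    for step in range(num_steps):
        observations = world.render(step)          # reality as we define it
        filtered = {k: v for k, v in observations.items()
                    if k in approved_channels}     # approved channels only
        agent.update(filtered)                     # agent updates its state
    return agent                                   # loop is finite by design

trained = run_simbox(Agent(), SimWorld(), approved_channels={"vision"}, num_steps=3)
print(len(trained.state))  # 3 updates, none with access outside the simulation
```

The point is just that, in this kind of setup, everything the agent learns from has passed through a channel the designers explicitly approved, and the loop halts after a fixed number of steps.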
The reason I retracted my comment is that this quote was correct:
Of course plenty of more recent content on LessWrong operates on the background assumption that AGI is going to be a big deal, in large part because the arguments to that effect are quite strong and the arguments against are not. It is at the same time untrue that those arguments don’t exist on LessWrong.
Primarily because of the post below. There are some caveats to this, but this largely goes through.
Post below:
https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long