I had the same impression at first, but in the areas where I most wanted these, I realized that Jacob linked to additional posts where he has defended specific claims at length.
Here is one example:
EY is just completely out of his depth here: he doesn’t seem to understand how the Landauer limit actually works, doesn’t seem to understand that synapses are analog MACs which minimally require OOMs more energy than simple binary switches, doesn’t seem to understand that interconnect dominates energy usage regardless, etc.
I usually find Tyler Cowenesque (and heck, Yudkowskian) phrases like this irritating, and usually they’re pretty hard to interrogate, but Jacob helpfully links to an entire factpost he wrote on this specific point, elaborating on this claim in detail.
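(For readers who want the background: the Landauer limit referenced above is the standard thermodynamic lower bound on the energy required to erase one bit of information. At room temperature it works out to roughly

$$E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\text{J/K})(300\,\text{K})(0.693) \approx 2.9\times10^{-21}\,\text{J} \approx 0.018\,\text{eV}$$

per bit erased; the quoted claim is that analog synaptic operations and interconnect necessarily sit orders of magnitude above this floor.)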
He does something similar here:
EY derived much of his negative beliefs about the human mind from the cognitive biases and ev psych literature, and especially Tooby and Cosmide’s influential evolved modularity hypothesis. The primary competitor to evolved modularity was/is the universal learning hypothesis and associated scaling hypothesis, and there was already sufficient evidence to rule out evolved modularity back in 2015 or earlier.
He is doing an admirable job of summarizing the core arguments of Eliezer’s overall model of AI risk, then providing deeply-thought-out counterarguments. The underlying posts appear to be pretty well-cited.
That doesn’t mean that every claim is backed by an additional entire post of its own, or that every argument is convincingly correct.
for various reasons mostly stemming from the strong prior that biology is super effecient I expect humanity to be very difficult to kill in this way (and growing harder to kill every year as we advance prosaic AI tech)
This is more of the Cowenesque (and Yudkowskian) handwavy style: an appeal to authority and intuition, with the implication that any informed listener ought to have mastered the same background knowledge and arrived naturally at the same conclusions. Here, my response would be "viruses are not optimizing to kill humans; they are optimizing for replication, which often means keeping hosts alive."
He follows this with another easy-to-argue-with claim:
killing humanity would likely not be in the best interests of even unaligned AGI, because humans will probably continue to be key components of the economy (as highly efficient general purpose robots) long after AGI running in datacenters takes most higher paying intellectual jobs
This sounds at best like an S-risk (how would the AI cause humans to behave like highly efficient general purpose robots? Probably not in ways we would enjoy from our perspective now). And we don’t need to posit that an unaligned AI would be bent on keeping any semblance of an “economy” running. ChaosGPT 5.0 might specifically have the goal of destroying the entire world as efficiently as possible.
But my goal in articulating these counterarguments isn't to embroil myself in a debate downthread. It's to point out that while Jacob is still very far from dealing with every facet of Eliezer's argument, he is at the same time doing what I regard as a pretty admirable job of interrogating certain specific claims in depth.
And Jacob doesn’t read to me as making what I would call “personal attacks on Eliezer.” He does do things like:
Accurately paraphrase arrogant-sounding things Eliezer says:
Moreover he predicts that this is the default outcome, and AI alignment is so incredibly difficult that even he failed to solve it.
Accurately describe the confidence with which Eliezer makes his arguments (“brazenly”).
Be very clear and honest about just how weak he finds Eliezer's argument, after putting in a very substantial amount of work in order to come to this evaluation:
I have evaluated this model in detail and found it substantially incorrect and in fact brazenly naively overconfident.
Every one of his key assumptions is mostly wrong, as I and others predicted well in advance. EY seems to have been systematically overconfident as an early futurist, and then perhaps updated later to avoid specific predictions, but without much updating his mental models (specifically his nanotech-woo model, as we will see).
Offer general reasons to think the epistemics are stronger on his side of the debate:
Even though the central prediction of the doom model is necessarily un-observable for anthropic reasons, alternative models (such as my own, or moravec’s, or hanson’s) have already made substantially better predictions, such that EY’s doom model has low posterior probability.
EY has espoused this doom model for over a decade, and hasn’t updated it much from what I can tell.
Note: this is one area I do think Jacob’s argument is weak—I would need to see one or more specific predictions from Jacob or the others he cites from some time ago in order to evaluate this claim. But I trust it is true in his own mind and could be made legible.
Overall, I think it is straightforwardly false to say that this post contains any "personal attacks on Eliezer."
Edit: However, I would also say that Jacob is every bit as brazen and (over?)confident as Eliezer, and where his arguments are weak he has earned the same level of vociferous pushback that he himself delivers here.
And I want to emphasize that in my opinion, there's nothing wrong with a full-throated debate on an important issue!