I can’t see what you’re getting at. Holden seems to say not just “you should do this”, but “the fact that you’re not already doing this reflects badly on your decision-making”. Eliezer replies that the first may be true but the second seems unwarranted.
Below, I list my major objections. I do not believe that these objections constitute a sharp/tight case for the idea that SI’s work has low/negative value; I believe, instead, that SI’s own arguments are too vague for such a rebuttal to be possible.
Consider three sections of Holden’s post:
In sections 1 and 2, Holden makes the argument that pinning our hopes on a utility function seems dangerous, because maximizers in general are dangerous. Better to just make information-processing tools that make us more intelligent.
When discussing SI as an organization, Holden says,
One of SI’s major goals is to raise awareness of AI-related risks; given this, the fact that it has not advanced clear/concise/compelling arguments speaks, in my view, to its general competence.
The jump from “speaks to its general competence” to “horribl[y] negligent” is a large and uncharitable one. If one focuses on “compelling,” then yes, Holden is saying “SI is incompetent because I wasn’t convinced by them,” and that does seem unwarranted, or at least weak. But if one focuses on “clear” or “concise,” then I agree with Holden: if SI’s core mission is to communicate about AI risks, and it’s unable to communicate clearly and concisely, then that speaks to its ability to complete its core mission! And that’s the other place where charity seemed lacking to me: it seems that Holden’s strongest complaints are about clarity and concision.
Now, that’s my impression as a bystander, and I “remember with compassion that it’s not always obvious to one person what another person will think was the central point”, so it is an observation about tone and little more.
Your link to Holden’s post is broken.
In a paragraph begging for charity, this sentence seems out of place.
(Commentary to follow.)