Thank you very much for writing this. I, um, wish you hadn’t posted it literally directly before the May Minicamp when I can’t realistically respond until Tuesday. Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI.
It looks to me as though Holden had the criticisms he expresses even before becoming “informed”, presumably by reading the sequences, but was too intimidated to share them. Perhaps it is worth listening to/encouraging uninformed criticisms as well as informed ones?
Note the following criticism of SI identified by Holden:
To those who think Eliezer is exaggerating: please link me to “informed criticism of SIAI.”
It is so hard to find good critics.
Edit: Well, I guess there are more than two examples, though relatively few. I was wrong to suggest otherwise. Much of this has to do with the fact that SI hasn’t been very clear about many of its positions and arguments: see Beckstead’s comment and Hallquist’s followup.
1) Most criticism of key ideas underlying SIAI’s strategies does not reference SIAI, e.g. Chris Malcolm’s “Why Robots Won’t Rule” website is replying to Hans Moravec.
2) Dispersed criticism, with many people making local points, e.g. those referenced by Wei Dai, is still criticism and much of that is informed and reasonable.
3) Much criticism is unwritten, e.g. consider the more FAI-skeptical Singularity Summit speaker talks, or takes the form of brief responses to questions or the like. This doesn’t mean it isn’t real or important.
4) Gerrymandering the bounds of “informed criticism” to leave almost no one within bounds is in general a scurrilous move that one should bend over backwards to avoid.
5) As others have suggested, even within the narrow confines of Less Wrong and adjacent communities there have been many informed critics. Here’s Katja Grace’s criticism of hard takeoff (although I am not sure how separate it is from Robin’s). Here’s Brandon Reinhart’s examination of SIAI, which includes some criticism and brings more in comments. Here’s Kaj Sotala’s comparison of FHI and SIAI. And there are of course many detailed and often highly upvoted comments in response to various SIAI-discussing posts and threads, many of which you have participated in.
This is a bit exasperating. Did you not see my comments in this thread? Have you and Eliezer considered that if there really have been only two attempts to mount informed criticism of SIAI, then LessWrong must be considered a massive failure that SIAI ought to abandon ASAP?
See here.
Wei Dai has written many comments and posts that have some measure of criticism, and various members of the community, including myself, have expressed agreement with them. I think what might be a problem is that such criticisms haven’t been collected into a single place where they can draw attention and stir up drama, as Holden’s post has.
There are also critics like XiXiDu. I think he’s unreliable, and I think he’d admit to that, but he also makes valid criticisms that are shared by other LW folk, and LW’s moderation makes it easy to sift his comments for the better stuff.
Perhaps an institution could be designed. E.g., a few self-ordained SingInst critics could keep watch for critiques of SingInst, collect them, organize them, and update a page somewhere out-of-the-way over at the LessWrong Wiki that’s easily checkable by SI folk like yourself. LW philanthropists like User:JGWeissman or User:Rain could do it, for example. If SingInst wanted to signal various good things then it could even consider paying a few people to collect and organize criticisms of SingInst. Presumably if there are good critiques out there then finding them would be well worth a small investment.
I put them in discussion because, well, I bring them up for the purpose of discussion, and not for the purpose of forming an overall judgement of SIAI or trying to convince people to stop donating to SIAI. I’m rarely sure that my overall beliefs are right and SI people’s are wrong, especially on core issues that I know SI people have spent a lot of time thinking about, so mostly I try to bring up ideas, arguments, and possible scenarios that I suspect they may not have considered. (This is one major area where I differ from Holden: I have greater respect for SI people’s rationality, at least their epistemic rationality. And I don’t know why Holden is so confident about some of his own original ideas, like his solution to Pascal’s Mugging and his Tool-AI ideas. Well, I guess I do; it’s probably just typical human overconfidence.)
Having said that, I reserve the right to collect all my criticisms together and make a post in Main in the future if I decide that serves my purposes, although I suspect that without the influence of GiveWell behind me it won’t stir up nearly as much drama as Holden’s post. :)
ETA: Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general (like the decision theory results that cousin_it, Nesov, and others occasionally post). This episode makes me think I may have overestimated how much attention they pay. It would be good if Luke or Eliezer could comment on this.
I read most such discussions (the ones that look relevant from their post titles), and Anna reads a minority. I think Eliezer reads very few. I’m not very sure about Luke.
Do you forward relevant posts to other SI people?
Ones that seem novel and valuable, either by personal discussion or email.
Yes, I read most LW posts that seem to be relevant to my concerns, based on post titles. I also skim the comments on those posts.
I’m somewhat confident (from directly asking him a related question, and also from many related observations over the last two years) that Eliezer mostly doesn’t, or is very good at pretending that he doesn’t. He’s also not good at reading, so even if he sees something he’s only somewhat likely to understand it unless he already thinks it’s worth his while to go out of his way to understand it. If you want to influence Eliezer, it’s best to address him specifically, to state your arguments clearly, and to explicitly disclaim that you’re not making any of the stupid arguments that your arguments could be pattern-matched to.
Also I know that Anna is often too busy to read LessWrong.
Good point. Wei Dai qualifies as informed criticism. Though, he seems to agree with us on all the basics, so that might not be the kind of criticism Eliezer was talking about.
It would help if you could elaborate on what you mean by “informed”.
Most of what Holden wrote, and much more, has been said before by other people besides myself.
I don’t have the time right now to wade through all those years of posts and comments but might do so later.
And if you are not willing to take into account what I myself wrote, on the grounds that it is uninformed, then maybe you will at least agree that all of my critical comments that have been upvoted to +10 (ETA: changed to +10, although there is a lot more that is on-topic at +5) should have been taken into account. If you do so, you will find that SI could have updated some time ago on some of what has been said in Holden’s post.
Seconded. It seems to me like it’s not even possible to mount properly informed criticism if much of the findings are just sitting unpublished somewhere. I’m hopeful that this is actually getting fixed sometime this year, but it doesn’t seem fair to not release information and then criticize the critics for being uninformed.
I’m not sure how much he’s put into writing, but Ben Goertzel is surely informed. One might argue he comes to the wrong conclusions about AI danger, but it’s not from not thinking about it.
If you don’t have a good argument, you won’t find good critics. (Unless you are as influential as religion. Then you can get a good critic simply because you stepped on the critic’s foot. The critic probably isn’t going to come to church to talk about it, though, and the ulterior motive (having had a foot stepped on) may make you classify them as a bad critic.)
When you look through frosted glass and see some blurred text that looks like it has equations in it, and you are told that what you see is a fuzzy image of a proof that P!=NP (maybe you can make out the headers, which are in a bigger font and look like the kind of headers a valid proof might have), do you assume that it really is a valid proof and that they only need to polish the glass? What if it is P=NP instead? What if it doesn’t look like it has equations in it?