Harsh but true. Luke seems ready to take all this to heart, and make improvements to address each of these points.
Yes, especially if by “ready to take all this to heart” you mean “already agreed with most of the stuff on organizational problems before Holden wrote the post.” :)
That was half my initial reaction as well; the other half:
The critique mostly consists of points that are pretty persistently bubbling beneath the surface around here and get brought up quite a bit. Don’t most people regard it as a great summary of their current views rather than as persuasive in its own right? In fact, I suspect the only effect this had on most people’s thinking was to increase their willingness to listen to Karnofsky in the future should he change his mind. Since the post is basically directed at LessWrongians as an audience, I find all of that a bit suspicious (not in the sense that he’s doing this deliberately).
Also, the only part of the post that interested me was this one (about the SI as an organization); the other stuff seemed kinda minor—praising with faint damns, relative to true outsiders, and so perhaps slightly misleading to LessWrongians.
Reading this (at least a year old, I believe) makes me devalue current protestations:
http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc
I just assume people are pretty good at manipulating my opinion, and honestly, that often seems more the focus in the “academic outreach”. People who think about signalling (outside of economics, evolution, etc.) are usually signalling bad stuff. Paying 20K or whatever to have someone write a review of your theory is also really, really interesting, as apparently SI is doing (it’s on the balance sheet somewhere for that “commissioned” review; I forget the exact amount). Working on a dozen papers on which you might only have 5% involvement (again: or whatever) is also really, really interesting. I can’t evaluate SI, but they smell totally unlike scientists and quite like philosophers. Which is probably true, and only problematic inasmuch as EY thinks other philosophy is mostly bunk. The closest thing to actually performed science I’ve seen on LW was that bit about rates of evolution, which was rather scatterbrained. If anyone can point me to some science, I’d be grateful. The old joke about Comp Sci (neither about Comp nor Sci) need not apply.
Apart from the value of having a smart, sincere person who likes you and has seriously tried to appreciate you give you their opinion of you … Holden’s post directly addresses “why the hell should people give money to you?” Particularly as his answer, as a staff member of a charity evaluator, is “to support your goals, they should not give money to you.” That’s about as harsh an answer as anyone could give a charity: “you are a net negative.”
My small experience is on the fringes of Wikimedia. We get money mostly in the form of lots of small donations from readers. We have a few large donations (and we are very grateful indeed for them!), but we actively look for more small donations (a) to make ourselves less susceptible to the influence of large donors and (b) to recruit co-conspirators: if people donate even a small amount, they feel like part of the team, and that’s worth quite a lot to us.
The thing is that Wikimedia has never been very good at playing the game. We run this website and we run programmes associated with it. Getting money out of people has been a matter of shoving a banner up. We do A/B testing on the banners! But if we wanted to get rabid about money, there’s a lot more we could be doing. (At possible expense of the actual mission.)
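(To make the banner-testing idea concrete, here is a minimal sketch of what an A/B comparison like that could look like; the variant names, view counts, and donation counts below are made-up assumptions for illustration, not Wikimedia’s actual setup or numbers.)

import random
from math import sqrt

def assign_banner(variants=("banner_a", "banner_b")):
    # Assign one pageview to a banner variant, uniformly at random.
    return random.choice(variants)

def two_proportion_z(donors_a, views_a, donors_b, views_b):
    # z-statistic for the difference between two banners' donation rates,
    # using the pooled rate for the standard error.
    p_a, p_b = donors_a / views_a, donors_b / views_b
    pooled = (donors_a + donors_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical counts: banner A converted 300 of 100000 views,
# banner B converted 360 of 100000 views.
z = two_proportion_z(300, 100000, 360, 100000)
print("z = %.2f" % z)  # |z| > 1.96 is roughly significant at the 5% level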
SIAI doesn’t have the same wide reader base to get donations from. But the goal of a charity that cares about its objectives should be independence. I wonder how far they can go in this direction: to be able to say “well, we don’t care what you say about us, us and our readers are enough.” I wonder how far the CMR will go.
Sorry, I’m not quite understanding your first paragraph. The subsequent piece I agree with completely, and I think it applies to a lot of SI activities in principle (even if they aren’t looking for small donors). The same idea could roughly guide their outlook on “academic outreach”, except there it’s a donation of time rather than money. For example, gaining credibility from a few big names is probably a bad idea, as is trying to play the game of seeking credibility.
On the first paragraph, apologies for repeating, but just clarifying: I’m assuming that everyone already should know that even if you’re sympathetic to SI goals, it’s a bad idea to donate to them. Maybe it was a useful article for SI to better understand why people might feel that way. I’m just saying I don’t think it was, strictly speaking, “persuasive” to anyone. Except, I was initially somewhat persuaded that Karnofsky is worth listening to in evaluating SI. I’m just claiming, I guess, that I was way more persuaded that it was worth listening to Karnofsky on this topic than I should have been, since I think everything he says is too obvious to imply shared values with me. So, in a few years, if he changes his mind on SI, I’ve now decided that I won’t weight that as very important in my own evaluation. I don’t mean that as a criticism of Karnofsky (his write-up was obviously fantastic). I’m just explicating my own thought process.
I felt it was very persuasive.
Just as a data point, I was rather greatly persuaded by Karnofsky’s argument here. As someone who reads LW more often for the cognitive science/philosophy stuff and not so much for the FAI/Singularity stuff, I did not have a very coherent opinion of the SI, particularly one that incorporated objective critiques (such as Karnofsky’s).
Furthermore, I certainly did not, as you assert, know that it is a bad idea to donate to the Singularity Institute. In fact, I had often heard the opposite here.
Thanks. That’s very interesting to me, even as an anecdote. I’ve heard the opposite here too; that’s why I made it a normative statement (“everyone already should know”). Between the missing money and the publication record, I can’t imagine what would make SI look worth investing in to me. Yes, that would sometimes lead you astray. But even posts like, oh: http://lesswrong.com/lw/43m/optimal_employment/?sort=top
are pretty much the norm around here (I picked that since Luke helped write it). Basically, an insufficient attempt to engage with the conventional wisdom.
How much should you (generic you) like this place just because they’re hardliners on issues you believe in? There are lots of compatibilists, materialists, consequentialists, MWIers, or whatever in the world. Less Wrong seems unusual in being rather hardline on these issues, but that’s usually more a sign that people have turned something into a social issue than a matter of intellectual conviction (or better, competence).

Anyway, I’ve probably wandered inappropriately off topic for this page; I’m just rambling. To say at least something on topic: a few months back there was an issue of Nature on philanthropy in science (a cover article and a few other pieces, as I recall); easily searchable, I’m sure, and it may have some relevance both as SI tries to get money and as it “commissions” pieces.