Most of the people who are most likely to think that EY’s positions on things are plainly false probably don’t bother registering here to say so.
True, but I still wouldn’t expect sharp disagreement with Eliezer to be so rare. One contributing factor may be that Eliezer at least appears to be so confident in so many of his positions, and does not put many words of uncertainty into his writing about theoretical issues.
When I first found this site, I read through all the OB posts chronologically, rather than reading the Sequences as sequences. So I got to see the history of several commenters, many of whom disagreed sharply with EY, with their disagreement evolving over several posts.
They tend to wander off after a while. Which is not surprising, as there is very little reward for it.
So I guess I’d ask this a different way: if you were an ethical philosopher whose positions disagreed with EY, what in this community would encourage you to post (or comment) about your disagreements?
The presence of a large, sharp, and serious audience. The disadvantage, of course, is that the audience tends not to already be familiar with standard philosophical jargon.
By contrast, at a typical philosophy blog, you can share your ideas with an audience that already knows the jargon, and is also sharp and serious. The disadvantage, of course, is that at the typical philosophy blog, the audience is not large.
These considerations suggest that a philosopher might wish to produce his own competing meta-ethics sequence here if he were in the early stages of producing a semi-popular book on his ideas. He might be less interested if he is interested only in presenting to trained philosophers.
(nods) That makes sense.
Caring about the future of humanity, I suppose, and thinking that SIAI’s choices may have a big impact on the future of humanity. That’s what motivates me to post my disagreements—in an effort to figure out what’s correct.
Unfortunately, believing that the SIAI is likely to have a significant impact on the future of humanity already implies accepting many of its core claims: that the intelligence-explosion view of the singularity is accurate, that special measures to produce a friendly AI are feasible and necessary, and that the SIAI has enough good ideas that it has a reasonable chance of not being beaten to the punch by some other project. Otherwise it’s either chasing a fantasy or going after a real goal in a badly suboptimal way, and either way it’s not worth spending effort on influencing.
That still leaves plenty of room for disagreement with Eliezer, theoretically, but it narrows the search space enough that I’m not sure there are many talented ethical philosophers left in it whose views diverge significantly from the SIAI party line. There aren’t so many ethical philosophers in the world that a problem this specialized is going to attract very many of them.
As a corollary, I think it’s a good idea to downplay the SIAI applications of this site. Human rationality is a much broader topic than the kind of formal ethics that go into the SIAI’s work, and seems more likely to attract interesting and varied attention.
I’m curious whether, among those you saw leave, there were any you wish had stayed.
It seems to me that if one has a deep need for precise right answers, it’s hard to beat participating in the LW community.
Not especially; the disagreements never seem to resolve.
I occasionally go through phases where it amuses me to be heckled about being a deontologist; does that count? (I was at one time studying for a PhD in philosophy and would likely have concentrated in ethics if I’d stayed.)
I suppose? Though if I generalize from that, it seems that the answer to luke’s original question is “because not enough people enjoy being heckled.” Which, um… well, I guess that’s true, in some sense.