What scares me is that people say EY’s position is “plainly false” so rarely. Even if EY is almost always right, you would still expect a huge number of people to say that his positions are plainly false, especially when talking about such difficult and debated questions as those of philosophy and predicting the future.
What scares me is how often people express this concern relative to how often people actually agree with EY. Eliezer's beliefs and assertions take an absolute hammering. I agree with him fairly often; no surprise, since he is intelligent, has a cognitive style similar to mine, and has spent a whole lot of time thinking. But I disagree with him vocally whenever he seems wrong. I am far from the only person who does so.
If the topics are genuinely difficult, I don't think it's likely that many people who understand them would argue that Eliezer's points are plainly false. Occasionally someone drops in to argue as much who clearly doesn't have a very good understanding of rationality or of the subject material. People do disagree with Eliezer for more substantive reasons with some frequency, but I don't find it particularly worrying that they rarely pronounce him obviously wrong.
Most of the people who are most likely to think that EY’s positions on things are plainly false probably don’t bother registering here to say so.
There's one IRC channel populated by smart CS / math majors, where I drop LW links every now and then. Pretty frequently the links meet a rather critical reception, but while those people are happy to tear them apart on IRC, they have little reason to come to LW and explain in detail why they disagree.
(Of the things they disagree on, I mainly recall that they consider Eliezer's treatment of frequentism / Bayesianism something of a straw man: there's no particular reason to paint the two as drastically differing camps when real statisticians are happy to use methods drawn from both.)
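To make concrete what "methods drawn from both" looks like in practice, here is a minimal sketch (the coin-flip data are invented for illustration) in which a frequentist confidence interval and a Bayesian credible interval for the same unknown nearly coincide:

```python
# A toy comparison of the two toolkits on the same question: what is the
# bias p of a coin, given (invented) data of 60 heads in 100 flips?
from scipy.stats import beta, norm

k, n = 60, 100  # hypothetical data: number of heads, number of flips

# Frequentist: maximum-likelihood estimate plus a 95% Wald confidence interval.
p_hat = k / n
half_width = norm.ppf(0.975) * (p_hat * (1 - p_hat) / n) ** 0.5
print(f"frequentist: {p_hat:.3f}  95% CI  ({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")

# Bayesian: a flat Beta(1, 1) prior updated on the same data gives a
# Beta(k + 1, n - k + 1) posterior; report its mean and a 95% central
# credible interval.
posterior = beta(k + 1, n - k + 1)
print(f"bayesian:    {posterior.mean():.3f}  95% CrI ({posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f})")
```

With a flat prior and this much data the two intervals agree closely; what separates the schools is how the resulting interval is interpreted, not the arithmetic that produces it.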
they consider Eliezer's treatment of frequentism / Bayesianism something of a straw man: there's no particular reason to paint the two as drastically differing camps when real statisticians are happy to use methods drawn from both.

In that case, we got very different impressions of how Eliezer described the two camps; here is what I heard: <channel righteous fury of Eliezer's pure Bayesian soul>
It's not Bayesians on the one hand and frequentists on the other, each despising the other's methods. Rather, it's a small group of epistemic statisticians and a large majority of instrumentalist ones.
The epistemic camp is the small band of AI researchers using statistical models to represent probability so as to design intelligence, learning, and autonomy. The idea is that ideal models are provably Bayesian, and the task undertaken is to understand and implement close approximations of them.
The instrumentalist mainstream doesn't always claim that it's representing probability, and doesn't feel lost without that kind of philosophical underpinning. Instrumentalists hound whatever problem is at hand with all the statistical models and variables they can muster to get the curve, isolated variable, or whatever else they're looking for and think is best. The most important part of an instrumentalist model is the statistician him- or herself, who does the Bayesian updating adequately and without the need for understanding.
</channel righteous fury of Eliezer's pure Bayesian soul>
Saying that the division is a straw man because most statisticians use all methods misses the point.
Edit: see for example here and here.
Most of the people who are most likely to think that EY's positions on things are plainly false probably don't bother registering here to say so.

True, but I still wouldn't expect sharp disagreement with Eliezer to be so rare. One contributing factor may be that Eliezer at least appears to be so confident in so many of his positions, and does not put many words of uncertainty into his writing about theoretical issues.
When I first found this site, I read through all the OB posts chronologically, rather than reading the Sequences as sequences. So I got to see the history of several commenters, many of whom disagreed sharply with EY, with their disagreement evolving over several posts.
They tend to wander off after a while. Which is not surprising, as there is very little reward for it.
So I guess I’d ask this a different way: if you were an ethical philosopher whose positions disagreed with EY, what in this community would encourage you to post (or comment) about your disagreements?
The presence of a large, sharp, and serious audience. The disadvantage, of course, is that the audience tends not to already be familiar with standard philosophical jargon.
By contrast, at a typical philosophy blog, you can share your ideas with an audience that already knows the jargon, and is also sharp and serious. The disadvantage, of course, is that at the typical philosophy blog, the audience is not large.
These considerations suggest that a philosopher might wish to produce his own competing meta-ethics sequence here if he were in the early stages of writing a semi-popular book on his ideas. He might be less inclined to do so if he only wants to present to trained philosophers.
(nods) That makes sense.
if you were an ethical philosopher whose positions disagreed with EY, what in this community would encourage you to post (or comment) about your disagreements?

Caring about the future of humanity, I suppose, and thinking that SIAI's choices may have a big impact on the future of humanity. That's what motivates me to post my disagreements: in an effort to figure out what's correct.
Unfortunately, believing that the SIAI is likely to have a significant impact on the future of humanity already implies accepting many of its core claims: that the intelligence-explosion view of the singularity is accurate, that special measures to produce a Friendly AI are feasible and necessary, and that the SIAI has enough good ideas to have a reasonable chance of not being beaten to the punch by some other project. Otherwise it's either chasing a fantasy or pursuing a real goal in a badly suboptimal way, and either way it's not worth spending effort on influencing.
That still leaves plenty of room for disagreement with Eliezer, theoretically, but it narrows the search space enough that I’m not sure there are many talented ethical philosophers left in it whose views diverge significantly from the SIAI party line. There aren’t so many ethical philosophers in the world that a problem this specialized is going to attract very many of them.
As a corollary, I think it’s a good idea to downplay the SIAI applications of this site. Human rationality is a much broader topic than the kind of formal ethics that go into the SIAI’s work, and seems more likely to attract interesting and varied attention.
I'm curious whether, among those you saw leave, there were any who you wish had stayed.
It seems to me that if one has a deep need for precise right answers, it's hard to beat participating in the LW community.
Not especially; the disagreements never seem to resolve.
if you were an ethical philosopher whose positions disagreed with EY, what in this community would encourage you to post (or comment) about your disagreements?

I occasionally go through phases where it amuses me to be heckled about being a deontologist; does that count? (I was at one time studying for a PhD in philosophy and would likely have concentrated in ethics if I'd stayed.)
I suppose? Though if I generalize from that, it seems that the answer to luke’s original question is “because not enough people enjoy being heckled.” Which, um… well, I guess that’s true, in some sense.