Richard,
You’re speaking my language, thanks! I hope this is EY’s view, because I know what this means. Maybe now I can go back and read EY’s sequence in light of this interpretation and it will make more sense to me.
EY’s theory as presented above makes me suspicious that making basic evaluative moral terms rigid designators is a kind of ‘trick’ which, though perhaps not intended, very easily carries the common absolutist connotations of those terms into contexts where, on EY’s usage, they no longer apply.
At the moment, I’m not so worried about objection (1), but objections (2) and (3) are close to what bother me about EY’s theory, especially if this is foundational for EY’s thinking about how we ought to be designing a Friendly AI. If we’re working on a project as important as Friendly AI, it becomes an urgent problem to get our meta-ethics right, and I’m not sure Eliezer has done it yet. Which is why we need more minds working on this problem. I hope to be one of those minds, even if my current meta-ethics turns out to be wrong (I’ve held my current meta-ethics for under 2 years, anyway, and it has shifted slightly since adoption).
But at the moment it remains plausible to me that Eliezer is right and I just don’t yet see why. Eliezer is a very smart guy who has invested a lot of energy into training himself to think straight about things and to respond to criticism either with an adequate counterargument or by dropping the criticized belief.
Maybe; I can’t say I’ve noticed that so much myself -- e.g. he just disappeared from this discussion when I refuted his assumptions about philosophy of language (that underpin his objection to zombies), but I haven’t seen him retract his claim that zombies are demonstrably incoherent.
Clearly, from his standpoint a lot of the things you believed were confused, and he decided against continuing to argue. That is a statement about his willingness to keep engaging when someone is wrong on the Internet and disagreement persists, not external evidence about correctness (as distinct from your own estimate of the correctness of your opponent’s position).
You think that “clearly” Eliezer believed many of Richard’s beliefs were confused. Which beliefs, do you think?
I won’t actually argue, just list some things that seem to be points where Richard talks past the intended meaning of the posts (the statements may be technically accurate in themselves, under the meaning Richard intended, but that meaning is not what the posts referred to). Link to the post for convenience.
“premise that words refer to whatever generally causes us to utter them”: There is a particular sense of “refer” in which we can trace the causal history of words being uttered.
“It’s worth highlighting that this premise can’t be right, for we can talk about things that do not causally affect us.”: Yes, we can consider other senses of “refer” and make the discussion less precise, but those are not the senses being used.
“We know perfectly well what we mean by the term ‘phenomenal consciousness’.”: Far from “perfectly well”.
“We most certainly do not just mean ‘whatever fills the role of causing me to make such-and-such utterances’”: Maybe we don’t reason that way, but it’s one tool for seeing what we actually mean, even if it explores that meaning in a different sense from the one informally used (as a way of dissolving a potentially wrong question).
“No, the example of unicorns is merely to show that we can talk about non-causally related things.”: We can think/talk about ideas that cause us to think/talk about them in certain ways, and in this way the meaning of the idea (the set of properties our minds see in it) causally influences the uttering of words about it. Whether what the idea refers to causally influences us in other ways is irrelevant. On the other hand, if it’s claimed that the idea is about the world (and not an abstract logical fact unrelated to the world), there must be a pattern (event) of past observations that would cause the idea to be evaluated as “correct”, and alternative observations that would cause it to be evaluated as “wrong” (or a quantitative version of that). If that’s not possible, then the idea can’t be about our world.
This is the first time I’ve seen anyone tell EY that what he wrote is plainly false.
I made a personal list of top frequently-cited-yet-irritatingly-misleading EY posts:
http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/
http://lesswrong.com/lw/iv/the_futility_of_emergence/
http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/
http://lesswrong.com/lw/rl/the_psychological_unity_of_humankind/
http://lesswrong.com/lw/y3/value_is_fragile/
I agree that the first one of those is bad.
Yes, if you’re talking about corporations, you cannot use exactly the same math as you do if you’re talking about evolutionary biology. But there are still some similarities that make it useful to know things about how selection works in evolutionary biology. Eliezer seems to be saying that if you want to call something “evolution”, then it has to meet these strictly-chosen criteria that he’ll tell you. But pretty much the only justification he offers is “if it doesn’t meet these criteria, then Price’s equation doesn’t apply” (the equation is quoted after this comment for reference), and I don’t see why “evolution” would need to be strictly defined as “those processes which behave in a way specified by Price’s equation”. It can still be a useful analogy.
The rest are fine in my eyes, though the argument in The Psychological Unity of Humankind seems rather overstated for several reasons.
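For readers who don’t have it to hand, this is Price’s equation in its standard textbook form; it is the result the linked post leans on, and nothing here is specific to that post:

```latex
% Price's equation. \bar{w} is mean fitness, \Delta\bar{z} the change in the
% population-mean value of a trait, w_i the fitness of entity i, z_i that
% entity's trait value, and \Delta z_i the parent-offspring change in the trait.
\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}\!\left(w_i\,\Delta z_i\right)
```

The covariance term captures selection and the expectation term captures transmission bias; the disagreement above is over whether “evolution” should be reserved for processes to which this identity usefully applies.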
FWIW, cultural evolution is not an analogy. Culture literally evolves—via differential reproductive success of memes...
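As a minimal sketch of what “differential reproductive success of memes” can mean, here is a toy simulation in which people copy memes from one another and more transmissible memes spread; the meme names and transmissibility numbers are invented purely for illustration:

```python
import random

# Toy model: each person holds one meme; each generation they hear a meme
# from a randomly chosen person and adopt it with probability equal to that
# meme's transmissibility.  Names and numbers below are made up.
TRANSMISSIBILITY = {"catchy_tune": 0.9, "chain_letter": 0.6, "dry_lecture": 0.3}

def step(population):
    new_population = []
    for current in population:
        heard = random.choice(population)
        if random.random() < TRANSMISSIBILITY[heard]:
            new_population.append(heard)    # the heard meme is adopted
        else:
            new_population.append(current)  # the old meme is retained
    return new_population

population = [random.choice(list(TRANSMISSIBILITY)) for _ in range(300)]
for _ in range(50):
    population = step(population)

print({meme: population.count(meme) for meme in TRANSMISSIBILITY})
# Typically the most transmissible meme ends up dominating the population.
```

Differential copying success plays the role that differential reproductive success plays in biology, which is the sense in which the evolution is literal rather than analogical.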
Do you have recommendations for people/books that take this perspective seriously and then go on to explore interesting things with it? I haven’t seen anyone include the memetic perspective as part of their everyday worldview besides some folk at SIAI and yourself, which I find pretty sad.
Also, I get the impression you have off-kilter-compared-to-LW views on evolutionary biology, though I don’t remember any concrete examples. Do you have links to somewhere where I could learn more about what phenomena/perspectives you think aren’t emphasized or what not?
My current project is a book on memetics. I also have a blog on memetics.
Probably the best existing book on the topic is The Meme Machine by Susan Blackmore.
I also maintain some memetics links, some memetics references, a memetics glossary—and I have a bunch of memetics videos.
In academia, memetics is typically called “cultural evolution”. Probably the best book on that is “Not by Genes Alone”.
Your “evolutionary biology” question is rather vague. The nearest thing that springs to mind is this. Common views on that topic around here are more along the lines expressed in The Robot’s Rebellion. If I am in a good mood, I describe such views as “lacking family values”; if I am not, they get likened to a “culture of death”.
Wow, thanks! Glad I asked. I will start a tab explosion.
Really? That’s kind of scary...
His response to it, or that it’s done so infrequently?
I for one am less worried the less often he writes things that are plainly false, so his being called out rarely doesn’t strike me as a cause for concern.
What scares me is that people say EY’s position is “plainly false” so rarely. Even if EY is almost always right, you would still expect a huge number of people to say that his positions are plainly false, especially when talking about such difficult and debated questions as those of philosophy and predicting the future.
What scares me is how often people express this concern relative to how often people actually agree with EY. Eliezer’s beliefs and assertions take an absolute hammering. I agree with him fairly often, which is no surprise: he is intelligent, has a cognitive style similar to mine, and has spent a whole lot of time thinking. But I disagree with him vocally whenever he seems wrong. I am far from the only person who does so.
If the topics are genuinely difficult, I don’t think it’s likely that many people who understand them would argue that Eliezer’s points are plainly false. Occasionally people drop in to argue such who clearly don’t have a very good understanding of rationality or the subject material. People do disagree with Eliezer for more substantive reasons with some frequency, but I don’t find the fact that they rarely pronounce him to be obviously wrong particularly worrying.
Most of the people who are most likely to think that EY’s positions on things are plainly false probably don’t bother registering here to say so.
There’s one IRC channel populated with smart CS / math majors, where I drop LW links every now and then. Pretty frequently they’re met with a rather critical reception, but while those people are happy to tear them apart on IRC, they have little reason to bother to come to LW and explain in detail why they disagree.
(Of the things they disagree on, I mainly recall that they consider Eliezer’s treatment of frequentism / Bayesianism as something of a strawman and that there’s no particular reason to paint them as two drastically differing camps when real statisticians are happy with using methods drawn from both.)
In that case, we got very different impressions about how Eliezer described the two camps; here is what I heard: <channel righteous fury of Eliezer’s pure Bayesian soul>
It’s not Bayesian users on the one hand and Frequentists on the other, each despising the others’ methods. Rather, it’s the small group of epistemic statisticians and a large majority of instrumentalist ones.
The epistemics are the small band of AI researchers using statistical models to represent probability so as to design intelligence, learning, and autonomy. The idea is that ideal models are provably Bayesian, and the task undertaken is to understand and implement close approximations of them.
The instrumentalist mainstream doesn’t always claim that it’s representing probability and doesn’t feel lost without that kind of philosophical underpinning. Instrumentalists hound whatever problem is at hand with all the statistical models and variables they can muster to get the curve or isolated variable they’re looking for and think is best. The most important part of instrumentalist models is the statistician him or herself, who does the Bayesian updating adequately and without the need for understanding. </channel righteous fury of Eliezer’s pure Bayesian soul>
Saying that the division is a straw man because most statisticians use all methods misses the point.
Edit: see for example here and here.
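To make the contrast concrete, here is a deliberately small sketch of both styles applied to the same coin-flip data; the data, the uniform prior, and the normal-approximation interval are choices made purely for illustration and are not taken from the discussion above:

```python
import math

# Toy data: 7 heads in 10 flips (made up for illustration).
heads, flips = 7, 10
tails = flips - heads

# Frequentist-style treatment: maximum-likelihood point estimate of the coin's
# bias plus a 95% normal-approximation (Wald) confidence interval.
p_hat = heads / flips
se = math.sqrt(p_hat * (1 - p_hat) / flips)
wald_ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian-style treatment: a uniform Beta(1, 1) prior over the bias, updated
# on the data to a Beta(1 + heads, 1 + tails) posterior, evaluated on a grid
# so that nothing beyond the standard library is needed.
grid = [i / 1000 for i in range(1, 1000)]
weights = [p ** heads * (1 - p) ** tails for p in grid]
total = sum(weights)
posterior = [w / total for w in weights]
posterior_mean = sum(p * w for p, w in zip(grid, posterior))

def quantile(q):
    """Smallest grid point at which the posterior CDF reaches q."""
    cumulative = 0.0
    for p, w in zip(grid, posterior):
        cumulative += w
        if cumulative >= q:
            return p
    return grid[-1]

credible = (quantile(0.025), quantile(0.975))

print(f"frequentist: estimate={p_hat:.2f}, 95% CI=({wald_ci[0]:.2f}, {wald_ci[1]:.2f})")
print(f"bayesian: posterior mean={posterior_mean:.2f}, 95% credible=({credible[0]:.2f}, {credible[1]:.2f})")
```

On toy data the two treatments give similar numbers, which fits the point above: the camps are not distinguished by which of these calculations they are willing to run, but by the epistemic framing behind them.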
True, but I still wouldn’t expect sharp disagreement with Eliezer to be so rare. One contributing factor may be that Eliezer at least appears to be so confident in so many of his positions, and does not put many words of uncertainty into his writing about theoretical issues.
When I first found this site, I read through all the OB posts chronologically, rather than reading the Sequences as sequences. So I got to see the history of several commenters, many of whom disagreed sharply with EY, with their disagreement evolving over several posts.
They tend to wander off after a while. Which is not surprising, as there is very little reward for it.
So I guess I’d ask this a different way: if you were an ethical philosopher whose positions disagreed with EY, what in this community would encourage you to post (or comment) about your disagreements?
The presence of a large, sharp, and serious audience. The disadvantage, of course, is that the audience tends not to already be familiar with standard philosophical jargon.
By contrast, at a typical philosophy blog, you can share your ideas with an audience that already knows the jargon, and is also sharp and serious. The disadvantage, of course, is that at the typical philosophy blog, the audience is not large.
These considerations suggest that a philosopher might wish to produce his own competing meta-ethics sequence here if he were in the early stages of writing a semi-popular book on his ideas. He might be less inclined to do so if he is interested only in presenting to trained philosophers.
(nods) That makes sense.
Caring about the future of humanity, I suppose, and thinking that SIAI’s choices may have a big impact on the future of humanity. That’s what motivates me to post my disagreements—in an effort to figure out what’s correct.
Unfortunately, believing that the SIAI is likely to have a significant impact on the future of humanity already implies accepting many of its core claims: that the intelligence-explosion view of the singularity is accurate, that special measures to produce a friendly AI are feasible and necessary, and that the SIAI has enough good ideas that it has a reasonable chance of not being beaten to the punch by some other project. Otherwise it’s either chasing a fantasy or going after a real goal in a badly suboptimal way, and either way it’s not worth spending effort on influencing.
That still leaves plenty of room for disagreement with Eliezer, theoretically, but it narrows the search space enough that I’m not sure there are many talented ethical philosophers left in it whose views diverge significantly from the SIAI party line. There aren’t so many ethical philosophers in the world that a problem this specialized is going to attract very many of them.
As a corollary, I think it’s a good idea to downplay the SIAI applications of this site. Human rationality is a much broader topic than the kind of formal ethics that go into the SIAI’s work, and seems more likely to attract interesting and varied attention.
I’m curious whether, among those you saw leave, there were any you wish had stayed.
It seems to me that if one has a deep need for precise right answers, it’s hard to beat participating in the LW community.
Not especially; the disagreements never seem to resolve.
I occasionally go through phases where it amuses me to be heckled about being a deontologist; does that count? (I was at one time studying for a PhD in philosophy and would likely have concentrated in ethics if I’d stayed.)
I suppose? Though if I generalize from that, it seems that the answer to luke’s original question is “because not enough people enjoy being heckled.” Which, um… well, I guess that’s true, in some sense.