I’ve occasionally seen lists of people’s favorite Sequences articles or similar, but is there any inverse? Articles or parts of the Sequences on LessWrong which contain errors, or which are probably misleading or poorly written, that anyone would like to point to?
I understand that the quantum physics sequence is controversial even within LessWrong.
Generally, though, all of the sequences could benefit from annotations.
Apparently the metaethics sequence confused everyone.
I definitely didn’t get it the first time I read it, but currently I think it’s quite good. Maybe it’s written in a way that’s confusing if you don’t already know the punchline (or maybe metaethics confuses people).
I know the punchline—CEV. To me, it seemed to belabour points that felt obvious, while skipping over, or treating as obvious, points that are really confusing.
Regardless of whether CEV is the correct ethical system, it seems to me that CEV or CV is a reasonably good Schelling point, so that could be a good argument to accept it on pragmatic grounds.
How could it be a Schelling point when no one has any idea what it is?
I meant ‘program the FAI to calculate CEV’ might be a reasonably good Schelling point for FAI design. I wasn’t suggesting that you or I could calculate it to inform everyday ethics.
Um, doesn’t the same objection apply?
How could programming the FAI to calculate CEV be a Schelling point when no one has any idea what CEV is? It’s not just that we don’t know how to calculate it; we have no good idea what it is.
It’s, you know, human values.
My impression is that the optimistic idea is that people have broadly similar, or at least compatible, fundamental values, and that if people disagree strongly in the present, this is due to misunderstandings which would be extrapolated away. We all hold values like love, beauty and freedom, so the future would hold these values.
I can think of various pessimistic outcomes, such as: one of the most fundamental values turns out to be the desire not to be ruled over by an AI, so the AI immediately turns itself off; or status games make fulfilling everyone’s values impossible.
Anyway, since I’ve heard a lot about CEV (on LW), and empathic AI (when FAI is discussed outside LW) and little about any other idea for FAI, it seems that CEV is a Schelling point, regardless of whether or not it should be.
Personally, I’m surprised I haven’t heard more about a ‘Libertarian FAI’ that implements each person’s volition separately, as long as it doesn’t non-consensually affect anyone else. Admittedly, there are problems involving, for instance, what limits should be placed on people creating sentient beings to prevent contrived infinite torture scenarios, but I would have thought that, given the libertarian bent of transhumanists, someone would be advocating this sort of idea.
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.
But even ignoring this, CEV is just too vague to be a Schelling point. It’s essentially defined as “all of what’s good and none of what’s bad” which is suspiciously close to the definition of God in some theologies. Human values are simply not that consistent—which is why there is an “E” that allows unlimited handwaving.
I realise that it’s not a function of what I know; what I meant is that, given that I have heard a lot about CEV, it seems that a lot of people support it.
Still, I think I am using ‘Schelling point’ wrongly here—what I mean is that maybe CEV is something people could agree on with communication, like a point of compromise.
Do you think that it is impossible for an FAI to implement CEV?
A Schelling point, as I understand it, is a choice that has value only because of the network effect. It is not “the best” by some criterion, it’s not a compromise, in some sense it’s an irrational choice from equal candidates—it’s just that people’s minds are drawn to it.
In particular, a Schelling point is not something you agree on—in fact, it’s something you do NOT agree on (beforehand) :-)
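To make the ‘not something you agree on beforehand’ part concrete, here is a toy sketch (my own illustration in Python, not anything from the thread): several people each pick a meeting spot without communicating. Picking uniformly at random almost never gets everyone to the same place, but if everyone’s mind is drawn to the same salient default, coordination is automatic, even though that spot is no better as a meeting place than any other.

```python
import random

# Toy illustration of a focal (Schelling) point: the "salient" spot wins only
# because each person expects everyone else to be drawn to it, not because it
# is a better meeting place. The spot names here are arbitrary.
SPOTS = ["train station", "park", "museum", "cafe", "harbour"]
SALIENT = "train station"  # the focal point

def pick(strategy: str) -> str:
    """Pick a spot at random, or go to the salient default."""
    return SALIENT if strategy == "focal" else random.choice(SPOTS)

def coordination_rate(strategy: str, n_people: int = 5, trials: int = 10_000) -> float:
    """Fraction of trials in which everyone independently ends up at the same spot."""
    hits = 0
    for _ in range(trials):
        choices = {pick(strategy) for _ in range(n_people)}
        hits += (len(choices) == 1)
    return hits / trials

print("random:", coordination_rate("random"))  # roughly (1/5)**4 = 0.0016
print("focal: ", coordination_rate("focal"))   # 1.0
```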
I don’t know what CEV is. I suspect it’s an impossible construct. It came into being as a solution to a problem EY ran his face into, but I don’t consider it satisfactory.
Hmm, that’s not what I think is the punchline :P I think it’s something like “your morality is an idealized version of the computation you use to make moral decisions.”
Really? That seems almost tautological to me, and about as helpful as ‘do what is right’.
Well, perhaps the controversy is that that’s it. That it’s okay that there’s no external morality and no universally compelling moral arguments, and that we can and should act morally in what turns out to be a fairly ordinary way, even though what we mean by “should” and “morally” depends on ourselves.
It all adds up to normality, and don’t worry about it.
See, I can sum up an entire sequence in one sentence!
This also doesn’t seem like the most original idea; in fact, I think this “you create your own values” notion is the central idea of existentialism.
http://lesswrong.com/lw/i5/bayesian_judo/
This one where Eliezer seems to be bragging about using the Chewbacca defense.
That’s not the Chewbacca Defense. It’s going on the offense against something he disagrees with by pointing out its implications. The Aumann bit is just throwing his hands up in the air.
The Aumann bit is him quoting something which doesn’t actually prove what he’s quoting it to prove, but which he knows his opponent can’t refute because he’s never heard of it. It isn’t him throwing his hands up in the air—it’s an argument, just a fallacious one.
Both times he used it, he’s giving up on getting somewhere and is just screwing with the guy; it’s not part of his main argument.
The first time, he’s trying to stop him from weaseling out. Also, Aumann’s theorem doesn’t mean that, taken in its literal form. But it applies indirectly, aspirationally: try to be rational and try to share relevant information, etc., so as to approximate the conditions under which it would apply (there’s a toy sketch of the ‘pool your evidence’ version below). Indeed, the most reasonable interpretation of the other person’s suggestion to agree to disagree is that they both stop trying to be more right than they are (because it’s uncomfortable, opens a can of worms, etc.). That’s the opposite of the rationalist approach, and going against that is exactly how he used it: ‘if they disagree, someone is doing something wrong’ is not very wrong.
The second time, it’s just ‘Screw this, I’m out of here’.
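On the Aumann point above: here is a toy sketch (my own example, assuming two Bayesian agents with a shared Beta(1, 1) prior over a coin’s bias; this is only a loose stand-in for Aumann’s actual common-knowledge conditions). While each agent keeps their own coin flips private, their posteriors differ, but once they pool the relevant evidence they necessarily end up with the same posterior.

```python
from fractions import Fraction

# Two Bayesian agents share a Beta(1, 1) prior over a coin's bias (a toy
# stand-in for Aumann's common-prior assumption). Each privately observes
# some flips; pooling the evidence forces their posteriors to coincide.

def beta_posterior_mean(heads: int, tails: int, a: int = 1, b: int = 1) -> Fraction:
    """Posterior mean of the coin's bias under a Beta(a, b) prior."""
    return Fraction(a + heads, a + b + heads + tails)

# Hypothetical private evidence: A saw 8 heads / 2 tails, B saw 3 heads / 7 tails.
a_heads, a_tails = 8, 2
b_heads, b_tails = 3, 7

print(beta_posterior_mean(a_heads, a_tails))   # 3/4, A's posterior mean alone
print(beta_posterior_mean(b_heads, b_tails))   # 1/3, B's posterior mean alone

# After sharing all relevant information, both condition on the same data.
print(beta_posterior_mean(a_heads + b_heads, a_tails + b_tails))  # 6/11 for both
```

The actual theorem needs less than full evidence sharing (common knowledge of the posteriors themselves is enough), but sharing the evidence is the practical way to approximate its conditions.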
It’s worded like an argument. And the other guy and the bystanders would, when listening to it, believe that Eliezer had made an argument that nobody was able to refute. The impact of Eliezer’s words depends on deceiving them into thinking it is, and was intended as, a valid argument.
In one sense this is a matter of semantics. If you knowingly state something that sounds like an argument, but is fallacious, for the purposes of tricking someone, does that count as “making a bad argument” (in which case Eliezer is using the Chewbacca Defense) or “not making an argument at all” (in which case he isn’t)?