-- Linus Pauling
Citation for this was hard; the closest I got was Etzioni’s 1962 The Hard Way to Peace, pg 110. There’s also a version in the 1998 Linus Pauling on peace: a scientist speaks out on humanism and world survival : writings and talks by Linus Pauling; this version goes
I have made a modern formulation of the Golden Rule: “Do unto others 20 percent better than you would be done by—the 20 percent is to correct for subjective error.”
Did you take “expect” to mean as in prediction, or as in what you would have them do, like the Jesus version?
How about doing unto others what maximizes total happiness, regardless of what they’d do unto you?
The former is computationally far more feasible.
By acting in a way that discourages them from hurting you, and encouraging them to help you, you are playing your part in maximizing total happiness.
Yeah, but it’s not necessarily the ideal way to act. Perhaps you should act generally better than that, or perhaps you should try to amplify it more. Do what you can to find out the optimal way to act. At least pay attention if you find new information. Don’t just make a guess and assume you’re correct.
You don’t think you should discourage others from hurting you? I think that seems sort of obvious. Now, if you could somehow give a person a strong incentive to help you / not hurt you, while simultaneously granting them a shitload of happiness, that seems ideal. This doesn’t really exclude that, it’s just on the positive side of doing / being done unto.
You should probably discourage others from hurting you. It’s just not clear how much.
As much as possible for the least amount of harm possible and the least amount of wasted time and resources, obviously. Which varies on a case by case basis.
I mean if it was practical, you’d give your friends 2 billion units of happiness, and then after turning the cheek to your enemies, grant them 1.9 billion units of happiness, but living on planet earth, giving you 80% of the crap you gave me seems about right.
Consider the consequences if everyone follows your rule. Assume someone gives you one unit of crap, possibly accidentally. You respond with 0.8 units. (It’s hard to measure this precisely, but for the sake of argument let’s assume that both of you manage to get it exactly right.) He, in turn, responds with a further 0.64 units of crap. You respond to this with 0.512 units.
This is, of course, an infinite geometric series. The end result (over an infinite time period) is that you receive 2 and 7/9 units of crap, while the other person receives 2 and 2/9 units of crap. He receives exactly 80% of the amount that you received, but you received over twice as much as you started out receiving.
If you return x% of the crap you get (for 0 < x < 100), and everyone else follows the same rule, then the total crap you receive for every starting unit of crap is 1 / (1 − (x/100)²).
This is clearly minimized at x = 0.
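To make the back-and-forth arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the thread; the function name and the round count are arbitrary) that sums the series numerically for a given retaliation fraction:

```python
# Sketch: total "crap" each party receives when both return a fraction r
# of whatever they are given, starting from one unit aimed at you.
def totals(r, rounds=10_000):
    """Return (you_receive, other_receives) after many back-and-forth rounds."""
    you, other = 0.0, 0.0
    hit = 1.0  # the initial unit of crap
    for _ in range(rounds):
        you += hit    # you receive the current hit
        hit *= r      # you return a fraction r of it
        other += hit  # the other person receives that
        hit *= r      # and returns a fraction r of it in turn
    return you, other

print(totals(0.8))  # ≈ (2.777..., 2.222...), i.e. 2 7/9 and 2 2/9 units
print(totals(0.0))  # (1.0, 0.0): never retaliating minimizes what you receive
```

With r = x/100 this matches the closed form 1 / (1 − r²) for the amount you receive, which shrinks toward 1 as x goes to 0.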
Alternatively: he could notice that he gave you 1 unit of crap and assume the 0.8 units of crap you gave him is an equal penalty.
If someone yells at you, you’re likely to respond—but if someone yells at you because you just pushed them, you’re less likely to respond.
Or he could know I was going to give him the 0.512 units, from prior experience, and not give 0.64, which is the whole point.
That assumes that he is following a different rule from the rule that you are following. Does knowing that he will give you the 0.64 units prevent you from giving him the 0.8 units?
Yes. Depending on the circumstance, I might give him much less or much more and/or choose a different course of action entirely.
Not necessarily. If I horribly torture Jim because Jim stepped on my toes, then I am not maximizing total happiness; the unhappiness given to Jim by the torture outweighs the unhappiness in me that is prevented by having no one step on my toes.
That’s a lot of effort and pain to prevent someone stepping on your toes.
Also, I’m not sure that’d be a terribly effective way to prevent harm to yourself. I mean, to the extent possible, once everyone knows you tortured Jim, people will be scared shitless to step on your toes, but Jim and Jim’s family are very likely to murder you, or at least sue you for all your money and put you in jail for a long time.
You are correct; it is not terribly effective. However, any disproportionate response to a minor, or even an imagined, slight will reduce total happiness while discouraging others from hurting me.
No. I just told you. Sometimes a disproportionate response encourages other people to hurt you. That’s actually part of the rule.
Doing unto others that which causes maximum total happiness leaves you vulnerable to Newcomb problems. You want to do unto others that which logically entails maximum total happiness. Under certain conditions, this is the same as Pauling’s recommendation.
I never mentioned causation. If you find a way to maximize it acausally, do that.
It has a tendency to go horribly wrong.
It’s impossible to find a strategy that produces happiness better than trying to produce happiness, since if you knew of one, you’d try to produce happiness by following that strategy. If this method is what works best, then in doing what works best, you’d follow this method.
Also, linking to TVTropes tends to fall under generalizing from fictional evidence.
Art imitates life. ;)
And it’s not hard to think of real life examples of atrocities “justified” on utilitarian grounds that the rest of the world thinks are anything but justifiable. The Reign of Terror during the French Revolution, for example, is generally regarded as having gone too far.
Would it help if the link were aimed at the real life section?
It has been deleted to prevent an edit war.
It’s a nice sentiment, but the optimization problem you suggest is usually intractable.
It’s better to at least attempt it than just find an easier problem and do that. You might have to rely on intuition and such to get any answer, but you’re not going to do well if you just find something easier to optimize.
Yes, but there’s no way a pithy quote is going to solve the problem for you. It might, however, contain a useful heuristic.
You may do that if you must; I recommend against it.
Why do you recommend against it? Do you have a more complicated utility function?
Most human utility functions give their own happiness more weight than others’. If you take into account that humans increase the happiness of others because it makes them happy, you could even say that human utility functions only care about the happiness of their corresponding humans—but that is close to a tautology (“the utility function cares about the utility of the agent only”).
That quote is really annoying because Jesus says the same thing way better, repeatedly, in the Sermon on the Mount.
Jesus used a clever quip to point out the importance of self-monitoring for illusory superiority?
Just read the Sermon on the Mount.
What’s the Jesus quote? (Or, I guess, one instance of it.)
On being nicer than you think you should be:
You’ve heard that it was said, “An eye in place of an eye, and a tooth in place of a tooth.” But I tell you, don’t oppose someone who is evil. Whoever slaps you on the right cheek, turn the other cheek to him as well; and to whoever wants to get a judgment against you and takes your shirt, give him your coat too; and if someone forces you to go one mile, go with him two.
(Matthew 5:38-41)
On self-superiority bias:
Why do you look at the speck that’s in your brother’s eye, and don’t notice the plank that’s in your own eye?
(Matthew 7:3)
I disagree with Will’s interpretation that the former follows from the latter.
Sorry, yeah, I was being a little too trollish. What I meant was that it was a single step implication from combining a few parts of the Sermon on the Mount; the examples you gave are likely indeed the two most representative ones for reaching that conclusion. Out of context I agree the mote and beam exhortation isn’t enough.
(Also, I disagree with your choice of translation, but that of course doesn’t matter in the scheme of things. Just felt that there was a 20% chance you’d care what I thought about that matter.)
No he didn’t. You are wrong about either the religious teaching you advocate or the thing that is being advocated in the grandparent.
Eh. What irritates me is his implicit claim that the ideas there are original or exclusive to Jesus.
With the amount of censorship, deliberate credit-stealing, and other failures of memetic replication in the ancient world, the chance is pretty slim that the earliest instance of a moral or religious idea you’ve heard of is actually its earliest invention. We should expect that the earliest extant sources for an idea do not correctly attribute its origin: there are so many more ways to be wrong than right, and the correct attribution of an idea (or an entertaining story, for that matter) is not under selection pressure to stay correct the way the idea itself is.
(Consider that until the 1853 discovery of the Epic of Gilgamesh, a European scholar might well have believed that the story of Noah’s Flood originated with the Bible. We similarly know that much of the mythos of Jesus echoes earlier salvific gods and demigods — Mithras, Dionysus, Osiris, etc. — whose cults were later suppressed as pagan.)
So thinking of the moral teachings of Jesus as originally Christian seems problematic. For instance, given the extensive contact between the Near East and India since the time of Alexander, it’s reasonable to consider some contact with ideas from Buddhism, Jainism, etc. — as well as the Greek (or Greco-Egyptian) philosophy more readily recognized by Christian sources.
My point here isn’t to say that Jesus was a Buddhist, of course — but rather that if we happen to observe what look like moral truths (or just moral good ideas) in one particular tradition, we shouldn’t take that tradition seriously when it claims to have discovered them or to possess unique access to them.
I don’t really care about credit for originality, just beautifulness and deepness of message. Linus’ take is ugly, whereas the Sermon is beautifully constructed. Just seems a shame not to go for the latter whenever possible.
Linus’s take fits my aesthetic better, and “beautiful” language is often unclear.
mote beam single step implications
What?
(Upvoted so someone can explain it without Karma cost.)
Downvoted because feeding Will when he is speaking this kind of pretentious drivel is precisely the kind of thing that the cost is intended to penalize. It is an example of the system working as it should!
(Note that my own earlier reply would be penalized if I made it now and that too would be a desirable outcome. If I was confident that Will’s claim about the Sermon on the Mount would be dismissed and downvoted as it has been then I would not have made a response.)
It is an example of the system working as it should!
Really, it’s an example of the system backfiring, causing someone to upvote a comment that deserved the downvoting it would probably otherwise have received.
That was my point.
What probability do you assign to someone with total karma less than 5 coming and translating this specific comment of Will_Newsome’s into intelligible speech? My estimate is: epsilon.
Breaking a rule, and explaining that it has to be done to provide an opportunity for something with epsilon probability and a very low value even if it happened… that’s just an example of a person deliberately breaking a rule, and signalling dissatisfaction with the rule.
People respond to incentives. Especially loss-related incentives. I do not give homeless people nickels even though I can afford to give a nearly arbitrary number of homeless people nickels. The set of people with karma less than five will be outright unable to reply—the set of people with karma greater than five will just be disincentivized, and that’s still something.
The prior probability of someone being able to explain Will_Newsome’s negative-value comments in a way that provides value for LW readers is already epsilon. Even without the disincentives.
I think that people responding less to intentionally meaningless comments is a good thing. Therefore, trivial disincentives for doing this are a good thing. Therefore, removing them in this specific situation is a bad thing.
Will is referring to Matthew 7:1-5.
Don’t judge others, and you won’t be judged. For whatever standard you use to judge others will be used to judge you, and whatever measurement you use to measure others will be used to measure you. Why do you look at the speck that’s in your brother’s eye, and don’t notice the plank that’s in your own eye? How are you going to say to your brother, “Let me take out the speck from your eye” when you have a plank in your own eye? You hypocrite, first get rid of the plank in your own eye, and then you’ll see clearly to take out the speck from your brother’s eye.
This claims that people underestimate their flaws relative to others’. Will claims that the obvious implication is that one must judge others more leniently to compensate, rather than refraining from judgement entirely as said two sentences earlier.
There’s also a suggestion of projection there. Having discovered that I have some flaw (say, anger; or baseless faith), I may go about finding the same fault in others — but if I correct the flaw in myself first, the world may look different. The one with road rage drives on a highway populated by assholes and maniacs; the creationist accuses Darwinism of “being a religion”.
It is, the way they’re trying to use that word. Also is intelligent design a type of creationism? ’Cuz I think I like ID, at least more than the standard model. I’d like to think of myself as a human in the reference class “creationist who accuses Darwinism of being a religion”.
Someone who claims that faith is a good thing should not also use it as an accusation of impropriety.
The creationist does not claim — before cowans, gentiles, and the unwashed — that Darwinism is the wrong religion; rather, he claims that it is “a religion” as if to say that this is condemnation enough. To fellow creationists he may well say that Darwinism is Satanism, or a rival tribe to be vanquished by force or deception. But he does not expect that argument to fly with outsiders. With them he merely asserts that the (straw-)Darwinist is a hypocrite, a know-it-all elitist nerd who commits the grave faux-pas of mistaking his religion for science.
Meanwhile the sociologist of religion wonders where the temples of Darwin are. The strong-programme sociologist of science (who uses the methodological assumption that science doesn’t work, even as he posts on the Internet!) can mistake a laboratory for a center of ritual, but one who has studied comparative religion does not see worship happening in the microscope, the genomics software, or the fMRI.
Someone who claims that faith is a good thing should not also use it as an accusation of impropriety.
I get the impression that that argument is used more to undermine claims that darwinism is a science than anything else.
Physics is a clear science; you can use the right equations and predict the motion of the Earth about the Sun, or the time a barometer will take to fall from a given height. This gives it a certain degree of credibility. The theory of evolution (and how the creationists love to remind everyone of that word, ‘theory’!) is also science; but they would deny it, on the basis that accepting it suggests that it is as credible as physics or mathematics. If they insist that darwinism is a religion, then both alternatives start from the same basis of credibility; the creationists can then point out, quite accurately, that their version is older and has been around for longer, and therefore at least claim seniority.
There’s a short story by Asimov that gives a very nice view of the whole argument.
That is a quintessentially Asimovian story. +1.
Meanwhile the sociologist of religion wonders where the temples of Darwin are.
Remember that Darwinism is a lot more than biology. Sure, a computer isn’t exactly an altar. That doesn’t change that most of what universities are famous for in the wider world is their ideology.
Eh that’s sort of a less charitable reading of me than you could have given. But I suppose you’ve already walked with me 1.2 miles, and it’d be a stretch for me to ask for .8 more. ;)
One way we say it here is to be cautious of other-optimizing …
Though sadly sometimes the only alternative is no optimization at all.
Yes, but more frequently than that, the only alternative appears to be no optimization at all. Hence the heuristic.