I’m a bit skeptical of using majority survey response to determine “morality.” After all, given a Bayesian probability problem (the exact problem involved patients and cancer tests, where the test has some chance of returning a false positive), most people will give the wrong answer, but we certainly don’t want our computers to make this kind of error. (A worked version of that calculation is sketched just after this comment.)
As to torture vs. dust specks: when I thought about it, I decided first that torture was unacceptable, and then tried to modify my utility function to make that answer come out (rounding the specks’ disutility to zero, and so on). I was rather appalled at myself to find that I had decided the answer in advance and then tried to make my utility function fit it. It felt an awful lot like rationalizing. I don’t know if everyone else is doing the same thing, but if you are, I urge you to reconsider. If we always go with what feels right, what’s the point of using utility functions at all?
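For concreteness, the kind of calculation that trips people up looks like the minimal Python sketch below. The figures (1% base rate, 80% sensitivity, 9.6% false-positive rate) are illustrative stand-ins, since the comment doesn’t give the actual numbers; the point is just that the posterior probability of cancer given a positive test comes out far lower than most survey respondents guess.

```python
# Bayes' theorem for the classic cancer-screening problem.
# The numbers below are illustrative, not taken from the original comment.
p_cancer = 0.01              # prior: fraction of patients who have cancer
p_pos_given_cancer = 0.8     # sensitivity: P(positive | cancer)
p_pos_given_healthy = 0.096  # false-positive rate: P(positive | no cancer)

# Total probability of a positive result, then Bayes' rule.
p_positive = (p_pos_given_cancer * p_cancer
              + p_pos_given_healthy * (1 - p_cancer))
p_cancer_given_positive = p_pos_given_cancer * p_cancer / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")  # ~0.078
```

With these numbers the correct answer is roughly 8%, whereas the typical intuitive answer is 70–80%: the base rate gets neglected.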
Morality may be the sort of thing that people are especially likely to get right. Specifically, morality may be a set of rules created, supported, and observed by virtually everyone. If so, then a majority survey response about morality may be much like a majority survey response about the rules of chess, restricted to avid chess players (i.e., that subset of the population which observes and supports the rules of chess as a nearly daily occurrence, just as virtually the whole of humanity observes and supports the rules of morality on a daily basis).
If you go to a chess tournament and ask the participants to demonstrate how the knight moves in chess, then (a) the vast majority will almost certainly give you the same answer, and (b) that answer will almost certainly be right.
One point could be to formalize our feelings about what is right.
As long as you take care not to overextend. Today my hypothesis is that moralities are sets of cached answers to game theory (possibly cached in our genes), and extending those rules beyond what they’re tested against is likely to lead to trouble.
Humans try hard to formalise their moralities, but that doesn’t make it a good idea per se. (On the other hand, it may require explanation as to why they do.)
Yes, part of an accurate description is identifying the boundary conditions within which that description applies, and applying it outside that boundary is asking for trouble. Agreed.
I don’t see how this is any different for folk morality than for folk physics, folk medicine, folk sociology, or any other aspect of human psychology.
For my own part, I find that formalizing my intuitions (moral and otherwise) is a useful step towards identifying the biases that those intuitions introduce into my thinking.
I also find that I want to formalize other people’s intuitions as a way of subverting the “tyranny of structurelessness”—that is, the dynamic whereby a structure that remains covert is thereby protected from attack and can operate without accountability. Moral intuitions are frequently used this way.
Oh yeah. My point—if I have a point, which I may or may not do—is that you can’t do it on the level of the morality itself and get good results, as that’s all cached derived results; you have to go to metamorality, i.e. game theory (at least), so as not to risk going over the edge into silliness. It’s possible this says nothing and adds up to normality, which is the “may not do” bit.
I’m currently reading back through abstruse game theory posts on LessWrong and particularly this truly marvellous book and realising just how damn useful this stuff is going to be in real life.
Free will as undiscoverability?
Oh!
(blink)
That’s actually a very good point. I endorse having it, should you ever do.
Looks like proper philosophers have been working through the notion since the 1970s. It would be annoying to have come up with a workable version of libertarianism.
Found a bit of popular science suggesting I’m on the right track about the origins. (I’m ignoring the Liberal/Conservative guff, which just detracts from the actual point and leads me to think less of the researcher.) I don’t want to actually have to buy a copy of this, but it looks along the right lines.
The implication—that overextending the generated rules without firmly checking against the generator’s reasons leads to trouble, and is in fact what often leads to trouble—is mine, but I’d hope it follows fairly obviously.
I’m hoping not to have to read the entirety of LessWrong (and I thought the sequences were long) before being able to be confident I have indeed had it :-)
May I particularly strongly recommend the Schelling book. Amazing. I’m getting useful results in such practical fields as dealing with four-year-olds and surly teenagers already.
Same here. I think Schelling’s book has helped me win at life more than all of LW did. That’s why I gave it such a glowing review :-)
Now you need to find a book that similarly pwns the field of dog training.
Awesome!
I also found “Don’t Shoot The Dog” very useful in those fields, incidentally.
“Every parent needs to learn the basics of one, avoiding a nuclear holocaust and two, dog training.”
Can we use folk physics and the development of physics as a model for the proper relationship between “folk ethics” and ethics?
In game theory, a stable solution such as a Nash equilibrium is not necessarily one that maximizes aggregate utility. A game-theoretic approach is for this reason probably at odds with a utilitarian approach to morality. If the game theory approach to morality is right, then utilitarianism is probably wrong.
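The standard illustration of that gap is the Prisoner’s Dilemma: mutual defection is the unique Nash equilibrium, yet mutual cooperation gives the higher aggregate payoff. Here is a minimal Python sketch (the payoff numbers are the conventional textbook ones, not anything from the thread):

```python
from itertools import product

# Prisoner's Dilemma payoffs (row player, column player); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
moves = ("C", "D")

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    a, b = profile
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in moves)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in moves)
    return row_ok and col_ok

equilibria = [p for p in product(moves, moves) if is_nash(p)]
best_total = max(product(moves, moves), key=lambda p: sum(payoffs[p]))

print("Nash equilibria:", equilibria)                 # [('D', 'D')] -> total utility 2
print("Max aggregate utility profile:", best_total)   # ('C', 'C') -> total utility 6
```

The equilibrium concept only requires that no player can gain by deviating unilaterally; it says nothing about the sum of utilities, which is exactly the tension with utilitarianism noted above.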