One point could be to formalize our feelings about what is right.
As long as you take care not to overextend. Today my hypothesis is that moralities are sets of cached answers to game theory (possibly cached in our genes), and extending those rules beyond what they’re tested against is likely to lead to trouble.
Humans try hard to formalise their moralities, but that doesn’t make it a good idea per se. (On the other hand, it may require explanation as to why they do.)
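To make the "cached answers" hypothesis concrete, here's a toy sketch in Python (my own construction, using the standard textbook Prisoner's Dilemma payoffs, not anything from actual moral psychology): tit-for-tat works as a cheap cached rule in the long iterated game it was "tested against", but carried unmodified into a one-shot game it just gets exploited.

```python
def payoff(me, them):
    """Standard Prisoner's Dilemma payoffs ('C' = cooperate, 'D' = defect)."""
    return {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}[(me, them)]

def tit_for_tat(opponent_history):
    """Cached rule: cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(rule_a, rule_b, rounds):
    """Return player A's total score over the given number of rounds."""
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        a, b = rule_a(hist_b), rule_b(hist_a)
        score_a += payoff(a, b)
        hist_a.append(a)
        hist_b.append(b)
    return score_a

# Inside the rule's tested domain (a long iterated game), it holds up:
print(play(tit_for_tat, tit_for_tat, 100))  # 300 -- stable mutual cooperation
# Overextended to a one-shot game, the same cached rule is exploited:
print(play(tit_for_tat, always_defect, 1))  # 0 -- the sucker's payoff
```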
Yes, part of an accurate description is identifying the boundary conditions within which that description applies, and applying it outside that boundary is asking for trouble. Agreed.
I don’t see how this is any different for folk morality than for folk physics, folk medicine, folk sociology, or any other aspect of human psychology.
For my own part, I find that formalizing my intuitions (moral and otherwise) is a useful step towards identifying the biases that those intuitions introduce into my thinking.
I also find that I want to formalize other people’s intuitions as a way of subverting the “tyranny of structurelessness”—that is, the dynamic whereby a structure that remains covert is thereby protected from attack and can operate without accountability. Moral intuitions are frequently used this way.
Oh yeah. My point—if I have a point, which I may or may not do—is that you can’t do it on the level of the morality itself and get good results, as that’s all cached derived results; you have to go to metamorality, i.e. game theory (at least), so as not to risk going over the edge into silliness. It’s possible this says nothing and adds up to normality, which is the “may not do” bit.
I’m currently reading back through abstruse game theory posts on LessWrong and particularly this truly marvellous book and realising just how damn useful this stuff is going to be in real life.
Free will as undiscoverability?
Oh!
(blink)
That’s actually a very good point. I endorse having it, should you ever do.
Looks like proper philosophers have been working through the notion since the 1970s. It would be annoying to have come up with a workable version of libertarianism (the free-will sort).
Found a bit of popular science suggesting I’m on the right track about the origins. (I’m ignoring the Liberal/Conservative guff, which just detracts from the actual point and leads me to think less of the researcher.) I don’t want to actually have to buy a copy of this, but it looks along the right lines.
The implication that overextending the generated rules without firmly checking them against the generator’s reasons leads to trouble—and is what often leads to trouble—is mine, but would, I’d hope, follow fairly obviously.
I’m hoping not to have to read the entirety of LessWrong (and I thought the sequences were long) before being able to be confident I have indeed had it :-)
May I particularly strongly recommend the Schelling book. Amazing. I’m getting useful results in such practical fields as dealing with four-year-olds and surly teenagers already.
Same here. I think Schelling’s book has helped me win at life more than all of LW did. That’s why I gave it such a glowing review :-)
Now you need to find a book that similarly pwns the field of dog training.
Awesome!
I also found “Don’t Shoot The Dog” very useful in those fields, incidentally.
“Every parent needs to learn the basics of one, avoiding a nuclear holocaust and two, dog training.”
Can we use folk physics and the development of physics as a model for the proper relationship between “folk ethics” and ethics?
In game theory, a stable solution such as a Nash equilibrium is not necessarily one that maximizes aggregate utility. A game-theoretic approach is for this reason probably at odds with a utilitarian approach to morality: if the game-theoretic account of morality is right, then utilitarianism is probably wrong.
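For a concrete instance (a minimal sketch in Python, again using the standard textbook Prisoner’s Dilemma payoffs): the unique Nash equilibrium is mutual defection, while the profile that maximizes aggregate utility is mutual cooperation.

```python
# Prisoner's Dilemma: PAYOFFS[my_move][their_move] = my payoff,
# with moves 0 = cooperate, 1 = defect (standard textbook numbers).
PAYOFFS = [
    [3, 0],  # I cooperate: 3 against a cooperator, 0 against a defector
    [5, 1],  # I defect:    5 against a cooperator, 1 against a defector
]

def is_nash(a, b):
    """True if neither player gains by unilaterally switching moves."""
    a_best = all(PAYOFFS[a][b] >= PAYOFFS[alt][b] for alt in (0, 1))
    b_best = all(PAYOFFS[b][a] >= PAYOFFS[alt][a] for alt in (0, 1))
    return a_best and b_best

profiles = [(a, b) for a in (0, 1) for b in (0, 1)]
nash = [p for p in profiles if is_nash(*p)]
best = max(profiles, key=lambda p: PAYOFFS[p[0]][p[1]] + PAYOFFS[p[1]][p[0]])

print("Nash equilibria:", nash)        # [(1, 1)] -- mutual defection
print("Max aggregate utility:", best)  # (0, 0)   -- mutual cooperation
```

The equilibrium leaves both players with 1 each, while the utilitarian optimum would give them 3 each; the stable outcome and the aggregate-utility-maximizing outcome come apart.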