Upvoted for lucidity, but Empathetic Metaethics sounds more like the whole rest of LessWrong than metaethics specifically.
If there are supposed to be any additional connotations to Empathetic Metaethics, that would make me very wary. I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth. I always assumed this site is called LessWrong because it generally tries to avoid driving readers to any particular conclusion, but simply away from misguided ones, so they can make their own decisions unencumbered by bias and confusion.
Austere-san may come off as a little callous, but Empathetic-san comes off as a meddler. I’d still rather just be a friendly Mr. Austere, supplemented with other LW concepts, especially from the Human’s Guide to Words sequence. After all, if it is just confusion and bias getting in the way, all there is to do is sweep those errors away. Any additional offer of “help” in deciding what it is “right” for me to feel would set my Spidey sense tingling pretty hard.
We are trying to be ‘less wrong’ because human brains are so far from ideal at epistemology and at instrumental rationality (‘agency’). But it’s a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong. And since we are humans, it helps to retrain our emotions: “Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts.”
And since we are humans, it helps to retrain our emotions: “Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts.”
I’d rather call this “self-help” than “meta-ethics.” Why self-help? Because...
But it’s a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong.
...even if my emotions are “wrong,” why should I care? In this case, the answer can only be that it will help me derive more satisfaction out of life if I get it “right,” which seems to fall squarely under the purview of self-help.
Of course we can draw the lines between meta-ethics and self-help in various ways, but there is so much baggage in the label “ethics” that I’d prefer to get away from it as soon as possible.
I always assumed this site [...] tries to avoid driving readers to any particular conclusion, but simply away from misguided ones[.]
As a larger point, separate from the context of lukeprog’s particular post:
What you assumed above will not always be possible. If models M0, ..., Mn are all misguided and M(n+1) isn’t, then driving readers away from the misguided models necessarily drives them to one particular conclusion: M(n+1).
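As a toy illustration of that elimination logic, here is a minimal Python sketch; the five candidate models and the uniform prior are invented for this example. Ruling out every model but one necessarily concentrates belief on the survivor:

```python
# Toy illustration: eliminating misguided models M0..M3
# necessarily drives belief toward the lone survivor, M4,
# which plays the role of M(n+1). The model list and the
# uniform prior are assumptions made up for this sketch.

def renormalize(beliefs):
    """Rescale the remaining probability mass to sum to 1."""
    total = sum(beliefs.values())
    return {m: p / total for m, p in beliefs.items()}

models = [f"M{i}" for i in range(5)]
beliefs = {m: 1 / len(models) for m in models}  # uniform prior

for misguided in models[:-1]:  # drive the reader away from M0..M3
    del beliefs[misguided]
    beliefs = renormalize(beliefs)

print(beliefs)  # {'M4': 1.0} -- driven *to* a particular conclusion
```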
I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth.
I’m not sure what this means. Could you elaborate?
What I imagine you to mean seems similar to the sentiment expressed in the first comment to this blog post. That comment seems to me so horrifically misguided that I had a strong physiological response to reading it. Basically, the commenter thought that since he doesn’t experience himself as following rules for formulating thoughts and sentences, he doesn’t follow them. This is a confusion of the map with the territory that stuck in my memory for some reason, and your comment reminded me of it because you seem to be expressing a very strong faith in the accuracy of how things seem to you.
Feel free to just explain yourself without feeling obligated to read a random blog post or to tell me how I am misreading you, which would be a side issue.
I think my response to lukeprog above partly answers this, but it’s really a question of what we mean by “help me decide.” I’m not against people helping me be less wrong about the actual content of the territory. I’m just against people helping me decide how to respond to it emotionally, provided neither of us is wrong about the territory itself.
If I am happy because I have plenty of food (in the map), but I actually don’t (in the territory), I’d certainly like to be informed of that. It’s just that I can handle the transition from happy to “oh shit!” all by myself, thank you very much.
In other words, my suspicion of anyone calling themselves an Empathetic Metaethicist is that they’re going to try to slide in their own approved brand of ethics through the back door. This is also a worry I have about CEV. Hopefully future posts will alleviate this concern.
If you mean that, in service of my goal of satisfying my actual desires, there is more danger of being misled when getting input from others as to whether my emotions are a good match for reality than when getting input as to whether reality matches my perception of it, I tentatively agree.
If you mean that getting input from others as to whether my emotions are a good match for reality has a greater cost than benefit, I disagree, assuming basic advice filters similar to those used when getting input as to whether reality matches my perception of it. As per the above, all else being equal there will be a lower expected payoff for me in getting advice in this area, even though the advantages are similar (a toy calculation follows after the next paragraph).
If you mean that there is a fundamental difference in kind between matching perception to reality and matching emotions to perceptions, one that makes getting input beneficial in the former case and corrosive in the latter, I disagree.
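To make that cost-benefit framing concrete, here is a toy Python sketch. Every number in it is invented for illustration, and the “advice filter” is modeled crudely as a single probability that a filtered piece of advice is actually useful:

```python
# Toy cost-benefit model for taking advice; all numbers invented.
# p_good: chance a filtered piece of advice is actually useful.
# gain:   payoff when the advice is good.
# cost:   time and effort burned when it is not.

def expected_value_of_advice(p_good, gain, cost):
    return p_good * gain - (1 - p_good) * cost

# Advice about whether reality matches my perception of it:
print(expected_value_of_advice(p_good=0.6, gain=10, cost=2))  # ~5.2

# Advice about whether my emotions fit reality: same filter,
# but an assumed lower payoff per good tip, so a lower (yet
# still positive) expected value -- benefit can exceed cost.
print(expected_value_of_advice(p_good=0.6, gain=5, cost=2))   # ~2.2
```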
I have low confidence regarding what emotions are most appropriate for various crises and non-crises, and I suspect that the responses I think of as ideal are at best local peaks with little chance of being globally optimal. In addition, what I think of as optimal emotional responses are likely to be too resistant to exceptions. E.g., if one is trapped in a mine shaft, the emotional response suitable for typical cases of being trapped is likely to consume too much oxygen.
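And a toy illustration of the local-peak worry, with an invented one-dimensional “landscape” standing in for how well an emotional habit serves you; a simple hill-climber that only ever accepts small improvements settles on the nearer, lower peak and never finds the higher one:

```python
# Toy hill-climbing demo of getting stuck on a local peak.
# The landscape is invented: a local peak near x=1 (height 2)
# and a global peak near x=4 (height 5).

def quality(x):
    return -(x - 1) ** 2 + 2 if x < 2.5 else -(x - 4) ** 2 + 5

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=quality)
        if best == x:  # no neighbor is an improvement: stop
            break
        x = best
    return x

print(hill_climb(0.0))  # ends near 1.0 -- the local peak, not 4.0
```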
I’m generally open to ideas regarding what my emotions should be in different situations, and how I can act to change my emotions.