I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
I’d have no problem calling Clippy a consequentialist, but a polar bear would probably lack sufficient introspection. You have to have some inkling of what your values are to have morality. You’re right that it’s a terminological issue, and a difficult one at that.
‘It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy.’
Disclaimer: I use “pleasure” as an umbrella term for various forms of experiential goodness. Say there’s some utility cap in my brain that limits the amount of pleasure I can get from any single activity. One of these activities is helping other people, and the pleasure I get from it is capped such that I can get less than 50% of the maximum possible pleasure from altruism. Necessarily, this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can’t call myself an egoist, then I’m at a loss here. Perhaps “egoism” is a reputation hit anyway and I should ditch the word, huh?
Actually, the reason EA ideas appeal to me is that the pleasure I can get from spending money on myself already seems to be capped: I make much more money than I spend, and I’m looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.
Then your usage of ‘bad’ is very unusual.
Most people don’t do much introspection, so I would expect that. Hearing it from you surprises me, though, since I didn’t expect to be unusual in this crowd.
mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
These are all bad only in retrospect, and explicable by my having had insufficient information or different values than I have now; the exception is “normative progress”, which I don’t understand. Acting badly voluntarily would mean making a choice that I expect to have bad consequences. It might help your understanding to know which part of my decision process I usually identify with.
This brings up another terminological problem. I understand that I should use the word “bad” in a way other people understand, but if I used it that way while describing my own decision process, it would lead me to scold myself unnecessarily. I don’t think I voluntarily do anything bad in my brain, but it makes sense for other people to ascribe voluntary action to some of my mistakes, since they don’t have access to my decision processes. I also have very different private and public meanings for the word “I”. In my private considerations, the role of “I” in my brain is very limited.
I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I probably should have just asked what you meant, since my brain came up with only the silly interpretation. I think the reason I got angry at the murder example was the perceived social cost of having my actions associated with murder. Stubbing a toe is trivially bad too, you know; badness scales. I made a mistake, but only in retrospect. I’ll make a different mistake next time.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
When I first learned how little a life costs, my reaction wasn’t guilt, at least not for long. It led me to think “wow, apparently I care about people suffering much less than I previously thought; I wonder why that is”, not “I must be mistaken about my values and should feel horrible guilt for not maximizing my actual values”.
As I previously described, my motivation for altruism is purely positive, and I’m pretty sure that if I associated EA with guilt, I would ditch the idea altogether and look for sources of pleasure elsewhere. I get depressed easily, which makes any negative motivation very costly.
I’m not motivated by the idea of maximization in itself, but it helps my happiness to know how much my money can buy. What one person finds motivational, another may find demotivational, so I think we should identify our audience to maximize impact. As a default, I’d still try to motivate people positively rather than associate crappy feelings with the important ideas. Human brains are predictably irrational, and there’s a difference between telling someone that by donating they can save several lives a month and be a superhero, and telling them that by spending the money on themselves they are a serial killer.