This was pretty interesting, and pretty different from the kind of content you usually find on LessWrong.
I often see arguments against “spontaneous inconvenient moral behaviour” — such as worrying about how to deal with ants infesting your house, or whether to stop eating meat — which advocate replacing these behaviours with more effective, planned ones. But I don’t think the former behaviours really prevent the latter.
Suggesting that someone at home should stop thinking about how to humanely get rid of the ants, work an extra hour instead, and donate the overtime money to an ant-welfare charity isn’t a feasible model, since most people don’t have a job where they can take a spare hour whenever they want and convert it into extra money. In practice you are converting “fun time” into “care for the ants time”.
Thinking about how to produce charity or moral value more effectively is certainly a good idea; fifteen minutes of thought can easily improve the good you do over the next few years by ten times or more, with no real drawback. But the kind of “moral rigor” demanded when someone wants to contest a behaviour they don’t want to adopt is usually the level of rigor that would require a person to drop their career, work on friendly AI full time, and donate to friendly AI research every material possession not needed to keep their productivity high.
You’ll need a moral Schelling point if you don’t want to donate everything you value to friendly AI research (and if you do want to, I certainly won’t try to stop you). At some point you have to say “screw it, I’ll do this less effective thing instead because I want to”, and that Schelling point will likely include a lot of spontaneous behaviours you care about that are also ineffective.
Also, the way some critiques try to evaluate non-human lives doesn’t really make sense. I agree with a “humans > complex animals > simple animals” ordering, but there should still be some quantitative relation between the wellbeing of the groups. You can argue that you would save a human over any number of cows, and I guess that sort of makes sense, but there should still be some amount of human pleasure you’d be willing to give up to prevent some amount of animal suffering, or you might as well give up on quantitative morality altogether.
If someone proposes a 1:1000 exchange of human pleasure for animal suffering, you can’t refuse it by arguing that you’d refuse a 10:10 exchange.