The arguments here sound like “morality is actually complex, and you shouldn’t oversimplify it”. But utilitarianism is pretty complex, in the relevant sense, so this kind of fails to land for me.
Hmm. What do you mean by “complex in the relevant sense”? The two obvious things you might call complex are “the part where you figure out how to estimate a person’s utility in the first place, and aggregate that across people”, and “the part where in practice you need all kinds of complex rules of thumb or brute-force evaluation of second-order consequences”.
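(For concreteness, here’s roughly what I mean by that first part. Just a sketch, assuming something like classical total or average utilitarianism, with u_i standing in for whatever “person i’s utility” turns out to be:)

```latex
% u_i(w): person i's utility in world-state w -- the part that's hard to pin down
% Two standard aggregation rules (total vs. average utilitarianism):
U_{\mathrm{total}}(w) = \sum_{i=1}^{n} u_i(w)
\qquad
U_{\mathrm{avg}}(w) = \frac{1}{n} \sum_{i=1}^{n} u_i(w)
```

Pretty much all of the difficulty in that first sense hides inside u_i, plus the question of whether a plain sum is even the right way to aggregate.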
The former seems legitimately “hard”, I guess, but it sorta seems like a one-shot, upfront scientific/philosophical problem that isn’t that hard. (I realize it’s, like, unsolved after decades of relevant work, but, idk, it still doesn’t seem fundamentally confusing to me?) Is this what you meant by the “relevant sense”?
The latter seems complex in some sense, but it seems sorta like how AlphaGo can figure out complex Go strategy given the simple task of “play Go against yourself a bunch”. And it seemed like the Sequences were arguing against this sort of thing being that easy.
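(To gesture at what I mean by “simple task, complex strategy”: here’s a toy self-play loop on a made-up mini-game. This is not AlphaGo’s actual training setup, and every class and function name below is invented for illustration.)

```python
# Toy illustration of "complex strategy from a simple task spec": the only
# objective handed to the learner is "play against yourself and learn from who
# won"; whatever strategy it ends up with comes out of the learning loop.
# (Made-up mini-game and tabular policy -- NOT AlphaGo's actual algorithm.)

import random
from collections import defaultdict


class Nim:
    """Tiny stand-in game: take 1-3 stones from a pile; taking the last stone wins."""

    def initial_state(self):
        return (15, 0)  # (stones remaining, player to move)

    def legal_moves(self, state):
        stones, _ = state
        return [n for n in (1, 2, 3) if n <= stones]

    def apply(self, state, move):
        stones, player = state
        return (stones - move, 1 - player)

    def is_over(self, state):
        return state[0] == 0

    def winner(self, state):
        return 1 - state[1]  # the player who just took the last stone


class TabularPolicy:
    """Crude learner: prefer moves that have tended to lead to wins from a state."""

    def __init__(self, explore=0.1):
        self.value = defaultdict(float)
        self.explore = explore

    def choose_move(self, state, moves):
        if random.random() < self.explore:
            return random.choice(moves)  # keep exploring
        return max(moves, key=lambda m: self.value[(state, m)])

    def update(self, history, winner):
        for state, move in history:
            _, player = state
            self.value[(state, move)] += 1.0 if player == winner else -1.0


def self_play(game, policy, games=20_000):
    """Train a policy purely by playing it against itself and scoring who won."""
    for _ in range(games):
        state, history = game.initial_state(), []
        while not game.is_over(state):
            move = policy.choose_move(state, game.legal_moves(state))
            history.append((state, move))
            state = game.apply(state, move)
        policy.update(history, game.winner(state))  # the entire "reward" is who won
    return policy


if __name__ == "__main__":
    trained = self_play(Nim(), TabularPolicy())
    print(trained.choose_move((15, 0), [1, 2, 3]))  # inspect its learned opening move
```

The point being that the task specification is basically one line (“learn from who won”), and whatever strategic complexity shows up lives entirely in the learned policy.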
Also, I guess it depends on what sort of utilitarianism you mean, but note:
https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone
I dunno if there’s a “not for the sake of aggregate preference utility (alone)” post, but I felt like the Sequences were arguing (albeit indirectly) that this was still more complex than you (generic you) were probably imagining.
I mean the former: like, whatever “utility” is, it’s not a simple thing to define in terms of things we have a handle on (“pleasurable mental states” does not count as a simple definition). And even if you allow yourself access to standard language about mental states, I don’t think it’s so easy (e.g. there are a bunch of different sorts of mental states that might fall under the broad umbrella of “pleasure”).
I do agree that “Not for the Sake of Happiness (Alone)” argues against utilitarianism.