An argument that consequentialism is incomplete

I think consequentialism describes only a subset of my wishes. For example, maximizing money is well modeled by it. But when I’m playing with something, it’s mostly about the process, not the end result. Or when I want to respect the wishes of other people, I don’t really know what end result I’m aiming for, but I can say what I’m willing or unwilling to do.

If I try to shoehorn everything into consequentialism, then I end up looking for “consequentialist permission” to do stuff. Like climbing a mountain: consequentialism says “I can put you on top of the mountain! Oh, that’s not what you want? Then I can give you the feeling of having climbed it! You don’t want that either? Then this is tricky...” This seems like a lot of work, just to do something I already want to do. There are many reasons to do things—not everything has to be justified by consequences.

There are of course objections. Objection one is that non-consequentialist wishes can make you go in circles, like that Greg Egan character who spent thousands of hours carving table legs, making himself forget each one so he could enjoy the next. But when pushed to such extremes, a consequentialist goal like maximizing happiness can also lead to weird results (vats of happiness goo...). And if we don’t push quite so hard, then I can imagine utopia containing both consequentialist and non-consequentialist stuff, doing things for their own sake and such. So there’s no difference here.

Objection two is that our wishes come from evolution, which wants us to actually achieve things, not go in circles. But our wishes aren’t all perfectly aligned with evolution’s wish (procreate more). They are a bunch of heuristics that evolution came up with, and a bunch of culturally determined stuff on top of that. So there’s no difference here either—both our consequentialist and non-consequentialist wishes come from an equally messy process, so they’re equally legitimate.