This doesn’t strike me as an inherently bad objection. Even the post offers the caveat that we’re running on corrupt hardware. One can’t say that consequentialist theories are WRONG on such grounds, but one can certainly object to the likely consequences of combining ambiguous expected values with brains that do not naturally multiply and are good at imagining fictional futures.
I think the argument can be cut down to this:
1. In theory, we should act to create the best state of affairs.
2. People are bad at doing that without predefined moral rules.
3. Can we at least believe that we believe in those rules?
This is lousy truth-seeking, but it may be excellent instrumental rationality if enough people are poor consequentialists and decent enough deontologists. It’s not my argument of choice; step 3 seems suspiciously oily.
But then again, “That which can be destroyed by the truth, should be” has kind of a deontological ring to it...
Well, that’s the thing: some people do. Even obvious things can require some explanation.