Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
I’m a physical system optimizing my environment in certain ways. I prefer some hypothetical futures to others; that’s a result of my physical structure. I don’t really know the algorithm I use for assigning utility, but that’s because my design is pretty messed up. Nevertheless, there is an algorithm, and it’s what I talk about when I use the words “right” and “wrong”.
Moral rightness is fundamentally a two-place function: it takes both an optimization process and a hypothetical future as arguments. In practice, people frequently use the curried form, with themselves as the implied first argument.
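As a rough sketch of what I mean (the names here are mine, purely for exposition, and the toy utility function stands in for whatever my physical structure actually computes):

```python
from dataclasses import dataclass
from functools import partial
from typing import Callable

@dataclass
class Optimizer:
    # Stand-in for an optimization process: its preferences are
    # summarized as a utility function over hypothetical futures.
    utility: Callable[[str], float]

def moral_rightness(optimizer: Optimizer, future: str) -> float:
    """Two-place form: takes both an optimization process and a
    hypothetical future, and returns how highly the former rates
    the latter."""
    return optimizer.utility(future)

# The curried form people use in practice: fix yourself as the
# implied first argument, leaving a one-place function of futures.
# This utility function is an illustrative toy, not a real model.
me = Optimizer(utility=lambda future: 1.0 if "flourishing" in future else 0.0)
right_for_me = partial(moral_rightness, me)

print(right_for_me("a future with human flourishing"))  # 1.0
print(right_for_me("a paperclip future"))               # 0.0
```

When someone says a future is simply "right," on this view they are using `right_for_me` while talking as if they were using `moral_rightness`.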
Suppose I proved that all utilities equaled zero.
That result is obviously false for my present self: I observably prefer some hypothetical futures to others, so my utilities are not all zero. If the proof pertains to that entity, then either it is incorrect or the formal system in which it is phrased is inappropriate for modeling this aspect of reality.
It’s also false for all of my possible future selves. I refuse to recognize anything that lacks preferences over hypothetical futures as a future self of mine; whatever such a thing is, it has lost too many important functions to count.