The entire green circle on the right is just a zoomed-in version of the green circle on the left. The ‘projected’ arrow is just what the projectivist thesis is (third subsection). The idea is that our moral beliefs are formed by a basically illegitimate mechanism: projecting our utility function onto the external world. There isn’t an arrow from “beliefs” to “the Map” because those are the same thing.
Good clarification; I’m now pretty sure I understand how our beliefs relate.
I am suggesting that our moral beliefs are formed by a totally legitimate mechanism by projecting our utility function onto the external world.
If X is a zoomed-in version of Y, you can’t project Z into X. Either Z is part of Y, in which case it’s part of X, or it isn’t part of Y, in which case it’s not part of X.
I’m pretty confused by this comment; you’ll have to clarify. If our moral beliefs are formed by projecting our utility function onto the external world, I’m unsure what you could mean by calling this process “legitimate”. Certainly it doesn’t seem likely to be a way to form accurate beliefs about the world.
If X is a zoomed-in version of Y, you can’t project Z into X. Either Z is part of Y, in which case it’s part of X, or it isn’t part of Y, in which case it’s not part of X.
Z is projected into X/Y. It’s just too small to see in Y and I didn’t think more arrows would clarify things.
“projected onto the external world” isn’t really correct. Moral beliefs don’t, pretheoretically, feel like specific beliefs about the external world. You can convince someone that moral beliefs are beliefs about God or happiness or paperclips or whatever, but it’s not what people naturally believe.
What I want to suggest is that moral beliefs ARE your utility function (and to the extent that your brain doesn’t have a utility function, they’re the closest approximation of one).
Otherwise, in the diagram, there would be two identical circles in your brain, one labeled “moral beliefs” and the other labeled “utility function”.
Thus, it is perfectly legitimate for your moral beliefs to be your utility function.
“projected onto the external world” isn’t really correct. Moral beliefs don’t, pretheoretically, feel like specific beliefs about the external world. You can convince someone that moral beliefs are beliefs about God or happiness or paperclips or whatever, but it’s not what people naturally believe.
Often pre-theoretic moral beliefs are entities unto themselves, something like laws of nature. People routinely think of morality as consisting of universal facts which can be debated. That’s what makes them “beliefs”. As far as I know, nearly everyone is a pre-theoretic moral realist. Of course, moral beliefs might not feel quite the same as, say, beliefs about whether or not something is a dog. But they can still be beliefs.
What I want to suggest is that moral beliefs ARE your utility function (and to the extent that your brain doesn’t have a utility function, they’re the closest approximation of one).
Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
A utility function doesn’t constrain future experiences. That’s the reason for the conceptual distinction between beliefs and preferences. The projection of our utility function onto our map of the external world (which turns the utility function into a set of beliefs) is illegitimate because it isn’t a reliable way of forming accurate beliefs that correspond to the territory.
If you want to just use the word ‘belief’ to also describe moral principles, that seems okay as long as you don’t confuse them with beliefs proper.
In any case, it sounds like we’re both anti-realists.
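The distinction above can be sketched as a toy agent (illustrative only — the states, probabilities, and utilities below are all invented for the example): beliefs are probabilities that can be scored against what is actually observed, while the utility function merely ranks outcomes and anticipates nothing.

```python
import math

# Toy sketch: beliefs vs. utility function play different roles.
# Every state, probability, and utility here is made up for illustration.

beliefs = {"rain": 0.3, "sun": 0.7}   # P(observation): a testable anticipation
utility = {"rain": -1.0, "sun": 2.0}  # ranks outcomes; predicts nothing

def surprise(observed_state):
    """Beliefs pay rent: they can be scored against what actually happens."""
    return -math.log(beliefs[observed_state])

def expected_utility(outcome_utilities):
    """The utility function can't be scored that way; it only orders choices,
    weighted by the beliefs."""
    return sum(beliefs[s] * outcome_utilities[s] for s in beliefs)

print(round(surprise("sun"), 3))             # low surprise: belief was calibrated
print(round(expected_utility(utility), 3))   # 0.3*(-1.0) + 0.7*2.0 = 1.1
```

Nothing in the second function constrains what the agent will see — which is the sense in which a utility function, unlike a belief, never gets confirmed or refuted by the territory.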
If you want to just use the word ‘belief’ to also describe moral principles, that seems okay as long as you don’t confuse them with beliefs proper.
The reason I want to do this is because things like logically manipulating moral beliefs / preferences in conjunction with factual beliefs / anticipations makes sense.
But I think this is our disagreement:
A utility function doesn’t constrain future experiences. That’s the reason for the conceptual distinction between beliefs and preferences. The projection of our utility function onto our map of the external world (which turns the utility function into a set of beliefs) is illegitimate because it isn’t a reliable way of forming accurate beliefs that correspond to the territory.
You say it’s illegitimate because it doesn’t constrain future experiences. If it constrained future experiences incorrectly, I would agree that it was illegitimate. If it was trying to constrain future experiences and failing, that would also be illegitimate.
But the point of morality is not to constrain our experiences. The point of morality is to constrain our actions. And it does that quite well.
Agreed! But that means morality doesn’t consist in proper beliefs! You can still use belief language if you like, I do.
And doing so is legitimate and not illegitimate.
Sure. What is illegitimate is not the language but thinking that one’s morality consists in proper beliefs.