I would say that comparing rationalism and utilitarianism is comparing apples to oranges. Rationalism is concerned with forming accurate models about the world. Essentially, it’s a set of tools used to find “truth”, so it deals only with positive (descriptive) questions. Utilitarianism, meanwhile, is an ethical system, so it deals only with normative questions. It just happens that many rationalists here are also utilitarians, which is why so much writing about rationalism is couched in utilitarian terms.
The two are related in the sense that utilitarianism is a form of consequentialism: you need accurate models of the world to be confident that what you do will actually lead to the most utility. But you could just as well be a rationalist and hold a virtue ethics system.
With regard to your example, the scope of rationalism is assessing the effect that saying those things would have on your friend, and that is where it ends. What you choose to do will depend on your value system. If you’re a utilitarian, maybe you’d tell the truth so they can pick better gifts for you, or because you know they value honesty; maybe you wouldn’t, if you believe the negative feelings they would experience outweigh the benefit of their knowing. If you’re a Kantian, you would definitely say it was ugly.
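To make the division of labor concrete, here is a toy sketch of how a utilitarian might frame the gift question as an expected-utility comparison. The probabilities and utility numbers are entirely made up for illustration; the rationalist contribution is only getting those numbers right, while the decision rule itself comes from the value system.

```python
# Toy sketch only: the probabilities and utilities below are made-up
# illustrative numbers, not anything the comment above commits to.

def expected_utility(outcomes):
    """Sum of probability * utility over possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Outcomes of telling the truth ("the gift was ugly"):
# (probability, utility) pairs under a hypothetical utilitarian weighting.
tell_truth = [
    (0.7, +3),   # friend picks better gifts in the future
    (0.3, -5),   # friend is hurt and the friendship suffers
]

# Outcomes of a polite silence:
stay_quiet = [
    (0.9, +1),   # friend feels good now
    (0.1, -2),   # friend later finds out you were insincere
]

# The rationalist part is making these estimates accurate (good models of
# the world); the utilitarian part is the maximize-expected-utility rule.
if expected_utility(tell_truth) > expected_utility(stay_quiet):
    print("Under these made-up numbers, tell the truth.")
else:
    print("Under these made-up numbers, keep it to yourself.")
```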
Rationalism is concerned with forming accurate models about the world.
That’s not the way the term is primarily used in this community. We generally orient ourselves more toward decision science. From Jonathan Baron’s textbook Thinking and Deciding:
The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.”
When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.
It’s instrumentally useful for the world to be affected according to a decision theory, but it’s not obviously a terminal value for people to act this way, especially in detail. Instrumentally useful things that people shouldn’t be doing can instead be done by tools we build.