My personal religion involves two* gods – the god of humanity (who I sometimes call “Humo”) and the god of the robot utilitarians (who I sometimes call “Robutil”).
When I’m facing a moral crisis, I query my shoulder-Humo and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there’s no real crisis. For example, some naive young EAs try to be utility monks: donate all their money, never take breaks, only do productive things… but Robutil and Humo both agree that quality intellectual work requires slack and psychological health (both to handle crises and to notice subtle things, which you might need even in emergencies).
If you’re an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on (i.e. get to the midpoint of Tyler Alterman’s story here).
But Humo and Robutil in fact disagree on some things, and disagree on emphasis.
They disagree on how much effort you should spend to avoid accidentally recruiting people you don’t have much use for.
They disagree on how many high schoolers it’s acceptable to accidentally fuck up psychologically while you experiment with a new program to recruit them.
They disagree on how hard to push yourself to grow better/stronger/wiser/faster, and how much you should sacrifice to do so.
Humo and Robutil each struggle to understand different things. Robutil eventually acknowledges that you need Slack, but it didn’t occur to him initially. His understanding was born in the burnout and tunnel-vision of thousands of young idealists, and in Humo eventually (patiently, kindly) saying “I told you so.” (Robutil responds “but you didn’t provide any arguments about how that maximized utility!” Humo responds “but I said it was obviously unhealthy!” Robutil says “wtf does ‘unhealthy’ even mean?”)
It took Robutil longer still to consider that perhaps you not only need to prioritize your own wellbeing and your friendships, but need to prioritize them for their own sake, not just as part of a utilitarian calculus.
Humo struggles to acknowledge that if you spend all your time upholding deontological commitments to avoid harming the people in your care, the cost is measured in real human beings who suffer and die because you took longer to scale up your program.
In my headcanon, Humo and Robutil are gods who are old and wise, and they got over their naive struggles long ago. They respect each other as brothers. They understand that each of their perspectives is relevant to the overall project of human flourishing. They don’t disagree as much as you’d naively expect, but they speak different languages and emphasize things differently.
Humo might acknowledge that I can’t take care of everyone, or even respond compassionately to all the people who show up in my life whom I don’t have time to help. But he says so with a warm, mournful compassion, whereas Robutil says it with brief, efficient ruthlessness.
I find it useful to query them independently, and to imagine the wise version of each of them as best I can – even if my imagining is but a crude shadow of their idealized platonic selves.
“prioritize your own wellbeing and your friendships, but you need to prioritize them for their own sake, not just as part of a utilitarian calculus”
Hmm. Does this fully deny utilitarianism? Are these values sacred (more important than calculable tradeoffs), in some way?
I’m not utilitarian for other reasons (I don’t believe in comparability of utility, and I don’t value all moral patients equally, or fairly, or objectively), but I think you COULD fit those priorities into a utilitarian framework: not by prioritizing them for their own sake, but by acknowledging the illegibility of the values, taking a guess at how to calculate with them, and then adjusting as circumstances change.
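To sketch what I mean (a toy illustration of my own, not anything from the post; every name and number here is made up): you can carry the illegible values as explicit terms in the calculation, give them provisional weights you openly expect to be wrong, and revise the weights when reality pushes back, rather than dropping the framework.

```python
# Toy sketch: folding "illegible" values (wellbeing, friendship) into a
# utilitarian calculation via provisional weights, then adjusting the
# weights as evidence comes in. All quantities are hypothetical.

from dataclasses import dataclass

@dataclass
class Weights:
    direct_impact: float = 1.0   # legible, roughly measurable good done
    wellbeing: float = 0.5       # provisional guess for an illegible value
    friendship: float = 0.5      # likewise a guess, not a measurement

def utility(hours_on_work: float, hours_on_rest: float,
            hours_with_friends: float, w: Weights) -> float:
    """Crude utility estimate for one week's allocation of time."""
    return (w.direct_impact * hours_on_work
            + w.wellbeing * hours_on_rest
            + w.friendship * hours_with_friends)

def adjust(w: Weights, burned_out: bool) -> Weights:
    """If the last period ended in burnout, the wellbeing weight was
    evidently too low; revise it upward instead of abandoning the calculus."""
    if burned_out:
        w.wellbeing *= 1.5
    return w

w = Weights()
print(utility(60, 5, 3, w))   # the "utility monk" schedule scores well...
w = adjust(w, burned_out=True)
print(utility(45, 15, 8, w))  # ...until burnout forces a reweighting
```

The point of the sketch is only that “adjusting as circumstances change” can live inside the calculation itself, as updates to the weights, without declaring the values sacred.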