It’ll be a while before I’m able to give it the time it warrants, but here’s the basic gist of my EA-Get-Got thoughts:
(Most of what I’ve written here is cached from before I read Out To Get You, and I think is still true and relevant)
Point A: The Sane Response to The World Being On Fire (While Human)
I, and most EA folk I talk to extensively (including all the leaders I know of), seem to share the following mindset:
The set of ideas in EA (whether focused on poverty, X-Risk, or whatever) does naturally lead one down a path of “sacrifice everything because do you really need that $4 Mocha when people are dying the future is burning everything is screwed but maybe you can help?”
But, as soon as you’ve thought about this for any length of time, clearly, stressing yourself out about that all the time is bad. It is basically not possible to hold all the relevant ideas and values in your head at once without going crazy or otherwise getting twisted/consumed-in-a-bad-way.
There are a few people who are able to hold all of this in their head and have a principled approach to resolving everything in a healthy way. (Nate Soares is the only one who comes to mind; see his “Replacing Guilt” series.) But for most people, there doesn’t seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
You can resolve this by saying “well then, the obvious-implications-of-EA-thinking must be wrong”, or “I guess maybe I don’t need to live healthily”.
But, like, the world is on fire and you can do something about it and you do obviously need to be healthy. And part of being healthy is not just saying things like “okay, I guess I can indulge things like not spending 100% of my resources on saving the world in order to remain healthy but it’s a necessary evil that I feel guilty about.”
AFAICT, the only viable, sane approach is to acknowledge all the truths at once, and then apply a crude patch that says “I’m just going to not think about this too hard, try generally to be healthy, and put whatever bit of resources I safely can towards having the world not-be-on-fire.”
Then, maybe check out Nate Soares’s writing and see if you’re able to integrate it in a more sane way, if you are the sort of person who is interested in doing that, and if so, carefully go from there.
...
...pause for “I’m not sure if Zvi considers that first chunk unobjectionable, and I acknowledge that I can imagine it being objectionable, but it’s only Point A, and taking that as a given for now, here’s...”
Point B: What Should A Movement Trying To Have the World Not Be On Fire Do?
The approach in Point A seems sane and fine to me. I think it is in fact good to try to help the world not be on fire, and that the correct sane response is to proactively look for ways to do so that are sustainable and do not harm yourself.
I think this is generally the mindset held by EA leadership.
It is not out-of-the-question that EA leadership in fact really wants everyone to Give Their All, that it’s better to err on the side of pushing harder for that even if it means some people end up doing unhealthy things, and that the only reason they say things like Point A is as a ploy to get people to give their all.
But, since I believe Point A is quite sane, and most of the leadership I see is basically saying Point A, and I’m in a community that prioritizes saying true things even if they’re inconvenient, I’m willing to assume the leadership is saying Point A because it is true, as opposed to for Secret Manipulative Reasons.
This still leaves us with some issues:
1) Getting to the point where you’re on board with Point A the way I meant Point A to be interpreted requires going through some awkward and maybe unhealthy stages where you haven’t fully integrated everything, which means you are believing some false things and perhaps doing harm to yourself.
Even if you read a series of lengthy posts before taking any actions, even if the Giving What We Can Pledge began with “we really think you should read some detailed blogposts about the psychology of this before you commit” (this may be a good idea), reading the blogposts wouldn’t actually be enough to really understand everything.
So, people who are still in the process of grappling with everything end up on EA forum and EA Facebook and EA Tumblr saying things like “if you live off more than $20k a year that’s basically murder”. (And also, you have people on Dank EA Memes saying all of this ironically except maybe not except maybe it’s fine who knows?)
And stopping all this from happening would be pretty time-consuming.
2) The world is in fact on fire, and people disagree on what the priorities should be and on what things are acceptable to do in order for that to be less the case. And while the Official Party Line is something like Point A, there’s still a fair number of prominent people hanging around who do earnestly lean towards “it’s okay to make costs hidden, it’s okay to not be as dedicated to truth as Zvi or Ben Hoffman or Sarah Constantin would like, because it is Worth It.”
And present_day_Raemon thinks those people are wrong, but not obviously so wrong that it’s not worth talking about and taking seriously as a consideration.
So. That’s where I am pre-reading-Out-To-Get-You.
My view is that there is indeed a principled resolution, which should be pursued by people aspiring to be more lawful and coherent. But the resolution requires nontrivial skills to implement. The key insight is that certain gut reactions should be viewed as policy-level choices, instead of correct, primitive moral evaluations of the situation.
Many people are deeply confused about their feelings, and what they’re “for”—should they be modified? Are feelings reflections of ground-truth morality? In particular, if I feel guilty about spending money on myself, does that mean it is reflectively-correct to feel that way?
The answer, of course, is a resounding No!
Unfortunately, this can be hard to see, since our motivational systems are not well-typed—expected-utility and utility feel the same from the inside, and executed-heuristic versus reflectively-consistent-judgement are not primitive internal observables.
Notice that spending money on myself is not an intrinsic bad. Therefore, any guilt I feel must be the result of an instrumental value-function heuristic which fires when I take such actions, perhaps because “take selfish action while someone else needs your help” is (societally-, personally-)usually an action which leads to lower-value states, and—eventually—to outcomes with lower terminal utility.[1]
Since my guilt does not reflect an intrinsic bad, it is “up for grabs”—the “guilt” heuristic constitutes a cognitive strategy, which I can choose to execute or not, depending on its logical and causal effects.
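To make the “not well-typed” point a bit more concrete, here is a minimal Python sketch. This is my own toy framing rather than anything specified in the comment; the class and function names (HeuristicSignal, guilt_heuristic, and so on) and all the numbers are hypothetical. The idea is just that, if the heuristic’s output and the reflective evaluation carried distinct types, confusing them would be a visible error, and the heuristic itself would sit there explicitly as a policy we can keep or drop based on its effects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HeuristicSignal:
    """What a cached value-function heuristic emits (e.g. a pang of guilt)."""
    strength: float  # how strongly the heuristic fires; not a verdict on the outcome

@dataclass(frozen=True)
class ReflectiveEvaluation:
    """What I actually endorse about the outcome, on reflection."""
    utility: float

def guilt_heuristic(action: str) -> HeuristicSignal:
    # Pattern-matches "selfish spending while others need help" and fires.
    # This is a learned proxy, not a primitive moral observation.
    return HeuristicSignal(strength=1.0 if "luxury" in action else 0.0)

def endorse_on_reflection(action: str) -> ReflectiveEvaluation:
    # On reflection, modest self-care is not an intrinsic bad (made-up number).
    return ReflectiveEvaluation(utility=0.1 if action == "buy a small luxury" else 0.0)

def keep_guilt_policy(net_effect_of_running_it: float) -> bool:
    # Treat the guilt response itself as a policy, kept or dropped according
    # to its logical and causal effects rather than taken as ground truth.
    return net_effect_of_running_it > 0.0

pang = guilt_heuristic("buy a small luxury")
verdict = endorse_on_reflection("buy a small luxury")
# pang.strength == 1.0 while verdict.utility == 0.1: the cached signal and the
# endorsed evaluation come apart, and a type checker would flag mixing them up.
```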
After all, I’m (basically) optimizing for outcomes. From the FDT standpoint, there is no need to have an angsty internal struggle over these facts, as if I were living out the script of a hero who is dutifully remorseful about daring to purchase a luxury for themselves. I simply choose the way of being which works out best for TurnTrout’s values. The rest is noise.
(I know that from a certain inferential distance, the advice may seem trivial and laughably impractical, and if it does, I don’t immediately see how to bridge the gap. And I describe the closer-to-ideal standard as I understand it. I have moved some good distance towards it, but I am not yet able to fluently interact with myself in this way.)
[1] I don’t think people’s desires are well-represented by utility functions, but I think the theory works fine in this situation.
I appreciate the clarity there, and I also like the amount of bullet-biting and facing-up-to-big-issues. Well put.
[inspired by this comment, but not entirely a response; still relevant]
Assume utilitarianism and altruism. You’re trying to help the world. There’s a large pit of suffering that you could throw your entire life into and still not fill. So you do as much as you can. You maximize your positive impact on the world.
But argmax requires a set of possible actions. What are these actions? “Be a superhuman who needs no overhead to turn work into donations” is not a valid action. Given what you can do, taking into account physical and psychological limitations, you maximize positive impact. And this requires cutting corners. If you try your hardest to squeeze every last cent of your life into altruism, this has significant negative effects on you, and thus on your altruism. You might burn out. You might lose effectiveness. So to optimize to the fullest, don’t optimize too hard.
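As a toy numerical sketch of this argmax point (the sustainability curve and every number below are invented for illustration, not drawn from the comment): if long-run output is effort times a sustainability factor that collapses near 100% effort, the maximizer over the feasible effort levels lands strictly below full effort.

```python
# Toy model with made-up numbers: long-run altruistic output as a function of
# how hard you push. Sustainability collapses near 100% effort (burnout, lost
# effectiveness), so the best feasible effort level is not maximum effort.

def sustainability(effort: float) -> float:
    # Fraction of your output you can actually sustain long-run; purely
    # illustrative functional form, collapsing to 0 at effort = 1.0.
    return max(0.0, 1.0 - effort ** 4)

def long_run_output(effort: float) -> float:
    return effort * sustainability(effort)

# argmax needs a set of possible actions: here, feasible effort levels.
feasible_efforts = [i / 100 for i in range(101)]
best = max(feasible_efforts, key=long_run_output)

print(f"best effort level: {best:.2f}")                    # about 0.67, well below 1.0
print(f"output at best:    {long_run_output(best):.3f}")   # about 0.535
print(f"output at 1.00:    {long_run_output(1.0):.3f}")    # 0.000: total burnout
```

The particular curve is fake; the structural claim is only that the argmax over what you can actually sustain is not the argmax an idealized, overhead-free agent would pick.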
So rational “optimize just for altruism” apparently destroys itself. To optimize for altruism, you have to do things that look like they’re selfish.
Coming back a few months later, what did I even mean by “cutting corners”?
Somebody doesn’t understand the difference between the thing and the appearance of the thing, and I can’t tell whether it’s my past self or the hypothetical EAs being discussed.