Hot take: Bob should be bullying Alice to do less so she doesn’t burn out.

hmm I think Alice wants to wrestle in that puddle of mud?
Like, these two sections are basically how Alice would respond to Bob saying “hey Alice, you’re going to burn out”:
I think if this was your true objection—your crux—then you would have probably put a lot of work in to understand burnout. Some of the hardest-working people have done that work—and never burned out. Instead, you seem to treat it like a magical worst possible outcome, which provides a universal excuse to never do anything that you don’t want to do. How good a model do you have of what causes burnout? (I notice that many people think vacations treat burnout, which is probably a sign they haven’t looked at the research.) Surely there’s not a black-and-white system where working slightly too hard will instantly disable you forever; maybe there’s a third option where you do more but you also take some anti-burnout precaution. If I really believed I couldn’t do more without risking burnout, and that was the most important factor preventing me from fulfilling my deeply held ethical beliefs, I think I would have a complex model of what sorts of risk factors create what sort of probability of burnout, and whether there’s different kinds of burnout or different severity levels, and what I could do to guard against it.
What if the techniques I use to avoid burnout—like meditating, surrounding myself with people who work similarly hard so that my brain feels it’s normal, eating a really healthy diet, coworking or getting support on tasks that I’m aversive about, practising lots of instrumental rationality techniques, frequently reminding myself that I’m living consistently with my values, avoiding guilt-based motivation, exercising regularly, seeing a therapist proactively to work on my emotional resilience, and all that—would actually completely work for you, and you’d be able to work super hard without burning out at all, and you’d be perfectly capable of changing yourself if you tried?
It’s hard to judge because Alice is a fictional character in a stylized dialogue that the author says is intended to be a bad implementation. But in the real world, if someone talked the way Alice does (about herself and towards Bob), I’d place good money on burnout.
Probably Bob isn’t actually the right person to raise this issue with Alice, because she doesn’t respect him enough. But I don’t think pushing back on her would be worse than what she’s already doing to him.
(I wrote way too much in this comment while waiting for my lentils to finish simmering; I apologise!)
I don’t think it’s necessarily intended to be bad or excessively stylized, but it’s intended to be rude for sure. I didn’t want to write a preachy thing!
Three kinda main reasons that I made Alice suck, deliberately:
Firstly, later in my sequence I want to talk about ways that Alice could achieve her goals better.
Secondly, I kind of want to be able to sit with the awkward dissonant feeling of, “huh, Alice is rude and mean and making me feel bad and maybe she shouldn’t say those things, and ALSO, Alice being an infinitely flawed person would still not actually be a good justification for me to save fewer lives than I think I can save if I try (or otherwise fail according to my own values and my own ethics), and hm, holding those two ideas in juxtaposition feels uncomfy for me, let’s poke that!”
I feel like a lot of truthseeking mindsets involve getting comfy with that sorta “huh, this juxtaposition is super uncomfy and I’m going to sit with it anyway” kinda mental state.
Thirdly, I have a voice in my head that gets WAY meaner than Alice! I totally sometimes have thoughts like, “Wow, I’m such a worthless hypocrite for preaching EA things online even though I don’t have as much impact as I could if I tried harder, I’m totally just lying to myself about thinking I’m burned out because I’m lazy, I should go flog myself in penance!*”
*mild hyperbole for humour
I can respond by thinking something like, “Go away, stupid voice in my head, you’re rude and mean and I don’t want to listen to you.” I could also respond by deliberately seeking out lots of reassuring blog posts that say “burnout is super bad and you’re morally obligated to be happy!” and try to pretend that I’m definitely not engaging in any confirmation bias, no, definitely not, I definitely feel reassured by all of these definitely-true posts about the thing I really wanted to believe anyway. But maybe there’s a way better thing where I can think, “Nope, I made really good models and rigorously tested this, so I’m actually for real confident that I can’t be more ethical than I currently am, even after I looked into the dark and really asked myself the question and prepared myself to discover information that I might not like, and so I don’t have to listen to this mean voice because I know that it’s wrong.”
But as long as I haven’t ACTUALLY looked into the dark, or so long as I’ve been biased and incomplete in my investigations, I’ll always have the little doubt in the back of my mind that says, “well, maybe Alice is right”—so I’ll never be able to get rid of the mean Alice voice. There’s about a thousand different rationality posts I could link here; generally LessWrong is a good place to acquire a gut feeling for “huh, just professing a belief sure does feel different to actually believing something because you checked”.
I think that circles us back to your hot take: if Bob makes really good models about his capabilities and has really sought truth about all this, then maybe he’ll BOTH be better able to refute Alice’s criticisms AND even be able to persuade Alice to act more sustainably. But maybe he actually has to really do that work, and maybe that work isn’t possible to do properly unless he’s really truthseeking, and maybe really truthseeking would require him to also be okay with learning “I can and should do more” if that were to turn out to be true. Knowing that he was capable of concluding “I can and should do more” (if that were true) might be a prerequisite to being able to convince Alice that he legitimately reached the conclusion “I can’t or shouldn’t do more”.
And if he actually does the truthseeking, then maybe he should bully Alice to do less! The interesting question for me then is: can I get myself to be curious about whether that’s true, like really actually curious, like the kind of curious where I want to believe the truth no matter what the truth turns out to be, because the truth matters?
I agree this set of questions is really important, and shouldn’t be avoided just because it’s uncomfortable. And I really appreciate your investment in truthseeking even when it’s hard.
But Alice doesn’t seem particularly truthseeking to me here, and the voice in your head sounds worse. Alice sounds like she has made up her mind and is attempting to browbeat people into agreeing with her. Nor does she seem curious about why her approach provokes such indignation, which makes me further doubt that this is about the pursuit of knowledge for her.
One reason people react badly to these tactics: rejecting assholes out of hand when they try to extract value from you is an important defense mechanism. If you force people to drop that defense, you make them vulnerable to all kinds of malware (and you can’t say “only drop it for good ideas”, because the decision has to be made before you know whether the idea is good or not; that’s the point). If Alice is going to push this hard about responsibility to the world, she needs to put more thought into her techniques.
Maybe this will be covered in a later post but I have to respond to what’s in front of me now.
yep, fair! Do you think the point would come across better if Alice was nice? (I wasn’t sure I could make Alice nice without an extra few thousand words, but maybe someone more skilful could.)
I think a lot of us have voices in our heads that are meaner than Alice, so if you think Alice is going to cause burnout, I think we need a response that is better than Bob’s (and better than “I’m just going to reject all assholes out of hand”, because I can’t use that on myself!)
I think being nicer would make truthseeking easier but isn’t truthseeking in and of itself.
I also think it’s a mistake to assume your inner Alice would shut up if only you came up with a good enough argument. The loudest alarm is probably false. Truthseeking might be useful in convincing other parts of your brain to stop giving Alice so much weight, but I would include “is Alice updating in response to facts?” as part of that investigation.