Your advice boiled down to this: if there is something that makes you unhappy, the only solution is to become batman. Studying child abuse will not make someone who is made unhappy by the thought of it any happier. As for resolving to stop it, what exactly do you recommend? You can’t exactly start going from house to house asking parents if they beat their children.
I sort of agree that that’s what the advice boiled down to, but given the issues around diffusion of responsibility, I almost feel as though the world could use a few more batmen?
I mean, that’s basically what Eliezer has done with his life, right? If I understand correctly, he spent the last ~8 years trying to make the singularity not be horrible: everyone around him appeared to be delusional, a horrifying outcome seemed to be racing towards us with no one taking it seriously, so he saddled up and started optimizing. When he talked about how having something to protect seems to cause people to become better rationalists, I think this sort of emotional crisis and subsequent sense of urgency was exactly what he was talking about. (As far as I’m aware, LessWrong exists because he spent about two years of his life building it, and he built it because it seemed at the time like a good intermediate step along the way to saving the world.)
And yes, it may very well be impossible for any of us to simply “stop child abuse” by any normal method, but Eliezer has an answer to that too: “Shut up and do the impossible”.
Do you think getting the singularity to go right is easier than stopping child abuse? If so (and a positive singularity would solve child abuse, and there aren’t simpler routes to a solution), then you may have just found a new reason for working on a positive singularity. Personally, I think becoming a social worker is almost certainly not the solution to the general problem of child abuse if you look at the bottom line and shut up and multiply. A more obvious solution is to get the government to “fix it somehow”, but many governments are trying to do that and failing. Maybe what’s necessary is fixing government? Or something? It’s a seriously hard problem.
Most obvious ideas probably won’t even work, and maybe you’ll need to research and plan for several months to figure out what’s been tried before and shown to fail, and based on that research you could come up with an angle of attack? Off the top of my head, you might interview social workers and ask them what they think a solution to child abuse would look like, process the implied plans, and then spend the next 10 years of your life on that if you find something tractable. I have no idea what the results of that process would look like, but they might set something in motion that ends child abuse after a while? It seems like that could happen, especially if you cooperate with other rationalists.
But, I mean, I thought the whole reason for creating this website was to help people understand that the world is in deep shit and we seriously have to up our game if any of our real and pressing problems are going to be effectively solved. Maybe I’m misunderstanding something fundamental here? Maybe the connection between rabidchicken’s attitude toward child abuse and Eliezer’s attitude toward a “bad singularity” is illusory? But they seem pretty similar to me.
Also, I think this is why we have things in this community like the Litany of Tarski and posts like “I’m scared”, to talk about processing the emotional repercussions of not mentally flinching when a situation really does appear to be horrible. Trying to “be batman” is hard. No one said it would be easy, but even if it is hard, it might be better than all the alternatives.
If you look at bad things and (after clicking) it seems that the rational response might actually be to “become batman and then fix it”, then it’s kind of weird to be told that your suggestion is silly because you’ve just suggested becoming batman to solve a horrible problem. The right answer to that is simply, “Yes, of course that’s what I’ve proposed. But why is that silly?”
Edited to add: I see that you’re new to commenting. Please accept my apologies for sort of biting your head off there. I don’t want LW to be unfriendly or intimidating… but at the same time I don’t want the culture to drift away from a community ethic of clear-eyed-pursuit-of-goodness and you were massively upvoted there. Boiling things down to “become batman” was insightful, I just didn’t like that you boiled things down to the right conclusion and then, as far as I could tell, threw out the result simply because it reminded you of something from a comic book.
Becoming batman to stop child abuse is not okay, because the easiest way to achieve that goal might just be an Orwellian system of total control. Indeed, the people in government who push for such schemes today may think of themselves as batmen for their respective noble goals. The goal of “protecting the kids” has an especially bad record, probably because the image of a suffering child appeals so strongly to the emotions of politicians and voters alike. Eliezer has sorta explained why something like FAI may be the only goal worth becoming batman for.
If you actually want to protect children, rather than do something which merely looks like protecting them, an Orwellian solution doesn’t look promising to me. There’s a risk that the authorities will end up doing very bad things to children if they have that much unaccountable power.
I suggest that the most stable solution would be finding a reliable, extremely attractive way of teaching empathy. I don’t know if this would take becoming batman—to the extent that it does, it would be a very different sort of batman than it takes to get a positive singularity, though it might be related by way of accelerating CEV.
The most reliable way of teaching empathy is by installing a chip in everyone’s brains.
You seem to envision nice batmen whose subgoal stomp hasn’t yet reached dangerous levels. I’m concerned about what happens when a human becomes so focused on their goal that it overrides niceness.
Thanks for the clearly thought-out reply! I guess the way I see it is, even if “go become batman” is in general something we should want people to strive for, I don’t think it’s actually very helpful advice. I don’t think being Batman is bad; it’s just not something everyone can be.
Thank you for this comment. It’s a very nice illustration of how we CAN actually take on something and try to make the world a better place, instead of feeling powerless. For me, I still debate whether I really want to spend that energy, but I’d rather freely choose “I’m ignoring this in favor of X” than carry a nagging guilt about the state of the world. And even that decision, to set something aside and accept that I don’t want to spend that energy, is a very difficult one. It’s much easier to bottle it up and never quite think about either possibility.
Yeah, I think maybe part of where the tendency to flinch comes from is the implicit recognition that “fixing W” will sometimes take a huge amount of work, and that weighing the scale of the effort against how much you care might lead you to internalize that you don’t actually care about W that much :-(
Being clear with yourself about your priorities (including the “necessary selfishness” that no one will prioritize if you don’t) can be unflattering, or can diverge from other people’s public statements about good and bad. I suspect there are better and worse ways to deal with this, so that you don’t end up emphasizing a cached self of the wrong sort? Maybe it’s better to cache that you’re the sort of person who clarifies their evolving priorities in response to an improved understanding of a changing world and who currently values X, Y, and Z the most, rather than focusing too much on the fact that “I don’t value W enough to do anything substantive about it”.
“I don’t value W enough to do anything substantive about it”.
I tend to think instead “The cost of W is more than I can afford” / “I can buy the even-cooler Q for that price!”
It’s a lot like a financial budget: I can save up for a new computer if it’s important to me. If I’m especially rich, I can just buy one. If I change income brackets or values, it’s important to refactor that budget—if I lose my job, I probably can’t afford a new computer until I get a new job. If my existing computer breaks, I might put aside some normal monthly luxuries or dip into savings to get it replaced faster.
Admittedly, I seem to be unusual in that I can internalize “I’m not willing to pay that price for that goal” very well, and I don’t tend to dwell on it or guilt about it once I’ve made a genuine decision. I’ve long been overweight simply because the benefit of eating pleasantly outweighed any visible gains. I used to feel some guilt, until I worked this out consciously and realized the price of being thin was just not worth paying. Now I’m quite content :)
(Although, ironically, soon after this realization, I got into sports, and so now I’ve changed my values and have a very nice motivation to lose weight—which makes it far more bearable to sacrifice the pleasant eating :))