Okay, if other altruists aren’t motivated by being angry about pain and suffering and wanting to end pain and suffering, how are they motivated?
Ask them; I’m not an altruist. But I heard it may have something to do with the concept of compassion.
I genuinely don’t see how wanting to help people is correlated with ending up killing people.
Historically, it correlates quite well. You want to help the “good” people and in order to do this you need to kill the “bad” people. The issue, of course, is that definitions of “good” and “bad” in this context… can vary, and rather dramatically too.
I think setting up guillotines in the public square is much more likely if you go around saying “I’m the chosen one and I’m going to singlehandedly design a better world”.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte. The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
If I noticed myself causing any death or suffering I would be very sad, and sit down and have a long think about a way to stop doing that.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don’t like, will necessarily cause suffering.
And yet he’s consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation… his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can’t help but think that he is driving valuable people away; having difficult people dominate the conversation can’t be a good thing. I’ve tried to talk to him about this.

Hypothesized failure mode for online forums: Online communities are disproportionately populated by disagreeable people who are driven online because they have trouble making real-life friends. They tend to “win” long discussions because they have more hours to invest in them. Bystanders generally don’t care much about long discussions because it’s an obscure and wordy debate they aren’t invested in, so for most extended discussions, there’s no referee to call out bad conversational behavior. The end result: the bulldog strategy of being the most determined person in the conversation ends up “winning” more often than not.

(To clarify, I’m not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don’t do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
To clarify, I’m not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable.
Really? Their “perspective” appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.
Not just civility and niceness, but affirmative statements. That is, if you’re trying to achieve group epistemic rationality, it is important to come out and say what one actually believes. Statistical learning from a training set of entirely positive or entirely negative examples is known to be extraordinarily difficult; in fact, nigh impossible (modulo “blah blah Solomonoff”) to do in efficient time.
I think a good group norm is, “Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update.” Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they’ve heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
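To put the “uniform distribution” point in concrete Bayesian terms: a critique that hits every hypothesis equally hard cancels out when you renormalize, leaving the posterior exactly where it started, whereas an affirmative statement (a likelihood that actually discriminates between hypotheses) reduces entropy. A minimal sketch, with toy hypotheses and made-up numbers of my own choosing:

```python
# Toy illustration (numbers are invented, not from the thread): Bayesian
# updating over four hypotheses. A "fully general critique" penalizes every
# hypothesis equally, so normalization undoes it and the posterior stays
# uniform (maximum entropy, i.e. zero knowledge). An affirmative,
# discriminating statement actually concentrates the posterior.
import math

hypotheses = ["A", "B", "C", "D"]
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def update(dist, likelihood):
    # Bayes' rule: posterior is proportional to prior times likelihood.
    post = {h: dist[h] * likelihood[h] for h in dist}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def entropy_bits(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

fully_general_critique = {h: 0.5 for h in hypotheses}  # hits everything alike
affirmative_statement = {"A": 0.9, "B": 0.3, "C": 0.1, "D": 0.05}

print(entropy_bits(update(prior, fully_general_critique)))  # 2.0 bits: unchanged
print(entropy_bits(update(prior, affirmative_statement)))   # ~1.33 bits: learned something
```

The numbers are arbitrary; the point is only that evidence has to differentiate between hypotheses before the posterior can move at all.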
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another’s lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I’ve already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don’t have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people’s preferences and I’m not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
I would like a world full of highly rational and happy people cooperating to improve one another’s lives
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.

Let us consider them, one by one.
I want to figure out ways to improve cooperation between people and groups.
This means that the goals of the people and groups will be more effectively realised. It is world-improving if and only if the goals towards which the group works are world-improving.
A group can be expected, on the whole, to work towards goals which appear to be of benefit to the group. The best way to ensure that the goals are world-improving, then, might be to (a) ensure that the “group” in question consists of all intelligent life (and not merely, say, Brazilians) and (b) ensure that the group’s goals are carefully considered and inspected for flaws by a significant number of people.
(b) is probably best accomplished by encouraging voluntary cooperation, as opposed to unquestioning obedience to orders. (a) simply requires ensuring that it is well known that bigger groups are more likely to be successful, and punishing the unfair exploitation of outside groups.
On the whole, I think this is most likely a world-improving goal.
I want to do research on cultural attitudes towards altruism and ways to get more people to be altruistic/charitable
Altruism certainly sounds like a world-improving goal. Historically, there have been a few missteps in this field—mainly when one person proposes a way to get people to be more altruistic, but then someone else implements it in a way that ensures that he reaps the benefit of everyone else’s largesse.
So, likely to be world-improving, but keep an eye on the people trying to implement your research. (Be careful if you implement it yourself—have someone else keep a close eye on you in that circumstance).
I want to try and get LW-style critical thinking classes introduced in schools from an early age so as to raise the sanity waterline
Critical thinking is good. However, again, take care in the implementation; simply teaching students what to write in the exam is likely to do much less good than actually teaching critical thinking. Probably the most important thing to teach students is to ask questions and to think about the answers—and the traditional exam format makes it far too easy to simply teach students to try to guess the teacher’s password.
If implemented properly, likely to be world-improving.
...those are my thoughts on those goals. Other people will likely have different thoughts.
But I try not to be too confident about exactly what a Good world looks like; a) I don’t have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people’s preferences and I’m not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
And these are all very virtuous things to say, but you’re a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
My intuitions say that specialism increases output, so we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say.
“Greetings, Comrade Acty. Today the Collective has decreed that you...” Do these words make your heart skip a beat in joyous anticipation, no matter how they continue?
Have you read “Brave New World”? “1984”? “With Folded Hands”? Do those depict societies you find attractive?
To me, this seems like a happy wonderful place that I would very much like to live in.
Exinanition is an attractive fantasy for some, but personal fantasies are not a foundation to build a society on.
What I can do is think: a lot of aspects of the current world (war, poverty, disease etc) make me really angry and seem like they also hurt other people other than me, and if I were to absolutely annihilate those things, the world would look like a better place to me and would also better satisfy others’ preferences. So I’m going to do that.
You are clearly intelligent, but do you think? You have described the rich intellectual life at your school, but how much of that activity is of the sort that can solve a problem in the real world, rather than a facility at making complex patterns out of ideas? The visions that you have laid out here merely imagine problems solved. People will not do as you would want? Then they will be made to. How? “On pain of death.” How can the executioners be trusted? They will be tested to ensure they use the power well.
How will they be tested? Who tests them? How does this system ever come into existence? I’m sure your imagination can come up with answers to all these questions, that you can slot into a larger and larger story. But it would be an exercise in creative fiction, an exercise in invisible dragonology.
And all springing from “My intuitions say that specialism increases output.”
I’m going to pursue the elimination of suffering until the suffering stops.
Exterminate all life, then. That will stop the suffering.
I’m sure you’re really smart, and will go far. I’m concerned about the direction, though. Right now, I’m looking at an Unfriendly Natural Intelligence.
That’s why I don’t want to make such a society. I don’t want to do it. It is a crazy idea that I dreamed up by imagining all the things that I want, scaled up to 11. It is merely a demonstration of why I feel very strongly that I should not rely on the things I want.
Wait a minute. You don’t want them, or you do want them but shouldn’t rely on what you want?
And I’m not just nitpicking here. This is why people are having bad reactions. On one level, you don’t want those things, and on another you do. Seriously mixed messages.
Also, if you are physically there with your foot on someone’s toe, that triggers your emotional instincts that say that you shouldn’t cause pain. If you are doing things which cause some person to get hurt in some faraway place where you can’t see it, that doesn’t. I’m sure that many of the people who decided to use terrorism as an excuse for NSA surveillance won’t step on people’s toes or hurt any cats. If anything, their desire not to hurt people makes it worse. “We have to do these things for everyone’s own good, that way nobody gets hurt!”
Currently my thought processes go something more like: “When I think about the things that make me happy, I come up with a list like meritocracy and unity and productivity and strong central authority. I don’t come up with things like freedom. Taking those things to their logical conclusion, I should propose a society designed like so… wait… Oh my god that’s terrifying, I’ve just come up with a society that the mere description of causes other people to want to run screaming, this is bad, RED ALERT, SOMETHING IS WRONG WITH MY BRAIN. I should distrust my moral intuitions. I should place increased trust in ideas like doing science to see what makes people happiest and then doing that, because clearly just listening to my moral intuitions is a terrible way to figure out what will make other people happy. In fact, before I do anything likely to significantly change anyone else’s life, I should do some research or test it on a small scale in order to check whether or not it will make them happy, because clearly just listening to what I want/like is a terrible idea.”
I’m not so sure you should distrust your intuitions here. I mean, let’s be frank, the same people who will rave about how every left-wing idea from liberal feminism to state socialism is absolutely terrible, evil, and tyrannical will, themselves, manage to reconstruct most of the same moral intuitions if left alone on their own blogs. I mean, sure, they’ll call it “neoreaction”, but it’s not actually that fundamentally different from Stalinism. On the more moderate end of the scale, you should take account of the fact that anti-state right-wing ideologies in Anglo countries right now are unusually opposed to state and hierarchy across the space of all human societies ever, including present-day ones.
POINT BEING, sometimes you should distrust your distrust of certain intuitions, and ask simply, “How far is this intuition from the mean human across history?” If it’s close, actually, then you shouldn’t treat it as, “Something [UNUSUAL] is wrong with my brain.” The intuition is often still wrong, but it’s wrong in the way most human intuitions are wrong rather than because you have some particular moral defect.
So if the “motivate yourself by thinking about a great world and working towards it” is a terrible option for me because my brain’s imagine-great-worlds function is messed up, then clearly I need to look for an alternative motivation. And “motivate yourself by thinking about clearly evil things like death and disease and suffering and then trying to eliminate them” is a good alternative.
See, the funny thing is, I can understand this sentiment, because my imagine-great-worlds function is messed-up in exactly the opposite way. When I try to imagine great worlds, I don’t imagine worlds full of disciplined workers marching boldly forth under the command of strong, wise, meritorious leadership for the Greater Good—that’s my “boring parts of Shinji and Warhammer 40k” memories.
Instead, my “sample great worlds” function outputs largely equal societies in which people relate to each other as friends and comrades, the need to march boldly forth for anything when you don’t really want to has been long since abolished, and people spend their time coming up with new and original ways to have fun in the happy sunlight, while also re-terraforming the Earth, colonizing the rest of the Solar System, and figuring out ways to build interstellar travel (even for digitized uploads) that can genuinely survive the interstellar void to establish colonies further out.

I consider this deeply messed-up because everyone always tells me that their lives would be meaningless if not for the drudgery (which is actually what the linked post is trying to refute).
I am deeply disturbed to find that a great portion of “the masses” or “the real people, outside the internet” seem to, on some level, actually feel that being oppressed and exploited makes their lives meaningful, and that freedom and happiness are value-destroying, and that this is what’s at the root of all that reactionary rhetoric about “our values” and “our traditions”… but I can’t actually bring myself to say that they ought to be destroyed for being wired that way.
I just kinda want some corner of the world to have your and my kinds of wiring, where Progress is supposed to achieve greater freedom, happiness, and entanglement over time, and we can come up with our own damn fates rather than getting terminally depressed because nobody forced one on us.
Likewise, I can imagine that a lot of these goddamn Americans are wired in such a way that “being made to do anything by anyone else, ever” seems terminally evil to them. Meh, give them a planetoid.
On some level, you do need a motivation, so it would be foolish to say that anger is a bad reason to do things. I would certainly never tell you to do only things you are indifferent about.
On another level, though, doing things out of strong anger causes you to ignore evidence, think short term, ignore collateral damage, etc. just as much as doing things because they make you happy does. You think that describing the society that will make you feel happy makes people run screaming? Describing the society that would alleviate your anger will make people run screaming too—in fact it already has made people run screaming in this very thread.
Or at least, it has a bad track record in the real world. Look at the things that people have done because they are really angry about terrorism. And for one level less meta, look at the terrorism that people have done because they are so angry about something.

Of course, while most people would not want to live in BNW, most characters in BNW would not want to live in our society.
My intuitions say that specialism increases output, so we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say.
To me, this seems like a happy wonderful place that I would very much like to live in. Unfortunately, everyone else seems to strongly disagree.
I think there’s an implicit premise or two that you may have mentally included but failed to express, running along the lines of:
The all-controlling state is run by completely benevolent beings who are devoted to their duty and never make errors.
Sans such a premise, one lazy bureaucrat cribbing his cubicle neighbor’s allocations, or a sloppy one switching the numbers on two careers, can cause a hell of a lot of pain by assigning an inappropriate set of tasks for people to do. Zero say and the death penalty for disobedience then makes the pain practically irremediable. A lot of the reason for weak and ineffective government is the attempt to mitigate and limit government’s ability to do terribly, terribly wicked things, because governments are often highly skilled at doing terribly, terribly wicked things, and in unique positions to do so, and can do so by minor accident. You seem to have ignored the possibility of anything going wrong when following your intuition.
Moreover, there’s a second possible implicit premise:
These angels hold exactly and only the values shared by all mankind, and correct knowledge about everything.
Imagine someone with different values or beliefs in charge of that all-controlling state with the death penalty. For instance, I have previously observed that Boko Haram has a sliver of a valid point in their criticism of Western education when noting that it appears to have been a major driver in causing Western fertility rates to drop below replacement and show no sign of recovery. Obviously you can’t have a wonderful future full of happy people if humans have gone extinct; therefore the Boko Haram state bans Western education on pain of death. As for those already poisoned by it, such as you: you will spend your next ten years remedially bearing and rearing children, and you are henceforth forbidden access to any and all reading material beyond instructions on diaper packaging. Boko Haram is confident that this is the optimal career for you and that they’re maximizing the integral of human happiness over time, however much you may scream in the short term at the idea.
With such premises spelled out, I predict people wouldn’t object to your ideal world so much as they’d object to the grossly unrealistic prospect. But without such, you’re proposing a totalitarian dictatorship and triggering a hell of a lot of warning signs and heuristics and pattern-matching to slavery, tyranny, the Soviet Union, and various other terrible bad things where one party holds absolute power to tell other people how to live their life.
“But it’s a benevolent dictatorship”, I imagine you saying. Pull the other one, it has bells on. The neoreactionaries at least have a proposed incentive structure to encourage the dictator to be benevolent in their proposal to bring back monarchy (TL;DR: taxes go into the king’s purse, giving the king a long planning horizon). What have you got? Remember, you are one in seven billion people, you will almost certainly not be in charge of this all-powerful state if it’s ever implemented, and when you do your safety design you should imagine it being in the hands of randoms at the least, and of enemies if you want to display caution.

There are reasons to suspect the tests would not work. “It would be nice to think that you can trust powerful people who are aware that power corrupts. But this turns out not to be the case.” (Content Note: killing, mild racism.)
If you are “procrastinate-y”, you wouldn’t be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult, and it is unlikely that you would be able to do it; you would soon be dead in this state.
An ideology would just bias my science and make me worse.
I don’t know you well enough to say, but it’s quite easy to pretend that one has no ideology.
For clear thinking it’s very useful to understand one’s own ideological positions.
There’s also a difference between doing science and scientism, which is about banner-wearing.
Oh, I definitely have some kind of inbuilt ideology—it’s just that right now, I’m consciously trying to suppress/ignore it. It doesn’t seem to converge with what most other humans want. I’d rather treat it as a bias, and try and compensate for it, in order to serve my higher level goals of satisfying people’s preferences and increasing happiness and decreasing suffering and doing correct true science.

Ignoring something and working around a bias are two different things.
we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say. Nobody would have property, you would just contribute towards the state of human happiness when the state told you to and then you would be assigned the goods you needed by the state. To me, this seems like a happy wonderful place that I would very much like to live in
Why do you call inhabitants of such a state “citizens”? They are slaves.
To me, this seems like a happy wonderful place that I would very much like to live in
Interesting. So you would like to be a slave.

...and do you understand why?
Unfortunately, everyone else seems to strongly disagree.
Don’t mind Lumifer. He’s one of our resident Anti-Spirals.

But, here’s a question: if you’re angry at the Bad, why? Where’s your hope for the Good?

Of course, that’s something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
Burning fury does, and if it makes me help people… whatever works, right?
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
I’m just a kid who wants to grow up and study social science and try and help people.
Maybe :-) The reason you’ve met a certain… lack of enthusiasm about your anger for good causes is because you’re not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let’s just say, it does not always lead to good outcomes.

If you stick around long enough, we shall see :-)