Which of these seems like it will inevitably lead to setting up guillotines in the public square?
That thing:
The reason I want to fix the world is, well, the world contains stuff like war, and poverty, and people who buy plasma TVs for their dog’s kennel instead of donating to charity, and kids who can’t get an education because they’re busy fetching filthy water and caring for their siblings who are dying from drinking the dirty water, and people who abuse kids or rape people or blow up civilians, and malaria and cancer and dementia, and lack of funding for people who are trying to cure diseases and stop ageing, and sexism and racism and homophobia and transphobia, and preachers who help spread AIDS by trying to limit access to contraception, and all of those things make me REALLY REALLY ANGRY.
Besides, we’re talking about “more likely”, not “inevitably”.
There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it’s important to understand the failure modes of egalitarian, altruistic movements.
The French Revolution and the Russian Revolution / Soviet Union both ran into these failure modes and ended up killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.
These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don’t think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don’t think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.
Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement’s mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.
Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That’s why some people are cynical about altruistic political attitudes.
You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn’t mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution, with its guillotines, is admittedly a rare sort of event. But if pathological altruism can result in such large disasters, then it’s quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.
“So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.”
That is exactly why I want to study social science. I want to do lots of experiments and research and reading and talking and thinking before I dare try and do any world-changing. That’s why I think social science is important and valuable, and we should try very hard to be rational and careful when we do social science, and then listen to the conclusions. I think interventions should be well-thought-through, evidence-based, and tried and observed on a small scale before being implemented on a large scale. Thinking through your ideas about laws/policies/interventions and gathering evidence on whether they might work or not—that’s the kind of social science that I think is important and the kind I want to do.
You’re ignoring the rather large pachyderm in the room which goes by the name of Values.
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price. Most changes in the world have both costs and benefits; you need to balance them to decide whether a change is worth it, and the balancing necessarily involves deciding what is more important and what is less important.
For example, imagine a trade-off: you can decrease the economic inequality in your society by X% by paying the price of slowing down the economic growth by Y%. Science won’t tell you whether that price is acceptable—you need to ask your values about it.
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price.
Disagreements including this one? It sounds as though you are saying that, in a conversation such as this one, you are more focused on working to achieve your values than trying to figure out what’s true about the world… like, say, Arthur Chu. Am I reading you correctly in supporting something akin to Arthur Chu’s position, or do I misunderstand?
Given how irrational people can be about politics, I’d guess that in many cases apparent “value” differences boil down to people being mindkilled in different ways. As rationalists, our goal is to have a calm, thoughtful, evidence-based discussion and figure out what’s true. Building a map and unmindkilling one another is a collaborative project.
There are times when there is a fundamental value difference, but my feeling is that this is the possibility to be explored last. And if you do want to explore it, you should ask clarifying values questions (like “do you give the harms from a European woman who is raped and a Muslim woman who is raped equal weight?”) in order to suss out the precise nature of the value difference.
Anyway, if you do agree with Arthur Chu that the best approach is to charge ahead imposing your values, why are you on Less Wrong? There’s an entire internet out there of people having Arthur Chu style debates you could join. Less Wrong is a tiny region of the internet where we have Scott Alexander style debates, and we’d like to keep it that way.
you are more focused on working to achieve your values than trying to figure out what’s true about the world
That’s a false dichotomy. Epistemic rationality and working to achieve your values are largely orthogonal and are not opposed to each other. In fact, epistemic rationality is useful for achieving your values, by way of instrumental rationality.
I’d guess that in many cases apparent “value” differences boil down to people being mindkilled in different ways.
So you do not think that many people have sufficiently different and irreconcilable values?
I wonder how you are going to distinguish “true” values from “mindkill-generated” values. Take some random ISIS fighter in Iraq: what are his “true” values?
my feeling is that this is the possibility to be explored last.
I disagree; I think it’s useful to figure out value differences before spending a lot of time on figuring out whether we agree about how the world works.
...where we have...
Who’s that “we”? It is a bit ironic that you felt the need to use the pseudonymous handle to claim that you represent the views of all LW… X-)
In my (admittedly limited, I’m young) experience, people don’t disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists. I’ve never seen people arguing about “the tradeoff is worth it” followed by “no it isn’t”. I’ve seen a lot of arguments about “We should decrease inequality with policy X!” followed by “But that will slow economic growth!” followed by “No it won’t! Inequality slows down economic growth!” followed by “Inequality is necessary for economic growth!” followed by “No it isn’t!” Like with Obamacare—I didn’t hear any Republicans saying “the tradeoff of raising my taxes in return for providing poor people with healthcare is an unacceptable tradeoff” (though I am sometimes uncharitable and think that some people are just selfish and want their taxes to stay low at any cost), I heard a lot of them saying “this policy won’t increase health and long life and happiness the way you think it will”.
“Is this tradeoff worth it?” is, indeed, a values question and not a scientific question. But scientific questions (or at least, factual questions that you could predict the answer to and be right/wrong about) could include: Will this policy actually definitely cause the X% decrease in inequality? Will this policy actually definitely cause the Y% slowdown in economic growth? Approximately how large is X? Approximately how much will a Y% slowdown affect the average household income? How high is inflation likely to be in the next few years? Taking that expected rate of inflation into account, what kind of things would the average family no longer be able to afford / not become able to afford, presuming the estimated decrease in average household income happens? What relation does income have to happiness anyway? How much unhappiness does inequality cause, and how much unhappiness do economic recessions cause? Does a third option (beyond implement this policy / don’t implement it) exist, like implementing the policy but also implementing another policy that helps speed economic growth, or implementing some other radical new idea? Is this third option feasible? Can we think up any better policies which we predict might decrease inequality without slowing economic growth? If we set a benchmark that would satisfy our values, like percentage of households able to afford Z valuable-and-life-improving item, then which policy is likely to better satisfy that benchmark—economic growth so that more people on average can afford Z, or inequality reduction so that more poor people become average enough to afford Z?
But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind. We could take a number of left-wing policies, and a number of right-wing policies, and survey members of the “other tribe” on “why do you disagree with this policy?” and give them options to choose between like “I think reducing inequality is more important than economic growth” and “I don’t think reducing inequality will decrease economic growth, I think it will speed it up”. I think there are a lot of issues where people disagree on facts.
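For illustration only, here is a minimal sketch (in Python, with entirely invented policies, answer options, and responses) of how the answers from such a survey might be tallied into “values disagreement” versus “factual disagreement” for each policy:

```python
# Hypothetical sketch only: tally invented survey responses to see whether
# disagreement with each policy is mostly values-based or mostly factual.
from collections import Counter

# Each response is (policy, reason the respondent chose); all data is made up.
responses = [
    ("reduce_inequality_policy", "values: growth matters more than equality"),
    ("reduce_inequality_policy", "fact: it won't actually reduce inequality"),
    ("reduce_inequality_policy", "fact: it won't actually reduce inequality"),
    ("healthcare_policy",        "fact: it won't improve health outcomes"),
    ("healthcare_policy",        "values: taxes should stay low regardless"),
]

for policy in sorted({p for p, _ in responses}):
    kinds = Counter(reason.split(":")[0] for p, reason in responses if p == policy)
    total = sum(kinds.values())
    summary = ", ".join(f"{kind}: {count / total:.0%}" for kind, count in kinds.items())
    print(f"{policy} -> {summary}")
```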
Like prisons—you have people saying “prisons should be really nasty and horrid to deter people from offending”, and you have people saying “prisons should be quite nice and full of education and stuff so that prisoners are rehabilitated and become productive members of society and don’t reoffend”, and both of those people want to bring the crime rate down, but what is actually best at bringing crime rates down—nasty prisons or nice prisons? Isn’t that a factual question, and couldn’t we do some science (compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it’s like versus kids who visit the nicer prison versus a control group who don’t visit a prison and then 10 years later see what percentage of each group ended up going to prison) to see who is right? And wouldn’t doing the science be way better than ideological arguments about “prisoners are evil people and deserve to suffer!” versus “making people suffer is really mean!” since what we actually all want and agree on is that we would like the crime rate to come down?
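Again purely as a hedged illustration of what the “do some science” step could look like (all counts below are invented, and a real study would need randomised assignment, matched cohorts, and ethical review), one might compare reoffending rates across the three hypothetical prison regimes with a chi-square test:

```python
# Hypothetical sketch: compare invented reoffending counts across three prison regimes.
from scipy.stats import chi2_contingency

# rows: [reoffended, did not reoffend] within ten years of release (invented numbers)
counts = {
    "nasty":   [120, 180],
    "nice":    [90, 210],
    "control": [105, 195],
}

chi2, p_value, dof, _expected = chi2_contingency(list(counts.values()))
for regime, (reoffended, clean) in counts.items():
    print(f"{regime}: reoffending rate = {reoffended / (reoffended + clean):.0%}")
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_value:.3f}")
```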
So we should ask the scientific question: “Which policies are most likely to lead to the biggest reductions in inequality and crime and the most economic growth, keep the most members of our population in good health for the longest, and provide the most cost-efficient and high-quality public services?” If we find the answer, and some of those policies seem to conflict, then we can consult our values to see what tradeoff we should make. But if we don’t do the science first, how do we even know what tradeoff we’re making? Are we sure the tradeoff is real / necessary / what we think it is?
In other words, a question of “do we try an intervention that costs £10,000 and is 100% effective, or do we do the 80% effective intervention that costs £8,000 and spend the money we saved on something else?” is a values question. But “given £10,000, what’s the most effective intervention we could try that will do the most good?” is a scientific question and one that I’d like to have good, evidence-based answers to. “Which intervention gives the most improvement units per money unit?” is a scientific question and you could argue that we should just ask that question and then do the optimal intervention.
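To make the “improvement per money unit” arithmetic concrete, here is a trivial sketch with invented interventions and effectiveness numbers; choosing the criterion is still a values decision, but computing it is the factual part:

```python
# Hypothetical sketch: rank invented interventions by effectiveness per pound spent.
interventions = {
    "A": {"cost_gbp": 10_000, "effectiveness": 1.00},
    "B": {"cost_gbp": 8_000,  "effectiveness": 0.85},
    "C": {"cost_gbp": 25_000, "effectiveness": 0.95},
}

ranked = sorted(
    interventions.items(),
    key=lambda item: item[1]["effectiveness"] / item[1]["cost_gbp"],
    reverse=True,
)
for name, d in ranked:
    per_thousand = 1_000 * d["effectiveness"] / d["cost_gbp"]
    print(f"Intervention {name}: {per_thousand:.3f} effectiveness units per £1,000")
```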
In my (admittedly limited, I’m young) experience, people don’t disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists.
The solution to this problem is to find smarter people to talk to.
We could resolve this by doing an experiment
Experiment? On live people? Cue in GlaDOS :-P
This was a triumph!
I’m making a note here: “Huge success!!”
It’s hard to overstate my satisfaction.
Aperture Science:
We do what we must, because we can.
For the good of all of us. Except the ones who are dead.
But there’s no sense crying over every mistake.
You just keep on trying till you run out of cake.
And the science gets done. And you make a neat gun
for the people who are still alive.
It sounded to me like she recommended a survey. Do you consider surveys problematic?
Surveys are not experiments and Acty is explicitly talking about science with control groups, etc. E.g.
compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it’s like versus kids who visit the nicer prison versus a control group who don’t visit a prison and then 10 years later see what percentage of each group ended up going to prison
According to every IRB I’ve been in contact with, they are. Here’s Cornell’s, for example.
I’m talking common sense, not IRB legalese.
According to the US Federal code, a home-made pipe bomb is a weapon of mass destruction.
A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.
There are many questions about humans that I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it’s not nothing, and might be worth doing, especially in the absence of a more-correlated test we can do given our technology, resources, and ethics.
What I had in mind was the difference between passive observation and actively influencing the lives of subjects. I would consider “surveys” to be observation and “experiments” to be or contain active interventions. Since the context of the discussion is kinda-sorta ethical, this difference is meaningful.
What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?
I am not sure where this question is coming from. I am not suggesting any particular studies or ways of conducting them.
Maybe it’s worth going back to the post from which this subthread originated. Acty wrote:
If we set a benchmark that would satisfy our values … then which policy is likely to better satisfy that benchmark...? But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind.
First, Acty is mistaken in thinking that a survey will settle the question of which policy will actually satisfy the value benchmark. We’re talking about real consequences of a policy and you don’t find out what they are by conducting a public poll.
And second, if you do want to find the real consequences of a policy, you do need to run an intervention (aka an experiment) -- implement the policy in some limited fashion and see what happens.
Oh, I guess I misunderstood. I read it as “We should survey to determine whether terminal values differ (e.g. ‘The tradeoff is not worth it’) or whether factual beliefs differ (e.g. ‘There is no tradeoff’)”
But if we’re talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time, and properly run, can be low-impact and extremely informative.
Yep :-) That’s why GlaDOS made an appearance in this thread :-D
Failure often comes with worse consequences than just an unchanged status quo.
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you’re kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to being the world’s dominant power a few centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it’s because the leftist creed is so anti-power that leftists don’t carefully think through the incentives for institutions to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that’s why these leaders are so quick to go for the throats of society’s intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn’t exist doesn’t work. Eliminating it doesn’t work because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like “let’s kill all the avaricious humans” is themselves avaricious at some level. And by trying to put this plan into action, they’re creating a new “defect/defect” equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
Okay, if other altruists aren’t motivated by being angry about pain and suffering and wanting to end pain and suffering, how are they motivated?
Ask them, I’m not an altruist. But I heard it may have something to do with the concept of compassion.
I genuinely don’t see how wanting to help people is correlated with ending up killing people.
Historically, it correlates quite well. You want to help the “good” people and in order to do this you need to kill the “bad” people. The issue, of course, is that definitions of “good” and “bad” in this context… can vary, and rather dramatically too.
I think setting up guillotines in the public square is much more likely if you go around saying “I’m the chosen one and I’m going to singlehandedly design a better world”.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
If I noticed myself causing any death or suffering I would be very sad, and sit down and have a long think about a way to stop doing that.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don’t like, will necessarily cause suffering.
And yet he’s consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation… his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can’t help but think that he is driving valuable people away; having difficult people dominate the conversation can’t be a good thing. I’ve tried to talk to him about this.
Hypothesized failure mode for online forums: Online communities are disproportionately populated by disagreeable people who are driven online because they have trouble making real-life friends. They tend to “win” long discussions because they have more hours to invest in them. Bystanders generally don’t care much about long discussions because it’s an obscure and wordy debate they aren’t invested in, so for most extended discussions, there’s no referee to call out bad conversational behavior. The end result: the bulldog strategy of being the most determined person in the conversation ends up “winning” more often than not.
(To clarify, I’m not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don’t do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
To clarify, I’m not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable.
Really? Their “perspective” appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.
Not just civility and niceness, but affirmative statements. That is, if you’re trying to achieve group epistemic rationality, it is important to come out and say what you actually believe. Statistical learning from a training set of entirely positive or entirely negative examples is known to be extraordinarily difficult; in fact, it is nigh impossible (modulo “blah blah Solomonoff”) to do in efficient time.
I think a good group norm is, “Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update.” Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they’ve heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
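As a toy illustration of the “positive examples only” point above (this is just an analogy, with made-up hypotheses, not a claim about any particular learner):

```python
# Toy sketch: with only positive examples, several hypotheses stay indistinguishable;
# a single stated negative example prunes the hypothesis space considerably.
positive_examples = {2, 4, 8, 16}

hypotheses = {
    "powers of two": lambda n: n > 0 and (n & (n - 1)) == 0,
    "even numbers":  lambda n: n % 2 == 0,
    "all integers":  lambda n: True,
}

consistent = [name for name, h in hypotheses.items()
              if all(h(x) for x in positive_examples)]
print("Consistent with the positive examples alone:", consistent)

negative_example = 6  # someone affirmatively states "6 is NOT in the set"
still_consistent = [name for name, h in hypotheses.items()
                    if all(h(x) for x in positive_examples) and not h(negative_example)]
print("Consistent once the negative example is stated:", still_consistent)
```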
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another’s lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I’ve already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don’t have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people’s preferences and I’m not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
I would like a world full of highly rational and happy people cooperating to improve one another’s lives
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.
I want to figure out ways to improve cooperation between people and groups.
This means that the goals of the people and groups will be more effectively realised. It is world-improving if and only if the goals towards which the group works are world-improving.
A group can be expected, on the whole, to work towards goals which appear to be of benefit to the group. The best way to ensure that the goals are world-improving, then, might be to (a) ensure that the “group” in question consists of all intelligent life (and not merely, say, Brazilians) and (b) the groups’ goals are carefully considered and inspected for flaws by a significant number of people.
(b) is probably best accomplished by encouraging voluntary cooperation, as opposed to unquestioning obedience of orders. (a) simply requires ensuring that it is well-known that bigger groups are more likely to be successful, and punishing the unfair exploitation of outside groups.
On the whole, I think this is most likely a world-improving goal.
I want to do research on cultural attitudes towards altruism and ways to get more people to be altruistic/charitable
Altruism certainly sounds like a world-improving goal. Historically, there have been a few missteps in this field—mainly when one person proposes a way to get people to be more altruistic, but then someone else implements it and does so in a way that ensures that he reaps the benefit of everyone else’s largesse.
So, likely to be world-improving, but keep an eye on the people trying to implement your research. (Be careful if you implement it yourself—have someone else keep a close eye on you in that circumstance).
I want to try and get LW-style critical thinking classes introduced in schools from an early age so as to raise the sanity waterline
Critical thinking is good. However, again, take care in the implementation; simply teaching students what to write in the exam is likely to do much less good than actually teaching critical thinking. Probably the most important thing to teach students is to ask questions and to think about the answers—and the traditional exam format makes it far too easy to simply teach students to try to guess the teacher’s password.
If implemented properly, likely to be world-improving.
...those are my thoughts on those goals. Other people will likely have different thoughts.
But I try not to be too confident about exactly what a Good world looks like; a) I don’t have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people’s preferences and I’m not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
And these are all very virtuous things to say, but you’re a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
My intuitions say that specialism increases output, so we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say.
“Greetings, Comrade Acty. Today the Collective has decreed that you...” Do these words make your heart skip a beat in joyous anticipation, no matter how they continue?
Have you read “Brave New World”? “1984”? “With Folded Hands”? Do those depict societies you find attractive?
To me, this seems like a happy wonderful place that I would very much like to live in.
Exinanition is an attractive fantasy for some, but personal fantasies are not a foundation to build a society on.
What I can do is think: a lot of aspects of the current world (war, poverty, disease etc) make me really angry and seem like they also hurt other people other than me, and if I were to absolutely annihilate those things, the world would look like a better place to me and would also better satisfy others’ preferences. So I’m going to do that.
You are clearly intelligent, but do you think? You have described the rich intellectual life at your school, but how much of that activity is of the sort that can solve a problem in the real world, rather than a facility at making complex patterns out of ideas? The visions that you have laid out here merely imagine problems solved. People will not do as you would want? Then they will be made to. How? “On pain of death.” How can the executioners be trusted? They will be tested to ensure they use the power well.
How will they be tested? Who tests them? How does this system ever come into existence? I’m sure your imagination can come up with answers to all these questions, that you can slot into a larger and larger story. But it would be an exercise in creative fiction, an exercise in invisible dragonology.
And all springing from “My intuitions say that specialism increases output.”
I’m going to pursue the elimination of suffering until the suffering stops.
Exterminate all life, then. That will stop the suffering.
I’m sure you’re really smart, and will go far. I’m concerned about the direction, though. Right now, I’m looking at an Unfriendly Natural Intelligence.
That’s why I don’t want to make such a society. I don’t want to do it. It is a crazy idea that I dreamed up by imagining all the things that I want, scaled up to 11. It is merely a demonstration of why I feel very strongly that I should not rely on the things I want.
Wait a minute. You don’t want them, or you do want them but shouldn’t rely on what you want?
And I’m not just nitpicking here. This is why people are having bad reactions. On one level, you don’t want those things, and on another you do. Seriously mixed messages.
Also, if you are physically there with your foot on someone’s toe, that triggers your emotional instincts that say that you shouldn’t cause pain. If you are doing things which cause some person to get hurt in some faraway place where you can’t see it, that doesn’t. I’m sure that many of the people who decided to use terrorism as an excuse for NSA surveillance won’t step on people’s toes or hurt any cats. If anything, their desire not to hurt people makes it worse. “We have to do these things for everyone’s own good, that way nobody gets hurt!”
Currently my thought processes go something more like: “When I think about the things that make me happy, I come up with a list like meritocracy and unity and productivity and strong central authority. I don’t come up with things like freedom. Taking those things to their logical conclusion, I should propose a society designed like so… wait… Oh my god that’s terrifying, I’ve just come up with a society that the mere description of causes other people to want to run screaming, this is bad, RED ALERT, SOMETHING IS WRONG WITH MY BRAIN. I should distrust my moral intuitions. I should place increased trust in ideas like doing science to see what makes people happiest and then doing that, because clearly just listening to my moral intuitions is a terrible way to figure out what will make other people happy. In fact, before I do anything likely to significantly change anyone else’s life, I should do some research or test it on a small scale in order to check whether or not it will make them happy, because clearly just listening to what I want/like is a terrible idea.”
I’m not so sure you should distrust your intuitions here. I mean, let’s be frank, the same people who will rave about how every left-wing idea from liberal feminism to state socialism is absolutely terrible, evil, and tyrannical will, themselves, manage to reconstruct most of the same moral intuitions if left alone on their own blogs. I mean, sure, they’ll call it “neoreaction”, but it’s not actually that fundamentally different from Stalinism. On the more moderate end of the scale, you should take account of the fact that anti-state right-wing ideologies in Anglo countries right now are unusually opposed to state and hierarchy across the space of all human societies ever, including present-day ones.
POINT BEING, sometimes you should distrust your distrust of certain intuitions, and ask simply, “How far is this intuition from the mean human across history?” If it’s close, actually, then you shouldn’t treat it as, “Something [UNUSUAL] is wrong with my brain.” The intuition is often still wrong, but it’s wrong in the way most human intuitions are wrong rather than because you have some particular moral defect.
So if the “motivate yourself by thinking about a great world and working towards it” is a terrible option for me because my brain’s imagine-great-worlds function is messed up, then clearly I need to look for an alternative motivation. And “motivate yourself by thinking about clearly evil things like death and disease and suffering and then trying to eliminate them” is a good alternative.
See, the funny thing is, I can understand this sentiment, because my imagine-great-worlds function is messed-up in exactly the opposite way. When I try to imagine great worlds, I don’t imagine worlds full of disciplined workers marching boldly forth under the command of strong, wise, meritorious leadership for the Greater Good—that’s my “boring parts of Shinji and Warhammer 40k” memories.
Instead, my “sample great worlds” function outputs largely equal societies in which people relate to each-other as friends and comrades, the need to march boldly forth for anything when you don’t really want to has been long-since abolished, and people spend their time coming up with new and original ways to have fun in the happy sunlight, while also re-terraforming the Earth, colonizing the rest of the Solar System, and figuring out ways to build interstellar travel (even for digitized uploads) that can genuinely survive the interstellar void to establish colonies further-out.
I am deeply disturbed to find that a great portion of “the masses” or “the real people, outside the internet” seem to, on some level, actually feel that being oppressed and exploited makes their lives meaningful, and that freedom and happiness is value-destroying, and that this is what’s at the root of all that reactionary rhetoric about “our values” and “our traditions”… but I can’t actually bring myself to say that they ought to be destroyed for being wired that way.
I just kinda want some corner of the world to have your and my kinds of wiring, where Progress is supposed to achieve greater freedom, happiness, and entanglement over time, and we can come up with our own damn fates rather than getting terminally depressed because nobody forced one on us.
Likewise, I can imagine that a lot of these goddamn Americans are wired in such a way that “being made to do anything by anyone else, ever” seems terminally evil to them. Meh, give them a planetoid.
On some level, you do need a motivation, so it would be foolish to say that anger is a bad reason to do things. I would certainly never tell you to do only things you are indifferent about.
On another level, though, doing things out of strong anger causes you to ignore evidence, think short term, ignore collateral damage, etc. just as much as doing things because they make you happy does. You think that describing the society that will make you feel happy makes people run screaming? Describing the society that would alleviate your anger will make people run screaming too—in fact it already has made people run screaming in this very thread.
Or at least, it has a bad track record in the real world. Look at the things that people have done because they are really angry about terrorism.
My intuitions say that specialism increases output, so we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say.
To me, this seems like a happy wonderful place that I would very much like to live in. Unfortunately, everyone else seems to strongly disagree.
I think there’s an implicit premise or two that you may have mentally included but failed to express, running along the lines of:
The all-controlling state is run by completely benevolent beings who are devoted to their duty and never make errors.
Sans such a premise, one lazy bureaucrat cribbing his cubicle neighbor’s allocations, or a sloppy one switching the numbers on two careers, can cause a hell of a lot of pain by assigning an inappropriate set of tasks for people to do. Zero say and the death penalty for disobedience then makes the pain practically irremediable. A lot of the reason for weak and ineffective government is trying to mitigate and limit government’s ability to do terribly terribly wicked things, because governments are often highly skilled at doing terribly terribly wicked things, and in unique positions to do so, and can do so by minor accident. You seem to have ignored the possibility of anything going wrong when following your intuition.
Moreover, there’s a second possible implicit premise:
These angels hold exactly and only the values shared by all mankind, and correct knowledge about everything.
Imagine someone with different values or beliefs in charge of that all-controlling state with the death penalty. For instance, I have previously observed that Boko Haram has a sliver of a valid point in their criticism of Western education when noting that it appears to have been a major driver in causing Western fertility rates to drop below replacement and show no sign of recovery. Obviously you can’t have a wonderful future full of happy people if humans have gone extinct, therefore the Boko Haram state bans Western education on pain of death. For those already poisoned by it, such as you, you will spend your next ten years remedially bearing and rearing children and you are henceforth forbidden access to any and all reading material beyond instructions on diaper packaging. Boko Haram is confident that this is the optimal career for you and that they’re maximizing the integral of human happiness over time, despite how much you may scream in the short term at the idea.
With such premises spelled out, I predict people wouldn’t object to your ideal world so much as they’d object to the grossly unrealistic prospect. But without such, you’re proposing a totalitarian dictatorship and triggering a hell of a lot of warning signs and heuristics and pattern-matching to slavery, tyranny, the Soviet Union, and various other terrible bad things where one party holds absolute power to tell other people how to live their life.
“But it’s a benevolent dictatorship”, I imagine you saying. Pull the other one, it has bells on. The neoreactionaries at least have a proposed incentive structure to encourage the dictator to be benevolent in their proposal to bring back monarchy. (TL;DR taxes go into the king’s purse giving the king a long planning horizon) What have you got? Remember, you are one in seven billion people, you will almost certainly not be in charge of this all-powerful state if it’s ever implemented, and when you do your safety design you should imagine it being in the hands of randoms at the least, and of enemies if you want to display caution.
If you are “procrastinate-y”, you wouldn’t be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult, and it is unlikely that you would manage it, so you would soon be dead in this state.
An ideology would just bias my science and make me worse.
I don’t know you well enough to say, but it’s quite easy to pretend that one has no ideology.
For clear thinking it’s very useful to understand one’s own ideological positions.
There is also a difference between doing science and scientism, which is about banner-wearing.
Oh, I definitely have some kind of inbuilt ideology—it’s just that right now, I’m consciously trying to suppress/ignore it. It doesn’t seem to converge with what most other humans want. I’d rather treat it as a bias, and try and compensate for it, in order to serve my higher level goals of satisfying people’s preferences and increasing happiness and decreasing suffering and doing correct true science.
we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say. Nobody would have property, you would just contribute towards the state of human happiness when the state told you to and then you would be assigned the goods you needed by the state. To me, this seems like a happy wonderful place that I would very much like to live in
Why do you call inhabitants of such a state “citizens”? They are slaves.
To me, this seems like a happy wonderful place that I would very much like to live in
Interesting. So you would like to be a slave.
Unfortunately, everyone else seems to strongly disagree.
Don’t mind Lumifer. He’s one of our resident Anti-Spirals.
Burning fury does, and if it makes me help people… whatever works, right?
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
I’m just a kid who wants to grow up and study social science and try and help people.
Maybe :-) The reason you’ve met a certain… lack of enthusiasm about your anger for good causes is that you’re not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let’s just say, it does not always lead to good outcomes.
Experiment? On live people? Cue in GlaDOS :-P
It sounded to me like she recommended a survey. Do you consider surveys problematic?
Surveys are not experiments and Acty is explicitly talking about science with control groups, etc. E.g.
According to every IRB I’ve been in contact with, they are. Here’s Cornell’s, for example.
I’m talking common sense, not IRB legalese.
According to the US Federal code, a home-made pipe bomb is a weapon of mass destruction.
A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.
There are many questions about humans whose answers I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it’s not nothing, and a survey might be worth doing, especially in the absence of a more strongly correlated test we can run given our technology, resources, and ethics.
What I had in mind was the difference between passive observation and actively influencing the lives of subjects. I would consider “surveys” to be observation and “experiments” to be or contain active interventions. Since the context of the discussion is kinda-sorta ethical, this difference is meaningful.
What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?
I am not sure where this question is coming from. I am not suggesting any particular studies or ways of conducting them.
Maybe it’s worth going back to the post from which this subthread originated. Acty wrote:
First, Acty is mistaken in thinking that a survey will settle the question of which policy will actually satisfy the value benchmark. We’re talking about real consequences of a policy and you don’t find out what they are by conducting a public poll.
And second, if you do want to find the real consequences of a policy, you do need to run an intervention (aka an experiment) -- implement the policy in some limited fashion and see what happens.
Oh, I guess I misunderstood. I read it as “We should survey to determine whether terminal values differ (e.g. ‘The tradeoff is not worth it’) or whether factual beliefs differ (e.g. ‘There is no tradeoff’)”
But if we’re talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time, and properly run, can be low-impact and extremely informative.
--
Yep :-) That’s why GlaDOS made an appearance in this thread :-D
Failure often comes with worse consequences than just an unchanged status quo.
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you’re kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to becoming the world’s dominant power a couple of centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it’s because the leftist creed is so anti-power that leftists don’t carefully think through the incentives for institutions to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that’s why these leaders are so quick to go for the throats of society’s intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn’t exist doesn’t work. Eliminating it doesn’t work because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like “let’s kill all the avaricious humans” is themselves avaricious at some level. And by trying to put this plan into action, they’re creating a new “defect/defect” equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
Ask them, I’m not an altruist. But I heard it may have something to do with the concept of compassion.
Historically, it correlates quite well. You want to help the “good” people and in order to do this you need to kill the “bad” people. The issue, of course, is that definitions of “good” and “bad” in this context… can vary, and rather dramatically too.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don’t like, will necessarily cause suffering.
--
Don’t mind Lumifer. He’s one of our resident Anti-Spirals.
But, here’s a question: if you’re angry at the Bad, why? Where’s your hope for the Good?
Of course, that’s something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
And yet he’s consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation… his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can’t help but think that he is driving valuable people away; having difficult people dominate the conversation can’t be a good thing.
(To clarify, I’m not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don’t do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
Really? Their “perspective” appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
Not just civility and niceness, but affirmative statements. That is, if you’re trying to achieve group epistemic rationality, it is important to come out and say what one actually believes. Statistical learning from a training-set of entirely positive or entirely negative examples is known to be extraordinarily difficult, in fact, nigh impossible (modulo “blah blah Solomonoff”) to do in efficient time.
I think a good group norm is, “Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update.” Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they’ve heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
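For what it’s worth, the “knows precisely nothing” point has a standard information-theoretic reading (this framing is mine, not anything the thread spells out): over $N$ mutually exclusive hypotheses, the uniform distribution is the maximum-entropy one,

$$ H(p) = -\sum_{i=1}^{N} p_i \log_2 p_i \le \log_2 N, \qquad H\!\left(\tfrac{1}{N},\dots,\tfrac{1}{N}\right) = \log_2 N, $$

so a critique strong enough to push every hypothesis back to probability $1/N$ leaves you holding the distribution that carries the least possible information about which hypothesis is true.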
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another’s lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I’ve already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don’t have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people’s preferences and I’m not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.
--
Let us consider them, one by one.
This means that the goals of the people and groups will be more effectively realised. It is world-improving if and only if the goals towards which the group works are world-improving.
A group can be expected, on the whole, to work towards goals which appear to be of benefit to the group. The best way to ensure that the goals are world-improving, then, might be to (a) ensure that the “group” in question consists of all intelligent life (and not merely, say, Brazilians) and (b) ensure that the group’s goals are carefully considered and inspected for flaws by a significant number of people.
(b) is probably best accomplished by encouraging voluntary cooperation, as opposed to unquestioning obedience to orders. (a) simply requires ensuring that it is well known that bigger groups are more likely to be successful, and punishing the unfair exploitation of outside groups.
On the whole, I think this is most likely a world-improving goal.
Altruism certainly sounds like a world-improving goal. Historically, there have been a few missteps in this field—mainly when one person proposes a way to get people to be more altruistic, but then someone else implements it in a way that ensures that he reaps the benefit of everyone else’s largesse.
So, likely to be world-improving, but keep an eye on the people trying to implement your research. (Be careful if you implement it yourself—have someone else keep a close eye on you in that circumstance).
Critical thinking is good. However, again, take care in the implementation; simply teaching students what to write in the exam is likely to do much less good than actually teaching critical thinking. Probably the most important thing to teach students is to ask questions and to think about the answers—and the traditional exam format makes it far too easy to simply teach students to try to guess the teacher’s password.
If implemented properly, likely to be world-improving.
...that’s my thoughts on those goals. Other people will likely have different thoughts.
And these are all very virtuous things to say, but you’re a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
--
“Greetings, Comrade Acty. Today the Collective has decreed that you...” Do these words make your heart skip a beat in joyous anticipation, no matter how they continue?
Have you read “Brave New World”? “1984″? “With Folded Hands”? Do those depict societies you find attractive?
Exinanition is an attractive fantasy for some, but personal fantasies are not a foundation to build a society on.
You are clearly intelligent, but do you think? You have described the rich intellectual life at your school, but how much of that activity is of the sort that can solve a problem in the real world, rather than a facility at making complex patterns out of ideas? The visions that you have laid out here merely imagine problems solved. People will not do as you would want? Then they will be made to. How? “On pain of death.” How can the executioners be trusted? They will be tested to ensure they use the power well.
How will they be tested? Who tests them? How does this system ever come into existence? I’m sure your imagination can come up with answers to all these questions, that you can slot into a larger and larger story. But it would be an exercise in creative fiction, an exercise in invisible dragonology.
And all springing from “My intuitions say that specialism increases output.”
Exterminate all life, then. That will stop the suffering.
I’m sure you’re really smart, and will go far. I’m concerned about the direction, though. Right now, I’m looking at an Unfriendly Natural Intelligence.
--
Wait a minute. You don’t want them, or you do want them but shouldn’t rely on what you want?
And I’m not just nitpicking here. This is why people are having bad reactions. On one level, you don’t want those things, and on another you do. Seriously mixed messages.
Also, if you are physically there with your foot on someone’s toe, that triggers your emotional instincts that say that you shouldn’t cause pain. If you are doing things which cause some person to get hurt in some faraway place where you can’t see it, that doesn’t. I’m sure that many of the people who decided to use terrorism as an excuse for NSA surveillance won’t step on people’s toes or hurt any cats. If anything, their desire not to hurt people makes it worse. “We have to do these things for everyone’s own good, that way nobody gets hurt!”
--
I’m not so sure you should distrust your intuitions here. I mean, let’s be frank, the same people who will rave about how every left-wing idea from liberal feminism to state socialism is absolutely terrible, evil, and tyrannical will, themselves, manage to reconstruct most of the same moral intuitions if left alone on their own blogs. I mean, sure, they’ll call it “neoreaction”, but it’s not actually that fundamentally different from Stalinism. On the more moderate end of the scale, you should take account of the fact that anti-state right-wing ideologies in Anglo countries right now are unusually opposed to state and hierarchy across the space of all human societies ever, including present-day ones.
POINT BEING, sometimes you should distrust your distrust of certain intuitions, and ask simply, “How far is this intuition from the mean human across history?” If it’s close, actually, then you shouldn’t treat it as, “Something [UNUSUAL] is wrong with my brain.” The intuition is often still wrong, but it’s wrong in the way most human intuitions are wrong rather than because you have some particular moral defect.
See, the funny thing is, I can understand this sentiment, because my imagine-great-worlds function is messed-up in exactly the opposite way. When I try to imagine great worlds, I don’t imagine worlds full of disciplined workers marching boldly forth under the command of strong, wise, meritorious leadership for the Greater Good—that’s my “boring parts of Shinji and Warhammer 40k” memories.
Instead, my “sample great worlds” function outputs largely equal societies in which people relate to each other as friends and comrades, the need to march boldly forth for anything when you don’t really want to has long since been abolished, and people spend their time coming up with new and original ways to have fun in the happy sunlight, while also re-terraforming the Earth, colonizing the rest of the Solar System, and figuring out ways to build interstellar travel (even for digitized uploads) that can genuinely survive the interstellar void to establish colonies further out.
I consider this deeply messed-up because everyone always tells me that their lives would be meaningless if not for the drudgery (which is actually what the linked post is trying to refute).
I am deeply disturbed to find that a great portion of “the masses” or “the real people, outside the internet” seem to, on some level, actually feel that being oppressed and exploited makes their lives meaningful, and that freedom and happiness is value-destroying, and that this is what’s at the root of all that reactionary rhetoric about “our values” and “our traditions”… but I can’t actually bring myself to say that they ought to be destroyed for being wired that way.
I just kinda want some corner of the world to have your and my kinds of wiring, where Progress is supposed to achieve greater freedom, happiness, and entanglement over time, and we can come up with our own damn fates rather than getting terminally depressed because nobody forced one on us.
Likewise, I can imagine that a lot of these goddamn Americans are wired in such a way that “being made to do anything by anyone else, ever” seems terminally evil to them. Meh, give them a planetoid.
On some level, you do need a motivation, so it would be foolish to say that anger is a bad reason to do things. I would certainly never tell you to do only things you are indifferent about.
On another level, though, doing things out of strong anger causes you to ignore evidence, think short term, ignore collateral damage, etc. just as much as doing things because they make you happy does. You think that describing the society that will make you feel happy makes people run screaming? Describing the society that would alleviate your anger will make people run screaming too—in fact it already has made people run screaming in this very thread.
Or at least, it has a bad track record in the real world. Look at the things that people have done because they are really angry about terrorism.
And for one level less meta, look at the terrorism that people have done because they are so angry about something.
Of course, while most people would not want to live in BNW, most characters in BNW would not want to live in our society.
I think there’s an implicit premise or two that you may have mentally included but failed to express, running along the lines of:
The all-controlling state is run by completely benevolent beings who are devoted to their duty and never make errors.
Sans such a premise, one lazy bureaucrat cribbing his cubicle neighbor’s allocations, or a sloppy one switching the numbers on two careers, can cause a hell of a lot of pain by assigning an inappropriate set of tasks for people to do. Zero say and the death penalty for disobedience then makes the pain practically irremediable. A lot of the reason for weak and ineffective government is trying to mitigate and limit government’s ability to do terribly terribly wicked things, because governments are often highly skilled at doing terribly terribly wicked things, and in unique positions to do so, and can do so by minor accident. You seem to have ignored the possibility of anything going wrong when following your intuition.
Moreover, there’s a second possible implicit premise:
These angels hold exactly and only the values shared by all mankind, and correct knowledge about everything.
Imagine someone with different values or beliefs in charge of that all-controlling state with the death penalty. For instance, I have previously observed that Boko Haram has a sliver of a valid point in their criticism of Western education when noting that it appears to have been a major driver in causing Western fertility rates to drop below replacement and show no sign of recovery. Obviously you can’t have a wonderful future full of happy people if humans have gone extinct, therefore the Boko Haram state bans Western education on pain of death. For those already poisoned by it, such as you, you will spend your next ten years remedially bearing and rearing children and you are henceforth forbidden access to any and all reading material beyond instructions on diaper packaging. Boko Haram is confident that this is the optimal career for you and that they’re maximizing the integral of human happiness over time, despite how much you may scream in the short term at the idea.
With such premises spelled out, I predict people wouldn’t object to your ideal world so much as they’d object to the grossly unrealistic prospect. But without such, you’re proposing a totalitarian dictatorship and triggering a hell of a lot of warning signs and heuristics and pattern-matching to slavery, tyranny, the Soviet Union, and various other terrible bad things where one party holds absolute power to tell other people how to live their life.
“But it’s a benevolent dictatorship”, I imagine you saying. Pull the other one, it has bells on. The neoreactionaries at least have a proposed incentive structure to encourage the dictator to be benevolent in their proposal to bring back monarchy. (TL;DR taxes go into the king’s purse giving the king a long planning horizon) What have you got? Remember, you are one in seven billion people, you will almost certainly not be in charge of this all-powerful state if it’s ever implemented, and when you do your safety design you should imagine it being in the hands of randoms at the least, and of enemies if you want to display caution.
--
There are reasons to suspect the tests would not work. “It would be nice to think that you can trust powerful people who are aware that power corrupts. But this turns out not to be the case.” (Content Note: killing, mild racism.)
If you are “procrastinate-y”, you would not be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult, and it is unlikely that you would manage it, so you would soon be dead in this state too.
I don’t know you well enough to say, but it’s quite easy to pretend that one has no ideology. For clear thinking it’s very useful to understand one’s own ideological positions.
There is also a difference between doing science and scientism, which is about banner-wearing.
Oh, I definitely have some kind of inbuilt ideology—it’s just that right now, I’m consciously trying to suppress/ignore it. It doesn’t seem to converge with what most other humans want. I’d rather treat it as a bias, and try and compensate for it, in order to serve my higher level goals of satisfying people’s preferences and increasing happiness and decreasing suffering and doing correct true science.
Ignoring something and working around a bias are two different things.
Why do you call inhabitants of such a state “citizens”? They are slaves.
Interesting. So you would like to be a slave.
...and do you understand why?
--
And yet he’s consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation… his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can’t help but think that he is driving valuable people away; having difficult people dominate the conversation can’t be a good thing. I’ve tried to talk to him about this.
Hypothesized failure mode for online forums: Online communities are disproportionately populated by disagreeable people who are driven online because they have trouble making real-life friends. They tend to “win” long discussions because they have more hours to invest in them. Bystanders generally don’t care much about long discussions because it’s an obscure and wordy debate they aren’t invested in, so for most extended discussions, there’s no referee to call out bad conversational behavior. The end result: the bulldog strategy of being the most determined person in the conversation ends up “winning” more often than not.
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
Maybe :-) The reason you’ve met a certain… lack of enthusiasm about your anger for good causes is because you’re not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let’s just say, it does not always lead to good outcomes.
--
If you stick around long enough, we shall see :-)
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.