You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn’t mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution, with its guillotines, is indeed a rare event. But if pathological altruism can result in such large disasters, then it’s quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.
“So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.”
That is exactly why I want to study social science. I want to do lots of experiments and research and reading and talking and thinking before I dare try and do any world-changing. That’s why I think social science is important and valuable, and we should try very hard to be rational and careful when we do social science, and then listen to the conclusions. I think interventions should be well-thought-through, evidence-based, and tried and observed on a small scale before being implemented on a large scale. Thinking through your ideas about laws/policies/interventions and gathering evidence on whether they might work or not—that’s the kind of social science that I think is important and the kind I want to do.
You’re ignoring the rather large pachyderm in the room which goes by the name of Values.
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price. Most changes in the world have both costs and benefits; you need to balance them to decide whether it’s worth it, and the balancing necessarily involves deciding what is more important and what is less important.
For example, imagine a trade-off: you can decrease the economic inequality in your society by X% by paying the price of slowing down the economic growth by Y%. Science won’t tell you whether that price is acceptable—you need to ask your values about it.
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price.
Disagreements including this one? It sounds as though you are saying that, in a conversation such as this one, you are more focused on working to achieve your values than trying to figure out what’s true about the world… like, say, Arthur Chu. Am I reading you correctly in supporting something akin to Arthur Chu’s position, or do I misunderstand?
Given how irrational people can be about politics, I’d guess that in many cases apparent “value” differences boil down to people being mindkilled in different ways. As rationalists, the goal is to have a calm, thoughtful, evidence-based discussion and figure out what’s true. Building a map and unmindkilling one another is a collaborative project.
There are times when there is a fundamental value difference, but my feeling is that this is the possibility to be explored last. And if you do want to explore it, you should ask clarifying values questions (like “do you give the harms from a European woman who is raped and a Muslim woman who is raped equal weight?”) in order to suss out the precise nature of the value difference.
Anyway, if you do agree with Arthur Chu that the best approach is to charge ahead imposing your values, why are you on Less Wrong? There’s an entire internet out there of people having Arthur Chu style debates you could join. Less Wrong is a tiny region of the internet where we have Scott Alexander style debates, and we’d like to keep it that way.
you are more focused on working to achieve your values than trying to figure out what’s true about the world
That’s a false dichotomy. Epistemic rationality and working to achieve your values are largely orthogonal, not opposed to each other. In fact, epistemic rationality is useful for achieving your values, by way of instrumental rationality.
I’d guess that in many cases apparent “value” differences boil down to people being mindkilled in different ways.
So you do not think that many people have sufficiently different and irreconcilable values?
I wonder how you are going to distinguish “true” values from “mindkill-generated” values. Take some random ISIS fighter in Iraq: what are his “true” values?
my feeling is that this is the possibility to be explored last.
I disagree; I think it’s useful to figure out value differences before spending a lot of time on figuring out whether we agree about how the world works.
...where we have...
Who’s that “we”? It is a bit ironic that you felt the need to use the pseudonymous handle to claim that you represent the views of all LW… X-)
In my (admittedly limited, I’m young) experience, people don’t disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists. I’ve never seen people arguing about “the tradeoff is worth it” followed by “no it isn’t”. I’ve seen a lot of arguments about “We should decrease inequality with policy X!” followed by “But that will slow economic growth!” followed by “No it won’t! Inequality slows down economic growth!” followed by “Inequality is necessary for economic growth!” followed by “No it isn’t!” Like with Obamacare—I didn’t hear any Republicans saying “the tradeoff of raising my taxes in return for providing poor people with healthcare is an unacceptable tradeoff” (though I am sometimes uncharitable and think that some people are just selfish and want their taxes to stay low at any cost), I heard a lot of them saying “this policy won’t increase health and long life and happiness the way you think it will”.
“Is this tradeoff worth it?” is, indeed, a values question and not a scientific question. But scientific questions (or at least, factual questions that you could predict the answer to and be right/wrong about) could include: Will this policy actually definitely cause the X% decrease in inequality? Will this policy actually definitely cause the Y% slowdown in economic growth? Approximately how large is X? Approximately how much will a Y% slowdown affect the average household income? How high is inflation likely to be in the next few years? Taking that expected rate of inflation into account, what kind of things would the average family no longer be able to afford / not become able to afford, presuming the estimated decrease in average household income happens? What relation does income have to happiness anyway? How much unhappiness does inequality cause, and how much unhappiness do economic recessions cause? Does a third option (beyond implement this policy / don’t implement it) exist, like implementing the policy but also implementing another policy that helps speed economic growth, or implementing some other radical new idea? Is this third option feasible? Can we think up any better policies which we predict might decrease inequality without slowing economic growth? If we set a benchmark that would satisfy our values, like percentage of households able to afford Z valuable-and-life-improving item, then which policy is likely to better satisfy that benchmark—economic growth so that more people on average can afford Z, or inequality reduction so that more poor people become average enough to afford Z?
But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind. We could take a number of left-wing policies, and a number of right-wing policies, and survey members of the “other tribe” on “why do you disagree with this policy?” and give them options to choose between like “I think reducing inequality is more important than economic growth” and “I don’t think reducing inequality will decrease economic growth, I think it will speed it up”. I think there are a lot of issues where people disagree on facts.
Like prisons—you have people saying “prisons should be really nasty and horrid to deter people from offending”, and you have people saying “prisons should be quite nice and full of education and stuff so that prisoners are rehabilitated and become productive members of society and don’t reoffend”, and both of those people want to bring the crime rate down, but what is actually best at bringing crime rates down—nasty prisons or nice prisons? Isn’t that a factual question, and couldn’t we do some science (compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it’s like versus kids who visit the nicer prison versus a control group who don’t visit a prison and then 10 years later see what percentage of each group ended up going to prison) to see who is right? And wouldn’t doing the science be way better than ideological arguments about “prisoners are evil people and deserve to suffer!” versus “making people suffer is really mean!” since what we actually all want and agree on is that we would like the crime rate to come down?
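The comparison proposed above ultimately reduces to a factual test: do reoffending rates differ between prison conditions? A minimal sketch of that statistical comparison, using entirely made-up counts (the group sizes and reoffending numbers below are hypothetical, not real data), might look like:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-proportion z-test: do two groups reoffend at different rates?

    x1, x2: number of reoffenders in each group
    n1, n2: number of released inmates in each group
    Returns the z statistic and a two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Entirely hypothetical counts: (reoffenders, released inmates)
nasty = (120, 300)  # imagined "nasty prison" group: 40% reoffend
nice = (90, 300)    # imagined "rehabilitative prison" group: 30% reoffend

z, p = two_prop_ztest(nasty[0], nasty[1], nice[0], nice[1])
print(f"reoffending: nasty={nasty[0]/nasty[1]:.0%}, nice={nice[0]/nice[1]:.0%}")
print(f"z = {z:.2f}, p = {p:.4f}")
```

A real study would of course need randomized assignment, long follow-up, and controls for confounders; this only shows that once the data exist, settling “which prison design reduces crime?” is arithmetic, not ideology.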
So we should ask the scientific question: “Which policies are most likely to lead to the biggest reductions in inequality and crime and the most economic growth, keep the most members of our population in good health for the longest, and provide the most cost-efficient and high-quality public services?” If we find the answer, and some of those policies seem to conflict, then we can consult our values to see what tradeoff we should make. But if we don’t do the science first, how do we even know what tradeoff we’re making? Are we sure the tradeoff is real / necessary / what we think it is?
In other words, a question of “do we try an intervention that costs £10,000 and is 100% effective, or do we do the 80% effective intervention that costs £80,000 and spend the money we saved on something else?” is a values question. But “given £10,000, what’s the most effective intervention we could try that will do the most good?” is a scientific question and one that I’d like to have good, evidence-based answers to. “Which intervention gives the most units of improvement per unit of money?” is a scientific question and you could argue that we should just ask that question and then do the optimal intervention.
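The last question in that comment is directly computable once you attach numbers to cost and effectiveness. A toy sketch, with invented figures (the intervention names, costs, and effectiveness units below are all hypothetical):

```python
# Hypothetical interventions: (name, cost in GBP, units of improvement delivered).
# All numbers are invented for illustration only.
interventions = [
    ("cheap-and-effective", 10_000, 100.0),
    ("expensive-80-percent", 80_000, 80.0),
    ("middling", 25_000, 60.0),
]

def improvement_per_pound(item):
    """The 'improvement per unit of money' ratio from the comment above."""
    _name, cost, improvement = item
    return improvement / cost

# Pick the intervention with the best ratio.
best = max(interventions, key=improvement_per_pound)
print(best[0])
```

Note that this only answers the scientific half: measuring the ratios. Whether to maximize the ratio at all, rather than, say, total improvement regardless of cost, is still a values choice.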
In my (admittedly limited, I’m young) experience, people don’t disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists.
The solution to this problem is to find smarter people to talk to.
We could resolve this by doing an experiment
Experiment? On live people? Cue in GlaDOS :-P
This was a triumph!
I’m making a note here: “Huge success!”
It’s hard to overstate my satisfaction.
Aperture Science:
We do what we must, because we can.
For the good of all of us.
Except the ones who are dead.
But there’s no sense crying over every mistake.
You just keep on trying till you run out of cake.
And the science gets done, and you make a neat gun
for the people who are still alive.
It sounded to me like she recommended a survey. Do you consider surveys problematic?
Surveys are not experiments and Acty is explicitly talking about science with control groups, etc. E.g.

compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it’s like versus kids who visit the nicer prison versus a control group who don’t visit a prison and then 10 years later see what percentage of each group ended up going to prison
According to every IRB I’ve been in contact with, they are. Here’s Cornell’s, for example.
I’m talking common sense, not IRB legalese.
According to the US Federal code, a home-made pipe bomb is a weapon of mass destruction.
A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.
There are many questions about humans that I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it’s not nothing, and might be worth doing, especially in the absence of a more-correlated test we can do given our technology, resources, and ethics.
What I had in mind was the difference between passive observation and actively influencing the lives of subjects. I would consider “surveys” to be observation and “experiments” to be or contain active interventions. Since the context of the discussion is kinda-sorta ethical, this difference is meaningful.
What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?
I am not sure where this question is coming from. I am not suggesting any particular studies or ways of conducting them.
Maybe it’s worth going back to the post from which this subthread originated. Acty wrote:

If we set a benchmark that would satisfy our values … then which policy is likely to better satisfy that benchmark...? But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind.
First, Acty is mistaken in thinking that a survey will settle the question of which policy will actually satisfy the value benchmark. We’re talking about real consequences of a policy and you don’t find out what they are by conducting a public poll.
And second, if you do want to find the real consequences of a policy, you do need to run an intervention (aka an experiment) -- implement the policy in some limited fashion and see what happens.
Oh, I guess I misunderstood. I read it as “We should survey to determine whether terminal values differ (e.g. ‘The tradeoff is not worth it’) or whether factual beliefs differ (e.g. ‘There is no tradeoff’)”
But if we’re talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time, and properly run, can be low-impact and extremely informative.
--
Yep :-) That’s why GlaDOS made an appearance in this thread :-D
Failure often comes with worse consequences than just an unchanged status quo.