I still think these threads are a bad idea.
This seems like an odd position for someone who spends a relatively large fraction of his LW time on politics.
Edit: Didn’t mean to make it personal. Was just interested in the rationale.
(It’s good to have less social pressure against odd-seeming positions, so that they can be freely examined according to their more carefully construed meaning rather than surface appearance.)
Having less pressure against unorthodox or novel positions is a good thing. But I think it makes sense to have a minimal amount of social pressure to give some account of apparent discrepancies between actions and beliefs, since such a discrepancy suggests (though doesn’t necessitate) contradictory beliefs somewhere.
This seems to act as an incentive both for resolving the conflict and for obscuring its presence or nature. I feel that the latter effect can be more damaging, so it might be safer to avoid this pressure. For example, one could draw attention to the presence of an apparent conflict (if it’s plausible that it has been missed) without accompanying it with (implied) disapproval.
My original comment was about as devoid of implications of disapproval as I could make it. I’d be interested to hear better formulations.
Fair enough, but they do seem pretty civil thus far. I’ve been monitoring them to make sure they don’t get out of hand and don’t start infecting the rest of the discussions. (There have been a couple of political-leaning topics, but no more than before, and I think maybe fewer.)
The objection is mind-killing and agent-reputational effects, not incivility.
I find it strange that the potential for political bias is seen as so much worse than a self-imposed ban on The-Subject-Which-Must-Not-Be-Discussed. Is intellectual evasion really seen as preferable to potential bias?
If one doesn’t know, it is better to know that one doesn’t know.
The Subject Which Must Not Be Discussed? Is that still a thing? (An infohazard related to Super AIs?)
I can see two other reasons. The first is that a culture WILL develop, and if outsiders see the political culture, we might not get a chance to teach them enough rationality for them to not be mindkilled instantly.
The second is that it’s well established that smart people often believe weird and/or untrue things. This, combined with the lack of respect for political correctness (in both the old-timey ‘within the realm of policy you can actually talk about’ sense and the modern offensive-language sense), with contrarianism, and with a site that has its own culture, could result in really bad politics.
We’ve got to deal with politics eventually. The whole world isn’t going to listen to the Singularity Institute just because they’ve got a Friendly AI, and it’s not like those cognitive biases will disappear by that time. Besides, I feel like LW could get more done with discussions about political brainstorming, at least in the near future.
If an AGI wants you to listen, you won’t have any choice. If it doesn’t want you to listen, you won’t have the option. The set of “problems for us after we get FAI” is the null set.
Kind of, almost. It could be that we (implicitly) choose to have problems for ourselves.
In case it’s not clear: this means the FAI causing problems for us on our behalf, not us literally making a choice we are aware of.
(Or ‘choosing not to intervene to solve all problems’. The difference matters to some, even if it is somewhat arbitrary.)
Are you saying that an AGI would distribute relevant information to the public, compelling them to make sound political choices?
That doesn’t sound very likely to me for either a friendly or an unfriendly AI. Letting people feel disenfranchised might be bad Fun Theory, but it would take a lot more than distribution of relevant information to get ordinary, biased humans to stop fucking up our own society.
As a general rule, I’d say that if a plan sounds unlikely to effectively fix our problems, an FAI is probably not going to do that.
I thought he was saying that once you have a Super AI, you don’t have to deal with politics.
That doesn’t sound like something I’d infer from his previous comment.
‘Just’ because they’ve got an FAI? Once you have an FAI (and nobody else has a not-friendly-to-you AI), you’ve more or less won already.
Apart from being able to protect against any political threat (and so make persuasion optional, not necessary), an FAI could, for example, upgrade Eliezer to have competent political skills.
The politics that MIRI folks would be concerned about are the politics before they win, not after they win.
Work done by Lesswrongians could decrease the workload of such an FAI while providing immediate results. If it takes twenty years for such a thing to be developed, that’s twenty years during which civilization could move in either direction on the good/bad scale. That could make the difference of an entire year in how long it takes an FAI to implement whatever changes would make society better.
You are not taking AI seriously. Is this intentional?
A superintelligence could likely take over the world in a matter of days, no matter what people thought. (They would think it was great, because the AI could manipulate them better than the best current marketing tactics, even if it couldn’t just rewrite their brains with nano.)
It may not do this, for the sake of our comfort, but if anything was urgent, it would be done.
While I wouldn’t dismiss this possibility at all, you seem a little overconfident. The best current marketing tactics can shift market share a percentage point or two, or maybe make a half-percentage-point difference in a political campaign. Obviously, better than the best is better. But assuming ethical limitations on persuasion tactics and general human suspicion of new things, “days” seems pretty optimistic (and twenty years pessimistic). There’s no good reason to think the persuasive power of marketing is at all linear with the intelligence of the creator. We ought to have very large error bars on this kind of thing, and while the focus on these fast-takeover scenarios makes sense for emphasizing risk, that focus will make them appear more likely to us than they actually are.
Incidentally, my biggest problem with these threads is that for the positions I’m most interested in hearing good arguments against, I suspect I wouldn’t find any opposition here. I’m fairly aware of the first-principles differences which result in most of my disagreements; the baffling ones are things like support of drone warfare coming from people who believe in universal healthcare. (I can see support of one, or the other, but not both at the same time. And yet people exist who do support both at the same time.)
I see no particular reason why someone can’t believe that healthcare consequentially saves lives and that drone warfare also consequentially saves lives.
Yeah, this claim confuses me. (I mean, I see this kind of thing every day, but Less Wrong seems like the place where it would never occur.)
I do support universal healthcare, for pretty much all the normal reasons.
I don’t support drone warfare, but I am willing to criticize people who make bad arguments against it, because I don’t think I’m smarter than the US military strategists.
I also see no reason why somebody -would- believe that. It’s not just a matter of finding a possible set of beliefs, but a possible path by which someone could arrive at those beliefs.
I’ve read both common health care arguments and Yvain’s post on drone warfare. While I haven’t absorbed much information about drones, and don’t have a strong opinion on them because of it, it doesn’t seem strange that someone could find both arguments convincing.
Sorry, I was referring to a very specific case of drone warfare, not drone warfare in general. I was arguing about this in another forum and failed to import the full context of my statements when I wrote that comment.
To clarify, I was referring to the use of drone warfare to target a nation’s own citizens without judicial review.
I imagine there are fewer good reasons to support that policy than drone war in general (though we lack the details). I have a hard time seeing why people would support it, but I don’t see why people who support universal healthcare would have a harder time supporting it than anyone else. I mean, I would be a little surprised to meet someone with that view, just because one is more likely to be found in the conservative cluster of ideas and the other in the progressive cluster of ideas. But I don’t see any deep tension between the views.
One represents a belief in an inviolable positive right to life and health; the other represents a belief that life exists solely at the discretion of society/authority.
Note that I distinguish between single-provider/government-provided healthcare and universal healthcare. Somebody who wants government in charge of healthcare isn’t necessarily someone who believes in universal access to it.
These are not the only possible intentions behind these policies, and not all support or opposition to them is based on those particular intentions.
In particular, I suspect most of the supporters of universal healthcare think of it as a positive right to life/health, whereas most of its opponents think of it as an issue of governmental power.
Similarly, I suspect supporters of drone strikes think of them in terms of justice and/or preventing terrorism, whereas most of their opponents think of them as an issue of governmental power.
Thus people who don’t see governmental power as a problem, or are simply not inclined to think about it and its implications, are likely to be more favorable to both.
First, actual policies that aim at providing universal healthcare include cost-cutting measures of one kind or another. These measures, by their very nature, involve restricting what kind and how much medical care a person can receive and when they can receive it. The politically loaded term is “rationing”. The State is going to tell you what you can and cannot buy with their money. The politically loaded term is “death panel”. I, personally, see nothing wrong with either. But since policies designed to establish universal health care tend to involve taxing people, I can certainly see how some would see it as an expansion of society’s discretion regarding life and health.
Second and more important: these aren’t symbols for abstract ideas about political philosophy. They are actual policies that are created by real governments and implemented by actual people. Any justification for either of them (regardless of one’s terminal values) will involve a complex synthesis of information from a wide set of domains. A position on universal healthcare, for instance, involves thinking about incentives for individuals, doctors, pharmaceutical companies, hospitals, etc. For example: what happens to prices when the person receiving the good isn’t footing the bill, and what happens to behavior when the actor isn’t paying for the consequences of that behavior? If there are cost-lowering measures, how do you determine what to pay for, or whom do you trust to determine it? What will the effect of these measures be on experimental procedures, and will this harm innovation? If the country needs to borrow to pay for health care, what will the effect be on the country’s economy? What about all the other ways to spend that money?
Drone warfare involves different complex questions: what counts as a combatant in an unconventional, asymmetrical war? Does killing terrorists decrease terrorism, or increase it by making more and angrier terrorists? To what extent does judicial review undermine the secrecy, security and timeliness of a military response to intelligence? To what extent does the precedent set by the policy alter existing legal protections, and is this change good or bad? What about the psychological impact of the persistent fear of an unexpected drone strike? What about the change in incentives involved in fighting wars where one side doesn’t risk loss of life?
And so on. There is little reason to expect the answers to many of these questions to be correlated. The only way to be particularly surprised that someone supports both these positions is if you understand those positions in an entirely symbolic way. I understand that most people choose their political positions mostly for signaling one thing or another and then just find ways to answer these questions in a way that is congruent with that position. But you definitely shouldn’t be that perplexed by someone who doesn’t.
I’m not discussing the people creating policies, but the people supporting them. I’m not discussing implementation of those policies, even, but again, simple support.
I don’t expect implementations to correlate with their intention, but I do expect the -intentions- to correlate.
I’m not arguing with an anthropomorphization of the political process or utilitarian philosophy; after all, I’m arguing with real people who have real ideas about how the world should operate. In order to argue effectively, I have to understand what their intentions are: my goal is not merely to prove somebody wrong, it’s to change their mind. And starting with the implementation is backwards; you don’t navigate from where you were to where you are, you navigate from where you are to where you want to be. First and foremost, I need to know where this person wants the world to be.
And in this case, the intentions I can see conflict.
You have to have beliefs about the world and how it works before you can have intentions to change it. I’m sure just about everyone would rather live in a place where no one was killed and everyone had their health cared for. People who oppose “universal healthcare” don’t usually want poor people to suffer: they just expect a whole host of problems to come with treating healthcare as a right and not a good. People who want extra-judicial drone assassinations of citizens aren’t particularly concerned with empowering the State to decide who can live or die. They just want to prevent terrorist attacks.
To extend your analogy: before deciding to navigate anywhere, you still have to have beliefs about what the sea is, where the coastline is, how deep the water is, what the weather is like, how to manage the crew, what to do with the anchor, the rudder and the sails, general beliefs about the accuracy of your maps and navigational equipment, beliefs about what life is like at your new destination and whether it is worth the risks you’ll encounter along the way and even whether or not it is a place you will want to live or do business in, and beliefs about the possibility of it changing once you get there. Not to mention all sorts of possibilities you probably haven’t even thought of yet, because your scientists haven’t developed the germ theory of disease and you’re about to bring smallpox and centuries of genocide and colonialism to a continent you didn’t even know existed, because the world is twice as large as you thought it was! But hey, at least you intended to reach the Indies and make Spain wealthy.
Intentions are worthless if you don’t know what you’re talking about, and they change radically when you do. It sounds like you’re trying to comprehend other people’s political views through the framework of your own political philosophy. But there is nothing inherent or necessary about that framework.
As far as I can tell, you’re arguing that I should undertake to understand somebody’s beliefs about the positioning of all 214,000 miles of coastline in the world before I try to understand their intentions about where they’re navigating to.
So, to sum up my response: It doesn’t matter. I don’t need to know where they think the coastlines are; all I need to know is that they want to go to Jamaica because the weather is nice. And then the information that a hurricane is coming through becomes relevant. If they’re going to Jamaica to deliver medical supplies in preparation for that hurricane, that piece of information doesn’t add anything.
You seem to be proposing that in an argument about politics, I should engage in a depth-first search. Here’s the issue: I can knock down all of their arguments and change not a thing in their mind. No matter how many “How” arguments are defeated, there are infinitely more lying in wait. To change a mind, you must address the -Why-. You must direct your arguments to their motivations.
You seem to be suggesting I should argue with somebody like this:
“You’re going to travel up the sound? You’ll hit rocks during low tide, and it looks like that’s when you’ll be going through. You should go around. And this dock in your itinerary is closed this time of year; you’ll need to refuel over here instead. That restaurant right there has terrible food; you should eat here instead...” And so on and so forth, when a good argument might go...
“Oh, you’re trying to travel to the peninsula? You’d be better off driving there, the sea route is really inhospitable.”
Intentions are destinations. If you don’t know what you’re talking about, sure, they’re worthless—but whether you know what you’re talking about or not, it’s completely useless to analyze a route if you don’t know where that route is intended to lead.
I’m not arguing that; I think it’s a pretty blatant strawman, and that nearly any independent observer of this exchange would agree. This makes me pretty averse to continuing this conversation.
No. I think one should never, ever engage in an argument about politics to try to change someone’s mind unless your interlocutor is that very peculiar individual who will alter their beliefs based on new evidence. To the extent the above description is true, it is evidence that people don’t form political opinions based on evidence, and that’s a good time to stop arguing with someone about their opinions.
The weather is nice in Haiti too. Also in high-Communist Cuba and (parts of) apartheid South Africa. “Opposing universal health care” is like “opposing going to Haiti”: “Sure, the weather sounds nice, but you’ve overlooked dozens of other issues.” It doesn’t usually mean you’re opposed to nice weather.
It wasn’t my intention to strawman you. If my interpretations of your arguments are incorrect, I have absolutely no idea what you’re trying to convey, except possibly a big “It’s complicated!”, which I don’t disagree with; and if that’s supposed to be a counterargument, it’s misdirected.
As for people discarding evidence, proving “Brand A of universal healthcare is Bad” doesn’t say anything about brands B through Z. Again, my point is that you seem to be suggesting I should focus on implementation details (or is that a strawman?) rather than the intentions behind that implementation. Disproving implementations does nothing.
For example, I could argue (ignore the truth value of this statement, please) that the PPACA necessitates or enables Death Panels, but this isn’t an argument against universal healthcare, only one particular -implementation-. It doesn’t matter whether it’s true or false for purposes of arguing about universal healthcare more broadly.
There are obviously possible counterarguments that demonstrate that the vast majority of possible implementations are bad. The possible implementations of any intention are likely to share a number of crucial parts. There are only so many ways to get to a place. For example, if I think more health care harms health outcomes as much as it helps, then I am going to oppose any implementation that involves subsidizing health care. And of course I could have as many such arguments as I like. If I have fifty arguments that, combined, show that all the possible implementations of universal health care are harmful, then I have good reason to oppose the intention of implementing universal health care. And I don’t have to exhaust all the possibilities: I just have to have never heard of an implementation plan that I didn’t think would be bad.
Not to mention: “universal health care” doesn’t actually mean “everyone gets the health care they need”; it means something like “everyone gets the health care they need through some new government mechanism”. “Increase per capita GDP and lower health care costs through economic growth until everyone can afford what they need” might be the best way to get everyone covered, but it would never be called “a plan for universal health care”.
Obviously, “that route to Jamaica is tricky, you can’t go south there” is not an objection to the basic idea of going to Jamaica. But “Jamaica is a terrible place” is. So is “40% of people who try to get to Jamaica die en route”. So is “we don’t have that kind of money”. So is “Jamaica will turn you away at the border and it is too dangerous to sneak in”. “I don’t like nice weather” is also a basic objection. But it’s an uncommon one, and mere opposition to going to Jamaica is a really bad indicator that someone doesn’t like sunny days.
I’m not saying you need to determine every detail of a policy’s implementation before counting yourself in favor of it. But policy goals are not determined in the abstract. There are important, basic facts about economics, human nature, and government that yield heuristics about which policy goals are beneficial and which are harmful.
I am baffled by your bafflement. Kill your enemies, save your allies. Where’s the contradiction?
Sorry about the confusion; I just realized exactly where the disconnect is. I was discussing drone warfare in another forum, specifically the use of drones against a nation’s own citizens. Absent that context my statement doesn’t make much sense at all, no.
Does it make more sense when I clarify that I’m referring to the use of drone warfare against a nation’s own citizens without judicial oversight?
Which country is that happening in? But presumably that government, rightly or wrongly, has decided that some of its citizens are enemies.
Are you against drone warfare vs. OTHER types of warfare, or are you just against warfare? I think that might be where the confusion is. If you think we should try to save more people and therefore support healthcare and oppose warfare, I think that makes sense. I think it also makes sense to say you support healthcare because it saves lives and you support drone warfare because it saves lives in comparison to other warfare, as opposed to the less realistic option of no warfare.
I was referring to a very specific use of drone warfare and was insufficiently explicit in my comment. (A peril of switching back and forth between different forums of discussion, dropping context.) It wasn’t even until the latest round of comments that I realized why exactly people were baffled by my position.
Specifically I was referring to the use of drone warfare to target a nation’s own citizens without judicial review.
I still don’t see the contradiction. Both universal healthcare and drone warfare fundamentally come from a belief or alief that life-or-death decisions about citizens should be made by the government.
Not really; universal healthcare is based on a belief (or alief) that life is a fundamental right. A simple belief that government should be making these decisions might lead to a belief in government-provided or government-run healthcare, but that’s hardly the same thing as universal healthcare, which holds that government doesn’t have a right to decide, only a responsibility to provide.
Ok, I think a better way to formulate my point is that both universal healthcare and drone warfare come from an alief that the government has unlimited moral authority, in the sense Arnold Kling discusses here and here.
I don’t see the difference, especially when you remember that resources are finite.
You seem to be conflating intention and results in the opposite direction from the one I usually see; you’re suggesting that the practical necessities of implementing universal healthcare are part of the ideology or principles which lead one to seek it.
Specifically, an ideology/alief that causes one to decide which policies to support without thinking about how they would actually be implemented in practice.