Blue or Green on Regulation?
In recent posts, I have predicted that, if not otherwise prevented from doing so, some people will behave stupidly and suffer the consequences: “If people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold.” People misinterpret this as indicating that I take a policy stance in favor of regulation. It indicates no such thing. It is meant purely as a guess about empirical consequences—a testable prediction on a question of simple fact.
Perhaps I would be less misinterpreted if I also told “the other side of the story”—inveighed at length about the reasons why bureaucrats are not perfect rationalists guarding our net best interests. But ideally, I shouldn’t have to go to such lengths. Ideally, I could make a prediction about a strictly factual question without this being interpreted as a policy stance, or as a stance on logically distinct factual questions.
Yet it would appear that there are two and only two sides to the issue—pro-regulation and anti-regulation. All arguments are either allied soldiers or enemy soldiers; they fight on one side or the other. Any allied soldier can be deployed to fight any enemy soldier and vice versa. Whatever argument pushes one side up, pushes the other side down.
I understand that there are continuing fights about regulation, that this battle is viewed as important, and that people caught up in such battle may not want to let a pro-Green point go past without parrying with a Blue counterpoint. But these battle reflexes have developed too far. If I remark that victims of car accidents include minor children who had to be pushed screaming into the car on the way to school, anyone who is anti-regulation instantly suspects me of trying to pull out an emotional trump card. But I was not trying to get cars banned. I was trying to make a point about how emotional trump cards fail to trump the universe.
I have previously made a prediction on the strictly factual matter of whether, in the absence of regulation, people will get hurt. (Yes.) I have also indicated, as a matter of moral judgment, that I do not think they deserve to get hurt, because being stupid is not the same as being malicious. Furthermore, there are such things as minor children and pedestrians.
I shouldn’t have to say this, but apparently I do, so, for the record, here is “the other side of the story”:
The FDA prevents 5,000 casualties per year but causes at least 20,000-120,000 casualties by delaying approval of beneficial medications. The second number is calculated only by looking at delays in the introduction of medications eventually approved—not medications never approved, or medications for which approval was never sought. FDA fatalities are comparable to the annual number of fatal car accidents, but the noneffects of medications not approved don’t make the evening news. A bureaucrat’s chief incentive is not to approve anything that will ever harm anyone in a way that makes it into the newspaper; no other cost-benefit calculus is involved as an actual career incentive. The bureaucracy as a whole may have an incentive to approve at least some new products—if the FDA never approved a new medication, Congress would become suspicious—but any individual bureaucrat has an unlimited incentive to say no. Regulators have no career motive to do any sort of cost-benefit calculation—except of course for the easy career-benefit calculation. A product with a failure mode spectacular enough to make the newspapers will be banned regardless of what other good it might do; one-reason decisionmaking. As with the FAA banning toenail clippers on planes, “safety precautions” are primarily an ostentatious display of costly efforts so that, when a catastrophe does occur, the agency will be seen to have tried its hardest.
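For what it's worth, the net effect implied by those figures is simple arithmetic; here is a back-of-envelope sketch in Python (the numbers are the estimates quoted above, not independent data):

```python
# Back-of-envelope arithmetic using the casualty estimates quoted above.
# These figures are the post's illustrative numbers, not independent data.
prevented_per_year = 5_000                # casualties the FDA prevents per year
delay_low, delay_high = 20_000, 120_000   # casualties caused by approval delays

net_low = delay_low - prevented_per_year
net_high = delay_high - prevented_per_year
print(f"Net excess casualties per year: {net_low:,} to {net_high:,}")
# → Net excess casualties per year: 15,000 to 115,000
```

Even at the low end of the range, the delay cost swamps the prevented harm — which is the asymmetry the paragraph above is pointing at.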
Government = ordinary human fallibility + poor incentives + organizational overhead + guns.
But this does not change the consequences of nonregulation. Children will still die horrible deaths in car accidents and they still will not deserve it.
I understand that debates are not conducted in front of perfectly rational audiences. We all know what happens when you try to trade off a sacred value against a nonsacred value. It’s why, when someone says, “But if you don’t ban cars, people will die in car crashes!” you don’t say “Yes, people will die horrible flaming deaths and they don’t deserve it. But it’s worth it so I don’t have to walk to work in the morning.” Instead you say, “How dare you take away our freedom to drive? We’ll decide for ourselves; we’re just as good at making decisions as you are.” So go ahead and say that, then. But think to yourself, in the silent privacy of your thoughts if you must: And yet they will still die, and they will not deserve it.
That way, when Sebastian Thrun comes up with a scheme to automate the highways, and claims it will eliminate nearly all traffic accidents, you can pay appropriate attention.
So too with those other horrible consequences of stupidity that I may dwell upon in later posts. Just because (you believe) regulation may not be able to solve these problems, doesn’t mean we wouldn’t be very interested in a proposal to solve them by other means.
People are hurt by free markets, just as they’re hurt by automobiles—torn up by huge powerful mindless machines with imperfect human operators. It may not be the course of wisdom to fix these problems by resorting to the blunt sledgehammer of ban-the-bad-thing, by wishing to the fairy godmother of government and her magic wand of law. But then people will still get hurt. They will lose their jobs, lose their pensions, lose their health insurance, be ground down to bloody stumps by poverty, perhaps die, and they won’t deserve it either.
So am I Blue or Green on regulation, then? I consider myself neither. Imagine, for a moment, that much of what the Greens said about the downside of the Blue policy was true—that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it. And imagine that most of what the Blues said about the downside of the Green policy was also true—that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.
Close your eyes and imagine it. Extrapolate the result. If that were true, then… then you’d have a big problem and no easy way to fix it, that’s what you’d have. Does this universe look familiar?
Imagine a person who declares they are not blue or green, but are just trying as best they can to judge the truth, and their estimate will change with time and context. Imagine that their behavior appears to fit this description. How will other people treat this person? Will they seek him out as a source of good policy information? Will they hire him or fund his work? Or will they suspect him of really being on the other side, and pretending neutrality as a rhetorical trick?
Amen. Blue vs. Green thinking is the norm, and I have been accused (negatively) of being a liberal and a conservative in the same day.
Your opinion doesn’t sound like mine, so it’s probably the other side’s opinion.
What bothers me about this challenge is that it is possible for a well thought out regulation to have a net benefit, even after considering the economics, technical limitations, and societal values. But it is also true that your [Government = ordinary human fallibility + poor incentives + organizational overhead + guns] equation suggests that we will never get this well thought out regulation enacted.
So our only choice is either small regs that do very little good in order to avoid doing harm or private solutions that are limited by the [Private groups = ordinary human fallibility + poor incentives + organizational overhead + a few bad actors] equation.
There is a business joke that says you cannot make something idiot-proof, because they can always build a better idiot.
With financial scams, for instance, you can always stop a fool from participating in one particular scam. But if the fool is determined to participate in some scam, somehow, you cannot stop the fool. I think that many fools have that degree of determination.
Bruce Schneier discusses “CYA Security” in his latest Crypto-Gram: http://www.schneier.com/crypto-gram-0703.html#1 Many of the security reactions that occur are aimed less at achieving safety and more at ensuring that the agency cannot be criticised for not having done its job, even when the reactions are irrational and counterproductive. I guess this is part of the “poor incentives” term of Robin's equation.
Security is perhaps one of the most clearcut forms of paternalism, where certain groups are expected to act to protect everyone. It also seems to be more vulnerable to overreactions like the one above than other forms of paternalism. Perhaps this is because of the larger power distance between the security people and the protected. The former have been given monopolies on coercive power, which means that they are scrutinized more heavily both internally and externally. There is also a psychological effect of power bias and separation from the “civilians” that makes them less likely to accept disconfirming external information. Finally, security problems often involve malign agency, which is something we humans understand in a very different way than other risks.
The health care paternalist who fails to detect and stop a health problem until some deaths occur can usually get away with it by imposing after-the-fact regulations.
I guess this line of reasoning would imply that we should expect paternalism in areas where the “outrage” aspect of risk is higher to be biased towards overreaction.
I am going to tell a story about myself, so it would be amazing and awesome if you kept reading. Thanks. As a ‘health care paternalist’, or a former one, I find it increasingly unconscionable to weigh health gains against the self-determination of an individual, even an addicted one. How am I supposed to believe that health is a higher law than liberty? Yes, I can see the willful, selfish cost to the public health system, but that is the problem of the public health system, I suppose, and I ought to focus on dismantling that instead.
There is a faction in political theory that argues that, for rights to have any meaning at all, you need first to guarantee the conditions of life that make rights realizable. For example, the right to work is pointless if there are no jobs available.
Says Jeremy Waldron:
Note that when someone reads your “if people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold” it does sound rather as though you’re making a point about market decisions in particular, not just one of a spectrum of points like “if people have a right to vote for stupid policies, then ambitious politicians will supply all the stupid policies that people can be convinced to vote for.” Also, it’s not too uncommon for people to play rhetorical (and perhaps internal doublethink) games where people’s rationality in market decisionmaking is judged differently than in politics.
Similarly, you could state specifically “when we let members of someparticularethnicgroup vote, they often make uninformed decisions,” and believe yourself logically justified by the general truth that they are people and when we let people of any ethnic group vote, they often make uninformed decisions. But I don’t recommend that you try making that statement, especially about an ethnic group where it’s not too uncommon for people to dump on them in particular, unless you’re prepared to raise many, many more hackles than you would by just stating your general point about letting people in general vote.
And even though I give the parallel of another politically charged statement, I don’t think this is just people getting irrational around politically charged issues. In ordinary, un-charged situations too, it is normal for people to choose reasonably general forms of their statements when possible, so if you make a narrow statement, it conveys a suggestion that a more general statement doesn’t hold. “It’s really cold in the living room” means in practice something like “the living room is colder than the rest of the house” or “I am physically unable to leave the living room and don’t know about the rest of the house,” not “it’s really cold in the house.”
It’s not a completely reliable conversational rule, and it’s probably one of the reasons that some wag said that “communication would be more reliable if people would turn off the gainy decompression,” but it’s not obviously an unimportant or silly rule, either. In fact, if I imagine designing cooperating robotic agents with very powerful brains, very sophisticated software, and very low communication bandwidth, I’d be very inclined to borrow the rule.
Newman, that’s a fair point. The way this scenario arose was that someone proposed a specific policy—shops where otherwise banned items could be sold—and I remarked that some poor, honest, not overwhelmingly educated mother would buy Dr. Snakeoil’s Sulfuric Acid Drink for her arthritis and die and leave her five orphaned children to weep on national television. I meant it as a remark about the real-world political unfeasibility. However, this evidently struck most people as a Green argument, and I was shortly well on my way to acquiring a reputation as a Green, hence this little position paper.
It would indeed be disingenuous to make a general habit of discussing only the downsides of Blue policy. But when that specific issue does arise, one would like to be able to discuss such a straightforward factual prediction without being labeled a Green. When specific Green policies are proposed, I am just as frank (and vivid) about the downsides, hence I get labeled Blue.
Perhaps this is because of the larger power distance between the security people and the protected.
How do you measure this distance? The FDA has a monopoly, too. Here’s another theory: drug companies are a third player. Moreover, they are concentrated interests, so they affect the public choice. (Airlines play a similar role in the security theater, but their interests are more diffuse. Also, getting rid of airline security is a public good, while getting a drug approved helps one drug company relative to the others.)
That’s not to say I disagree with Anders’s psychology, but I discount it because I find it harder to judge than public choice arguments.
I’m reading through all of Eliezer’s posts chronologically, as well as reading Luke’s comments when he did the same. I just came to this post and had to do a double take when I read this:
Click
Holy. [Expletive].
You somewhat misquote the FDA fatalities estimate. It is not that the FDA prevents 5,000 fatalities; it’s that the extra delay imposed by the FDA, compared to European regulators (the EMA), prevents at most 5,000 fatalities per decade.
Total absence of regulation would result in a drug industry concerned only with soundbites, drug colouring, and trademarks. Through most of history, medicine worked just like this.
There is something which is very hard to estimate about drug regulation.
It’s relatively easy to estimate how much the regulation costs in added delay, and the number of lives that could be saved if a drug (finally found to be effective) were available earlier.
It’s a bit harder, but still possible, to estimate how much the regulation protects, by looking at the drugs that were eventually found to be dangerous and estimating how many people they would have killed or harmed had they been released.
But it’s almost impossible to estimate how much the existing regulation makes drug corporations change their own internal practices. That’s the most effective kind of regulation: regulation that, most of the time, isn’t enforced by cops and courts, but by people directly. Drug companies take more care to prevent side effects throughout the whole process, just because they know that at the end the FDA will veto a drug that’s too dangerous.
It’s like traffic regulation: the real effect of speed limits and red lights is not measured by the number of people who end up without a driver’s license because they got caught too many times and can’t endanger others anymore, but by the number of people who respect red lights and speed limits because of the law, and wouldn’t without it. And those people are very hard to count.
There is one more factor, but in the opposite direction: would you be more careful if there were nobody banning medications? Do you read about medications now before you use them, and would you do so if there were no government doing tests? Your argument sounds to me like a pro-minimum-wage argument, with a similar mistake: there are always two sides defining product and price, and one cannot think about only one of them and expect good predictions.
https://en.wikipedia.org/wiki/Stamina_therapy
Read and weep.
This is a creepy story, but not a counterargument to my point: these people assumed that the government bans bad medications, so they were not careful at all. I would like to see some study which tests how careful people are when they know someone else is taking care of them.
If there were no government to regulate medications, I think people would create companies to test these medications and give them scores, or something like that.
So the reason relatively few lives are saved by banning drugs is because, as a consequence of the regulation, not many dangerous drugs are being produced. Interesting.
We can’t know that. Regulation is not the only means by which information about what drugs are useful and who can be trusted can be disseminated. If the FDA was not around it could well be that a non-regulatory body would have developed to fulfill this role.
The fundamental problem with both the FDA and such a non-regulatory body is that the drug industry has the money to fake the signals. Valid argumentation must be substantially more effective at convincing the public than invalid argumentation for the system to work at all.
(I do not think btw that people must be protected from themselves.)
This is basically the primary issue. It is possible for a hostile or simply incompetent drug company to spam the information sources of people with false or misleading information, drowning out the truth. The vast majority of humans in our society aren’t experts in drugs, and becoming an expert in drugs is very expensive, so they rely on others to evaluate drugs for them. The public bureaucrats at least have a strong counter-incentive to letting nasty drugs out into the wild.
Furthermore, it can take some time to realize a drug isn’t working, and the placebo effect is going to be in full force to make that even harder. By the time you realize you were sold snake oil, you may already be dead. “Reputation” may not be of use here, as fake drugs are much cheaper to develop than real ones, so the cost of throwing an old trademark or company shell under the bus every few years is minimal, especially compared to the cost of discovering that for individuals.
Consider also the time in man-hours that must be spent hunting for information and evaluating safety, not just of the drugs themselves, but also the reputations of the private verification firms, by all individuals that need drugs. The FDA is cheaper.
Edit: I should say that “in my estimation, the FDA is cheaper.” It’s only back-of-the-napkin math.
I generally take the position that we should protect people from themselves to the degree that it is reasonably practical to do so. We have all failed due to ignorance, irrationality, or inattention at some point. Of course, when someone tries to break open your high-voltage power line to steal the copper inside, well...
This comment and its parent are both true. And, strangely, we seem to exist in a universe where there are both known useful drugs and a lot of drugs of unclear benefit.
Some would say it still does.
There is a third alternative though. You are, of course, familiar with Underwriters Laboratories?
Oh, I see that Wedrifid has started down that road.
And ultimately the question isn’t whether people SHOULD be protected from themselves. The question is: in anything vaguely resembling a modern, pluralistic, democratic society, CAN people be protected from themselves?
See the Heinlein quote about bread and circuses. A Tai-Chi instructor of mine years ago instructed that the ground is hard because it loves you. It wants you to learn not to fall down so as to learn balance and how to walk and run and move well. I’m not sure that’s really a rational way of looking at things, but there is some utility there.
Well, I am quite a bit of a libertarian myself, but not to such an extent.
The independent labs would still need big G to wield a big stick to protect trademarks. And perhaps we would still need anti-trust law.
Furthermore, there is a bit of a problem with advertisement. Free speech is extremely important, but advertisement makes me think of Langford’s basilisk. In the universe of the Langford basilisk stories, are you protecting people from themselves by getting rid of the basilisks? Clearly not. But what if people felt as if it was their own free will to buy the product after seeing a basilisk, as part of the basilisk’s function? Heinlein’s approach assumes a strong notion of free will.
The modus operandi of advertisement is that you do not have free will. In the advertisement-based version of Newcomb’s problem, Omega makes ads so that you’ll take both boxes: the first for a million, and the second for a thousand. And they will both be empty. But you’ll be happy. (Note: that’s meant as humour.)
(Note that I currently deal with ads from the other side, the selling side, and I have made ads for a living. So my hidden agendas lean toward advertising, not against it. And I’m somewhat exaggerating the evil impact of ads here. Ads don’t work on everyone, but they certainly do bias your ‘free will’ in ways that you’d rather they didn’t. And yes, we sell great products using ads, too.)
I think I might not be understanding your post correctly, but in the universe of these stories, seeing the nastier basilisks literally kills you instantly. Getting rid of the basilisks absolutely protects people—see for instance comp.basilisk FAQ.
The point is that you have to censor images out there to protect people. And in our universe, seeing the basilisks makes you buy stuff. When does it cross from protecting people from basilisks, to protecting people from themselves?
Well, statistically. I am not sitting there thinking about what exact hue will break your brain best, but I am putting damn good effort into an advertisement video right now, for cinemas. (The rendering runs take a while, which makes me go on LessWrong, which makes me addicted to LessWrong; a vicious cycle.) And I use fractals a lot to model natural phenomena for ads. That’s my specialization (besides game programming).
When I was considering whether or not I objected to various types of advertising, it seemed like a substantial question to consider would be information asymmetry, since that seems to be a substantial part of ads.
For instance take the following advertisement:
Buy one get one free.
And then, much later, in small print: Items ring up at 50% off regular price. (After all, you don’t make as much profit if they just buy one, so there is no reason to specifically call attention to this.)
And then not even stated on the page And by “regular price”, we mean what other people might consider a fake price that the goods are at only the legally minimum required amount of time so that we can claim that they have been discounted, because people love getting discounts and we know this.
Or, for an anecdote: the “regular price” of a store-brand diet soda I buy frequently is now $1.19. I don’t think I actually remember ever seeing it at that price; it is always at a “discount.” However, it is now more expensive than it used to be. They can raise the price and discount it at the same time.
It seems like in general, a lot of these kinds of sales tactics are specifically related to information asymmetry.
Now, this kind of information asymmetry can be reduced.
For instance, consider an app where you can scan the barcode and get a rescalable graph showing the price over time, or, if you wanted to be thorough, the price over time at rival stores.
Basically, like the app in this link, but even more so: http://www.psfk.com/2012/01/amazon-retail-showroom.html since that only compares to Amazon’s current prices.
That’s just going from one data point to two, and retail establishments are already objecting because it is hurting their sales. Imagine if you could instantly generate a three dimensional graph which compared the past year of prices over time at 10 stores and the reason for discounting. All of that is publicly available information. And it wouldn’t require any new hardware to make such an app. So it seems very likely it might happen in the future.
Then you could have customers who upon seeing your ad would say: “Well, I could buy this TV here at Best Buy at 50% off for 400 dollars, but Belmont TV is probably going to have it for 300 dollars when they have their birthday sale in a few days, so I’ll wait and then ship it from there.”
This app seems like it would be a good thing to have.
That’s a bit longer than I thought it would be, but it does seem to cover the bases. What do you think?
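A minimal sketch of the price-history idea described above, in Python. The class name, stores, dates, and prices are all made up for illustration; a real app would feed this from scanned barcodes and scraped price data:

```python
from collections import defaultdict
from datetime import date

# Hypothetical price-history tracker: record (store, date, price)
# observations per barcode, then answer "where was this cheapest?"
class PriceHistory:
    def __init__(self):
        self.records = defaultdict(list)  # barcode -> [(store, date, price)]

    def add(self, barcode, store, when, price):
        self.records[barcode].append((store, when, price))

    def best_offer(self, barcode):
        """Return the (store, date, price) with the lowest recorded price."""
        return min(self.records[barcode], key=lambda r: r[2])

# Illustrative data echoing the TV example above.
history = PriceHistory()
history.add("012345", "Best Buy", date(2012, 1, 5), 400.0)
history.add("012345", "Belmont TV", date(2011, 11, 20), 300.0)
store, when, price = history.best_offer("012345")
print(store, price)  # → Belmont TV 300.0
```

The interesting part is not the code but the data: once price histories are pooled across stores, the information asymmetry the comment describes largely evaporates.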
Can you clarify why that transition is particularly significant?
Often when people use the phrase “protecting people from themselves” it’s meant to connote that this is something we shouldn’t do, as contrasted with protecting people from one another, which (it is implied) we should do. Is that what you’re trying to connote here?
If so, then I don’t think such a line is terribly significant.
Protecting people from one another can be a higher priority in cases where the incentives for harming others are higher than the incentives for harming oneself (which is frequently true in the real world) but that’s ultimately just a shortcut, not a fundamental dividing line. Useful in practice, but problematic to generalize a theory from.
I totally agree; I was more referring to the problem with libertarianism. Example:
Someone makes ads for some quack cure, for example radium. People die. Libertarianism says it’s those people’s problem; we can’t protect them from themselves.
Someone makes basilisk fractal that only works on some people. Some people die. Libertarianism agrees that information killed them.
Now, for some reason the former falls under their own will, and the latter does not (even though in both cases it is their own neural network killing them).
Then genes are discovered, that strongly correlate with susceptibility to the advertising. Or parenting style. Or school environment. And suddenly, in both instances, people die, because they were shown carefully constructed visual (and auditory, for tv ads) input, due to their innate proneness to being damaged by inputs.
(I myself don’t really make marketing concepts, I just do some of the art.)
edit: okay, I’ll steelman this a little… it can be argued that the people who die from ads could have somehow compensated for their innate failure, while the people who die from basilisks can’t. Well, suppose one can do basilisk training, with a milder basilisk, which makes one much more immune to the effects of the basilisk. But most people don’t do that, because they don’t need it.
That distinction [which libertarianism makes] does strike me as inconsistent and arbitrary. (and if one is to evaluate values of different types of information, etc etc, that’s utilitarianism).
Yup, the distinction you’re describing sounds pretty inconsistent and arbitrary to me as well.
One thing I desperately want to devise is some method, at least partial, of incentivizing bureaucrats (public or private) to act in the most useful manner. This is, by its very nature, a difficult challenge with lots of thorny sub-problems. However, I think it’s something LWers have been thinking about, even if not always explicitly.
And then another silent afterthought: “Oh,” you think, “Letting bad things happen, even when I have some stupid principle to justify it, is still bad!”
^ It was a different topic, but basically, that’s how I became a consequentialist.
That ignores how the FDA actually works in practice. A lot of regulators go into private-sector jobs after leaving the FDA. Being nice to corporations is the way to get a high-paying private-sector job after being at the FDA.
Take a look at the Ranbaxy scandal. If you look at what the FDA actually does in the real world, there are plenty of reasons to criticise it that don’t have anything to do with being for or against regulation.
I think medicine is a very particular field of regulation. Examples from this field would generalise extremely poorly as soon as you move them into any field where a released product doesn’t directly save lives or isn’t designed to improve quality of life.
An industry not being free to dump pollutants of unknown effect into the environment will marginally harm its own profits. This profit correlates pretty poorly with people’s wellbeing; even the jobs such an industry provides aren’t strongly tied to the industry being able to spend less on filters and waste disposal. But unknown chemicals dumped into the environment can reasonably be expected to correlate negatively with people’s wellbeing a lot more.
The article quoted for the FDA states that the European regulatory institutions for drugs are outperforming the FDA, releasing more drugs while keeping deaths low. So the FDA seems to be an institution failing through over-regulation, but that doesn’t really make for evidence against regulation, especially if you consider that the European Union has an overall approach that leans more toward regulation than the USA’s.
The question isn’t “regulation: yes or no” but “regulation: how and how much”.
I don’t think the “regulation bad” side has any people who would argue for zero regulation. We can all clearly imagine how fast things would go downhill with that. The debate should be understood as “do we have too much regulation or not at the moment?”, but even that question is stupid.
Different fields of technology and industry require different approaches, because mistakes caused by over-regulating and under-regulating have very different costs. So it would be plain madness to simply argue for “more regulation” or “less regulation” unless you had extremely good evidence that all systems were currently skewed in the same direction, which is not what the evidence shows at all.
It’s not just that people need a more rational attitude toward regulation, honestly listing all the pros and cons when deciding. If you put all those pros and cons into the same “regulation good or bad” bucket, you have already lost the battle for rationality and given in to an ideological false dilemma, a dilemma that has really poisoned American culture from what I’ve seen.
Also: simply stating “well, government officials will act in their own interest, so regulation won’t ever be reliable, and there’s nothing we can do about that” strikes me as motivated stopping. People are actually known to reliably try to do a good job, and to care about the lives their decisions will end or save, if you don’t dip them in a cut-throat political environment too much.
If the incentives to do a good job are poor you can work to improve them, and also give them better tools than a sledgehammer to intervene, and knowledge about the delicate forces they’re messing with.
You don’t even have to wait for promotion-hungry regulators to decide to go against their interests and reform their own system. You can have promotion-hungry politicians or utility-hungry citizens try to do that for them, for perfectly selfish reasons. And yes, you’d of course get another imperfect system that still has to react to public outcry and was shaped by other interests, but you can still try to improve on the current situation.
That doesn’t follow. Those could be conclusions, not premises.
Marc, your description strikes me as a pretty good summary of the mess we’re in.
I’d consider the question of your position on regulation to be: if you could choose for the government to increase or decrease regulation, but have no other effect, which would you choose?
I consider myself anti-regulation. If they decreased regulation to the point where increasing it would be good, I’d consider myself pro-regulation.
Regulation isn’t a scalar. Your question is malformed.
True, but it can be approximated as such. If the government regulates whatever most needs regulating, with a certain error corresponding to how bad the government is at figuring out what needs regulating, how much should the government regulate?
In my humble opinion, even Eliezer sometimes forgets that making an argument more shocking doesn’t necessarily make it more correct—more “honest” in a generic fashion, maybe, but abstract “honesty” and practical adherence to one’s values (or utility function, for utilitarians) can be totally different things. I asked my dad, who drives quite a lot, and he absolutely would ban all private transportation—in cities at the very least. So would I. That we can state something that reflects awful facts without embellishing it does not absolve us of an inch of moral responsibility! You should think long and think carefully and feel sorrow and anguish for at least a moment—not as an intuition pump to sway your judgment, but to remind yourself of what you want to want and value to value. Forgive me if I find this bit (if it’s meant literally and not to shock the audience into thinking) to be a naked ethical failure—or at least dangerous laziness—on EY’s part.
(That’s not to say that I disapprove of the rest of the article, or am in denial about the horrible reality of it! It’s an excellent article with just this one low point in my eyes. “If that were true, then… then you’d have a big problem and no easy way to fix it, that’s what you’d have. Does this universe look familiar?”—sure, that’s so, but if anything it’s worse and more reprehensible to use sloppy, spur-of-the-moment ethical judgments in such a hostile environment, especially if those judgments happen to favor your peace of mind and convenience.)