It appears that Karma cannot be negative. There also does not appear to be a way to ignore comments from users below a certain Karma threshold. The ability to set a Karma threshold together with the ability for Karma to be negative would seem useful for this kind of situation. Setting a Karma threshold of 0 would not be ideal as it would block comments from new users as well as trolls.
On the subject at hand, I say the thing to do is downvote obnoxious comments and then stop reading them. Also, do not reply. This is standard practice when dealing with trolls. If someone is annoying you, (a) talk to an admin or (b) ignore them.
I gave him the benefit of the doubt because the voluntary castration sounds so crazy, but the absurdity heuristic is there for a reason; maybe I gave him too much credit simply for being on LW.
Someone like this is just griefing, and is impervious to downvoting. I think we can ban such trolls as this without any danger of evaporative cooling. (Um, is there a moderator with the banhammer?)
Of course, that just leads to getting lots of accounts. Not sure how to deal with that.
The worst punishment is an IP ban, which is only really helpful if the banned user is not smart enough to switch locations. Generally speaking, annoying people go away if you ignore them. Thanks to karma, the comments will be censored out automatically once their score passes your threshold.
So much of this emerging art seems to be about how to get ourselves to actually update our beliefs and actions in response to evidence, rather than just going around in circles doing what we’re used to doing. In the quantum physics sequence, Eliezer told a story about aliens called the Ebborians, who “think faster than humans [and] are less susceptible to habit; they recompute what we would cache.” Following the discussion of individual differences on the path towards rationality, I wonder: would an Ebborian art of rationality be about how to update less, how to maintain true beliefs and effective actions, rather than thrashing about wildly, doing whatever seems like a good idea at the moment? Or is this idle speculation too ill-formed to be worth discussing?
I notice that there doesn’t seem to be any way to have a Less Wrong profile page, or at least a link that people can click to learn more about you. As it is, no matter what you post on Less Wrong, it’s going to get buried eventually.
Suppose that I live on a holodeck but don’t know it, such that anything I look at closely follows reductionist laws, but things farther away only follow high-level approximations, with some sort of intelligence checking the approximations to make sure I never notice an inconsistency. Call this the holodeck hypothesis. Suppose I assign this hypothesis probability 10^-4.
Now suppose I buy one lottery ticket, for the first time in my life, costing $1 with a potential payoff of $10^7 with probability 10^-8. If the holodeck hypothesis is false, then the expected value of this is $10^7 * 10^-8 - $1 = -$0.90. However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3. (This only applies to the first ticket, since someone who would rig the lottery in this way would be most likely to do so on their first chance, not a later chance.) In that case, the expected payoff is $10^7 * 10^-3 - $1 ≈ $10^4. Combining these two cases, weighted by the 10^-4 probability of the holodeck hypothesis, the expected payoff for buying a lottery ticket is 10^-4 * $10^4 + (1 - 10^-4) * (-$0.90) ≈ +$0.10.
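As a sanity check on the arithmetic above, here is a minimal sketch in Python; every figure is the commenter’s own estimate from this thread, not an established number:

```python
# Sanity check of the expected-value arithmetic above.
# Every figure is the commenter's own estimate, not an established number.

TICKET_COST = 1.0            # dollars
JACKPOT = 10**7              # dollars
P_WIN_NORMAL = 10**-8        # stated odds of winning the lottery
P_WIN_HOLODECK = 10**-3      # guessed odds if a simulator rigs the draw
P_HOLODECK = 10**-4          # prior on the holodeck hypothesis

ev_normal = JACKPOT * P_WIN_NORMAL - TICKET_COST      # -$0.90
ev_holodeck = JACKPOT * P_WIN_HOLODECK - TICKET_COST  # about +$10,000

ev_combined = (1 - P_HOLODECK) * ev_normal + P_HOLODECK * ev_holodeck
print(f"${ev_combined:+.2f}")  # roughly +$0.10
```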
At some point in the future, if there is a singularity, it seems likely that people will be born for whom the holodeck hypothesis is true. If that happens, then the probability estimate will go way up, and so the expected payoff from buying lottery tickets will go up, too. This seems like a strong argument for buying exactly one lottery ticket in your lifetime.
so the probability that it will win is more like 10^-3
If the hololords smiled upon you, why would they even need you to buy a lottery ticket? How improbable is it that they not only want to help you, but they want to help you in this very specific way and in no other obvious way?
I don’t think this argument holds up. Suppose the holodeck hypothesis is true and someone outside the simulation decides to punish irrational choices by killing you if you buy a lottery ticket. The probability of you being killed is around 10^-3 so you should never risk buying a lottery ticket.
The problem is that you’ve no reasonable basis for assigning your 10^-3 probability for a good outcome rather than a bad outcome or a batshit insane outcome. You also have no basis for your 10^-4 probability of being in a holodeck. The only rational way to behave is to act as if you’re not in a holodeck (or a world with an occasional interventionist god who does his damndest not to ever leave clear proof of his interventions, or a simulation run by aliens, or The Matrix) because you have no basis on which to assign probabilities otherwise. This changes, of course, if you are confronted with evidence that implies a greater likelihood of one of your holodeck hypotheses.
OK, what if we accept the simulation hypothesis, and we further say that the advanced civilizations are simulating their ancestors? Then we’d expect our simulators to be evolved or derived from humans in some way. It’s unlikely we’ll change our ideas of fun and entertainment too much, as those are terminal values—we value them for their own sake. This gives us some pretty strong priors, based on just what we know of current human players...
I’m not sure how to interpret that though. Based on players of the Sims, say, we would expect tons of either sadistic deaths by fire in bathrooms and starvation in the kitchen, or people with perfectly lovely lives and great material success. Since we don’t observe many of the former, that’d suggest that if we’re in a simulation, it’s not being run by human-like entities who intervene.
This is exactly like the old joke about the guy who prays fervently for years that God let him win the lottery; finally, a booming voice comes down “At least meet me halfway: buy a ticket!”
When exactly will the probability estimate go way up? Someone living in the holodeck obviously isn’t aware they are living “in the future” or not. The probability has to be calculated from the inside, so I don’t see how it would ever change.
When exactly will the probability estimate go way up?
The probability of living in a holodeck is P(it is possible to build a holodeck) * P(you are in a holodeck | it is possible to build a holodeck). If you ever see or hear about a holodeck being built, then the first term becomes 1.
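As a hedged numeric illustration of that decomposition (the specific figures below are invented for the example, chosen to match the 10^-4 and 10^-3 estimates used upthread):

```python
# Illustrative figures only, chosen to match the estimates used upthread.
p_possible = 0.1              # prior that a holodeck can be built at all
p_in_given_possible = 10**-3  # P(you are in one | it is possible to build one)

prior = p_possible * p_in_given_possible  # 10^-4 before any evidence
posterior = 1.0 * p_in_given_possible     # 10^-3 once you see one built
print(prior, posterior)
```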
However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3.
I know this is a blog—and not a journal—but if post authors could start with a summary of their main point, it might help. Like an abstract. Say what you are going to say, then say it, then summarise.
I tend to give up on posts if I don’t see that the topic is one which is of interest to me in the first paragraph.
I thought about replying to this anecdote about sleep with an anecdote of my own, but decided that it’ll only add noise to the discussion. At the same time, I caught a heuristic that I would possibly follow had I not caught it, and that I now recall I did follow on many occasions: if someone else already replied, I’d join in.
It’s a variety of conformity bias, and it sounds dangerous: it turns out that sometimes, just two people agreeing with each other is enough to change my decision, independently of whether their agreeing with each other gives any evidence that should change my own mind. This is a powerful force that threatens to irrationally synchronize people in a group even where influence is not strong and the group is far from being on the way towards an affective death spiral.
The dearth of posts in the past couple of days has led me to wonder whether there is a correlation between the length of time a post spends as the most recent post (or most recent promoted post), and the magnitude of its score. Or rather, I expect there to be such a correlation, but I wonder how strong it is.
I just lost about 50 karma in the space of about an hour, with pretty much all of my most recent comments being voted down. I recall others mentioning they’ve had similar experiences, and was wondering how widespread this sort of thing actually is. Does this happen to others often? I can imagine circumstances under which it could legitimately occur, but it seems a bit odd, to say the least.
Yeah, that’s getting to be a problem around here. We could at least make it more time-consuming for the griefers by removing the ability to upvote or downvote on the user pages, so that you’d have to click on the permalink for each comment first. I don’t think that would be too much of a hassle for those of us who are using the karma system properly...
While I disagree with rhollerith below, I also sometimes downvote (or upvote) from the user page (though less frequently than from ‘recent comments’). Sometimes I read a comment so obscure (or so brilliant) that I feel the need to look back and see if this person has made a lot of comparable comments that have been neglected.
Of course, I haven’t been able to downvote at all for some time.
I, too, sometimes read from a user’s user-page and up- or down-vote comments according to my impressions of their individual quality.
This wouldn’t create the −50 karma jump conchis describes, at least the way I do it, but it is evidence that orthonormal’s suggestion would have some costs.
conchis, I have been reading your comments for at least 12 months on Overcoming Bias and have accumulated no negative feeling or opinion about you, so please do not think that what I am going to say is directed at you.
I have been thinking of adopting this strategy of occasionally giving a participant 20 or 30 or so downvotes all at once rather than frequently giving a comment a single downvote, because I judge moderation of comment-writers (used, e.g., on SL4 back before 2005 and again in recent months, when a List Sniper has been active, during which times SL4 has been IMHO very high quality) to work better than moderation of comments (used, e.g., on Slashdot, Reddit and Hacker News, which are IMHO of lower quality).
So, I would like people to consider the possibility that downvoting of 20 or 30 of the comments of one comment-maker in one go should not be regarded as an abuse or an improper use of this web site unless of course it is done for a bad reason.
I hereby withdraw this comment because the responses to this comment have made me realize that it is destructive to downvote a comment without regard to the worthwhileness or quality of that particular comment.
So, I would like people to consider the possibility that downvoting of the comments of one comment-maker in one go should not be regarded as an abuse or an improper use of this web site unless of course it is done for a bad reason.
That makes sense, but moving the negative feedback to a time other than when the comment was made makes it extremely hard for the commenter to improve. If that is not one of your goals, fair enough.
A commenter’s karma means nothing when reading this site, since it is not displayed, while the karma of a comment means everything. Disagreeing with the way a site uses karma makes sense, but trying to implement a better system by ignoring the purpose of the implemented system is not particularly useful.
Now, if you have the habit of reading through someone’s comments all at one time and judging each comment for its own value, then what I said here is mostly irrelevant.
Now, if you have the habit of reading through someone’s comments all at one time and judging each comment for its own value
No, that’s not what I have been contemplating.
“A commenter’s karma means nothing,” is a bit of an overstatement because you need 20 karma to post. Also, most commenters are probably aware of changes in their karma. And if I reduce a person’s karma by 20 or 30 points, I will send him a private message to explain.
What I propose reduces the informativeness of a comment’s point score but more-or-less maintains the informativeness of a commenter’s karma. If enough voters come to do what I contemplate doing, or if enough well-regarded participants announce their intention to do what I contemplate doing, then the maintainers of the site will adjust to the reduction in the informativeness of a comment’s point score by focusing more of their site-improvement efforts on a commenter’s karma. Note that those site-improvement efforts will tend to make effective use of the information created by the voters who follow the original policy of voting on individual comments (as well as the information created by voters who vote the way I contemplate voting).
What is it that causes you to believe a commenter should be penalized 20 or 30 karma points at a time? If it is that they make a lot of worthless comments, then you have no shortage of comments to down vote, and there is no need to down vote their comments indiscriminately. If it is that they made an exceptionally worthless comment, it is my experience that these get down voted pretty fast by many people, so they will still lose a lot of karma even though you only contribute one point to their loss.
In short, I don’t see what you gain by this strategy that justifies the decrease in correlation between a comment’s displayed karma score and the value the community assigns it, which occurs when you down vote a comment not because it is a problem, but because the author had written other comments that are a problem. If a normally good contributor has a bad day and makes some bad comments, it does not make sense to devalue their previous high quality comments.
I concur; the point-values attached to individual comments have a larger impact on what LW-readers see than do users’ karma values, and are therefore more important to retain as accurate indicators of comment quality.
If a particular user has a pattern of making comments that impair LW in a particular way, you might explicitly comment on this, rhollerith, with detailed, concrete language describing what the pattern is, what specific comments fit that pattern, and why it may impair LW conversation. You could do this by public comment or private message. This has the following advantages over blanket user-downvoting:
It does not impair quality-indicators on the user’s other comments;
The user can understand where you are coming from, and so can integrate information instead of just finding it unfair;
It publicly states community norms (in the public-message version), and so may help others of us retool our comments in more useful ways as well (as well as making us less likely to feel there are random invisible grudges disrupting LW karma);
If you are mistaken about what is and is not useful, others can respond by explicitly sharing conflicting impressions.
ETA: My comment here was slightly mis-directed, in that Hollerith above said he would send the user a message explaining his reasoning.
JGWeissman writes, “I don’t see what you gain by this strategy that justifies the decrease in correlation between a comment’s displayed karma score and the value the community assigns it, which occurs when you down vote a comment not because it is a problem, but because the author had written other comments that are a problem.”
Vladimir Nesov writes, “If you are downvoting indiscriminately, not separating the better comments from the worse ones, without even bothering to understand them, you are abusing the system.”
Anna writes, “This has the following advantages over blanket user-downvoting: . . . It does not impair quality-indicators on the user’s other comments”
The objection is valid. I retract my proposal and will say so in an addendum to my original comment.
The problem with my proposal is the part where the voter goes to a commenter’s lesswrong.com/user/ page and votes down 20 or 30 or so comments in a row. That dilutes or cancels out useful information, namely, votes from those who used the system the way it was intended.
If there were a way for a voter to reduce the karma of a person without reducing the point-score of any substantive comment, then my proposal might still have value, but without that, my proposal will have a destructive effect on the community, so of course I withdraw my proposal.
the point-values attached to individual comments have a larger impact on what LW-readers see than do users’ karma values
True, but if the behavior of the voters changes so that the former becomes less informative, the site will tend to change so that the user’s karma will come to have a larger impact. In competent software development, changes in people’s behavior will cause major changes in the software more often than changes in the software will cause major changes in the behavior of people. (Consequently, assuming the software developers are competent, most changes to the system are best initiated as changes to behavior rather than changes to the software—and if the software developers are not competent, then the site is probably doomed anyway.) Or so it seems to me.
A normally good contributor’s having a bad day is not going to be enough to trigger any downvoting of any of his comments under the policy I contemplate. The policy I contemplate makes use of a general skill that I hypothesize that most participants on this site have: the ability to reserve judgement on someone till one has seen at least a dozen communications from that person and then to make a determination as to whether the person is worth continuing to pay attention to.
The people who have the most to contribute to a site like this are very busy. As Eliezer has written recently on this site, all that is needed for this site to die is for these busy people to get discouraged because they see that the contributions of the worthwhile people are difficult to find among the contributions of the people who are not worth reading—and I stress that the people who are not worth reading often have a lot of free time which they use to generate many contributions.
Well, the voting is supposed to be the main way that the worthwhile contributions “float to the top” or float to where people are more likely to see them than to see the mediocre contributions. But that only works if the people who can distinguish a worthwhile contribution from a mediocre contribution bother to vote. So let us consider whether they do. For example, has Patri Friedman or Shane Legg bothered to vote? They both have made a few comments here. But they are both very busy people. I’ll send them both emails, referencing this conversation and asking them if they remember actually voting on comments here, and report back to y’all. (Eliezer is not a good person to ask in this regard because he has a big investment in the idea that a social web site based on voting will win, so of course he has been voting on the contributions here.)
The highest-scoring comment I know of is Shane Legg’s description of an anti-procrastination technique, which currently has 16 points. But there are thousands of readers of this site. Now it is possible that a lot more readers of Shane’s comment would have voted it up if it did not already have a high score, but I humbly suggest that it is more likely that only one or two or three percent of the readers of a comment would have bothered to vote on the comment regardless of its score.
Whether this site lives or dies seems to depend on the frequency with which the people who can tell a worthwhile comment from a non-worthwhile comment bother to vote. But like I said, these people tend to be very busy.
Hence my suggestion of adopting a policy of voting on commenters rather than comments—because that is going to save some of the busy person’s time.
There is a strong ethic in American society (and probably in other societies) that it is contributions and not individuals that should be judged. Well, I humbly suggest that since being able to contribute comments and posts here is not a basic human need, like housing or education or the opportunity to compete on an equal footing with other workers for income, the application of that admirable ethic to the decision of who gets to comment and post here is not worth the risk of this site’s going downhill to the point where the people who could have carried the site decide it is not worth the time out of their busy lives.
EDIT. If no other participants on this site declare their intention to use commenter-based voting, then I probably will not use commenter-based voting either because of what the economists call network effects. The only reason I suggested it in the first place is that conchis’s comment is not the first time someone here has indicated that voters other than me are already using commenter-based voting.
The policy I contemplate makes use of a general skill that I hypothesize that most participants on this site have: the ability to reserve judgment on someone till one has seen at least a dozen communications from that person and then to make a determination as to whether the person is worth continuing to pay attention to.
If you are basing your judgment on at least a dozen communications from the commenter, then, as I explained, you have already identified plenty of comments that should be downvoted. If you base your decision on seeing that 10 of the 12 observed comments are problems, then you can dock the bad commenter 10 points. And if you are right, you will not be the only one. You do not need to personally dock the user 20 or 30 points.
Whether this site lives or dies seems to depend on the frequency with which the people who can tell a worthwhile comment from a non-worthwhile comment bother to vote. But like I said, these people tend to be very busy.
If a person has excellent judgment to distinguish which comments should be up voted and which should be down voted, but does not have the time to actually use that judgment, then that person is not going to be a successful protector of this site. Either take the time to do it right, or leave it to those who have the good judgment and the time, of whom there seem to be plenty, given that the system is working.
If you are downvoting indiscriminately, not separating the better comments from the worse ones, without even bothering to understand them, you are abusing the system.
So far we have the following buttons below a comment:
Vote up
Vote down
Permalink
Report
Reply
What would you think of another button
Linked From
which would take you to a page that listed all the comments that had a permalink to that comment?
There’s a lot of permalinking to past comments, but hardly any linking to future comments. I think this is just something that hasn’t evolved yet—once it did, we would find it very natural.
Would it be difficult to implement? It would involve keeping a library of all permalinks used in comments, and updating the comment’s “Linked From” page as often as it is linked to. Preferably, the button would only appear if the comment was linked to somewhere, so as not to waste time checking if the comment had been linked.
Indeed this would be difficult to implement, and furthermore I’m not convinced it’s the best choice from a UI perspective.
It would be a bit of work even to find out when a comment linked to another comment, and notify it.
This sort of thing is why Tim Berners-Lee wanted hyperlinks to be two-directional in the first place, but there are a lot of good reasons why they’re not.
This would not be that hard to implement. Each comment could have a list of linking comments. Then whenever a comment links to another (which is not hard to detect, as links are already detected to support the comment markup), the list of the linked comment can be updated.
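A minimal sketch of that design, assuming a simple in-memory store; the function names and the permalink pattern here are hypothetical, not taken from the actual Less Wrong codebase:

```python
import re
from collections import defaultdict

# Hypothetical permalink pattern; the real site's URLs may differ.
PERMALINK_RE = re.compile(r"http://lesswrong\.com/lw/\w+/[\w-]+/(\w+)")

backlinks = defaultdict(list)  # comment id -> ids of comments linking to it

def register_comment(comment_id, body):
    """On posting, reuse the link detection that already supports the
    comment markup to update each linked comment's 'Linked From' list."""
    for target_id in PERMALINK_RE.findall(body):
        backlinks[target_id].append(comment_id)

def linked_from(comment_id):
    """Comments to show on the 'Linked From' page; render the button
    only when this list is non-empty."""
    return backlinks[comment_id]
```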
To make a good decision, it’s not necessary to be a good thinker. If you’re wise enough to defer to someone who is a good thinker, that also works. And if you’re wise enough to defer to someone who is wise enough to defer to someone (repeat N times) who is a good thinker, that also works. That suggests to me the hopeful thought that in a population of agents with varying rationality, a small change can cause a phase transition where the system goes from a very incompetent agent making the decisions to a very competent agent making the decisions. One might call this an “authority cascade”. Do you agree? Discuss, etc.
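As a toy illustration of the claimed phase transition (the deference graph and quality numbers below are invented for the example):

```python
def effective_quality(agent, defers_to, quality):
    """Follow an agent's chain of deference; the decision is effectively
    made by whoever the chain ends at (loops are cut at the first repeat)."""
    seen = set()
    while agent in defers_to and agent not in seen:
        seen.add(agent)
        agent = defers_to[agent]
    return quality[agent]

# Toy population: agent 0 is the lone good thinker, the rest are mediocre.
quality = {i: (0.9 if i == 0 else 0.2) for i in range(10)}
defers_to = {i: i - 1 for i in range(2, 10)}  # a chain 9 -> 8 -> ... -> 1

before = sum(effective_quality(i, defers_to, quality) for i in range(10)) / 10
defers_to[1] = 0  # one small change: agent 1 now defers to the good thinker
after = sum(effective_quality(i, defers_to, quality) for i in range(10)) / 10
print(before, after)  # 0.27 -> 0.9: nearly everyone upgraded at once
```

Here a single added link, from the one agent everyone else ultimately defers to, to the good thinker, jumps the population’s average effective decision quality from 0.27 to 0.9 in one step.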
The major problem with this approach is that you’d need to ask the good thinker about each particular decision of this kind, taking into account the context in which you are making it. Only when you have a good understanding of the world yourself can you make good custom decisions for every situation.
Textbook decisions only work where textbook writers know in advance what questions they need to address.
And then, of course, there is a task of making sure this system doesn’t turn into an echo chamber.
Meta: Given the rate at which new comments appear, I wish the comments feed http://lesswrong.com/comments/.rss contained more than 20 entries; say closer to 200. Also, all of the feeds I’ve looked at (front page, all new, comments) have the identical title “lesswrong: What’s new”, which is useless for distinguishing them.
I was poking around the older posts and found So you say you’re an altruist.… Without intending to open the whole discussion up again, I want to talk about the first example.
So please now imagine yourself to be in an ancient country which is ruled over by an evil king who has absolute power of life or death over all his subjects—including yourself. Now this king is very bored, and so for his amusement he picks 10 of his subjects, men, women, and children, at random as well as an eleventh man who is separate from the rest. Now the king gives the eleventh man a choice: he will either hang the 10 people picked at random and let the eleventh go free, or he will hang the eleventh man and let the other 10 go free. And the eleventh man must decide which it is to be.
My instinctive response is that the eleventh man is not at all responsible for the deaths of anyone, regardless of his choice. He is not murdering or saving anyone. The evil king is doing the murdering (and no saving). Why, in this scenario, do people place moral responsibility on the eleventh man?
This example could be rephrased into a hostage situation. If the evil king was holding ten people hostage and demanded $100 or he would kill them all, I wouldn’t give him a dime. Does that make me an evil person? Even if I was absolutely sure that the king was not lying, why should I pay him?
This all changes if I have a more personal investment in the people. If my loved ones were at risk I would probably pay the $100, or even sacrifice my life, even though I believe I am not morally responsible for the result. I apparently have more investment in the outcome.
Does this mean I am a terrible person? In my opinion the eleventh man should choose himself. When the other ten die, anyone who blames the eleventh is foolish.
In my opinion the eleventh man should choose himself.
Do you really mean “should” in the sense that it’s morally better to choose oneself? If so, could you provide some justification for that? My view would be that saving the 10 isn’t morally required but would be virtuous in a supererogatory sense.
In quick bullet points:
The eleventh man is not morally responsible for anyone dying, regardless of what he chooses
It is better to live than die
I see the king’s question as simple as “do you want to live?”
The answer is as simple as, “yes.”
The cost of the eleventh man living is not paid by himself, it is paid by the king in the form of a moral choice.
Also, to be clear, the man cannot sacrifice himself. He does not kill himself; he is not sacrificing anything; his life is not actually his to sacrifice. The king can kill him no matter what the answer is. The way I look at it, this scenario is exactly the same as the king making the eleventh man guess a random number correctly or everyone dies. The man has no power over the situation. Any power is an illusion because all of the power is the king’s.
Likewise, the man cannot save anyone. The ten are not his to save. The king decides who lives and dies.
I agree that the 11th man has no moral obligation in this scenario and I wouldn’t consider him to have acted immorally to choose his own life. I think this is the kind of scenario where people will signal altruism by costlessly saying they would sacrifice themselves in the thought experiment but that far fewer people would sacrifice themselves when faced with the actual choice.
I suspect that people will tend to look negatively on those who fail to signal altruism by saying they would sacrifice themselves but would be more forgiving of someone who was faced with the choice in reality. I think they would consider an individual who had actually made such a choice to have made a morally weak but understandable choice and would not consider him deserving of punishment.
You two seem to be making slightly different points here. Matt, I take it you accept that there is some reason to sacrifice yourself (not doing so would be “morally weak”) but that failing to do so would not be blameworthy. That sounds like a fairly mainstream view. In contrast, MrHen seems to be making the stronger claim that there is no reason to save the others at all (unless he has a personal investment in said others).
The idea that [the King is responsible for the deaths] screens off the possibility that [the 11th man is responsible for the deaths] seems to be a version of the single-true-cause fallacy. Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
If you want to prioritize your own life over theirs then you are free to do so, but I think you should own up to the fact that that’s ultimately what you’re doing. Disclaiming responsibility entirely seems like a convenient excuse designed to let you get what you want without having to feel bad about it.
I have to read the single-true-cause fallacy before I can fully reply, but here is a quick ditty to munch on until then:
Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
I disagree with this. The eleventh’s choice is completely irrelevant. The king has a decision to make, and just because he makes it the same way every single time does not mean the actual decision is any different the next time around.
The similar example where the king puts a gun in the eleventh’s hand and says “kill them or I kill you” is when the choice actually becomes the eleventh’s. In this scenario, the eleventh man has to choose to (a) kill the ten or (b) not kill the ten. This is a moral decision.
Of note, whoever actually has to kill the ten has this choice and will probably choose the selfish route. If the king shares the blame with anyone, it will be whoever actually kills the ten. If the eleventh is morally responsible, then everyone else watching the event is morally responsible, too.
I don’t understand what coherent theory of causation could make this statement true.
The issue is not causality. The issue is moral responsibility. If I go postal and start shooting people as they run past my house and later tell the police that it was because my neighbor pissed me off, the neighbor may have been one (of many) causes but should not be held morally responsible for my actions.
Likewise, if the king asks someone a question and, in response, kills ten people, I do not see how the question asked makes any difference in the assignment of moral responsibility.
Causality does not imply moral responsibility.
Also, having read the link you gave earlier, I can now comment on this:
The idea that [the King is responsible for the deaths] screens off the possibility that [the 11th man is responsible for the deaths] seems to be a version of the single-true-cause fallacy. Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
“Responsible” has two meanings. The first is a cause-effect sense of “these actions led to these other actions.” This is the same as saying a bowling ball is responsible for the bowling pins falling over.
The other is a moral judgement stating “this person should be held accountable for this evil.” The bowling ball holds no moral responsibility because it was thrown by a bowler.
I am not claiming that the eleventh man was not part of the causal chain that resulted in ten people dying. I am claiming that the eleventh man holds no moral responsibility for the ten people dying. I am not trying to say that the king is the single-true-cause. I am claiming that the king is the one who should be held morally responsible.
To belabor this point with one more example: If I rigged a door to blow up when opened and Jack opened the door while standing next to Jill, they are both reduced to goo. Jack is causally responsible for what happened because he opened the door. He is not, however, morally responsible.
The question of when someone does become morally responsible is tricky and I do not have a good example of when I think the line is crossed. I do not, however, pass any blame on the eleventh man for answering a question to which there is no correct answer.
The issue is not causality. The issue is moral responsibility.
Agreed. But I think if you want to separate the two, you need a reasonable account of the distinction. One plausible account relies on reasonably foreseeable consequences to ground responsibility, and this is pretty much my view. It accounts easily for the neighbor, bowling ball, and Jack and Jill cases, but still implies responsibility for the 11th man.
I can accept a view that says that, all things considered, the king has a greater causal influence on the outcome of the 11th man case, and thus bears much greater moral responsibility for it than does the 11th man. But (and this was the point of the no-single-true-cause analogy) I see no reason why this should imply that the 11th man has no responsibility whatsoever, given that the death of 10 innocent others is a clearly foreseeable consequence of his choice.
I still think this is a convenient conclusion designed to let you be selfish without feeling like you’re doing anything wrong.
P.S. FWIW, yes I pretty much do think you’re evil if you’re not willing to sacrifice $100 to save 10 lives in your hostage example. I can understand not being willing to die, even if I think it would be morally better to sacrifice oneself. (And I readily confess that it’s possible that I would take the morally wrong/weak choice if actually faced with this situation.) But for $100 I wouldn’t hesitate.
One plausible account relies on reasonably foreseeable consequences to ground responsibility, and this is pretty much my view.
I can understand that. I have not dug quite so deeply into this area of my ethical map so it could be representing the territory poorly. What little mental exercises I have done have led me to this point.
I guess the example that really puts me in a pickle is asking what would happen if Jack knew the door was rigged but opened it anyway. It makes sense that Jack shares the blame. There seems to be something in me that says the physical action weighs against Jack.
So, if I had to write it up quickly:
Being a physical cause in a chain of events that leads to harm
While knowing the physical action has a high likelihood of leading to harm
Is evil
But, on the other hand:
Being a non-physical cause in a chain of events that leads to harm
While knowing the non-physical action has a high likelihood of leading to harm
Is not necessarily evil but can be sometimes
Weird. That sure seems like an inconsistency to me. Looks like I need to get the mapmaking tools out. The stickiness of the eleventh man is that the king is another moral entity and the king somehow shrouds the eleventh from actually making a moral choice. But I do not have justification for that distinction.
There may yet be justification, but working backwards is not proper. Once I get the whole thing worked out I will report what I find, if you are interested.
My use of the phrase ‘morally weak’ was to describe how I think many/most people would view the choice, not my own personal judgement. I agree with MrHen that the 11th man’s choice is not morally wrong. I was contrasting that with what I think would be the mainstream view that the choice is morally wrong but understandable and not deserving of punishment.
To me this is similar to the trolley problems where you are supposed to choose between taking action and killing one person to save 10, or taking no action and allowing the 10 to die. The one person to be sacrificed is yourself, however. I wouldn’t kill the one to save the 10 either (although I view that as more morally wrong than sacrificing yourself). I also generally place much lower moral weight on harm caused by inaction than harm caused by action, and the forced choice scenario here presents the 11th man with a situation that I think is similar to one of causing harm by inaction.
As to the act-omission distinction, it would be simple enough to stipulate that the default option is that you die unless you tell the king to kill the other ten. Does this change your willingness to die?
No, that wouldn’t change my decision. It’s the not-sacrificing-your-life that I’m comparing with causing harm by inaction (the inaction being the not-sacrificing) rather than anything specific about the way the question is phrased.
The agency of the king does make a relevant difference in this scenario in my view. It is not exactly equivalent to a scenario where you could sacrifice your life to save 10 people from a fire or car crash. Although I don’t think there is a moral obligation in that case either I do consider the difference morally relevant.
Suppose the king has 10 people prepared to be hung. They are in the gallows with nooses around their neck, standing on a trap door. The king shows you a lever that will open the trap door, and kill the 10 victims. The king informs you that if you do not pull the lever within one hour, the 10 people will be freed and you will be executed.
Here the king has set up the situation, but you will be the last sentient being capable of moral reasoning in the causal chain that kills 10 people. Is your conclusion different in this scenario?
The king here is more diabolical and the scenario you describe is more traumatic. I believe it does change the intuitive moral response to the scenario. I don’t believe it changes my conclusion of the morality of the act. I feel that I’d still direct my moral outrage at the king and absolve the 11th man of moral responsibility.
This is where these kinds of artificial moral thought experiments start to break down though. In real situations analogous to this I believe the uncertainty in the outcomes of various actions (together with other unspecified details of the situation) would overwhelm the ‘pure’ decision made on the basis of the thought experiment. I’m unconvinced of the value of such intuition pumps in enhancing understanding of a problem.
Why is this where the thought experiments suddenly start to break down? Sure, it’s a less convenient world for you, but I don’t see why it’s any more artificial than the original problem, and you didn’t seem to take issue with that.
I have taken issue with the use of thought experiments generally in previous comments, partly because it seems to me that they start to break down rapidly when pushed further into ‘least convenient world’ territory. I’m skeptical in general of the value of thought experiments in revealing philosophical truths of any kind, ethical or otherwise. They are often designed by construction to trigger intuitive judgements based on scenarios so far from actual experience that those judgements are rendered highly untrustworthy.
I answered the original question to say that yes, I did agree that the 11th man was not acting immorally here. I suspect this particular thought experiment is constructed as an intuition pump to generate the opposite conclusion, and to the extent that the first commenter is correct that the view that the 11th man has done nothing immoral is a minority position, it would seem it serves its purpose.
I’ve attempted to explain why I think the intuition that this is morally questionable is generated and why I think it’s not to be fully trusted. I don’t intend to endorse the use of such thought experiments as a good method for examining moral questions though.
Fair enough. It was mainly the appearance of motivated stopping that I was concerned with.
While I share some general concerns about the reliability of thought experiments, in the absence of a better alternative, the question doesn’t seem to be whether we use them or not, but how we can make best use of them despite their potential flaws.
In order to answer that question, it seems like we might need a better theory of when they’re especially likely to be poor guides than we currently have. It’s not obvious, for example, that their information content increases monotonically in realism. Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.*
As well as trying to frame scenarios in ways that reduce noise/bias in our intuitions, we can also try to correct for the effect of known biases. A good example would be adjusting for scope insensitivity. But we need to be careful about coming up with just-so stories to explain away intuitions we disagree with. E.g. you claim that the altruist intuition is merely a low-cost signal; I claim that the converse is merely self-serving rationalization. Both of these seem like potentially good examples of confirmation bias at work.
Finally, it’s worth bearing in mind that, to the extent that our main concern is that thought experiments provide noisy (rather than biased) data, this could suggest that the solution is more thought experiments rather than fewer (for standard statistical reasons).
* And even if information content did increase with realism, realism doesn’t seem to correspond in any simple way to convenience (as your comments seem to imply). Not least because convenience is a function of one’s favourite theory as much as it is a function of the postulated scenario.
I would be interested in hearing more on this subject. It sounds similar to Hardened Problems Make Brittle Models. Do you have any good jumping-off points for further reading?
Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.
I don’t consider moral intuitions simple at all though. In fact, in the case of morality I have a suspicion that trying to apply principles derived from simple thought experiments to making moral decisions is likely to produce results roughly as good as trying to catch a baseball by doing differential equations with a pencil. It seems fairly clear to me that our moral intuitions have been carefully honed by evolution to be effective at achieving a purpose (which has nothing much to do with an abstract concept of ‘good’) and when a simplified line of reasoning leads to a conflict with moral intuitions I tend to trust the intuitions more than the reasoning.
There seem to be cases where moral intuitions are maladapted to the modern world and result in decisions that appear sub-optimal, either because they directly conflict with other moral intuitions or because they tend to lead to outcomes that are worse for all parties. I place the evidentiary bar quite high in these cases though—there needs to be a compelling case made for why the moral intuition is to be considered suspect. A thought experiment is unlikely to reach that bar. Carefully collected data and a supporting theory are in with a chance.
I am also wary of bias in what people suggest should be thrown out when such conflicts arise. If our intuitions seem to conflict with a simple conception of altruism, maybe what we need to throw out is the simple conception of altruism as a foundational ‘good’, rather than the intuitions that produce the conflict.
I confess to being somewhat confused now. Your previous comment questioned the relevance of moral intuitions generated by particular types of thought experiments, and argued (on what seem to me pretty thin grounds) against accepting what seemed to be the standard intuition that the 11th man’s not-sacrificing is morally questionable.
In contrast, this comment extols the virtues of moral intuitions, and argues that we need a compelling case to abandon them. I’m sure you have a good explanation for the different standards you seem to be applying to intuitive judgments in each case, but I hope you’ll understand if I say this appears a little contradictory at the moment.
P.S. Is anyone else sick to death of the baseball/differential equations example? I doubt I’ll actually follow through on this, but I’m seriously tempted to automatically vote down anyone who uses it from now on, just because it’s becoming so overused around here.
P.P.S. On re-reading, the word “simple” in the sentence you quoted was utterly redundant. It shouldn’t have been there. Apologies for any confusion that may have caused.
I made a few claims in my original post: i) I don’t think the 11th man is acting immorally by saving himself over the 10; ii) most people would think he is acting immorally; iii) most people would choose to save themselves if actually confronted with this situation; iv) most people would consider the 11th man’s moral failing to be forgivable. I don’t have hard evidence for any claim except i), they are just my impressions.
The contradiction I see here is mostly in the conflict between what most people say they would do and what they would actually do. One possible resolution of the conflict is to say that self-sacrifice is the morally right thing to do but that most people are morally weak. Another possible resolution is to say that self-sacrifice is not a morally superior choice and therefore most people would actually not be acting immorally in this situation by not self-sacrificing. I lean towards the latter and would attempt to explain the conflict by saying that people see more value in signaling altruism cheaply (by saying they would self-sacrifice in an imaginary scenario) than in actually being altruistic in a real scenario. There is a genuine conflict here but I would resolve it by saying people have a tendency to over-value altruism in hypothetical moral scenarios relative to actual moral decisions. I actually believe that this tendency is harmful and leads to worse outcomes but a full explanation of my thinking there would be a much longer post than I have time for right now.
Conflicts can exist between different moral intuitions when faced with an actual moral decision and resolving them is not simple but that’s a different case than conflicts between intuitions of what imaginary others should do in imagined scenarios and intuitions about what one should do oneself in a real scenario.
If you have a better alternative to the baseball/differential equations example I’d happily use it. It’s the first example that sprang to mind, probably due to its being commonly used here.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
If I say “X-ing is wrong, but actually, if ever faced with this situation I would quite possibly end up X-ing because I’m selfish/weak” (which is what I and others have said elsewhere) then (a) there’s no conflict to resolve; and (b) it doesn’t make much sense to claim that my judgment that “X is wrong” is a cheap signal of altruism. In fact I’ve just signaled the opposite.
Now, if people change their moral judgments from “X-ing is wrong” to “X-ing is permissible”, then I agree that there’s a conflict to resolve. But it seems that cognitive dissonance provides an explanation of this behavior at least as good as cheap talk.
FWIW, If you want a self-interested explanation of the stated judgment that “X-ing is wrong”, I wonder whether moral censure (i.e. trying to convince others that they shouldn’t X, even though you will ultimately X) would be a better one than signaling. Not necessarily mutually exclusive I guess.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
Judgements that a choice is morally wrong are clearly not the same thing as predictions about whether people would make that choice. The way I view morality, though, a wide gulf between the two is indicative of a problem to be resolved. I see the purpose of morality as providing a framework for solving something analogous to an iterated prisoner’s dilemma. If we can all agree to impose certain restrictions on our own actions because we all expect to do better if everyone sticks to the rules, then we have a system of morality.
Humans have a complex interplay of instinctive moral intuitions and cultural norms that together form a moral framework that exists because it provides a reasonably stable solution to living in mutually beneficial societies. That doesn’t mean it can’t be improved, just that its very existence implies that it works reasonably well.
The problem then with a moral dilemma that appears to present a wide gap between what people say should be done and what people would actually do is that it suggests a flaw in the moral framework. A stable framework will generally require that decisions that people can agree are right (in that we’d expect on average to be better off if we all followed them) are also decisions that people can plausibly commit to taking if faced with the problem. It’s like the pre-commitment problem discussed before on Less Wrong. You might wish to argue for an idealized morality that sets standards for what people should do that are not what most people would do, but then you have to make a plausible case for why what people actually do is wrong. Further, I’d argue you have to make a case for how your system could actually be implemented with actual people in a stable fashion—an idealized morality that is not achievable with actual people is not very interesting to me.
Ultimately I don’t take a utilitarian view of morality—that what is ‘good’ is what maximizes utility across all agents. I take an ‘enlightened self interest’ view—that what is ‘good’ is what all agents can agree is a framework that will tend to lead to better expected outcomes for each individual if each individual constrains his own immediate self interest in certain ways.
There are heaps and heaps of consequentialist/utilitarian views that don’t maximize utility uncritically across everybody. It sounds like you prefer something in the neighborhood of agent-favoring morality, but ethical egoism is a consequentialist view too.
Based on discussions I’ve had here I get the impression that most people consider ‘utilitarianism’, unqualified, to imply equal weighting for all people in the utility function to be maximized. Even where equal weighting is not implied (the existence of the ‘utility monster’ as a problem for some variants acknowledges that weights are not necessarily equal) it seems that utilitarianism has a unique weighting for all agents and that what is ‘right’ is what maximizes some globally agreed upon utility function. I don’t accept either premise so I’m fairly sure I’m not a utilitarian.
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies. I haven’t found a description of an ethical theory that I feel comfortable identifying my views with so far, though ethical egoism seems somewhat close from the little I’ve read on Wikipedia (it’s what I ended up putting down on Yvain’s survey).
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies.
If a scheme isn’t implementable or stable, then it doesn’t maximize welfare, so utilitarianism does not recommend it. Utilitarianism describes a goal, not a method.
I don’t consider myself a utilitarian because I don’t agree with the goals of any of the variants I’ve seen described.
I’m not sure whether I consider myself a consequentialist because while I think that ultimately outcomes are important, I don’t see enough attention paid to issues of implementability and stability in many descriptions of consequentialist views I’ve read.
For example, it seems that some (not all) consequentialist ethics consider the ‘rightness’ of an action to be purely a function of its actual consequences, thus making it possible for an attempted murder to be a morally good act because it has an unintended good consequence and an attempt at assistance to be a morally bad act because it has an unintended bad consequence. Other variants of consequentialist ethics (rule consequentialism, which seems closer to something I would feel comfortable identifying with) recognize the impossibility of perfect prediction of outcomes and so associate the ‘good’ with rules that tend to produce good outcomes if followed. Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
That’s okay, nobody else knows either. (People have guesses, but most of them exclude things that seem like they should be included or vice-versa.) The only way to get a handle on the word seems to be to listen to people use it a lot and sort of triangulate.
They are often designed by construction to trigger intuitive judgments based on scenarios so far from actual experience that those judgments are rendered highly untrustworthy.
Agreed; however it’s important to distinguish between this sort of appeal-to-intuition and the more rigorous sort of thought experiment that appeals to reasoning (e.g. Einstein’s famous Gedankenexperimente).
I don’t believe it changes my conclusion of the morality of the act.
Given that your defense of the morality was based on the inaction of not self sacrificing, and that in this scenario inaction means self sacrifice and you have to actively kill the other 10 people to avoid it, what reasoning supports keeping the same conclusion?
I’m comparing the inaction to the not-self-sacrificing, not to the lack of action. I attempted to clarify the distinction when I said the similarity was not ‘anything specific about the way the question is phrased’.
The similarity is not about the causality but about the cost paid. In many ‘morality of inaction’ problems the cost to self is usually so low as to be neglected but in fact all actions carry a cost. I see the problem not as primarily one of determining causality but more as a cost-benefit analysis. Inaction is usually the ‘zero-cost’ option, action carries a cost (which may be very small, like pressing a button, or extremely large, like jumping in front of a moving trolley). The benefit is conferred directly on other parties and indirectly on yourself according to what value you place on the welfare of others (and possibly according to other criteria).
I think our moral intuition is primed to distinguish between freely chosen actions taken to benefit ourselves that ignore fairly direct negative consequences on others (which we generally view as morally wrong) and refraining from taking actions that would harm ourselves but would fairly directly benefit others (which may or may not be viewed as morally wrong but are generally seen as ‘less wrong’ than the former). We also seem primed to associate direct action with agency and free choice (since that is usually what it represents) and so directly taken actions tend to lead to events being viewed as the former rather than the latter.
I believe the moral ‘dilemma’ represented by carefully constructed thought experiments like this represents a conflict between our ‘agency recognizing’ intuition that attempts to distinguish directly taken action from inaction and our judgement of sins of commission vs. omission. Given that the unusual part of the dilemma is the forced choice imposed by a third party (the evil king) it seems likely that the moral intuition that is primed to react to agency is more likely to be making flawed judgements.
I see the problem not as primarily one of determining causality but more as a cost-benefit analysis.
This makes sense to me, but it seems to run counter to the nature of MrHen’s original claim that the issue is lack of responsibility. For example, if it’s all about CBA, then you would presumably be more uneasy about MrHen’s hostage example ($100 vs. 10 lives) than he seems to be. Presumably also you would become even more uneasy were it $10, or $1, whereas MrHen’s argument seems to suggest that all of this is irrelevant because you’re not responsible either way.
In this example I wouldn’t hold someone morally responsible for the murders if they failed to pay $100 ransom—that responsibility still lies firmly with the person taking the hostages. Depending on the circumstances I would probably consider it morally questionable to fail to pay such a low cost for such a high benefit to others though. That’s a little different to the question of moral responsibility for the deaths however.
Note that I also don’t consider an example like this morally equivalent to not donating $100 to a charity that is expected to save 10 lives as a utilitarian/consequentialist view of morality would tend to hold.
OK, I think I’m sort of with you now, but I just want to be clear about the nature of the similarity claim you’re making. Is it that:
1. you think there’s some sort of justificatory similarity between not-sacrificing and harm-by-inaction, such that those who are inclined to allow harm-by-inaction should therefore also be more willing to allow not-sacrificing; or is it just that
2. you just happen to hold both the view that harm-by-inaction is allowed and the view that not-sacrificing is allowed, but the justifications for these views are independent (i.e. it’s merely a contingent surface similarity)?
I originally assumed you were claiming something along the lines of 1., but I’m struggling to see how such a link is supposed to work, so maybe I’ve misinterpreted your intention.
you think there’s some sort of justificatory similarity between not-sacrificing and harm-by-inaction, such that those who are inclined to allow harm-by-inaction should therefore also be more willing to allow not-sacrificing
Yes. I’d generally hold that it is not morally wrong to allow harm-by-inaction: there is not a general moral obligation to act to prevent harm. In real moral dilemmas there is a continuum of cost to the harm-preventing action and when that cost is low relative to the harm prevented it would be morally good to perform that action but not morally required. At extremely low cost relative to harm things become a little fuzzy and inaction borders on an immoral choice. When the cost of the action is extremely high (likely or certain self-sacrifice) then there is no fuzziness and inaction is clearly morally allowed (not-sacrificing by jumping in front of a trolley cart to save 10 is not immoral).
Given inaction being morally permitted in the trolley case, I have difficulty imagining a coherent moral system that would then say that it was not permissible for the 11th man to save himself. The evil king does change the problem but I can only see it making not-sacrificing more rather than less morally acceptable. I can conceive of coherent moral systems that would allow the 11th man to save himself but would require the trolley jumper to sacrifice himself. I have difficulty conceiving of the reverse. That’s not to say that one doesn’t exist, it’s just sufficiently removed from my own moral sense that it doesn’t present itself to me.
That would fall in the territory I describe as fuzzy above. At a sufficiently low cost inaction begins to seem morally questionable. That is largely driven by intuition though and I’m skeptical of attempts to scale it up and draw moral conclusions. I believe there are reasons the intuition exists that do not scale up simply. In other words, scaling up from this to conclude that if a very small cost is obligatory to save a single person then a very large cost is obligatory to save a million people is faulty reasoning in my opinion.
Re. repeated requests for some LW whipping-boy other than religion: How about (Platonic) realism?
It may be more popular than religion; and it may be hard to find a religion that doesn’t require at least moral realism; but people will get less worked-up over attacks on their metaphysics than attacks on their religion.
Having recently received a couple of Amazon gift certificates, I’m looking for recommendations of ‘rationalist’ books to buy. (It’s a little difficult to separate the wheat from the chaff.)
I’m looking mainly for non-fiction that would be helpful on the road to rationality. Anything from general introductory type texts to more technical or math oriented stuff.
I found this OB thread which has some recommendations, but I thought that:
this could be a useful thread for beginners (and others) here
the ability to vote on suggestions would provide extra information
So, if you have a book to recommend, please leave a comment. If you have more than one to recommend, make them separate comments so that each can be voted up/down individually.
I would think that it is, but I’ve never taken a standard Econ 101 college course (not educated in the US, and didn’t take any economics courses as part of my degree).
It helped clarify some thoughts I’d already had about free will—that the standard paradox of free will as incompatible with determinism was not a true paradox. I think the concept of free will used by many people is horribly confused and this book is the best attempt I’ve seen to come up with a coherent conception of what free will can mean in a purely material universe.
Same and same. Recently, Dennett has been good on memes, but elsewhere he does tend to waffle a bit. In Freedom Evolves, Dennett redefines the terms he is discussing, berates everyone else for not using his definitions, and then bangs on about them for hundreds of pages. That’s philosophy for you.
According to this post, doing so would be “against blog guidelines”. The suggested approach is to do top-level book review posts. I haven’t seen any of these yet, though.
What do people here (esp. libertarians) think about (inter)net neutrality?
Seems to me that net neutrality is a great boon to spammers and porn downloaders. People might not like it so much if they discovered that, without net neutrality, they could pay an extra dollar a month and increase their download speed while browsing by a factor of ten.
It seems you are conflating net neutrality (ISPs should not discriminate based on packet characteristics, including origin) with the concept that users should pay for the resources they use.
For one thing, spammers usually use botnets, so no change there; average users would bear the cost one way or another. Unless you are advocating deprioritization of all email traffic, ISPs have no way other than spam filters to differentiate what counts as spam. I see no connection to the net neutrality debate, or to the pay-per-usage model.
As for porn-downloaders, I take it you mean people with high bandwidth needs, which includes all sorts of downloaders. (I really don’t see why you would emphasise porn here; even if you’re trying to evoke feelings of moral resentment, LW would seem the unlikeliest of places where this would have any effect.) I never had a problem with bandwidth usage caps, as long as they are explicit. Then carriers can compete on what they set these limits to and I can choose based on my needs. Nothing to do with net neutrality as far as I can see.
As for my libertarian view on net neutrality: When the governments allow for true competition between ISPs, they can drop all net neutrality provisions as far as I care. But then again, in a truly competitive market, I doubt we would be having a net neutrality issue to begin with.
As for my libertarian view on net neutrality: When the governments allow for true competition between ISPs, they can drop all net neutrality provisions as far as I care.
Do you believe that true competition can exist in a free market where the economies of scale are as big as in the ISP market? If net neutrality isn’t enforced, a big ISP could squash a small new ISP by demanding a lot of money for peering. They are much less likely to try something like this against a big ISP, which has a lot more bargaining power.
(I am assuming “true competition” means at least low barriers to entry.)
The arguments against monopolies in a free market apply, here. A big ISP which set out to squash little ISPs would run up its own costs trying, thereby losing to other big ISPs which didn’t do this. If there was only one big ISP, they’d eventually fail if they kept this up, since it would be in the interest of all the little ISPs to peer with each other, and they’d eventually have most of the market, collectively. Economies of scale can be really useful, but unless your firm is able to use force, much of the savings will go to the consumers through competition.
Of course, in the real world, we’re awash in force, so perhaps this isn’t very useful. :(
A big ISP which set out to squash little ISPs would run up its own costs trying, thereby losing to other big ISPs which didn’t do this. If there was only one big ISP, they’d eventually fail if they kept this up, since it would be in the interest of all the little ISPs to peer with each other, and they’d eventually have most of the market, collectively.
But in the meantime, very many small ISPs would go out of business trying to compete before they collectively pull down the big ISP, which likely has other advantages beyond competing on price, such as having a lot of friends and influence among the set of people who could possibly invest funding into a new ISP.
At some point people are going to realize that getting into the ISP market is a recipe for disaster, and if this happens before the big ISP runs out of slack, competition dries up and the big ISP gets to continue being a monopoly.
So yes, if you assume that significant numbers of people will make irrational decisions and take large personal losses starting businesses that are very likely to fail it might work out, but I’m not sure that’s justified.
Honestly, most of the arguments about why monopolies would never survive in a truly free market are glaring examples of how irrational hard-line free market ideas are, usually because people turn the idea of an unregulated market itself into a terminal value and then start rationalizing why it will obviously produce ideal results.
Check out the startup market sometime. Most startups fail, yet there always seems to be money for new ones, because every now and then there’s a Google. You seem to be assuming that people won’t do what they’re actually doing.
Technology startups generally have relatively low entry costs and aren’t trying to jump into an established market with substantial network effect and force out an entrenched larger player.
How many startups do you see trying to, say, go toe-to-toe with Microsoft in the desktop OS or office suite market, and how successful are those?
It’s a fallacy to point to the lack of direct competition from startups in the desktop OS or office suite market and claim that as proof that natural monopolies exist. Companies that dominate an industry for a period often lose their dominance when new technologies come along that make their dominance irrelevant.
Companies that dominated telecommunications when fixed land lines were the only game in town now compete against cellular phone networks and Internet telephony. Microsoft’s dominance in the desktop OS space is becoming less and less relevant as more of people’s day to day computing needs move into the cloud. Google Docs is a potential challenger to Office in the future and has its roots partly in a startup (Writely).
Technological innovation has a way of undermining monopolies that are not protected by government regulation. Sometimes it even manages to undermine protected monopolies—the process of updating legislation to maintain profitable monopoly privileges in the face of technological change is fortunately slow enough that the rent-seeking entities can be beaten by faster moving companies.
Net neutrality is a bunch of different issues that get incorrectly lumped together.
The first issue is prioritizing traffic based on endpoints. An example of that is where ISP A contacts example.com and offers to speed up its customers’ connections to example.com, in exchange for money. The problem is that example.com isn’t a customer of ISP A, but of a competing ISP. The full graph of business relationships is as follows:
The end user pays ISP A to use its portion of the network, example.com pays ISP B to use its portion of the network, and they split the cost of the link that connects the two ISP’s networks. If ISP A goes after example.com, then it’s trying to bill its competitor’s customers. This is probably in violation of its peering agreement with ISP B, and it would cause a total nightmare for anyone trying to run a web site, as they would have to negotiate with every ISP instead of just the one they get their connectivity from. So with respect to traffic endpoints, net neutrality is extremely important.
The second issue is prioritizing traffic based on type. This is reasonable and sometimes necessary, because some protocols such as telephony only use a small amount of bandwidth but are badly disrupted if they don’t get any for a fraction of a second, while other protocols like ftp use tons of bandwidth but can be paused for several seconds without disrupting anything. The problem there is that protocol prioritization is more often used as a cover story for anti-competitive behavior; eg, ISP A wants to drive ISP B out of business, so they configure their network so that ISP A’s expensive VoIP service gets high priority, ISP B’s VoIP service gets low priority, and everything else gets medium priority. You end up with telephone companies setting the priority on VoIP services that directly compete with their own voice services, cable television setting the priority on streaming video services that directly compete with their own television services, and so on.
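To make the distinction concrete, here is a minimal sketch (my own illustration in Python, not any ISP’s actual system) of what legitimate type-based prioritization looks like. The port numbers and priority labels are assumptions chosen for the example; the point is that a legitimate rule keys on protocol type, while the abusive rule described above keys on who is sending the packets.

    # Legitimate QoS: classify by protocol *type* only.
    # Port numbers below are illustrative assumptions, not a real ISP config.
    LATENCY_SENSITIVE_PORTS = {5060, 5061}   # e.g. SIP signalling for VoIP
    BULK_TRANSFER_PORTS = {20, 21}           # e.g. FTP

    def priority(dest_port):
        if dest_port in LATENCY_SENSITIVE_PORTS:
            return "high"    # little bandwidth, but delays are disruptive
        if dest_port in BULK_TRANSFER_PORTS:
            return "low"     # lots of bandwidth, tolerates pauses
        return "medium"

    print(priority(5060), priority(21), priority(443))  # high low medium

    # The anti-competitive abuse described above would instead add a rule
    # keyed on the packet's *origin*, e.g.:
    #   if source_ip in COMPETITOR_VOIP_SERVERS: return "low"
    # which is exactly the behavior endpoint-neutrality rules target.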
Phil, things like cables and phone lines going to houses are “natural monopolies” in that it costs so much to install them that competitors probably can never get started. In fact, if the technology to deliver video over phone lines were available or anticipated when cable TV was building out in the 70s, the owner of the phone lines (pre-breakup AT&T) could probably have stopped the new cable TV companies from ever getting off the ground (by using the fact that AT&T has already paid for its pipe to the home to lowball the new companies). In other words, the probable reason we have two data pipes going into most homes in the U.S. rather than just one is that the first data pipe (the phone line) was not, at the time the second pipe was introduced, technically able to carry the new kind of data (video).
It is desirable that these duopolists (the owners of the phone lines and the cable-TV cables going to the home) are not able to use their natural duopoly as a wedge to enter markets for data services like search engines, online travel agencies, online stores, etc, in the way that Microsoft used their ownership of DOS to lever their way into dominance of markets like word processors and web browsers.
One way to do that is to draw a line at the level of “IP connectivity” and impose a regulation saying that the duopolists are in the business of selling this IP connectivity (and, if they like, levels below IP connectivity like raw copper) but cannot enter the market (or prefer partners who are in the market) of selling services that ride on top of IP connectivity and depend on IP connectivity to deliver value to the residential consumer.
This proposal has the advantage that, up to now, companies that provide IP connectivity have mostly stayed out of most of the markets that depend on IP connectivity to deliver value to the residential consumer.
It is possible to enshrine such a separation into law and regulations without letting one cable-internet user on a local network (or whatever they call them) shared by a whole block of houses hog most of the bandwidth of the local network. I.e., there is nothing incompatible here with contracts that impose a monthly cap on bytes received.
And even if spam filtering is made an exception to the separation, so that both connectivity providers (cable-internet and DSL providers) and Google can offer spam filtering, that does not mean that spammers get a free license to spam. What we want is to prevent Verizon or Comcast from making it impossible or more difficult for Joe Consumer to go to Expedia than to go to Travelocity (or the Comcast Travel Store)—or more difficult for him to go to Windows Live Search than to Google Search—and we can do that while still allowing Verizon and Comcast to cut off recalcitrant spammers (or requiring Joe Consumer to get his email from an email provider that does not happen to be a duopolist, and who will cut off recalcitrant spammers).
Bob Frankston has been eloquent on this issue for at least 10 years now.
Phil, things like cables and phone lines going to houses are “natural monopolies” in that it costs so much to install them that competitors probably can never get started.
How does that square with the fact that in places without government-granted monopolies, there is often more than one provider? My apartment building has two separate cable companies, in addition to Verizon fiber. Is there a general argument that explains why rental houses often end up with cable boxes from two or more providers in areas without government suppression of competition, while still holding that such competition can’t happen in the general case?
Installing wires requires digging up roads, using utility poles and leaving utility boxes on other people’s property. You need permission from local government to do that, period. In some places, the local governments only give one company permission to do that. That’s a government-granted monopoly. In other places, they give permission to more than one company, so that they can compete with each other. That’s a government-granted oligopoly. But whether there’s one pre-existing cable company or five, if you want to start a new cable company, you need government permission, and you probably won’t get it. It’s nothing like a free market.
And it’s worth noting that a gentlemen’s agreement to not compete too hard is profitable for all parties in a heavily restricted market. Ergo, the government-granted oligopoly is only superior to the monopoly insofar as you expect business executives to be irrational enough to not cooperate in the iterated Prisoner’s Dilemma.
There are other alternatives as well. There’s a company here that provides high speed broadband to businesses via a network of roof mounted microwave transmitters. Businesses use them because they offer better value than paying a local cable company to hook a building up. My parents in rural England had a number of options for broadband despite no cable companies operating in the area, including DSL and a wireless relay from a satellite uplink.
I’m not a libertarian. I am in favor of net neutrality for the following reasons:
Setting up different speeds for different sources of data means that somewhere along the line, either a person or a program is going to see what sites or downloads the customer is trying to access. Anything that gives someone an excuse to spy on browsing is to be discouraged.
Net neutrality keeps the barrier to entry on the Internet low. If it’s necessary to pay extra fees to ISPs to make your site tolerably fast to access, then financially disadvantaged people who have something to offer on their websites will lose out on the part of their audience that is not patient enough to wait for the slower load times.
Saying that no sites will get slower than they already are without net neutrality is not a convincing argument, because speed in computers is judged relatively. The computer my family had when I was five was a fast computer then. It is not just slower than the computers that are new now, it’s no longer a fast computer, period, even if we assume that it hasn’t deteriorated by an absolute measure in the last fifteen years. As such, I would rather everything be the same (absolutely slower) speed than have some things get (absolutely) faster.
Saying that consumers will be able to choose net-neutral ISPs is not a convincing argument, because in many places, there are not multiple ISPs competing for business. I cannot get my Internet from anyone other than Comcast; if Comcast becomes non-neutral, I cannot take my business elsewhere unless I want to do without the Internet at home altogether.
Net neutrality keeps the barrier to entry on the Internet low. If it’s necessary to pay extra fees to ISPs to make your site tolerably fast to access, then financially disadvantaged people who have something to offer on their websites will lose out on the part of their audience that is not patient enough to wait for the slower load times.
This seems to be an argument that people who have something they want to say that nobody wants to pay to hear should be subsidized by people who have something to say that they are either willing to pay to make available or have found others who are willing to pay them to hear (usually through the intermediary of paid adverts under current Internet business models). Is that your actual position, or would you not support that argument? If you do not support this interpretation, where do you see the distinction?
If I parse your long sentence correctly, I think I disagree with your interpretation. If no one wants to pay to hear something, that could be for any of several reasons, but the one I had in mind was lack of information about the message or the speaker (e.g. “Hey, do you want to buy a book? It’s only fifty cents!” “What book?” “I’m not going to tell you. It could be the latest John Scalzi, or it could be Volume Four of an outdated botanical encyclopedia, or it could be sheet music for the ocarina, or it could be a coffee table volume about those giant cow art projects in Kansas City.”). Browsing unknown websites is a gamble with time already; making it a gamble with money too will make it less appealing and fewer people will be able to get large audiences. New content providers already have difficulty attracting attention.
I see that as a feature rather than a bug though. Spam is a problem in large part because the cost of sending it is extremely low, much lower per mail for the spammer than the cost to the recipient in wasted time. If someone has some information that they want to share that they believe will be of value to others then an up-front investment is a measure of how valuable they really think it will be. If the primary value of sharing the information is the pleasure of hearing the sound of your own voice (as seems to be the case for a significant percentage of the Internet) or as an attempt to steal other people’s time for personal profit (as in the case of spam) then I think a higher barrier to entry is a good thing.
It seems to me that filtering out information that I don’t want is at least as big a problem on the Internet as finding information I do want.
If someone has some information that they want to share that they believe will be of value to others then an up-front investment is a measure of how valuable they really think it will be.
People already have to spend time and effort to provide the information, which constitutes a concrete investment indicating how valuable they think it is. Many also pay for web hosting. Why would additional costs in money serve any purpose other than to introduce a selection bias in favor of people who have more money?
Also, it wouldn’t help with spam at all and I have no idea why you think it would.
If different types of traffic can be given differing priorities or charged at different rates then I think creative solutions to the spam problems are more likely to be discovered. If some kind of blanket legislation is introduced prohibiting any kind of differentiation between types of traffic then I’m inclined to think we will see less optimal allocation of resources. Even differentiating between high-bandwidth/high-latency usage like movie downloads vs. medium-bandwidth/low-latency usage like online gaming will be restricted. I have no faith in lawmakers to craft legislation that will not hamper future technological innovations.
If different types of traffic can be given differing priorities or charged at different rates then I think creative solutions to the spam problems are more likely to be discovered.
You may recall that net neutrality is currently being debated; there is no current legal barrier to adjusting priorities for types of traffic. Spam has been a problem for quite a while now and no such solutions have been found.
The general rule of thumb when it comes to “creative solutions to the spam problem” is “it won’t work”.
I don’t think it’s true to say that no creative solutions to spam have been found. Spam filters are probably the most successful real world example of applying Bayesian techniques. The battle against spam is an ongoing struggle—we have solutions now to the original problems but the spammers keep evolving new attacks. Legislation will tend to reduce the options of the anti-spammers who have to follow the law and give an advantage to the spammers who ignore it.
Any legislation will limit options and hamper innovation and technological progress. That’s what legislation invariably does in all fields.
I don’t think it’s true to say that no creative solutions to spam have been found.
Bayesian filtering at the user end is the only exception to the rule of thumb I’m aware of. The only other anti-spam actions I’ve heard of with any success are distinctly non-creative variations on cutting the hydra’s heads off, such as blocking individual spam sources.
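For anyone curious, the core of the Bayesian technique mentioned above fits in a few lines. This is a minimal sketch, assuming per-token spam probabilities have already been estimated from a training corpus; the token probabilities at the bottom are made-up numbers, purely for illustration.

    import math

    def spam_probability(tokens, p_spam_given_token, prior=0.5):
        # Combine per-token evidence in log-odds form to avoid underflow
        # on long messages; unseen tokens are treated as neutral (p = 0.5).
        log_odds = math.log(prior / (1 - prior))
        for t in tokens:
            p = min(max(p_spam_given_token.get(t, 0.5), 0.01), 0.99)
            log_odds += math.log(p / (1 - p))
        return 1 / (1 + math.exp(-log_odds))

    # Made-up training estimates:
    token_probs = {"lottery": 0.95, "viagra": 0.99, "trolley": 0.05}
    print(spam_probability("claim your lottery prize now".split(), token_probs))
    # -> roughly 0.95: one spammy token tips an otherwise neutral message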
Any legislation will limit options and hamper innovation and technological progress. That’s what legislation invariably does in all fields.
“Invariably”? Do you have any evidence for this assertion?
Legislation can increase one group’s options by taking away options from another group. It can’t globally increase options. Legislation is just rules about what actions are permitted and what actions are not permitted, so it can’t create new options; it can only take them away or trade them off between different groups. Fewer options means reducing the space of allowed innovations and so hampers technological progress. If you want evidence I direct you to the field of economics.
As I said in another comment this discussion is straying into general politics territory and I’m not sure I want to start down that road. We still haven’t decided as a community how to deal with that particular mindkiller.
You’ve defined “options” in a manner that is zero-sum at best and serves mostly to beg the question raised. Even so, consider: what about government uniquely allows it to limit options? Imagine one man owns 90% of the property in a town and uses his influence to financially ruin anyone who does something he doesn’t like, limiting their potential options (a net negative). A government steps in and forbids him from this behavior, thereby limiting his options but restoring everyone else in the town’s options (a net positive).
You could of course define only physical force, or threat thereof, as limiting options, but even in that case a state-run police force is clearly restoring more options than it removes.
As I said in another comment this discussion is straying into general politics territory and I’m not sure I want to start down that road. We still haven’t decided as a community how to deal with that particular mindkiller.
Agreed, and I wouldn’t have gotten into it on anything but an Open Thread (wherein it seems relatively harmless).
what about government uniquely allows it to limit options?
A monopoly on the use of force.
A government steps in and forbids him from this behavior, thereby limiting his options but restoring everyone else in the town’s options (a net positive).
Assuming the validity of the example for the sake of argument, this kind of situation is what I meant when I said that legislation can only move options from one group to another. The example I had in mind was anti-discrimination laws—the government removes the option from an employer to discriminate on the basis of race/sex/religion and thus increases the options available to the employee who was discriminated against. That’s one of the best cases I can think of for the argument that the change is a net positive but I don’t think it’s a watertight case.
In the case of legislation limiting economic activity I think it’s hard to argue that reducing options can ever be an encouragement to innovation and technological progress, although it can potentially redirect it in politically favoured directions. The only economically sound arguments for legislation I’ve seen stem from attempts to internalize negative externalities and while in theory such legislation can be justified, real world examples are often less clearly beneficial.
Agreed, and I wouldn’t have gotten into it on anything but an Open Thread (wherein it seems relatively harmless).
As long as it stays civil it should be harmless, political discussions have a tendency to rapidly degenerate though...
In the case of legislation limiting economic activity I think it’s hard to argue that reducing options can ever be an encouragement to innovation and technological progress, although it can potentially redirect it in politically favoured directions.
Hmmm… spending tax money to directly fund basic research doesn’t count? (If you have to, assume that the people whose money was taxed would have spent it on something generally unrelated to technological progress—say, tobacco cultivation and consumption.)
In theory, eliminating or discouraging options that result in not creating progress should result in more progress...
It would count as an example of redirecting innovation and technological progress in politically favoured directions. I would argue that very little money is spent on something unrelated to technological progress—all industries, the tobacco industry included, drive technological progress in their pursuit of greater profits. The technologies that get developed will tend to be technologies that help satisfy the public’s actual wants and needs rather than those that the political class thinks are more important or those that have the best lobbyists.
I figured you would claim that—please justify it. How does my (contrived, but not impossible) scenario not result in a global net negative in terms of options available? How is someone exercising non-physical power and influence to limit someone else’s options different from physical force in terms of practical end results?
Also note that property rights are backed by threat of force from the state. Does the existence of property rights constitute a net loss in options for society and, if so, would it be better to repeal them?
How does my (contrived, but not impossible) scenario not result in a global net negative in terms of options available?
I didn’t originally claim that governments alone have the power to limit options. My original claim was that legislation cannot globally increase available options. What is unique about states (unique in practice if not in theory) is that their geographical scope and monopoly on the use of force give them vastly greater power to limit options than other entities, power that they have a strong inclination to exercise at every opportunity. I’m not aware of any individuals in history who have had anything approaching the power of a modern state to restrict the options of others without using physical force. It is much easier for the victim in your example to move to another town than it is for most people to escape the reach of states.
I stand by the claim that legislation can only reduce or redistribute options and not create them. I also believe that states are far more capable of meaningfully restricting options than individuals so long as they maintain a monopoly on the use of force. I never intended to claim that non-state actors cannot also restrict options, or that they can never do so without the use of force, and I’m not going to try to defend that straw man.
Does the existence of property rights constitute a net loss in options for society and, if so, would it be better to repeal them?
I believe societies work better with property rights (though I have serious doubts about intellectual property rights being a net benefit). I believe the benefit comes from reducing the number of occasions when conflicts of interest can only be resolved by resorting to violence. If everyone can agree to a framework in advance for resolving disputes without violence then there is a net benefit to be gained. I think it is unclear whether this results in a net loss in options for society since the greater prosperity property rights make possible leads to new options that may not have existed before. Certainly individuals lose options in the short term but a rational agent may make that choice in order to reap the perceived future benefits. For individuals it’s akin to a hedging strategy—giving up some potential gains (stealing from others) in order to reduce the risk of catastrophic losses (being killed by others).
If you want to pursue a similar argument to justify net neutrality legislation (or any piece of proposed new legislation) then I believe you’d need to make a case that the introduction of such legislation would lead to such an improvement in prosperity that it would more than make up for the lost opportunities it prevents. I think that is a difficult case to make for most legislation.
My original claim was that legislation cannot globally increase available options.
Well, no, because you’ve defined options such that a global increase appears to be essentially impossible.
I’m not aware of any individuals in history who have had anything approaching the power of a modern state to restrict the options of others without using physical force.
A large, modern, state such as the USA federal government, yes. Beyond that, a large corporation has more power to restrict people than, say, a small township’s government, including that it’s easier to move to a new town than escape a global corporation. There are a lot more ways to coerce people than physical force; bribery is pretty effective, too.
I never intended to claim that non-state actors cannot also restrict options, or that they can never do so without the use of force, and I’m not going to try to defend that straw man.
So you do agree that by intervening with force to prevent a non-state actor from restricting options, the state can increase global options vs. non-interference?
I believe the benefit comes from reducing the number of occasions when conflicts of interest can only be resolved by resorting to violence. If everyone can agree to a framework in advance for resolving disputes without violence then there is a net benefit to be gained.
Governmental actions, including enforcing property rights, are in the end backed up with threat of violence, as always. You’ve not removed the violence inherent in the system, merely hidden it.
I think it is unclear whether this results in a net loss in options for society since the greater prosperity property rights make possible leads to new options that may not have existed before.
Of course it restricts options—there’s nothing you can do in a society with property rights that wouldn’t also be possible in a society without property rights, it’s just less likely to occur without government-imposed restrictions.
Ergo, an example wherein government acting to reduce individual options has causally led to a greater chance of success and more innovation.
Furthermore, your defense of property rights is pretty much exactly the same logic that defends any government intervention at all. You’ve drawn an arbitrary line, and the fact that plenty of societies far on the wrong side of your line have prospered suggests that it isn’t immediately obvious that your placement of the line is the correct one.
If you want to pursue a similar argument to justify net neutrality legislation (or any piece of proposed new legislation) then I believe you’d need to make a case that the introduction of such legislation would lead to such an improvement in prosperity that it would more than make up for the lost opportunities it prevents. I think that is a difficult case to make for most legislation.
The arguments in favor of net neutrality are well known, and persuasive mostly because:
The telecom market is not a free market in any conceivable way
The telecom companies have a history of not doing a good job
At least one company has floated trial balloons about exactly the sort of absurdity that net neutrality is intended to prevent
The stuff it intends to prevent is antithetical to the design of the internet and pretty objectively bad for anyone who isn’t a bloated, inefficient telecom monopolist, and there’s evidence that the chance of it happening is nontrivial; ergo, the burden of proof is substantially shifted to those arguing that the negative side-effects (e.g., collateral damage to legitimate packet QoS) are bad enough to be not worth it. I’ve yet to see any persuasive arguments along these lines, especially from informed, respected people in the technology field.
The only reason I can see to oppose net neutrality is a (in my opinion, unjustifiably) large prior probability for the proposition “legislation X is ipso facto bad” for all X.
you’ve defined options such that a global increase appears to be essentially impossible.
I don’t believe I’ve defined options in any particularly unusual way. What specifically do you take issue with? There is a sense in which options can globally increase—economic growth and technological progress can globally increase options (giving people the option to do things that were not possible before). Institutions that tend to encourage such progress within a society are valuable. Legislation that limits options requires very compelling evidence that it will encourage such progress to be justified in my opinion—when in doubt, err on the side of not restricting options would be my default position.
Beyond that, a large corporation has more power to restrict people than, say, a small township’s government, including that it’s easier to move to a new town than escape a global corporation. There are a lot more ways to coerce people than physical force; bribery is pretty effective, too.
Bribery is not coercion; it’s an economic exchange. It differs from other economic exchanges in that it generally involves a non-state actor exchanging money or other goods for favourable treatment under the coercive powers of a representative of the state. I cannot think of an example of being restricted by a corporation except when it has acted in concert with the state and has had the backing of the state’s threat of force. I don’t really know what you mean by ‘escaping’ a global corporation—what kind of escape do you have in mind beyond terminating a contract?
So you do agree that by intervening with force to prevent a non-state actor from restricting options, the state can increase global options vs. non-interference?
If a state intervenes with force to prevent the use of force by a non-state actor (the police intervening in a mugging for example) then it is creating an environment that is more conducive to productive economic activity and so allows for a global increase in options. I think the set of actions a non state actor can take to reduce options that do not involve force that the state can beneficially interfere in is either empty or very small though. I’m also not convinced that a state is the only institution that can play this beneficial role, though there are limited historical examples of alternatives.
You’ve drawn an arbitrary line, and the fact that plenty of societies far on the wrong side of your line have prospered suggests that it isn’t immediately obvious that your placement of the line is the correct one.
I’d disagree that the line is arbitrary. It’s certainly less arbitrary than the standard generally applied when deciding what laws to pass. It’s true that it’s not immediately obvious that the line is in the right place. That’s why I consider the large amount of evidence demonstrating greater economic growth and prosperity in societies that are closer to the line to be one of the key insights from modern economics.
Ironically the argument you are making here is almost exactly a mirror image of my argument against net-neutrality legislation. The fact that the Internet exists as it does without any current legislation suggests that it isn’t immediately obvious that your desire to move the line is the correct one. There seems to me to be little evidence that would suggest that in this one special case legislation would be beneficial to outweigh the large amounts of evidence that restrictive legislation is generally a net negative and a barrier to innovation.
This addresses the “options” point but not the “hamper innovation” point. The obvious (but arguable) counter-example to the “hamper innovation” point is patent law, in which the government legislates a property right into existence and gives it to an inventor in exchange for the details of an invention.
ETA: Patent law is said to foster* innovation in two ways—it protects an inventor’s investment of time and energy, and it encourages inventors to make details public, which allows others to build on the work. These others can then patent their own work and obtain licenses to use the previously patented components.
* phrasing weakened after reading reply. Was: “Patent law fosters innovation...”
True, patent law is intended to promote innovation. There’s quite a lot of evidence that it has the opposite effect but I agree it’s not immediately obvious that it doesn’t work and there is not yet a consensus that it is a failure. The standard argument you give in favour of patent law is at least superficially plausible.
I let automatic programs filter most of my spam, and the small trickle that gets through seems a small price to pay for the fact that I can have my creative projects on the Internet for free, without having to pay a premium to eliminate a special opportunity cost for potential readers. According to my stats, they are not utterly valueless wastes of space—I have some people who are willing to invest time in viewing my content—but I don’t doubt for a moment that I’d lose most, if not all, of my audience if they were obliged to pay money (that didn’t even make its way to me, the creator of the content).
People drop or refrain from picking up new sites over very little provocation—I stopped reading Dr. McNinja when I started using an RSS feed instead of bookmarks to read my webcomics. Dr. McNinja didn’t become more inconvenient to read when I made this switch; I could have kept the bookmark—it simply didn’t get more convenient along with everything else. I didn’t care about it quite enough to keep it on my radar when doing so cost just ten seconds of conscious effort three times a week, let alone money and the hassle of providing money over the Internet. I can’t think of any (individual) website that I would pay even a trivial extra amount of money to visit.
The situation you describe is the one that currently exists without any net neutrality legislation though.
I’m suspicious of net neutrality because it uses the threat of imagined or potential future problems to push for more legislation and more government involvement in a market that seems to have worked pretty well without significant regulation so far. This is a general tactic for pushing increased government involvement in many different areas.
The actions that have so far been taken that would be prohibited by net-neutrality legislation mostly seem to be about throttling bittorrent traffic. I’d much rather see a focus on eliminating existing government sponsored monopolies in the provision of broadband access and allow the market to sort out the allocation of bandwidth. I am very doubtful that any kind of legislation will produce an optimal allocation of resources.
The fact that something seems to have worked pretty well without significant regulation so far could mean that it will continue to do so, or it could mean that it’s been lucky and taking no new precautions will cause it to stop working pretty well. I don’t have any antivirus software on my Mac; if more people start finding it an appealing challenge to infect Macs with viruses, though, it would be stupid for me to assume that this will be safe forever. More companies are starting to show interest in behaviors that will lead to biased net access. Regulation will almost certainly not yield optimal allocation of resources; it will, however, prevent certain kinds of abuses and inequalities.
I guess this comes down to politics ultimately. I have more faith that good solutions will be worked out by negotiation between the competing interests (Google tends to counter-balance the cable companies, consumers have options even though they tend to be limited by government sponsored monopolies for broadband provision) than by congress being captured by whoever has the most powerful lobbyists at the time the laws are passed. I take the fact that things are ok at the moment as reasonable evidence that a good solution is possible without legislation. Certainly bad solutions are possible both with and without legislation, I just tend to think they are much more likely with legislation than without.
This may or may not have to do with the fact that I am not paid by the hour. My stipend depends on grading papers and doing adequately in school, but if I can accomplish that in ten hours a week, I don’t get paid any less than if I accomplish it in forty. Time I spend on Less Wrong isn’t time I could be spending earning money, because I have enough on my plate that getting an outside job would be foolish of me.
Also, one cent is not just one cent here. If my computer had a coin slot, I’d probably drop in a penny for lifetime access to Less Wrong. But spending time (not happily) wrestling with the transaction itself, and running the risk that something will go wrong and the access to the site won’t come immediately after the penny has departed from my end, and wasting brainpower trying to decide whether the site is worth a penny when for all I know it could be gone next week or deteriorate tremendously in quality—that would be too big an intrusion, and that’s what it looks like when you have to pay for website access.
Additionally, coughing up any amount of money just to access a site sets up an incentive structure I don’t care for. If people tolerate a pricetag for the main contents of websites—not just extra things like bonus or premium content, or physical objects from Cafépress, or donations as gratitude or charity—then there is less reason not to attach a pricetag. I visit more than enough different websites (thanks to Stumbleupon) to make a difference in my budget over the course of a month if I had to pay a penny each to see them all.
In a nutshell: I can’t trade time alone directly for money; I can’t trade cash alone directly for website access; and I do not wish to universalize the maxim that paying for website access would endorse.
The telecommunications market in the United States is so ridiculously far from an idealized free market in so many ways that I don’t see why you’d expect a libertarian perspective to be insightful.
The only sensible free market perspective on anything related to telecommunications has to pretty much start with “tear down the entire current system and start over”.
Obviously we would do things differently if starting over from scratch, but that isn’t going to happen, and it doesn’t mean that we shouldn’t think about the incremental steps we should take. And I don’t think we should ignore economics when thinking about what those incremental steps should be.
However, the typical libertarian response is roughly “move closer to a free market”, which does not necessarily give better results from all starting points. In the case of a market that is by its nature far from a perfectly competitive one, that’s been heavily distorted by previous government interference, has several entrenched players, &c., there’s every reason to believe that naively reducing regulation will lead toward a local minimum.
Look up Tim Lee from the Cato Institute. I think he has written on this topic. I think the standard libertarian position on net neutrality is that it’s no good. Personally, I don’t have the technical knowledge to really comment, though once, at what might be called a libertarian boot camp, I came up with the slogan “surf at your own speed” in an exercise to come up with promotions for net non-neutrality.
Toward the end (I think) they get into the issue. I can’t remember what Raymond says, but IIRC he takes a non-neutrality position while not sounding like the standard libertarian position. It’s an interesting podcast throughout, however, and you should listen to the whole thing. All of you.
“Thomas Hazlett of George Mason University talks with EconTalk host Russ Roberts about a number of key issues in telecommunications and telecommunication policy including net neutrality, FCC policy, and the state of antitrust. Hazlett argues for an emergent, Hayekian approach to policy toward the internet rather than trying to design it from the top down and for an increased use of exchangeable property rights in allocating spectrum.”
Why do you think it should be free? And why would that offset the inefficiencies inherent in a massive public system?
Is there any evidence that massive systems are efficient no matter who is running them? Government-run utilities don’t seem to do worse, and the USA health care system is demonstrably less efficient than many countries’ public systems.
Can you give an example? All the government run utilities that come to mind are disasters. There are probably examples of government run utilities that are efficient but I’m having trouble thinking of any.
My local town provides unusually good garbage collection service. An attempt by the borough council to save money by hiring private contractors for garbage collection was met by many, many outraged people showing up to the council meeting to protest.
Efficient implies cost-effective: government run garbage collection might be a reasonably high quality service but come at an unreasonably high price. It sounds like cost concerns were the motivation for the change in your example.
Are 3^^^3 dust specks enough dust specks to cause the universe to collapse rather than expand or has the expansion moved beyond even the theoretic influence of gravity?
In the process of reading and thinking and processing all of this new information I keep having questions about random topics. Generally speaking, I start analyzing the questions and either find a suitable answer or a more relevant keystone question. I have often thought about turning these into posts and submitting them for the populace but I really have no idea if the subjects have been broached or solved or whatever. What should I do?
An example would be a continuing the train of thought started in Rationalistic Losing. The next question in that line deals with how to determine which contests are valuable and how to weigh the pros and cons of particular contests. But I have strong suspicions that this subject has been covered or is so blatantly obvious no one would be interested. Should I just post the darn thing and find out then, or should I ping for interest first?
The reason I even ask is that I am very under-versed in LW and OB topics and posts. This leads me to the opinion that I should not submit new posts until after I read everything. The flip-side is that I am going to be thinking about all of these things anyway; what harm is there in typing it up and submitting it? There is potential gain in some feedback and the only potential harm seems to be annoying you. But if the post is annoying, the community can downvote and be done with it.
Are there any strong opinions about this?
The question could be reworded in this form: Should newer members post on what could be potentially obvious, boring, or been-there-done-that topics?
I am going to be thinking about all of these things anyway; what harm is there in typing it up and submitting it? There is potential gain in some feedback and the only potential harm seems to be annoying you. But if the post is annoying, the community can downvote and be done with it.
The ability to solve epistemic problems where motor actions are inhibited, so that all the agent can do is indicate the correct answer, seems like an important subset of the problems a rational agent can face, if only because this skill is less complicated to learn and relatively easy to test.
I believe this skill is usually referred to as “reasoning”. Maybe we should be discussing this subject more than we do.
This is not very topical, but does anyone want to help me come up with a new term paper topic for my course on self-knowledge? My original one got shot down when it turned out that what I was really trying to defend was a hidden assumption that is unrelated to self-knowledge. Any interesting view I can defend or attack on the subject of introspection, reflection, self-awareness, etc. etc. has potential. Recommended reading is appreciated.
I found Strangers to Ourselves an interesting read on this topic. One of his claims is that the best way to come to know yourself is to study how others react to you and to study your own actions rather than relying on introspection which is an interesting perspective.
Someone mentioned Eric Schwitzgebel here recently, and if you didn’t catch it, his papers might be of help. I’m currently reading “The Unreliability of Naïve Introspection”, and it’s very good so far.
I was the person who mentioned Eric Schwitzgebel, and that paper is a reading we were assigned in class. I’d love to write something on him, but the trouble is that I just agree with him; that isn’t enough for a five-page paper, let alone twenty.
Do you read OB as well? Robin Hanson’s posts often cover topics related to self-knowledge (typically how our actions are better explained by such cynical factors as social signaling, and our beliefs are revised in feel-good ways). I’d say just about any of the studies he links could use up a number of pages.
In many eastern philosophies, there are meditational practices which seek to see without ego. On Less Wrong and in other places, I have picked up the idea that autistic people don’t construct stories; they see things as they are.
A topic you could try: when an autistic person looks within himself, there is something present there, not just nothingness.
Such research could be useful in constructing artificial consciousness and uploading applications. Kudos!
On the basis of a brief perusal of that post and the Wikipedia article it links to, I’d say it looks ad hoc and overcomplicated. It’s entirely possible that people working with it have had useful insights, but if the approach turns out to be anything like the best way of doing decision theory under conditions of extreme uncertainty then I’ll be extremely surprised.
(I also looked at the website made by the theory’s chief proponent. It seems to have scarcely any actual mathematical content, and to exist only for publicity; that doesn’t seem like a good sign.)
Oops, I “reported” when I meant to “reply”. (Someone was talking to me and I clicked ‘yes’.) What action need I take to undo?
Seems like a sensible thing would be to report this as well, and the two will cancel out. However, I can’t report myself… can you report this, indicating you have done so with a single downvote, and then I will delete this comment.
I can’t help suspecting that Eliezer’s suggestion that no one should talk about AI, technological singularities, etc., until May was motivated not only by wanting to keep LW undistracted until then, but also by wanting to encourage discussion of those topics afterwards. (He has been quite open about his hope that all his writings on rationality, etc., will have the effect of increasing support for SIAI.)
Whether that’s a reason for talking about those topics (yay, Eliezer will be pleased, and Eliezer is a good chap) or a reason for not talking about them (eeew, he’s trying to manipulate us) will doubtless vary from person to person.
I think it was so that newcomers wouldn’t think that LW is a bunch of fringe technophiles who just want to have their cause associated with rationality.
But that’s pretty much what LW is, no? I’ve long suspected that “rationality,” as discussed here, was a bit of a ruse designed to insinuate a (misleading) necessary connection between being rational and supporting transhumanist ideals.
But that’s pretty much what Christianity is, no? I’ve long suspected that “faith,” as discussed there, was a bit of a ruse designed to insinuate a necessary connection between eating other animals and supporting childish ideals.
I just want to say thanks for all of the minor UI improvements over the past few weeks. Whoever helped, it is much appreciated.
I’m particularly grateful for the reply notifications! No more tedious checking every visit, lest I appear rude.
So, Lojban is getting actively obnoxious, ignoring the informal kinds of feedback. What’s to be done in cases like this?
It appears that Karma cannot be negative. There also does not appear to be a way to ignore comments from users below a certain Karma threshold. The ability to set a Karma threshold together with the ability for Karma to be negative would seem useful for this kind of situation. Setting a Karma threshold of 0 would not be ideal as it would block comments from new users as well as trolls.
I’d just like to say as an occasional Lojbanist that the LW user “Lojban” does not speak for us.
Also of note, I do not speak for chickens.
With relevance to the subject, I say the thing to do is downvote obnoxious comments and then stop reading them. Also, do not reply. This is standard behavior when dealing with trolls. If someone is annoying you (a) talk to an admin or (b) ignore them.
I gave him the benefit of the doubt even though the voluntary castration sounds so crazy; but the absurdity heuristic is there for a reason, and maybe I gave him too much credit simply for being on LW.
It’s not that part that’s trolling. Look at his recent history.
Someone like this is just griefing, and is impervious to downvoting. I think we can ban such trolls as this without any danger of evaporative cooling. (Um, is there a moderator with the banhammer?)
And I don’t speak for the military.
(Why the hell do I keep using a username I picked when I was 12, anyway?)
I know: I am honored to include among my friends Robin Lee Powell and Matt Arnold.
LW doesn’t register negative karma right now. It should.
Sufficiently negative karma should enforce a maximum posting rate.
Of course, that just leads to getting lots of accounts. Not sure how to deal with that.
I would like some clarification on “LW doesn’t register negative karma right now.” Does that mean
my negative points are GONE, or
they are hiding and still need to be paid off before I can get a positive score?
Thanks
I believe they stick around invisibly. Your karma should always be dynamically the sum of upvotes and downvotes you’ve received.
Thanks for the clarification.
I guess I won’t be posting articles to Less Wrong, as I have no clue what I’m doing wrong such that I get more downvotes than upvotes.
They are hiding. To see your actual karma score, try to down-vote me and a little message will appear.
The worst punishment is an IP ban, which is only really helpful if the banned user is not smart enough to switch locations. Generally speaking, annoying people go away if you ignore them. Thanks to karma, comments will be censored out automatically once their score passes your threshold.
So much of this emerging art seems to be about how to get ourselves to actually update our beliefs and actions in response to evidence, rather than just going around in circles doing what we’re used to doing. In the quantum physics sequence, Eliezer told a story about aliens called the Ebborians, who “think faster than humans {and} are less susceptible to habit; they recompute what we would cache.” Following the discussion of individual differences on the path towards rationality, I wonder: would an Ebborian art of rationality be about how to update less, how to maintain true beliefs and effective actions, rather than thrashing about wildly, doing whatever seems like a good idea at the moment? Or is this idle speculation too ill-formed to be worth discussing?
I notice that there doesn’t seem to be any way to have a Less Wrong profile page, or at least a link that people can click to learn more about you. As it is, no matter what you post on Less Wrong, it’s going to get buried eventually.
Excellent diagrams-next-to-equations explanation of Bayes’ Theorem (except that I think diagrams with rectangles would be more visually accurate).
Suppose that I live on a holodeck but don’t know it, such that anything I look at closely follows reductionist laws, but things farther away only follow high-level approximations, with some sort of intelligence checking the approximations to make sure I never notice an inconsistency. Call this the holodeck hypothesis. Suppose I assign this hypothesis probability 10^-4.
Now suppose I buy one lottery ticket, for the first time in my life, costing $1 with a potential payoff of $10^7 with probability 10^-8. If the holodeck hypothesis is false, then the expected value of this is $10^7 × 10^-8 − $1 = −$0.90. However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3. (This only applies to the first ticket, since someone who would rig the lottery in this way would be most likely to do so on their first chance, not a later chance.) In that case, the expected payoff is $10^7 × 10^-3 − $1 ≈ $10^4. Combining these two cases, the expected payoff for buying a lottery ticket is +$0.10.
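For what it’s worth, the arithmetic checks out. A minimal sketch in Python, using the numbers above (the 10^-3 “niceness” probability being the assumption doing all the work):

```python
p_holodeck = 1e-4      # prior on the holodeck hypothesis
p_win_normal = 1e-8    # chance the ticket wins if physics is as it seems
p_win_holodeck = 1e-3  # assumed chance if someone outside is being nice
payoff, cost = 1e7, 1.0

ev_normal = p_win_normal * payoff - cost      # -0.90
ev_holodeck = p_win_holodeck * payoff - cost  # 9999.0, i.e. ~10^4
ev_total = (1 - p_holodeck) * ev_normal + p_holodeck * ev_holodeck
print(round(ev_total, 2))                     # ~ +0.10
```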
At some point in the future, if there is a singularity, it seems likely that people will be born for whom the holodeck hypothesis is true. If that happens, then the probability estimate will go way up, and so the expected payoff from buying lottery tickets will go up, too. This seems like a strong argument for buying exactly one lottery ticket in your lifetime.
Not my only objection, but:
If the hololords smiled upon you, why would they even need you to buy a lottery ticket? How improbable is it that they not only want to help you, but they want to help you in this very specific way and in no other obvious way?
I don’t think this argument holds up. Suppose the holodeck hypothesis is true and someone outside the simulation decides to punish irrational choices by killing you if you buy a lottery ticket. The probability of you being killed is around 10^-3 so you should never risk buying a lottery ticket.
The problem is that you’ve no reasonable basis for assigning your 10^-3 probability for a good outcome rather than a bad outcome or a batshit insane outcome. You also have no basis for your 10^-4 probability of being in a holodeck. The only rational way to behave is to act as if you’re not in a holodeck (or a world with an occasional interventionist god who does his damnedest not to ever leave clear proof of his interventions, or a simulation run by aliens, or The Matrix), because you have no basis on which to assign probabilities otherwise. This changes, of course, if you are confronted with evidence that implies a greater likelihood of one of these holodeck hypotheses.
OK, what if we accept the simulation hypothesis, and we further say that the advanced civilizations are simulating their ancestors? Then we’d expect our simulators to be evolved or derived from humans in some way. It’s unlikely we’ll change our ideas of fun and entertainment too much, as those are terminal values—we value them for their own sake. This gives us some pretty strong priors, based on just what we know of current human players...
I’m not sure how to interpret that though. Based on players of the Sims, say, we would expect tons of either sadistic deaths by fire in bathrooms and starvation in the kitchen, or people with perfectly lovely lives and great material success. Since we don’t observe many of the former, that’d suggest that if we’re in a simulation, it’s not being run by human-like entities who intervene.
This is exactly like the old joke about the guy who prays fervently for years that God let him win the lottery; finally, a booming voice comes down “At least meet me halfway: buy a ticket!”
When exactly will the probability estimate go way up? Someone living in the holodeck obviously doesn’t know whether they are living “in the future” or not. The probability has to be calculated from the inside, so I don’t see how it would ever change.
The probability of living in a holodeck is P(it is possible to build a holodeck) × P(you are in a holodeck | it is possible to build a holodeck). If you ever see or hear about a holodeck being built, then the first term becomes 1.
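Spelled out with an illustrative split (the 10^-2 factors below are my own assumption for concreteness; the parent comments only give the 10^-4 total):

$$P(\text{holodeck}) = P(\text{possible}) \times P(\text{in one} \mid \text{possible}) = 10^{-2} \times 10^{-2} = 10^{-4}$$

Actually seeing a holodeck built sends the first factor to 1, multiplying the overall estimate by 1/P(possible), here a hundredfold, which is one concrete sense in which the estimate can “go way up” even when calculated from the inside.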
However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3.
Um, what?
I know this is a blog—and not a journal—but if post authors could start with a summary of their main point, it might help. Like an abstract. Say what you are going to say, then say it, then summarise.
I tend to give up on posts if I don’t see that the topic is one which is of interest to me in the first paragraph.
I thought about replying to this anecdote about sleep with an anecdote of my own, but decided that it’ll only add noise to the discussion. At the same time, I caught a heuristic that I would possibly follow had I not caught it, and that I now recall I did follow on many occasions: if someone else already replied, I’d join in.
It’s a variety of conformity bias, and it sounds dangerous: it turns out that sometimes, only two people agreeing with each other is enough to change my decision, independently of whether their agreeing with each other gives any evidence to change my own mind. This is a powerful force that threatens to irrationally synchronize people in a group even where influence is not strong, and the group is far from being on the way towards an affective death spiral.
The dearth of posts in the past couple of days has led me to wonder whether there is a correlation between the length of time a post spends as the most recent post (or most recent promoted post), and the magnitude of its score. Or rather, I expect there to be such a correlation, but I wonder how strong it is.
I just lost about 50 karma in the space of about an hour, with pretty much all of my most recent comments being voted down. I recall others mentioning they’ve had similar experiences, and was wondering how widespread this sort of thing actually is. Does this happen to others often? I can imagine circumstances under which it could legitimately occur, but it seems a bit odd, to say the least.
Yeah, that’s getting to be a problem around here. We could at least make it more time-consuming for the griefers by removing the ability to upvote or downvote on the user pages, so that you’d have to click on the permalink for each comment first. I don’t think that would be too much of a hassle for those of us who are using the karma system properly...
While I disagree with rhollerith below, I also sometimes downvote (or upvote) from the user page (though less frequently than from ‘recent comments’). Sometimes I read a comment so obscure (or so brilliant) that I feel the need to look back and see if this person has made a lot of comparable comments that have been neglected.
Of course, I haven’t been able to downvote at all for some time.
I, too, sometimes read from a user’s user-page and up- or down-vote comments according to my impressions of their individual quality.
This wouldn’t create the −50 karma jump conchis describes, at least the way I do it, but it is evidence that orthonormal’s suggestion would have some costs.
conchis, I have been reading your comments for at least 12 months on Overcoming Bias and have accumulated no negative feeling or opinion about you, so please do not think that what I am going to say is directed at you.
I have been thinking of adopting this strategy of occasionally giving a participant 20 or 30 or so downvotes all at once, rather than frequently giving a comment a single downvote, because I judge moderation of comment-writers (used, e.g., on SL4 back before 2005 and again in recent months, when a List Sniper has been active, during which times SL4 has been IMHO very high quality) to work better than moderation of comments (used, e.g., on Slashdot, Reddit and Hacker News, which are IMHO of lower quality).
So, I would like people to consider the possibility that downvoting of 20 or 30 of the comments of one comment-maker in one go should not be regarded as an abuse or an improper use of this web site unless of course it is done for a bad reason.
I hereby withdraw this comment because the responses to this comment have made me realize that it is destructive to downvote a comment without regard to the worthwhileness or quality of that particular comment.
That makes sense, but shifting the negative feedback to a time other than when the comment was made makes it extremely hard for the commenter to improve. If that is not one of your goals, fair enough.
A commenter’s karma means nothing when reading this site, since that karma is not displayed, while the karma of a comment means everything. Disagreeing with the way a site uses karma makes sense, but trying to implement a better system by ignoring the purpose of the implemented system is not particularly useful.
Now, if you have the habit of reading through someone’s comments all at one time and judging each comment for its own value, then what I said here is mostly irrelevant.
No, that’s not what I have been contemplating.
“A commenter’s karma means nothing,” is a bit of an overstatement because you need 20 karma to post. Also, most commenters are probably aware of changes in their karma. And if I reduce a person’s karma by 20 or 30 points, I would send him a private message to explain.
What I propose reduces the informativeness of a comment’s point score but more-or-less maintains the informativeness of a commenter’s karma. If enough voters come to do what I contemplate doing, or if enough well-regarded participants announce their intention to do what I contemplate doing, then the maintainers of the site will adjust to the reduction in the informativeness of a comment’s point score by focusing more of their site-improvement efforts on a commenter’s karma. Note that those site-improvement efforts will tend to make effective use of the information created by the voters who follow the original policy of voting on individual comments (as well as the information created by voters who vote the way I contemplate voting).
What is it that causes you to believe a commenter should be penalized 20 or 30 karma points at a time? If it is that they make a lot of worthless comments, then you have no shortage of comments to downvote, and there is no need to downvote their comments indiscriminately. If it is that they made an exceptionally worthless comment, it is my experience that these get downvoted pretty fast by many people, so they will still lose a lot of karma even though you only contribute one point to their loss.
In short, I don’t see what you gain by this strategy that justifies the decrease in correlation between a comment’s displayed karma score and the value the community assigns it, which occurs when you downvote a comment not because it is a problem, but because the author has written other comments that are a problem. If a normally good contributor has a bad day and makes some bad comments, it does not make sense to devalue their previous high-quality comments.
I concur; the points-values attached to individual comments have a larger impact on what LW-readers see than do users’ karma values, and are therefore more important to retain as accurate indicators of comment quality.
If a particular user has a pattern of making comments that impair LW in a particular way, you might explicitly comment on this, rhollerith, with detailed, concrete language describing what the pattern is, what specific comments fit that pattern, and why it may impair LW conversation. You could do this by public comment or private message. This has the following advantages over blanket user-downvoting:
It does not impair quality-indicators on the user’s other comments;
The user can understand where you are coming from, and so can integrate information instead of just finding it unfair;
It publicly states community norms (in the public-message version), and so may help others of us retool our comments in more useful ways (as well as making us less likely to feel there are random invisible grudges disrupting LW karma);
If you are mistaken about what is and is not useful, others can respond by explicitly sharing conflicting impressions.
ETA: My comment here was slightly mis-directed, in that Hollerith above said he would send the user a message explaining his reasoning.
JGWeissman writes, “I don’t see what you gain by this strategy that justifies the decrease in correlation between a comment’s displayed karma score and the value the community assigns it, which occurs when you downvote a comment not because it is a problem, but because the author has written other comments that are a problem.”
Vladimir Nesov writes, “If you are downvoting indiscriminately, not separating the better comments from the worse ones, without even bothering to understand them, you are abusing the system.”
Anna writes, “This has the following advantages over blanket user-downvoting: . . . It does not impair quality-indicators on the user’s other comments”
The objection is valid. I retract my proposal and will say so in an addendum to my original comment.
The problem with my proposal is the part where the voter goes to a commenter’s lesswrong.com/user/ page and votes down 20 or 30 or so comments in a row. That dilutes or cancels out useful information, namely, votes from those who used the system the way it was intended.
If there were a way for a voter to reduce the karma of a person without reducing the point-score of any substantive comment, then my proposal might still have value, but without that, my proposal will have a destructive effect on the community, so of course I withdraw my proposal.
True, but if the behavior of the voters changes so that the former becomes less informative, the site will tend to change so that the user’s karma will come to have a larger impact. In competent software development, changes in people’s behavior will cause major changes in the software more often than changes in the software will cause major changes in the behavior of people. (Consequently, assuming the software developers are competent, most changes to the system are best initiated as changes to behavior rather than changes to the software—and if the software developers are not competent, then the site is probably doomed anyway.) Or so it seems to me.
A normally good contributor’s having a bad day is not going to be enough to trigger any downvoting of any of his comments under the policy I contemplate. The policy I contemplate makes use of a general skill that I hypothesize most participants on this site have: the ability to reserve judgement on someone till one has seen at least a dozen communications from that person, and then to make a determination as to whether the person is worth continuing to pay attention to.
The people who have the most to contribute to a site like this are very busy. As Eliezer has written recently on this site, all that is needed for this site to die is for these busy people to get discouraged because they see that the contributions of the worthwhile people are difficult to find among the contributions of the people who are not worth reading—and I stress that the people who are not worth reading often have a lot of free time, which they use to generate many contributions.
Well, the voting is supposed to be the main way that the worthwhile contributions “float to the top” or float to where people are more likely to see them than to see the mediocre contributions. But that only works if the people who can distinguish a worthwhile contribution from a mediocre contribution bother to vote. So let us consider whether they do. For example, have Patri Friedman or Shane Legg bothered to vote? They have both made a few comments here. But they are both very busy people. I’ll send them both emails, referencing this conversation and asking them if they remember actually voting on comments here, and report back to y’all. (Eliezer is not a good person to ask in this regard because he has a big investment in the idea that a social web site based on voting will win, so of course he has been voting on the contributions here.)
The highest-scoring comment I know of is Shane Legg’s description of an anti-procrastination technique, which currently has 16 points. But there are thousands of readers of this site. Now it is possible that a lot more readers of Shane’s comment would have voted it up if it did not already have a high score, but I humbly suggest that it is more likely that only one or two or three percent of the readers of a comment would have bothered to vote on the comment regardless of its score.
Whether this site lives or dies seems to depend on the frequency with which the people who can tell a worthwhile comment from a non-worthwhile comment bother to vote. But like I said, these people tend to be very busy.
Hence my suggestion of adopting a policy of voting on commenters rather than comments—because that is going to save some of the busy person’s time.
There is a strong ethic in American society (and probably in other societies) that it is contributions and not individuals that should be judged. Well, I humbly suggest that since being able to contribute comments and posts here is not a basic human need, like housing or education or the opportunity to compete on an equal footing with other workers for income, the application of that admirable ethic to the decision of who gets to comment and post here is not worth the risk of this site’s going downhill to the point where the people who could have carried the site decide it is not worth the time out of their busy lives.
EDIT. If no other participants on this site declare their intention to use commenter-based voting, then I probably will not use commenter-based voting either because of what the economists call network effects. The only reason I suggested it in the first place is that conchis’s comment is not the first time someone here has indicated that voters other than me are already using commenter-based voting.
EDIT. I have backed down from the whole idea of downvoting many comments in one go. I do not delete this comment only because someone already replied to it.
If you are basing your judgment on at least a dozen communications from the commenter, then, as I explained, you have already identified plenty of comments that should be downvoted. If you base your decision on seeing that 10 of the 12 observed comments are problems, then you can dock the bad commenter 10 points. And if you are right, you will not be the only one. You do not need to personally dock the user 20 or 30 points.
If a person has excellent judgment to distinguish which comments should be upvoted and which should be downvoted, but does not have the time to actually use that judgment, then that person is not going to be a successful protector of this site. Either take the time to do it right, or leave it to those who have both the good judgment and the time, of whom there seem to be plenty, given that the system is working.
If you are downvoting indiscriminately, not separating the better comments from the worse ones, without even bothering to understand them, you are abusing the system.
Do people have any thoughts on how to fund research rationally?
Charities? Futarchical government?
Earn a lot of money. Fund what you consider important.
I have no brilliant ideas on solving the cooperation problem but suspect if you start then others may follow your example.
So far we have the following buttons below a comment:
Vote up
Vote down
Permalink
Report
Reply
What would you think of another button
Linked From
which would take you to a page that listed all the comments that had a permalink to that comment?
There’s a lot of permalinking to past comments, but hardly any linking to future comments. I think this is just something that hasn’t evolved yet—once it does, we will find it very natural.
Would it be difficult to implement? It would involve keeping a library of all permalinks used in comments, and updating the comment’s “Linked From” page as often as it is linked to. Preferably, the button would only appear if the comment was linked to somewhere, so as not to waste time checking if the comment had been linked.
Indeed this would be difficult to implement, and furthermore I’m not convinced it’s the best choice from a UI perspective.
It would be a bit of work even to find out when a comment linked to another comment, and notify it.
This sort of thing is why Tim Berners-Lee wanted hyperlinks to be two-directional in the first place, but there are a lot of good reasons why they’re not.
This would not be that hard to implement. Each comment could have a list of linking comments. Then whenever a comment links to another (which is not hard to detect, as links are already parsed to support the comment markup), the linked comment’s list can be updated.
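For illustration, a hypothetical sketch of that bookkeeping in Python (the permalink format, function name, and in-memory storage are all made up; the real codebase would presumably persist this in its database):

```python
import re
from collections import defaultdict

# Hypothetical permalink format; the real pattern would match the site's URLs.
PERMALINK = re.compile(r"lesswrong\.com/lw/\w+/[\w-]+/(\w+)")

linked_from = defaultdict(list)  # target comment id -> ids of comments linking to it

def register_comment(comment_id, body):
    """Scan a newly posted comment for permalinks and record the backlinks."""
    for target_id in PERMALINK.findall(body):
        linked_from[target_id].append(comment_id)

register_comment("c42", "As argued at lesswrong.com/lw/ab/some_post/c17, ...")
print(linked_from["c17"])  # ['c42']
```

The “Linked From” page for a comment would then just render its list, and the button could be hidden whenever the list is empty, as suggested above.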
To make a good decision, it’s not necessary to be a good thinker. If you’re wise enough to defer to someone who is a good thinker, that also works. And if you’re wise enough to defer to someone who is wise enough to defer to someone (repeat N times) who is a good thinker, that also works. That suggests to me the hopeful thought that in a population of agents with varying rationality, a small change can cause a phase transition where the system goes from a very incompetent agent making the decisions to a very competent agent making the decisions. One might call this an “authority cascade”. Do you agree? Discuss, etc.
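As a toy illustration of the claimed phase transition (the model below is entirely my own assumption, purely for intuition): if the effective decision quality is whatever sits at the end of the deferral chain, then rewiring a single link can swap the effective decision-maker wholesale.

```python
def final_authority(agent, defers_to):
    """Follow an agent's chain of deferrals to whoever actually decides."""
    seen = set()
    while agent in defers_to and agent not in seen:
        seen.add(agent)
        agent = defers_to[agent]
    return agent

rationality = {"A": 0.2, "B": 0.3, "C": 0.9}  # assumed competence scores

# The decision-maker A defers to B, who decides alone: effective quality 0.3.
defers_to = {"A": "B"}
print(rationality[final_authority("A", defers_to)])  # 0.3

# One small change (B now defers to C) and the system's effective competence
# jumps to 0.9, without any individual becoming a better thinker.
defers_to["B"] = "C"
print(rationality[final_authority("A", defers_to)])  # 0.9
```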
The major problem with this approach is that you’d need to ask the good thinker about each particular decision of this kind, taking into account the context in which you are making it. Only when you have a good understanding of the world yourself can you make good custom decisions for every situation.
Textbook decisions only work where textbook writers know in advance what questions they need to address.
And then, of course, there is a task of making sure this system doesn’t turn into an echo chamber.
Meta: Given the rate at which new comments appear, I wish the comments feed http://lesswrong.com/comments/.rss contained more than 20 entries; say closer to 200. Also, all of the feeds I’ve looked at (front page, all new, comments) have the identical title “lesswrong: What’s new”, which is useless for distinguishing them.
I was poking around the older posts and found So you say you’re an altruist.… Without intending to open the whole discussion up again, I want to talk about the first example.
My instinctive response is that the eleventh man is not at all responsible for the deaths of anyone, regardless of his choice. He is not murdering or saving anyone. The evil king is doing the murdering (and no saving). Why, in this scenario, do people place moral responsibility on the eleventh man?
This example could be rephrased into a hostage situation. If the evil king was holding ten people hostage and demanded $100 or he would kill them all, I wouldn’t give him a dime. Does that make me an evil person? Even if I was absolutely sure that the king was not lying, why should I pay him?
This all changes if I have a more personal investment in the people. If my loved ones were at risk I would probably pay the $100 and sacrifice my life, even though I believe I am not morally responsible for the result. I apparently have more investment in the outcome.
Does this mean I am a terrible person? In my opinion the eleventh man should choose himself. When the other ten die, anyone who blames the eleventh is foolish.
Do you really mean “should” in the sense that it’s morally better to choose oneself? If so, could you provide some justification for that? My view would be that saving the 10 isn’t morally required but would be virtuous in a supererogatory sense.
In quick bullet points:
The eleventh man is not morally responsible for anyone dying, regardless of what he chooses
It is better to live than die
I see the king’s question as simply: “do you want to live?”
The answer is as simple as, “yes.”
The cost of the eleventh man living is not paid by himself; it is paid by the king in the form of a moral choice.
Also, to be clear, the man cannot sacrifice himself. He does not kill himself; he is not sacrificing anything; his life is not actually his to sacrifice. The king can kill him no matter what the answer is. The way I look at it, this scenario is exactly the same as the king making the eleventh man guess a random number correctly or else everyone dies. The man has no power over the situation. Any power is an illusion because all of the power is the king’s.
Likewise, the man cannot save anyone. The ten are not his to save. The king decides who lives and dies.
I agree that the 11th man has no moral obligation in this scenario and I wouldn’t consider him to have acted immorally to choose his own life. I think this is the kind of scenario where people will signal altruism by costlessly saying they would sacrifice themselves in the thought experiment but that far fewer people would sacrifice themselves when faced with the actual choice.
I suspect that people will tend to look negatively on those who fail to signal altruism by saying they would sacrifice themselves but would be more forgiving of someone who was faced with the choice in reality. I think they would consider an individual who had actually made such a choice to have made a morally weak but understandable choice and would not consider him deserving of punishment.
You two seem to be making slightly different points here. Matt, I take it you accept that there is some reason to sacrifice yourself (not doing so would be “morally weak”) but that failing to do so would not be blameworthy. That sounds like a fairly mainstream view. In contrast, MrHen seems to be making the stronger claim that there is no reason to save the others at all (unless he has a personal investment in said others).
The idea that [the King is responsible for the deaths] screens off the possibility that [the 11th man is responsible for the deaths] seems to be a version of the single-true-cause fallacy. Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
If you want to prioritize your own life over theirs then you are free to do so, but I think you should own up to the fact that that’s ultimately what you’re doing. Disclaiming responsibility entirely seems like a convenient excuse designed to let you get what you want without having to feel bad about it.
I have to read the single-true-cause fallacy before I can fully reply, but here is a quick ditty to munch on until then:
I disagree with this. The eleventh’s choice is completely irrelevant. The king has a decision to make and just because he makes it the same every single time does not mean the actual decision is different the next time around.
The similar example where the king puts a gun in the eleventh’s hand and says “kill them or I kill you” is when the choice actually becomes the eleventh’s. In this scenario, the eleventh man has to choose to (a) kill the ten or (b) not kill the ten. This is a moral decision.
Of note, whoever actually has to kill the ten has this choice and will probably choose the selfish route. If the king shares the blame with anyone, it will be whoever actually kills the ten. If the eleventh is morally responsible, then everyone else watching the event is morally responsible, too.
I don’t understand what coherent theory of causation could make this statement true.
If they could stop it, then yes, they are.
The issue is not causality. The issue is moral responsibility. If I go postal and start shooting people as they run past my house and later tell the police that it was because my neighbor pissed me off, the neighbor may have been one (of many) causes but should not be held morally responsible for my actions.
Likewise, if the king asks someone a question and, in response, kills ten people, I do not see how the question asked makes any difference in the assignment of moral responsibility.
Causality does not imply moral responsibility.
Also, having read the link you gave earlier, I can now comment on this:
“Responsible” has two meanings. The first is a cause-effect sense of “these actions led to these other actions.” This is the same as saying a bowling ball is responsible for the bowling pins falling over.
The other is a moral judgement stating “this person should be held accountable for this evil.” The bowling ball holds no moral responsibility because it was thrown by a bowler.
I am not claiming that the eleventh man was not part of the causal chain that resulted in ten people dying. I am claiming that the eleventh man holds no moral responsibility for the ten people dying. I am not trying to say that the king is the single-true-cause. I am claiming that the king is the one who should be held morally responsible.
To belabor this point with one more example: if I rigged a door to blow up when opened, and Jack opened the door while standing next to Jill, they are both reduced to goo. Jack is causally responsible for what happened because he opened the door. He is not, however, morally responsible.
The question of when someone does become morally responsible is tricky, and I do not have a good example of where I think the line is crossed. I do not, however, pass any blame on the eleventh man for answering a question to which there is no correct answer.
Agreed. But I think if you want to separate the two, you need a reasonable account of the distinction. One plausible account relies on reasonably foreseeable consequences to ground responsibility, and this is pretty much my view. It accounts easily for the neighbor, bowling ball, and Jack and Jill cases, but still implies responsibility for the 11th man.
I can accept a view that says that, all things considered, the king has a greater causal influence on the outcome of the 11th man case, and thus bears much greater moral responsibility for it than does the 11th man. But (and this was the point of the no-single-true-cause analogy) I see no reason why this should imply that the 11th man has no responsibility whatsoever, given that the death of 10 innocent others is a clearly foreseeable consequence of his choice.
I still think this is a convenient conclusion designed to let you be selfish without feeling like you’re doing anything wrong.
P.S. FWIW, yes I pretty much do think you’re evil if you’re not willing to sacrifice $100 to save 10 lives in your hostage example. I can understand not being willing to die, even if I think it would be morally better to sacrifice oneself. (And I readily confess that it’s possible that I would take the morally wrong/weak choice if actually faced with this situation.) But for $100 I wouldn’t hesitate.
I can understand that. I have not dug quite so deeply into this area of my ethical map so it could be representing the territory poorly. What little mental exercises I have done have led me to this point.
I guess the example that really puts me in a pickle is asking what would happen if Jack knew the door was rigged but opened it anyway. It makes sense that Jack shares the blame. There seems to be something in me that says the physical action weighs against Jack.
So, if I had to write it up quickly:
Being a physical cause in a chain of events that leads to harm
While knowing the physical action has a high likelihood of leading to harm
Is evil
But, on the other hand:
Being a non-physical cause in a chain of events that leads to harm
While knowing the non-physical action has a high likelihood of leading to harm
Is not necessarily evil but can be sometimes
Weird. That sure seems like an inconsistency to me. Looks like I need to get the mapmaking tools out. The stickiness of the eleventh man is that the king is another moral entity and the king somehow shrouds the eleventh from actually making a moral choice. But I do not have justification for that distinction.
There may yet be justification, but working backwards is not proper. Once I get the whole thing worked out I will report what I find, if you are interested.
Good luck with the map-making! I’d certainly be interested to know what you find, if and when you find it.
My use of the phrase ‘morally weak’ was to describe how I think many/most people would view the choice, not my own personal judgement. I agree with MrHen that the 11th man’s choice is not morally wrong. I was contrasting that with what I think would be the mainstream view that the choice is morally wrong but understandable and not deserving of punishment.
To me this is similar to the trolley problems, where you are supposed to choose between taking action and killing one person to save 10, or taking no action and allowing the 10 to die. The one person to be sacrificed is yourself, however. I wouldn’t kill the one to save the 10 either (although I view that as more morally wrong than sacrificing yourself). I also generally place much lower moral weight on harm caused by inaction than on harm caused by action, and the forced-choice scenario here presents the 11th man with a situation that I think is similar to one of causing harm by inaction.
Sorry, my bad. Thanks for clearing that up.
As to the act-omission distinction, it would be simple enough to stipulate that the default option is that you die unless you tell the king to kill the other ten. Does this change your willingness to die?
No, that wouldn’t change my decision. It’s the not-sacrificing-your-life that I’m comparing with causing harm by inaction (the inaction being the not-sacrificing) rather than anything specific about the way the question is phrased.
The agency of the king does make a relevant difference in this scenario in my view. It is not exactly equivalent to a scenario where you could sacrifice your life to save 10 people from a fire or car crash. Although I don’t think there is a moral obligation in that case either I do consider the difference morally relevant.
Suppose the king has 10 people prepared to be hung. They are in the gallows with nooses around their neck, standing on a trap door. The king shows you a lever that will open the trap door, and kill the 10 victims. The king informs you that if you do not pull the lever within one hour, the 10 people will be freed and you will be executed.
Here the king has set up the situation, but you will be the last sentient being capable of moral reasoning in the causal chain that kills 10 people. Is your conclusion different in this scenario?
The king here is more diabolical and the scenario you describe is more traumatic. I believe it does change the intuitive moral response to the scenario. I don’t believe it changes my conclusion of the morality of the act. I feel that I’d still direct my moral outrage at the king and absolve the 11th man of moral responsibility.
This is where these kinds of artificial moral thought experiments start to break down though. In real situations analogous to this I believe the uncertainty in the outcomes of various actions (together with other unspecified details of the situation) would overwhelm the ‘pure’ decision made on the basis of the thought experiment. I’m unconvinced of the value of such intuition pumps in enhancing understanding of a problem.
Why is this where the thought experiments suddenly start to break down? Sure, it’s a less convenient world for you, but I don’t see why it’s any more artificial than the original problem, and you didn’t seem to take issue with that.
I have taken issue with the use of thought experiments generally in previous comments, partly because it seems to me that they start to break down rapidly when pushed further into ‘least convenient world’ territory. I’m skeptical in general of the value of thought experiments in revealing philosophical truths of any kind, ethical or otherwise. They are often designed by construction to trigger intuitive judgements based on scenarios so far from actual experience that those judgements are rendered highly untrustworthy.
I answered the original question to say that yes, I did agree that the 11th man was not acting immorally here. I suspect this particular thought experiment is constructed as an intuition pump to generate the opposite conclusion, and to the extent that the first commenter is correct that the view that the 11th man has done nothing immoral is a minority position, it would seem it serves its purpose.
I’ve attempted to explain why I think the intuition that this is morally questionable is generated and why I think it’s not to be fully trusted. I don’t intend to endorse the use of such thought experiments as a good method for examining moral questions though.
Fair enough. It was mainly the appearance of motivated stopping that I was concerned with.
While I share some general concerns about the reliability of thought experiments, in the absence of a better alternative, the question doesn’t seem to be whether we use them or not, but how we can make best use of them despite their potential flaws.
In order to answer that question, it seems like we might need a better theory of when they’re especially likely to be poor guides than we currently have. It’s not obvious, for example, that their information content increases monotonically in realism. Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.*
As well as trying to frame scenarios in ways that reduce noise/bias in our intuitions, we can also try to correct for the effect of known biases. A good example would be adjusting for scope insensitivity. But we need to be careful about coming up with just-so stories to explain away intuitions we disagree with. E.g. you claim that the altruist intuition is merely a low-cost signal; I claim that the converse is merely self-serving rationalization. Both of these seem like potentially good examples of confirmation bias at work.
Finally, it’s worth bearing in mind that, to the extent that our main concern is that thought experiments provide noisy (rather than biased) data, this could suggest that the solution is more thought experiments rather than fewer (for standard statistical reasons).
* And even if information content did increase with realism, realism doesn’t seem to correspond in any simple way to convenience (as your comments seem to imply). Not least because convenience is a function of one’s favourite theory as much as it is a function of the postulated scenario.
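To spell out the “standard statistical reasons”: if each thought experiment is modeled (as I am assuming here) as a noisy but unbiased reading of the underlying judgement, with spread $\sigma$, then averaging $n$ of them shrinks the standard error as

$$\mathrm{SE}(\bar{X}_n) = \frac{\sigma}{\sqrt{n}}$$

so quadrupling the number of scenarios halves the noise, though it does nothing about any shared bias.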
I would be interested in hearing more on this subject. It sounds similar to Hardened Problems Make Brittle Models. Do you have any good jumping-off points for further reading?
I don’t, but I’d second the call for any good suggestions.
I don’t consider moral intuitions simple at all though. In fact, in the case of morality I have a suspicion that trying to apply principles derived from simple thought experiments to making moral decisions is likely to produce results roughly as good as trying to catch a baseball by doing differential equations with a pencil. It seems fairly clear to me that our moral intuitions have been carefully honed by evolution to be effective at achieving a purpose (which has nothing much to do with an abstract concept of ‘good’) and when a simplified line of reasoning leads to a conflict with moral intuitions I tend to trust the intuitions more than the reasoning.
There seem to be cases where moral intuitions are maladapted to the modern world and result in decisions that appear sub-optimal, either because they directly conflict with other moral intuitions or because they tend to lead to outcomes that are worse for all parties. I place the evidentiary bar quite high in these cases though—there needs to be a compelling case made for why the moral intuition is to be considered suspect. A thought experiment is unlikely to reach that bar. Carefully collected data and a supporting theory are in with a chance.
I am also wary of bias in what people suggest should be thrown out when such conflicts arise. If our intuitions seem to conflict with a simple conception of altruism, maybe what we need to throw out is the simple conception of altruism as a foundational ‘good’, rather than the intuitions that produce the conflict.
I confess to being somewhat confused now. Your previous comment questioned the relevance of moral intuitions generated by particular types of thought experiments, and argued (on what seem to me pretty thin grounds) against accepting what seemed to be the standard intuition that the 11th man’s not-sacrificing is morally questionable.
In contrast, this comment extols the virtues of moral intuitions, and argues that we need a compelling case to abandon them. I’m sure you have a good explanation for the different standards you seem to be applying to intuitive judgments in each case, but I hope you’ll understand if I say this appears a little contradictory at the moment.
P.S. Is anyone else sick to death of the baseball/differential equations example? I doubt I’ll actually follow through on this, but I’m seriously tempted to automatically vote down anyone who uses it from now on, just because it’s becoming so overused around here.
P.P.S. On re-reading, the word “simple” in the sentence you quoted was utterly redundant. It shouldn’t have been there. Apologies for any confusion that may have caused.
I made a few claims in my original post: i) I don’t think the 11th man is acting immorally by saving himself over the 10; ii) most people would think he is acting immorally; iii) most people would choose to save themselves if actually confronted with this situation; iv) most people would consider the 11th man’s moral failing to be forgivable. I don’t have hard evidence for any claim except i), they are just my impressions.
The contradiction I see here is mostly in the conflict between what most people say they would do and what they would actually do. One possible resolution of the conflict is to say that self-sacrifice is the morally right thing to do but that most people are morally weak. Another possible resolution is to say that self-sacrifice is not a morally superior choice and therefore most people would actually not be acting immorally in this situation by not self-sacrificing. I lean towards the latter and would attempt to explain the conflict by saying that people see more value in signaling altruism cheaply (by saying they would self-sacrifice in an imaginary scenario) than in actually being altruistic in a real scenario. There is a genuine conflict here but I would resolve it by saying people have a tendency to over-value altruism in hypothetical moral scenarios relative to in actual moral decisions. I actually believe that this tendency is harmful and leads to worse outcomes but a full explanation of my thinking there would be a much longer post than I have time for right now.
Conflicts can exist between different moral intuitions when faced with an actual moral decision and resolving them is not simple but that’s a different case than conflicts between intuitions of what imaginary others should do in imagined scenarios and intuitions about what one should do oneself in a real scenario.
If you have a better alternative to the baseball/differential equations example I’d happily use it. It’s the first example that sprang to mind, probably due to its being commonly used here.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
If I say “X-ing is wrong, but actually, if ever faced with this situation I would quite possibly end up X-ing because I’m selfish/weak” (which is what I and others have said elsewhere) then (a) there’s no conflict to resolve; and (b) it doesn’t make much sense to claim that my judgment that “X is wrong” is a cheap signal of altruism. In fact I’ve just signaled the opposite.
Now, if people change their moral judgments from “X-ing is wrong” to “X-ing is permissible”, then I agree that there’s a conflict to resolve. But it seems that cognitive dissonance provides an explanation of this behavior at least as good as cheap talk.
FWIW, If you want a self-interested explanation of the stated judgment that “X-ing is wrong”, I wonder whether moral censure (i.e. trying to convince others that they shouldn’t X, even though you will ultimately X) would be a better one than signaling. Not necessarily mutually exclusive I guess.
Judgements that a choice is morally wrong are clearly not the same thing as predictions about whether people would make that choice. On the way I view morality, though, a wide gulf between the two is indicative of a problem to be resolved. I see the purpose of morality as providing a framework for solving something analogous to an iterated prisoner’s dilemma. If we can all agree to impose certain restrictions on our own actions because we all expect to do better if everyone sticks to the rules, then we have a system of morality.
Humans have a complex interplay of instinctive moral intuitions and cultural norms that together form a moral framework that exists because it provides a reasonably stable solution to living in mutually beneficial societies. That doesn’t mean it can’t be improved, just that its very existence implies that it works reasonably well.
The problem then with a moral dilemma that appears to present a wide gap between what people say should be done and what people would actually do is that it suggests a flaw in the moral framework. A stable framework will generally require that decisions that people can agree are right (in that we’d expect on average to be better off if we all followed them) are also decisions that people can plausibly commit to taking if faced with the problem. It’s like the pre-commitment problem discussed before on Less Wrong. You might wish to argue for an idealized morality that sets standards for what people should do that are not what most people would do, but then you have to make a plausible case for why what people actually do is wrong. Further, I’d argue you have to make a case for how your system could actually be implemented with actual people in a stable fashion—an idealized morality that is not achievable with actual people is not very interesting to me.
Ultimately I don’t take a utilitarian view of morality—that what is ‘good’ is what maximizes utility across all agents. I take an ‘enlightened self interest’ view—that what is ‘good’ is what all agents can agree is a framework that will tend to lead to better expected outcomes for each individual if each individual constrains his own immediate self interest in certain ways.
There are heaps and heaps of consequentialist/utilitarian views that don’t maximize utility uncritically across everybody. It sounds like you prefer something in the neighborhood of agent-favoring morality, but ethical egoism is a consequentialist view too.
Based on discussions I’ve had here I get the impression that most people consider ‘utilitarianism’, unqualified, to imply equal weighting for all people in the utility function to be maximized. Even where equal weighting is not implied (the existence of the ‘utility monster’ as a problem for some variants acknowledges that weights are not necessarily equal) it seems that utilitarianism has a unique weighting for all agents and that what is ‘right’ is what maximizes some globally agreed upon utility function. I don’t accept either premise so I’m fairly sure I’m not a utilitarian.
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies. I haven’t found a description of an ethical theory that I feel comfortable identifying my views with so far, though ethical egoism seems somewhat close from the little I’ve read on Wikipedia (it’s what I ended up putting down on Yvain’s survey).
If a scheme isn’t implementable or stable, then it doesn’t maximize welfare, so utilitarianism does not recommend it. Utilitarianism describes a goal, not a method.
I don’t consider myself a utilitarian because I don’t agree with the goals of any of the variants I’ve seen described.
I’m not sure whether I consider myself a consequentialist because while I think that ultimately outcomes are important, I don’t see enough attention paid to issues of implementability and stability in many descriptions of consequentialist views I’ve read.
For example, it seems that some (not all) consequentialist ethics consider the ‘rightness’ of an action to be purely a function of its actual consequences, thus making it possible for an attempted murder to be a morally good act because it has an unintended good consequence and an attempt at assistance to be a morally bad act because it has an unintended bad consequence. Other variants of consequentialist ethics (rule consequentialism, which seems closer to something I would feel comfortable identifying with) recognize the impossibility of perfect prediction of outcomes and so associate the ‘good’ with rules that tend to produce good outcomes if followed. Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
You may find this paper on consequentialism and decision procedures interesting.
That’s okay, nobody else knows either. (People have guesses, but most of them exclude things that seem like they should be included or vice-versa.) The only way to get a handle on the word seems to be to listen to people use it a lot and sort of triangulate.
Agreed; however it’s important to distinguish between this sort of appeal-to-intuition and the more rigorous sort of thought experiment that appeals to reasoning (e.g. Einstein’s famous Gedankenexperimente).
Given that your defense of the morality of the choice was based on the inaction of not self-sacrificing, and that in this scenario inaction means self-sacrifice and you have to actively kill the other 10 people to avoid it, what reasoning supports keeping the same conclusion?
I’m comparing the inaction to the not-self-sacrificing, not to the lack of action. I attempted to clarify the distinction when I said the similarity was not ‘anything specific about the way the question is phrased’.
The similarity is not about the causality but about the cost paid. In many ‘morality of inaction’ problems the cost to self is usually so low as to be neglected but in fact all actions carry a cost. I see the problem not as primarily one of determining causality but more as a cost-benefit analysis. Inaction is usually the ‘zero-cost’ option, action carries a cost (which may be very small, like pressing a button, or extremely large, like jumping in front of a moving trolley). The benefit is conferred directly on other parties and indirectly on yourself according to what value you place on the welfare of others (and possibly according to other criteria).
I think our moral intuition is primed to distinguish between freely chosen actions taken to benefit ourselves that ignore fairly direct negative consequences on others (which we generally view as morally wrong) and refraining from taking actions that would harm ourselves but would fairly directly benefit others (which may or may not be viewed as morally wrong but are generally seen as ‘less wrong’ than the former). We also seem primed to associate direct action with agency and free choice (since that is usually what it represents) and so directly taken actions tend to lead to events being viewed as the former rather than the latter.
I believe the moral ‘dilemma’ represented by carefully constructed thought experiments like this represents a conflict between our ‘agency recognizing’ intuition that attempts to distinguish directly taken action from inaction and our judgement of sins of commission vs. omission. Given that the unusual part of the dilemma is the forced choice imposed by a third party (the evil king) it seems likely that the moral intuition that is primed to react to agency is more likely to be making flawed judgements.
This makes sense to me, but it seems to run counter to the nature of MrHen’s original claim that the issue is lack of responsibility. For example, if it’s all about CBA, then you would presumably be more uneasy about MrHen’s hostage example ($100 vs. 10 lives) than he seems to be. Presumably also you would become even more uneasy were it $10, or $1, whereas MrHen’s argument seems to suggest that all of this is irrelevant because you’re not responsible either way.
Am I understanding you correctly?
In this example I wouldn’t hold someone morally responsible for the murders if they failed to pay $100 ransom—that responsibility still lies firmly with the person taking the hostages. Depending on the circumstances I would probably consider it morally questionable to fail to pay such a low cost for such a high benefit to others though. That’s a little different to the question of moral responsibility for the deaths however.
Note that I also don’t consider an example like this morally equivalent to not donating $100 to a charity that is expected to save 10 lives as a utilitarian/consequentialist view of morality would tend to hold.
Well, you are certainly understanding me correctly.
OK, I think I’m sort of with you now, but I just want to be clear about the nature of the similarity claim you’re making. Is it that:
1. you think there’s some sort of justificatory similarity between not-sacrificing and harm-by-inaction, such that those who are inclined to allow harm-by-inaction should therefore also be more willing to allow not-sacrificing; or is it just that
2. you happen to hold both the view that harm-by-inaction is allowed and the view that not-sacrificing is allowed, but the justifications for these views are independent (i.e. it’s merely a contingent surface similarity)?
I originally assumed you were claiming something along the lines of 1., but I’m struggling to see how such a link is supposed to work, so maybe I’ve misinterpreted your intention.
Yes. I’d generally hold that it is not morally wrong to allow harm-by-inaction: there is not a general moral obligation to act to prevent harm. In real moral dilemmas there is a continuum of cost to the harm-preventing action, and when that cost is low relative to the harm prevented it would be morally good to perform that action but not morally required. At extremely low cost relative to harm things become a little fuzzy and inaction borders on an immoral choice. When the cost of the action is extremely high (likely or certain self-sacrifice) then there is no fuzziness and inaction is clearly morally allowed (declining to sacrifice yourself by jumping in front of a trolley to save 10 is not immoral).
Given inaction being morally permitted in the trolley case, I have difficulty imagining a coherent moral system that would then say that it was not permissible for the 11th man to save himself. The evil king does change the problem but I can only see it making not-sacrificing more rather than less morally acceptable. I can conceive of coherent moral systems that would allow the 11th man to save himself but would require the trolley jumper to sacrifice himself. I have difficulty conceiving of the reverse. That’s not to say that one doesn’t exist, it’s just sufficiently removed from my own moral sense that it doesn’t present itself to me.
OK, I see where you’re coming from now. (We still have strongly differing intuitions about this, but that’s a separate matter.)
This thought experiment among other things convinces me that omission vs. commission is a sliding scale.
That would fall in the territory I describe as fuzzy above. At a sufficiently low cost inaction begins to seem morally questionable. That is largely driven by intuition though and I’m skeptical of attempts to scale it up and draw moral conclusions. I believe there are reasons the intuition exists that do not scale up simply. In other words, scaling up from this to conclude that if a very small cost is obligatory to save a single person then a very large cost is obligatory to save a million people is faulty reasoning in my opinion.
Re. repeated requests for some LW whipping-boy other than religion: How about (Platonic) realism?
It may be more popular than religion; and it may be hard to find a religion that doesn’t require at least moral realism; but people will get less worked-up over attacks on their metaphysics than attacks on their religion.
You may wish to exempt morals and the integers.
Having recently received a couple of Amazon gift certificates, I’m looking for recommendations of ‘rationalist’ books to buy. (It’s a little difficult to separate the wheat from the chaff.)
I’m looking mainly for non-fiction that would be helpful on the road to rationality. Anything from general introductory type texts to more technical or math oriented stuff. I found this OB thread which has some recommendations, but I thought that:
1. this could be a useful thread for beginners (and others) here
2. the ability to vote on suggestions would provide extra information
So, if you have a book to recommend, please leave a comment. If you have more than one to recommend, make them separate comments so that each can be voted up/down individually.
See also Eliezer’s Rationalist Fiction and Great Books of Failure posts, and his old but good Bookshelf. A few here too.
Basic Economics by Thomas Sowell.
That looks good. Is it worth reading even if you’ve taken and understood the standard Econ 101 college course?
I would think that it is, but I’ve never taken a standard Econ 101 college course (not educated in the US, and didn’t take any economics courses as part of my degree).
The Selfish Gene by Richard Dawkins.
Consciousness Explained by Daniel Dennett.
I’m reading The Moral Animal (Robert Wright) currently and have been recommending it to everyone I talk to.
Beginning of unordered list test
Item one
Item two
End of unordered list test
Source code:
My guess: you’re missing a blank line before your list.
That sorted it, thanks.
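For reference, since the source code of the test above didn’t survive: Markdown only renders a list when a blank line separates it from the preceding paragraph. A minimal before/after sketch (a reconstruction, not the original test source):

    Beginning of unordered list test
    * Item one
    * Item two

renders the asterisks as literal text inside one paragraph, while

    Beginning of unordered list test

    * Item one
    * Item two

produces an actual bulleted list.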
The Demon-Haunted World: Science as a Candle in the Dark by Carl Sagan
Empire by Niall Ferguson.
Freedom Evolves by Daniel Dennett.
I wasn’t that taken with this book, and I’m usually a big Dennett fan. What did you like about it?
It helped clarify some thoughts I’d already had about free will—that the standard paradox of free will as incompatible with determinism was not a true paradox. I think the concept of free will used by many people is horribly confused and this book is the best attempt I’ve seen to come up with a coherent conception of what free will can mean in a purely material universe.
Same and same. Recently, Dennett has been good on memes, but elsewhere he does tend to waffle a bit. In Freedom Evolves, Dennett redefines the terms he is discussing, berates everyone else for not using his definitions, and then bangs on about them for hundreds of pages. That’s philosophy for you.
Surely You’re Joking Mr. Feynman by Richard Feynman.
Might be easier to manage comments and direct people to it if it’s a whole post rather than a comment in the May 09 open thread.
According to this post, doing so would be “against blog guidelines”. The suggested approach is to do top-level book review posts. I haven’t seen any of these yet, though.
Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time by Michael Shermer
What do people here (esp. libertarians) think about (inter)net neutrality?
Seems to me that net neutrality is a great boon to spammers and porn downloaders. People might not like it so much if they discovered that, without net neutrality, they could pay an extra dollar a month and increase their download speed while browsing by a factor of ten.
It seems you are conflating net neutrality (ISPs should not discriminate based on packet characteristics, including origin) with the concept that users should pay for the resources they use.
For one thing, spammers usually use botnets, so no change there; average users would bear the cost one way or another. Unless you are advocating deprioritization of all email traffic, ISPs have no way other than spam filters to differentiate what counts as spam. I see no connection to the net neutrality debate, or the pay-per-usage model.
As for porn-downloaders, I take it you mean people with high bandwidth needs, which includes all sorts of downloaders. (I really don’t see why you would emphasise porn here; even if you’re trying to evoke feelings of moral resentment, LW would seem the unlikeliest of places where this would have any effect.) I never had a problem with bandwidth usage caps, as long as they are explicit. Then carriers can compete on what they set these limits to and I can choose based on my needs. Nothing to do with net neutrality as far as I can see.
As for my libertarian view on net neutrality: When the governments allow for true competition between ISPs, they can drop all net neutrality provisions as far as I care. But then again, in a truly competitive market, I doubt we would be having a net neutrality issue to begin with.
| As for my libertarian view on net neutrality: When the governments allow for true competition between ISPs, they can drop all net neutrality provisions as far as I care.
Do you believe that true competition can exist in a free market where the economies of scale are as big as in the ISP market? If net neutrality isn’t enforced, a big ISP could squash a small new ISP by demanding a lot of money for peering. They are much less likely to try something like this against a big ISP, who has a lot more bargaining power.
(I am assuming “true competition” means at least low barriers to entry.)
The arguments against monopolies in a free market apply, here. A big ISP which set out to squash little ISPs would run up its own costs trying, thereby losing to other big ISPs which didn’t do this. If there was only one big ISP, they’d eventually fail if they kept this up, since it would be in the interest of all the little ISPs to peer with each other, and they’d eventually have most of the market, collectively. Economies of scale can be really useful, but unless your firm is able to use force, much of the savings will go to the consumers through competition.
Of course, in the real world, we’re awash in force, so perhaps this isn’t very useful. :(
But in the meantime, very many small ISPs would go out of business trying to compete before they collectively pull down the big ISP, which likely has other advantages beyond competing on price, such as having a lot of friends and influence among the set of people who could possibly invest funding into a new ISP.
At some point people are going to realize that getting into the ISP market is a recipe for disaster, and if this happens before the big ISP runs out of slack, competition dries up and the big ISP gets to continue being a monopoly.
So yes, if you assume that significant numbers of people will make irrational decisions and take large personal losses starting businesses that are very likely to fail it might work out, but I’m not sure that’s justified.
Honestly, most of the arguments about why monopolies would never survive in a truly free market are glaring examples of how irrational hard-line free market ideas are, usually because people turn the idea of an unregulated market itself into a terminal value and then start rationalizing why it will obviously produce ideal results.
Check out the startup market sometime. Most startups fail, yet there always seems to be money for new ones, because every now and then there’s a Google. You seem to be assuming that people won’t do what they’re actually doing.
Technology startups generally have relatively low entry costs and aren’t trying to jump into an established market with substantial network effect and force out an entrenched larger player.
How many startups do you see trying to, say, go toe-to-toe with Microsoft in the desktop OS or office suite market, and how successful are those?
It’s a fallacy to point to the lack of direct competition from startups in the desktop OS or office suite market and claim that as proof that natural monopolies exist. Companies that dominate an industry for a period often lose their dominance when new technologies come along that make their dominance irrelevant.
Companies that dominated telecommunications when fixed land lines were the only game in town now compete against cellular phone networks and Internet telephony. Microsoft’s dominance in the desktop OS space is becoming less and less relevant as more of people’s day to day computing needs move into the cloud. Google Docs is a potential challenger to Office in the future and has its roots partly in a startup (Writely).
Technological innovation has a way of undermining monopolies that are not protected by government regulation. Sometimes it even manages to undermine protected monopolies—the process of updating legislation to maintain profitable monopoly privileges in the face of technological change is fortunately slow enough that the rent seeking entities can be beaten by faster moving companies.
Net neutrality is a bunch of different issues that get incorrectly lumped together.
The first issue is prioritizing traffic based on endpoints. An example of that is where ISP A contacts example.com and offers to speed up its customers’ connections to example.com, in exchange for money. The problem is that example.com isn’t a customer of ISP A, but of a competing ISP. The full graph of business relationships is
End user <-> ISP A <-> ISP B <-> example.com
The end user pays ISP A to use its portion of the network, example.com pays ISP B to use its portion of the network, and they split the cost of the link that connects the two ISP’s networks. If ISP A goes after example.com, then it’s trying to bill its competitor’s customers. This is probably in violation of its peering agreement with ISP B, and it would cause a total nightmare for anyone trying to run a web site, as they would have to negotiate with every ISP instead of just the one they get their connectivity from. So with respect to traffic endpoints, net neutrality is extremely important.
The second issue is prioritizing traffic based on type. This is reasonable and sometimes necessary, because some protocols such as telephony only use a small amount of bandwidth but are badly disrupted if they don’t get any for a fraction of a second, while other protocols like ftp use tons of bandwidth but can be paused for several seconds without disrupting anything. The problem there is that protocol prioritization is more often used as a cover story for anti-competitive behavior; e.g., ISP A wants to drive ISP B out of business, so they configure their network so that ISP A’s expensive VoIP service gets high priority, ISP B’s VoIP service gets low priority, and everything else gets medium priority. You end up with telephone companies setting the priority on VoIP services that directly compete with their own voice services, cable television setting the priority on streaming video services that directly compete with their own television services, and so on.
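To make the type-based case concrete, here is a minimal strict-priority scheduler sketch in Python; the traffic classes and their rankings are invented for illustration, not any real ISP’s policy:

```python
import heapq

# Strict-priority scheduling sketch: lower number = served first.
# The class-to-priority mapping is exactly where the abuse described
# above would live (e.g. ranking a rival's VoIP below your own).
PRIORITY = {"voip": 0, "web": 1, "ftp": 2}

queue = []
arrival = 0  # tie-breaker so equal-priority packets keep arrival order

def enqueue(kind, payload):
    global arrival
    heapq.heappush(queue, (PRIORITY[kind], arrival, kind, payload))
    arrival += 1

def dequeue():
    _, _, kind, payload = heapq.heappop(queue)
    return kind, payload

enqueue("ftp", "bulk chunk")
enqueue("voip", "20ms audio frame")
enqueue("web", "page request")
print([dequeue() for _ in range(3)])
# -> voip first, then web, then ftp: the audio frame never waits
#    behind the bulk transfer.
```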
Phil, things like cables and phone lines going to houses are “natural monopolies” in that it costs so much to install them that competitors probably can never get started. In fact, if the technology to deliver video over phone lines were available or anticipated when cable TV was building out in the 70s, the owner of the phone lines (pre-breakup AT&T) could probably have stopped the new cable TV companies from ever getting off the ground (by using the fact that AT&T has already paid for its pipe to the home to lowball the new companies). In other words, the probable reason we have two data pipes going into most homes in the U.S. rather than just one is that the first data pipe (the phone line) was not at that time of the introduction of the second pipe technically able to carry the new kind of data (video).
It is desirable that these duopolists (the owners of the phone lines and the cable-TV cables going to the home) are not able to use their natural duopoly as a wedge to enter markets for data services like search engines, online travel agencies, online stores, etc, in the way that Microsoft used their ownership of DOS to lever their way into dominance of markets like word processors and web browsers.
One way to do that is to draw a line at the level of “IP connectivity” and impose a regulation that says that the duopolists are in the business of selling this IP connectivity (and, if they like, levels below IP connectivity like raw copper) but cannot enter the market (or prefer partners who are in the market) of selling services that ride on top of IP connectivity and depend on IP connectivity to deliver value to the residential consumer.
This proposal has the advantage that up to now on the internet companies that provide IP connectivity have mostly stayed out of most of the markets that depend on IP connectivity to deliver value to the residential consumer.
It is possible to enshrine such a separation into law and regulations without letting one cable-internet user on a local network (or whatever they call them) shared by a whole block of houses hog up most of the bandwidth of the local network. I.e., there is nothing incompatible here with contracts that impose a monthly cap on bytes received.
And even if spam filtering is made an exception to the separation, so that both connectivity providers (cable-internet and DSL providers) and Google can offer spam filtering, that does not mean that spammers get a free license to spam. What we want is to prevent Verizon or Comcast from making it impossible or more difficult for Joe Consumer to go to Expedia than to go to Travelocity (or the Comcast Travel Store) -- or more difficult for him to go to Windows Live Search than to Google Search -- and we can do that while still allowing Verizon and Comcast to cut off recalcitrant spammers (or requiring Joe Consumer to get his email from an email provider that does not happen to be a duopolist and who will cut off recalcitrant spammers).
Bob Frankston has been eloquent on this issue for at least 10 years now.
How does that square with the fact that in places without government-granted monopolies, there are often more than one provider? My apartment building has two separate cable companies, in addition to Verizon fiber. Is there a general argument for how rental houses often end up with two or more separate cable boxes from more than one provider in areas without government suppression of competition, while still holding that it can’t happen in the general case?
Installing wires requires digging up roads, using utility poles and leaving utility boxes on other peoples’ property. You need permission from local government to do that, period. In some places, the local governments only give one company permission to do that. That’s a government granted monopoly. In other places, they give permission to more than one company, so that they can compete with each other. That’s a government granted oligarchy. But whether there’s one pre-existing cable company or five, if you want to start a new cable company, you need government permission, and you probably won’t get it. It’s nothing like a free market.
And it’s worth noting that a gentlemen’s agreement to not compete too hard is profitable for all parties in a heavily restricted market. Ergo, the government-granted oligopoly is only superior to the monopoly insofar as you expect business executives to be irrational enough to not cooperate in the iterated Prisoner’s Dilemma.
You mean oligopoly
Yep, it’s the bankers that are a government granted oligarchy.
There are other alternatives as well. There’s a company here that provides high speed broadband to businesses via a network of roof mounted microwave transmitters. Businesses use them because they offer better value than paying a local cable company to hook a building up. My parents in rural England had a number of options for broadband despite no cable companies operating in the area, including DSL and a wireless relay from a satellite uplink.
I’m not a libertarian. I am in favor of net neutrality for the following reasons:
1. Setting up different speeds for different sources of data means that somewhere along the line, either a person or a program is going to see what sites or downloads the customer is trying to access. Anything that gives someone an excuse to spy on browsing is to be discouraged.
2. Net neutrality keeps the barrier to entry on the Internet low. If it’s necessary to pay extra fees to ISPs to make your site tolerably fast to access, then financially disadvantaged people who have something to offer on their websites will lose out on the part of their audience that is not patient enough to wait for the slower load times.
3. Saying that no sites will get slower than they already are without net neutrality is not a convincing argument, because speed in computers is judged relatively. The computer my family had when I was five was a fast computer then. It is not just slower than the computers that are new now, it’s no longer a fast computer, period, even if we assume that it hasn’t deteriorated by an absolute measure in the last fifteen years. As such, I would rather everything be the same (absolutely slower) speed than have some things get (absolutely) faster.
4. Saying that consumers will be able to choose net-neutral ISPs is not a convincing argument, because in many places, there are not multiple ISPs competing for business. I cannot get my Internet from anyone other than Comcast; if Comcast becomes non-neutral, I cannot take my business elsewhere unless I want to do without the Internet at home altogether.
This seems to be an argument that people who have something they want to say that nobody wants to pay to hear should be subsidized by people who have something to say that they are either willing to pay to make available or have found others who are willing to pay them to hear (usually through the intermediary of paid adverts under current Internet business models). Is that your actual position, or would you not support that argument? If you do not support this interpretation, where do you see the distinction?
If I parse your long sentence correctly, I think I disagree with your interpretation. If no one wants to pay to hear something, that could be for any of several reasons, but the one I had in mind was lack of information about the message or the speaker (e.g. “Hey, do you want to buy a book? It’s only fifty cents!” “What book?” “I’m not going to tell you. It could be the latest John Scalzi, or it could be Volume Four of an outdated botanical encyclopedia, or it could be sheet music for the ocarina, or it could be a coffee table volume about those giant cow art projects in Kansas City.”). Browsing unknown websites is a gamble with time already; making it a gamble with money too will make it less appealing and fewer people will be able to get large audiences. New content providers already have difficulty attracting attention.
I see that as a feature rather than a bug though. Spam is a problem in large part because the cost of sending it is extremely low, much lower per mail for the spammer than the cost to the recipient in wasted time. If someone has some information that they want to share that they believe will be of value to others then an up-front investment is a measure of how valuable they really think it will be. If the primary value of sharing the information is the pleasure of hearing the sound of your own voice (as seems to be the case for a significant percentage of the Internet) or as an attempt to steal other people’s time for personal profit (as in the case of spam) then I think a higher barrier to entry is a good thing.
It seems to me that filtering out information that I don’t want is at least as big a problem on the Internet as finding information I do want.
People already have to spend time and effort to provide the information, which constitutes a concrete investment indicating how valuable they think it is. Many also pay for web hosting. Why would additional costs in money serve any purpose other than to introduce a selection bias in favor of people who have more money?
Also, it wouldn’t help with spam at all and I have no idea why you think it would.
If different types of traffic can be given differing priorities or charged at different rates then I think creative solutions to the spam problems are more likely to be discovered. If some kind of blanket legislation is introduced prohibiting any kind of differentiation between types of traffic then I’m inclined to think we will see less optimal allocation of resources. Even differentiating between high-bandwidth/high-latency usage like movie downloads vs. medium-bandwidth/low-latency usage like online gaming will be restricted. I have no faith in lawmakers to craft legislation that will not hamper future technological innovations.
You may recall that net neutrality is currently being debated; there is no current legal barrier to adjusting priorities for types of traffic. Spam has been a problem for quite a while now and no such solutions have been found.
The general rule of thumb when it comes to “creative solutions to the spam problem” is “it won’t work”.
I don’t think it’s true to say that no creative solutions to spam have been found. Spam filters are probably the most successful real world example of applying Bayesian techniques. The battle against spam is an ongoing struggle—we have solutions now to the original problems but the spammers keep evolving new attacks. Legislation will tend to reduce the options of the anti-spammers who have to follow the law and give an advantage to the spammers who ignore it.
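For anyone who hasn’t seen one, a toy sketch of the Bayesian filtering referred to here (the training data is invented; real filters add token normalization, better priors, and constant retraining):

```python
import math
from collections import Counter

# Toy naive Bayes spam filter: score P(spam | words) via Bayes' theorem,
# treating word occurrences as independent given the class.
spam_docs = [["cheap", "pills", "buy"], ["buy", "now", "cheap"]]
ham_docs = [["meeting", "notes", "attached"], ["buy", "milk", "please"]]

spam_counts, ham_counts = Counter(), Counter()
for doc in spam_docs:
    spam_counts.update(doc)
for doc in ham_docs:
    ham_counts.update(doc)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(words, counts):
    total = sum(counts.values())
    # Laplace smoothing keeps unseen words from zeroing out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def p_spam(words):
    log_spam = math.log(0.5) + log_likelihood(words, spam_counts)
    log_ham = math.log(0.5) + log_likelihood(words, ham_counts)
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

print(p_spam(["buy", "cheap", "pills"]))  # high: resembles the spam class
print(p_spam(["meeting", "notes"]))       # low: resembles the ham class
```

The arms race described above shows up in practice as spammers deliberately padding mails with innocuous words to drag that score down.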
Any legislation will limit options and hamper innovation and technological progress. That’s what legislation invariably does in all fields.
Bayesian filtering at the user end is the only exception to the rule of thumb I’m aware of. The only other anti-spam actions I’ve heard of with any success are distinctly non-creative variations on cutting the hydra’s heads off, such as blocking individual spam sources.
“Invariably”? Do you have any evidence for this assertion?
Legislation can increase one group’s options by taking away options from another group. It can’t globally increase options. Legislation is just rules about what actions are permitted and what actions are not permitted, so it can’t create new options; it can only take them away or trade them off between different groups. Fewer options means reducing the space of allowed innovations, and so hampers technological progress. If you want evidence I direct you to the field of economics.
As I said in another comment this discussion is straying into general politics territory and I’m not sure I want to start down that road. We still haven’t decided as a community how to deal with that particular mindkiller.
You’ve defined “options” in a manner that is zero-sum at best and serves mostly to beg the question raised. Even so, consider: what about government uniquely allows it to limit options? Imagine one man owns 90% of the property in a town and uses his influence to financially ruin anyone who does something he doesn’t like, limiting their potential options (a net negative). A government steps in and forbids him from this behavior, thereby limiting his options but restoring everyone else in the town’s options (a net positive).
You could of course define only physical force, or threat thereof, as limiting options, but even in that case a state-run police force is clearly restoring more options than it removes.
Agreed, and I wouldn’t have gotten into it on anything but an Open Thread (wherein it seems relatively harmless).
A monopoly on the use of force.
Assuming the validity of the example for the sake of argument, this kind of situation is what I meant when I said that legislation can only move options from one group to another. The example I had in mind was anti-discrimination laws—the government removes the option from an employer to discriminate on the basis of race/sex/religion and thus increases the options available to the employee who was discriminated against. That’s one of the best cases I can think of for the argument that the change is a net positive but I don’t think it’s a watertight case.
In the case of legislation limiting economic activity I think it’s hard to argue that reducing options can ever be an encouragement to innovation and technological progress, although it can potentially redirect it in politically favoured directions. The only economically sound arguments for legislation I’ve seen stem from attempts to internalize negative externalities and while in theory such legislation can be justified, real world examples are often less clearly beneficial.
As long as it stays civil it should be harmless; political discussions have a tendency to degenerate rapidly though...
Hmmm… spending tax money to directly fund basic research doesn’t count? (If you have to, assume that the people whose money was taxed would have spent it on something generally unrelated to technological progress—say, tobacco cultivation and consumption.)
In theory, eliminating or discouraging options that result in not creating progress should result in more progress...
It would count as an example of redirecting innovation and technological progress in politically favoured directions. I would argue that very little money is spent on something unrelated to technological progress—all industries, the tobacco industry included, drive technological progress in their pursuit of greater profits. The technologies that get developed will tend to be technologies that help satisfy the public’s actual wants and needs rather than those that the political class thinks are more important or those that have the best lobbyists.
I figured you would claim that—please justify it. How does my (contrived, but not impossible) scenario not result in a global net negative in terms of options available? How is someone exercising non-physical power and influence to limit someone else’s options different from physical force in terms of practical end results?
Also note that property rights are backed by threat of force from the state. Does the existence of property rights constitute a net loss in options for society and, if so, would it be better to repeal them?
I didn’t originally claim that governments alone have the power to limit options. My original claim was that legislation cannot globally increase available options. What is unique about states (unique in practice if not in theory) is that their geographical scope and monopoly on the use of force gives them vastly greater power to limit options than other entities, power that they have a strong inclination to exercise at every opportunity. I’m not aware of any individuals in history who have had anything approaching the power of a modern state to restrict the options of others without using physical force. It is much easier for the victim in your example to move to another town than it is for most people to escape the reach of states.
I stand by the claim that legislation can only reduce or redistribute options and not create them. I also believe that states are far more capable of meaningfully restricting options than individuals so long as they maintain a monopoly on the use of force. I never intended to claim that non-state actors cannot also restrict options, sometimes without the use of force, and I’m not going to try to defend that straw man.
I believe societies work better with property rights (though I have serious doubts about intellectual property rights being a net benefit). I believe the benefit comes from reducing the number of occasions when conflicts of interest can only be resolved by resorting to violence. If everyone can agree to a framework in advance for resolving disputes without violence then there is a net benefit to be gained. I think it is unclear whether this results in a net loss in options for society, since the greater prosperity property rights make possible leads to new options that may not have existed before. Certainly individuals lose options in the short term, but a rational agent may make that choice in order to reap the perceived future benefits. For individuals it’s akin to a hedging strategy—giving up some potential gains (stealing from others) in order to reduce the risk of catastrophic losses (being killed by others).
If you want to pursue a similar argument to justify net neutrality legislation (or any piece of proposed new legislation) then I believe you’d need to make a case that the introduction of such legislation would lead to such an improvement in prosperity that it would more than make up for the lost opportunities it prevents. I think that is a difficult case to make for most legislation.
Well, no, because you’ve defined options such that a global increase appears to be essentially impossible.
A large, modern state such as the USA federal government, yes. Beyond that, a large corporation has more power to restrict people than, say, a small township’s government, including that it’s easier to move to a new town than to escape a global corporation. There are a lot more ways to coerce people than physical force; bribery is pretty effective, too.
So you do agree that by intervening with force to prevent a non-state actor from restricting options, the state can increase global options vs. non-interference?
Governmental actions, including enforcing property rights, are in the end backed up with threat of violence, as always. You’ve not removed the violence inherent in the system, merely hidden it.
Of course it restricts options—there’s nothing you can do in a society with property rights that wouldn’t also be possible in a society without property rights, it’s just less likely to occur without government-imposed restrictions.
Ergo, an example wherein government acting to reduce individual options has causally led to a greater chance of success and more innovation.
Furthermore, your defense of property rights is pretty much exactly the same logic that defends any government intervention at all. You’ve drawn an arbitrary line, and the fact that plenty of societies far on the wrong side of your line have prospered suggests that it isn’t immediately obvious that your placement of the line is the correct one.
The arguments in favor of net neutrality are well known, and persuasive mostly because:
1. The telecom market is not a free market in any conceivable way.
2. The telecom companies have a history of not doing a good job.
3. At least one company has floated trial balloons about exactly the sort of absurdity that net neutrality is intended to prevent.
4. The stuff it intends to prevent is antithetical to the design of the internet and pretty objectively bad for anyone who isn’t a bloated, inefficient telecom monopolist, and there’s evidence that the chance of it happening is nontrivial; ergo, the burden of proof is substantially shifted to those arguing that the negative side-effects (e.g., collateral damage to legitimate packet QoS) are bad enough to be not worth it. I’ve yet to see any persuasive arguments along these lines, especially from informed, respected people in the technology field.
The only reason I can see to oppose net neutrality is a (in my opinion, unjustifiably) large prior probability for the proposition “legislation X is ipso facto bad” for all X.
I don’t believe I’ve defined options in any particularly unusual way. What specifically do you take issue with? There is a sense in which options can globally increase—economic growth and technological progress can globally increase options (giving people the option to do things that were not possible before). Institutions that tend to encourage such progress within a society are valuable. Legislation that limits options requires very compelling evidence that it will encourage such progress to be justified in my opinion—when in doubt, err on the side of not restricting options would be my default position.
Bribery is not coercion, it’s an economic exchange. It differs from other economic exchanges in that it generally involves a non-state actor exchanging money or other goods for favourable treatment under the coercive powers of a representative of the state. I cannot think of an example of being restricted by a corporation except when they have acted in concert with the state and have had the backing of the state’s threat of force. I don’t really know what you mean by ‘escaping’ a global corporation—what kind of escape do you have in mind beyond terminating a contract?
If a state intervenes with force to prevent the use of force by a non-state actor (the police intervening in a mugging, for example) then it is creating an environment that is more conducive to productive economic activity and so allows for a global increase in options. I think the set of non-force actions a non-state actor can take to reduce options, and in which the state can beneficially interfere, is either empty or very small though. I’m also not convinced that a state is the only institution that can play this beneficial role, though there are limited historical examples of alternatives.
I’d disagree that the line is arbitrary. It’s certainly less arbitrary than the standard generally applied when deciding what laws to pass. It’s true that it’s not immediately obvious that it’s in the right place. That’s why I consider the large amount of evidence demonstrating greater economic growth and prosperity in societies that are closer to the line to be one of the key insights from modern economics.
Ironically the argument you are making here is almost exactly a mirror image of my argument against net-neutrality legislation. The fact that the Internet exists as it does without any current legislation suggests that it isn’t immediately obvious that your desire to move the line is the correct one. There seems to me to be little evidence that would suggest that in this one special case legislation would be beneficial to outweigh the large amounts of evidence that restrictive legislation is generally a net negative and a barrier to innovation.
This addresses the “options” point but not the “hamper innovation” point. The obvious (but arguable) counter-example to the “hamper innovation” point is patent law, in which the government legislates a property right into existence and gives it to an inventor in exchange for the details of an invention.
ETA: Patent law is said to foster* innovation in two ways—it protects an inventor’s investment of time and energy, and it encourages inventors to make details public, which allows others to build on the work. These others can then patent their own work and obtain licenses to use the previously patented components.
* phrasing weakened after reading reply. Was: “Patent law fosters innovation...”
True, patent law is intended to promote innovation. There’s quite a lot of evidence that it has the opposite effect but I agree it’s not immediately obvious that it doesn’t work and there is not yet a consensus that it is a failure. The standard argument you give in favour of patent law is at least superficially plausible.
I let automatic programs filter most of my spam, and the small trickle that gets through seems a small price to pay for the fact that I can have my creative projects on the Internet for free, without having to pay a premium to eliminate a special opportunity cost for potential readers. According to my stats, they are not utterly valueless wastes of space—I have some people who are willing to invest time in viewing my content—but I don’t doubt for a moment that I’d lose most, if not all, of my audience if they were obliged to pay money (that didn’t even make its way to me, the creator of the content).
People drop or refrain from picking up new sites over very little provocation—I stopped reading Dr. McNinja when I started using an RSS feed instead of bookmarks to read my webcomics. Dr. McNinja didn’t become more inconvenient to read when I made this switch; I could have kept the bookmark—it simply didn’t get more convenient along with everything else. I didn’t care about it quite enough to keep it on my radar when that would have cost ten seconds of conscious effort three times a week—and that’s not even money plus the hassle of providing money over the Internet. I can’t think of any (individual) website that I would pay even a trivial extra amount of money to visit.
The situation you describe is the one that currently exists without any net neutrality legislation though.
I’m suspicious of net neutrality because it uses the threat of imagined or potential future problems to push for more legislation and more government involvement in a market that seems to have worked pretty well without significant regulation so far. This is a general tactic for pushing increased government involvement in many different areas.
The actions that have so far been taken that would be prohibited by net-neutrality legislation mostly seem to be about throttling bittorrent traffic. I’d much rather see a focus on eliminating existing government sponsored monopolies in the provision of broadband access and allow the market to sort out the allocation of bandwidth. I am very doubtful that any kind of legislation will produce an optimal allocation of resources.
The fact that something seems to have worked pretty well without significant regulation so far could mean that it will continue to do so, or it could mean that it’s been lucky and taking no new precautions will cause it to stop working pretty well. I don’t have any antivirus software on my Mac; if more people start finding it an appealing challenge to infect Macs with viruses, though, it would be stupid for me to assume that this will be safe forever. More companies are starting to show interest in behaviors that will lead to biased net access. Regulation will almost certainly not yield optimal allocation of resources; it will, however, prevent certain kinds of abuses and inequalities.
I guess this comes down to politics ultimately. I have more faith that good solutions will be worked out by negotiation between the competing interests (Google tends to counter-balance the cable companies, consumers have options even though they tend to be limited by government sponsored monopolies for broadband provision) than by congress being captured by whoever has the most powerful lobbyists at the time the laws are passed. I take the fact that things are ok at the moment as reasonable evidence that a good solution is possible without legislation. Certainly bad solutions are possible both with and without legislation, I just tend to think they are much more likely with legislation than without.
I must have misread, lifetime access to lesswrong isn’t worth one cent, but you’ll voluntarily spend hours of time on it?
This may or may not have to do with the fact that I am not paid by the hour. My stipend depends on grading papers and doing adequately in school, but if I can accomplish that in ten hours a week, I don’t get paid any less than if I accomplish it in forty. Time I spend on Less Wrong isn’t time I could be spending earning money, because I have enough on my plate that getting an outside job would be foolish of me.
Also, one cent is not just one cent here. If my computer had a coin slot, I’d probably drop in a penny for lifetime access to Less Wrong. But spending time (not happily) wrestling with the transaction itself, and running the risk that something will go wrong and the access to the site won’t come immediately after the penny has departed from my end, and wasting brainpower trying to decide whether the site is worth a penny when for all I know it could be gone next week or deteriorate tremendously in quality—that would be too big an intrusion, and that’s what it looks like when you have to pay for website access.
Additionally, coughing up any amount of money just to access a site sets up an incentive structure I don’t care for. If people tolerate a pricetag for the main contents of websites—not just extra things like bonus or premium content, or physical objects from Cafépress, or donations as gratitude or charity—then there is less reason not to attach a pricetag. I visit more than enough different websites (thanks to Stumbleupon) to make a difference in my budget over the course of a month if I had to pay a penny each to see them all.
In a nutshell: I can’t trade time alone directly for money; I can’t trade cash alone directly for website access; and I do not wish to universalize the maxim that paying for website access would endorse.
The telecommunications market in the United States is so ridiculously far from an idealized free market in so many ways that I don’t see why you’d expect a libertarian perspective to be insightful.
The only sensible free market perspective on anything related to telecommunications has to pretty much start with “tear down the entire current system and start over”.
Obviously we would do things differently if starting over from scratch, but that isn’t going to happen, and it doesn’t mean that we shouldn’t think about the incremental steps we should take. And I don’t think we should ignore economics when thinking about what those incremental steps should be.
Of course not.
However, the typical libertarian response is roughly “move closer to a free market”, which does not necessarily give better results from all starting points. In the case of a market that is by its nature far from a perfectly competitive one, that’s been heavily distorted by previous government interference, has several entrenched players, &c., there’s every reason to believe that naively reducing regulation will lead toward a local minimum.
Look up Tim Lee from the Cato Institute. I think he has written on this topic. I think the standard libertarian position on net neutrality is that it’s no good. Personally, I don’t have the technical knowledge to really comment, though once at what might be called a libertarian boot camp, I came up with the slogan “surf at your own speed” in an exercise to come up with promotions for net nonneutrality.
Also, see this podcast: http://www.econtalk.org/archives/2009/01/eric_raymond_on.html
Toward the end (I think) they get into the issue. I can’t remember what Raymond says, but IIRC, he takes a nonneutrality position while not sounding like the standard libertarian position. It’s an interesting podcast throughout, however, and you should listen to the whole thing. All of you.
edit: It was Tim Lee, not Tim Butler: http://www.cato.org/people/timothy-lee
For a libertarian perspective on these issues see:
http://www.econtalk.org/archives/2008/11/hazlett_on_tele.html
“Thomas Hazlett of George Mason University talks with EconTalk host Russ Roberts about a number of key issues in telecommunications and telecommunication policy including net neutrality, FCC policy, and the state of antitrust. Hazlett argues for an emergent, Hayekian approach to policy toward the internet rather than trying to design it from the top down and for an increased use of exchangeable property rights in allocating spectrum.”
The Internet should be as free as the streets. Internet cables, switches, retransmitters, etc. should be considered public space.
Applause light. Why do you think it should be free? And why would that offset the inefficiencies inherent in a massive public system?
This is what I’m talking about.
The reverse statement is not absurd.
Is there any evidence that massive systems are efficient no matter who is running them? Government-run utilities don’t seem to do worse, and the USA health care system is demonstrably less efficient than many countries’ public systems.
Can you give an example? All the government run utilities that come to mind are disasters. There are probably examples of government run utilities that are efficient but I’m having trouble thinking of any.
My local town provides unusually good garbage collection service. An attempt by the borough council to save money by hiring private contractors for garbage collection was met by many, many outraged people showing up to the council meeting to protest.
Efficient implies cost-effective: government run garbage collection might be a reasonably high quality service but come at an unreasonably high price. It sounds like cost concerns were the motivation for the change in your example.
The protesters made it very clear that they preferred to pay more for the higher level of service.
But since this is a government run monopoly they don’t have that choice as individuals so instead they have to take political action.
Topics that have appeared in recent posts are strictly forbidden upon pain of 3^^^3 dust specks in your eye.
Would you hurt me if I pointed out that G_1 is actually 3^^^^3, not 3^^^3?
No, I wouldn’t hurt you—but I would send you to your room with only Knuth up-arrows for dinner.
Are 3^^^3 dust specks enough dust specks to cause the universe to collapse rather than expand or has the expansion moved beyond even the theoretic influence of gravity?
There’s no actual dust specks—we just wire dust-speck-simulating punitive cybernetic components into your nervous system and run ’em 3^^^3 times.
Well, someone’s eye.
No, all in the offender’s eye—I have it on good authority that it’s worse than 50 years of torture.
What’s 3^^^3? I take it more than 27.
If memory serves me, that’s using Knuth’s up-arrow notation.
What it works out to is 3^3^3^…^3, a power tower of threes 7,625,597,484,987 levels high (that height being 3^^3 = 3^27).
It’s a large number.
http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
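For the curious, the recursion the notation stands for, as a Python sketch; it is fine for tiny cases, while anything like 3^^^3 would never terminate, which is rather the point:

```python
def up(a, n, b):
    # Knuth's up-arrow: one arrow is exponentiation, and each extra
    # arrow iterates the previous operator b times.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) is 3^^^3: a power tower of 3s about 7.6 trillion levels high.
```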
Define “rational”. I seem to have forgotten.
Believing things that are true, and then using this true knowledge to do the best thing.
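That one-liner splits into two halves, which a toy sketch can make concrete (the probabilities and utilities below are invented):

```python
# "Believing things that are true": get the probabilities right.
belief = {"rain": 0.3, "sun": 0.7}

# "Doing the best thing": pick the action with the highest expected utility.
actions = {
    "carry umbrella": {"rain": 1.0, "sun": -0.2},
    "leave umbrella": {"rain": -5.0, "sun": 0.0},
}

def expected_utility(outcome_utils):
    return sum(belief[o] * u for o, u in outcome_utils.items())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "carry umbrella": 0.16 expected utils vs. -1.5 for leaving it
```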
In the process of reading and thinking and processing all of this new information I keep having questions about random topics. Generally speaking, I start analyzing the questions and either find a suitable answer or a more relevant keystone question. I have often thought about turning these into posts and submitting them for the populace but I really have no idea if the subjects have been broached or solved or whatever. What should I do?
An example would be continuing the train of thought started in Rationalistic Losing. The next question in that line deals with how to determine which contests are valuable and how to weigh the pros and cons of particular contests. But I have strong suspicions that this subject has been covered or is so blatantly obvious no one would be interested. Should I just post the darn thing and find out then, or should I ping for interest first?
The reason I even ask is that I am very under-versed in LW and OB topics and posts. This leads me to the opinion that I should not submit new posts until after I read everything. The flip-side is that I am going to be thinking about all of these things anyway; what harm is there in typing it up and submitting it? There is potential gain in some feedback and the only potential harm seems to be annoying you. But if the post is annoying, the community can downvote and be done with it.
Are there any strong opinions about this?
The question could be reworded in this form: Should newer members post on what could be potentially obvious, boring, or been-there-done-that topics?
I agree; do post.
The ability to solve epistemic problems where motor actions are inhibited—and all the agent can do is indicate the correct answer—seems like an important subset of problems a rational agent can face—if only because this skill is less complicated to learn—and is relatively easy to test.
I believe this skill is usually referred to as “reasoning”. Maybe we should be discussing this subject more than we do.
Anyone know how to unhide a post? (I think I clicked on the hide button without meaning to.)
Each post under http://lesswrong.com/user/yourname/hidden/ should have an Unhide link.
Thanks muchly.
This is not very topical, but does anyone want to help me come up with a new term paper topic for my course on self-knowledge? My original one got shot down when it turned out that what I was really trying to defend was a hidden assumption that is unrelated to self-knowledge. Any interesting view I can defend or attack on the subject of introspection, reflection, self-awareness, etc. etc. has potential. Recommended reading is appreciated.
I found Strangers to Ourselves an interesting read on this topic. One of his claims is that the best way to come to know yourself is to study how others react to you and to study your own actions, rather than relying on introspection, which is an interesting perspective.
Someone mentioned Eric Schwitzgebel here recently, and if you didn’t catch it, his papers might be of help. I’m currently reading “The Unreliability of Naïve Introspection”, and it’s very good so far.
I was the person who mentioned Eric Schwitzgebel, and that paper is a reading we were assigned in class. I’d love to write something on him, but the trouble is that I just agree with him; that isn’t enough for a five-page paper, let alone twenty.
Well, that’s what I get for not verifying who mentioned him ^_^
Do you read OB as well? Robin Hanson’s posts often cover topics related to self-knowledge (typically how our actions are better explained by such cynical factors as social signaling, and our beliefs are revised in feel-good ways). I’d say just about any of the studies he links could use up a number of pages.
In many eastern philosophies, there are meditational practices which seek to see without ego. On Less Wrong and in other places, I got the idea that autistic people don’t construct stories; they see things as they are.
A topic you could try: when an autistic person sees within himself, there is something present there, not just nothingness.
Such research could be useful in constructing artificial consciousness and uploading applications. Kudos!
Um, yes.
Seeking comments on Info-Gap Decision Theory:
http://www.stat.columbia.edu/~cook/movabletype/archives/2009/04/what_is_info-ga.html
On the basis of a brief perusal of that post and the Wikipedia article it links to, I’d say it looks ad hoc and overcomplicated. It’s entirely possible that people working with it have had useful insights, but if the approach turns out to be anything like the best way of doing decision theory under conditions of extreme uncertainty then I’ll be extremely surprised.
(I also looked at the website made by the theory’s chief proponent. It seems to have scarcely any actual mathematical content, and to exist only for publicity; that doesn’t seem like a good sign.)
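For anyone who wants the flavor before reading further: the theory’s central object is a ‘robustness’ function, the largest uncertainty horizon at which a decision still clears a performance floor. A toy sketch under assumed forms (interval uncertainty around a nominal demand estimate; all numbers invented):

```python
# Toy info-gap robustness: how far can demand fall short of the nominal
# estimate before a production decision stops clearing a profit floor?
NOMINAL_DEMAND = 100.0
UNIT_COST, UNIT_PRICE = 0.5, 1.0

def worst_profit(quantity, alpha):
    # Interval uncertainty model: demand within a fraction alpha of the
    # nominal estimate; the worst case for profit is the low end.
    demand = max(NOMINAL_DEMAND * (1 - alpha), 0.0)
    return UNIT_PRICE * min(quantity, demand) - UNIT_COST * quantity

def robustness(quantity, profit_floor, step=0.001):
    # Largest alpha whose worst case still meets the floor (grid search).
    alpha = 0.0
    while worst_profit(quantity, alpha + step) >= profit_floor:
        alpha += step
    return alpha

# Smaller commitments earn less at the nominal estimate but tolerate
# bigger demand shortfalls, which is the trade-off the theory formalizes.
for quantity in (100, 80, 60):
    print(quantity, round(robustness(quantity, profit_floor=10.0), 3))
```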
Oops, I “reported” when I meant to “reply”. (Someone was talking to me and I clicked ‘yes’.) What action need I take to undo?
Seems like a sensible thing would be to report this as well, and the two will cancel out. However, I can’t report myself… can you report this, indicating you have done so with a single downvote, and then I will delete this comment.
Thanks
It’s May. Singularity!
I can’t help suspecting that Eliezer’s suggestion that no one should talk about AI, technological singularities, etc., until May was motivated not only by wanting to keep LW undistracted until then, but also by wanting to encourage discussion of those topics afterwards. (He has been quite open about his hope that all his writings on rationality, etc., will have the effect of increasing support for SIAI.)
Whether that’s a reason for talking about those topics (yay, Eliezer will be pleased, and Eliezer is a good chap) or a reason for not talking about them (eeew, he’s trying to manipulate us) will doubtless vary from person to person.
I think it was so that newcomers wouldn’t think that LW is a bunch of fringe technophiles who just want to have their cause associated with rationality.
But wouldn’t the site’s earliest days be the time of least newcomers?
But that’s pretty much what LW is, no? I’ve long suspected that “rationality,” as discussed here, was a bit of a ruse designed to insinuate a (misleading) necessary connection between being rational and supporting transhumanist ideals.
But that’s pretty much what Christianity is, no? I’ve long suspected that “faith,” as discussed there, was a bit of a ruse designed to insinuate a necessary connection between eating other animals and supporting childish ideals.
I get that you’re being sarcastic, but I’m not sure what you’re driving at.
I get that you’re being annoying, but I’m not sure what you’re aiming at.