Strongly suspect more of us should be taking this advice.
This is a super-duper nice comment.
Most of the really horrible things in this world are deontic violations, like tyranny and genocide.
Disagree. Most of the really horrible things in this world are just accidents that not enough people are paying attention to. If animals can suffer, then millions of Holocausts are happening every day. If insects can suffer, then tens of billions are. In any case, humans can certainly suffer, and they’re doing plenty of that from pure accident. Probably less than a twentieth of human suffering is intentionally caused by other humans. (Though I will say that the absolute magnitude of human-intent-caused human suffering is unbelievably huge.)
will you ever forgive yourself after this realization?
Yeah, okay, I get it; I give up. No more trying to post quotes that make people think.
Any belief that I can never be talked out of (given how the human mind works, probably most beliefs we have are like this actually)
I suspect that with enough resources you could be talked out of any of your beliefs. Oh, sure, it would take a lot of time, planning, and manpower (and probably some people you approve of who already hold the beliefs we’d want to indoctrinate you with). You’re not actually 100% certain that you’re 100% certain that 0 and 1 are probabilities.
The trouble with treating 0 or 1 as a probability is that doing so is exactly equivalent to having an infinite amount of evidence, which is impossible by the laws of thermodynamics; minds exist within physics.
Furthermore, a feeling of absolute certainty isn’t even a number, much less a probability.
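A minimal sketch of the math behind this (my framing, not the comment’s): write beliefs in log-odds, where Bayesian updating adds a finite log-likelihood ratio per observation, and probability 1 sits at infinite log-odds.

```latex
% Bayes' theorem in log-odds form: each observation e shifts the
% log-odds by a finite log-likelihood ratio.
\[
\log\frac{P(H \mid e)}{P(\lnot H \mid e)}
  = \log\frac{P(H)}{P(\lnot H)}
  + \log\frac{P(e \mid H)}{P(e \mid \lnot H)}
\]
% Probability 1 corresponds to log-odds of +infinity, so reaching
% it would take infinitely many finite updates:
\[
\lim_{p \to 1^-} \log\frac{p}{1-p} = +\infty
\]
```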
You’re supposed to post things you actually believe, you know! What are you, a spirit-of-the-game violator?
Upvoted in furious, happy disagreement, because I was going to post this very thing, with a confidence level of 20%, but then I reasoned out that this was unbelievably stupid: the probability that infinite Bayesian evidence is impossible should be the same as the probability for other things we have very strong reason to believe: 1 - epsilon.
Upvoted in disagreement. The trend of moral progress has been one of less acceptance of violence, less acceptance of nonconsensual interaction, less victim-blaming, and less standing by while terrible things happen to others (or at least looking indignant at past instances of this).
This leads to a falsifiable prediction. In the next one to four centuries, vegetarianism will increase to a majority, jails will be seen as unnecessarily, brutally, unjustifiably harsh, “the poor” will be less of an Acceptable Target (cf. delusions that they are “just lazy” and so on), and the present generation will be condemned for being so terrible at donating in general and at donating to the right causes in particular. If all of those things happen, moral progress will have been flat-out confirmed.
I think this was a legitimate use of “by definition”, since it’s the definition we use on this website. You’re right that “rational” has often meant “blindly crunching numbers without looking at all available information &c.” but I thought we had a widespread agreement here not to use the word like that.
You’re right that my response seems excessive, but I don’t know if it actually is excessive rather than merely seeming so.
You’re right. I assumed symmetry, which was wrong.
There are many problems here.
At the end of the second paragraph, and again after the other examples, you say
This exactly mirrors the Prisoner’s Dilemma.
But it doesn’t, as you yourself point out later in the post. Because of reputational effects, the payoff ordering isn’t D-C > C-C > D-D > C-D but rather C-C > D-C, and a game where mutual cooperation beats unilateral defection is not a prisoner’s dilemma. “Prisoner’s dilemma” is a very specific term, and you are inflating it.
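To pin down the term, here is a minimal sketch (my illustration, not the post’s) of the ordering a symmetric payoff matrix must satisfy to count as a prisoner’s dilemma:

```python
def is_prisoners_dilemma(T, R, P, S):
    """Standard ordering for a symmetric Prisoner's Dilemma:
    T (defecting on a cooperator) > R (mutual cooperation)
    > P (mutual defection) > S (cooperating with a defector)."""
    return T > R > P > S

print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))  # True: the canonical game
# Once reputational effects make mutual cooperation pay better than
# unilateral defection (R > T), the game is no longer a PD:
print(is_prisoners_dilemma(T=3, R=5, P=1, S=0))  # False
```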
evolution is also strongly motivated [...] evolution will certainly take note.
I doubt that quite strongly!
The evolutionarily dominant strategy is commonly called “Tit-for-tat”—basically, cooperate if and only if you expect your opponent to do so.
That is not tit-for-tat! Tit-for-tat is: cooperate on the first move, then parrot the opponent’s previous move. It does not do what it “expects” the opponent to do. Furthermore, if you categorically expect your opponent to cooperate, you should defect (just as you should if you expect him to defect). You only cooperate if you expect your opponent to cooperate if he expects you to cooperate, ad nauseam.
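For reference, here is the whole strategy as a minimal sketch (function name mine):

```python
def tit_for_tat(opponent_history):
    """Tit-for-tat: cooperate on the first move, then copy the
    opponent's previous move. It predicts nothing; it only reacts."""
    if not opponent_history:
        return "C"               # start by cooperating
    return opponent_history[-1]  # parrot the opponent's last move

# Against an opponent who plays D, C, C it answers C, D, C:
opponent_history = []
for opponent_move in ["D", "C", "C"]:
    print(tit_for_tat(opponent_history))
    opponent_history.append(opponent_move)
```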
This so-called “superrationality” appears even more [...]
That is not superrationality! Superrationality achieves cooperation by reasoning that you and your opponent will get the same result for the same reasons, so you should cooperate in order to logically bind the outcome to C-C (since C-C and D-D are the only two options). What is with all this misuse of terminology? You write as if the agents in the post’s examples are using causal decision theory (which defects all the time no matter what) and then bring up elements that cannot possibly be implemented in causal decision theory, and it grinds my gears!
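To make the contrast concrete, a minimal sketch (mine, not the post’s) of the superrational argument: two provably identical reasoners will make the same choice, so the only reachable outcomes are C-C and D-D, and you pick the better symmetric payoff.

```python
def superrational_choice(R, P):
    """Hofstadter-style superrationality: since both identical agents
    will reason to the same move, only the symmetric outcomes C-C
    (payoff R each) and D-D (payoff P each) are possible. Cooperate
    iff mutual cooperation pays more than mutual defection."""
    return "C" if R > P else "D"

print(superrational_choice(R=3, P=1))  # "C"
```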
And if two people with these sorts of emotional hangups play the Prisoner’s Dilemma together, they’ll end up cooperating on all hundred crimes, getting out of jail in a mere century and leaving rational utility maximizers to sit back and wonder how they did it.
This is in direct violation of one of the themes of Less Wrong. If “rational expected utility maximizers” are doing worse than agents with “irrational emotional hangups”, then you’re using the wrong definition of “rational”. You do this throughout the post, and it’s especially jarring because you are (or were) one of the best writers on this website.
playing as a “rational economic agent” gets you a bad result
9_9
[...] anger makes us irrational. But this is the good kind of irrationality [...]
“The good kind of irrationality” is like “the good kind of bad thing”. An oxymoron, by definition.
[...] if we’re playing an Ultimatum Game against a human, and that human precommits to rejecting any offer less than 50-50, we’re much more likely to believe her than if we were playing against a rational utility-maximizing agent
Bullshit. A rational agent is going to do what works. We know this because we stipulated that it was rational. If you mean “stupid number-crunching robot that misses obvious details like how to play ultimatum games”, then sure, it might do as you describe. But don’t call it “rational”.
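To unpack “do what works”: against a responder whose 50-50 precommitment is credible, offering anything less yields nothing, so meeting the threshold is the rational play. A minimal sketch under those assumptions (names and numbers mine):

```python
def proposer_best_offer(responder_threshold, pot=100):
    """If the responder credibly rejects any offer below her threshold,
    the proposer gets (pot - offer) when the offer meets it and 0
    otherwise, so the best response is to offer exactly the threshold."""
    return max(range(pot + 1),
               key=lambda offer: (pot - offer) if offer >= responder_threshold else 0)

print(proposer_best_offer(50))  # 50: conceding to the precommitment "works"
```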
It is distasteful and a little bit contradictory to the spirit of rationality to believe it should lose out so badly to simple emotion, and the problem might be correctable.
You think?
Downvoted.
Anyone else who has registered please say so publicly in the comments as well.
Okay. +1 registration
[...] what is the necessity, nay, the justification for parties existing in this day and age?
It’s a good question. The answer is “none, because people are crazy and the world is mad”.
The expert-at vs. expert-on distinction severely weakens this meta-advice. See also Unteachable Excellence.
Since this is an introduction, you should emphasize that the prisoner’s dilemma is about utilons and not years in jail or dollars or whatever. This is usually approximated by indicating that the two agents are perfectly self-interested and don’t care about the other player at all, but people tend to reject this for one reason or another. In any case that is the point of Eliezer’s essay The True Prisoner’s Dilemma, so you could link to that as well.
Decided to behave in a different way in some set of situations
I downloaded seven metabooks and I’m going to read all of them, regardless of their quality. I’m pretty sure I’m bad at reading, never having “practiced” it, and I’m certain that I do not read quickly enough or recall enough. So I hope these books will fix that problem. I may make a discussion post after this experience, explaining which ones I thought were helpful and which ones I thought were not.
Obtained new evidence that made you change your mind about some belief
Mitchell Porter has decreased my confidence in the computational theory of mind, but not so much that I think it’s wrong. This actually happened a while back. I’ve also changed my mind about something else, which I won’t publicly reveal.
For my areas of expertise [...] Philosophy
I find it somewhat difficult to believe that you’ve maxed out the knowledge of philosophy obtainable from audiobooks! But only somewhat; perhaps you will inform me that there is simply a terrible drought of philosophy audiobooks.
There is widespread rumor that the free days of finding good books, audiobooks, movies, and music on the internet are counted.
This comment is oddly prescient. Yesterday, I downloaded several books from libgen.info. Today, when I attempted to download a few more, I found that libgen.info was gone. library.nu and libgen.info were the best, or perhaps the only, decent book-pirating websites. Now there is basically nowhere to get free pirated books online.
Edit: And now I find that this comment is premature; libgen.info is back up. Shocking scare, though.
Edit: And now I see that previous edit was premature; libgen.info is down again.
Edit: It is back up. I give up.
The cia.gov link leads to a redirect.
The linked article is a complete waste of time, as the authors don’t bother to explain what the extortionate strategy is, only insist that it turns the game into an ultimatum. And the title must be a lie, since halfway through, the article explicitly says TFT gets the same score as its opponent. (In other words, it doesn’t get “beat” by anything.) So the parts of the article that are true are useless, the parts that are supposedly interesting are asserted without explanation, and the title is certainly false. Downvoted.
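The “same score” point is easy to check empirically. A minimal sketch (mine, not the article’s): in an iterated PD, TFT’s total never differs from its opponent’s by more than a single round’s temptation-sucker gap, because every unreciprocated defection is answered on the next move.

```python
import random

# Standard PD payoffs as (row player, column player):
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def random_player(opponent_history):
    return random.choice(["C", "D"])

def play(rounds=200):
    tft_total = opp_total = 0
    tft_history, opp_history = [], []
    for _ in range(rounds):
        a = tit_for_tat(opp_history)
        b = random_player(tft_history)
        pa, pb = PAYOFFS[(a, b)]
        tft_total += pa
        opp_total += pb
        tft_history.append(a)
        opp_history.append(b)
    return tft_total, opp_total

tft, opp = play()
print(tft, opp, abs(tft - opp))  # the difference never exceeds 5 (= 5 - 0)
```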
Your confidence is much higher than marchdown’s. You should have upvoted because you think he’s underconfident. Mind you, I upvoted it myself, because:
is definitely not true for me, as when I learn that I am subtly and subconsciously manipulating people, I stop doing it. And when I learn some trick to make people agree with me, I make sure I don’t do it.