Rationality Quotes April 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
No more than 5 quotes per person per monthly thread, please.
-- Scott Aaronson
Holy Belldandy, it sounds like someone located the player character. Everyone get your quests ready!
Woah, I’d better implement Phase One of my evil plan if it’s going to be ready in time for the hero to encounter it.
Omg, what do I do?! I can’t find my random encounter table!
Just act life-like!
Don’t worry, you are the random encounter.
And rewards.
???
Someone located his inner d2!
My bet is that the student had many digits of pi memorised and just used their parity.
I would have easily won that game (and maybe made a quip about free will when asked how...). All you need is some memorized secret randomness. For example, a randomly generated password that you’ve memorized, but you’d have to figure out how to convert it to bits on the fly.
Personally I’d recommend going to random.org, generating a few hexadecimal bytes (which are pretty easy to convert to both bits and numbers in any desired range), memorizing them, and keeping them secret. Then you’ll always be able to act unpredictably.
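For concreteness, a minimal Python sketch of the conversion step described above (the hex string is a made-up example, not a value anyone should actually use):

    # Convert a memorized hex string into bits, or into a number in a range.
    secret = "a3f"  # hypothetical memorized value; keep yours secret

    # Each hex digit expands to exactly 4 bits.
    bits = "".join(format(int(d, 16), "04b") for d in secret)
    print(bits)  # 101000111111

    # For a number in [0, n): reduce the whole value mod n.
    # (This carries a slight modulo bias unless n divides 16**len(secret).)
    n = 6
    print(int(secret, 16) % n)  # 2623 % 6 == 1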
Well, unpredictably to a computer program. If you want to be able to be unpredictable to someone who’s good at reading your next move from your face, you would need some way to not know your next move before making it. One way would be to run something like an algorithm that generates the binary expansion of pi in your head, and delaying calculating the next bit until the best moment. Of course, you wouldn’t actually choose pi, but something less well-known and preferably easier to calculate. I don’t know any such algorithms, and I guess if anyone knows a good one, they’re not likely to share. But if it was something like a pseudorandom bitstream generator that takes a seed, it could be shared, as long as you didn’t share your seed. If anyone’s thought about this in more depth and is willing to share, I’m interested.
http://blog.yunwilliamyu.net/2011/08/14/mindhack-mental-math-pseudo-random-number-generators/
That’s awesome, thanks.
Awesome. I tried doing that when I was a child but naturally failed.
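For readers who don't want to click through, the general shape is a multiply-and-reduce generator you can iterate in your head. A rough Python sketch; the function name and the parameters below are my own illustrative stand-ins, not necessarily what the linked post recommends:

    def mental_prng_bits(seed, p=61, m=97, n=16):
        # Iterate x -> (p * x) mod m, emitting the parity of each state
        # as one pseudo-random bit. Pick a seed that is nonzero mod m.
        x = seed % m
        out = []
        for _ in range(n):
            x = (p * x) % m
            out.append(x % 2)
        return out

    print(mental_prng_bits(seed=42))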
When I need this I just look at the nearest object. If the first letter is between a and m, that’s a 0. If it’s between n and z, that’s a 1. For larger strings of random bits, take a piece of memorized text (like a song you like) and do this with the first letter of each word.
There’s an easier way: look at the time.
Seconds are even? Type ‘f’. Odd? Type ‘d’. (Or vice-versa. Or use minutes, if you don’t have to do this very often.)
A while ago there was an article (in NYTimes online, I think) about a program that could beat anyone in Rock-Paper-Scissors. That is, it would take a few iterations, and learn your pattern, and do better than chance against you.
It never got any better than chance against me, because I just used the current time as a PRNG.
Edit: Found it. http://www.nytimes.com/interactive/science/rock-paper-scissors.html?_r=0
Edit2: Over 25 rounds, 12-6-7 (win-loss-tie) vs. the “veteran” computer. Try it and post your results! :)
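For what it's worth, the clock trick above amounts to something like this (a sketch of the idea, not necessarily the parent's exact mapping):

    import time

    # Map the current second to a throw: unpredictable to the program,
    # though not to an opponent who can see your watch.
    moves = ["rock", "paper", "scissors"]
    print(moves[int(time.time()) % 3])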
Over 12 rounds against the veteran computer, I managed 5-4-3, just trying to play “counterintuitively” and play differently from how I expected the players whose information it aggregated would play.
Not enough repetitions to be highly confident that I could beat the computer in the long term, but I stopped because trying to be that counterintuitive is a pain.
Got 7-6-7 with the same tactic. Apparently the computer only looks at the last 4 throws, so as long as you’re playing against Veteran (where your own rounds will be lost in the noise), it should be possible for a human to learn “anti-anti-patterns” and do better than chance.
14-11-14 over 39 rounds using gwern’s linked prng (p=69, m=6, seed=minutes+seconds). Yet another cool trick to impress psychology professors!
I got 8-9-7 over 25 rounds (which seems approximately as good as chance) while trying to be smart (and not using any source of randomness).
Edit: I guess this was actually 24 rounds.
19-18-13 over 50 rounds against the veteran, without using any external RNG, by looking away and thinking of something else so that I couldn’t remember the results of previous rounds. (My after-lunch drowsiness probably helped.)
10-5-10 against veteran by trying to predict the computer and occasionally changing levels of recursion.
Second try: 14-16-15 by trying to act randomly (without consciously using an algorithm).
9-6-10 here out of 25 rounds, using current time. :(
I remember doing way better than this a few months ago, just by playing naturally. Gonna blame sample size...
Somehow managed 16-8-5 versus the veteran computer by using the article’s own text as a seed (“Computers mimic human reasoning by building on simple rules...”) and applying a-h = rock, i-p = paper, q-z = scissors. I think this is the technique I will use against humans (I know a few people I would love to see flail against pseudo-randomness).
That should fail in the long run because it’s unlikely that the frequency of letters in English divides so evenly that those rules make each choice converge to happening exactly 1/3 of the time.
I’d just generate the random numbers in my head. A useful thing to do is to pick a couple of numbers from thin air (which doesn’t work by itself, because the human mind isn’t good at picking ‘random’ numbers from thin air), then add them together and take the last digit (or, if you want 3 choices, take the sum mod 3).
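The reason summing two picks helps is that the biases of independent picks partially cancel. A quick simulation, with an invented bias profile (the weights below are made up for illustration):

    import random
    from collections import Counter

    # Hypothetical human bias: over-picking 7, under-picking 0 and 5.
    digits = list(range(10))
    weights = [4, 8, 10, 13, 11, 6, 10, 18, 12, 8]

    def pick():
        return random.choices(digits, weights=weights)[0]

    trials = 100_000
    one = Counter(pick() % 3 for _ in range(trials))
    two = Counter((pick() + pick()) % 3 for _ in range(trials))
    print("one pick mod 3: ", one)   # visibly skewed (about 35/37/28)
    print("sum of two mod 3:", two)  # much closer to uniform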
That’ll be almost independent but not unbiased: I think that a-m will be more frequent than n-z. However, you could do the von Neumann trick: if you have an unfair coin and want a fair sequence of bits, take the first and second flips. HT is 0, TH is 1, and if you get HH or TT, check the third and fourth flips. Etc.
I just looked up the letter frequencies and it’s 52% for a-m and 48% for n-z (for the initial letters of English words). Using ‘l’ instead of ‘m’ gives a 47/53 split, so ‘m’ is at least the best letter to use.
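A minimal sketch of the von Neumann trick described above, using that 52/48 initial-letter split as the bias:

    import random

    def biased_bit(p_one=0.48):
        # One draw from a biased source, e.g. the a-m vs. n-z letter test.
        return 1 if random.random() < p_one else 0

    def fair_bit(p_one=0.48):
        # Draw bits in pairs: 01 -> 0, 10 -> 1, discard 00 and 11.
        # P(0 then 1) equals P(1 then 0), so the output is unbiased.
        while True:
            a, b = biased_bit(p_one), biased_bit(p_one)
            if a != b:
                return a

    print([fair_bit() for _ in range(16)])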
[Aside] When do you need to generate random numbers in your head? I can think of literally no time when I’ve needed to.
If you have to make a close decision and don’t have a coin to flip. Or at a poker tournament if you don’t trust your own ability to be unpredictable.
There once was some site that let you enter a sequence of “H” and “T” and test it for non-randomness (e.g. the distribution of the length of runs, the number of alternations, etc.), and after a couple attempts I managed to pass all or almost all the tests a few times in a row.
Winston Rowntree, Non-Bullshit Fables
I’ve always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.
http://abstrusegoose.com/494
Link.
On the meta-level, I’m not sure “quickness beats persistence” is a helpful lesson to teach. At the scale of things many LessWrongers would hope to help accomplish, both qualities are prerequisites, and it would be a mistake to believe that you don’t have to worry about the latter just because you’re one of the millions of people who are 99.9th percentile at the former.
On the base level, a non-bullshit version of this fable would look more like “There once was a hare being passed by a tortoise. Neither of them could talk. The end.”
Now that you mention it, a fable, by definition, requires bullshit.
“Moral: life is inarguably a depressingly unfair endeavor.”
FTFY:
What’s unfair about that quote? The faster one did win. This would exemplify your moral.
“Fairness” depends entirely on what you condition on. Conditional on the hare being better at racing, you could say it’s fair that the hare wins. But why does the hare get to be better at racing in the first place?
Debates about what is and isn’t fair are best framed as debates over what to condition on, because that’s where most of the disagreement lies. (As is the case here, I suppose).
The quote is the next line from the quote source.
Huh, okay.
On a similar note, there’s http://www.thisamericanlife.org/radio-archives/episode/463/transcript—search for “Act Two”.
http://www.quickmeme.com/meme/3t0l49/
Sorry, saw it earlier today and couldn’t resist.
“The peril of arguing with you is forgetting to argue with myself. Don’t make me convince you: I don’t want to believe that much.”
Even More Aphorisms and Ten-Second Essays from Vectors 3.0, James Richardson
The others are quite nice too: http://www.theliteraryreview.org/WordPress/tlr-poetry/
That link is now broken. It turns out it was a highly incomplete excerpt from “Vectors 3.0” so I’ve put By the Numbers on Libgen and put up a complete version taken from the book. (I like some of the aphorisms, so I’ve ordered the other 2 books to scan as well.)
--Pirates of the Caribbean
The pirate-specific stuff is a bit extraneous, but I’ve always thought this scene neatly captured the virtue of cold, calculating practicality. Not that “fairness” is never important to worry about, but when you’re faced with a problem, do you care more about solving it, or arguing that your situation isn’t fair? What can you do, and what can’t you do? Reminds me of What do I want? What do I have? How can I best use the latter to get the former?
That said, if I recognize that I’m in a group that values “fairness” as an abstract virtue, then arguing that my situation isn’t fair is often a useful way of solving my problem by recruiting alliances.
If you’re in a group where “that’s not fair” is frequently a winning argument, you may already be in trouble.
I am in many groups where, when choosing between two strategies A and B, fairness is one of the things we take into account. I’m not sure that’s a problem.
If it’s a frequently-occurring observation within the group then yes, there seems to be something wrong. Possibly because things are regularly proposed and acted on without considering fairness until someone has to point it out.
If it hardly ever has to be said, but when pointed out, it is often persuasive, you’re probably OK.
Even more generally, it can be taken as a paraphrasing of the Litany of Gendlin.
Frankly this is precisely the kind of ruthless pragmatism that gives utilitarians such a horrible reputation.
Well, it certainly didn’t stop Jack Sparrow from being a beloved character.
You can be ruthless and popular, if you’re sufficiently charismatic about it.
It also helps to be fictional, or at least sufficiently removed from the target audience that they perceive you in far mode.
I’d say that it’s possible to be ruthless and popular even among people who’re familiar with you, as long as you keep your ruthlessness in far mode for the people you’re attempting to cultivate popularity amongst. Business executives come to mind, and the more cutthroat strains of social maneuverers.
Dunno mate, I could name a few US Presidents and non-US leaders.
Mmm, that’s a good point.
Potentially—if people know you’re going to play according to a higher rule or purpose, rather than following feelings, then how much are they going to trust that you’re really going to exercise that rule on their behalf?
It’d be like the old argument that people should be allowed to kidnap people off the streets and take their organs—because when you average it out any individual is more likely to need an organ than be the one kidnapped so it’s the better gamble for everyone to make. But we don’t really imagine it that way, we all see ourselves being the ones dragged off the street and cut up, or that people with unpopular political opinions would be the ones… You can’t trust someone who’d come up with that sort of system not to be playing a different game because they’ve already shown you can’t trust their compassionate feelings to work as bounds on their actions. Maybe any friendship they express means as little to them as the poor guy they just butchered.
I wonder how much of it is a trust problem though, and how you’d resolve that. It seems to me that if you knew someone really well, or they didn’t seem to be grasping power, they could get away with being ruthless. People seem almost to gloat about how ruthless specops folks and the like are.
My impression is that whistle-blowers tend not to be trusted. It’s not as though other businesses line up to hire them.
I think the problem is having moral systems which impose high local costs.
-Allen Knutson on collaborating with Terence Tao
At that point I’d start wondering why there doesn’t appear to be a simple proof. For example, maybe some kind of generalization of the result is false and you need the complexity to “break the correspondence” with the generalization.
(meta)
Saith the linked site: “You must sign in to read answers past the first one.”
Well, that’s obnoxious.
If it’s any consolation, none of the answers past the first one on this question are very good.
Well, there are only 2
-Same place
Daniel Kahneman, Thinking, Fast and Slow
As far as I can tell this doesn’t agree with my experience; a good chunk of every day is spent in groping uncertainty and confusion.
Come and take my herb?
Those moments send me into panic attacks. (At least when they’re on significant topics, not on maths.)
Math is a significant topic!
*Topics where my inability to work out the answer immediately implies a lack of ability or puts me at risk.
Unless you took John Leslie’s advice and Ankified the multiplication table up to 25.
I’ve read your link to John Leslie with both curiosity and bafflement.
17 x 24 is not perhaps the best example of a question for which no answer comes immediately to mind. Seventeen has the curious property that 17 x 6 = 102. (The recurring decimal 1/6 = 0.166666… hints to us that 17 x 6 = 102 is just the first of a series of near misses on a round number: 167 x 6 = 1002, 1667 x 6 = 10002, etc.) So multiplying 17 by any small multiple of 6 is no harder than the two times table. In particular 17 x 24 = 17 x (6 x 4) = (17 x 6) x 4 = 102 x 4 = 408.
17 x 23 might have served better, were it not for the curious symmetry around the number 20, with 17 = 20 − 3 while 23 = 20 + 3. One is reminded of the identity (x + y)(x − y) = x^2 − y^2, which is often useful in arithmetic and tells us at once that 17 x 23 = 20 x 20 − 3 x 3 = 400 − 9 = 391.
17 x 25 has a different defect as an example, because one can hardly avoid apprehending 25 as one quarter of 100, which stimulates the observation that 17 = 16 + 1 and 16 is full of yummy fourness. 17 x 25 = (16 + 1) x 25 = (4 x 4 + 1) x 25 = 4 x 4 x 25 + 1 x 25 = 4 x 100 + 25 = 425.
17 x 26 is a better example. Nature has its little jokes. 7 x 3 = 21, therefore 17 x 13 = (1 + 7) x (1 + 3) = (1 + 1) + 7 x 3 = 2 + 21 = 221. We get the correct answer by outrageously bogus reasoning. And we are surely puzzled. Why does 21 show up in 17 x 13? Aren’t larger products always messed up and nasty? (This is connected to 7 + 3 = 10.) Anyone who is in on the joke will immediately say 17 x 26 = 17 x (13 x 2) = (17 x 13) x 2 = 221 x 2 = 442. But few people are.
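For anyone not in on the joke, the identity behind it (my gloss, in the notation used above): when both tens digits are 1 and the units digits a and b sum to 10,

    (10 + a) x (10 + b) = 100 + 10 x (a + b) + a x b = 200 + a x b

so the answer is a 2 followed by the two-digit product a x b, e.g. 17 x 13 = 200 + 7 x 3 = 221.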
Some people advocate cultivating a friendship with the integers. Learning the multiplication table, up to 25 times 25, by the means exemplified above, is part of what they mean by this.
Others, full of sullen resentment at the practical usefulness of arithmetic, advocate memorizing one’s times tables by the grimly efficient deployment of general purpose techniques of rote memorization, such as the Anki deck. But who in this second camp sees any need to go beyond ten times ten?
Does John Leslie have a foot in both camps? Does he set the twenty-five times table as the goal and also indicate rote memorization as the means?
I’m not sure exactly what he had in mind, but learning the multiplication tables using Anki isn’t exactly rote.
Now, this may not be the case for others, but when I see a new problem like 17 x 24, I don’t just keep reading off the answer until I remember it when the note comes back around. Instead, I try to answer it using mental arithmetic, no matter how long it takes. I do this by breaking the problem into easier problems (perhaps by multiplying 17 x 20 and then adding that to 17 x 4). Sooner or later my brain will simply present the answers to the intermediate steps for me to add together and only much later do those steps fade away completely and the final answer is immediately retrievable.
Doing things this way, simply as a matter of course, you develop somewhat of a feel for how certain numbers multiply and develop a kind of “friendship with the integers.” Er, at least, that’s what it feels like from the inside.
That’s not the important point. Even if you have, you will still face the same problem when facing a question like, say, 34 × 57 = ?. The quote was using that particular problem as an example. If that example does not apply to you because you Ankified the multiplication table up to 25, or for any other reason, it is trivial to find another problem that gives the desired mental response. (As I just did with the 34 × 57 problem.)
Agreed. I’m not so much disagreeing with the thrust of the quote as nitpicking in order to engage in propaganda for my favorite SRS.
Of course, even if I have no complete answer to 34 × 57, I still have “intuitive feelings and opinions” about it, and so do you. For example, I know it’s between 100 and 10000 just by counting the digits, and although I’ve just now gone and formalized this intuition, it was there before the math: if I claimed that 34 × 57 = 218508 then I’m sure most people here would call me out long before doing the calculation.
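Spelled out, the digit-counting bound alluded to above goes something like this:

    10 <= 34 <= 99 and 10 <= 57 <= 99
    => 10 x 10 <= 34 x 57 <= 99 x 99
    => 100 <= 34 x 57 <= 9801 < 10000

so any claimed six-digit answer like 218508 can be rejected on sight.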
What has this got to do with the original quote? The quote was claiming, truthfully or not, that when one is first presented with a certain type of problem, one is dumbfounded for a period of time. And of course the problem is solvable, and of course even without calculating it you can get a rough picture of the range the answer is in, and with a certain amount of practice one can avoid the dumbfoundedness altogether and move on to solving the problem, and that is a fine response to give to the original quote, but it has no relevance to what I was saying.
All I was saying is that it is invalid to object to the quote on the grounds that, with a certain technique, the specific example given by the quote can be avoided, as that example could easily have been replaced by a similar example which that technique does not solve. I was talking about that specific objection; I was not saying the quote is perfect, or even that it is entirely right. You may raise these other objections to it. But the specific objection that Jayson_Virissimo raised happens to be entirely invalid.
I wasn’t trying to contradict you. Try reading my comment again without the “No, you’re wrong, and here’s why” you seem to have imagined attached to the beginning.
Oh god. Everyone stop talking.
I’m a little perplexed that I haven’t got the multiplication table up to 25 memorized, given the number of times I’ve multiplied any two numbers under 25.
I’m curious—what advantage do you get from this?
So far, mostly the ability to perform entertaining parlor tricks (via mental arithmetic and a large body of facts about the countries of the world). I admit, it is not very impressive, but not useless either. In other words, nothing you couldn’t do in a few minutes with a smartphone (although, I imagine, that would tend to ruin the “trick”).
--Chris Brogan on the Sunk Cost Fallacy
If there is another one next door, maybe. If it is much farther than that the menu would have to be fairly bad.
… if there is a sufficiently convenient alternative and the difference is significant.
I think you are using settle in its more precise meaning (i.e. release a legal claim), which is not consistent with the colloquial usage. Colloquially, “settle” is often used as the antonym of “take reasonable risks.”
Similarly, I think the difference between “don’t like the menu” and “fairly bad” is hairsplitting for someone who would find this level and type of advice useful. In just about any city, the BATNA is “travel to another place to eat, getting no further from your home than you were at the first place.” And that’s a pretty good alternative. I think the quote correctly asserts that the alternative is underrated.
While I assert that the quote advocates premature optimization. It distracts from actual cases of the sunk cost fallacy by warning against things that often just are not worth fixing.
-- Isaac Asimov
For some interesting exceptions to this quote, see Bostrom on Information Hazards.
This may not be strictly true. Consider the basilisk.
I have, and I’ve come to the conclusion that Eliezer’s solution, i.e., suppress all knowledge of it, won’t actually work.
Agreed, but I think the exceptions are very few.
― David Lamb & Susan M. Easton, Multiple Discovery: The pattern of scientific progress, pp. 100-101
Columbus’s “genius” was using the largest estimate for the size of Eurasia and the smallest estimate for the size of the world to make the numbers say what he wanted them to. As normally happens with that sort of thing, he was dead wrong. But he got lucky and it turned out there was another continent there.
Wait… he did that on purpose?
Yes, actually. He believed the true dimensions of the Earth would conform to his interpretation of a particular Bible verse (two-thirds of the earth should be land and one-third water, so the ocean had to be smaller than believed) and fudged the numbers to fit.
Ah, OK. I had taken DanielLC to be implying that he had fudged the numbers in order to convince the Spanish queen to fund him.
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.
The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.
Perhaps Columbus’s “genius” was simply to take action. I’ve noticed this in executives and higher-ranking military officers I’ve met—they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn’t, but the success rate is a lot better than for people who never take action at all.
Executives and higher ranking military officers also happen to have the power to enforce their decisions. Making decisions and acting on them can be possible without that power but the political skill required is far greater, the rewards lower, the risks of failure greater and the risks of success non-negligible.
This is how Scott Sumner describes his own work in macroeconomics and NGDP targeting. Others see it as radical and innovative; he thinks he is just taking the standard theories seriously.
From Boswell’s Life of Johnson. HT to a commenter on the West Hunter blog.
If each person counts as one for each time he dines, Alexander can only claim to have personally hosted the guests at his most recent meal; the others were guests of someone else.
I think the idea is that all of the people are him.
quick math
I used to dine with 1460 people a year in my home, reckoning each person as one each time I dined there.
Families of four are mighty terrifying, aren’t they?
Oooh. That explains a lot...
-Paul Graham
--Paul Graham, same essay
Same essay.
Source
Thank you!
-- Scott Aaronson on areas of expertise
If the atheists want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the theology community, bishops, the Pope, pastors, denominational and non-denominational bodies, etc., with superior research and arguments.
-- Scott Aaronson in the next paragraph
Not that I don’t think this is a fair counterpoint to make, but in my own experience trying to find the best arguments for religion, I learned a lot more and got better reasoning talking to random laypeople than by asking priests and theologians.
Of course, the fact that I talked to a lot more laypeople than priests and theologians is most likely the determining factor here, but my experiences discussing the nature and details of climate change have not followed a similar pattern at all.
Just so I’m clear: do you believe the theology community (“bishops, the Pope, pastors, denominational and non-denominational bodies, etc.”) is as reliable an authority on the nature and existence of the thing atheists don’t believe in as the academic climatology community is on the nature and existence of the thing climate skeptics don’t believe in?
If so, then this makes perfect sense.
That said, my experience with both groups doesn’t justify such a belief.
The analogy doesn’t cohere. Nobody denies that climate exists; they disagree on what it is doing.
I agree that nobody denies climate exists, but I think that’s irrelevant to the question at hand.
To clarify that a bit… Aaronson asserted a relationship between “climate skeptics” and “the academic climatology community” with respect to some concept X which climate skeptics deny exists. We could get into a whole discussion about what exactly X is (it certainly isn’t climate), but rather than go down that road I simply referred to it as “the thing climate skeptics don’t believe in.”
Eugine_Nier asserted a relationship between “atheists” and “the theology community” with respect to some concept Y which atheists deny exists. We could similarly get into a whole discussion about what exactly Y is, but rather than go down that road I simply referred to it as “the thing atheists don’t believe in.”
If the theology community is in the same relationship to Y as the academic climatology community is to X, then the analogy holds.
I just don’t believe that the theology community is in that relationship to Y.
I believe Eugine_Nier is suggesting not that theology community is in the same relationship to Y as the academic climatology community is to X, but the reverse.
(nods) Yup. It’s the opposite of what he said, but he could easily have been speaking ironically.
Well, no. You’re an atheist. I’m sure a Christian climate skeptic would agree with you, with the terms reversed.
That is, a Christian climate skeptic would claim that their experience with both groups doesn’t justify the belief that the academic climatology community is as reliable an authority as the theology community?
In a trivial sense I agree with you, in that there’s all sorts of tribal signaling effects going on, but not if I assume honest discussion. In my experience, strongly identified Christians believe that most theologians are unreliable authorities on the nature of God.
Indeed, it would be hard for them to believe otherwise, since most theologians don’t consider Jesus Christ to have been uniquely divine.
Of course, if we implicitly restrict “the theology community” to “the Christian theology community,” as many Americans seem to, then you’re probably right for sufficiently narrow definitions of “Christian”.
Hmm, interesting point. At a guess, I’d say there probably is more disagreement among theologians than climatologists, so there does seem to be some asymmetry there.
On the other hand, if God is analogous to Global Warming (or whatever) then I suppose the analogy for those disputed details might be predictions of how soon we’ll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is (including “there isn’t one”.) So there’s that.
If “God” refers to what theologians and atheists disagree about, and “Global Warming” refers to what climatologists and climate skeptics disagree about, then sure. I’d be cautious of assuming we agree on what those labels properly refer to more broadly, though.
Well, OK. Using that analogy, I guess I would say that if climatologists disagreed with each other about Global Warming as widely as theologians disagree with each other about God, I would not consider climatologists any more reliable a source of predictions of how soon we’ll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is, than I consider theologians reliable as a source of predictions about God.
Yup. Hence the “or whatever”.
The point, of course, is that while they may disagree about the details, they all agree on the existence of the thing in question. Although TBH climatologists do seem to have more consensus than theologians.
It is not clear to me how to distinguish between “Christian, Buddhist, and Wiccan theologians agree on the existence of God but disagree on the details of God” and “Christian, Buddhist, and Wiccan theologians disagree on whether God exists”.
This is almost entirely due to a lack of clarity about what “God” refers to.
Well, Buddhist, and Wiccan theologians are in a minority compared to Christian, Hindu, Deist and so on. And there is a spectrum of both Wiccan and Buddhist thought ranging from standard atheism + relevant cosmology to pretty clear Theism of various kinds (plus relevant cosmology.) Still, it’s probably more common than among climatologists, depending on how strictly we define “theologian”. (And “climatologist” for that matter, there are a good few fringe “climatologists” who push climate skepticism.)
Yup, agreed that how we define the sets makes a big difference.
If atheists really thought that theists believed just because the pastors did, then targeting the pastors would seem to be the best way to go about it, yes. Either by attacking their credibility or attempting to convince them otherwise/attack the emotional basis of their faith. Even if the playing field was uneven and the pastors were actually crooked, there just wouldn’t be any gain in going after the believers as individuals.
I can’t think of a reply to this that won’t start a game of reference class tennis; but I think there’s a possibility that Aaronson’s list is a more complete set of the relevant experts on the climate than your list is of the relevant experts on the existence of deities. If we grant the existence of deities, and merely wish to learn about their behavior, your list would be analogous to Aaronson’s.
Both lists end with “etc.”, so I have trouble calling either of them incomplete.
I think “etc.” is a request to the reader to be a good classifier—simply truncating the list at “etc.” is overfitting, and defeats the purpose of the “etc.” Contrariwise, construing “etc.” to mean “everything else, everywhere” is trying to make do with fewer parameters than you actually need. The proper use of “etc.” is to use the training examples to construct a good classifier, and flesh out members of the category by lazy evaluation as needed.
It’s not a reasonable presumption that “etc.” will cover “any arbitrary thing that happens to make trouble for your counterargument”.
If nothing else at least we’ve got that covered.
Something a Chess Master told me as a child has stuck with me:
-- Robert Tanner
-- Jake the Dog (Adventure Time)
For reference purposes: video clip; episode transcript.
WTH… My latest Facebook status is “You got to lose to know how to win” (from “Dream On” by Aerosmith). o.O
Checkmate, atheists!
I don’t get it...
Will is (non-seriously) pointing out that the synchronicity between army1987′s Facebook status and Qiaochu’s comment is too great to be explained by coincidence alone, and is thus strong evidence for the existence of God.
You’ve got to crash the car to know how to drive, got to drown to learn how to swim, you’ve got to believe to disbelieve. Got to !x to x.
But that would make it “checkmate, believers”. All the other sentences say “you’ve got to [!x] to [x]”.
X & !X can be anything, good or bad. You’ve just got to pick a value for X that fits in with your desires to get a particular outcome if you want to break it down in terms of good and bad. Got to live to die. The point is that the underlying structure of the argument remains the same whatever you pick.
If you’re actually interested in propositional logic, then the suitably named Logic by Paul Tomassi is a very approachable intro to this sort of thing. Though I’m afraid I couldn’t say what it goes for these days.
Which is of course a different question to “What should I do to get good at Chess?” which is all about deliberate practice with a small proportion of time devoted to playing actual games.
Right, I often play blitz games for an hour a day for weeks on end and don’t improve at all. Interestingly, looking at professional games, even if I don’t bother to calculate many lines, seems to make me slightly better; so there are ways to improve without deliberate practice, but playing blitz doesn’t happen to be one of them. Playing standard time controls does work decently well, though, at least once you can recognize all the dozen or so main tactics.
Playing a lot isn’t as good as deliberate practice, but it’s better than having done neither.
This seems incontrovertible.
- K’ung Fu-tzu
The ‘imitation’ part is appropriately meta for a quote page.
I’d like to imagine that it’s the blurb he put on the back of his own book: “I’ve done the reflection (noble!); buy now and you can get the benefit—it’s easy! -- or you can go stumbling off without the benefit of my wisdom like a sucker.”
Tim Maly, pondering the increasing and poorly understood impact of algorithms on the average person’s life.
Following the chain, I came across:
Source, with the addition later of ‘expect to read a lot of sentences like this in coming years.’
-- Albert Einstein
At least sometimes the formulation is far easier than the solution.
This is definitely true. General class of examples: almost any combinatorial problem ever. Concrete example: the Four Colour Theorem
Yes! Combinatorics problems are a perfect example of this. Trying to work out the probability of being dealt a particular hand in poker can be very difficult (for certain hands) until you correctly formulate the question, at which point the calculations are trivial :)
I think bentarm was offering “Combinatorics problems” as an example of the opposite of the phenomenon you describe. In particular the Four Colour Theorem is easy to formulate but hard to solve, and (as far as I know) the solution doesn’t involve a reformulation.
Yes, upon re-reading I see that you are correct. I think there may be overlap between activities I consider part of the formulation and activities others may consider part of the solution.
To expand on my poker suggestion. When attempting to determine the probability of a hand in poker it is necessary to determine a way to represent that hand using combinations/permutations. I have found that for certain hands this can be rather difficult as you often miss, exclude, or double count some amount of possible hands. This process of representing the hand using mathematics is, in my mind, part of the formulation of the problem; or more accurately, part of the precise formulation of the problem. In this respect, the solution is reduced to trivial calculations once the problem is properly formulated. However, I can certainly see how one might consider this to be part of the solution rather than the formulation.
Thanks for pointing that out.
In my experience it can often turn out that the formulation is more difficult than the solution (particularly for an interesting/novel problem). Many times I have found that it takes a good deal of effort to accurately define the problem and clearly identify the parameters, but once that has been accomplished the solution turns out to be comparatively simple.
Do you have an original source for that? All I can find is various quotation sites, which contain so many other things that Einstein allegedly said that I feel sceptical.
Nope, and I don’t recall where I saw it attributed to him originally. (I did check by Googling it, but you’re right that that only confirms that it’s often attributed to him.)
Hmm. Einstein is perhaps most famous for “discovering” special relativity. But he neither formulated the problem, nor found the solution (I think the Lorentz transformation was already known to be the solution), but reinterpreted the solution as being real.
His “greatest error” was introducing the cosmological constant into general relativity—curiously, making a similar error to what everyone else had made when confronted with the constancy of the speed of light, which was refusing to accept that the mathematical result described reality.
In writing a story, it’s easy to identify problems with the story which you must struggle with for weeks to resolve. But often, you suddenly realize what the entire story is really about, and this makes everything suddenly easy. If by the formulation of the problem we mean that overall understanding, rather than specific obstacles, then yes. For stories.
--Matt Dillahunty
I wonder if somebody, looking at (a) his stated goal and (b) his behaviour, would consider his statement borne out. (Same goes for me, no offense to Dillahunty specifically).
-- Steve Jobs
Longer version from here
—Steve Jobs, interviewed in Fortune, March 7, 2008
Focusing is about saying no long enough to get into flow, or at least some kind of mental state where your short-term memory doesn’t constantly evaporate. If you have to say no all the time, you’ll wind up twenty hours later having written six lines and with a head full of jelly.
Without context I’m tempted to say focusing is about a whole bunch of things and that telling people to say no is just another way of saying, ‘Use your willpower.’ Which is another way of saying ‘Focus by focusing!’ Which… seems rather recursive at least.
One of the things that focusing is about is giving up pursuing good things.
Which means that if I want to focus, I need to decide which good things I’m going to say “no” to.
This may seem obvious, but after seeing many not-otherwise-stupid management structures create lists of “priorities” that encompass everything good (and consequently aren’t priorities at all), I’m inclined to say that it isn’t as obvious as it may seem.
Interesting take.
====
Or optimisation is going on at a different point in the company.
Or it is as obvious as it seems and sanity isn’t a property of management structures.
Come to think of it, it’s not necessarily even a property of any individual who participated in the creation of that structure. An idiot who’s read The Effective Executive and How to Win Friends and Influence People should be a darned effective manager—but they’re not necessarily very intelligent. Similarly, you can gradually converge on sane solutions without thinking anything through very far by applying fairly basic procedures, or even just being subject to selection pressures.
====
You need to decide which good things you’re going to assign the most resources to, or in what order you’re going to do them, or have a list of very general priorities that you’re going to pass off to some other system in the company that will give you a similar sort of output. But however you do it, focusing isn’t as simple as saying no—or even as saying no to the right things. You’ll exclude some things by default but knowing when to say ‘let’s see’ and how strongly to say yes is also very useful.
Yes, agreed.
This reminds me of Stephen Covey’s idea of a coordinate graph with four quadrants, where you graph importance on one axis and urgency on the other. This gives you four types of “activities” to invest your time into.
Urgent and Unimportant (a phone ringing is a good example): this is where many people lose a tremendous amount of time.
Urgent and Important (a broken bone or a crime in progress): these immediately demand our “focus”.
Not Urgent and Not Important: pure time wasters; not a good place to invest much energy.
Not Urgent BUT Important: this is the area Covey made a point of saying most people fall short in. Because these things are not urgent, we tend to put them off and not invest enough energy in them, but since they are important, we pay a hefty price in the long run. Into this category he puts things like our health, important relationships, personal development, and self-improvement, to name a few.
When we choose what to focus our energy on, we would do well to direct as much of it as possible to these types of activities.
Let us say you have a paper to write but you also want to go to a party. While trying to write the paper, you could keep wondering whether you should stop writing the paper and just go to the party, but keep writing anyway, i.e. try to use willpower. Or you could decide, once and for all that you are not going to go to the party, which is saying no. I think the second approach will be more effective in getting the paper done. So, I think there is actually a difference.
Now, of course the insight isn’t profound, and both folk and professional psychology have known it for some time (I can’t find a good link off-hand). But when a successful, high-status person who has achieved a lot says it, that lends it a whole lot more credibility.
I feel like it’s more about saying “yes” with enthusiasm.
Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:
Of course this would imply that Pyne is not a featherless biped.
Source: Robert Cialdini’s Influence: The Psychology of Persuasion
-- Thomas Szasz
.
Radioactive stone in nest.
Use stone to seal off the air supply to a cage of birds.
Economist: Sell a precious stone (diamond? Ruby?). Use the proceeds to purchase several dozen chickens. The purchase produces an expected number of bird deaths equal to approximately the number of chickens purchased through tiny changes at the margins, making chicken farming and slaughter slightly more viable.
Omega: Use stone to kill the dog that would have killed the cat that will now kill 40 birds over its extended lifespan.
Punster: go on a hunting trip with Mick Jagger.
Double punster: it’s hunting season for Jimmy Page’s former band.
Nice, but how is this a rationality quote? Is there some allegory that I’m missing?
.
Um, be creative? 11 upvotes.
—James C. Scott, Seeing Like a State
-Kafka, A Little Fable
Moral: Just because the superior agent knows what is best for you and could give you flawless advice, doesn’t mean it will not prefer to consume you for your component atoms!
My problem with this is, that like a number of Kafka’s parables, the more I think about it, the less I understand it.
There is a mouse, and a mouse-trap, and a cat. The mouse is running towards the trap, he says, and the cat says that to avoid it, all he must do is change his direction and eats the mouse. What? Where did this cat come from? Is this cat chasing the mouse down the hallway? Well, if he is, then that’s pretty darn awful advice, because if the cat is right behind the mouse, then turning to avoid the trap just means he’s eaten by the cat, so either way he is doomed.
Actually, given Kafka’s novels, so often characterized by double-binds and false dilemmas, maybe that’s the point: that all choices lead to one’s doom, and the cat’s true observation hides the more important observation that the entire system is rigged.
(‘”Alas”, said the voter, “at first in the primaries the options seemed so wide and so much change possible that I was glad there was an establishment candidate to turn to to moderate the others, but as time passed the Overton Window closed in and now there is the final voting booth into which I must walk and vote for the lesser of two evils.” “You need only not vote”, the system told the voter, and took his silence for consent.’)
On the other hand, it’s a perfectly optimistic little fable if you simply replace the one word “trap” with the word “cat”.
This is much better than my moral.
I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins.
The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.
I briefly read the moral as something like this; something along the lines of “being exposed in the open was the worst thing the mouse could imagine, so it ran blindly away from it without asking what the alternatives were”. I’m still not sure I actually get it.
Tangentially, keeping mouse traps in a house with a cat seems hazardous (though I could be underestimating cats). And I assume “day” and “chamber” are used abstractly.
—Richard Hamming
(I recommend the whole talk, which contains some great examples and many other excellent points.)
I think the thing that strikes me most about this talk is how different science was then versus now. For one small example, he was asked to comment on the relative effectiveness of giving talks, writing papers and writing books. In today’s world it’s not a question anyone would ask, and the answer would be “write at least a few papers a year or you won’t keep your job.”
I don’t see why it has to be either or.
Time and effort are zero-sum.
I don’t think so. The status and resources that you get for being a first-class scientist will help you to fight the system.
And they would even more help you continue being a first-class scientist; they won’t help you fight for free (no Time-Turners on offer, I’m afraid); and even in this scenario you still need to decide to first become a first-class scientist—since fighting the system is not a great path to getting status & resources.
Picking fights when you don’t have any resources to fight them is in general not a good strategy. Whenever you pick a fight, you actually have to think about the price and the possible reward.
Craig Venter did oppose the NIH, and then went and got private funding for himself to pursue the ideas in a way he thought to be superior.
Eliezer Yudkowsky did decide to operate outside academia. Peter Thiel funded him, and the whole LessWrong enterprise increased the amount of resources that he has at his disposal.
There are a lot of sources of resources that can be gained by picking some fights.
Those aren’t the kinds of fights Hamming is talking about. (You have read his talk, right?)
Sorry, now I have read it and you are right: Hamming does acknowledge that you can fight some fights, but he recommends against wasting your time with fights that don’t matter in the large scale of things.
Charles P. Kindleberger, in Manias, Panics, and Crashes: A History of Financial Crises
I imagine that thanks to Bitcoin, a few of us can feel this quote acutely, in our guts.
-- Garret Jones
Could you give an example of goal-based advice that’s always right?
Sure. From the same post:
Parfit, On What Matters, Vol. 2 (pp. 616-620).
Parfit, quoted in ”How To Be Good” by Larissa MacFarquhar. PDF
--Charlie Munger
“You can catch more flies with honey than vinegar.” “You can catch even more with manure; what’s your point?”
--Sheldon Cooper from The Big Bang Theory
That’s actually an insightful analogy regarding human social politics.
The Stockholm syndrome says otherwise.
I gather one theory behind that is that captives mistake an absence of punishment for the presence of kindness, i.e. they adjust for perceived reward—the reward being not getting intimidated/raped/whatever, at least not right then.
That link isn’t clear to me. Could you please elaborate?
It’s not “the iron rule”, just one of many heuristics of limited applicability. Hurting instead of rewarding is often just as effective. And rewarding can also backfire in the worst way.
The Stockholm syndrome isn’t only about hurting the hostage. The captor gains control of the environment in which the hostage lives, and can then use that control to reward the hostage for fulfilling his wishes.
Munger’s quote seemed to me like a more colorful rendition of “incentives matter,” which is an iron rule (as it contrasts with what people often want to be true, which is “intentions matter”). Rewards backfiring is generally mistakenly applied rewards, like sugar on the floor, and punishments seem like they can be considered as anti-rewards; you don’t get what you punish (with, again, the note that precision matters).
-- John Stuart Mill, autobiography
For what it’s worth, personal experience tells me otherwise.
I’ve found that thinking about something outside yourself (and thus not your own happiness) makes lots of people less depressed, and somewhat happy. However, the last sentence is clearly false, as many anecdotal reports of “I’m so happy!” show. Maybe it works that way for some people?
--Charlie Munger
But regardless of whether we believe our own positions are inviolable, it behooves us to know and understand the arguments of those who disagree. We should do this for two reasons. First, our inviolable position may be anything but. What we assume is true could be false. The only way we’ll discover this is to face up to evidence and arguments against our position. Because, as much as we may not enjoy it, discovering we’ve believed a falsehood means we’re now closer to believing the truth than we were before. And that’s something we should only ever feel gratitude for.
Aaron Ross Powell, Free Thoughts
This is why steelmanning is a really good community norm. Social incentives for understanding the other’s position are usually bad, but if people give credit for steelmanning, these incentives are better.
“Steelmanning” and “understanding the other’s position” aren’t really related (to my knowledge).
It’s difficult to steelman someone’s position if I don’t understand it.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 1921
-- Warren Buffett
I have no idea whether this is true of Darwin, but it still might be good advice.
See here.
--Daniel Steinberg, The Cholesterol Wars, 2007, Elsevier Press, p. 89
-Paul Graham
I like the sentiment, but Paul Graham seems to be claiming that information hazards don’t exist, and that doesn’t appear to be true.
Despite agreeing with the rest of the essay (which is very good), this is not true. Tiresomely standard counter-example: “Heil Hitler! No, there are no Jews in my attic.”
I would say this is not ALWAYS true. But for the purpose of civilized discussion between human beings, it does seem like a very useful rule of thumb.
Substitute “statement” with “belief”.
Sorry, I don’t understand. I believe there are Jews in my attic, but this belief should be suppressed, rather than spread.
Fair enough.
This seems like fallacy of the excluded middle. Suppressed and spread are not the only two options.
If the Nazi starts to believe it, you should suppress such a belief (probably by acting innocuously, but if suppressing it violently would work better, you should do that instead).
That statement is bad for the Nazis, who are now unable to achieve their desires. The statement is about instrumental badness, not universal moral badness. They’re really quite different.
I like the sentiment. I disagree that it is (always) the worst you can say about it. And there are also true things that are actively constructed to be misleading—I certainly go about suppressing those where possible and plan to continue.
Wouldn’t explaining why the statement is misleading be more productive than suppressing the misleading statement?
― Halldór Laxness, Under the Glacier.
...and then adjusted our senses of the ‘incredible’ accordingly, so that Special Relativity seemed less incredible, and God more so.
A sense of incredulity is not a belief, so it’s not covered by those injunctions. A sense of wonder is both pleasant and good for mental health, and diverging too much from the average in deep emotional reactions carries a real cost in less accurate empathic modelling.
Well, I dunno; if you describe physics as a Turing machine program, a la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to do exact Lorentz invariance, but can do some kind of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
Solomonoff induction creates models of the universe from the point of view of a single observer. As such, it probably wouldn’t have any particular problem with Einsteinian relativity.
On the other hand, if you want a computational model of the universe that is independent from the choice of any particular observer, relativity will get you into trouble.
Relativity doesn’t depend on the observer, it depends on the reference frame… (or rather, doesn’t depend). I can launch a Michelson-Morley experiment into space and have it send data to me, and it’ll need to obey Lorentz invariance and everything else. edit: or just for GPS to work. You have a valid point though: SI has a natural preferred frame coinciding with the observer.
Lorentz invariance is a very neat, very elegant property which, as far as we know, only incredibly complicated computations have, and only approximately. This makes me think that the algorithmic prior is not a very good idea. The universe need not be made of elementary components in the way in which computations are.
Moreover, all computational models assume some sort of global state and absolute time. These assumptions don’t seem to hold in physics, or at least they may hold for a single observer, but may require complex models that don’t respect a natural simplicity prior.
If it were possible to realize a Solomonoff inductor in our universe, I would expect it to be able to learn, but it might not necessarily be optimal.
It can’t do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn’t include relativity will produce the wrong answer.
It being able to do AI is generally accepted as uncontroversial here. We don’t know what would be the shortest way to encode a very good approximation to relativity either—could be straightforward, could be through a singleton intelligence that somehow arises in a more convenient universe and then proceeds to build very good approximations to more elegant universes (given some hint it discovers). I’m an atheist too, it’s just that given sufficiently bad choice of the way you represent theories, the shortest hypothesis can involve arbitrarily crazy things just to do something fairly basic (e.g. to make a very very good approximation of real numbers). edit: and relativity is fairly unique in just how elegant it is but how awfully inelegant any simulation of it gets.
The idea is that if humans can come up with approximation of relativity which are good enough for the purpose of predicting their observations, in principle SI can do it too.
The issue is prior probability: since humans use a different prior than SI, it’s not straightforward that SI will not favor shorter models that in practice may perform worse.
There are universality theorems which essentially prove that, given enough observations, SI will eventually catch up with any semi-computable learner, but the number of observations for this to happen might be far from practical.
For instance, there is a theorem which proves that, for any algorithm, if you sample problem instances according to a Solomonoff distribution, then average case complexity will asymptotically match worst case complexity.
If the Solomonoff distribution was a reasonable prior for practical purposes, then we should observe that for all algorithms, for realistic instance distributions, average case complexity was about the same order of magnitude as worst case complexity. Empirically, we observe that this is not necessarily the case, the Simplex algorithm for linear programming, for instance, has exponential time worst case complexity but is usually very efficient (polynomial time) on typical inputs.
Before remembering the older definition of “incredible” that is presumably meant, I parsed this as “Like all great rationalists you believed in things that were twice as awesome as theology”; and thought “Only twice?”.
What does this mean?
That on probabilistic or rational reflection one can come to believe intuitively implausible things that are as or more extraordinary than their theological counterparts. Or to mutilate Hamlet, that there are more things on earth than are dreamt of in heaven.
Most of quantum physics and relativity are certainly intuitively weirder than Jesus turning water into wine, self-replicating bread or a body of water splitting itself to create a passage.
I mean, our physics says it’s technically possible to make machines that do all of this. Without magic. Using energy collected in space and sent to Earth using beams of light. Although we probably wouldn’t use beams of light, because that’s inefficient.
I am confused—upvoting this comment is a rejection of this website.
I doubt that Laxness means “rationalist” in the LW community sense. In philosophy, a rationalist is defined in contrast to an empiricist: one who believes knowledge is arrived at through a priori cogitation, as opposed to experience.
Even after looking the book up on Google, without context, I can’t tell whether the rationalist being spoken of has gone astray through his reason, or has succeeded in finding the truth of something. But I am now interested in reading Laxness.
The mere size of the universe is pretty incredible. I don’t think it gets as much emphasis as it used to. I’m not sure whether people have quit thinking about it or gotten used to it.
Scott Adams on evolution toward… what?
-To The Stars
--Francis Bacon
Neither is necessarily or even usually true though, is it?
Necessarily, of course not. Usually, well, this is Francis Bacon, and so the intended meaning of the quote is more like “We can be more certain in the outputs of empiricism than we can be in the outputs of deductive argument beginning with intuitions or other a priori knowledge.”
Boswell’s Life of Johnson (quoted in “Applied Scientific Inference”, Sturrock 1994)
William Shakespeare, Henry V
What does this mean?
In context, this is said right before the battle of Agincourt and Henry V is reminding his troops that the only thing left for them to do is to prepare their minds for the coming battle (where they are horribly outnumbered). I guess the rationality part is to remember that sometimes we must make sure to be in the right mindset to succeed.
I’ve always seen that whole speech as a pretty good example of reasoning from the wrong premises: Henry V makes the argument that God will decide the outcome of the battle, so if given the opportunity to have more Englishmen fighting alongside them, he would choose to fight without them, since then he gets more glory for winning a harder fight, and if they lose, fewer will have died. Of course he doesn’t take this to its logical conclusion and go out and fight alone, but I guess Shakespeare couldn’t have pushed history quite that far.
A good ‘dark arts’ quote from that speech might be when he offers to pay anyone’s fare back to England if they leave then. After that, anyone thinking of deserting will be trapped by their sunk costs into staying—but maybe that’s not what Shakespeare had in mind...
The quote struck me as a poetic way of affirming the general importance of metacognition: a reminder that we are at the center of everything we do, and therefore that investing in self-improvement is an investment with a multiplier effect. I admit, though, that this may be adding my own meaning that doesn’t exist in the quote’s context.
Rewatching Branagh’s version recently, I keyed in on a different aspect. In his speech, Henry describes in detail all the glory and status the survivors of the battle will enjoy for the rest of their lives, while (of course) totally downplaying the fact that few of them can expect to collect on that reward. He’s making a cost/benefit calculation for them and leaning heavily on the scale in the process.
Contrast with similar inspiring military speeches:
William Wallace says, “Fight and you may die. Run and you may live...for a while. And dying in your beds, many years from now, would you be willin’ to trade ALL the days, from this day to that, for one chance, just one chance, to come back here and tell our enemies that they may take our lives, but they’ll never take our freedom!” He’s saying essentially the same thing as Henry, but framing it as a loss instead of a gain. Where Henry tells his soldiers what they’ll gain from fighting, Wallace tells them what they’ll lose if they don’t. Perhaps it’s telling that, unlike Henry, he doesn’t get very specific. It might’ve been an opportunity for someone in the ranks to run a thought experiment: “What specific aspects of my life will be measurably different if we have ‘freedom’ versus if we don’t have ‘freedom’? What exactly AM I trading ALL the days for? And if I magically had that thing without the cost of potentially dying, what would my preferences be then?” Or to just notice their confusion and recognize that they were being loss-averse without being able to define exactly what they were averse to losing.
Meanwhile, Maximus tells his troops, “What you do in life echoes in eternity.” He’s more honest and direct about the probability that you’re going to die, but also reminds you that the cost/benefit analysis extends beyond your own life, the implication being that your ‘honor’ (reputation) affects your placement in the afterlife and (probably of more consequence) the well-being of your family after your death. Life is an iterated game, and sometimes you have to defect (or cooperate?) so that your children get to play at all.
And lastly, Patton says, “No bastard ever won a war by dying for his country. He won it by making the other poor, dumb bastard die for his.” He explicitly rejects the entire ‘die for your country’ framing and foists it wholly onto the enemy. It’s his version of “The enemy’s gate is down.” He’s not telling you you’re not going to die, but at least he’s not trying to convince you that your death is somehow a good or necessary thing.
When taken in this company, Henry actually comes across more like a villain. Of all of them, he’s appealing to their desire to achieve rational interests in an irrational way without being at all upfront about their odds of actually getting what he’s promising them.
Howard Taylor—Schlock Mercenary
Source: http://bladekindeyewear.tumblr.com/post/47462509182/but-where-exactally-will-this-backdoor-out-the-felt
--Daniel Kahneman on the dichotomy between the self that experiences things from moment to moment and the self that remembers and evaluates experiences as a whole. (from Thinking, Fast and Slow)
Edgar Lawrence Smith, Common Stocks as Long Term Investments
-Iain M. Banks, Look to Windward
Incidentally, Mr. Banks has been diagnosed with terminal cancer, and estimated to have a few months to live as of this post. Comments may be made on his website: http://friends.banksophilia.com/
Whoops, forgot to promote this.
Some people want it to happen, some wish it would happen, others make it happen.
“Michael Jordan”
The significant problems we face cannot be solved at the same level of thinking we were at when we created them.
-- Albert Einstein
Source? Wikiquote seems to think it’s a misquote.
Isn’t there a law or something stating that Einstein never said 99% of what’s attributed to him? Or maybe that the accuracy of a quote’s attribution is inversely proportional to the person’s fame?
Well, it’s unsurprising that misattributed quotes are more often attributed to famous people than to unknown people.
Thanks FiftyTwo! I just looked up the article you refer to, and it indicates that it may be a paraphrase of a longer quote. I heard this from Anthony Robbins; the quote is attributed to Einstein in some of his literature. It seems that the sentiment, if not the exact quote, is attributable to Einstein.
What are the black blobs I see in the quanta?
Are the black blobs entities or something else?