Rationality Quotes February 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
No more than 5 quotes per person per monthly thread, please.
Randall Munroe, on updating on other people’s beliefs.
Dilbert dunnit first!
(Seeing that strip again reminds me of an explanation for why teenagers in the US tend to take more risks than adults. It’s not because the teenagers irrationally underestimate risks but because they see bigger benefits to taking risks.)
Let me just put the text string ‘xkcd’ in here, because I was going to add this if nobody else had, and it’s lucky that I found it first.
Oh, and there’s more text in the comic than what’s quoted, and it’s good too, so read the comic everybody!
See also this Will_Newsome comment. (I incorrectly remembered that it said something like “If all your friends jumped off a bridge, would you jump too?” “If all of them survived, I probably would.”)
The “every single person I know, many of them levelheaded and afraid of heights, abruptly went crazy at exactly the same time” scenario should be given some credence in human society; there is such a thing as puberty. The definition of puberty being “every single person I know abruptly went crazy at exactly the same time, including me”.
-- Milton Friedman
-- Bertolt Brecht
(I’m always amused when people of opposite political views express similar thoughts on society.)
Also:
I think the Brecht quote is somewhat misleading. The problem is not that not enough people want/demand goodness, the problem is that it is too easy to profit by cheating without getting caught.
This solution only works if you are in the special position of being able to make institutional design changes that can’t be undone by potential future enemies. Otherwise, whose “right things” will happen depends on who is currently in charge of institutional design (think gerrymandering).
Then try to make it politically profitable to help sustain those changes you make. Make it so painfully obvious that the only reason to remove those changes would be for one’s unethical gain that no politician would ever do so. The problem then though, is that people end up just not caring enough.
What you’re describing is exactly the position of being able to make institutional design changes that can’t be undone by potential future enemies. This position is “special” not only because the task is very difficult, but also because you have to be the first to think of it.
Couldn’t I also set up the system to try to exclude the wrong people from ever getting power?
It seems to me that computers get better at detecting liars, and we have an ease of fact checking on things now we never used to have, and conflicts of interest are generally relatively easily seen, and we’ve got all this research about how influence functions… In short that we’ve made a lot more progress on the judging people front, than we have on the side of designing procedures and regulations that suit us and also serve as one-way functions.
Not if having power over others turns the right people into the wrong people.
No. No-one can set up the system. The most that anyone can do is introduce a new piece into the game, pieces like Google, or Wikipedia, or Wikileaks.
That mentality is probably why US politics is as corrupt as it is at the moment. Electing people who aren’t corrupt to replace corrupt people is very valuable if your goal is to have a well governed country.
If you have the political goals of Milton Friedman it might not be. If you want politicians to be corporate-friendly, then you make it politically profitable for them to be so by making it easy for companies to bribe them.
I think the spirit of the quote is that instead of counting on anyone to be a both benevolent and effective ruler, or counting on voters to recognize such things, design the political environment so that that will happen naturally, even when an office is occupied by a corrupt or ineffective person.
This idea is primarily why I’m skeptical of the effectiveness of institutions like the Federal Reserve (despite not being a subject matter expert). It seems pretty clear that in order to be effective, the leadership has to be composed of people who are not only exceptionally brilliant, but exceptionally benevolent as well.
What do you think that “design the political environment so that that will happen naturally” means concretely?
The policies that Milton advocated got a huge boost because companies put lobbyists who distribute campaign money in the “right places” to switch political incentives.
There are political environments in which the actors try to do what is right instead of just maximizing their personal interests. Milton says in the quoted video that the US Congress isn’t such an environment, and that this is not a problem anyone should be trying to fix by electing different politicians.
The only one I’ve heard of is “fiction.” Did you have an example in mind?
An example is in Federalist No. 10. Madison is trying to design a political environment resilient to the corrupt effects of factions:
His concrete solutions are to choose representative democracy over direct democracy, and to have a large republic rather than a small one.
A more recent example would be last year’s ban on members of Congress trading stocks based on the inside information they have as lawmakers. I think Milton Friedman’s point is that one should direct efforts toward supporting policies like that, rather than trying to elect politicians who are too ethical to insider-trade.
Why is this comment at −1 yet 100% positive?
It then goes to 0 and 0% positive when I up-vote it.
Why? How does this fix things? Without quite knowing what problem this solution is meant to address, the first consequence of this policy (representative democracy + large republic) that comes to my mind by judging it independently is that it looks optimized for the smallest number of rulers and the greatest amount of people limited in their political power by comparison—in other words, it seems to concentrate power. (If there are other implications, they’re not as obvious to me as this one.) How or why does that help overall impartiality?
Why is this comment at −1 yet 100% positive?
Designing for resilience is not the same thing as designing a system to get politicians to do certain things. If you think, as Milton Friedman did, that “the right thing” is free-market policies, then designing the political system so that it gives political advantages to the people who push free-market policies is likely to reduce resilience.
Given Friedman’s politics, I doubt that he had actions such as restricting members’ ability to trade stocks in mind. That’s not the kind of political agenda that Friedman pushed.
Then I don’t think you understand what that policy does. Lawmakers get their information regardless of how they vote or what policies they pursue. That kind of insider trading allows lawmakers to personally enrich themselves instead of making bargains with people who want to hand them money.
What the policy does do is provide a new tool for people who have information about the trades a congressman makes to blackmail him.
You might get some positive effects through the policy, so I’m not clear that it’s a bad law.
What’s the problem that you are trying to solve in the first place? Insider trading? Let Eliot Spitzer run the SEC and double SEC funding. Insider trading doesn’t exist because of a lack of laws against the practice.
I didn’t downvote you, but I’m not continuing the argument because it seems really political in a partisan way. I suspect that’s what’s motivating the downvotes.
You seem to be confusing support for a free market with rent seeking. Milton Friedman supported free markets, in this and your follow up comment you seem to equate this with rent seeking.
A large part of the Federalist Papers is about designing structures and incentives to make government robust against overwhelming ambition and corruption—to make ambition in one branch check ambition in another branch, similarly between state and federal and between state and state.
That said, I think Friedman (I was never on a first name basis with him) is overly dismissive of electing the right people. But again we need to set up structures and incentives differently, so elections are less of an entertaining spectacle and more like a hiring search or job interview. The structures or institutions that might improve the situation don’t have to come from legislation (though some of them could—I’m not against that on principle); e.g. parties weren’t legislated into being, and if we want something better, we should not look exclusively to legislation.
It seems at least conceivable to have some agreement across party lines that our electoral processes, as I said before, look less like a circus to which we passively attend, and more like a hiring search / job interview type process.
I’ve often thought of ironically proposing that we should legislate that job interviews have to be more like elections: i.e. stop limiting candidates’ ability to express themselves. If somebody wants to bring a brass band to an interview, it’s their free speech right to do so. If they want to spread nasty rumours about the other candidates why not?
Not so ironically, maybe our best hope is to persuade people that our current approach with its sound bites, catch phrases, push polls, gerrymandered “safe seats” and so on is a source of dangerous blindness that affects all of us, with all of our different interpretations of “the good” (of the country, etc.); to persuade people to find all of that current process repulsive, and to insist that all that airtime, column inches, etc., be devoted to information about the candidates, analysis of the current crises and challenges and possibilities, and to debates of all sorts: dozens of debates, discussions, and joint press conferences among the candidates.
One problem: “Information” (as in “information” about the candidate, etc.) is a word that gets bandied about too casually. What might possibly be done to increase the sanity with which people evaluate what is and isn’t truly “information”. I think that is the big problem that people concerned with rationality might be able to make some progress in solving.
I think that the only rules we have against spreading nasty rumors about other candidates are our laws against defamation; spreading malicious falsehoods about people in order to deprive them of business opportunities in favor of yourself is exactly the sort of thing that those laws prohibit, because otherwise the people who did that sort of thing would be the most likely to get the jobs. Job candidates would have to become willing to undercut their rivals in order to stay competitive, and we’d be at risk of devolving to a state where having to filter through webs of malicious falsehood in any hiring situation where the candidates are known to each other was the norm rather than the exception.
As for bringing a brass band to a job interview, candidates are entitled to do such things, with the caveat that they wouldn’t get the jobs. It would be an awfully rare position where bringing a brass band to the interview would be positive evidence of the candidate’s ability to perform the job well. Giving candidates too much leeway for self expression runs the risk of turning interviews into contests of showmanship.
which sounds to me like the state we are in w.r.t. election to political offices.
My point is nobody hires people for ordinary jobs the way we collectively hire a president. We are extremely passive, and don’t manage the process. There is a field I am very interested in called Social Epistemology (it’s a divided field with one part being excessively postmodern and relativistic; the other side, which interests me, holds that there really are such things as truth and falsehood, and the biggest name in that area is Alvin Goldman). This field is very interested in institutions, such as the law court in its different forms, that have tried to come up with procedures and standards (like selectivity in the sort of evidence you will listen to) that try to improve the chances of coming to the right conclusion. There is quite a lot of emphasis on law courts, but it occurred to me that hiring committees do something similar; they require things like resumes, and have a systematic way of questioning candidates rather than say to candidates “Come and put on a show and we’ll see what we think of you”.
I don’t understand why you’re arguing that job interviews should be more like elections in that case. If the process leads to bad outcomes for elections, and is likely to lead to bad outcomes for job interviews as well, why use it?
To quote myself:
It’s irony. I.e., it’s such a bad idea that I’d like to suggest it’s also a bad way to elect presidents.
Ah, see, I thought you meant that ironically, while it’s not a good way to elect presidents, it would be an improvement on how we conduct interviews.
Concretely, Milton Friedman probably didn’t have a workable plan for bringing about such an environment, though he may have thought he did; I’m not familiar enough with his thinking. One next-best option would be to try to convince other people that that’s what part of a solution to bad government would look like, which under a charitable interpretation of his motives, is what he was doing with that statement he made.
The nice thing about working with incentives is that they’re pretty stable relative to political leanings. I’d expect a given person’s perceptions of politicians’ level of corruption or incompetence or any other negative adjective you can think of to depend almost entirely on party affiliation, but you can actually leverage that to get changes in incentive structures passed: just frame it as necessary to curb the excesses of those guys over there, you know, the ones you hate.
And in any case the quote works just as well for the governed. As anyone who’s ever moderated a large forum can tell you, playing with incentives works almost embarrassingly well and quickly compared to working on sympathy or respect for authority. Of course, it’s also harder to do.
That sounds very intriguing. Can you give some example of how you’ve used “playing with incentives” successfully to (I assume—correct me if I’m wrong) maintain a productive forum? That might be very enlightening—seriously, no irony here.
Simplest positive example I can think of offhand: if there’s lots of content-free posting going on and you want it to go away, changing the board parameters so that user titles are no longer based on postcount goes a surprisingly long way.
Simplest negative example I can think of: if you think there’s too much complaining going on (I didn’t, but the board owner at the time did), allocating a subforum for complaints will only make things worse. Even if you call it something like “Constructive Criticism”.
Sorry, I’ve never run a forum. Is there any easy place to learn enough to make “user titles are no longer based on postcount” make sense to me (unless you want to take the time to explain it). I really am very interested.
Sure. One feature in phpBB and several other popular bulletin board packages (but not in reddit or Slashdot or any of their descendants) is the ability to set user titles: little snippets of descriptive text that get displayed after a user’s handle and which are usually intended to give some information about their status in the forum.
The most common arrangement is to have a couple of special titles for administrative positions (say, “mod” and “admin”), then several others for normal users that’re tiered based on the number of posts the user’s written, i.e. postcount: a user might start with the title “newbie” or “lurker”, then progress through five or six cutely themed titles as they post more stuff. It’s common for admins to change the exact titles and the progression pattern to suit the needs of the forum (a roleplaying forum for example might name them after monsters of increasing power), but uncommon to change the basic scheme.
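As a toy illustration of the scheme just described (this is not actual phpBB code; the real package configures ranks through its admin panel and database, and the tier names here are invented), the postcount-based title assignment amounts to a simple threshold lookup:

```python
# Toy sketch of postcount-tiered user titles (hypothetical tiers, not phpBB's real config).
# Each entry is (minimum postcount, title); tiers are listed in ascending order.
TITLE_TIERS = [
    (0, "lurker"),
    (10, "newbie"),
    (100, "regular"),
    (1000, "veteran"),
]

def user_title(postcount):
    """Return the title of the highest tier whose threshold the postcount meets."""
    title = TITLE_TIERS[0][1]
    for threshold, name in TITLE_TIERS:
        if postcount >= threshold:
            title = name
    return title
```

Note that the lookup keys on post *count* alone, which is precisely the incentive problem: writing a hundred content-free posts advances your title exactly as fast as writing a hundred good ones.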
You may notice that this doesn’t differentiate on post quality.
Look up some of the karma discussions on this very site.
Very few people argued that Cato was corrupt. Even those who disagreed with him mostly didn’t.
I do have experience with moderating a large forum and I still believe in not trying to corrupt people. You want people who are open to rational discourse and who change their position when you bring them arguments, even in the absence of incentives to switch their position.
I’d say that setting up incentives so that people within a system do culturally useful things out of their own self-interest is about as close to an opposite of corrupting people as we’re likely to find.
Doing X for a specially crafted incentive Y, rather than for the intrinsic value of X, is a form of corruption. It’s not always possible for all decisions to be made for their intrinsic value, but you can end up with a political environment where there is a lot of pressure to chase Y’s.
Especially if you can’t get any political power without Y, you won’t have many people who pursue political goals for their intrinsic value in your political system.
Things work much better when politicians do what they consider to be right instead of being coerced into taking whatever position is politically advantageous.
That’s a… remarkably loose definition of corruption you’ve got going on there.
I’m not sure it’s practical to make a political system completely free of incentives, as long as you’re working with humans governing humans: the closest approximations I can think of would have to involve a leadership caste socially and economically isolated from the people they govern and without any means of improving their own welfare, and that’s so far removed from anything historical I know about that I don’t even want to try working out all its long-term implications. Imperial Japan’s about the closest, and that degenerated at first into proxy governance by provincial warlords and later into a military-aristocratic dictatorship ruling in the imperial family’s name but not in practice controlled by it.
Now, given anything resembling our existing politics, it seems naive to behave as if the default incentives surrounding political power are nonexistent or weak enough that they’re drowned out by altruistic impulses among those inclined to seek power—or even among random members of the populace, if you prefer direct democracy. This being the case, it makes far more sense to me to design systems to reward competent government—however defined—rather than to high-handedly dismiss any such attempts as unethical and rely wholly on the better angels of politicians’ natures.
I’m not quite prepared to say that there can’t exist any candidate systems where this wouldn’t be necessary, but if you’ve got a proposal like that, we should really be talking about that proposal rather than speaking in generalities.
I’m not sure such a thing has been proposed (after reading most, if not all of this thread); in fact it sounds so absurd that I can’t imagine what such a proposal would look like.
Maybe ChristianKI will correct me if he/she really is proposing a “political system completely free of incentives”.
In The Tamuli, by David Eddings, one country’s political system is described as an attempt to limit corruption. (The usual caveats regarding fictional evidence apply here, of course). In short, when a person is elected onto the ruling council of the Isle of Tega, all that he owns is sold, and the money is deposited into the country’s treasury. He is then simply not permitted to own anything until his term is up, some four years later (presumably food and housing is provided at the expense of the state); when that time comes, the money in the treasury is divided among the ministers in proportion to how much they put in (and the former ministers presumably start re-purchasing stuff). Note that the one thing that the ministers are not allowed to do is to change the tax rates.
This is described as having two consequences. First of all, the Isle of Tega is the only country that always shows a profit. Secondly, the minute that a man is nominated to become a minister, he is put under immediate armed guard to prevent him from running away (and remains under armed guard until his term is over). A government position is viewed with the same trepidation as a prison sentence.
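To make the payout rule concrete (the numbers here are invented for illustration; Eddings doesn’t give any), the proportional division described above amounts to:

```python
# Hypothetical numbers illustrating the Tegan payout rule described above:
# the final treasury is split among ministers in proportion to what each deposited.

def tegan_payout(deposits, treasury_at_end):
    """Divide the end-of-term treasury pro rata by each minister's deposit."""
    total_in = sum(deposits.values())
    return {name: treasury_at_end * amount / total_in
            for name, amount in deposits.items()}

# Ministers deposit 100, 300, and 600; the state runs a profit, ending at 1100.
shares = tegan_payout({"A": 100, "B": 300, "C": 600}, 1100)
# Each minister gets back 10% more than he put in: A -> 110, B -> 330, C -> 660.
```

The incentive is that every minister’s personal return scales with the treasury’s overall performance, which is presumably why Tega “always shows a profit.”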
Wouldn’t they still have incentives to aid parties who promise to repay them once their term is up? Similar to how some legislators conveniently acquire lucrative positions requiring little-to-no effort on their part from companies who they have helped out through the years once they’ve retired from politics?
Or to aid their families and friends, or to adopt policies that benefit their industry or hometown or social class—I considered similar systems when I was writing the ancestor (probably unconsciously influenced by Eddings; I haven’t read him in years, though), but decided that they were transparently unworkable.
Yes, it seems both too drastic, and not really able to accomplish the desired result.
Funny, I’ve wondered about a similarly drastic action though to improve the quality of voting, namely for each election select a random 1% (or some such—small enough to not crash the economy) of the population and lock them up with nothing to do but learn about what’s going on in the country and in the world and debate who they should vote for. In the end, unlike in the jury system, it should still be a secret ballot. Of course, if as many people were exempted as in jury duty, then it would be biased. One would have to see how much exemption was unavoidable, and see whether the bias could be sufficiently minimized.
If it’s small enough not to crash the economy, then is it big enough to reliably alter the election results? And who provides the information for them to read through?
Only if they can trust the promise; once their term is up, the parties have little real incentive to stick to their promise, after all.
There will be an incentive to aid people who immediately donate a great big chunk of money to the State, as that money will be shared out among the ministers at the end of their term in any case; but the incentive only works, there, if the great big chunk of money is more than the state would obtain by other means.
In iterated games, defection has its price.
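The point about iterated games can be made concrete with a toy sketch (payoff numbers are the standard hypothetical prisoner’s-dilemma values, not anything from the thread): against a partner who withdraws cooperation permanently after a betrayal, defecting early wins one round and loses every round after.

```python
# Toy iterated game against a "grim trigger" partner: they cooperate until
# betrayed once, then defect forever. Per-round payoffs (hypothetical):
# mutual cooperation 3, successful defection 5, mutual defection 1, being exploited 0.

def total_payoff(my_move, rounds=10):
    """My cumulative payoff over `rounds` rounds; my_move(i) returns 'C' or 'D'."""
    partner_cooperates = True
    total = 0
    for i in range(rounds):
        me = my_move(i)
        if partner_cooperates and me == "C":
            total += 3
        elif partner_cooperates and me == "D":
            total += 5
            partner_cooperates = False  # partner never trusts again
        elif me == "D":
            total += 1  # mutual defection from here on
    return total

always_cooperate = total_payoff(lambda i: "C")  # 10 rounds * 3 = 30
defect_at_once = total_payoff(lambda i: "D")    # 5 + 9 rounds * 1 = 14
```

One betrayal buys a single round of extra payoff and forfeits the cooperation surplus for the rest of the game, which is the sense in which defection “has its price.”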
I see your point and, on further thought, acknowledge it as correct.
This seems like possibly quite a useful bit of abstraction, and it offers the potential of arguing the merits of a single principle that appears in many manifestations: in politics, corporations, volunteer organizations, etc. But I’m just having trouble getting it clearly in my head. Two things might help.
1) One or two concrete examples where you flesh out “X” and “Y”. I spent two years in a math Ph.D. program, which is long enough to know that to move forward with an abstraction, it is best to start with at least a couple of examples.
2) Consider the “agency problem” (or “principal-agent problem”), which to me seems the most promising abstraction for reasoning about corruption. See http://en.wikipedia.org/wiki/Agency_problem; it may be very close to what you’re aiming at.
Be sure to let us know when you find such people. One of the main conceits of this site is that rationalists should win. If it’s possible to get ahead by not being a rationalist (even temporarily), people are going to do that. Ultimately, I think what the original quote from Friedman boils down to is the old adage that you should try to fix the system rather than blame the people in it.
If you have corrupt politicians, blame the voters. The politicians did not vote themselves into the office. (Unless they own the vote-counting machines factory.) I guess the quote suggests that “making it politically profitable for the wrong people to do the right things”, whatever precisely that means, could still be easier than replacing the whole population of voters; or at least the majority of them.
There are worse things that a politician can be than corrupt.
Agreed. It’s too easy to pander to a base that doesn’t expect you to be good, just deliver a few things… things that matter a great deal less than the cumulative effect of having the right people in charge.
Devine and Cohen, Absolute Zero Gravity, p. 96.
So, uh, what’s the explanation?
The story appears to be apocryphal. I’ve heard many versions of it associated with various famous scientists. The source quoted is a collection of jokes, with very low veracity. Additionally, there are no independent versions of the story anywhere on Google. By the way, the quoted date of Sommerfeld’s death is also incorrect. I wonder if there even were (unpowered) ceiling fans in Munich’s trolleys during that time.
Good point. Effects that don’t exist don’t need to be explained.
I’m not much of an engineer, but based on my understanding of their design from the description given, I can’t see how they would even contribute to their alleged purpose.
Perhaps because pressure is (approximately) constant, for every molecule going into the car, one must leave it (on average)?
Trolleys have open windows in summer.
It’s an interesting story, but it might not be as silly as it sounds if one considers “ease of explanation” as a metric for how much credence one’s model assigns to a given scenario. (Yes, I agree this is a hackneyed way of modeling stuff.)
Unfortunately, this seems to be the default way humans do things.
Well, the world is a complicated place and we have limited working memory, so our models can only be so good without the use of external tools. In practice, I think looking for reasons why something is true, then looking for reasons why it isn’t true, has been a useful rationality technique for me. Maybe because I’m more motivated to think of creative, sometimes-valid arguments when I’m rationalizing one way or the other.
Men in Black on guessing the teacher’s password:
Zed: You’re all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we’re looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We’re here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptuous glance]
[Edwards laughs]
Zed: What’s so funny, Edwards?
Edwards: Boy, Captain America over here! “The best of the best of the best, sir!” “With honors.” Yeah, he’s just really excited and he has no clue why we’re here. That’s just, that’s very funny to me.
The scene in question.
That whole testing sequence is one of the best examples in film of how to distinguish what’s expected of you from what’s actually a good idea.
(Or in that specific case, what seems to be expected of you.)
—Yagyū Munenori, The Life-Giving Sword
-Joel Spolsky
-- Steve Jobs
(The Organization Formerly Known as SIAI had this problem until relatively recently. Eliezer worked, but he never published anything.)
And they ship the characters the fans want.
If your service is down, it has no features.
And no bugs.
Well, there is one pretty major bug: That your service is not doing anything at all!
It has all the bugs. All of them.
(Well, not really. For instance, it doesn’t have any security holes.)
If it bears any resemblance to a product at all, your own admin-level access constitutes a potential security hole.
It’s a feature.
I would have quoted more, because on reading that out of context I was like “YOU DON’T SAY?”
Most people, when giving advice, don’t optimize for maximal usefulness. They optimize for something like maximal apparent-insight or maximal signaling-wisdom or maximal mind-blowing, which are a priori all very different goals. So you shouldn’t expect that incredibly useful advice sounds like incredibly insightful, wise, or mind-blowing advice in general. There’s probably a lot of incredibly useful advice that no one gives because it sounds too obvious and you don’t get to look cool by giving it. One such piece of advice I received recently was “plan things.”
There’s probably also a lot of useful advice that our minds filter out because it scans as obvious or trivial. Even when I’m trying to give maximally effective advice, I usually spend a lot of effort optimizing it for style; the better something sounds, the more people dwell on its implications and the likelier it is to stick. Fortunately, most messages leave plenty of latitude for presentation.
Alternately, you could try dressing simple advice up in enough cultural tinsel that it looks profound, as suggested here.
Well, a lot of basic rationality literally seems to be about doing what is almost obvious but is hard to do because of bugs in your cognitive architecture. This reminds me of the following quote from Elon Musk, in an interview where he was asked what he would say to new start-up founders:
And by the same author:
and
(because what counts after getting it out the door is how many people actually use it.)
That’s Jeff Atwood. The quote is from Joel Spolsky. While the two both work together on Stack Exchange, they’re different individuals.
Faramir, from Lord of the Rings on lost purposes and the thing that he protects
Except that a non-overwhelming love of a useful art may help you become better in the art, even though you would switch to another if it helped you optimize more.
another great quote for 2013
-- Geoff Anders (paraphrased)
Did he mean if they’re someone else’s fault then you have to fix the person?
Yep.
From a participant at the January CFAR workshop. I don’t remember who. This struck me as an excellent description of what rationalists seek.
People often seem to get these mixed up, resulting in “You want useful beliefs and accurate emotions.”
Not sure what an “accurate emotion” would mean, feel like some sort of domain error. (e.g. a blue sound.)
An accurate emotion = “I’m angry because I should be angry because she is being really, really mean to me.”
A useful emotion = “Showing empathy towards someone being mean to me will minimize the cost to me of others’ hostility.”
Where’s that ‘should’ coming from? (Or are you just explaining the concept rather than endorsing it?)
I mean in the way most (non-LW) people would interpret it, so explaining not endorsing.
Contrasting “accurate beliefs and useful emotions” with “useful beliefs and accurate emotions” would probably make a good exercise for a novice rationalist.
Why not both useful beliefs and useful emotions?
Why privilege beliefs?
This is addressed by several Sequence posts, e.g. Why truth? And..., Dark Side Epistemology, and Focus Your Uncertainty.
Beliefs shoulder the burden of having to reflect the territory, while emotions don’t. (Although many people seem to have beliefs that could be secretly encoding heuristics that, if they thought about it, they could just be executing anyway, e.g. believing that people are nice could be secretly encoding a heuristic to be nice to people, which you could just do anyway. This is one kind of not-really-anticipation-controlling belief that doesn’t seem to be addressed by the Sequences.)
“Beliefs shoulder the burden of having to reflect the territory, while emotions don’t.”
This is how I have come to think of beliefs. It’s like refactoring code. You should do it when you spot regularities you can eke efficiency out of. But you should do this only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs should be the same thing. When your rules of thumb seem to respect some regularity in reality, I’m perfectly happy to call that “truth”. So long as that does not break my tools.
“Beliefs shoulder the burden of having to reflect the territory, while emotions don’t.” Superb point, that. And thanks for the links.
If useful doesn’t equal accurate then you have biased your map.
The most useful beliefs to have are almost always accurate ones so in almost all situations useful=accurate. But most people have an innate desire to bias their map in a way that harms them over the long-run. Restated, most people have harmful emotional urges that do their damage by causing them to have inaccurate maps that “feel” useful but really are not. Drilling into yourself the value of having an accurate map in part by changing your emotions to make accuracy a short-term emotional urge will cause you to ultimately have more useful beliefs than if you have the short-term emotional urge of having useful beliefs.
A Bayesian super-intelligence could go for both useful beliefs and emotions. But given the limitations of the human brain I’m better off programming the emotional part of mine to look for accuracy in beliefs rather than usefulness.
Useful may not be accurate, depending on one’s motives. A “useful” belief may be one that allows you to do what you really want to, unburdened by ethical/logistical/moral considerations. E.g., the belief that non-Europeans aren’t really human permits one to colonise their land without qualms.
I suppose that’s why, as a rationalist, one would prefer accurate beliefs- they don’t give you the liberty of lying to yourself like that. And as a rationalist, accurate beliefs will be far more useful than inaccurate ones.
Good point about beliefs possibly only “feeling” useful. But that applies to accuracy as well. Privileging accuracy can also lead you to overstate its usefulness. In fact, I find it’s often better to not even have beliefs at all. Rather than trying to contort my beliefs to be useful, a bunch of non map-based heuristics gets the job done handily. Remember, the map-territory distinction is itself but a useful meta-heuristic.
A useful belief is an accurate one. It is, however, easy to believe a belief is useful without testing its veracity. Therefore it is optimal to test for accuracy in beliefs, as opposed to querying one’s belief in its usefulness.
Conversely, why not both accurate beliefs and emotions?
Let useful come into play when choosing your actions. This can include framing your emotions—but if you just go around changing your emotions to whatever’s useful, you’re not being yourself.
Taboo “being yourself”.
“being yourself”: A metaphor for a feeling so far removed from modern language’s ability to describe that it’s a local impossibility for all but a tiny portion of the people in the world to taboo it. Its purpose is to elicit the associated feeling in the listener, not to be used as a descriptive reference. It is a feeling so deeply ingrained in 50% of people that those people don’t realize the other 50% don’t know what it is, and so have never thought to even begin to try to explain it, much less taboo it.
tabooing the word as if it describes an action is an inadequate representation of the true meaning of the word. The same is true of tabooing the word as if it describes an emotion, a thought, a belief, or an identity.
“being yourself” is a conglomeration of two concepts. The first, “being”, requires the assumption that there is such a thing as a “state of being”: an all-encompassing description of something’s non-physical properties as a snapshot of a single moment, and that said description is unlikely to change over time. The second, “oneself”, requires the assumption that there is such a thing as a spark of consciousness at the source of any mental processes, or relatedly, of any living creature. This concept is reminiscent of the concept of a “soul”.
I personally find that the concept of “being oneself” stems from the fallacious assumption that the spark of consciousness is separate from the current state of being, and that said state and spark do not flux and change continuously.
However, the context of the phrase “being yourself”, in this instance, requires not that this phrase be tabooed, but instead that “changing your emotions” be tabooed, along with “useful”. The question in regards to “changing your emotions” is if the author meant that truly changing one’s emotions would be “not being oneself”; or if the author meant something else, such as putting on a facade of an emotion that one is not experiencing is “not being oneself”.
“Useful” is a word that has different definitions for many people, and often changes based on context. The comment in question is likely a misunderstanding of what is meant by the word “useful”. This implies the possibility that many people have misunderstood what is meant by the word “useful”, perhaps even including the original poster of the quote.
So, the useful thing to do would not be to taboo “being yourself”, but to instead taboo “useful”.
In my case, I am using “useful” to mean an action which produces a generalized and averaged value for all involved and all observers. In this case, I consider the “value” in question to be an increase in communication ability for all posters, and a general increase in all readers’ ability to progress their own mental abilities. I could taboo further, but I don’t see any proportionally significant value in doing so.
Attempting to override your utility function. Effectively, a stab at wetware wireheading.
It’s perhaps worth noting that EY seems to have taken instead the “accurate beliefs and accurate emotions” tack in e.g. The Twelve Virtues of Rationality. Or at least that seems to be what’s implied.
I mean, I suspect “accurate beliefs and useful emotions” really is the way to go; but this is something that—if it really is a sort of consensus here—we need to be much more explicit about, IMO. At the moment there seems to be little about that in the sequences / core articles, or at least little about it that’s explicit (I’m going from memory in making that statement).
Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).
emotion-hacking seems far more important in epistemic rationality, as your understanding of the world is the setting in which you use instrumental rationality, and your “lens” (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.
I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.
whose thoughts and whose behaviors? not disagreeing, just asking.
My thoughts and my behaviors. I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli. But it’s not as if I can respond to other people’s thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.
All emotions are responses to external stimuli, unless your emotions relate only to what is going on in your head, without reference to the outside (i.e. outside your body) world.
I agree you can’t respond to others’ thoughts, unless they express them such that they are “behaviors.” Interestingly, the “problem” you have with the sounds or images (or words?) which purport to be correlated to others’ thoughts is the same exact issue everyone is having with you (or me).
if we’re confident in our own ability to express our thoughts (i.e. the correlation problem is not an issue for you), then how much can we dismiss others’ expressions because of that very same issue?
I don’t understand what point you’re trying to make.
isn’t this the ONLY kind of emotion-hacking out there? what emotions are expressed irrespective of external stimuli? seems like a small or insignificant subset.
the second two paragraphs above are responding to this. sorry to throw it back at you, but perhaps i’m misunderstanding the point you were trying to make here? I thought you were questioning the value of considering/responding to others’ thoughts, because you are arguing that even if you could, you would need to rely on their words and expressions, which may not be correlated with their “true” state of mind.
Let me make some more precise definitions: by “emotional responses to my thoughts” I mean “what I feel when I think a given thought,” e.g. I feel a mild negative emotion when I think about calling people. By “emotional responses to my behavior” I mean “what I feel when I perform a given action,” e.g. I feel a mild negative emotion when I call people. By “emotional responses to external stimuli” I mean “what I feel when a given thing happens in the world around me,” e.g. I feel a mild negative emotion when people call me. The distinction I’m trying to make between my behavior and external stimuli is analogous to the distinction between operant and classical conditioning.
No, I’m just making the point that for the purposes of classifying different kinds of emotion-hacking I don’t find it useful to have a category for other people’s thoughts separate from other people’s behaviors (in contrast to how I find it useful to have a category for my thoughts separate from my behaviors), and the reason is that I don’t have direct access to other people’s thoughts.
What problem?
Thanks for the clarification, now i understand.
Going back to the original comment i commented on:
Particularly with your third type of emotion hacking (“hacking your emotional responses to external stimuli”), it seems emotion hacking is vital for epistemic rationality—i guess that relates to my original point, that hacking emotions is at least as important for epistemic rationality as for instrumental rationality.
I raised the issue originally because I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations to yield a more clear inquiry.
Can you clarify what you mean by this?
sure. note that i don’t offer this as conclusive or correct, but just something i’m thinking about. also, lets assume rational choice theory is universally applicable for decision making.
rational choice theory gives you an equation to use and all we have to do is fill that equation with the proper inputs, value them correctly, and you get an answer. Obviously this is more difficult in practice, particularly where inputs (as to be expected) are not easily convertible to probabilities/numbers—I’m worried this is actually more problematic than we think. Once we have an objective equation as a tool, we may be biased to assume objectivity and truth regarding our answers, even though that belief often is based on the strength of the starting equation and not on our ability to accurately value and include the appropriate subjective factors. To the extent answering a question becomes difficult, we manufacture “certainty” by ignoring subjectivity or assuming it is not as relevant as it is.
Simply put, the belief we have a good and objective starting point biases us to believe we also can/will/actually derive an objectively correct answer, affecting the accuracy with which we fill in the equation.
I agree that this is problematic but don’t see what it has to do with what I’ve been saying.
you suggested that emotion hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. to the extent that is wrong, you’re ignoring emotion hacking (subjective factor) from your application of epistemic rationality.
I’m happy to agree that emotion hacking is important to epistemic rationality.
ok, wasn’t trying to play “gotcha,” just answering your question. good chat, thanks for engaging with me.
Indeed, accurate emotions appear a better description. Consider killing someone might free up many opportunities, and would only have the consequence of bettering many lives; the useful emotion would be happiness at the opportunity to forever end that person’s continued generation and spread of negative utility. Regardless of whether the accurate emotion might yield the same result, I’d trust the decisions of they who emote accurately, for though I know not whither hacking for emotional usefulness leads, a change of values for the disutility of others I strongly suspect.
.
—Mike Sinnett, Boeing’s 787 chief project engineer
Isn’t the point of the article that Boeing may not have actually done at least the first two steps (design cell not to fail, prevent failure of a cell from causing battery problems)?
I am confused.
It’s the point of the problem, anyway.
SInnett is probably a very good designer, but the battery design was outsourced.
-- Noah Brand
I’d prefer if this quote ended with “… and then I got done weeping and started working on my shoe budget,” but oh wells.
“...And then I remembered status is positional, felt superior to the footless man, and stopped weeping.”
Shoes aren’t just about positional social status, are they? (I mean, the difference between a $20 pair of shoes and a $300 pair of shoes mostly is, but the difference between a $20 pair of shoes and no shoes at all isn’t, is it?)
This. If only people realized that unpleasant facts do not cancel each other out, and pointing out one unpleasant fact in addition to another should never ever make us feel better, because it only leaves us in a worse world than we started out in. Compute the actual utilities. It’s such a common and avoidable error.
I think people just accidentally conflate keeping problems in perspective with the idea that the existence of bigger problems makes the small problems negligible and therefore equivalent to non-problems.
I’ve seen this happen with positive things too; sometimes you won’t mind repeatedly doing small favors for someone and they start acting like you not minding means the favor is equivalent to doing nothing from your perspective, which is frustrating when your small but non-zero effort goes unacknowledged.
It’s sort of like approximating sinθ as 0 for small angles. ^_^
Yep. Most people seem to behave as though the choice between spending $5 and spending $10 is a much bigger deal than the choice between spending $120 and spending $125, but if anything it’s the other way round, because in the latter case you’ll be left with less money. (That heuristic does have a point for acausal reasons analogous to these insofar as you’ll have to make the first kind of choice much more often than the second, but people will still behave the same way in one-off situations.)
Another possible motivation for that heuristic: something that’s a good buy for $5 might well be a bad buy for $10, but something that’s a good buy for $120 is probably still a good buy for $125. If I find that a cheap item’s twice the cost I thought it was, that’s more likely to force me to re-do a utilitarian calculation than if I find an expensive item is 4% pricier than I thought it was.
Yes, but OTOH if I’m about to buy something for $125 it isn’t that unlikely that if I looked more carefully I could find someone else selling the same thing for $120, whereas if I’m about to buy something for $10 it’s somewhat unlikely that anyone else would sell the same thing for $5 (so looking around would most likely be a waste of time), and I’d guess these two effects would more-or-less cancel out.
I can often get a $10 good/service for $5 or less if I’m willing to delay consumption or find another seller (e.g. buying used books, not seeing films as soon as they come out, getting food at a canteen or fast food place instead of a pub or restaurant, using buses instead of trains). I might be atypical.
I think both your comment and the quote are forgetting the instrumental purpose of crying and/or feeling bad.
I can’t say I see your point. Mind explaining?
My guess: The purpose of crying is to make people around you more likely to help you.
So if you don’t have shoes, there is a chance that crying in public will make someone give you money to buy the shoes. But if there is a person without feet nearby, your chances become smaller, because people will redirect their limited altruist budgets to that other person. Your crying becomes less profitable.
… Alright, but… that’s a separate point to make altogether. It’s not a quote about making yourself as likely as possible to get others to help you, and, I would say, it doesn’t have to be; it’s a quote about how other people’s negative experiences influence the way you feel about yours.
Unfortunately, I’ve met a lot of people who forget the instrumental purposes of crying and/or feeling bad. =[
But if you look at it other way, then pointing out unpleasant facts about other people’s condition (that don’t apply to us) is equivalent to pointing out good facts about our condition, which should make us feel better, as it leaves us in a better world than we started out in.
That’s exactly the kind of thinking the world needs less of, and the kind that I was trying to warn readers against in the parent comment. Why? Just why would a worse world for someone else make for a better world for you, if that someone is not your mortal enemy? It just makes for a worse world, period.
The point isn’t that you’re taking pleasure in their misfortune, it’s that you’re taking pleasure in your own fortune. “I’m so lucky for having X.” If you don’t do that, then any improvements in your standard of living or situation in general will end up having no impact on your happiness, since you just get used to them and take them for granted and don’t even realize that you would have a million reasons to be happy. And then (in the most extreme case) you’ll end up feeling miserable because of something completely trivial, because you’re ignoring everything that could make you happy and the only things that can have any impact on your state of mind are negative ones.
Someone commented above about the instrumental value of crying and feeling bad, and you’re actually pointing out the case where crying and feeling bad fail at being instrumental. Basically, I’m for whatever attitude that gets you to stop crying and start fixing some problem, and if resetting your baseline helps, it’s fair game! It definitely works for me in some cases.
I think this quote is trying to argue against the attitude that problems that are minor compared to other problems don’t deserve any attention at all. That everyone without shoes should just wrench themselves into happiness and go around being grateful, rather than acknowledging that they keep stepping on snails and pointy things, which sucks, and making productive steps toward acquiring shoes.
I remember reading something about plastic surgeons getting kind of looked down upon because they’re not proper heroic doctors that handle real medical problems.
… I think I see where you’re coming from—by realizing we’re not at the far end of the unhappiness scale (since we have a counterexample to that), we should calibrate our feelings about our situation accordingly, yes?
It’s still not the way I view things; I’d like to say I prefer judging these things according to an absolute standard, but it’s likely that that would be less true for me than I want it to be. To the extent that it doesn’t hold true for me, I think it’s better to take into consideration better states as well as worse ones. Saying, “at least I don’t have it as bad as X” just doesn’t feel enough; everybody who doesn’t have it as bad as X could say it, and people in this category can vary widely in their levels of satisfaction, the more so the worse X has it. It’s more complete to say “Yes, but I don’t have it as good as Y either” or, better yet, “I have it better/worse than my need requires”.
Yes, pretty much.
Yes, yes, but now you are going into far more depth than the original quote. The idea behind the quote seems to have been (at least as I read it): “Be happy that you have feet, having feet is not something you should take for granted.” The quote says nothing more than that. (Well, not quite. The point it makes is not only meant to be reserved for feet specifically, but rather seems to be meant as a comment on anything people take for granted.)
What’s an actual utility?
In the example above: the fact that you have no shoes equates to negative utility for you. If you’re a normal human being who is generally well-intended and wants people to have both feet and shoes for those feet, you would feel upset if you saw someone without feet, hence more negative utility. Your negative utility from having no shoes + negative utility from seeing someone have no feet can only amount to a more negative total score than just the one obtained by considering your own lack of shoes. Even in the case where you’re a complete egoist for whom others’ misfortunes have absolutely no impact on your own personal happiness, if you sum them up again you still end up with the same negative utility from having no shoes. Only if you’re the kind of monster that rejoices in other people’s suffering is it possible for your utility score to rise after seeing someone with no feet. Yet it seems that even people who aren’t complete monsters take comfort in the fact that someone else has it worse than them, and this is intuitive for most people, and counter-intuitive for others, i.e. me, and the person who made the quote.
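The bookkeeping in this argument can be made explicit with a toy model (the numbers and the linear “empathy weight” below are my own illustrative assumptions, not anything from the quote):

```python
# Toy model of the "no shoes / no feet" utility bookkeeping.
# All numbers are illustrative assumptions, not taken from the quote.

def total_utility(own_loss, others_loss, empathy):
    """empathy > 0: altruist; == 0: egoist; < 0: rejoices in others' pain."""
    return own_loss + empathy * others_loss

NO_SHOES = -1.0   # my own misfortune
NO_FEET = -10.0   # the other person's (worse) misfortune

altruist = total_utility(NO_SHOES, NO_FEET, empathy=0.5)   # -6.0: feels worse
egoist   = total_utility(NO_SHOES, NO_FEET, empathy=0.0)   # -1.0: unchanged
sadist   = total_utility(NO_SHOES, NO_FEET, empathy=-0.1)  #  0.0: feels better
```

In this sketch the total only ends up above the egoist’s baseline when the empathy weight is negative, i.e. when someone actively enjoys others’ suffering, which is exactly the point being made.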
(Disclaimer: I haven’t studied utilitarianism formally; probably I’m using more of an everyday definition of the word “utility”, akin to “feel-good-ness” in a broad sense. The way I’ve thought about this problem stems purely from my intuitions.)
Generally speaking, bigger problems tend to be cheaper to solve (i.e. solving them will yield more utilons per dollar); so if there is a painting in a museum that risks being sold, and there are people who risk dying from malaria, the existence of the latter is a good indication that worrying about the former isn’t the most effective use of a given amount of resources. (“Concentrate on the high-order bits”—Umesh Vazirani.) But in this particular case, that heuristic doesn’t seem to work (unless I’m overestimating the cost of prosthetics).
That’s really the entire point of the original quote that this quote is making fun of. The difference between the original and this one is that the author of the second has not updated his baseline expectation that he should have shoes, and that something is wrong if he doesn’t.
Our baseline expectations determine what we consider a “loss”, in the prospect theory sense, so if seeing someone else’s problem helps you reset your baseline, it actually is a way to help you stop weeping and start working on the budget, as it were. What we call “getting perspective” on a situation is basically a name for updating your baseline expectation for how reality “ought to be” at the present moment.
(That isn’t a perfect phrasing, because English doesn’t really have distinct-enough words for different sorts of “oughts” or “shoulds”. The kind I mean is the kind where reality feels awful or crushingly disappointing if it’s not the way it “ought” to be, not the kind where you say that ideally, in a perfect world, things ought to be in thus and such a way, but you don’t experience a bad feeling about it right now. It’s a “near” sort of ought, not a “far” one. Believing the future should be a certain way doesn’t cause this sort of problem, until the future actually arrives.)
I agree that resetting your baseline is often important if you think that your lack of shoes is a soul-crushing awfulness. This quote is mainly arguing against the attitude that says “you have feet therefore your shoe problem is a non-problem, don’t even bother feeling bad or working on it”. It’s comparatively very minor, but it should be fixed just like any other problem. This quote is arguing against resetting your baseline to the point where minor problems get no attention at all.
That may be, but the actual context of the quote it’s arguing with is quite different, on a couple of fronts.
Harold Abbott, the author of the original 1934 couplet (“I had the blues because I had no shoes / Until upon the street, I met a man who had no feet”), wrote it to memorialize an encounter with a happy legless man, at a time when Abbott was dead broke and depressed. (Abbott was not actually lacking in shoes, nor the man only lacking in feet, but apparently in those days people took their couplet-writing seriously. ;-) )
Thing is, at the time he encountered the legless man (who smiled and said good morning), Abbott was actually walking to the bank in order to borrow money to go to Kansas City to look for a job. And not only did he not stop walking to the bank after the encounter, he decided to ask for twice as much money as he had originally intended to borrow. He had in fact raised his sights, rather than lowering them.
That is, the full story is not anything like, “other people have worse problems so STFU”, but rather that your attitude is a choice, and there are probably people who have much worse circumstances than you, who nonetheless have a better attitude. Abbott wrote the couplet to put on his bathroom mirror, as an ongoing reminder to have a positive outlook and persist in the face of adversity.
Which is quite a different message than what Noah Brand’s snarky quip would imply.
I think the problem that people are having with the quote is that it doesn’t actually contain the full story, and when it is repeated outside that context, the meaning they get from parsing the words is “other people have worse problems so STFU”, and it’s not a good idea to go around repeating it if people are going to predictably lack the context and misinterpret it.
I guess I didn’t quote the original article, and he was saying “I am pointing out this problem that is probably not as big or painful as this other problem, but can we please acknowledge its existence also?” And, as often happens with social issues, he was trying to preempt the inevitable “why would we care? we have it worse!” response.
I definitely agree that attitude is a choice! I wasn’t quite aware of the original quote, but I would put it down as an instrumental rationality quote as well. 8) But it sounds like his shoelessness was a symptom of bigger/different problems?
I consider Noah Brand’s quote a rationality quote because it’s a reminder that problems require real solutions. Changing your attitude to be positive is useful, but changing your attitude to accept that something that sucks will continue to suck indefinitely is not the answer.
Yes, his business (a grocery store) had just failed, taking his entire life savings with it. (And the story doesn’t actually say he was shoeless, anyway, just that the rhyme was something he posted on his mirror as a reminder of the encounter.)
“need”
Nope, the thing I’m talking about is closer to what the Buddhists would call an “attachment”, and some Buddhist-influenced writers call an “addiction”. (Others would call it a “desire”, but IMO this is inaccurate: one can desire something without being attached to actually getting it.)
On scientists trying to photograph an atom’s shadow:
-- Luke McKinney, “6 Microscopic Images That Will Blow Your Mind”
Insultingly Stupid Movie Physics’ review of The Core
The remark included the following as a footnote:
32 people in the same ten block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can’t imagine how one could even locate that as an explanation in a matter of seconds.
Also from the review:
Unless the 32 people used the same, or very similar, pacemakers, and somebody forgot to say that.
Still sounds extremely unlikely. If a model of car has a particular design flaw, you’ll expect to hear a lot of reports of that model suffering the same malfunction, but you wouldn’t expect to hear that dozens of units within a certain radius suffered the same malfunction simultaneously. You’d need to subject them all to some sort of outside interference at the same time for that sort of occurrence to be plausible, and an event of that scale ought to leave evidence beyond its effect on all the pacemakers in the vicinity.
If I recall correctly, he also pointed out that the fact they had invited two experts on magnetic fields was also a strong clue.
See also the extra panel (hover over the red button) in yesterday’s SMBC comic.
… I had not known about red buttons on SMBC.
roll d20… success on ‘resist re-binge’ check.
Umm… how do I use the red button on a mobile device? (I also have this problem with xkcd.)
I know that you crossed this out, but the answer to the parenthetical implied question is this: Use the xkcd viewer app.
Android (used by me regularly on my Android phone)
Apple (never used by me because I don’t have an Apple product)
Thank you!
You just press it. It also works with karma scores on LW to see the percentage of positive votes (at least on Android). I didn’t know how to read title texts on xkcd until reading TobyBartels’s comment, though.
--Tom Chivers
I agree, subject to the specification that each such observation must look substantially more like the absence of a duck than a duck. There are many things we see which are not ducks in particular locations. My sock doesn’t look like a duck in my closet, but it also doesn’t look like the absence of a duck in my closet. Or to put it another way, my sock looks exactly like it should look if there’s no duck in my closet, but it also looks exactly like it should look if there is a duck in my closet.
If your sock does not have feathers or duck-shit on it, then it is somewhat more likely that it has not been sat on by a duck.
Insufficiently more likely. I’ve been around ducks many times without that happening to my socks. Log of the likelihood ratio would be close to zero.
You originally were talking about a duck in your closet, which isn’t the same as thing as being around ducks.
The discussion reminds me of this, which makes the point that, while correlation is not causation, if there’s no correlation, there almost certainly isn’t causation.
This is completely wrong, though not many people seem to understand that yet.
For example, the voltage across a capacitor is uncorrelated with the current through it; and another poster has pointed out the example of the thermostat, a topic I’ve also written about on occasion.
It’s a fundamental principle of causal inference that you cannot get causal conclusions from wholly acausal premises and data. (See Judea Pearl, passim.) This applies just as much to negative conclusions as positive. Absence of correlation cannot on its own be taken as evidence of absence of causation.
It depends. While true when the signal is periodic, it is not so in general. A spike of current through the capacitor results in a voltage change. Trivially, if the voltage is an exponential, V = V0 exp(−at), then so is the current (I = C dV/dt = −aC V0 exp(−at)), with perfect (negative) correlation between the two on a given interval.
As for Milton Friedman’s thermostat, only the perfect one is uncorrelated (the better the control system, the less the correlation), and no control system without complete future knowledge of inputs is perfect. Of course, if the control system is good enough, in practice the correlation will drown in the noise. That’s why there is so little good evidence that fiscal (or monetary) policy works.
I skipped some details. A crucial condition is that the voltage be bounded in the long term, which excludes the exponential example. Or for finite intervals, if the voltage is the same at the beginning and the end, then over that interval there will be zero correlation with its first derivative. This is true regardless of periodicity. It can be completely random (but differentiable, and well-behaved enough for the correlation coefficient to exist), and the zero correlation will still hold.
For every control system that works well enough to be considered a control system at all, the correlation will totally drown in the noise. It will be unmeasurably small, and no investigation of the system using statistical techniques can succeed if it is based on the assumption that causation must produce correlation.
For example, take the simple domestic room thermostat, which turns the heating full on when the temperature is some small delta below the set point, and off when it reaches delta above. To a first approximation, when on, the temperature ramps up linearly, and when off it ramps down linearly. A graph of power output against room temperature will consist of two parallel lines, each traversed at constant velocity. As the ambient temperature outside the room varies, the proportion of time spent in the on state will correspondingly vary. This is the only substantial correlation present in the system, and it is between two variables with no direct causal connection. Neither variable will correlate with the temperature inside. The temperature inside, averaged over many cycles, will be exactly at the set point.
It’s only when this control system is close to the limits of its operation—too high or too low an ambient outside temperature—that any measurable correlation develops (due to that approximation of the temperature ramp as linear breaking down). The correlation is a symptom of its incipient lack of control.
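The sawtooth behaviour described above is easy to check numerically. The following is a toy sketch of my own (the set point, rates, and outdoor temperature profile are arbitrary choices, not anything from the thread): a bang-bang heater with a linear leak to the outside.

```python
import numpy as np

SET, DELTA = 20.0, 0.5           # set point and hysteresis half-width (arbitrary)
HEAT, LEAK = 0.1, 0.002          # heating rate and leak coefficient per step
N = 30_000
t = np.arange(N)
t_out = 10.0 * np.sin(2 * np.pi * t / 10_000)   # slowly varying outdoor temperature

t_in = np.empty(N)
heater = np.empty(N)
temp, on = SET, False
for k in range(N):
    if temp < SET - DELTA:
        on = True                # turn full on below the band
    elif temp > SET + DELTA:
        on = False               # turn off above the band
    heater[k] = on
    t_in[k] = temp
    temp += (HEAT if on else 0.0) - LEAK * (temp - t_out[k])

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(corr(heater, t_in))        # near zero: power output vs indoor temperature
print(corr(t_in, t_out))         # near zero: indoor vs outdoor temperature
# The block-averaged duty cycle tracks the outdoor temperature instead:
duty = heater.reshape(-1, 500).mean(axis=1)
ambient = t_out.reshape(-1, 500).mean(axis=1)
print(corr(duty, ambient))       # strongly negative
```

The heater signal and the indoor temperature are essentially uncorrelated, and so are indoor and outdoor temperatures, while the averaged duty cycle tracks the outdoor temperature almost perfectly: the one strong correlation in the system is between two variables with no direct causal link.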
Knowledge of future inputs does not necessarily allow improved control. The room thermostat (assuming the sensing element and the heat sources have been sensibly located) keeps the temperature within delta of the set point, and could not do any better given any information beyond what it has, i.e. the actual temperature in the room. It is quite non-trivial to improve on a well-designed controller that senses nothing but the variable it controls.
Exponential decay is a very very ordinary process to find a capacitor in. Most capacitors are not in feedback control systems.
The capacitor is just a didactic example. Connect it across a laboratory power supply and twiddle the voltage up and down, and you get uncorrelated voltage and current signals.
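A numerical sketch of the twiddling experiment (the component values and the drive signal are my own arbitrary assumptions): keep the voltage bounded while it wanders, and the sample correlation between V and I = C·dV/dt comes out near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
C, dt = 1e-6, 1e-3                    # 1 uF, 1 ms sampling: arbitrary values
# Bounded, randomly wandering drive voltage ("twiddling the knob"):
walk = np.cumsum(rng.normal(0.0, 0.05, 200_000))
v = 5.0 * np.sin(walk)                # cheap way of keeping |V| <= 5 V
i = C * np.diff(v) / dt               # capacitor current: I = C dV/dt
v_mid = (v[1:] + v[:-1]) / 2          # voltage at the instants where i is estimated
print(np.corrcoef(v_mid, i)[0, 1])    # close to zero
```

Because the sum of (v[k+1]+v[k])·(v[k+1]−v[k]) telescopes to the difference of the squared endpoint voltages, the covariance is of order 1/N for any bounded drive, however the knob is twiddled.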
Somewhere at home I have a gadget for using a computer as a signal generator and oscilloscope. I must try this.
On the other hand, I’d guess that 99% of actual capacitors are the gates of digital FETs (simply due to the mindbogglingly large number of FETs). Given just a moment’s glimpse of the current through such a capacitor, you can deduce quite a bit about its voltage.
False. Here (second graph) is an example of a real-life thermostat. The correlation between inside and outside temperatures is evident when the outside temperature varies.
The thermostat isn’t actually doing anything in those graphs from about 7am to 4pm. There’s just a brief burst of heat to pump the temperature up in the early morning and a brief burst of cooling in the late afternoon. Of course the indoor temperature will be heavily influenced by the outdoor temperature. It’s being allowed to vary by more than 4 degrees C.
OK, maybe I misunderstood your original point.
I wonder why EY didn’t make an example of that in Stuff That Makes Stuff Happen.
Examples like the ones I gave are not to be found in Pearl, and hardly at all in the causal analysis literature.
Sorry, can you clarify what you mean by “like the ones”. What is the distinguishing feature?
Dynamical dependencies—one variable depending on the derivative or integral of another. (Dealing with these by discretising time and replacing every variable X by an infinite series X0,X1,X2… does not, I believe, yield any useful analysis.) The result is that correlations associated with direct causal links can be exactly zero, yet not in a way that can be described as cancellation of multiple dependencies. The problem is exacerbated when there are also cyclic dependencies.
There has been some work on causal analysis of dynamical systems with feedback, but there are serious obstacles to existing approaches, which I discuss in a paper I’m currently trying to get published.
Sorry, confused. A function is not always uncorrelated with its derivative. Correlation is a measure of co-linearity, not co-dependence. Do you have any examples where statistical dependence does not imply causality without a faithfulness violation? Would you mind maybe sending me a preprint?
edit to express what I meant better: “Do you have any examples where lack of statistical dependence coexists with causality, and this happens without path cancellations?”
I omitted some details, crucially that the function be bounded. If it is, then the long-term correlation with its derivative tends to zero, provided only that it’s well-behaved enough for the correlation to be defined. Alternatively, for a finite interval, the correlation is zero if the function has the same value at the beginning and the end. This is pretty much immediate from the fact that the integral of x(dx/dt) is (x^2)/2. A similar result holds for time series, the proof proceeding from the discrete analogue of that formula, (x+y)(x-y) = x^2-y^2.
To put that more concretely, if in the long term you’re getting neither richer nor poorer, then there will be no correlation between monthly average bank balance and net monthly income.
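The bank-balance version can be checked directly with the discrete identity above. A minimal sketch (all figures invented): draw random monthly incomes that sum to zero, so that in the long run you are neither richer nor poorer, and correlate them with the monthly average balance.

```python
import numpy as np

rng = np.random.default_rng(1)
income = rng.normal(0.0, 1000.0, 600)       # 50 years of net monthly income
income -= income.mean()                     # neither richer nor poorer overall
start = 5000.0
balance_end = start + np.cumsum(income)     # end-of-month balances
# Monthly average balance, approximated by the mid-month balance:
balance_start = np.concatenate(([start], balance_end[:-1]))
balance_avg = (balance_start + balance_end) / 2
print(np.corrcoef(balance_avg, income)[0, 1])   # zero up to rounding error
```

The sum of balance_avg·income telescopes to (final balance² − starting balance²)/2, which is zero here because the balance ends exactly where it started, so the covariance vanishes up to floating-point error.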
Don’t you mean causality not implying statistical dependence, which is what these examples have been showing? That pretty much is the faithfulness assumption, so of course faithfulness is violated by the systems I’ve mentioned, where causal links are associated with zero correlation. In some cases, if the system is sampled on a timescale longer than its settling time, causal links are associated not only with zero product-moment correlation, but zero mutual information of any sort.
Statistical dependence does imply that somewhere there is causality (considering identity a degenerate case of causality—when X, Y, and Z are independent, X+Y correlates with X+Z). The causality, however, need not be in the same place as the dependence.
Certainly. Is this web page current for your email address?
That’s right, sorry.
I had gotten the impression that you thought causal systems where things are related to derivatives/integrals introduce a case where this happens and it’s not due to “cancellations” but something else. From my point of view, correlation is not a very interesting measure—it’s a holdover from simple parametric statistical models that gets applied far beyond its actual capability.
People misuse simple regression models in the same way. For example, if you use linear causal regressions, direct effects are just regression coefficients. But as soon as you start using interaction terms, this stops being true (but people still try to use coefficients in these cases...)
Yes, the Harvard address still works.
I just noticed your edit:
The capacitor example is one: there is one causal arrow, so no multiple paths that could cancel, and no loops. The arrow could run in either direction, depending on whether the power supply is set up to generate a voltage or a current.
Of course, I is by definition proportional to dV/dt, and this is discoverable by looking at the short-term transient behaviour. But sampled on a long timescale you just get a sequence of i.i.d. pairs.
For cyclic graphs, I’m not sure how “path cancellation” is defined, if it is at all. The generic causal graph of the archetypal control system has arrows D --> P --> O and R --> O --> P, there being a cycle between P and O. The four variables are the Disturbance, the Perception, the Output, and the Reference.
If P = O+D, O is proportional to the integral of R-P, R = zero, and D is a signal varying generally on a time scale slower than the settling time of the loop, then O has a correlation with D close to −1, and O and D have correlations with P close to zero.
There are only two parameters, the settling time of the loop and the timescale of variations in D. So long as the former is substantially less than the latter, these correlations are unchanged.
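This loop is straightforward to simulate. A minimal sketch (the gain and the disturbance waveform are my own arbitrary choices) reproduces the stated correlations:

```python
import numpy as np

N, g = 20_000, 0.2                    # g sets the loop's settling time (~1/g steps)
t = np.arange(N)
# Disturbance varying much more slowly than the loop settles:
D = np.sin(2 * np.pi * t / 2000) + 0.5 * np.sin(2 * np.pi * t / 3700 + 1.0)

O = np.zeros(N)                       # Output
P = np.zeros(N)                       # Perception; the Reference R is zero throughout
for k in range(N - 1):
    P[k] = O[k] + D[k]                # perception = output + disturbance
    O[k + 1] = O[k] - g * P[k]        # output integrates R - P, with R = 0
P[-1] = O[-1] + D[-1]

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(corr(O, D))                     # close to -1
print(corr(P, D))                     # close to 0
print(corr(P, O))                     # close to 0
```

The output almost exactly cancels the disturbance, so O correlates nearly −1 with D, while the perception is reduced to a small, roughly quadrature tracking error that correlates with neither.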
Would you consider this an example of path cancellation? If so, what are the paths, and what excludes this system from the scope of theorems about faithfulness violations having measure zero? Not being a DAG is one reason, of course, but have any such theorems been extended to at least some class of cyclic graphs?
Addendum:
When D is a source with a long-term Gaussian distribution, the statistics of the system are multivariate Gaussian, so correlation coefficients capture the entire statistical dependence. Following your suggestion about non-parametric dependence tests I’ve run simulations in which D instead makes random transitions between +/- 1, and calculated statistics such as Kendall’s tau, but the general pattern is much the same. The controller takes time to respond to the sudden transitions, which allows the zero correlations to turn into weak ones, but that only happens because the controller is failing to control at those moments. The better the controller works, the smaller the correlation of P with O or D.
I’ve also realised that “non-parametric statistics” is a subject like the biology of non-elephants, or the physics of non-linear systems. Shannon mutual information sounds in theory like the best possible measure, but for continuous quantities I can get anything from zero to perfect prediction of one variable from the other just by choosing a suitable bin size for the data. No statistical conclusions without statistical assumptions.
Dear Richard,
I have not forgotten about your paper, I am just extremely busy until early March. Three quick comments though:
(a) People have viewed cyclic models as defining a stable distribution in an appropriate Markov chain. There are some complications, and it seems with cyclic models (unlike the DAG case) the graph which predicts what happens after an intervention, and the graph which represents the independence structure of the equilibrium distribution are not the same graph (this is another reason to treat the statistical and causal graphical models separately). See Richardson and Lauritzen’s chain graph paper for a simple 4 node example of this.
So when we say there is a faithfulness violation, we have to make sure we are talking about the right graph representing the right distribution.
(b) In general I view a derivative not as a node, but as an effect. So e.g. in a linear model:
y = f(x) = ax + e
dy/dx = a = E[y|do(x=1)] - E[y|do(x=0)], which is just the causal effect of x on y on the mean difference scale.
In general, the partial derivative of the outcome wrt some treatment holding the other treatments constant is a kind of direct causal effect. So viewed through that lens it is not perhaps so surprising that x and dy/dx are independent. After all, the direct effect/derivative is a function of p(y|do(x),do(other parents of y)), and we know do(.) cuts incoming arcs to y, so the distribution p(y|do(x),do(other parents of y)) is independent of p(x) by construction.
But this is more an explanation of why derivatives sensibly represent interventional effects, not whether there is something more to this observation (I think there might be). I do feel that Newton’s intuition for doing derivatives was trying to formalize a limit of “wiggle the independent variable and see what happens to the dependent variable”, which is precisely the causal effect. He was worried about physical systems, also, where causality is fairly clear.
In general, p(y) and any function of p(y | do(x)) are not independent of course.
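A quick simulation of the linear model above illustrates point (b) (the coefficient, noise scale, and sample size are arbitrary): the interventional contrast E[y|do(x=1)] − E[y|do(x=0)] recovers a = dy/dx.

```python
import numpy as np

rng = np.random.default_rng(42)
a, n = 1.7, 100_000                   # true coefficient and sample size (arbitrary)

def sample_y_do_x(x_value):
    """Sample y = a*x + e under the intervention do(x = x_value)."""
    e = rng.normal(0.0, 1.0, n)
    return a * x_value + e

# The interventional mean difference recovers the derivative dy/dx = a:
effect = sample_y_do_x(1.0).mean() - sample_y_do_x(0.0).mean()
print(effect)                         # close to 1.7
```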
(c) I think you define a causal model in terms of the Markov factorization, which I disagree with. The Markov factorization
p(x1, …, xn) = ∏_i p(xi | pa(xi))
defines a statistical model. To define a causal model you essentially need to formally state that parents of every node are that node’s direct causes. Usually people use the truncated factorization (g-formula) to do this. See, e.g. chapter 1 in Pearl’s book.
I think that also works with acyclic graphs: suppose you have an arrow from “eXercising” to “Eating a lot”, one from “Eating a lot” to “gaining Weight”, and one from “eXercising” to “gaining Weight”, and P(X) = 0.5, P(E|X) = 0.99, P(E|~X) = 0.01, P(W|X E) = 0.5, P(W|X ~E) = 0.01, P(W|~X E) = 0.99, P(W|~X ~E) = 0.5. Then W would be nearly uncorrelated with X (P(W|X) = 0.4951, P(W|~X) = 0.5049) and nearly uncorrelated with E (P(W|E) = 0.5049, P(W|~E) = 0.4951), but it doesn’t mean it isn’t caused by either.
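Enumerating the joint distribution is a quick way to check the arithmetic in this example. A small sketch (using the conditional tables given above):

```python
from itertools import product

# Conditional tables: X = eXercising, E = Eating a lot, W = gaining Weight
P_X = {True: 0.5, False: 0.5}
P_E_given_X = {True: 0.99, False: 0.01}
P_W_given_XE = {(True, True): 0.5, (True, False): 0.01,
                (False, True): 0.99, (False, False): 0.5}

def prob(w=None, x=None, e=None):
    """Probability of the given event by brute-force enumeration of the joint."""
    total = 0.0
    for X, E, W in product([True, False], repeat=3):
        if (x is not None and X != x) or (e is not None and E != e) \
                or (w is not None and W != w):
            continue
        p_e = P_E_given_X[X] if E else 1 - P_E_given_X[X]
        p_w = P_W_given_XE[(X, E)] if W else 1 - P_W_given_XE[(X, E)]
        total += P_X[X] * p_e * p_w
    return total

print(prob(w=True, x=True) / prob(x=True))    # P(W|X)  = 0.4951
print(prob(w=True, x=False) / prob(x=False))  # P(W|~X) = 0.5049
print(prob(w=True, e=True) / prob(e=True))    # P(W|E)  = 0.5049
print(prob(w=True, e=False) / prob(e=False))  # P(W|~E) = 0.4951
```

The conditionals all land within half a percentage point of 0.5, so W is indeed nearly independent of both of its causes.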
Yes, this is the mechanism of cancellation of multiple causal paths. In theory one can prove, with assumptions akin to the ideal point masses and inextensible strings of physics exercises, that the probability of exact cancellation is zero; in practice, finite sample sizes mean that cancellation cannot necessarily be excluded.
And then to complicate that example, consider a professional boxer who is trying to maintain his weight just below the top of a given competition band. You then have additional causal arrows back from Weight to both eXercise and Eating. As long as he succeeds in controlling his weight, it won’t correlate with exercise or eating.
Yes, this is completely wrong. There is frequently no correlation but strong causation due to effect cancellation (homeostasis, etc.)
Here’s a recent paper making this point in the context of mediation analysis in social science (I could post many more):
http://www.quantpsy.org/pubs/rucker_preacher_tormala_petty_2011.pdf
Nancy, I don’t mean to jump on you specifically here, but this does seem to me to be a special instance of a general online forum disease, where people {prefer to use | view as authoritative} online sources of information (blogs, wikipedia, even tvtropes, etc.) vs mainstream sources (books, academic papers, professionals). Vinge calls it “the net of a million lies” for a reason!
I didn’t feel jumped on, though I still don’t have a feeling for how common causation without correlation is.
The common example I go on about is any situation where a system generally succeeds at achieving a goal. This is a very large class. In such situations there will tend to be an absence of correlation between the effort made and the success at achieving it. The effort will correlate instead with the difficulties in the way. Effort and difficulty together cause the result; result and goal together cause effort.
A few concrete examples. If my central heating system works properly and I am willing to spend what it takes to keep warm, the indoor temperature of my house will be independent of both fuel consumption and external temperature, although it is caused by them.
If a government’s actions in support of some policy target are actually effective, there may appear to be little correlation between actions and outcome, creating the appearance that their actions are irrelevant.
An automatic pilot will keep an aircraft at a constant heading, speed, and altitude. Movements of the flight controls will closely respond to external air currents, even if those currents are not being sensed. Neither need correlate with such variations as remain in the trajectory of the plane, although these are caused by the flight controls and the external conditions.
“The carpets are so clean, we don’t need janitors!”
“When you do things right, people won’t be sure you’ve done anything at all.”
Not disagreeing, but just wanted to mention the useful lesson that there are some cases of causation without correlation. For example, the fuel burned by a furnace is uncorrelated with the temperature inside a home. (See: Milton Friedman’s thermostat.)
I’m not sure I understand this. Do you mean that the way your shoe looks is not evidence for the presence or absence of a duck somewhere in your closet?
I think the original quote was meant to imply that as long as your shoe doesn’t have the properties that differentiate ducks from non-ducks then your shoe possesses the absence of duck properties and should be assumed to be a non-duck. In other words, for a given object each property must have a binary value for duckness and when all properties have non-duckness values, you should conclude that the object as a whole has a non-duckness property.
I get confused by too many negatives and ducks.
I’ve just come across a fascinatingly compact observation by I. J. Good:
This is a beautifully simple recipe for a conflict of interest:
Considering absolute losses assuming failure and absolute gains conditioned on success, an adviser is incentivized to give the wrong advice precisely when:
the ratio of agent loss to agent gain
exceeds the odds of success versus failure,
which in turn exceeds the ratio of principal loss to principal gain.
You can see this reflected in a lot of cases because the gains to an advisor often don’t scale anywhere near as fast as the gains to society or a firm. It’s the Fearful Committee Formula.
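Concrete numbers (mine, purely illustrative) make the Fearful Committee Formula vivid. Writing l, v for the adviser’s personal loss and gain and L, V for the principal’s, the condition l/v > p/(1-p) > L/V describes a venture the adviser is selfishly incentivized to reject even though it is a good bet for the principal:

```python
p = 0.4                      # probability the venture succeeds (illustrative)
odds = p / (1 - p)           # odds of success versus failure, here 2/3

v, l = 1.0, 2.0              # adviser's personal gain/loss: l/v = 2.0 > odds
V, L = 10.0, 1.0             # principal's gain/loss:        L/V = 0.1 < odds

ev_adviser = p * v - (1 - p) * l      # expected value to the adviser of a "yes"
ev_principal = p * V - (1 - p) * L    # expected value to the principal
print(ev_adviser, ev_principal)       # roughly -0.8 and 3.4
```

The adviser expects to lose by recommending the venture while the principal expects to gain, so the honest recommendation costs the adviser in expectation.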
Which is not nearly as common as the reverse, the Reckless Adviser Formula, in which the personal loss to the adviser is so low and the potential personal gain so high that they recommend adoption even when the expected gain for the company is negative.
In general, this is referred to as the principal-agent problem.
Note that the adviser’s ethical problem also exists if L/V > p/(1-p) > l/v.
Is the order also inverted in the original?
Fixed.
I. J. Good’s original, which I’ve somewhat abridged, explicitly specifies that there are no competitors who cause visible losses/gains after the invention is rejected.
To clarify, this is a summary of what you’ve excluded in your quote, not a response to the other case where the ethical problem exists, correct?
It’s a summary of what I excluded—I had actually misinterpreted, hence my quote indeed was not a valid reply! The other case is indeed real, sorry.
Name three?
The success of Market-Based Management / Koch Industries appears to be due at least in part to their focus on NPV at the managerial level. You get stories like (from memory, and thus subject to fuzz) the manager of a refining plant selling the land the plant was on to a casino which was moving to the area, which he was rewarded for doing because the land the plant was on was more valuable to the casino than the company, even after factoring in the time lost because the plant was shut down and relocated. The corporate culture (and pay incentive structure) rewarded that sort of lateral use of resources, whereas a culture which compartmentalized people and departments would have balked at the lost time and disruption.
-- John C Wright
That reminds me of http://xkcd.com/690/.
Also:
-- Raymond Arritt
(Quoting this before dinner is making me hungry.)
Wikipedia may ultimately have to do one of two things, or both:
1) Provide better structure for alternate versions of contested ideas
2) Construct a practically effective demarcation between strictly factual domains, and anything more interpretive.
Such a demarcation will always be challenged; I don’t see any way around that, but I’d also insist that it’s necessary for our sanity. Suppose it were possible, maybe using a browser with links to a database, to try to “brand” (or give the underwriter’s seal of approval to) those pages that provided straightforward factual assertions, unretouched photographs, and scans of original source texts (such as all newspapers of which a copy still exists), and to promote the idea that the respectability of any interpretive or ethical claim consists very largely in its groundedness, shown by links to the “smells like a fact” zone.
Several versions with explicit labeling of which viewpoint each represents would be a huge step in improving general information retrieval. Hypertext in general was obviously a huge leap, but the problem of presenting the evolution of a school of thought on a particular subject has not been solved satisfactorily, IMO. The path dependence of various things is still among the information we regularly fail to record or actively throw away. We should not be reliant upon brilliant synthesists taking interest in each subject and writing a well-organized history.
-Yevgeny Yevtushenko
Ironically, the man Yevtushenko is now dead too; but the world Yevtushenko, asteroid number 4234, lives on.
I wonder if we’ll ever learn to reconstruct people-shadows from other people’s memories of them. Also, whether this is a worthwhile thing to be doing.
It’s a little creepy the way Facebook keeps dead people’s accounts around now.
I imagine that depends on what we’re willing to consider a “person-shadow”.
Any thoughts on what your minimum standard for such a thing would be?
For example, I suspect that if we’re ever able to construct artificial minds in a parameterized way at all (as opposed to merely replicating an existing mind as a black box), it won’t prove too difficult thereafter, given access to all my writings and history and whatnot, to create a mind that identifies itself as “Dave” and acts in many of the same ways I would have acted in similar situations.
I don’t know if that would be a worthwhile thing to do. If so, it would presumably only be worthwhile for what amount to entertainment purposes… people who enjoy interacting with me might enjoy interacting with such a mind in my absence.
I occasionally have dreams about people who have died in which they seem really real, where they’re not saying stuff they’ve said when they were alive but stuff that sounds like something they would say. But it’s not profound original thoughts or anything? So I think what I’m thinking is pretty close to what you’re describing.
I guess if we can make one of these, then we could see how different people’s mental models of that person were? Probably there is stuff in my mental model that I can’t articulate! Stuff that’s still useful information!
But maybe people will start using these instead of faking their deaths if they wanted to run away.
I’ve suspected—though we’re talking maybe p = 0.2 here—for a while that our internal representations of people we know well might have some of the characteristics of sapience. Not enough to be fully realized persons, but enough that there’s a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes. Accounts like your dreams seem like they might be weak evidence for that line of thought.
Authors commonly feel like the characters they write about are real, to various extents. On the mildest end of the spectrum, the characters will just surprise their creators, doing something completely contrary to the author’s expectations when they’re put in a specific scene and forcing a complete rewrite of the plot. (“These two characters were supposed to have a huge fight and hate each other for the rest of their lives, but then they actually ended up confessing their love for each other and now it looks like they’ll be happily married. This book was supposed to be about their mutual feud, so what the heck do I do now?”) Or they might just “refuse” to do something that the author wants them to do, and she’ll feel miserable afterwards if she forces the characters to act in the wrong way nevertheless. On the other end of the spectrum, the author can actually have real conversations with them going on in her head.
I’m not much of an author, but I’ve had this happen.
My mental character-models generally have no fourth wall, which has on several occasions led to them fighting each other for my attention so as to not fade away. I’m reasonably sure I’m not insane.
That sounds mystical.
Nah, this doesn’t require any magic; just code reuse or the equivalent. If the cognitive mechanisms that we use to simulate other people are similar enough to those we use to run our own minds, it seems logical that those simulations, once rich and coherent enough, could acquire some characteristics of our minds that we normally think of as privileged. It follows that they could then diverge from their prototypes if there’s not some fairly sophisticated error correction built in.
This seems plausible to me because evolution’s usually a pretty parsimonious process; I wouldn’t expect it to develop an independent mechanism for representing other minds when it’s got a perfectly good mechanism for representing the self. Or vice versa; with the mirror test in mind it’s plausible that self-image is a consequence of sufficiently good other-modeling, not the other way around.
Of course, I don’t have anything I’d consider strong evidence for this—hence the lowish probability estimate.
Relevant smbc.
So, in a way Batman exists when you imagine yourself to be Batman? Do you still coexist then (since it is your cognitive architecture after all)?
I’d say that of course any high level process running on your mind has characteristics of your mind, after all, it is running on your mind. Those, however, would still be characteristics inherent to you, not to Batman.
If you were thinking of a nuclear detonation, running through the equations, would that bomb exist inside your mind?
Having a good mental model of someone and “consulting” it (apart from that model not matching the original anyways) seems to me more like your brain playing “what if”, and the accompanying consciousness and assorted properties still belonging to you pretending what-if, not to the what-if itself.
My cached reply: “taboo exist”.
This whole train of discussion started with
I’d argue that those characteristics of sapience still belong to the system that’s playing “what-if”, not to the what-if itself. There, no exist :-)
I was wondering whether things might be slightly different if you simulated batman-sapience by running the internal representation through simulations of self-awareness and decision-making, using one’s own blackboxes as substitutes, attempting to mentally simulate in as much detail as possible every conscious mental process while sharing braintime on the subconscious ones.
Then I got really interested in this crazy idea and decided to do science and try it.
Shouldn’t have done that.
It might not be entirely off base to say that a Batman or at least part of a Batman exists under those circumstances, if your representation of Batman is sophisticated enough and if this line of thought about modeling is accurate. It might be quite different from someone else’s Batman, though; fictional characters kind of muddy the waters here. Especially ones who’ve been interpreted that many different ways.
The line between playing what-if and harboring a divergent cognitive object—I’m not sure I want to call it a mind -- seems pretty blurry to me; I wouldn’t think there’d be a specific point at which your representation of a friend stops being a mere what-if scenario, just a gradually increasing independence and fidelity as your model gets better and thinking in that mode becomes more natural.
I think the best way to say it is to say that Batman-as-Batman does not exist, but Batman-as-your-internal-representation-of-Batman does exist. I most certainly agree though that the distinction can be extremely blurry.
Has there been any work on how our internal representations of other people get built? I’ve only heard about the thin-slicing phenomenon but not much beyond that. I feel like sometimes people extrapolate pretty accurately—like, “[person] would never do that” or “[person] will probably just say this” but I don’t know how we know. I just kinda feel that a certain thing is something a certain person would do but I can’t tell always what they did that makes me think so or that I’m simulating a state machine or anything.
Exercise: pick a sentence to tell someone you know well, perhaps asking a question. Write down ahead of time exactly what you think they might say. Make a few different variations if you feel like it. Then ask them and record exactly what they do say. Repeat. Let us know if you see anything interesting.
There’s been some, yeah. I haven’t been able to find anything that looks terribly deep or low-level yet, and very little taking a cognitive science rather than traditional psychology approach, but Google and Wikipedia have turned up a few papers.
This isn’t my field, though; perhaps some passing psychologist or cognitive scientist would have a better idea of the current state of theory.
Relevant: Greg Egan, “Steve Fever”.
--Sam Harris
You put them into a social environment where the high status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.
How would this encourage them to actually value logic and evidence instead of just appearing to do so?
The subject’s capacity for deception is finite, and will be needed elsewhere. Sooner or later it becomes more cost-effective for the sincere belief to change.
That is breathtakingly both the most cynical and beautiful thing I have read all day :)
Postcynicism FTW!
I generally agree with your point. The problem with the specific application is that the subject’s capacity for thinking logically (especially if you want the logic to be correct) is even more limited.
If the subject is marginally capable of logical thought, the straightforward response is to try stupid random things until it becomes obvious that going along with what you want is the least exhausting option. Even fruit flies are capable of learning from personal experience.
In the event of total incapacity at logical thought… why are you going to all this trouble? What do you actually want?
That depends on how much effort you’re willing to spend on each subject verifying that they’re not faking.
People tend to conform to their peers’ values.
And for that matter, to start believing what they behave as if they believe.
It’s not a question of encouragement. Humans tend to want to be like the high status folk that they look up to.
Want to be like or appear to be like? I’m not convinced people can be relied on to make the distinction, much less choose the “correct” one.
Or do they want to be like what those folks appear to be?
I think the most common human tactic for appearing to care is to lie to themselves about caring until they actually believe they care; once this is in place they keep up appearances by actually caring if anyone is looking, and if people look often enough this just becomes actually caring.
Maybe the idea could gain popularity from a survival-island type reality program in which contestants have to measure the height of trees without climbing them, calculate the diameter of the earth, or demonstrate the existence of electrons (in order of increasing difficulty).
Couple of attempts:
The hard sciences
Professions with a professional code of ethics, and consequences for violating it.
This reminds me of
which I believe is a paraphrasing of something Jonathan Swift said, but I’m not sure. Anyone have the original?
I don’t think this is empirically true, though. Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about “crime being on the rise” all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.
Then you show me some statistics, and I change my mind.
In general, I think a supermajority of our starting opinions (priors, essentially) are held for reasons that would not pass muster as ‘rational,’ even if we were being generous with that word. This is partly because we have to internalize a lot of things in our youth and we can’t afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn’t mean we’re incapable of having our minds changed.
The chance of this working depends greatly on how significant the contested fact is to your identity. You may be willing to believe abstractly that crime rates are down and public safety is up after being shown statistics to that effect—but I predict that (for example) a parent who’d previously been worried about child abductions after hearing several highly publicized news stories, and who’d already adopted and vigorously defended childrearing policies consistent with this fear, would be much less likely to update their policies after seeing an analogous set of statistics.
I agree, but I think part of the process of having your mind changed is the understanding that you came to believe those internalized things in a haphazard way. And you might be resisting that understanding because of the reasons @Nornagest mentions—you’ve invested into them or incorporated them into your identity, for example. I think I’m more inclined to change the quote to
to make it slightly more useful in practice, because often changing the person’s mind will require not only knowing the more accurate facts or proper reasoning, but also knowing why the person is attached to his old position—and people generally don’t reveal that until they’re ready to change their mind on their own.
Oops, I guess I wasn’t sure where to put this comment.
It looks to me like you arrived at this position via weighing the available evidence. In other words, you reasoned yourself into it. Upon second reading I see you don’t have a base rate for the amount of violent crime on the news in peaceful countries, and you derived a high absolute level from a high[er than you’d like] rate of change. But you’ve shown a willingness to reason, even if you reasoned poorly (as poorly as me when I’m not careful. Scary!) So I think jooyus’ quote survives.
If you can’t appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.
Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they’ll figure it out by themselves. No one really believes God will protect them from harm...
I have some friends who do… At least insofar as things like “I don’t have to worry about finances because God is watching over me, so I won’t bother trying to keep a balanced budget.” Then again, being financially irresponsible (a behaviour I find extremely hard to understand and sympathize with) seems to be common-ish, and not just among people who think God will take care of their problems.
Why not? Thinking about money is work. It involves numbers.
Moreover, it often involves a great deal of stress. Small wonder that many people try to avoid that stress by just not thinking about how they spend money.
Well… as something completely and obviously deterministic (the amount of money you have at the end of the month is the amount you had at the beginning of the month, plus the amount you’ve earned, minus the amount you’ve spent, for a sufficiently broad definition of “earn” and “spend”), that’s about the last situation in which I’d expect people to rely on God. With stuff which is largely affected by factors you cannot control directly (e.g. your health) I would be much less surprised.
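The accounting identity being described is trivial to write down, which is part of the point; a minimal sketch (the function and figures are illustrative, not from the comment):

```python
def end_of_month_balance(start, earned, spent):
    """The deterministic identity: what you end the month with is what you
    started with, plus everything earned, minus everything spent
    (for a sufficiently broad definition of "earn" and "spend")."""
    return start + sum(earned) - sum(spent)

# One paycheck in, three expenses out:
print(end_of_month_balance(1000, earned=[2500], spent=[800, 600, 450]))  # 1650
```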
Once you have those figures, it is deterministic; however, at the start of the month, those figures are not yet determined. One might win a small prize in a lottery; the price of some staple might unexpectedly increase or decrease; an aunt may or may not send an expensive gift; a minor traffic accident may or may not happen, requiring immediate expensive repairs.
So there are factors that you cannot control that affect your finances.
Does this cause you to doubt the veracity of the claim in the parent, or to update towards your model of what people rely on God for being wrong? I guess it should probably be both, to some extent. It’s just not really clear from your post which you’re doing.
Mostly the latter, as per Hanlon’s razor.
“Praying for healing” was quite a common occurrence at my friend’s church. I didn’t pick that as an example because it’s a lot less straightforward. Praying for healing probably does appear to help sometimes (placebo effect), and it’s hard enough for people who don’t believe in God to be rational about health–there aren’t just factors you cannot control, there are plenty of factors we don’t understand.
There hasn’t been a lot of money spent researching it, but meta-analyses of the studies that have been conducted show that on average there is no placebo effect.
That’s really interesting...I had not heard that. Thanks for the info!
I think that’s mostly because money is too abstract, and as long as you get by you don’t even realize what you’ve lost. Survival is much more real.
Sadly, that only works on a natural-selection basis, so the ethics boards forbid us from doing this. If they never see anyone actually failing to survive, they won’t change their behavior.
Can’t make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.
If you threaten someone’s survival, they are likely to get emotional. That’s not the best mental state for applying logic.
Suicide bombers don’t suddenly start believing in reason just before they are sent out to kill themselves.
Soldiers in trenches who fear for their lives on the other hand do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.
Does it promote religiousness or attract the religious?
I think it just promotes grasping at straws.
Take all their stuff. Tell them that they have no evidence that it’s theirs and no logical arguments that they should be allowed to keep it.
They beat you up. People who haven’t specialized in logic and evidence have not therefore been idle.
Shoot them?
I think you just independently invented the holy war.
This is from the Sam Harris vs. William Lane Craig debate, starting around the 44 minute mark. IIRC, Luke’s old website has a review of this particular debate.
You can find out what persuades them and give them that.
And in some instances that would likely be what we call logic or evidence.
You usually can’t get someone with a spider phobia to drop their phobia by trying to convince them with logic or evidence. On the other hand, there are psychological strategies to help them get rid of the phobia.
I think cognitive behavioural therapy for phobias, which seems to work pretty well in a large number of cases, actually relies on helping people see that their fear is irrational.
As someone with a phobia, I can tell you from experience that realizing your fear is irrational doesn’t actually make the fear go away. Sometimes it even makes you feel more guilty for having it in the first place. Realizing it’s irrational just helps you develop coping strategies for acting normal when you’re freaking out in public.
Oh sure, I can definitely believe that. Maybe a better choice of wording above would have been “internalise” rather than “see”, which would rather negate my point, I guess. Or maybe it works differently for some people. I don’t have any experience with phobias or CBT myself.
It’s alief vs. belief. It’s one thing to see that, in theory, almost all spiders are harmless. It’s another to remain calm in the presence of a spider if you’ve had a history of being terrified of them.
Desensitization is a process of teaching a person how to calm themselves, and then exposing them to things which are just a little like spiders (a picture of a cartoon spider, perhaps, or the word spider). When they can calm themselves around that, they’re exposed to something a little more like a spider, and learn to be calm around that.
The alief system can learn, but it’s not necessarily a verbal process.
Even when it is verbal, as when someone learns to identify various sorts of irrational thoughts, it’s much slower than understanding an argument.
Right; that’s the “behavioural” part of cognitive behavioural therapy, right? But the “cognitive” part is an explicit, verbal process.
From this recent talk
I cannot express how true this is, at least not without a lot of swear words.
Aubrey de Grey being an immortalist himself, I’m assuming the irony to be unintentional?
Haha, didn’t occur to me until I read your comment, so there’s one data point for you.
/clicks link, watches
… I can barely understand a single word this guy is saying. Is it just me or is the audio in that video really bad? I don’t suppose it was transcribed anywhere?
It’s not just you. It was comprehensible but annoying for approximately the first 10 minutes, and then it became completely muddy. I hope there’s a transcript somewhere.
I’m confused. I thought that deathpigeon’s quote was downvoted because it was anti-deathism and not rationality, but this quote is similar in that way and it has lots of upvotes. Was deathpigeon’s quote actually downvoted because it incorrectly attributed a line to ASoIaF instead of Game of Thrones? Seriously?
I wouldn’t think so, but I wasn’t expecting five upvotes on my comment saying so, either. Maybe we really are that pedantic.
This is only incidentally anti-deathist, though; its substance has more to do with popular reactions to controversial ideas. Which doesn’t seem all that shiningly rational to me either, but perhaps I’m missing something.
Or we all secretly love anti-deathist quotes, and only downvote them when they have no rationality content because we feel it’s our duty, but when we see one that can be interpreted as slightly rationalist, we seize the excuse to upvote it. Or our liking for a quote based on its anti-deathism enhances our appreciation for its insight into rationality, via the affect heuristic.
Or perhaps there are more criteria (aesthetic, informational, other) by which these quotes may be judged than whether they are anti-death or not.
And that other quote is neither ASoIaF nor TV series, it’s a misquotation.
.
The first response that comes to my mind is “because if the butterfly were trying that hard to escape the kid, it would fly above the kid’s reach, and the kid would give up.” When I look at the scene, I see a kid chasing a butterfly, and a butterfly too stupid to realize it should flee instead of simply dodging.
Animals on the intelligence levels of butterflies (which, keep in mind, have specific mating flight patterns they use to tell other members of their species apart from things like ribbons and stray flower petals,) don’t seem to even have retreat instincts, just avoidance instincts. They can’t recognize persistent pursuit. A fly won’t hesitate to land on a person who has been trying to swat it for minutes on end.
Because you’re a human, not a butterfly. It seems like an animal that used a cognitive filter that defaulted to the latter case would take a pretty severe fitness hit.
Three things, in no particular order:
I seem to recall that, in some obscure language, each noun has an agency level and in a sentence the most agenty noun is the subject by default, unless the verb is specially inflected to show otherwise: for example, “[dog] [bite] [man]” would mean ‘a man bit a dog’, regardless of word order, because the noun “[man]” has higher agency than “[dog]”.
Would you sooner see a tiger chasing a man, or a man running away from a tiger? If the former, it’s not just the fact that butterflies are not human, it’s the fact that the butterflies are small.
I think that, at least in the case of the tiger, it would also depend on whether the two of them are moving towards the left or the right side of my visual field. I heard that in _The Great Wave off Kanagawa_ the boats are intended to look more agenty than the wave, but for Western people it will typically look the other way round (due to Western languages being written from left to right), and for a Westerner to get the right effect they’d have to look at the picture in a mirror. (It works for me, at least.)
Is this visual field orientation issue really Western vs Eastern? If so, has it evaporated lately?
One of the media that most lends itself to testing this notion is video games, since there is almost always an agent, and often a preferred direction to gameplay. In some cases, there is a lot of free movement but when you enter a new zone/approach a boss, it generally goes one way rather than the other.
Eastern games favoring left-to-right over right-to-left: Super Mario Brothers, Ninja Gaiden, Megaman, Ghosts and Goblins, Double Dragon, TMNT, River City Ransom, Sonic the Hedgehog, Gradius/Lifeforce, UN Squadron, Rygar, Contra, Codename: Viper, Faxanadu (at least, the beginning, which is all I saw), Excitebike, Zelda 2, Act Raiser, Wizards and Warriors, and Cave Story.
On the other side, Final Fantasy combat generally puts the party on the right side, facing left. That’s pretty leftward-oriented for sure. And very slightly—more slightly than any of the above—Metroid. Whenever you find a major powerup, you approach it from the right. You enter Tourian (the last area) from the right, and approach all 3 full bosses from the right. Those two are all I can think of with any sort of leftward bias at all.
In the west, the only games I can think of that favor right-to-left over left-to-right are Choplifter and Solaris; also, we get slightly-leftward readings on the Atari game of The Empire Strikes Back (you go left to meet the attack, but the primary agents are the attacking walkers, which are going right, and you need to keep up with them) and Pitfall (it seems mainly designed for players going right… which meant it was easier to turn around and go left; however, I’m sure the designer did this intentionally).
In absolute terms and even more at a fractional level, that’s more than the eastern games.
… Now my head hurts. And man, going to a boarding school at a young age really exposed me to a lot of games.
Huh, I just tried that, and it works for me too. When you mirror it, it looks like they’re going into the wave instead of fleeing from it. The effect is really strong; I wondered if it would still work when I knew about it, but it does.
BTW, does anyone get different effects from the emoticons :-/ and :-\ or it’s just me?
V erpragyl qvfpbirerq gung, juvyr gurl fhccbfrq gb or flabalzbhf (ba Snprobbx gurl eraqre gb gur fnzr cvp), gb zr gur sbezre srryf zber yvxr “crecyrkvgl, pbashfvba” (naq gung’f ubj V trarenyyl hfr vg), jurernf gur ynggre srryf zber yvxr “qvfnccebiny” (naq V bayl fnj gung orpnhfr zl cubar unf :-\ ohg abg :-/ nzbat gur cer-pbzcbfrq rzbgvpbaf, fb V cvpxrq gur sbezre ohg vg qvqa’g ybbx evtug gb zr).
[Edited to move the question to the front and rot-13 the rest as per Nesov’s suggestion.]
You shouldn’t prime the audience before asking a question like that.
Good point. Fixed.
Interesting. In the normal version, it looks to me like the waves are lifting the boats, and mirror-reversed it looks like the boats are driving against it.
Actually, my normal way to look at it is to focus on the wave, then the mountain, and scarcely notice the boats.
On my first look at the mirror version, the wave looked like a giant claw attacking the mountain.
Yeah, I spent a while looking for the boats in the image… I thought one of them was a beach. I think the question of which is more “agenty” was contaminated for me, though, since I read the comments before following the link to look at the image. I can make myself see either the wave as ‘chasing’ the boats, or the boats as fleeing the wave, or the boats sailing into the wave...
For me, the default orientation of the picture makes it seem like the boats are moving into it, while the flipped version makes it seem like the wave is agent-ly ‘attacking’ the boats. The difference in agentiness is more pronounced in the flipped version, though. (I’m Asian-American.)
Did you grow up in America? Would this be consistent with a genetic basis, or have you been exposed to RTL language previously?
Born and raised in the US, so English is my primary language. I had some long-term exposure to Chinese growing up as a kid (generally written up-to-down then right-to-left in our workbooks). Speaking and understanding (rudimentary) Chinese has stuck with me; the writing and reading of, has not.
Don’t good hunters have good mental models of their prey? I mean I get that you’re thinking that it wouldn’t help to feel sympathy for animals of other species. But it would help in many cases to have empathy, and to see things from the other animal’s perspective.
Butterflies are not, and to my knowledge have never been, a major prey item for H. sapiens.
Ozy Frantz—Brain Chemicals are not Fucking Magic
Klingon proverb.
--Gabe Newell during a talk. The whole talk is worthwhile if you’re interested in institutional design or Valve.
What’s the percent chance that I’m doing it wrong?
The whole quote:
The problems you face might not require a serious approach; without more information, I can’t say.
78.544%.
-Alex Tabarrok
One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.
Not necessarily. You don’t punish people based on their likelihood of being guilty but based on the severity of their crime.
If torture is used as a tool to gain information instead of being used to punish, it’s even more questionable whether the likelihood of being guilty correlates with the severity of the torture. The fact that someone decides to torture to get more information suggests that they have an insufficient amount of information.
If there’s a 50% chance that a person has information that can prevent a nuclear explosion, you can argue that it’s ethical to torture to get that information.
After the bomb has exploded and you know for certain who did the crime, there’s not much need to torture anyone.
An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don’t.
Even the first person who was tortured might be innocent or ignorant.
Yes, but that’s beside the point I tried to make. Torturing in general produces a dynamic that makes you punish more innocent people.
It seems to me that the same would apply to any in-group. The reasoning runs more-or-less as follows:
It is us (not me personally, but a group with which I strongly identify) that is treating this person badly; since we are doing it, then he must deserve it. Since he deserves it, he must be guilty. This is because if he did not deserve it, then I would be horrified at the actions of people I have always tried to emulate; and that, in turn, would mean that I had already given some support to an evil group, and had indeed put some significant effort into being a part of that group, taking up the group norms.
If the group is evil, or does evil actions, then I am evil by association.
And a good person does not want to reach that conclusion; therefore, the person being punished must be guilty. And thus, good people do evil things by not acknowledging evil being done in their name as what it is.
-- Magnificent Sasquatch
Quote of 2013!
-- Screwtape, The Screwtape Letters by C.S. Lewis
I kind of wish people did use the future more, sometimes. For example, in Australia at the moment, neither major political party supports gay marriage. And beyond all the direct arguments for/against the concept, I can’t help but wonder if they really expect, in 50 years’ time, that we will live in a world of strictly heterosexual marriages. What are they possibly hoping to achieve? Maybe that reasoning isn’t the best way to decide to actively do a thing, but it surely counts towards the cessation of resistance to a thing.
Here are a few things that have at one time or another been considered “obviously inevitable”:
The spread of enlightened dictatorship on the Prussian model.
The spread of eugenics.
The control of the world economy by “rational” central planners.
My point is that you appear to be overestimating how well you can predict the future.
I don’t think you really believe this argument. In particular if the success of something you opposed seemed inevitable, you’d still oppose it.
What I think is happening is that you support the “inevitable” outcome but are getting frustrated that the opposition just won’t go away like they’re “supposed” to.
Oppose in the sense of “actively work to stop it” or oppose in the sense of, “if asked about it, note that one dislikes it”? I dislike the increase of surveillance over the decades but look: Sensors get cheaper year by year. Computation gets cheaper year by year. I’m not happy to see more surveillance, but I see it as so close to inevitable, due to the dropping costs of the enabling technologies, that actively opposing it is a waste of time and effort.
To put it another way: In the original C.S.Lewis quote, Lewis includes in his own list of questions that he wants asked: “Is it possible?” I view most of the questions that Lewis disapproves of as just being ways of asking whether recent historical evidence make something look possible or impossible in the near future. In my view, usually, claims of historical inevitability are overstated, but, occasionally (as in the cheaper sensors example), I think there are situations where a fairly solid case for at least likely trends can be made.
Being elected at some point in the next 3 years. They aren’t trying to achieve anything related to homosexual marriages. They don’t care.
Um, I know this is classic Hansonian “X is not about X” cynicism, but I doubt it’s actually true of most politicians. Sure, the need to get elected skews their priorities, but they do have policy preferences, which they are willing to pursue at cost if necessary.
FWIW, 20 years ago (when my now-husband and I first got together) I expected that I would live in a world of strictly heterosexual marriages all my life.
That didn’t incline me to cease my opposition to that world.
So I can empathize with someone who expects to live in a world of increasing marriage equality but doesn’t allow that expectation to alter their opposition to that world.
Been making a game of looking for rationality quotes in the Super Bowl.
“It’s only weird if it doesn’t work” —Bud Light Commercial
Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is “well that doesn’t work”
Well, but in the universe of the commercials, it clearly did, so long as you went to the appropriate expert.
Good observation. I will accept your correction: It’s only weird if it doesn’t work, and it doesn’t work unless you’re in Stevie Wonder’s presence
W. H. Auden, “The More Loving One”
I had a thought recently: what if the existence of a benevolent, omnipotent creator were proven? And my first thought was that I would learn to love the world as the creation of a higher power. And that disturbed me. It’s too new a thought for me to have plumbed it properly. But this reminded me. In the absence of the stars, what becomes of their beauty?
When the world is bereft of tigers, glaciers, the Amazon, will we feel it to be sublime? Imma go read the poem now.
The only interpretation I’ve been able to read into this is that the speaker wants to become more emotionally accepting of death. Am I missing something?
That interpretation didn’t even occur to me, possibly because I read the whole poem instead of the bit I quoted (and maybe I quoted the wrong bit). Here is the whole thing (it’s short). I always feel a bit awkward arguing about how I interpreted a poem, so maybe this will resolve the issue?
(Incidentally, am I the only one mildly annoyed by how people seem to think of “rationality quotes” as “anti-deathism quotes”? The position may be rational, but it is not remotely related to rationality.)
You’re not the only one. We should be doing more firewalling the optimal from the rational in general.
Thank you, that was helpful. I don’t see the deathist tones anymore. Now it reads a bit more like ‘If I happened to find myself in a world without stars I think I’d adapt,’ which reminds me a bit of the Litany of Gendlin and the importance of facing reality. It makes more sense to have it here now.
This is true, and now I have to go back and look at all the anti-deathist quotes I upvoted and examine them more closely for content directly related to rationality. Damn.
William Deresiewicz
The whole speech is worth reading as one giant rationality quote
Not bad, although it seems to equate originality with goodness a little too much.
.
Do we know anything about executive function failures other than AD(H)D?
http://en.wikipedia.org/wiki/Executive_dysfunction
In most cases ‘executive dysfunction’ covers the same territory as ‘adult ADHD’, but it can also be the outcome of some kinds of brain damage.
(Joseph Heath, The Efficient Society)
Heath is an excellent writer on economics/philosophy.
Introduction to Learn Python The Hard Way, by Zed A. Shaw
If anyone feels even remotely inspired to click through and actually learn Python, do it. It’s been the most productive thing I’ve done on the internet.
This makes me wonder how much my writing skills would improve if I retyped excellently written essays for a while.
Benjamin Franklin’s method of learning to write well is summarized here. His version:
I would expect the answer to be “not much, compared to writing and publishing horrible, horrible fanfiction”.
I’d like to see a study result on that.
In Art History class I learned that a common way for great artists to learn to paint was by copying the work of the masters. I then asked the art teacher why it was a rule that we couldn’t copy other famous historical paintings. I can’t remember her exact answer but the times I haven’t followed her advice and went and copied a great painting, I seem to have learned more. But again, I’d like to see a study result.
I’d like that too.
It makes sense intuitively, but if I can’t find any evidence either way this’ll probably seep into my subconscious now and at some point in the future I’ll just assume it as true and adopt strategies based on that assumption, which might be suboptimal.
Your grammar and spelling might improve. I think you’ve matched the wrong things in your analogy.
I’m not sure what this has to do with rationality quotes, but the extract basically convinces me to avoid the guy like the plague. The underlying premises seem to be something like:
The remaining choice when someone knows enough to feel a book is too simple for them is that they know everything.
They should discard all that they know—empty before you fill—so they can learn from someone with more knowledge than them.
Go learn lisp… -shrug-
It seems like incredibly bad advice to give someone who thinks a lot of what’s in a book is too simple for them: essentially yelling at them to shut up and knuckle down. Compare that to, say, pointing them to a few things that are generally not covered that well in self-learning and directing them to a more advanced book.
Agreed. I’m actually not sure if what I should take away from that introduction is “This material seems easy but isn’t, so go through everything carefully even if you think you understand it” or the opposite: “If this book seems easy, it’s not advanced enough for you and you already know everything; so read something else instead.”
I took it as meaning the second. There’s even a recommendation as to what else to read; a book on Lisp.
Of course, if your goal is to learn Python but you find Zed’s book too easy, “Read a book on Lisp” is probably not suitable advice.
I strongly suspect that’s just him being an ass. If you’re finding the concepts in his book too simple, there are plenty of other concepts you could be learning about in computer science that would expand your ability as a programmer more quickly than just picking up another language.
If you want to become a better programmer after learning the basics of a language, I recommend you go and pick up some books on the puzzles/problems in computer science and look at how to solve them using a computer. Go and read up on different search functions and pathfinding routines, go and read up on relational databases, and on types as an expressive system rather than just as something that someone’s making you do, go and read up on using a computer to solve tic-tac-toe… Things like that—you’ll get better a lot faster and become a much better programmer than you will just from picking up another language, which, let’s face it, you’re still not going to have a deep understanding of the uses of.
Which isn’t to say that there’s no learning in picking up another language. There is, I don’t know any good programmers who only know one language. But it’s not the fastest way to get the most gain in the beginning.
Once you have that extra knowledge about how to actually use the language you just learned, then by all means go and learn another language.
If you just know Python, then you know what we’d call a high-level imperative language. Imperative just means you’re giving the computer a list of commands; high-level means that you’re not really telling the computer how to execute them (i.e. the further away you get from telling it to do things with specific memory locations and which commands to use from the processor’s instruction set, the higher-level the language is).
C will give you the rest of the procedural/imperative side of things that you didn’t really get in Python; you’ll learn about memory allocation and talking to the operating system—it’s a lower-level language but still works more or less the same in the style of programming. Haskell or Lisp are both fairly high-level languages, like Python, but will give you functional abstraction, which is a different way of looking at things than procedural programming.
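To illustrate the stylistic difference being described, here is the same toy computation written first imperatively (a sequence of commands mutating state) and then in a more functional style (composed expressions, no mutation), both in Python. This is just a sketch of the distinction, not an example from the original comment:

```python
# Imperative style: issue commands that mutate an accumulator.
def sum_of_squares_imperative(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Functional style: build the result by composing expressions.
def sum_of_squares_functional(numbers):
    return sum(map(lambda n: n * n, numbers))

print(sum_of_squares_imperative([1, 2, 3]))  # 14
print(sum_of_squares_functional([1, 2, 3]))  # 14
```

Languages like Haskell or Lisp push the second style much further than Python does, which is why learning one of them teaches a genuinely different way of decomposing problems.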
But… even if you were going to recommend a language to learn after Python, and you knew the person already knew about stuff like relational databases and search functions and could use their skill to solve problems so that you weren’t just playing a cruel joke on them, and even if you were going to recommend a functional language: deep breath … it wouldn’t be Lisp, I think.
Lisp has a horrible written style for a beginner. It does functional abstraction, it’s true enough—and that is a different way of thinking about problems than the procedural programming that’s used in Python—but so does Haskell, and Haskell programs don’t look like someone threw up a load of brackets all over the screen; they’re actually readable (which may explain why Haskell actually gets used in real life whereas I’ve never seen Lisp used for much outside of a university.) Haskell also has the awesomeness of monadic parser combinators which are really nice and don’t show up in Lisp.
Lisp’s big thing is its macros. I can’t think of much other reason to learn the language and frankly I try to use them as little as possible anyway because it’s so much easier to misuse them than it is with functions.
So, yeah. I can see where you’re coming from but I don’t think he’s really on the level there.
Would you care to share your reason for the downvote? I promise not to dispute criticism so you don’t have to worry about it escalating into a time-sink.
I can’t, because I wasn’t the one who downvoted it. (I can see why one might think so, since the comment was in response to my comment).
Your comment thoroughly explores possible routes for improvement in ability in a novice programmer who has knowledge of Python; probably to a far more detailed level than the author of the original “go read a book on Lisp” comment. I saw nothing in it that requires a downvote, but no particular benefit in continuing the original debate (debating a comment more thoroughly than the person who originally made it, in the absence of the person who originally made it, is only of particular benefit if at least one person firmly agrees with the original statement; while I think I can see where it came from, it’s a matter of indifference to me).
To me, it seems like a horribly hostile approach to teaching people, which comes across as saying, “In order to learn anything from me, you must abase yourself before me.” Which is to say, “I am incapable of conveying useful information to anyone who does not present abject submission to me.”
But then, it’s possible that I’m just hearing Severus Snape (or the class of lousy teachers he is an imitation of) in the “so you think you know everything?” bullshit.
I think the quote’s main function is to warn those who don’t know anything about programming of a kind of person they’re likely to encounter on their journey (people who know everything and think their preferences are very right), and to give them some confidence to resist these people. It also drives home the point that people who know how to program already won’t get much out of the book. I quoted it because it addresses a common failure mode of very intelligent and skilled people.
This quote was enough for me to take Learn Python The Hard Way off my reading list. I had previously heard good reports about it but this gives me the impression that the book is likely to be far too opinionated and dogmatic for my taste. Mind you I have reason to suspect the same of Python itself.
In case you’d be interested in a second opinion: I made it through twenty-one exercises of Learn Ruby the Hard Way a couple months ago, got bored, and have retained almost none of it. I’m probably not the target audience, but that doesn’t bother me so much; on the other hand, if I’m not retaining stuff after faithfully going through Hard Way’s copybook approach to language acquisition, that doesn’t speak well for its efficacy among people who are. Unless for some reason programming experience makes me less likely to retain new languages? But that’s (a) counterintuitive, and (b) contrary to data I’ve seen for natural languages, at least.
In any case, I don’t think I’ll be returning to the series.
Python is just a programming language. Insofar as it can be said to have a personality, that personality is an accommodating and inoffensive one. The community is pretty good, too; the concentration of assholes is unremarkable, and places like /r/learnpython are quick to help out beginners with questions.
My understanding having completed parts of it is that it’s aimed at someone who doesn’t know what a programming language is. If you do know, you’re probably better off with another book (and you’re also probably better off with something other than Python, but that’s my personal opinion clashing with Python’s opinions).
-- Randall Munroe
Definitely a double, but I can’t link the others right now.
I thought that unlikely, because it’s from last week’s XKCD What If?
Maybe Randall has said it before (or borrowed it from someone else).
Earlier posting
OK thanks.
I don’t know why I didn’t see it—I tried searching the page for Icarus before posting :(
I searched on the entire quote. That’s probably easier and more reliable than trying to pick out a keyword.
Well, that post was from the January thread. If you only Control-F’d this page, then it wouldn’t have come up.
That seems unlikely; the quote above was only posted about three weeks ago and nothing about Icarus turns up in a search. Can anyone find a duplicate?
Two, in fact.
It was three, but I deleted mine.
.
The publisher selected that design. The author’s involvement almost always ends with the manuscript.
-- Lawrence Watt-Evans
You don’t “judge” a book by its cover; you use the cover as additional evidence to more accurately predict what’s in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.
(Except when it’s a novel and the text on the back cover spoils events from the middle of the book or later, which I would have preferred not to read until the right time.)
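The “cover as evidence” point is just Bayes’ rule. As a toy sketch with entirely made-up numbers: suppose 30% of books are worth reading, and an appealing cover appears on 60% of good books but only 20% of bad ones:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes' rule: P(H | E) = P(E | H) P(H) / P(E).
    p_e = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_e

# Made-up numbers for illustration only.
p_good = 0.30                        # prior: the book is worth reading
p = posterior(p_good, 0.60, 0.20)    # appealing cover: 60% for good, 20% for bad
print(p)  # about 0.56: the cover raised our estimate from 0.30
```

The cover doesn’t settle the question, but it legitimately moves the estimate, which is all “judging” by it ever needed to mean.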
Spoilers matter less than you think.
According to a single counter-intuitive (and therefore more likely to make headlines), unreplicated study.
Gah! Spoiler!
Those error bars look large enough that I could still be right about myself even without being a total freak.
Really? 11 of the 12 stories got rated higher when spoiled, which is decent evidence against the nil hypothesis (spoilers have zero effect on hedonic ratings) regardless of the error bars’ size. Under the nil hypothesis, each story has a 50⁄50 chance of being rated higher when spoiled, giving a probability of (¹²C₁₁ × 0.5¹¹ × 0.5¹) + (¹²C₁₂ × 0.5¹² × 0.5⁰) = 0.0032 that ≥11 stories get a higher rating when spoiled. So the nil hypothesis gets rejected with a p-value of 0.0063 (the probability’s doubled to make the test two-tailed), and presumably the results are still stronger evidence against a spoilers-are-bad hypothesis.
This, of course, doesn’t account for unseen confounders, inter-individual variation in hedonic spoiler effects, publication bias, or the sample (79% female and taken from “the psychology subject pool at the University of California, San Diego”) being unrepresentative of people in general. So you’re still not necessarily a total freak!
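For anyone who wants to check the arithmetic, the binomial calculation above is easy to reproduce; this is just the standard two-sided sign test, with the 11-of-12 figure taken from the comment:

```python
from math import comb

def sign_test_p(successes, trials, p=0.5):
    """Two-sided sign test: tail probability of >= `successes` of `trials`, doubled."""
    tail = sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))
    return min(1.0, 2 * tail)

print(round(sign_test_p(11, 12), 4))  # 0.0063: matches the figure above
# The same function handles other splits, e.g. 9 stories out of 12:
print(round(sign_test_p(9, 12), 4))   # 0.146
```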
Yeah, given that study it doesn’t seem likely that works are liked less on average when spoiled; but what I meant is that there are probably certain individuals who like works less when spoiled. (Imagine Alice said something to the effect that she prefers chocolate ice cream to vanilla ice cream, and Bob said that it’s not actually the case that vanilla tastes worse than chocolate, citing a study in which for 11 out of 12 ice cream brands their vanilla ice cream is liked more on average than their chocolate ice cream, though in most cases the difference between the averages is not much bigger than each standard deviation; even if the study was conducted among a demographic that does include Alice, that still wouldn’t necessarily mean Alice is mistaken, lying, or particularly unusual, would it?)
Just so. These are the sort of “inter-individual variation in hedonic spoiler effects” I had in mind earlier.
Edit: to elaborate a bit, it was the “error bars look large enough” bit of your earlier comment that triggered my sceptical “Really?” reaction. Apart from that bit I agree(d) with you!
Edit 2: aha, I probably did misunderstand you earlier. I originally interpreted your error bars comment as a comment on the statistical significance of the pairwise differences in bar length, but I guess you were actually ballparking the population standard deviation of spoiler effect from the sample size and the standard errors of the means.
Huh. For some reason I had read that as “intra-individual”. Whatever happened to the “assume people are saying something reasonable” module in my brain?
Yep.
You can’t just ignore the error bars like that. In 8 of the 12 cases, the error bars overlap, which means there’s a decent chance that those comparisons could have gone either way, even assuming the sample mean is exactly correct. A spoilers-are-good hypothesis still has to bear the weight of this element of chance.
As a rough estimate: I’d say we can be sure that 4 stories are definitely better spoilered (>2 sd’s apart); out of the ones 1..2 sd’s apart, maybe 3 are actually better spoilered; and out of the remainder, they could’ve gone either way. So we have maybe 9 out of 12 stories that are better with spoilers, which gives a probability of 14.5% if we do the same two-tailed test on the same null hypothesis.
I don’t necessarily want you to trust the numbers above, because I basically eyeballed everything; however, it gives an idea of why error bars matter.
Ignoring the error bars does throw away potentially useful information, and this does break the rules of Bayes Club. But this makes the test a conservative one (Wikipedia: “it has very general applicability but may lack the statistical power of other tests”), which just makes the rejection of the nil hypothesis all the more convincing.
If I’m interpreting this correctly, “the error bars overlap” means that the heights of two adjacent bars are within ≈2 standard errors of each other. In that case, overlapping error bars doesn’t necessarily indicate a decent chance that the comparisons could go either way; a 2 std. error difference is quite a big one.
But this is an invalid application of the test. The sign test already allows for the possibility that each pairwise comparison can have the wrong sign. Making your own adjustments to the numbers before feeding them into the test is an overcorrection. (Indeed, if “we can be sure that 4 stories are definitely better spoilered”, there’s no need to statistically test the nil hypothesis because we already have definite evidence that it is false!)
This reminds me of a nice advantage of the sign test. One needn’t worry about squinting at error bars; it suffices to be able to see which of each pair of solid bars is longer!
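To put a number on how much “overlapping error bars” actually implies: if two independent means each carry standard error s and their ±1 SE bars just touch, the gap is 2s while the standard error of the difference is s√2, so the z-score is only √2 ≈ 1.41. A quick sketch (the SE value is illustrative, not taken from the study):

```python
from math import sqrt, erf

def z_to_two_sided_p(z):
    # Two-sided p-value for a standard normal z-score.
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

s = 1.0                      # assume equal standard errors on both means
diff = 2 * s                 # +/-1 SE bars just touching
se_diff = sqrt(s**2 + s**2)  # SE of the difference of independent means
z = diff / se_diff           # = sqrt(2), about 1.41
print(round(z_to_two_sided_p(z), 3))  # 0.157
```

So a single pairwise comparison with barely overlapping bars is individually inconclusive (p ≈ 0.16), which is exactly why the sign test pools the signs of all twelve comparisons instead of leaning on any one of them.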
Okay, if all you’re testing is that “there exist stories for which spoilers make reading more fun” then yes, you’re done at that point. As far as I’m concerned, it’s obvious that such stories exist for either direction; the conclusion “spoilers are good” or “spoilers are bad” follows if one type of story dominates.
I don’t like the study setup there. One readthrough of spoiled vs one readthrough of unspoiled material lets you compare the participants’ hedonic ratings of dramatic irony vs mystery, and it’s quite reasonable that the former would be equally or more enjoyable… but unlike in the study, in real life unspoiled material can be read twice: the first time for the mystery, then the second time for the dramatic irony; with spoiled material you only get the latter.
No, they selected them to sell more copies by hijacking the easier-to-press buttons of your nervous system.
There’s something to that, but it’s not as if Varian’s Microeconomic Analysis is going to have the cover of Spice and Wolf 1.
On the other hand, the method of judging a book’s contents by its cover clearly has holes in it considering Spice and Wolf 1 has the cover of Spice and Wolf 1.
Deliberate non sequitur alert: I’m often attracted to a cover that has holes in it. E.g. The Curious Incident of the Dog in the Night-Time.
Probably purely true for some books, but as someone who buys thousands of books a year, my impression is they are very likely to reveal who they think their readers will be (hence a lot of covers say “stay away” to me), and just occasionally they can show a startling streak of originality. E.g. the board designs (there may be no dustjacket) on Dave Eggers’ books are uniquely artistic in my opinion, and in this case since he has been seriously into graphics, I don’t think it’s any accident. You might think “Maybe this book is written by a bold and original person” and IMHO you’d be right. Also, the cover design of The Curious Incident of the Dog in the Night-Time by Mark Haddon kind of sent a message on my wavelength and it was not misleading (for me).
(Joseph Heath & Andrew Potter, The Rebel Sell)
The Last Psychiatrist (http://thelastpsychiatrist.com/2009/06/delaying_gratification.html)
@slicknet
If we are in the business of making assumptions, there is no dichotomy, you can as well consider both hypotheticals. (Actually believing that either of these holds in general, or in any given case where you don’t have sufficient information, would probably be dumb, ignorant, a mistake.)
This misses the point a bit due to an equivocation on “assume”. In ordinary discourse, it usually means “assume for the purpose of action until you encounter contrary evidence”. That’s very different from the scientist’s hypothetical assumptions that are made in order to figure out what follows from a hypothesis.
It’s epistemically incorrect to adopt a belief “for the purpose of action”, and permitting “contrary evidence” to correct the error doesn’t make it a non-error.
I think what Creutzer means is that in ordinary discourse, meaning everyday problems where you aren’t always able to give the thought the time it deserves (when you don’t even have five minutes by the clock to think about the problem rationally), it is better to rely on the heuristic “assume people are smart and some unknown context is causing the problem” than on the heuristic “people who make mistakes are dumb”. That said, heuristics are only good most of the time and may lead you to errors such as
In this case it is still technically an error, but you are merely attempting to be “less wrong” in a case where you don’t have time to be correct. Assuming the heuristic until you encounter contrary evidence (or until you have the time to think of a better answer) follows the point of this website closely.
Using a heuristic doesn’t require believing that it’s flawless. You are in fact performing some action, but that is also possible in the absence of a careful understanding of its effect. There is no point in doing the additional damage of accepting a belief for reasons other than evidence of its correctness.
Exactly, thanks for the clarification.
I believe that this statement, while correct, misses the point of preemptive debiasing. Yvain said it better.
The original quote draws attention to the mistake of not giving enough attention to the hypothetical where something appears to be wrong/stupid, but upon further investigation turns out to be correct/interesting. However, it confuses the importance of the hypothetical with its probability, and endorses increasing its level of certainty. I pointed out this error in the formulation, but didn’t restate the lesson of the quote (i.e. my point didn’t include the lesson, only the flaw in its presentation, so naturally it “misses” the point of the lesson by not containing it).
Also, consider the possibility that it is you who is dumb, ignorant, and making mistakes.
I don’t consider it, I assume it.
But “dumb” and “ignorant” are not points on a line, they are relative positions.
To quote this bloke at a climbing gym I used to frequent “We all suck at our own level”.
With apologies for double-commenting: “Don’t assume others are ignorant” is likely to be read by a lot of people (including myself at first) as “Aim high and don’t be easily be convinced of an inferential gap”. Posts on underconfidence may also be relevant.
I would somewhat agree with this if the phrase “making mistakes” was removed. People generally have poor reasoning skills and make non-optimal choices >99% of the time. (Yes, I am including myself and you, the reader, in this generalization.)
Or better yet, assume nothing, and reserve judgement until you have more information.
You always assume things, whether you are aware of it or not. At least by making your assumptions explicit and conscious, you have a better chance of noticing when they are wrong. And assuming “that people are dumb, ignorant, and making mistakes” is a common default subconscious failure mode.
In most situations there are multiple people other than yourself who each think the others are dumb, ignorant and making mistakes. Don’t assume that the one you happen to be interacting with at the moment is right by default.
You may or may not have noticed, but most people are biased. Whether bias counts as “dumb”, “ignorant” or “making mistakes” is left as an exercise for the reader.
-- Time Braid
-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)
This just prompted me to (hypothetically, for the sake of amusement) reinterpret many of Eliezer’s actions as a psychological experiment wherein he has contrived exaggerated scenarios in order to test this empirically.
-- Chad Fowler (from The 4-Hour Body)
Linus Pauling
The example in the comic is not a good one. Of the choices on the board, E being proportional to mc^2 is the only option where the units match. You only need to have that one idea to save yourself the trouble of having lots of other ideas.
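The units-matching trick is easy to mechanize: represent each quantity as a vector of exponents over the base dimensions (mass, length, time) and check that the two sides of the equation agree. A toy sketch:

```python
# Dimensions as (mass, length, time) exponents.
MASS     = (1, 0, 0)   # kg
VELOCITY = (0, 1, -1)  # m/s
ENERGY   = (1, 2, -2)  # kg * m^2 / s^2 (joules)

def mul(a, b):
    # Multiplying two quantities adds their dimension exponents.
    return tuple(x + y for x, y in zip(a, b))

# E = m * c^2: mass times velocity squared.
rhs = mul(MASS, mul(VELOCITY, VELOCITY))
print(rhs == ENERGY)  # True: the units match
```

Any other power of c (or of m) would fail this check, which is the sense in which E = mc² is the only dimensionally consistent option on the board.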
It’s a joke, which I assume is intended for a mostly non-physicist audience.
We demand complete rigour from all forms of levity! The unexamined joke is not worth joking!
Mickey Mouse is dead
Got kicked in the head
Cause people got too serious
They planned out what they said
They couldn’t take the fantasy
They tried to accept reality
Analyzed the laughs
Cause pleasure comes in halves
The purity of comedy
They had to take it seriously
Changed the words around
Tried to make it look profound
…
--Sub Hum Ans, “Mickey Mouse is Dead”
To prevent lines from being merged together, add two spaces at the end of each one.
That’s so...typewriter.
Thanks.
Yes, but also being able to tell which of those ideas are good is even better.
From the alt-text in the above-linked comic:
It’s necessary, but not sufficient.
-Luc de Clapiers
“We’re even wrong about which mistakes we’re making.”
-Carl Winfeld
That’s a pretty great thing to be wrong about!
Not at all. It means you don’t know about the real mistakes you make (so you can’t fix them), and you spend resources trying to fix something that’s not really broken.
Satoshi Kanazawa
This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.
(Bear in mind, I use “science” mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)
I take the quote to be normative rather than descriptive. Science is not free from motivated cognition, but that’s a bug, not a feature.
Sure, but I often see this sort of argument used against concerns about bias in (claimed) scientific conclusions. I’d rather people didn’t treat science as privileged against bias, and the quote above seems to encourage that.
While I pretty much agree with the quote, it doesn’t provide anyone that isn’t already convinced with many good reasons to believe it. Less of an unusually rational statement and more of an empiricist applause light, in other words.
In any case, a scientific conclusion needn’t be inherently offensive for closer examination to be recommended: if most researchers’ backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn’t merely a good political sop but is actually good science in its own right. Of course, dealing with this properly would involve hard work and numbers and wouldn’t involve decrying all but the worst studies as bad science when you’ve read no more than the abstract.
Unfortunately, since the people deciding which papers to take a closer look at tend to have the same biases as most scientists, the papers that actually get examined closely are the ones going against common biases.
I hate to find myself in the position of playing apologist for this mentality, but I believe the party line is that most of the relevant biases are instilled by mass culture and present at some level even in most people trying to combat them, never mind scientists who oppose them in a kind of vague way but mostly have better things to do with their lives.
In light of the Implicit Association Test this doesn’t even seem all that far-fetched to me. The question is to what extent it warrants being paranoid about experimental design, and that’s where I find myself begging to differ.
I’d take issue with “undesirable”, the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with “dangerous”: the discovery of nuclear energy was quite dangerous.
If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.
Something not (currently) possible can still be desirable.
FTL being impossible is undesirable if you want to go to the stars.
The conclusion that “FTL is impossible” is undesirable if and only iff FTL is possible.
The two conditions are very different.
They are indeed. You seem to have added a level of indirection not present in the original statement. One statement is about this world, the other is about possible worlds.
Shouldn’t it read
“FTL is impossible” is undesirable if and only if FTL is possible.”
as it stands it reads “FTL is impossible” is undesirable if and only if and only if (iff) FTL is possible.
Actually, it should be “FTL is impossible” is undesirable if and only if FTL is possible.”
Facepalms okay this is why I need to proofread everything I write
Thanks
Shouldn’t it really be “Believing that FTL is impossible is undesirable iff FTL is possible”?
You seemed to be doing something clever with quotes, but mostly that made it hard to read. :P
The author originally added an extra f to the last if in the original post rendering it as “if and only if and only if” instead of “if and only if”
I think it’s pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they’re doing is going to kill the human race or not.)
That’s the thing: the science wasn’t good or bad; it was the decision to give the results to certain people that held that quality of good/bad. And it was very, very bad. But the process of looking at the world, wondering how it works, then figuring out how it works, and then making it work the way you desire, that process carries with it no intrinsic moral qualities.
I don’t know what you mean by “intrinsic” moral qualities (is this to be contrasted with “extrinsic” moral qualities, and should I care less about the latter or what?). What I’m saying is just that the decision to pursue some scientific research has bad consequences (whether or not you intend to publicize it: doing it increases the probability that it will get publicized one way or another).
The majority of scientific discoveries (I’m tempted to say all, but I’m 90% certain that there exists at least one counterexample) have very good consequences as well as bad. I think the good and the bad usually go hand in hand.
To take the obvious example, nuclear research led to the creation of nuclear weapons but also to the creation of nuclear energy.
At what point could you label research into a scientific field as having too many negative consequences to pursue?
I agree that this is a hard question.
General complaint: sometimes when I say that people should be doing a certain thing, someone responds that doing that thing requires answering hard questions. I don’t know what bringing this point up is supposed to accomplish. Yes, many things worth doing require answering hard questions. That is not a compelling reason not to do them.
I did not ask it because I wanted to stop the discussion with a hard question. I ask it because I aspire to do research in physics and will someday need an answer to it; as such I have been very curious about the different arguments on this question. By no means did I mean to claim, by asking this question, that there are things that should not be researched; I simply want to know how one would go about identifying them.
Remove any confusions you might have about metaethics, figure out what it is you value, estimate what kind of impact the research you want to do will have with respect to what you value, estimate what kind of impact the other things you could do will have with respect to what you value, pick the thing that is more valuable.
Trying to retroactively judge previous research this way is difficult because the relevant quantity you want to estimate is not the observed net value of a given piece of research (which is hard enough to estimate) but the expected net value at the time the decision was being made to do the research. I think the expected value of research into nuclear physics in the past was highly negative because of how much it increased the probability of nuclear war, but I’m not a domain expert and can’t give hard numbers to back up this assertion.
I’m reading through all of the sequences (slowly; it takes a while to truly understand, and I started in 2012) and by coincidence I happen to be at the beginning of metaethics currently. Until I finish, I won’t argue any further on this subject, due to being confused. Thanks for the help.
I think nuclear weapons have a chance of killing a large number of people but are very unlikely to kill the human race.
At one point, physicists thought detonating even one nuclear bomb might set fire to the atmosphere.
This was taken seriously, and disproven before one in fact was detonated, but it’s not clear that the tests wouldn’t have gone ahead even if the verdict had come back with merely “unlikely”.
In the current day biologists, computer scientists and physicists are all working on devices which could be far more dangerous than nuclear weapons. In this case the danger is well known, but no-one high-status enough to succeed is seriously proposing a moratorium on research. To be fair, we’ve still got some time to go.
A scientist can have an inclination towards—for example—racist ideas. You can’t just call this a kind of being wrong, because depending on the truth of what they’re studying, this can make them right more often or less often.
So racist scientists are possible, and racist scientific practice is possible. I think ‘racist’ is an appropriate label for the conclusions drawn with that practice, correct or incorrect.
Though, I think being racist is a property of a whole group of conclusions drawn by scientists with a particular bias. It’s not an inherent property of any of the conclusions; another researcher with completely different biases wouldn’t be racist for independently rediscovering one of them.
It’s a useful descriptor because a body of conclusions drawn by racist scientists, right or wrong, is going to be different in important ways from one drawn by non-racist scientists. It doesn’t reduce to “larger fraction correct” or “larger fraction incorrect” because it depends on if they’re working on a problem where racists are more or less likely to be correct.
Is Newton’s theory of gravity true or false? It’s neither. For some problems the theory provides a good model that allows us to make good predictions about the world around us. For other problems the theory produces bad predictions.
The same is true for nearly every scientific model. There are problems where it’s useful to use the model. There are problems where it isn’t.
There are also factual statements in science. Claiming that true and false are the only possible adjectives to describe them is also highly problematic. Instead of true and false, likely and unlikely are much better words. In hard science most scientific conclusions come with p values. The author doesn’t try to declare them true or false but declares them to be very likely.
It’s also interesting that the person who made this claim isn’t working in the hard sciences. He seems to be an evolutionary psychologist based at the London School of Economics. In the Wikipedia article that describes him, he’s quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That’s a non-scientific racist position. He published some material in Psychology Today that’s widely considered racist. I don’t see why “racist” is not a valid word to describe his conclusions.
What happens if you apply the same epistemological standards to claims that someone is racist that you apply to claims from science?
On the other hand, Kanazawa seems really good at saying controversial things that get attention… which suggests evidence for his views will overspread relative to those of his detractors. So it may make sense to hold people who say controversial stuff to high epistemological standards, or perhaps to scrutinize memes that seem unusually virulent especially carefully.
Huh, what definition of “racist” are you using here? Would you describe von Neumann’s proposal for a pre-emptive nuclear strike on the USSR as “racist”?
I’m not sure what you mean by “racist”, however is your claim supposed to be that this somehow implies that the conclusion is false/less likely? You may want to practice repeating the Litany of Tarski.
It’s basically about putting a low value on the lives of non-white civilians. In addition, “I would do to foreigners what Ann Coulter would do to them” is also a pretty straightforward way to signal racism.
I haven’t argued that. I’m advocating for having a broad vocabulary of words with multidimensional meanings.
I see no reason to treat someone who makes wrong claims about race, and whose personal beliefs cluster with racist beliefs in his nonscientific statements, the same way as someone who just makes wrong statements about the boiling point of some new synthetic chemical.
Rather than using the ambiguous word “racist”, one could say specifically that Kanazawa is an advocate of genocide.
As I said above, did the bombings of civilians during WWII constitute “genocide”?
So would you call the bombings of civilians during WWII “racist”?
So you would agree that there are some statements that are both “racist” and true.
What do you mean by “wrong”? If you mean “wrong” in the sense of “false”, you’ve yet to present any evidence that any of Satoshi Kanazawa’s claims are wrong.
--Jovah’s Angel by Sharon Shinn
Maybe it’s just my most recent physical chemistry lecture talking, but my instant response to that was ‘truth is a state function’. Or perhaps ‘perceived truth’, and ‘should be’ (i.e., it shouldn’t depend on the history preceding the current perceived truth).
— Gaston Leroux
Only with very low probability.
And the human mind loves to find patterns even when the probability of the pattern being a rule is low. Coincidences are correlation.
Joke: a tourist was driving around lost in the countryside in Ireland among the 1 lane roads and hill farms divided by ancient stone fences, and he asks a sheep farmer how to get to Dublin, to which he replies:
“Well … if I was going to Dublin, I wouldn’t start from here.”
Moral, as I see it anyway: While the heuristic “to get to Y, start from X instead of where you are” has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.
— Herbert Butterfield, The Whig Interpretation of History
Francis Spufford, Red Plenty
Is it a good book? I was thinking of buying it, but I am very risk-averse when it comes to buying fiction.
I thought it was pretty good in its own way, although I expected (coming at it from Shalizi) much more math & science than it actually had.
I am only about one-third of the way through, but it is definitely a good book thus far.
I would not personally buy it, since I only purchase fiction that I am certain I will read more than once, but it is definitely worth reading.
(Sorry, I couldn’t resist.)
Studies show that people who try to run behind a car frequently fail to keep up, while nobody who runs in front of a car fails more than once.
-- C. S. Lewis, Out of the Silent Planet
Reminds me of this:
Karl Popper
There’s a failure mode associated with this attitude that’s worth watching out for: assuming that people who disagree with you are being irrational, and so not bothering to check whether you have arguments against what they say.
-- Martin Fowler
[Footnote to: “This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy”. The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-- David Griffiths, Introduction to Elementary Particles (2008), p. 24
-- From the final screen of Call of Cthulhu: The Wasted Land
...Hooray for the phygists?
Well, there are lots of cultists running around trying to summon an Elder God. This will almost certainly end in disaster. The options we have to fight this are: a) We can try to stop all Elder-God-summoning related program activities or b) We can try to get there first and summon a Friendly Elder God.
Both a) and b) are almost impossibly difficult and I find it hard to decide which is less impossible.
Bryan Caplan
This sounds almost horrifically dystopian, in a sort of Friendship is Optimal way.
I suppose it does, in as objective a measure as something like ‘harmony’ is.
This sounds like a recipe for stagnation. A true friend is willing to encourage you to grow.
I think I parsed that quote less along the lines of ‘dude, you hardly know any math and so I won’t love you’ and more along the lines of ‘dude, you seem to have the same taste for movies and music and we can have a conversation—I love (hanging out with) you’.
The former has an objective measure and thus one can speak of definite growth while the latter is subjective.
That’s not what I mean. Suppose you have various negative personality traits that are negatively influencing your life (e.g. perhaps you are selfish or short-tempered). If you don’t carefully cull the people around you, you might start noticing that many people react negatively to you, and you might start wondering why. If you determine that the problem is with you and not them, that’s an opportunity for growth. If you only surround yourself with people who are willing, for whatever reason, to ignore your negative personality traits, then you’ve lost an opportunity to notice them.
Similarly, and this should be scary to anyone who cares about epistemic rationality, suppose you have various false beliefs and you decide that those beliefs are so important to your identity that people who don’t also believe them can’t possibly love you the way you are, so you only surround yourself with people who agree with them...
Sure, in such a case, I’ve optimized for my own ‘social harmony’. We all do this to varying degrees anyway. Signalling, sub-cultures and all that blah. Note that the quote simply speaks of a process (selection) to maximize an end (social harmony, however that is defined). It doesn’t say anything about whether such selection should be for false or true values (however these are defined).
“Love you just as you are” doesn’t imply “hate for you to change”.
After all, you are changing.
Okay, but P(doesn’t want you to change | loves you just the way you are) is higher than P(doesn’t want you to change | doesn’t love you just the way you are), and in addition P(you won’t change | you surround yourself with people who love you just the way you are) is higher than P(you won’t change | you don’t surround yourself with people who love you just the way you are).
Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded
I think you have the lesson entirely backward.
How so? A person convinced that any nuclear power plant risks a multi-megaton explosion would have some very weird ideas about how nuclear power plants should be built; they would deem moderated reactors impractical and a negative thermal coefficient of reactivity infeasible (or simply be unaware of the mechanisms that make stability achievable), and would build some fast-neutron reactor that relies on very rapid control-rod movement for its stability. Meanwhile, normal engineering has produced nuclear power plants that, imperfect as they may be, do not leave a crater when they blow up.
To the extent that you already know that nuclear power plants are basically safe, they clearly do not apply as an analogy here. Reasoning from them like this is an error.
Yes, but you can say that because you have the independent evidence that nuclear power plants are workable, beyond the mere say-so of a couple of scientists. You don’t have that kind of evidence for AI safety.
Also, this:
… is not a given. What makes you think that the worst it would do is kill you, when killing is not the worst thing humans do to each other?
Sun Tzu on establishing a causal chain from reality to your beliefs.
Dupe.
Scott Adams
This paper argues that at least one of the following propositions must be true: (1) a happy person is very likely to start believing in magic before reaching an “unhappy” stage; (2) any unhappy person is extremely unlikely to take their dog walking a significant number of times; (3) we are almost certainly living in a simulation. It follows that the belief that I will one day become an unhappy person who doesn’t walk their dog is false, unless I start believing in magic.
Coincidences … are the worst enemies of the truth. (Les coïncidences … sont les pires ennemies de la vérité.) — Gaston Leroux, Le mystère de la chambre jaune
“For belief did not end with a public renunciation, a moment when one’s brethren called one a heretic, and damned. Belief ended in solitude, and silence, the same way it began.” -- Robert V. S. Redick, The Night of the Swarm
(I’m mid-way through the book, but perhaps I should instead say that I am mid-way through gur sryybjfuvc bs gur evat, juvpu unf sbe fbzr ernfba orra vafregrq vagb gur zvqqyr bs vg, pbzcyrgr jvgu eviraqryy, zvfgl zbhagnvaf, naq gur jvmneq qvfnccrnevat gb svtug n zbafgre).
-- David Brin
-- Seng-Ts’an
Does this mean something different than “Truth doesn’t have a moral valence”?
Because it seems like it is trying harder to sound deep than to sound insightful. Sigh -- maybe I’m just jaded by various other trying-to-sound-deep-for-its-own-sake sayings, i.e., “seems deep” vs. “is deep” issues.
My primary interpretation was “attaching yourself to arguments obstructs your ability to seek the truth.” If you are interested in the truth, it does not matter if you or your interlocutor is wrong or right; it matters what the truth is.
Another interpretation is “is-thinking leads to accuracy, should-thinking leads to delusion.”
A third interpretation is “moralistic thinking degrades morals.” I don’t consider that interpretation interesting enough to agree or disagree with it.
It doesn’t seem to be clear whether Seng-Ts’an is talking about moral right and wrong, or the kind of “wrong” that is involved in “proving your opponent wrong” in debates. The first interpretation is just silly according to any philosophy that cares about ethics, but the second one does make a lot of sense.
This is probably a more plausible reading of the quote, but I think it is false. If I don’t believe I am right, or at least making an important point (such as playing devil’s advocate), I’m doubtful that my comments are relevant or helpful in figuring out what is true.
By contrast, your interpretation of the quote suggests that Professor Armstrong should be indifferent to whether particular x-risks that he has highlighted as “most dangerous” are actually the most dangerous x-risks.
Anyway, your second suggested reading is essentially my suggested reading, and I agree that your third suggested reading is not a very interesting assertion.
It may be worthwhile to consider the role of curiosity and questions.
The first interpretation sees ‘right’ and ‘wrong’ as properties of people, not ideas. Doing so is less helpful than seeing rightness as a property of ideas -- the plain truth.
Thus, it suggests that the Professor should be indifferent to which x-risks he highlights as most dangerous, except for the criterion of danger. It would risk sorting his list incorrectly to confine himself by his opinion, his past statements on the issue, or those which avoid giving support to an enemy.
I was introduced to the poem by someone who was arguing against moralistic thinking, who knows much more about this sort of poetry than I do; I mention it for completeness, as it may have been the author’s preferred interpretation.
Maybe it’s a reference to the idea that you need something more important than The Truth, so that you keep testing/refining your answer when you think you’ve got to the truth.
I’m going to reply to the quote as if it means “Truth doesn’t have a moral valence”, and rebut that truth should be held more sacred than morals, rather than simply outside them. For example, if there are two cases, and case 1 leads to a morally “better” outcome than case 2 (in quotes because the word “better” is really a black box), but case 1 leads to hiding the truth (including hiding it from yourself), then I would have to think very carefully about it. In short, I abide by the rule “That which can be destroyed by the truth should be”, but am wary that this breaks down practically in many situations. So when presented with a scenario where I would be tempted to break this principle for the “greater good” or the “morally better case”, I would think long and hard about whether it is a rationalization, or whether I simply did not expend the mental effort to come up with a better third alternative.
I wouldn’t be surprised if this has come up before:
―Kurt Vonnegut (attributed to Kilgore Trout), in Breakfast of Champions
Yep.
Klingon proverb.
So it’s true what they say! The opposite of a Klingon proverb is also a Klingon proverb…
Where is this from? I looked it up to see if the weird grammar was intended and couldn’t find anything.
It’s … ahem … non-canon. A different faction.
I thought it interesting that the near-inverse of a useful rationality quote can still be a useful rationality quote.
I don’t think it’s an inverse! The first one is saying you might not succeed in killing the person you’re trying to kill and the second one is saying you might instead kill someone else that you don’t want to kill! They’re two properties of the same worst-case scenario. =]
I understood the second one as saying that that blind idiot with the knife might end up killing you, not necessarily intentionally, so be careful.
But also, if you’re being a blind idiot waving your knife around, you could kill someone! So stop that. =]
--Lawrence Watt-Evans, The Spriggan Mirror
-- Syrio Forel, Game of Thrones, based on A Song of Ice and Fire by George R. R. Martin
It doesn’t matter that much, but I’m pretty sure that line is original to the HBO series, not to the books.
(Not my downvotes, incidentally, but I’d speculate they come from a desire to separate rationality from anti-deathism.)
It’s not from the TV series either.
The TV series quote would be this: “There is only one God. And his name is Death. And there is only one thing we say to Death: ‘Not today’.”
Basically the grandparent post seems to be just a quote from memory, combining bits and pieces from both places, accurate to neither.
I could’ve sworn it was from both of them, and, thus, from the books originally...
It’s not from the books; more generally, there isn’t anything in the books directly suggesting a connection between Syrio and the Faceless Men.
Thanks. Fixed it.
Couldn’t find it in the Arya chapters of my copy. Wasn’t looking terribly hard, though.
I remembered it vaguely, and found the more exact quote on the ASOIAF Quotes page on TvTropes since I didn’t want to search through the Arya chapters to find the exact quote, though I was prepared to.
-- Doc Scratch, Homestuck
I’m not certain what lesson on rationality I’m expected to glean from this, unless it’s “model your opponents as agents, not as executors of cached scripts”—and that seems both strongly dependent on the opponents you’re facing and a little on the trivial side.
Doc Scratch isn’t exactly the best source for rationality quotes- a guy who already knows the truth has little need to overcome flawed cognitive processes for arriving at it. Which isn’t to say the guy doesn’t say some relevant stuff:
One can do these two things, but not to the exclusion of alternatives. One can make statements which are confused or nonsensical, that are not even false.
In any case, a statement doesn’t have inherent truth value outside the way it’s interpreted by the people who hear it. The statement that “If a tree falls in the forest, it does not make a sound” is true or false depending on the meanings understood by the audience and the person uttering it. It’s entirely possible to convey false understandings by making statements which omit relevant information. To refuse to call a statement which is deliberately tailored to make its audience believe falsehoods a lie is using a distinction in an unhelpful way.
This.
It borders on arguing about the meaning of words, so I find it useful to describe what I mean by “lying”, i.e. “conveying information that adjusts someone else’s worldview away from reality”. Funnily enough, that excludes most lies-to-children.
At that point whoever I’m talking to will either point out that his definition differs, or even decide to go with mine henceforth, and either way we can start getting some real work done.
Of course, he was lying (arguably by omission); Doc Scratch was not merely reticent or uncooperative, but intentionally deceptive.
(Must resist urge to watch Cascade again …)
-- Marcus Aurelius, Meditations
I don’t get it.
It’s pretty much another injunction to use reason if you possess it.
I don’t see how to extract that meaning from the words I see. In particular, I don’t understand what the last sentence is trying to say. The dash is also confusing. I thought initially that this was a dialogue but now I’m less sure.