Open Thread June 2010, Part 4
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
This thread brought to you by quantum immortality.
A comic about the trolley problem.
See also
See also
Upvoted both. I remember yours from a couple of years ago.
The veil of ignorance was a nice touch.
That’s one of the funniest things I’ve seen in a while. I wish I could upvote that more.
A visual study guide to 105 types of cognitive biases
“The Royal Society of Account Planning created this visual study guide to cognitive biases (defined as “psychological tendencies that cause the human brain to draw incorrect conclusions”). It includes descriptions of 19 social biases, 8 memory biases, 42 decision-making biases, and 36 probability/belief biases.”
Some random thoughts about thinking, based mostly on my own experience.
I’ve been playing minesweeper lately (and I’ve never played before). For the uninitiated, minesweeper is a game that involves using deductive reasoning (and rarely, guessing) to locate the “mines” in a grid of identical boxes. For such an abstract puzzle, it really does a good job of working the nerves, since one bad click can spoil several minutes’ effort.
I was surprised to find that even when I could be logically certain about the state of a box, I felt afraid that I was incorrect (before I clicked), and (mildly) amazed when I turned out to be correct. It felt like some kind of low level psychic power or something. So it seems that our brains don’t exactly “trust” deductive reasoning. Maybe because problems in the ancestral environment didn’t have clean, logical solutions?
I also find that when I’m stymied by a puzzle, if I turn my attention to something else for a while, when I come back, I can easily find some way forward. The effect is stunning: an unsolvable problem becomes trivial five minutes later. I’m pretty sure there is a name for this phenomenon, but I don’t know what it is. In any case, it’s jarring.
Another random thought. When I’m sad about something in my life, I usually can make myself feel much better by simply saying, in a sentence, why I’m sad. I don’t know why this works, but it seems to make the emotion abstract, as though it happened to somebody else.
Explicitly acknowledging emotions as things with causes is a huge chunk of managing them deliberately. (I have a post in the works on this, but I’m not sure when I’ll pull it together.)
Lots of references to the CBT literature would be nice… no need to reinvent the wheel; CBT has a lot of useful things to say about NATs, and strategies to take care of them. (Then again this applies mostly to negative emotions, and deliberately managing positive emotions seems like a cool thing to do too.) That said, more instrumental rationality posts would be great.
What does NAT stand for?
I don’t think that works for me. I often can’t identify a specific cause of my sad feeling, and when I can, thinking about it often makes me feel worse rather than better.
Well, I don’t mean ruminating about the cause of the sad feeling. That is probably one of the worst things you can do. Rather, I meant just identifying it.
For example, when a girlfriend and I broke up (this was a couple years ago) I spent maybe two days feeling really depressed. Eventually, I thought to myself, “You’re sad because you broke up with your girlfriend.”
That really put it in perspective for me. It made me think of all the cheesy teen movies where kids break up with their sweethearts and act like it’s the end of the world, when the viewer sees it as a normal, even banal rite of passage to adulthood. I had always thought people who reacted like that were ridiculous. In other words, it feels like that thought put the issue in “far mode” for me.
That works if there is a specific cause, but like some other people have said, my sad feelings aren’t caused by external events.
Same here. I also found that often there’s not any cause in the sense of something specific upsetting me; it’s just an automatic reaction to not getting enough social interaction.
Arguably, problems in the modern environment don’t have clean, logical solutions either! Note also that people get good at games like minesweeper and chess through learning. If the brain was primarily a big deductive logic machine, it would become good at these games immediately upon understanding the rules; no learning would be necessary.
I’m nitpicking, but maybe it was simple pleasure at getting the game?
http://arxiv.org/abs/1006.3868
I guess everyone here already understands this stuff, but I’ll still try to summarize why “model checking” is an argument against “naive Bayesians” like Eliezer’s OB persona. Shalizi has written about this at length on his blog and elsewhere, as has Gelman, but maybe I can make the argument a little clearer for novices.
Imagine you have a prior, then some data comes in, you update and obtain a posterior that overwhelmingly supports one hypothesis. The Bayesian is supposed to say “done” at this point. But we’re actually not done. We have only “used all the information available in the sample” in the Bayesian sense, but not in the colloquial sense!
See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there’s a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I’d do this kind of check. The catch is, a perfect Bayesian wouldn’t. The question is, why?
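Here is a minimal Python sketch of the kind of check I mean (a toy example of my own, comparing a summary statistic rather than eyeballing histograms):

```python
# Fit a deliberately too-narrow model family by Bayesian updating, then see
# whether data simulated from the result looks anything like the real data.
import numpy as np

rng = np.random.default_rng(0)

# Model family: Gaussian with unknown mean, known sd = 1, broad prior on the mean.
# The real data, however, are heavy-tailed.
data = rng.standard_t(df=2, size=1000)  # what the world actually gave us

# Conjugate normal update for the mean, prior N(0, 100).
prior_mean, prior_var, sigma2 = 0.0, 100.0, 1.0
n = len(data)
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (prior_mean / prior_var + data.sum() / sigma2)

# Simulate data from the fitted model (plugging in the posterior mean for brevity;
# a full check would also draw the mean from the posterior each time).
simulated = rng.normal(post_mean, np.sqrt(sigma2), size=(500, n))
obs_spread = np.percentile(data, 97.5) - np.percentile(data, 2.5)
sim_spread = np.percentile(simulated, 97.5, axis=1) - np.percentile(simulated, 2.5, axis=1)
p_value = np.mean(sim_spread >= obs_spread)
print(f"posterior predictive p-value for the spread statistic: {p_value:.3f}")
# A p-value near 0 or 1 says that even the best hypothesis in our model family
# fails to reproduce the data, i.e. the prior/model family was too narrow.
```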
But my sense is that the “substantial school in the philosophy of science [that] identifies Bayesian inference with inductive inference and even rationality as such”, as well as Eliezer’s OB persona, is talking more about a prior implicit in informal human reasoning than about anything that’s written down on paper. You can then see model checking as roughly comparing the parts of your prior that you wrote down to all the parts that you didn’t write down. Is that wrong?
I don’t think informal human reasoning corresponds to Bayesian inference with any prior. Maybe you mean “what informal human reasoning should be”. In that case I’d like a formal description of what it should be (ahem).
Solomonoff induction, mebbe?
Wei Dai thought up a counterexample to that :-)
Gelman/Shalizi don’t seem to be arguing from the possibility that physics is noncomputable; they seem to think their argument (against Bayes as induction) works even under ordinary circumstances.
It seems to me that Wei Dai’s argument is flawed (and I may be overly arrogant in saying this; I haven’t even had breakfast this morning.)
He says that the probability of knowing an uncomputable problem would be evaluated at 0 originally, and I don’t fundamentally see why “measure-zero hypothesis” is equivalent to “impossible.” For example, the hypothesis “they’re making it up as they go along” has probability 2^(-S) based on the size of the set, and that probability shrinks at a certain rate as evidence arrives. That means that, given any finite amount of inference, the AI should be able to distinguish between the two possibilities (they are very good at computing or guessing, vs. all humans have been wrong about mathematics forever). Unless new evidence comes in to support one over the other, “humans have been wrong forever” should keep a consistent probability mass, which will grow in comparison to the other hypothesis, “they are making it up.”
Nobody seems to propose this (although I may have missed it skimming some of the replies) and it seems like a relatively simple thing (to me) to adjust the AI’s prior distribution to give “impossible” things low but nonzero probability.
Wei Dai’s argument was specifically against the Solomonoff prior, which assigns probability 0 to the existence of halting problem oracles. If you have an idea how to formulate another universal prior that would give such “impossible” things positive probability, but still sum to 1.0 over all hypotheses, then by all means let’s hear it.
Yeah, well, it is certainly a good argument against that. The title of the thread is “is induction unformalizable”, and that is the point I’m unconvinced of.
If I were to formalize some kind of prior, I would probably use a lot of epsilons (since zero is not a probability); including an epsilon for “things I haven’t thought up yet.” On the other hand I’m not really an expert on any of these things so I imagine Wei Dai would be able to poke holes in anything I came up with anyway.
There’s no general way to have a “none of the above” hypothesis as part of your prior, because it doesn’t make any specific prediction and thus you can’t update its likelihood as data comes in. See the discussion with Cyan and others about NOTA somewhere around here.
Well then I guess I would hypothesize that solving the problem of a universal prior is equivalent to solving the problem of NOTA. I don’t really know enough to get technical here. If your point is that it’s not a good idea to model humans as Bayesians, I agree. If your point is that it’s impossible, I’m unconvinced. Maybe after I finish reading Jaynes I’ll have a better idea of the formalisms involved.
I thought that what I’m about to say is standard, but perhaps it isn’t.
Bayesian inference, depending on how detailed you do it, does include such a check. You construct a Bayes network (as a directed acyclic graph) that connects beliefs with anticipated observations (or other intermediate beliefs), establishing marginal and conditional probabilities for the nodes. Since your expectations are jointly determined by the beliefs that lead up to them, getting a wrong answer will knock down the probabilities you assign to the beliefs leading up to them.
Depending on the relative strengths of the connections, you know whether to reject your parameters, your model, or the validity of the observation. (Depending on how detailed the network is, one input belief might be "I'm hallucinating or insane", which may survive with the highest probability.) This determination is based on which of them, after taking this hit, has the lowest probability.
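A toy numerical illustration of that knocking-down step (my own numbers, nothing like a full network):

```python
# One belief node B with a prior, and one anticipated observation E.
# Seeing the opposite of what B predicts knocks down the probability of B.

p_B = 0.9           # prior probability of the belief
p_E_given_B = 0.8   # how strongly B predicts the observation E
p_E_given_notB = 0.3

# We observe not-E, i.e. the prediction fails.
p_notE = (1 - p_E_given_B) * p_B + (1 - p_E_given_notB) * (1 - p_B)
p_B_given_notE = (1 - p_E_given_B) * p_B / p_notE

print(f"P(B) before: {p_B:.2f}, after the failed prediction: {p_B_given_notE:.2f}")
# 0.90 -> 0.72 here. In a full network the same update propagates to every
# upstream node (parameters, model, "the observation was valid"), and whichever
# ends up with the lowest probability after the hit is the one to reject.
```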
Pearl also has written Bayesian algorithms for inferring conditional (in)dependencies from data, and therefore what kinds of models are capable of capturing a phenomenon. He furthermore has proposed causal networks, which have explicit causal and (oppositely) inferential directions. In that case, you don’t turn a prior into a posterior: rather, the odds you assign to an event at a node are determined by the “incoming” causal “message”, and, from the other direction, the incoming inferential message.
But neither “model checking” nor Bayesian methods will come up with hypotheses for you. Model checking can attenuate the odds you assign to wrong priors, but so can Bayesian updating. The catch is that, for reasons of computation, a Bayesian might not be able to list all the possible hypotheses, might arbitrarily restrict the hypothesis space, and might potentially be left with only bad ones. But Bayesians aren’t alone in that either.
(Please tell me if this sounds too True Believerish.)
I have been googling for references to “computational epistemology”, “algorithmic epistemology”, “bayesian algorithms” and “epistemic algorithm” on LessWrong, and (other than my article) this is the only reference I was able to find to things in the vague category of (i) proposing that the community work on writing real, practical epistemic algorithms (i.e. in software), (ii) announcing having written epistemic algorithms or (iii) explaining how precisely to perform any epistemic algorithm in particular. (A runner-up is this post which aspires to “focus on the ideal epistemic algorithm” but AFAICT doesn’t really describe an algorithm.)
Who is “Pearl”?
Oh wow, thanks. I think at the time I was overconfident that some more educated Bayesian had worked through the details of what I was describing. But the causality-related stuff is definitely covered by Judea Pearl (the Pearl I was referring to) in his book *Causality* (2000).
This sounds like a confusion between a theoretical perfect Bayesian and practical approximations. The perfect Bayesian wouldn’t have any use for model checking because from the start it always considers every hypothesis it is capable of formulating, whereas the prior used by a human scientist won’t ever even come close to encoding all of their knowledge.
(A more “Bayesian” alternative to model checking is to have an explicit “none of the above” hypothesis as part of your prior.)
NOTA is addressed in the paper as inadequate. What does it predict?
See here.
I don’t see how that’s possible. How do you compute the likelihood of the NOTA hypothesis given the data?
NOTA is not well-specified in the general case, but in at least one specific case it’s been done. Jaynes’s student Larry Bretthorst made a usable NOTA hypothesis in a simplified version of a radar target identification problem (link to a pdf of the doc).
(Somewhat bizarrely, the same sort of approach could probably be made to work in certain problems in proteomics in which the data-generating process shares the key features of the data-generating process in Bretthorst’s simplified problem.)
If I’m not mistaken, such problems would contain some enumerated hypotheses—point peaks in a well-defined parameter space—and the NOTA hypothesis would be a uniformly thin layer over the rest of that space. Can’t tell what key features the data-generating process must have, though. Or am I failing reading comprehension again?
Yep.
I think the key features that make the NOTA hypothesis feasible are (i) all possible hypotheses generate signals of a known form (but with free parameters), and (ii) although the space of all possible hypotheses is too large to enumerate, we have a partial library of “interesting” hypotheses of particularly high prior probability for which the generated signals are known even more specifically than in the general case.
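Here is a toy analogue of that structure (my own construction in Python, not Bretthorst’s actual radar problem, and giving the four hypotheses equal prior weight): a handful of tight “library” peaks in parameter space, plus a thin uniform NOTA layer.

```python
# The signal is a Gaussian bump whose location is a free parameter. The library
# says the location is near 0, 5 or 10; NOTA spreads its prior thinly over the
# whole allowed range.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_loc = 7.3                                   # deliberately not in the library
data = rng.normal(true_loc, 1.0, size=50)

grid = np.linspace(-20.0, 20.0, 4001)
dx = grid[1] - grid[0]
loglik = norm.logpdf(data[:, None], loc=grid[None, :], scale=1.0).sum(axis=0)
lik = np.exp(loglik - loglik.max())              # rescaled; the common factor cancels in the ratios

def marginal_likelihood(loc_prior):
    """Integrate likelihood x prior over the location grid (crude Riemann sum)."""
    return (lik * loc_prior).sum() * dx

hypotheses = {f"loc near {c}": marginal_likelihood(norm.pdf(grid, c, 0.1))
              for c in (0.0, 5.0, 10.0)}         # tight "point peaks" in parameter space
hypotheses["NOTA"] = marginal_likelihood(np.full_like(grid, 1.0 / 40.0))  # thin uniform layer

total = sum(hypotheses.values())
for name, ml in hypotheses.items():
    print(f"{name:12s} posterior = {ml / total:.3f}")
# With the data centred at 7.3, NOTA wins overwhelmingly; set true_loc = 5.0
# and the corresponding library hypothesis wins instead.
```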
Model checking is completely compatible with “perfect Bayesianism.” In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It’s a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.
However, a difference between the data and a simulation from your model doesn’t necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what’s going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.
You shouldn’t need real-world data to determine if your model of your own prior was reasonable or not. Something else is going on here. Model checking uses the data to figure out if your prior was reasonable, which is a reasonable but non-Bayesian idea.
Well, if you’re just checking your prior, then I suppose you don’t need real data at all. Make up some numbers and see what happens. What you’re really checking (if you’re being a Bayesian about it, i.e. not like Gelman and company) is not whether your data could come from a model with that prior, but rather whether the properties of the prior you chose seem to match up with the prior you’re modeling. For example, maybe the prior you chose forces two parameters, a and b, to be independent no matter what the data say. In reality, though, you think it’s perfectly reasonable for there to be some association between those two parameters. If you don’t already know that your prior is deficient in this way, posterior predictive checking can pick it up.
In reality, you’re usually checking both your prior and the other parts of your model at the same time, so you might as well use your data, but I could see using different fake data sets in order to check your prior in different ways.
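A trivial sketch of that no-real-data check, using the hypothetical a/b example above:

```python
# Draw from the prior you actually wrote down and see whether it can express
# the association you think is plausible.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=10_000)   # the prior as written: a and b independent
b = rng.normal(0.0, 1.0, size=10_000)

print(f"correlation of (a, b) under the written-down prior: {np.corrcoef(a, b)[0, 1]:+.3f}")
# It will be ~0 on every run, by construction. If your actual beliefs allow a and b
# to move together, no amount of fiddling with the two marginals fixes that; the
# model of your prior needs some joint structure (e.g. a covariance term).
```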
Apologies if this has already been covered elsewhere, but isn’t a prior just a belief? The prior is by definition whatever it was rational to believe before the acquisition of new evidence (assuming a perfect Bayesian, anyway). I’m not quite sure what you mean when you propose that a prior could be wrong; either all priors are statements of belief and therefore true, or all priors are statements of probability that must be less accurate than a posterior that incorporates more evidence.
I suspect that there are additional steps I’m not considering.
Nope, this isn’t part of the definition of the prior, and I don’t see how it could be. The prior is whatever you actually believe before any evidence comes in.
If you have a procedure to determine which priors are “rational” before looking at the evidence, please share it with us. Some people here believe religiously in maxent, others swear by the universal prior, I personally rather like reference priors, but the Bayesian apparatus doesn’t really give us a means of determining the “best” among those. I wrote about these topics here before. If you want the one-word summary, the area is a mess.
Thanks for the links (and your post!), I now have a much clearer idea of the depths of my ignorance on this topic.
I want to believe that there is some optimal general prior, but it seems much more likely that we do not live in so convenient a world.
But if you can evaluate how good a prior is, then there has to be an optimal one (or several). You have to have something as your prior, and so whichever one is the best out of those you can choose is the one you should have. As for how certain you are that it’s the best, it’s (to some extent) turtles all the way down.
Instead of using “optimal general prior”, I should have said that I was pessimistic about the existence of a standard for evaluating priors (or, more properly, prior probability distributions) that is optimal in all circumstances, if that’s any clearer.
Having thought about the problem some more, though, I think my pessimism may have been premature.
A prior probability distribution is nothing more than a weighted set of hypotheses. A perfect Bayesian would consider every possible hypothesis, which is impossible unless hypotheses are countable, and they aren’t; the ideal for Bayesian reasoning as I understand it is thus unattainable, but this doesn’t mean that there are no benefits to be found in moving toward that ideal.
So, perfect Bayesian or not, we have some set of hypotheses which need to be located before we can consider them and assign them a probabilistic weight. Before we acquire any rational evidence at all, there is necessarily only one factor that we can use to distinguish between hypotheses: how hard they are to locate. If it is also true that hypotheses which are easier to locate make more predictions and that hypotheses which make more predictions are more useful (and while I have not seen proofs of these propositions I’m inclined to suspect that they exist), then we are perfectly justified in assigning a probability to a hypothesis based on its locate-ability.
This reduces the problem of prior probability evaluation to the problem of locate-ability evaluation, to which it seems maxent and its fellows are proposed answers. It’s again possible there is no objectively best way to evaluate locate-ability, but I don’t yet see a reason for this to be so.
Again, if I’ve mis-thought or failed to justify a step in my reasoning, please call me on it.
This doesn’t sound right to me. Imagine you’re tossing a coin repeatedly. Hypothesis 1 says the coin is fair. Hypothesis 2 says the coin repeats the sequence HTTTHHTHTHTTTT over and over in a loop. The second hypothesis is harder to locate, but makes a stronger prediction.
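To make “stronger prediction” concrete (toy numbers of my own):

```python
# Compare the likelihood each hypothesis assigns to 28 flips that happen to
# match the repeating pattern.
pattern = "HTTTHHTHTHTTTT"
observed = pattern * 2                      # 28 flips matching Hypothesis 2 exactly

lik_fair = 0.5 ** len(observed)             # Hypothesis 1: every sequence equally likely
lik_loop = 1.0 if observed == (pattern * 10)[:len(observed)] else 0.0  # Hypothesis 2

print(f"P(data | fair coin) = {lik_fair:.3e}")   # ~3.7e-09
print(f"P(data | loop)      = {lik_loop:.1f}")   # 1.0
# So even if the loop hypothesis starts with a much smaller prior (it takes more
# bits to specify), a likelihood ratio of ~2.7e8 can quickly overwhelm that.
```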
The proper formalization for your concept of locate-ability is the Solomonoff prior. Unfortunately we can’t do inference based on it because it’s uncomputable.
Maxent and friends aren’t motivated by a desire to formalize locate-ability. Maxent is the “most uniform” distribution on a space of hypotheses; the “Jeffreys rule” is a means of constructing priors that are invariant under reparameterizations of the space of hypotheses; “matching priors” give you frequentist coverage guarantees, and so on.
Please don’t take my words for gospel just because I sound knowledgeable! At this point I recommend you to actually study the math and come to your own conclusions. Maybe contact user Cyan, he’s a professional statistician who inspired me to learn this stuff. IMO, discussing Bayesianism as some kind of philosophical system without digging into the math is counterproductive, though people around here do that a lot.
I’m in the process of digging into the math, so hopefully some point soon I’ll be able to back up my suspicions in a more rigorous way.
I was talking about the number of predictions, not their strength. So Hypothesis 1 predicts any sequence of coin-flips that converges on 50%, and Hypothesis 2 predicts only sequences that repeat HTTTHHTHTHTTTT. Hypothesis 1 explains many more possible worlds than Hypothesis 2, and so without evidence as to which world we inhabit, Hypothesis 1 is much more likely.
Since I’ve already conceded that being a Perfect Bayesian is impossible, I’m not surprised to hear that measuring locate-ability is likewise impossible (especially because the one reduces to the other). It just means that we should determine prior probabilities by approximating Solomonoff complexity as best we can.
Thanks for taking the time to comment, by the way.
Then let’s try this. Hypothesis 1 says the sequence will consist of only H repeated forever. Hypothesis 2 says the sequence will be either HTTTHHTHTHTTTT repeated forever, or TTHTHTTTHTHHHHH repeated forever. The second one is harder to locate, but describes two possible worlds rather than one.
Maybe your idea can be fixed somehow, but I see no way yet. Keep digging.
I’ve just reread Eliezer’s post on Occam’s Razor and it seems to have clarified my thinking a little.
I originally said:
But I would now say:
This solves the problem your counterexample presents: Hypothesis 1 describes only one possible world, but Hypothesis 2 requires say, ~30 more bits of information (for those particular strings of results, plus a disjunction) to describe only two possible worlds, making it 2^30 / 2 times less likely.
Then let’s try this. Hypothesis 1 says the sequence will consist of only H repeated forever. Hypothesis 2 says the sequence will be HTTTHHTHTHTTTT* repeated forever, where the * can take different values on each repetition. The second hypothesis is harder to locate but describes an infinite number of possible worlds :-)
If at first you don’t succeed, try, try again!
The problem with this counterexample is that you can’t actually repeat something forever.
Even taking the case where we repeat each sequence 1000 times, which seems like it should be similar, you’ll end up with 1000 coin flips and 15000 coin flips for Hypothesis 1 and Hypothesis 2, respectively. So the odds of being in a world where Hypothesis 1 is true are 1 in 2^1000, but the odds of being in a world where Hypothesis 2 is true are 1 in 2^15000.
It’s an apples to balloons comparison, basically.
(I spent about twenty minutes staring at an empty comment box and sweating blood before I figured this out, for the record.)
I think this is still wrong. Take the finite case where both hypotheses are used to explain sequences of a billion throws. Then the first hypothesis describes one world, and the second one describes an exponentially huge number of worlds. You seem to think that the length of the sequence should depend on the length of the hypothesis, and I don’t understand why.
That is an awesome counter-example, thank you. I think I may wait to ponder this further until I have a better grasp of the math involved.
I’m not sure I’m willing to grant that’s impossible in principle. Presumably, you need to find some way of choosing your priors, and some time later you can check your calibration, and you can then evaluate the effectiveness of one method versus another.
If there’s any way to determine whether you’ve won bets in a series, then it’s possible to rank methods for choosing the correct bet. And that general principle can continue all the way down. And if there isn’t any way of determining whether you’ve won, then I’d wonder if you’re talking about anything at all (weird thought experiments aside).
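A rough sketch of what that ranking could look like (my own toy setup, scoring two candidate priors by their average log score on a series of bets):

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7
outcomes = rng.random(200) < true_p          # a series of yes/no events

def log_score(prior_a, prior_b):
    """Beta(prior_a, prior_b) prior, updated sequentially; return average log score."""
    a, b, total = prior_a, prior_b, 0.0
    for won in outcomes:
        p = a / (a + b)                      # predictive probability before seeing the outcome
        total += np.log(p if won else 1 - p)
        a, b = a + won, b + (not won)
    return total / len(outcomes)

print("uniform prior   :", round(log_score(1, 1), 4))
print("skeptical prior :", round(log_score(1, 20), 4))
# Whichever prior earns the better average score on the bets is, by this
# criterion, the better method of choosing priors for this kind of question.
```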
That check should be part of updating your prior. If you updated and got a hypothesis that didn’t fit the data, you didn’t update very well. You need to take this into account when you’re updating (and you also need to take into account the possibility of experimental error: there’s a small chance the data are wrong).
Hopefully the Book Club will get around to covering that as part of Chapter 4.
I can’t recall that it has anything to do with “updating your prior”; Jaynes just says that if you get nonsense posterior probabilities, you need to go back and include additional hypotheses in the set you’re considering, and this changes the analysis.
See also the quote (I can’t be bothered to find it now but I posted it a while ago to a quotes thread) where Jaynes says probability theory doesn’t do the job of thinking up hypotheses for you.
About the Rumsfeld quote mentioned in the most recent top-level post:
Why is it that people mock Rumsfeld so incessantly for this? Whatever reason you might have not to like him, this is probably the most insightful thing any government official has said at a press conference. And yet he’s ridiculed for it by the very same people that are emphasizing, or at least should be emphasizing, the importance of the insight.
Heck, some people even thought it was clever to format it into a poem.
What gives? Is this just a case of “no good deed goes unpunished”?
ETA: In your answer, be sure to say, not just what’s wrong with the quote or its context, but why people don’t make that as their criticism instead of just saying, ha ha, the quote sure is funny.
I agree that the quote is insightful and brilliant.
I think it was seen by certain (tribally liberal) people as somehow euphemistic or sophistic, as though he were trying to invent a whole new epistemology to justify war.
Politics is the mind-killer.
Some ideas.
People didn’t/don’t like Rumsfeld.
In the quote’s original context, Rumsfeld used it as the basis of a non-answer to a question.
People think Rumsfeld’s particular phrasing is funny, and people don’t judge it as insightful enough to overcome the initial ‘hee hee that sounds funny’ reaction.
However insightful the quote is, Rumsfeld arguably failed to translate it into appropriate action (or appropriate non-action), which might have made it seem simply ironic or contrary rather than insightful.
(Edit to fix formatting.)
So what would be the non-funny way to say it? IMHO, Rumsfeld’s phrasing is what you get if you just say it the most direct way possible.
This is what always bothers me: people who say, “hey, what you said was valid and all, but the way you said it was strange/stupid”. Er, so what would be the non-strange/stupid way to say it? “Uh, implementation issue.”
In the exchange, it looks like the reporter’s followup question is nonsense. It only makes sense to ask if it’s a known unknown, since you, er, never know the unknown unknowns. (Hee hee! I said something that sounds funny! Now you can mock me while also promoting what I said as insightful!)
See also the edit to my original comment.
I’m not sure I’m capable of a good answer for the edited version of the question. I would guess (even more so than I’m guessing in my grandparent comment!) that once someone’s ‘ha ha’ reaction kicks in (whether it’s a ‘ha ha his syntax is funny,’ ‘ha ha how ironic those words are in that context,’ or a ‘ha ha look at him scramble to avoid that question’ kind of ‘ha ha’), it obscures the perfectly rational denotation of what Rumsfeld said.
I don’t know of a way to make it less funny without losing directness. I think the verbal (as opposed to situational) humor comes from a combination of saying the word ‘known’ and its derivatives lots of times in the same paragraph, using the same kind of structure for consecutive clauses/sentences, and the fact that what Rumsfeld is saying appears obvious once he’s said it. And I can’t immediately think of a direct way of expressing precisely what Rumsfeld’s saying without using the same kind of repetition, and what he’s saying will always sound obvious once it’s said.
Things that are obvious once thought of, but not before, are often funny when pointed out, especially when pointed out in a direct and pithy way. That’s basically how observational comedians operate. (See also Yogi Berra.) It’s one of those quirks of human behavior a public speaker just has to contend with.
Strictly speaking that’s true, although for Rumsfeld to avoid the question on that basis is IMO at best pedantic; it’s not hard to get an idea of what the reporter is trying to get at, even though their question’s ill-phrased.
(Belated edit—I should say that it would be pedantic, not that it is pedantic. Rumsfeld didn’t actually avoid the question based on the reporter’s phrasing, he just refused to answer.)
Right, that would make sense, except that the very same people, upon shifting gears and nominally changing topics, suddenly find this remark insightful—“but ignore this when we go back to mocking Rumsfeld!”
Wow, you have got to see Under Siege 2. It has this exchange (from memory):
Bad guy #2: What’s that? [...]
Bad guy #1: It’s a chemical weapons plant. And we know about it. And they know that we know. But we make-believe that we don’t know, and they make-believe that they believe that we don’t know, but know that we know. Everybody knows.
Yes, “damned if you do, damned if you don’t” is fun, but ultimately to be avoided by respectable people.
Right, but aren’t they typically followed by the appreciation of the insight rather than derision of whoever points it out?
True, but it’s not really Rumsfeld’s job to improve reporters’ questions. I mean, he might be a Bayesian master if he did, but it’s not really to be expected.
I imagine the people who used the quote to mock Rumsfeld were already inclined to treat the quote uncharitably, and used its funniness/odd-soundingness as a pretext to mock him.
Yeah, that got a giggle from me. Makes me wonder why some kinds of repetition are funny and some aren’t!
Agreed—I didn’t mean to condone simultaneously mocking Rumsfeld’s quote while acknowledging its saneness, just to explain why one might find it funny.
It is (well, was) his job to make a good faith effort to try and answer their questions. (At least on paper, anyway. If we’re being cynical, we might argue that his actual job was to avoid tough questions.) If I justified evading otherwise good questions in a Q&A because of minor lexical flubs, that would make the Q&A something of a charade.
It’s possibly a matter of people being already disposed to dislike Rumsfeld, combined with a feeling that if he had so much understanding of ignorance, he shouldn’t have been so pro-war.
I agree that it’s a brilliant idea, and that’s why I cited him. He does the best job of describing that particular idea that I know of, and I’m amazed, as you are, that he said it at a press conference. I vehemently disagree with his politics, but that doesn’t make him stupid or incapable of brilliance.
If the tone of my post came across as mocking, that was not at all my intention.
I didn’t mean to imply you were mocking him; I just mentioned your post because that’s what reminded me to ask what I’ve been wondering about—and you saved me some effort in finding something to cut-and-paste ;-)
I am surely not the first to recognise the similarity to this poem.
ETA: no, I’m not.
Hm, those are superficially similar, maybe, but I’m glad that someone, at least, was asking “er, what’s the deal with the Rumsfeld quote?” back in ’03.
Wow, really? I honestly didn’t know that quote ever provoked ridicule! Of course, I also don’t know who Rumsfeld is and didn’t know he was a politician.
Part one of a five part series on the Dunning-Kruger effect, by Errol Morris.
http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/
Also note that Oscar winning director Morris’s next project is a dark comedy that is a fictionalized version of the founding of Alcor!
Ooh, it’s nice to see more details on the lemon juice bank robber. When I first heard about him I thought he was probably schizophrenic. Maybe he was, but the details make it sound like he may indeed have been just really stupid.
Isn’t that a bad thing? I suspect a major source will be that recent book...
I thought that Morris’s 30-minute interview with Saul Kent showed a favorable perspective on cryonics, or at least genuine neutrality.
Watch and decide for yourself:
http://www.youtube.com/watch?v=HaHavhQllDI&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=36
http://www.youtube.com/watch?v=Psm96dR1d1A&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=37
http://www.youtube.com/watch?v=gBYIzWblGTI&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=38
On not being able to cut reality at the joints because you don’t even know what a joint is: diagnosing schizophrenia
genes, memes and parasites?
tl;dr: "People who suffer from schizophrenia are, in fact, three times more likely to carry T. gondii than those who do not."
“Over the last five years or so, evidence has been building that some human cultural shifts might be influenced, or even caused, by the spread of Toxoplasma gondii.”
“In the United States, 12.3 percent of women tested carried the parasite, and in the United Kingdom only 6.6 percent were infected. But in some countries, statistics were much higher. 45 percent of those tested in France were infected, and in Yugoslavia 66.8 percent were infected!”
Wow. How is this parasite spread? Could those ‘girly germs’ that I avoided in primary school actually reduce my chances of getting schizophrenia?
Wait, what’s a girly germ? I googled it and it gave me a link about a Micronesian island :/
Do young kids where you are tease each other about the other sex? ‘Cooties?’ Whatever they call it.
My question is how the parasite is spread. What does that 12.3% mean for the rest of the population? Why did they only test women?
It’s a major pregnancy risk.
Do young kids where you are tease each other about the other sex? ‘Cooties?’ Whatever they call it.
My question is how the parasite is spread. What does that 12.3% mean for the rest of the population? Why did they only test women?
Ick. My double posting browser bug again.
Have you tried using another browser? That might help you figure out if the problem is actually on the browser end and not something weird with the LW software.
I’m using a different browser (different computer same browser by name) now and it is working fine. My other browser seems to work fine for a while after I restart it until some event causes it to thereafter double post every time. My hunch is that I could identify the triggering of one of the plugins as the cause. Even then the symptom is outright bizarre. What kind of bug would make the browser double send all post requests?
Perhaps a failed attempt at spyware!
No matter. I don’t like my other computer anyway.
A recent study found that one effective way to resist procrastination in future tasks is to forgive previous procrastination, because the negative emotions that would otherwise remain create an ugh field around that task.
I only found the study recently, but I’d personally found this to be effective before. Forcing your way through an ugh field isn’t sustainable due to our limited supply of willpower (this is hardly a new idea, but I haven’t seen it referenced in my limited readings on LW).
Some people have tried to emphasize that point but it isn’t always universally understood.
Deus Ex: Human Revolution
IGN Preview
It has been a while since I needed to buy a new computer to play a game.
In addition to being a sequel to Deus Ex and looking generally bad-ass, transhumanism is explicitly mentioned. From the FAQ:
I remember a post by Eliezer in which he was talking about how a lot of people who believe in evolution, when they justify that belief, actually exhibit the same thinking styles that creationists use (using buzzwords like “evidence” and “natural selection” without having a deep understanding of what they’re talking about, having Guessed the Teacher’s Password). I can’t remember what this post was called—does anybody remember? I remember it being good and wanted to refer people to it.
I remember reading a post titled “Science as Attire,” which struck me as making a very good point along these lines. It could be what you’re looking for.
As a related point, it seems to me that people who do understand evolution (and generally have a strong background in math and natural sciences) are on average heavily biased in their treatment of creationism, in at least two important ways. First, as per the point made in the above linked post, they don’t stop to think that the great majority of folks who do believe in evolution don’t actually have any better understanding of it than creationists. (In fact, I would say that the best informed creationists I’ve read, despite the biases that lead them towards their ultimate conclusions, have a much better understanding of evolution than, say, a typical journalist who will attack them as ignorant.) Second, they tend to way overestimate the significance of the phenomenon. Honestly, if I were to write down a list of widespread delusions sorted by the practical dangers they pose, creationism probably wouldn’t make the top fifty.
I’m extremely curious to hear both your list and JoshuaZ’s list of the top 20 or so most harmful delusions. Feel free to sort by category (1-4, 5-10, 11-20, etc.) rather than rank in individual order.
I’ll give you a big one: Dying a martyr’s death gives you a one-way ticket to Paradise.
I’ve separated some forms of alternative medicine out when one might arguably put them closer together. Also, I’m including Young Earth Creationism, but not creationism as a whole. Where that goes might be a bit more complicated. There’s some overlap between some of these (such as young earth creationism and religion). The list also does not include any beliefs that have a fundamentally moral component. I’ve tried to not include beliefs which are stupid but hard to deal with empirically (say that there’s something morally inferior about specific racial groups). Finally, when compiling this list I’ve tried to avoid thinking too much about the overall balance that the delusion provides. So for example, religion is listed where it is based on the harm it does, without taking into account the societal benefits that it also produces.
1-4: Religion, Ayurveda, Homeopathy, Traditional Chinese medicine (as standardized post 1950s)
5-10: The belief that intelligence differences have no strong genetic component. The belief that intelligence differences have no strong environmental component. The belief that there are no serious existential threats to humans. The belief that external cosmetic features or national allegiances are strong indicators of mental superiority or inferiority. That human females have fundamentally less mental capacity and that this difference is enough to be a useful data point when evaluating humans. The belief that the Chinese government can be trusted to benefit its people or decide what information they should or should not have access to. (The primary reason this gets on the list is the sheer size of China. There are other governments which are much, much worse and have similar delusions by the people. But the damage level done is frequently much smaller.)
11-20: Vaccines cause autism. Young Earth Creationism. Invisible Hand of the Market solves everything. Government solves everything. Providence. That there are not fundamental limits on certain natural resources. That nuclear power is intrinsically worse than other forms of energy. The belief that large segments of the population are fundamentally not good at math or science. Astrology. The belief that antibiotics can deal with viral infections.
There were a few that I wanted to stick on for essentially emotional reasons. So for example Holocaust Denial almost got on the list and when I tried to justify it I saw myself engaging in what was clearly motivated cognition.
This list is very preliminary. The grouping is also very tentative and could likely be easily subject to change.
Is it trust or fear that is the real problem in that case? What would you do as an average Chinese citizen who wanted to change the policy? (Then, the same question assuming you were an actual Chinese citizen who didn’t have your philosophical mind, intelligence, idealism and resourcefulness.)
It seems like it is a mix. From people I’ve spoken to in China and the impression I get from what I’ve read about the Chinese censorship, the majority of people are generally ok with letting the government control things and think that that’s really for the best. This seems to be changing slightly with the younger generation but it is hard to tell.
Good points certainly. I’m not sure any average Chinese citizen alone can do anything. If I were an actual Chinese citizen alone given my “philosophical mind, intelligence, idealism and resourcefulness,” I’m not sure I’d do anything either, not because I can’t, but because the risk would be high. It is easy to say “oh, people in X situation should do Y because that’s morally better or better for everyone overall” when one isn’t in that situation. When one’s life, family, or livelihood is the one being threatened then it is obviously going to be a lot more difficult. It isn’t that I’m a coward (although I might be) it is just that standing up to the government in that sort of situation takes a lot of courage that I’m pretty sure I (and most people) don’t have. But if the general population took an attitude that was more willing to do minor things (spread things like TOR or other methods of getting around the Great Firewall for example), then things might be different. But even that might not have a large impact.
So yeah, I may need to take this off the list.
I get the impression that overall, the younger generation is more apathetic about politics than the older one.
(Though there is also the relatively recent phenomenon of “angry youths” (fenqing), who rant on forums and such.)
Lists like that are good!
I’m a bit surprised at that one—the current Chinese government seems pretty rational and efficient to me, and I’d be hard-pressed to say what I would do differently in its place (or rather, there are things I would do differently, but I’m not sure I’d get better results).
Control of information by the government should be seen mostly as a way of preserving its own power. So I’m not really sure of how to interpret “The belief that the Chinese government can be trusted to [...] decide what information they should or should not have access to.”—could you rephrase that belief so that its irrationality becomes more apparent, maybe tabooing “can be trusted to”? If you mean “Chinese people wrongly believe that the government is restricting information access for their own good”, then I’m not sure that a lot of people actually believe that, nor, for those who do, that believing it does any harm.
Ok. My impression is that that is a common belief in China and is connected to the belief that the government doesn’t actively lie. I don’t have a very good citation for this other than general impressions so I’m going to point to a relevant blog entry by a friend who spent a few years in China where she discusses this with examples. There are of course even limits to how far that will go. This is also complicated by the fact that many of the really serious harm in China (detainment of citizens for questioning policies, beatings and torture, ignoring of basic environmental and safety issues) stem from the local governments rather than the central government, and the relationship between Beijing and the local governments is very complicated. See also my remarks above to wedrifid which touch on these issues also. So yeah, it may make sense to take this off the list given the lack of harm directly coming from this issue.
I don’t interpret the story in that blog post that way at all. People repeating nationalist lies doesn’t mean they’ve been fooled.
I highly recommend these posts about the psychology of mass lies. I don’t recommend the third part.
This one caught my eye, I don’t think I’ve seen this listed as an obvious delusion before. Can you maybe expand more on this? I guess the idea is that a much larger number of people could make use of math or science if they weren’t predisposed to think that they belong in an incapable segment?
I’m thinking of something like picking the quarter of the population that scores at the bottom of a standard IQ test or the local SAT-equivalent as the “large segment of population”, though. A test for basic science and mathematics skills could be being able to successfully figure out solutions for some introductory exercises from a freshman university course in mathematics or science, given the exercise, relevant textbooks and prerequisite materials, and, say, up to a week to work out things from the textbook.
It doesn’t seem obvious to me that such a test would end up with results that would make the original assertion go straight into ‘delusion’ status. My suspicions are somewhat based on the article from a couple of years back which claimed that many freshman computer science students seem to simply lack the basic mental-model-building ability needed to start comprehending programming.
Yes. And more people would go into math and science.
That’s a very interesting article. I think that the level of, and type of abstraction necessary to program is already orders of magnitude beyond where most people stop being willing to do math. My own experience in regards to tutoring students who aren’t doing well in math is that one of the primary issues is one of confidence: students of all types think they aren’t good at math and thus freeze up when they see something that is slightly different from what they’ve done before. If they understand that they aren’t bad at math or that they don’t need to be bad at math, they are much more likely to be willing to try to play around with a problem a bit rather than just panic.
I was an undergraduate at Yale, which is generally considered to be a decent school that admits people who are by and large not dumb. And one thing that struck me was that even in that sort of setting, many people minimized the amount of math and science they took. When asked about it, the most common claim was that they weren’t good at it. Some of those people are going to end up as future senators and congressmen and have close to zero idea of how science works or how statistics work other than at the level they got from high school. If we’re lucky, they know the difference between a median and a mean.
Does anybody actually claim to believe that?
This view is surprisingly common. I don’t want to move too much toward a potentially mind-killing subject, but the idea isn’t uncommon among certain groups in US politics. Indeed, they think it so strongly about some resources that they take it almost as an ideological point. This occurs most frequently when discussing oil. Emphasis is placed on things like the Eugene Island field and abiotic oil, which they argue show we won’t run out of oil. The second is particularly galling because even if the abiotic oil hypotheses were correct, the level of oil production would still be orders of magnitude below the consumption rate. I’d point more generally to followers of Julian Simon (not Simon himself per se; his own arguments were generally more nuanced and subtle than what many people seem to get out of them).
Where would you put ‘belief in free will’ and ‘belief in determinism’?
They probably wouldn’t get anywhere on the list for the reason that a) I’m not convinced that either determinism or free will as often given are actually well-defined notions and b) I don’t see either belief as causing much harm in practice.
Mass_Driver:
I’m not sure if that would be a smart move, since it would mean an extremely high concentration of unsupported controversial claims in a single post. Many of my opinions on these matters would require non-obvious lengthy justifications, and just dumping them into a list would likely leave most readers scratching their heads. If you’re really curious, you can read the comment threads I’ve participated in for a sample, in particular those in which I argue against beliefs that aren’t specific to my interlocutors.
Also, it should be noted that the exact composition of the list would depend on the granularity of individual entries. If each entry covered a relatively wide class of beliefs, creationism might find itself among the top fifty (though probably nowhere near the top ten).
In this format that sounds like a good thing! At worst it would spark curiosity and provoke discussion. At best people would encounter a startling opinion that they had never seriously considered, think about for 60 seconds then form an understanding that either agrees with yours or disagrees, for a considered reason.
seconded, but a list of 20 seems too long/too much work, no?
I’d be thinking 5. :)
Taking into account what I already said about needing to influence people who can actually use beliefs (thus controlling for things like atheism, evolution, etc.)...
FAI and related.
Inability to do math.
Failures around believing the state of the world is good (thinking aging is a good thing and the like).
Believing that politics is the best way to influence the world.
What is the delusion here?
What is the delusion here? Do you mean people convincing themselves that they can’t do math?
This seems too subjective to label a delusion.
What do you mean by best and by influence?
Inability to do math? Really? Are you talking ‘disinclination to shut up and multiply’ or actual ability to do math?
I love math but don’t really think most people need it.
Dredging this up from deep nesting, because I think it’s important: wedrifid says
Yes. Never tell anyone that what you’re teaching them is hard. When you do that, you’re telling them they’ll fail, telling them to fail.
But if you tell them it’s easy, then they will be embarrassed for failing at something easy, or can’t be proud of succeeding at something easy.
Telling them it’s easy is also a bad idea.
It strikes me that giving no information about the general difficulty of the subject is also a bad idea. (I imagined myself struggling with a topic where I had no information on how hard others found it, and my hypothetical self was ashamed, because clearly if it were something everyone found hard, they’d warn people and teach it more slowly, so it must be easy for everybody else but me.)
Ideally, you’d teach the student not to be concerned with how well or how quickly they learn compared to others, which is a general learning technique that can apply to any field.
Simply telling people not to worry about that doesn’t… actually work, does it? That would genuinely surprise me.
Math anxiety is actually very common, and one of the ways to reduce it is to make students aware of the problem. It’s not as simple as saying “just don’t worry”, but in my experience as a tutor, it can be helpful to give gentle reminders that everyone learns at their own pace and that it may take some effort to understand a concept.
Math is all about trying many blind alleys before you figure out the correct approach, and teaching using examples where you try many wrong approaches first can help students understand that you don’t have to “get it” immediately, and it’s ok to struggle through it sometimes. It’s less “this is hard” and more “this often takes some effort to understand completely, so don’t panic”.
When I teach, I don’t say anything about “easy” or “difficult”. I just teach the material. What is this “easy”, this “difficult”? There is no “easy” or “difficult” for a Jedi—there is only the work to be done and the effort it takes. “Difficult” means “I will fail”. “Effort” means “I will succeed”.
You are torturing yourself by inventing fictional evidence. You have an entire imaginary scenario there, shadows and fog conjured from thin air.
I don’t think Alicorn’s evidence is completely fictional. It’s a simulation. It’s not as much evidence as if she had experienced it in real life, but it’s much better than, e.g., the evidence Terminator provides about future AIs.
This is a distinction without a difference. “Terminator” is a simulation—the writers didn’t make it up out of nothing. Granted, their purpose is to tell an entertaining story, but the idea that this is what future AIs would be like has been around for a long time, despite Asimov’s efforts to create a framework for telling stories of friendly robots.
Or to put it the other way round, Alicorn’s scenario is as fictional as “Terminator”. It is made out of plausible-sounding elements, as Terminator is, but the “clearly” and “must” and “everybody else but me” are signs that far too much belief is being placed in it.
Right, and there’s the issue of whose fault the difficulty is. Sure, the student might not really be trying. But also, the teacher may not be explaining in a way that speaks to the learner’s natural fluency. A method that works for the geeky types won’t work for more neurotypical types.
For my part, I never have trouble explaining high school math to those who haven’t completed it, even if they’re told that trig, calculus, etc. is hard. It’s because I first focus on finding out where exactly their knowledge deficit is and why the subject matter is useful. Of course, teachers don’t have the luxury of one-on-one instruction, but yes, how you present the material matters greatly.
Most people don’t need to understand evolution. Maybe we should distinguish between “harmful to self”, “harmful to society”, and “harmful to a democratic society”.
If you can’t do math at a fairly advanced level—at least having competence with information theory, probability, statistics, and calculus—you can’t understand the world beyond what’s visible on its (metaphorical) surface.
I’ve got a tangential question: what math, if learned by more people, would give the biggest improvement in understanding for the effort put into learning it?
Take calculus, for example. It’s great stuff if you want to talk about rates of change, or understand anything involving physics. There’s the benefit; how about the cost? Most people who learn it have a very hard time doing so, and they’re already well above average in mathematical ability. So, the benefit mostly relates to understanding physics, and the cost is fairly high for most people.
Compare this with learning basic probability and statistical thinking. I’m not necessarily talking about learning anything in depth, but people should have at least some exposure to ideas like probability distributions, variance, normal distributions and how they arise, and basic design of experiments—blinding, controlling for variables, and so on. This should be a lot easier to learn than calculus, and it would give insight into things that apply to more people.
I’ll give a concrete example: racism. Typical racist statements, like “black people are lazy and untrustworthy,” couldn’t possibly be true in more than a statistical sense, and obviously a statistical statement about a large group doesn’t apply to every member of that group—there’s plenty of variance to take into account. Basic statistical thinking makes racist bigotry sound preposterously silly, like someone claiming that the earth is flat. This also applies to every other form of irrational bigotry that I can think of off the top of my head.
Remember when Larry Summers suggested that maybe part of the reason for the underrepresentation of women in Harvard’s science faculty was that women may have lower variance in intelligence than men, and so are underrepresented in the highest part of the intelligence bell curve? What almost everybody heard was “Women can’t be scientists because they’re stupid.” People heard a statistical statement and had no idea how to understand it.
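Just to make the shape of that kind of statistical statement concrete (a toy calculation with made-up numbers, not real psychometric data):

```python
# Same mean, slightly different variances, very different representation in the
# extreme tail.
from scipy.stats import norm

cutoff = 4.0                      # "four standard deviations above the mean"
sd_a, sd_b = 1.0, 0.9             # hypothetical: group B has 10% lower SD

tail_a = norm.sf(cutoff, loc=0.0, scale=sd_a)
tail_b = norm.sf(cutoff, loc=0.0, scale=sd_b)
print(f"fraction of group A above the cutoff: {tail_a:.2e}")
print(f"fraction of group B above the cutoff: {tail_b:.2e}")
print(f"ratio: {tail_a / tail_b:.1f} to 1")
# With identical means, a modest difference in spread produces a lopsided ratio
# far out in the tail, while saying almost nothing about typical group members.
```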
There are important, relevant subjects that people just can not understand without basic statistical thinking. I would like to see most people exposed to basic statistical thinking.
Are there any other kinds of math that offer high bang-for-the-buck, as far as learning difficulty goes? (I’ve always thought that the math behind computer programming was damn useful stuff, but the engineering students I’ve talked with usually find it harder than calculus, so maybe that’s not the best idea.)
Probability theory as extended logic.
I think it can be presented in a manner accessible to many (Jaynes PT:LOS is not accessible to many).
Tangential question to your tangential question: I’m puzzled, which math are you talking about here? The only math relevant to programming that I can think of that engineering students would also learn would be discrete math, but the extent needed for good programming competency is pretty small and easy to pick up.
Are we talking numerical computing instead, with optimization problems and approximating solutions to DE’s? That’s the only thing I can think of relevant to engineering for which the math background might be more difficult than calculus.
I was thinking more basic: induction, recursion, reasoning about trees. Understanding those things on an intuitive level is one of the main barriers that people face when they learn to program. It’s one thing to be able to solve problems out of a textbook involving induction or recursion, but another thing to learn them so well that they become obvious—and it’s that higher level of understanding that’s important if you want to actually use these concepts.
I’m not sure about all the details, but I believe that there was a small kerfuffle a few decades ago over a suggestion to change the apex of U.S. “school mathematics” from calculus to a sort of discrete math for programming course. I cannot remember what sort of topics were suggested though. I do remember having the impression that the debate was won by the pro-calculus camp fairly decisively—of course, we all see that school mathematics hasn’t changed much.
Calculus might not be the best example of a skill with relatively low payoff, because you need some calculus to understand what a continuous probability distribution is.
I do? I thought I understood both calculus and continuous probability but I didn’t know one relied on the other. You are probably right, sometimes things that are ‘obvious’ just don’t get remembered.
For example, suppose you have a biased coin which lands heads up with probability p. A probability distribution that represents your belief about p is usually a non-negative real function f on the unit interval whose integral is 1. Your credence in the proposition that p lies between 1⁄3 and 1⁄2 is the integral of f from 1⁄3 to 1⁄2.
Yes, extremely obvious now that you mention it. :)
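A small numeric check of the biased-coin example above, using a made-up belief distribution f(p) = 6p(1−p) (a Beta(2,2) density) purely for illustration:

```python
# f(p) = 6p(1-p): non-negative on [0, 1] and integrates to 1, so it is a
# legitimate (if arbitrary) belief distribution over the coin's bias p.
def f(p):
    return 6 * p * (1 - p)

def integrate(func, a, b, steps=100_000):
    # simple midpoint rule
    width = (b - a) / steps
    return sum(func(a + (i + 0.5) * width) for i in range(steps)) * width

print(integrate(f, 0, 1))      # ~1.0: total probability
print(integrate(f, 1/3, 1/2))  # ~0.24: credence that 1/3 < p < 1/2
```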
Well above average mathematical ability and cannot do calculus to the extent of understanding rates of change? For crying out loud. You multiply by the number up at the top right of the letter, then reduce that number by 1. Or you do the reverse in the reverse order. You know, like you put on your socks then your shoes, but have to take off your shoes then take off your socks.
Sometimes drawing a picture helps prime an intuitive understanding of the physics. You start with a graph of velocity vs time. The steady climb of that line is the ‘acceleration’. See… it is getting faster each second. Now, use a pencil and progressively color in under the line. That’s the distance that is getting covered. See how later on, when it is going faster, more distance is being traveled each second and we have to shade in more area? Now, remember how we can find the area of a triangle? Well, will you look at that… the maths came out the same!
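If anyone wants to check that picture symbolically rather than with a pencil, here is a minimal sketch, assuming sympy is available and constant acceleration a (which is what the straight-line velocity graph depicts):

```python
# Constant acceleration a: velocity is the straight line v = a*t.
import sympy as sp

a, t, T = sp.symbols('a t T', positive=True)
v = a * t
distance = sp.integrate(v, (t, 0, T))  # shaded area under the line up to time T
print(distance)                        # a*T**2/2 -- the triangle area, (1/2)*base*height
print(sp.diff(distance, T))            # a*T      -- differentiating gives velocity back
```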
I teach calculus often. Students don’t get hung up on mechanical things like (x^3)′ = 3x^2. They instead get hung up on what
f′(x) = lim_{h → 0} [f(x+h) − f(x)] / h has to do with the derivative as a rate of change or as a slope of a tangent line. And from the perspective of a calculus student who has gone through the standard run of American school math, I can understand. It does require a level up in mathematical sophistication.
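As an aside for readers: a tiny numerical illustration of that limit definition. The difference quotient for f(x) = x^3 closes in on 3x^2 as h shrinks, tying the definition back to the mechanical rule mentioned above.

```python
# Difference quotients for f(x) = x**3 at x = 2, approaching 3*x**2 = 12.
def f(x):
    return x ** 3

x = 2.0
for h in (0.1, 0.01, 0.001, 0.0001):
    slope = (f(x + h) - f(x)) / h
    print(f"h = {h:<7} difference quotient = {slope:.5f}")
print("3*x**2 =", 3 * x ** 2)
```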
That’s the problem. See that bunch of symbols? That isn’t the best way to teach stuff. It is like trying to teach them math while speaking a foreign language (even if technically we are saving the Greek till next month). To teach that concept you start with the kind of picture I was previously describing, have them practice that till they get it, then progress to diagrams that change once in the middle, etc.
Perhaps the students here were prepared differently, but the average student started having problems with calculus when it reached a point slightly beyond what you require for the basic physics we were talking about here. i.e. they would be able to do 1. but have no chance at all with 2:
I’m not claiming that working from the definition of derivative is the best way to present the topic. But it is certainly necessary to present the definition if the calculus is being taught in a math course. Part of doing math is being rigorous. Doing derivatives without the definition is just calling on a black box.
On the other hand, once one has the intuition for the concept in hand through more tangible things like pictures, graphs, velociraptors, etc., the definition falls out so naturally that it ceases to be something which is memorized and is something that can be produced “on the fly”.
A definition is a black box (that happens to have official status). The process I describe above leads, when managed with foresight, to an intuitive way to produce a definition. Sure, it may not include the slogan “brought to you by apostrophe, the letters LIM and an arrow”, but you can go on to tell them “this is how impressive mathematicians say you should write this stuff that you already understand” and they’ll get it.
I note that some people do learn best by having a black box definition shoved down their throats while others learn best by building from a solid foundation of understanding. Juggling both types isn’t easy.
That is close to saying “this stuff is hard”. How about first showing the students the diagram that that definition is a direct transcription of, and then getting the formula from it?
(Actually, many would struggle with 1. due to difficulty with comprehension and abstract problem solving. They could handle the calculus but need someone to hold their hand with the actual thinking part. That’s what we really fail to teach effectively.)
People get the simple concepts mixed together with a bunch of mathy-looking symbols and equations, and it all congeals into an undifferentiated mass of confusing math. Yes, I know calculus is actually pretty straightforward, but we’re probably not a representative sample. Talk with random bewildered college freshmen to combat sample bias. I did this, and what I learned is that most people have serious trouble learning calculus.
Now, if you want to be able to partially understand a bunch of physics stuff but you don’t necessarily need to be able to do the math, you could probably get away with a small subset of what people learn in calculus classes. If you learned about integration and differentiation (but not how to do them symbolically), as well as vectors, vector fields, and divergence and curl, then you could probably get more benefit-per-hour-of-study than if you went and learned calculus properly. It leaves a bad taste in my mouth, though.
When taught well, the calculus required for the sort of applications you mentioned is not something that causes significant trouble, certainly not compared to vector fields, divergence or curl. By “taught well” I mean, if you will excuse my lack of seemly modesty, the way I taught it in my (extremely brief—don’t let me get started on what I think of western school systems) stint teaching high school physics. The biggest problem for people learning basic calculus is that people teaching it try to convey that it is hard.
I’m only talking here about the level of stuff required for everyday physics. Definitely not for the vast majority of calculus that we try to teach them.
Aw, please ? I’d be interested in hearing about the differences with other systems :)
It has been said that democracy is the worst form of government except all the others that have been tried. --Sir Winston Churchill
I’m not quite going to make that analogy, but I will hasten to assert that there are far worse systems of education than ours. Including some that are ‘like ours but magnified’.
In terms of healthy psychological development and practical skill acquisition, the apprenticeship systems of various cultures have been better. Right now I can refer to the school system on one of the Solomon Islands. The culture is that of a primitive coastal village, but with western influences. Western teaching materials and a teacher are provided, but schooling occurs only in the morning, for 4 hours a day. No breaks are needed, nor is any pointless time-wasting. The children then spend their time surfing. But they surf carrying spears and catch fish while they are doing it.
What appeals to me about that system is:
The shorter time period.
Most of the time kids spend at school is a blatant waste. In particular, in the youngest years a lot of what the kids are doing is ‘growing older’. That is what is required for their brains to handle the next critical learning skills.
Much more than 4 hours a day of learning is squandered on diminishing returns. The Cambridge Handbook of Expertise and Expert Performance suggests that 4 hours per day of deliberate practice (7 days a week for 10 years) is a good approximate guide for how to gain world-class expert level performance in a field. It is remarkably stable across many domains.
The children’s social lives are not dominated by playground politics and are not essentially limited to same age peers.
Not only are the extracurricular activities physically healthier than more time wasted in classes, they are better for brain development too. What is the formula for increased release of neurotrophic growth factors, consolidation into stable neurogenesis, and optimized attention control and cognitive performance? Aerobic exercise + activities requiring extensive coordination + a healthy diet including adequate Omega-3 intake. That’s right. Spending hours swimming, surfing and catching fish with spears is just about perfect.
I agree, and this is a tragedy in that it means students don’t have a marketable skill by 14, as they would in an apprentice system, and so are dependent on mommy and daddy. This “age of genuine independence from parents” is increasing all the time, and there’s no excuse for it. It disenfranchises children more than any legal age restrictions on this or that.
While as a mathematician I find that claim touching, I can’t really agree with it. To use the example that was one of the starting points of this conversation, how much math do you need to understand evolution? Sure, if you want to really understand the modern synthesis in detail you need math. And if you want to make specific predictions about what will happen to allele frequencies you’ll need math. But in those cases it is very basic probability and maybe a tiny bit of calculus (and even then, more often than not you can use the formulas without actually knowing why they work beyond a very rough idea).
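As a sketch of how basic that probability is: the standard textbook allele-frequency calculation (Hardy-Weinberg, which the comment doesn’t name but is the usual example) is nothing more than multiplying frequencies.

```python
# Hardy-Weinberg: expected genotype frequencies from allele frequencies
# under random mating (a textbook example, not taken from the comment above).
def hardy_weinberg(p):
    """p is the frequency of allele A; q = 1 - p is the frequency of allele a."""
    q = 1 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

print(hardy_weinberg(0.7))
# roughly {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09} -- nothing beyond multiplying probabilities
```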
Similar remarks apply to other areas. I don’t need a deep understanding of any of those subjects to have a basic idea about atoms, although again I will need some of them if I want to actually make useful predictions (say for Brownian motion).
Similarly, I don’t need any of those subjects to understand the Keplerian model of orbits, and I’ll only need one of those four (calculus) if I want to make more precise estimations for orbits (using Newtonian laws).
The amount of actual math needed to understand the physical world is pretty minimal unless one is doing hard core physics or chemistry.
For example… trying to work out what happens when I shoot a never-ending stream of electrons at a black hole. The related theories were more or less incomprehensible to me at first glance. Not being able to do off-the-wall theorizing on everything at the drop of a hat has to at least make 49!
The human-scale physical world is relatively easy to understand, and we may have evolved or learned to perform the trickier computations using specialized modules, such as perhaps recognizing parabolas to predict where a thrown object will land. You get far with linear models, for instance, assuming that the distance something will move is proportional to the force that you hit it with, or that the damage done is proportional to the size of the rock you hit something with. You rarely come across any trajectory where the second derivative changes sign.
The social world, the economic world, ecology, game theory, predicting the future, and politics are harder to understand. There are a lot of nonlinear and even non-differentiable interactions. To understand a phenomenon qualitatively, it’s helpful to perform a stability analysis, and recognize likely stable areas, and also unstable regions where you have phase transitions, period doublings, and other catastrophes. You usually can’t do the math and solve one of these systems; but if you’ve worked with a lot of toy systems mathematically, you’ll understand the kind of behaviors you might see, and know something about how the number of variables and the correlations between them affect the limits of linear extrapolation. So you won’t assume that a global warming rate of .017C/year will lead to a temperature increase of 1.7C in 100 years.
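A toy contrast, with invented numbers, between naive linear extrapolation and a trend whose rate itself drifts, which is the kind of behaviour being warned about here:

```python
# Invented numbers: same observed starting rate, two different assumptions.
rate = 0.017           # degrees C per year, as in the comment above
years = 100

linear = rate * years  # naive linear extrapolation: 1.7 C

# Toy nonlinear alternative: the rate itself drifts upward by 1% per year.
temp, r = 0.0, rate
for _ in range(years):
    temp += r
    r *= 1.01

print(f"linear extrapolation:   {linear:.2f} C")
print(f"accelerating toy model: {temp:.2f} C")
```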
I’m making this up as I go; I don’t have any good evidence at hand. I have the impression that I use math a lot to understand the world (but not the “physical” world of kinematics). I haven’t observed myself and counted how often it happens.
I’d like this to be true, as I want the time I spend learning math in the future to be as useful as you say, but I seem to have come rather far by knowing the superficial version of a lot of things. Knowing the actual math from something like PT:LOS would be great, and I plan on reaching at least that level in the Bayesian conspiracy, but I can currently talk about things like quantum physics and UDT and speed priors and turn this into changes in expected anticipation. I don’t know what Kolmogorov complexity is, really, in a strictly formal from-the-axioms sense, nor Solomonoff induction, but I reference it or things related to it about 10 times a day in conversations at SIAI house, and people who know a lot more than I do mostly don’t laugh at my postulations. Perhaps you mean a deeper level of understanding? I’d like to achieve that, but my current level seems to be doing me well. Perhaps I’m an outlier. (I flunked out of high school calculus and ‘Algebra 2’ and haven’t learned any math since. I know the Wikipedia/Scholarpedia versions of a whole bunch of things, including information theory, computer science, algorithmic probability, set theory, etc., but I gloss over the fancy Greek letters and weird symbols and pretend I know the terms anyway.)
A public reminder to myself so as to make use of consistency pressure: I shouldn’t write comments like the one I wrote above. It lingers too long on a specific argument that is not particularly strong and was probably subconsciously fueled by a desire to talk about myself and perhaps countersignal to someone whose writing I respect (Phil Goetz).
I have a belief that I can fix things like this, having spent time working with other students in high school. If I ever meet you in person, will you assist me in testing that belief? ;)
I’m pretty sure that most people around lesswrong have about the same level of familiarity with most subjects (outside whatever field they actually specialize in). I do think that you are relatively weak in mathematics, but advanced math just really isn’t that important vis-a-vis being generally well educated and rational.
This one.
Are you then asserting that non-utilitarian views constitute a delusion?
I’m asserting that saying “We must do X, because it produces good effect Y”, when there is option Z, which delivers the same Y for half the cost, is a delusion.
This seems more like a common cognitive error than a delusion. How are you defining delusion? It seems like I am using a more narrow definition of delusion. I’m using something like “statement or class of statements about the physical world that are demonstrably extremely unlikely to be true.” What definition are you using?
Lucas’s statement fits this definition. This may be clearer if you consider just “we must do X”, which is a claim about the physical world. The “because” part does not happen to change this.
If you don’t agree that the truncated claim fits the criteria, I infer that the most likely difference in definitions between you and Lucas is not so much around ‘delusion’ as about what ‘must’ means in relation to the physical world. This would make what you say true even if it isn’t grounded in my preferred ontology.
Ah, so the issue is that I see “must” as entangled with moral and ethical claims that aren’t necessarily connected to the physical world in any useful fashion.
Exactly! And to delve somewhat deeper into the levels of meaning there are many who would say that ‘must’ or weaker ‘should’ claims are about satisfying a given ‘rightness’ function. Of those people many of them will say that the ‘rightness’ function can’t reasonably be described as something that is part of the physical world. After accepting that position some will say that a ‘must’ claim is making an objective assertion about what best satisfies a known ‘rightness’ function. In perhaps simpler terms, I’ll look at the X/Y example we already have:
There are various things that can be accepted or rejected as ‘delusions’, that may be considered claims about the physical world. (In most cases the proposed delusion would be the negation, but the ‘can?’ is symmetric.)
Can lacking the belief “We must do things that have good effects” be a delusion?
Can lacking the belief “Y is a good effect” be a delusion?
Can lacking the belief “X has the effect Y” be a delusion?
Can lacking the belief “Z delivers the same Y for half the cost” be a delusion?
Can having the belief “We must do X even if Z does Y for half the cost” be a delusion?
Let’s see...
Your claim requires that you reject 1, 2 and 5 as possible candidates for delusion.
There are some that would reject 1 and 2 as candidates for delusion but say that 5 is a candidate because it implies fallacious reasoning based on the other arbitrary not-physical premises.
I accept 1 and 2 as possible candidates too via an ontology that formalises (and grounds in the physical) the way that normative claims are actually used in practice. But I never presume this definition when in conversation unless I know the others in the conversation are either familiar with technical formalism or using the colloquial meaning of ‘should/must’.
When I assert that Lucas’s statement is correct it is based on the “5” claim. It didn’t even occur to me to reject 5 as a possible delusion because it just seems so obvious to me. If you can’t make the claim without a mental screw up then it is a delusion, dammit! Even when it is concluding something that you don’t consider to be part of the physical universe.
The creation of an FAI is not the most important thing the species could be doing.
The best way to create an FAI is not...
If I might jump in on the listing of delusions, I think that perhaps one of the most important things to understand about widespread delusions is who, in fact, holds them. A bunch of rednecks in Louisiana not believing in evolution isn’t important, because even if they did, it wouldn’t inform other parts of their worldview. In general, the specific delusions of ordinary people (IQ < 120) aren’t important, because they aren’t the ones who are actually affecting anything. Even improving the rationality and general problem awareness of smart people (120 < IQ < 135) doesn’t really help, because then you get people who will expend enormous effort doing things like evangelizing atheism to the ordinary people and fighting global warming and the like. Raising the sanity waterline is important, but effort should be focused on people with the ability to actually use true beliefs.
I’m less sure. I would have thought that they affect things indirectly at least through social transmission of beliefs, what they choose to spend their money on, and the demands they make of politicians.
Arguably, one should expect it to help less than improving the rationality and awareness of people with IQ < 120, just because there are 11 times as many people with IQ < 120 as there are with 120 < IQ < 135.
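A quick check of that 11x figure, assuming the conventional IQ scale (mean 100, standard deviation 15):

```python
# Conventional IQ scale: normal with mean 100, standard deviation 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
below_120 = iq.cdf(120)
between_120_135 = iq.cdf(135) - iq.cdf(120)
print(f"IQ < 120:       {below_120:.3f} of the population")
print(f"120 < IQ < 135: {between_120_135:.3f} of the population")
print(f"ratio: {below_120 / between_120_135:.1f}")  # about 11
```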
I sincerely hope that you are using IQ as only the crudest shorthand for “ability to actually use true beliefs,” but your point in general is very well taken. Please do jump in if you have a listing of the most harmful delusions. :-)
IQ >= 120 is a fairly low bar. IQ is also a strong indicator for the potential for someone’s behavior to be influenced by delusions (rather than near mode thinking + social pressure being the dominant adaptation.)
Do you mean to say that people of ordinary intelligence, as a general rule, don’t actually believe whatever it is they say they believe, but instead just parrot what those around them say? You might be right. I think I need to find a way to re-immerse myself in a crowd of people of average intelligence; it’s been far too long, and my predictive/descriptive powers for such people are fraying.
Note that none of this is sarcasm; this comment is entirely sincere.
Wedrifid only said “potential”; most people, smart or not, behave as you say. And I would expand “delusion” to “belief”: being smart is correlated with being influenced by beliefs, true or false.
That people act on beliefs or have at all coherent world-views is the most dangerous widespread delusion. (“The world is mad.”) Immersing yourself in a crowd of average intelligence might help you see this, but I rather doubt that your associates act on their beliefs.
Another thing that is dangerous is the people that actually act on their beliefs. They are much harder to control. People ‘acting as if’ pragmatically don’t do things that we strongly socially penalize.
Not on their stated beliefs; surely, but don’t most people have a set of actual beliefs? Can’t these actual beliefs, at least in some contexts, be nudged so as to influence the level and direction of cognitive dissonance, which in turn can influence actions?
There’s certainly evidence that intelligent people are more likely to have more coherent worldviews. For example, the GSS data shows that higher vocabulary is associated with more extreme political views to either end of the traditional political spectrum. There’s similar research for IQ scores but I don’t have a citation for that.
Are you saying more extreme political views are more coherent? I’m not following this.
Blueberry:
That seems like an almost self-evident observation to me. I have never seen anyone state clearly any political or ideological principles, of whatever sort and from whatever position, whose straightforward application wouldn’t lead to positions that are utterly extremist by the standards of the present centrist opinion.
Getting people with regular respectable opinions to contradict themselves by asking a few Socratic questions is a trivial exercise (though not one that’s likely to endear you to them!). The same is not necessarily true for certain extremist positions.
And it seems self-evidently false to me, so I’m very curious what exactly you mean.
If you take any one principle and apply it across the board, to everything, without limitation, you’ll end up with an extremist position, basically by definition. So in that sense, extremist positions may be simpler than moderate ones. But that’s more “extrapolation” and “exaggeration” than “straightforward application”.
Moderate positions tend to carefully draw lines to balance out many different principles. I’m not sure how to discuss this without giving contemporary political examples, so I’ll do so with the warning that I’m not necessarily for or against any of the following moderate positions, and I’m not intending to debate any of them; I’m just claiming that they’re moderate and consistent.
The government should be able to impose a progressive tax on people’s incomes, which it can then use for national defense, infrastructure, and social programs, while still allowing individuals to make profits (contrast communism and pure libertarianism)
Individuals over 18 who have not been convicted of a felony should be able to carry a handgun, but not an automatic weapon, after a brief background check, except in certain public places (contrast with complete banning of guns and with a free market on all weapons)
The government should regulate and approve the sale of some kinds of chemicals, completely banning some, allowing some with a doctor’s prescription, and allowing some to be sold freely over the counter after careful review
People over a certain age X should be able to freely have consensual sex in private with each other without government interference; people under X-n should not be allowed to engage in sex; people in between should be allowed to have sex only with people close to their own age
The country should guard its borders and not let anyone in without approval, and deport anyone found to have entered illegally, but should grant entry to tourists and grant a visa to a small number of students and workers
You can feel free to add your own if you’d like. But I don’t see how any of these are incoherent or contradictory. What Socratic questions would expose them?
What you list are explicit descriptions of concrete positions on various issues, not the underlying principles and logic. However, what I had in mind is that if you take some typical persons whose positions on concrete issues are moderate and respectable by the contemporary standards, and ask them to state some abstract principles underlying their beliefs, a simple deduction from the stated principles will often lead to different and much more extreme positions in a straightforward way. If called out on this, your interlocutors will likely appeal to a disorganized and incoherent set of exceptions and special cases to rationalize away the problem, even though before the problem is pointed out, they would affirm these principles in enthusiastic and absolute terms.
Let me give you an example of Socratic questioning of this sort that I applied in practice once. In the remainder of the comment, I’ll assume that we’re in the U.S. or some other contemporary Western society.
Let’s discuss the principle that religion and state should be separate, in the sense that each citizen should be free to affirm and follow any religious beliefs whatsoever as long as this doesn’t imply any illegal actions, and the state should consider religious beliefs as a matter of purely private and personal choice, like a taste in food or music. You’ll probably agree that this principle doesn’t sound too extreme when stated in these words, and many people with ideological affiliations not too far from the center would enthusiastically affirm it.
But now take these people and ask them: should the government consider religion a protected category in anti-discrimination laws? Currently, it does. Your employer may demand that you look and behave in certain ways, and the burden is on you to comply under the threat of getting fired; pleading that this would be contrary to your personal tastes and preferences won’t help you at all. Yet if this is contrary to your religion, the government will intervene and compel him to accommodate you within reasonable (and, arguably, sometimes unreasonable) limits. But this is clearly contrary to the above stated principle. How can the state flex its muscle to support your religious beliefs, if it considers them equivalent to mere personal preferences and gives no special support to religion over other sorts of interests and hobbies people have?
Trouble is, arguing that religious beliefs shouldn’t be protected by anti-discrimination laws is definitely an extreme position nowadays. It opposes a firm consensus of the entire contemporary mainstream, and to make things even more incoherent, it will provoke hostility especially among certain ideological groups whose members normally consider secularism as a part of their core principles. Among the people who affirm the above principle in the abstract, very few will bite that bullet—people normally never bite bullets based on abstract principles—so you’ll likely hear a stream of incoherent special pleading aimed to justify its non-application here. That’s the sort of incoherence typical of the contemporary moderates I’m talking about.
On the other hand, someone who doesn’t accept the separation of religion and state at all, or who is a principled libertarian opposed to anti-discrimination laws altogether—which are both extremist positions by today’s standards—won’t suffer from this incoherence.
Do you ever find people who bite the other bullet and say that, well, the principle wasn’t really all that good after all, since it didn’t allow for this particular exception?
As far as I’m concerned, religious beliefs should be given exactly the same protections as political beliefs, and no more. (Religion is given all too much deference in the United States today.) If you can refuse to hire people because they belong to the Raving Loonies Party—and it’s legal to do so in most states—then it should also be legal to refuse to hire people who belong to the Church of Raving Loonies.
If we started treating religious and political beliefs as commensurate, I think this would result in—at least in some regions—greater deference to politics, not lesser deference to religion.
OK, what if we reword this as “the state should consider religious beliefs as a matter of purely private and personal choice, because they are very important and the state is not good at identifying or encouraging appropriate religious beliefs.”
Isn’t that a coherent, moderate principle that explains much of American policy on what to do when religion intrudes onto the public sphere? According to this principle, the state can ban religious discrimination because this reinforces private choice of religion and does not require the state to inquire at all into which religious beliefs are appropriate. Yet, also according to this principle, public schools should not allow prayer during class time, because this would interfere with private choice of religion and requires the state to express an opinion about which religious beliefs are appropriate.
I don’t deny the general assertion that many Americans fail the “express Socratically consistent principles and policies” test, but I’m with Blueberry in that I think moderate, coherent principles are quite possible.
Mass_Driver:
Trouble is, this still requires that the state must decide what qualifies as a religious belief, and what not. Once this determination has been made, things in the former category will receive important active support from the state. There is also the flipside, of course: the government is presently prohibited from actively promoting certain beliefs because it would mean “establishing religion” according to the reigning precedent, but it can actively promote others because they don’t qualify as “religious.”
Now, if there existed some objective way—a way that would carve reality at the joints—to draw limits between religion on one side and ideology, philosophy, custom, moral outlook, and just plain personal opinions and tastes on the other, such determination could be made in a coherent way. But I don’t see any coherent way to draw such limits, certainly not in a way that would be consistent with the present range of moderate positions on these issues.
(By the way, another interesting way to get respectable-thinking folks into a tremendous contradiction is to get them to enthusiastically affirm that legal discrimination on the basis of attributes that are a pure accident of birth is evil—and then point out that this implies that any system of citizenships, passports, visas, and immigration laws must be evil. Especially if you add that religion is usually much easier to change than nationality! Pursuing this line of thought further leads to a gold mine of incoherences in the whole “normal” range of beliefs nowadays, as regularly demonstrated on Overcoming Bias.)
Another thing I should perhaps make sure to point out is that I don’t necessarily consider coherence as a virtue in human affairs, though that’s a complex topic in its own right.
Typically, yes. People with extreme views typically don’t fail to make inferences from their beliefs along the lines of “X is good, so doing Y, which creates even more of X’s goodness, would be even better!” Y might in fact be utterly stupid and evil and wrong, and a moderate with less extreme views might be against it, but the moderate and the extremist might both agree with X, even though the failure of logic that leads the extremist to endorse the evil Y starts from the belief that X is good.
Do more extreme political views signify more coherent worldviews?
You really should watch your grammar, syntax, and spelling while commenting on intelligence. The irony is distracting, otherwise. Unless you were referring to the CIA and FBI?
It might be more generally a sign that I shouldn’t comment when it is late at night in my timezone. Also, it should constitute evidence that we need better spellcheckers that don’t just catch non-words but also words that are clearly wrong from minimal context (although in this particular case catching that that was the wrong word would almost seem to require solving the natural language problem unless one had very good statistical methods).
I differentiate between ‘actually believe’ and ‘act as if they are an agent with the belief that’. All people mostly do the latter but high IQ people are somewhat more likely to let ‘actual beliefs’ interfere with their lives.
I would say that people of ordinary intelligence don’t actually have anything that I would identify as a non-trivial belief. They might say they believe in god, but they don’t actually expect to get the pony they prayed for (even if they say that they do). However, they do have accurate beliefs regarding, say, how to cook food, or whether jumping off a building is a healthy idea, because they actually have to use such beliefs.
In a democracy, specific delusions of ordinary people are important.
In a representative democracy, the specific delusions of the elected and unelected officials are important.
If you said that it wouldn’t make the top 10, I’d find that not implausible. Claiming it wouldn’t make the top 50 seems implausible. Actual dangers posed by creationism:
1) It makes people have a generally more anti-science attitude and makes children less likely to become scientists.
2) It takes up large sets of resources that would otherwise be spent usefully.
3) It actively includes the spreading of a lot of misinformation.
4) It snags otherwise bright minds who might otherwise become productive individuals (Jonathan Sarfati, for example, is a chess master, unambiguously quite bright, and had multiple good scientific papers before getting roped into YECism. Michael Behe is in a similar situation, although for ID rather than young earth creationism).
5) The young earth variants encourage a narrow time outlook which is not helpful for long-term planning about the world or appreciation of serious existential threats (although honestly so few people pay attention to existential risks that this is probably a minor issue).
6) It causes actual scientists and teachers to lose their jobs or have their work restricted (admittedly this isn’t common, but that’s partially because creationism doesn’t have much ground).
7) It encourages generally extremist religious attitudes.
So not in the top 10? I’d agree with that. But I have trouble seeing it not in the top 50 most dangerous widespread delusions.
Thanks, this is what I had in mind.
I don’t remember a post by Eliezer on the subject but it is oh so true. I often feel a ‘cringe’ reaction when I hear ‘evidence’ being used as a religious symbol. It is the same cringe reaction I get when I hear people say “God says” about something that I know isn’t even covered in their bible. In both cases something BAD is going on that has nothing to do with whether or not there is a God.
Here is some javascript to help follow LW comments. It only works if your browser supports offline storage. You can check that here.
To use it, follow the pastebin link, select all that text and make a bookmark out of it. Then, when reading a LW page, just click the bookmark. Unread comments will be highlighted, and you can jump to next unread comment by clicking on that new thing in the top left corner. The script looks up every (new) comment on the page and stores its ID in the local database.
Edit: to be more specific, all comments are marked as read as soon as the script is run. I could come up with a version that only marks them as read once you click that thing in upper left corner. Let me know if you’re using it or if you’d like anything changed/added.
I made a similar Greasemonkey script some time ago.
Strange occurrence in the South Carolina (US) Democratic primary.
The Washington Post profiled Alvin Greene last week
10 minute video interview with Greene
What happened here?
Wikipedia has a list of possible explanations.
Fivethirtyeight lists possible explanations and analysis.
Rawl and co presented five hours of testimony that the results could only be attributed to a problem with the voting machines.
What is your probability estimate for Alvin Greene’s win in this election being legitimate (Greene getting lucky as a result of aggregate voter intent+indifference+confusion, as opposed to voting machine malfunction or some sort of active conspiracy)? What evidence do you need in order to update your estimate?
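For anyone who wants the mechanics rather than a number: here is a sketch of Bayes’ rule in odds form, with made-up figures purely to show how an estimate would move, not a claim about the actual election.

```python
# Bayes' rule in odds form, with invented numbers.
def posterior(prior, likelihood_ratio):
    """prior: P(legitimate win).
    likelihood_ratio: P(evidence | legitimate) / P(evidence | not legitimate)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. start at 30% legitimate, then see evidence judged 5x more likely under
# malfunction/fraud than under a genuine win:
print(round(posterior(0.30, 1 / 5), 3))  # ~0.079
```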
Not ready to answer the rationalist questions, but why is it that, as soon as elections don’t go toward someone who played the standard political game, suddenly, “it must be a mistake somehow”? You guys set the terms of the primaries, you pick the voting machines. If you’re not ready to trust them before the election, the time to contest them was back then, not when you don’t like the result.
Where was Rawl on the important issue of voting machine reliability when they did “what they’re supposed to”?
I understand that elections are evidence, and given the prior on Greene, this particular election may be insufficient to justify a posterior that Greene has the most “support”, however defined. But elections also serve as a bright line to settle an issue. We could argue forever about who “really” has the most votes, but eventually we have to say who won, and elections are just as much about finality on that issue as they are as an evidential test of fact.
To an extent, then, it doesn’t matter that Greene didn’t “really” get the most votes. If you allow every election to be indefinitely contested until you’re convinced there’s no reason the loser really should have won, elections never settle anything. The price for indifference to voting procedure reliability (in this case, the machines) should be acceptance of a bad outcome for that time, to be corrected for the next election, or through the recall process.
Frankly, if Greene had lost but could present evidence of the strength Rawl presented, we wouldn’t even be having this conversation.
ETA: Oh, and you gotta love this:
Damn those candidates with autism symptoms! Only manipulative people like us deserve to win elections!
I should point out that most of the people who ought to know about the issue, have been screaming bloody murder about electronic voting machines for some time now. Politicians and the general public just haven’t been listening. This issue is surfacing now, not because it wasn’t an issue before, but because having a specific election to point to makes it easier to get people to listen. It also helps that the election wasn’t an important one (it was a Democratic primary for a safe Republican seat), and the candidates involved don’t have the resources to influence the discussion like they normally would.
This doesn’t sound like autism to me. It sounds more like a neurotypical individual who is dealing with a very unexpected and stressful set of events and having to talk about them.
Be that as it may, those are typical characteristics of high-functioning autistics, and I’m more than a little bothered that they view this as justification for reversing his victory.
Take the part I bolded and remove the “incoherent rambling” bit, and you could be describing me! Well, at least my normal mode of speech without deliberate self-adjustment.
And my lack of incoherent rambling is a judgment call ;-)
Well… knowing that someone is autistic is some inferential evidence in favor of them being a good hacker.
Yes. Exactly. This is true for lawsuits as well: getting a final answer is more important than getting the “right” answer, which is why finality is an important judicial value that courts balance.
My most likely explanations would be 1) software bug(s) 2) voter whim or confusion 3) odd hypothesis no one has thought of yet. Active intent to steal the nomination a distant fourth. Make it 60⁄30 among the first two.
Evidence? Well, anything credible, but how likely is that. :)
I put a very high probability that some form of tampering occurred, primarily due to the failure of the data to obey a generalized Benford’s law. Although a large amount of noise has been made about the fact that some counties had more votes cast in the Republican governor’s race than reported turnout, I don’t see that as strong evidence of fraud, since turnout levels in local elections are often based on the counting ability of the election volunteers, who often aren’t very competent.
I’d give probability estimates very similar to Jim’s, but with a slightly higher percentage for people actually voting for him. I’d do that, I think, by moving most of the probability mass from the idea of someone tampering with the election to expose the insecure voting machines, which implies a very strange set of ethical thought processes. I’ve also had enough experience in local elections to know that sometimes very weird things happen for reasons that no one can explain (and that this occurs even with systems that are difficult to tamper with). So using the primary breakdown given by Jim I’d put it as follows:
Edit: Thinking this through, another possibility that should be listed is deliberate Republican cross-over (since it is an open primary), but given the evidence that seems of negligible probability at this point (< .01).
I would count that under “voters actually voted for him”
Ok. Yeah, so that should probably be a subcategory of that, since it explains the weird results in a sensible fashion.
I don’t know the details about the American voting system, but (or maybe therefore) I am surprised at how low an estimate everyone gives to the possibility that the result is genuine. My estimate (without much research, I’ve just read the links) is
0.5 voters actually voted for Greene
0.3 error of some kind
0.2 conspiracy
In order to update, any evidence is accepted, of course. What I would most like to see: results of some statistical survey, conducted either before or, better, after the election; historical data concerning the performance of black candidates; historical data from elections with a big difference in the intensity of the campaigns between the competing candidates; a lot of independent testimonies of trustworthy voters reporting non-standard behaviour of the voting machines; a description of how the results can be altered (and what is normally done to avoid that).
I would say
Conspiracy is a really stupid claim for this result—it is an incredibly unimportant election. If someone was going to purposely jigger the results of an election, they would do it where it actually mattered. The only reason it is still on there is that people sometimes do do really stupid things (as opposed to normally stupid things that they do all the time).
Here is my probability distribution:
Voters actually voted for him: 0.1
Someone tampered with the voting machines or memory cards to make Alvin Greene win: 0.4
...and that person did it because they wanted Alvin Greene to win: 0.1
...and that person did it for kicks: 0.1
...and that person did it because they wanted to expose the insecure voting machines: 0.2
Someone meant to tamper with a different election on the same ballot, but accidentally altered the democratic primary additionally or instead: 0.1
The votes were altered by leftover malware from a previous election which was also hacked: 0.2
A legitimate error in setting up or managing the voting machines altered the vote totals: 0.2
Note that I started researching this topic with an atypically high prior probability for voting machine fraud, and believe that it is very likely that major US elections in the past were altered this way. The strongest direct evidence I see for fraud having occurred is that there were “three counties with more votes cast in Republican governor’s race than reported turnout in the Republican primary” FiveThirtyEight. Note that this means botched vote fraud, not correctly-implemented vote fraud, since correctly implemented vote fraud, using a strategy such as the Hursti hack, would have changed the votes but not the turnout numbers.
The Benford’s Law analysis on FiveThirtyEight, on the other hand, I find very unconvincing—first because it has a low p-value, and second because it doesn’t represent the way voting machine fraud really works; it can only detect if someone makes up vote totals from scratch, rather than adding to or subtracting from real vote totals.
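For readers unfamiliar with the test being criticized, here is a sketch of what a first-digit Benford check looks like, using invented precinct totals rather than FiveThirtyEight’s data:

```python
# First-digit frequencies vs Benford's law, on invented precinct totals.
import math
from collections import Counter

def first_digit(n):
    return int(str(abs(n))[0])

vote_totals = [132, 87, 954, 210, 45, 1203, 67, 389, 158, 23, 711, 96]  # made up

counts = Counter(first_digit(v) for v in vote_totals)
total = len(vote_totals)
for d in range(1, 10):
    observed = counts.get(d, 0) / total
    expected = math.log10(1 + 1 / d)  # Benford's prediction for digit d
    print(f"digit {d}: observed {observed:.2f}, Benford expects {expected:.2f}")
# As noted above, fraud that adds to or subtracts from real totals, rather than
# inventing them from scratch, need not disturb this distribution much.
```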
Probability that this person would have a worse influence on the senate than a more standard politician: 5%.
I would give it lower than that, US Senators have surprisingly little power.
That is not important when considering the probability that Alvin Greene would have a worse influence on the Senate than the average politician if he got elected. It is only important when considering the probability that he would have a much worse influence on the Senate than average.
???
I mean, in the sense that the US government is like a massive Ouija board that is not really controlled by anyone, then sure. But the senators seem to have a particularly heavy hand on the board.
Sorry, I meant “influence”, not “power”.
Conditional on their winning the election, presumably.
I’m not sure that is technically necessary given the precise phrasing.
Because, unless he is a politician, the sentence fails to make sense, because ‘more standard politician’ requires him to be one? If so, I think being selected as a candidate makes you a politician.
It seems to make sense without any fancy interpretation.
I think voters were clueless about both candidates, but they like to fill in all the boxes on the ballot, so they chose the name that has the higher positive affect by far: “Alvin Greene”.
To me that would be sufficient to explain the entire anomaly, if not for the mysterious origin of Greene’s $10,000 filing fee.
Also the possible “Al Green” effect—voters may have thought they were voting for the famous soul singer.
The next election being won by a ficus would boost my estimate. Or, you know, something else ridiculous like an action hero actor.
Why is this at all ridiculous? Is there any reason to believe Arnold Schwarzenegger has done a significantly worse job than other governors, controlling for ability of the legislature to agree on anything and the health of the economy?
It merely serves to illustrate what politics is really about. It certainly isn’t about voting for people who are the best suited for making and implementing the decisions that are best for the country, planet or species. I actually would have voted for him unless he had a particularly remarkable opponent. All else being equal, a popular contribution in another field that I appreciate is a more important signal to me than success as a pure courtier. It is unfortunate that I do not have reason to consider political popularity a stronger signal of country-leading competence than creating ‘Kindergarten Cop’.
I’ve already assigned a low probability to Alvin being at all worse than the alternatives. I expect Arnold would be ‘even’ better.
(Oh, and I do think that one-liner is sub-par. It would be better to stick to the actually ridiculous rather than the superficially ridiculous.)
Probability that this person would have a worse influence on the senate than a more standard politician: 5%.
my breakdown:
Conspiracy: 19%
Error of some kind: 80%
Voters actually voted for him: 1%
(Given that there is a unique cause, and it is one of those three, of course.)
Surely the “voters aren’t actually paying attention” hypothesis deserves more than 1% probability.
that could fall under ‘error of some kind’.
The sting of poverty
What bees and dented cars can teach about what it means to be poor—and the flaws of economics
http://www.boston.com/bostonglobe/ideas/articles/2008/03/30/the_sting_of_poverty/?page=full
and lots of Hacker News comments: http://news.ycombinator.com/item?id=1467832
Has anybody looked into OpenCog? And why is it that the wiki doesn’t include much in the way of references to previous AI projects?
If making a Friendly AI is compared to landing on the moon, I’d say OpenCog is something like the scaffolding for a backyard rocket. It still needs something extra—the rocket—and even then it won’t achieve escape velocity. But a radically scaled-up version of OpenCog—with a lot more theory behind it, and tailored to run at the level of a whole data center rather than on a single PC—is the sort of toolset that could make a singularity.
For those of you who don’t want to register at fanfic.com to receive notifications of new chapters of Harry Potter and the Methods of Rationality, I have added a mailing list. You can add yourself here: http://felix-benner.com/cgi-bin/mailman/listinfo/fanfic It is still untested so I don’t know whether it will work, but I assume so.
Another economics WTF:
A lot of you may remember my criticism of mainstream economics: that its practitioners become so detached from what is meant by a “good economy” that they advocate things that are positively destructive in this original, down-to-earth sense.
Scott Sumner I find particularly guilty of this. His sound economic reasoning has led him to believe that what the economy vitally needs right now is for banks to make bad (or at least wasteful) loans, just to get money circulating and prop up nominal GDP—a measure known to be meaningless because it’s an artifact of the money supply and has to be adjusted for interpretation.
Fed up with him saying this kind of thing, I sarcastically posted this remark:
And in his immediately following comment, he said,
Huh?
You did actually paraphrase his position, so his agreement is a sign of self consistency even when things are not presented with his preferred framing. This much at least is a positive in my book.
As for the position itself… it is idiotic. What is the phrase? “Lost Purpose”?
Yeah, I’m thinking of writing an article on this issue with the title “Lost Economy”, both a play on that Yudkowsky article, and having the meaning “lost ability to economize”.
A blogger I read made a point that I will incorporate: that people of a certain ideology were screaming bloody murder at how destructive it is to nationalize this or that part of the economy, but also believe “the economy” will “recover” in just a few years. This blogger remarked that, “um, guys, if you can nationalize sectors of the economy and only cause a few years of pain, then what the hell were we fighting for this whole time? The worst that can come from doing the opposite of what we want is four years of sub-par growth? I thought the consequences would be worse than that …”
As for Sumner’s position: I just don’t see by what standard “lots of shoddy loans to prop up fake numbers” constitutes a “good economy”.
Indeed. While I find the general arguments about market efficiency persuasive, there’s a big blind spot in the view that “The economy will always operate efficiently despite interference, unless that interference is by something we call a ‘government’”.
Sure, you’d need to be able to replace the symbol (government) with the substance of what causal mechanisms you believe are responsible for damage to the economy, and why they’re associated with the government.
Just to clarify, though, I wasn’t criticizing a particular anti-government view, just a particular combination of views. I can understand if someone says, “Nationalization isn’t that bad, the economy won’t be hurt much by it.”
Or if someone said, “Nationalization is devastating, and it will take ages for the economy to recover from one, if it ever does!”
But I see a big problem with someone who wants to believe both that nationalization is devastating, and that “the economy” will recover after one in just a few years. No, if it really is devastating, your definition of “the economy” and its “goodness” need to reflect that somehow.
News and mental focus
I think Derren Brown uses this as a mind hack a lot: http://www.youtube.com/watch?v=3Vz_YTNLn6w (notice the specific diversion into spatial memory; it’s probably been tried and tested as the best distraction from the color of the money in hand)
I feel that mental focus is VERY weak and very exploitable.
As a side note, I think there is another, less obvious, mental hack going on, on the audience. Derren claims (in the intro to this TV series) that there is no acting here, but a lot of misdirection. I believe it. I think when he shows this trick working 2 out of 3 times, it’s probably more like 2 out of 30. My guess is that he biases the sample quite cleverly; showing 3 cases is exactly the minimum you can show that gives the impression that a) the reporting is honest (see—I showed a failure!) and b) the ‘magic’ works in most cases. Also I think getting caught/embarrassed by a hot dog vendor evokes certain associations that yeah, he can be beat, which prevents you from thinking about how much he can be beat.
Here is to you Derren, Master of Dark Arts.
Note however that Derren Brown’s tricks have turned out to be staged in at least one instance. This makes me extremely skeptical towards the rest of them too.
Oh. I thought the point of the subway anecdote jnf gb unir na rkphfr gb fyvc va n cerfhccbfvgvba, va gur sbez “Gnxr vg [gur zbarl], vg’f svar”.
Yes, I missed it, largely due to lack of knowledge of NLP. I wouldn’t be surprised if the spatial thing is true as well (and possibly intended); making people picture something is supposed to make them look up, IIRC.
Statisticians Andrew Gelman and Cosma Shalizi have a new preprint out, ‘Philosophy and the practice of Bayesian statistics.’ The abstract:
A TED talk: “Laurie Santos: How monkeys mirror human irrationality ”
“Why do we make irrational decisions so predictably? Laurie Santos looks for the roots of human irrationality by watching the way our primate relatives make decisions. A clever series of experiments in “monkeynomics” shows that some of the silly choices we make, monkeys make too.”
http://www.youtube.com/watch?v=DUd8XA-5HEk
Interesting speech. I wonder whether the monkeys had a safe way to save their tokens, and whether the experiment would play out the same way if it could be done with squirrels.
She implies that the amount of complexity in finance is just there. I agree with Scott Adams that a good bit of complexity is a deliberate effort to confuse people into making bad choices.
If things are complex you may need to meet with a financial advisor—and then they can try to sell you more stuff.
The next advances in genomics may happen in China
http://www.economist.com/node/16349434?story_id=16349434
I started writing something but it came up short for an article, so I’m posting it here:
Title: On the unprovability of the omni*
Our hero is walking down the street, thinking about proofs and disproofs of the existence of a god. This is no big coincidence as our hero does this often. Suddenly, between one step and the next, the world around her fades out, and she finds herself standing on thin air, surrounded by empty space. Then she hears a voice. “I am Omega. The all-powerful, all-knowing, all-good, ever-present being. I see you have been debating my existence with true purity of heart, so I have decided to provide you with any evidence you request”. Once the shock wears off, our hero runs through the list of possible requests she could make. Healing the sick? Perhaps the reanimation of a dead person? Some time-travel? Maybe this could still be doubted. How about creation of a solar system? Or a universe? Maybe a resolution of P vs. NP? Alas, our hero realises that any evidence she could request would only be proof of the power of Omega to produce just that thing, not an all-encompassing proof.
What’s more, our hero knows that her thinking is subject to the operation of her mind and the readings of her senses, something she cannot trust in the presence of a vastly overpowering entity. The lower bound of power required of Omega to produce any experience for our hero is much lower than the power to create universes. It is the ability to control only the senses of our hero, to become a kind of hypervisor and simulate all requests. While this is great power indeed, the distance from there to omnipotence is vast. Similarly for omniscience, omnipresence, and omnibenevolence.
Our hero does not ask anything of Omega, and their meeting ends uneventfully, at least in terms of new universes being created, or problems thought unsolvable being solved. She does realise, though, that omnipotence, omniscience, omnipresence, and omnibenevolence are not properties that can be verified by a human. If this is the definition of a god that theists are working with, then it is not only undisprovable, it is also unprovable. Taking knowledge to be ‘justified true belief’, a belief in an omni* god can never be justified, putting it firmly in the territory of the unknowable. The strongest claims that can be reasonably made are that of a being that is very powerful, very knowledgeable, etc. But that is not nearly as interesting.
Now, I have posted a question along those lines in this thread before, with little response. What I would like your feedback on is whether this is a reasonable argument, whether I’ve gotten something completely wrong in my epistemology, and whether there have been similar arguments made by others. All help appreciated, cheers.
Wait… a being which, while possibly not omni-anything, is likely very powerful, offers to provide her any evidence she likes, and she considers and rejects the “healing the sick” and “resurrecting the dead” plans?
Not to mention a solution to the P=NP problem (or the Riemann Hypothesis)?
A super-powerful agent who is desperate to prove itself to her! That’s the perfect opportunity! Unless she messes up the requested ‘proof’, she can become a demi-god, just below Omega (until Omega cracks it with her).
“If you are Omnipotent please prove it by giving me a pet genie.”
“Genie, I want you to create an FAI that has my CEV.”
“Genie, please do whatever my FAI tells you to.”
That should result in an exponentially growing multiverse of universes, with each universe self-replicating on a sub-nanosecond time frame while simultaneously expanding in size and neg-entropy, all arranged for maximum Fun. Still not proof of Omnipotence, but hey, it’ll do.
That’s a good point. Any ideas on how to mend the hole?
Have Omega offer to provide the proof, but then ask for an answer to the question of whether he is actually omni*. If the answer is incorrect he will destroy the world; if correct, he will let the world continue with whatever changes were made by the “wish”. There is also the choice not to play.
You would have to make him non-omnibenevolent, though.
Can we not get around this by using randomly chosen questions? And then we have IP=PSPACE, so anything that’s in PSPACE, he can relatively quickly convince us he can solve. Obligatory Scott Aaronson link.
Thanks for that link, it was quite good. Any chance you could elaborate a bit on the IP=PSPACE identity?
No, I don’t really know complexity theory at all, so I couldn’t really tell you any more than Wikipedia could.
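To give a concrete, vastly scaled-down taste of the randomly-chosen-questions idea a couple of comments up (how random challenges let a weak verifier become confident in a powerful prover), here is a sketch in Python of Freivalds’ check for matrix products. This is not IP = PSPACE, just my own toy illustration of the same trick: the verifier cannot afford the full computation, but repeated random spot-checks drive the chance of being fooled toward zero.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized check that A x B == C without computing A x B directly.

    Each trial picks a random 0/1 vector r and tests A(Br) == Cr, which
    costs O(n^2) instead of O(n^3). A wrong C survives a single trial
    with probability at most 1/2, so `trials` rounds leave at most a
    2**-trials chance of being fooled.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # caught the prover lying
    return True  # very probably a correct product

# Usage: an honest product passes; a tampered one is almost surely caught.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]]))  # True
print(freivalds_check(A, B, [[19, 22], [43, 51]]))  # False (with overwhelming probability)
```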
What if your hero asks to be made omniscient, including the capacity to still be able to think well in the face of all that knowledge?
Throw in omnibenevolence if you like, but I think you get some contradictions if you ask omnipotence. Either that, or you and Omega coalesce.
How could you test your omniscience to be sure it’s the real thing?
Asking to modify yourself may be a useful strategy, (or maybe not, as you note) but it’s not something that’s available to philosophers trying to prove the existence of a god. As far as we know that is :)
It’s possible that looking at how you’d test something which claims to be omniscience would give some pointers to finding unknown unknowns and unknown knowns.
Or also show you if there are unknowable unknowns?
An unknowable unknown: I shot a rocket across the cosmic horizon. On the rocket was a qGrenade set to detonate on a timer. Did my Schrödinger’s rocket explode when the timer went off in my Everett branch?
I don’t see that decoherence would occur in that case.
This once again explains why “reality” is a largely meaningless concept.
Wow. I think I understand what you are alluding to, but I’m not sure I’m reverse-engineering the thoughts right. Explain for me?
Whether or not it’s meaningful, it’s certainly useful, especially by Philip K. Dick’s definition: “Reality is that which, when you stop believing in it, doesn’t go away.”
I’m pretty sure unknowability would have to be proven rather than shown.
Nothing is provable to the level you demand (well, pretty much nothing, cogito ergo sum and all that). Given that none of the omni* are well defined, the question doesn’t mean much either.
Are you saying that it’s an inference problem and after enough pieces of evidence we should just accept omnipotence (for instance) as the best hypothesis with a high degree of confidence, as we trust gravity now? How about the mind control problem?
Also, what you say about the omni* being not well defined sounds interesting. Can you elaborate?
That’s exactly what I’m saying, and you’re right to point out that mind control will always be a more probable explanation than omnipotence (as will mental illness). If I knew that something would continue to appear omnipotent, I would just treat it as omnipotent (which equates to “accepting the simulation” if the actual explanation is mind control).
Omnipotence is badly defined because it leads to questions like “Can Omega create a rock so heavy that Omega cannot lift it?” Can omnipotent beings create logical contradictions? Can they make 2+2=3? Omniscience leads to similar problems: can Omega answer the halting problem for programs that can call Omega as an oracle? Omnibenevolence is the least paradox-ridden, but the hardest to define. Whose version of good is Omega working toward?
Your logic is OK. By the way, Thomas Aquinas thought along these lines, but in a different direction. However, discussing scholasticism here doesn’t make much sense (if it can make sense at all).
Yet another exchange regarding experts and the “difficulty of explaining” excuse. It’s the kind of exchange normally found here, and with LW regulars, so given the subject matter, I thought folks here would be interested if they haven’t seen it already.
You appear to be responding to a different point than the one Robin was making in the original post.
Robin’s post centers on the “intellectually nutritious” metaphor (and has nothing to do with “difficulty of explaining”). Your reply conflates that with some argument about reverence for particular authorities, which Robin isn’t making except insofar as it is implied by his use of the word “classic”.
I don’t think I am. He says, “You need to read the classics.” I say, “No, I just need to know their key insights.”
I further say that people who think you need to read a particular classic are typically wrong, as others have assimilated its insights—and become capable of discussing the related issues—without having to read it.
How is that not responsive? Where do I make an issue of reverence for authorities?
OK, “reverence for authorities” might be a red herring here. Please disregard that and accept a fractional apology; I think my observation still stands.
Robin’s saying “the expected value of your reading (something like) a classic is higher than the expected value of equivalent time spent reading (something like) my blog”.
He isn’t saying “you need to read the classics (and nothing else will do)”, in spite of what the title says. You sound as if you’re reacting to the title only—and an idiosyncratic reading of it at that.
Your point regarding a specific article—Coase’s—may have merit. Some issues you need to consider are:
Reading a primary source often allows you to understand how it has been misunderstood; there is a (ahem) classic example in the field of software engineering, where for years the article cited as the primary inspiration for the well-known “waterfall lifecycle” was Winston Royce 1970; it turns out, when you actually read the article, that it condemns the waterfall cycle as oversimplistic and unworkable—here we have a misunderstanding with a cost measured in billions and attributable to failure to read the classics carefully.
As a corollary, modern popularizations of a classic may contain distortions due to the popularizer’s various other biases, including poor skill at explaining; just as much as they may enhance the value of the classic by providing a streamlined explanation; how are you to sort one from the other?
A distilled explanation of the insight from a classic strips it of all the anecdotes and background material which lent the insight force in the first place; that may be valuable and, depending on your purpose, even more valuable than reading the primary source, but it doesn’t convey the same understanding; your grasp on why the insight has force may be shakier than if you’d read the primary source. There’s often a trade-off between time spent acquiring an insight and depth of understanding. (Admittedly, this trade-off can be substantially modified by the time you spent exercising the insight.)
Another respondent on Robin’s blog says “Pfui, blogs have led me to classics”. Well, that point doesn’t work if all you ever read are blogs, showing precisely how I suspect folks are misunderstanding Robin’s point.
What Robin says is that there is a hierarchy of sources of knowledge, not all are worth the same, and it’s unwise to spend all your time on secondary or tertiary (etc.) sources that (often) are lesser sources of intellectual nourishment. In short, there’s a reason the classics are acknowledged as such.
It would astound me if this reason was that they were the optimal educational source. That would completely shake my entire understanding of the fairness of the universe.
Better than the classics are the later sources that cover the same material once the culture has had a chance to fully process the insights and experiment with the best way to understand them. You pick the sources that become popular and respected despite not having the prestige of being the ‘first one to get really popular in the area’. You want the best, not the ‘first famous’ and shouldn’t expect that to be the same source. After all, the author of the Classic had to do all the hard work of thinking of the ideas in the first place… we can’t expect him to also manage to perfect the expression of them and teach them in the most effective manner. Give the poor guy a break!
As an example,
Reading Dawkins may be more effective than reading Darwin, to appreciate descent with modification and differential survival as an optimization algorithm.
Reading Darwin may be more effective than reading Dawkins, to appreciate what intellectual work went into following contemporary evidence to that conclusion, in the face of a world filled with bias and confusion.
Reading Dawkins OR Darwin is—and I think that is Robin’s point—more valuable than the same time spent reading blogs expounding shaky speculations on evolution.
I’m underlining your point about Darwin—just getting the insights doesn’t give you information about the process of thinking them out.
Also, a “just the insights” version will probably leave out any caveats the originator of the insights included.
Spectacularly so in the case of the Waterfall software development process. It’s as if the “classic” in question had said “Drowning kittens” at the end of page 1, and of course the beginning of page 2 goes right on to say ”...is evil, don’t do it”. But everyone reads page one which has a lovely diagram and goes, “Oh yeah; drowning kittens. Wonderful idea, let’s make that the official government norm for feline management.”
100% agree that is Robin’s point and another 100% with Robin’s point. Hmm. Wrong place to throw 100% around. Let’s see… 99.5% and 83% respectively. Akrasia considerations and the intrinsic benefits of the social experience of engaging with a near-in-time social network account for the other 17%.
You seem to be missing the examples at the moment, but I’ll give one… it’s damn hard to learn relativity by reading Einstein’s original papers. Your average undergraduate textbook gives a much better explanation of special relativity.
On the other hand, when it comes to studying history, sometimes classics are still the best sources. For example, when it comes to the Peloponnesian War, everything written by anyone other than Thucydides is merely footnotes.
No, I think I addressed the broader point he was making, not just the title: He’s saying, don’t just rely on blog posts and blog comment exchanges—actually read the classic works. This would imply that these blog discussions suffer from lack of appreciation of certain classics that imparted Serious knowledge.
I disputed this diagnosis of the problem. The phenomenon Robin_Hanson describes is more due to experts not understanding their own topics, and not communicating the fruits of these classics. The proper response to this, I contend, is not to wade through classics, hoping to be able to sort the good from the bad. Rather, it’s for those who are aware of the classics’ insights to understand and present them where applicable.
In other words, not to do what Gene Callahan does in the (corrected) link.
This is why I challenged Robin_Hanson to say what he’s doing about it: if people really are stumbling along, unaware of some classic writer’s insight on the matter, a work that just completely enlightens and clarifies the debate, what is he doing to make sure these insights are applied to the relevant issue? That is how you establish the worth of classics, by repeated ability to obviate debates that people get into when they aren’t familiar with them.
It’s true that in reading works that draw from the classics, you have to separate the good from the bad, but you have to do that anyway—and classics will typically have a lot of bad with the good.
If classics are higher up on the hierarchy, it is because specific classics are known for being completely good, or because their bad parts are known and articulated to the learner in advance. But that requires advising of specific classics, not telling someone to read classics in general.
Keep in mind, you were my example of someone failing to learn the best arguments against gay rights, despite a sincere effort to find them. The experts either didn’t understand the arguments, or weren’t able to apply them in discussions. How many (additional!) classics would you need to have read to be enlightened about this?
Perhaps we’re actually on the same page there. I don’t think Robin was saying “read classics in general”, so much as “go and spend some quality time with what you’d think is a truly awesome classic”. If he had been saying “go and spend time reading classics just because they had the ‘classic’ label stamped on them” I’d also disagree with him.
One issue is that judgments of “intellectually nutritious” vary from person to person in extremely idiosyncratic ways. For instance I’m currently reading Wilson and Sperber’s Relevance which comes heartily recommended by Cosma Shalizi but is more or less boring me to death. You never know in advance which book is going to shake your world-view to its foundations.
Maybe we need to make a distinction here between one-topic classics and broader-ranging, multi-topic classics. What I would need (and love) to read is the “Gödel, Escher, Bach” of moral theories. :)
But while I derived nourishment from Rawls’s A Theory of Justice, I wouldn’t necessarily seek out “classics” of communitarianism (or other traditions making a strong case against e.g. gay rights), because I don’t feel that dire a need to expose my ideas on moral theories to contradiction. I’d be keen to get that contradiction in smaller and more pre-digested doses.
Usually when I have identified a topic as really, really important I find it worthwhile to round out my understanding of it by going back to primary or early sources, if only because every later commentator is implicitly referring back to them, even if “between the lines”.
I also seek out the “classic” in a field when my own ideas stand in stark opposition to those attributed to that field. For instance I read F.W. Taylor’s original “Scientific Management” book because I spent quite a bit of energy criticizing “Taylorism”, and to criticize something effectively it’s judicious to do everything you can not to misrepresent it.
Well, I’m not sure where we agree or don’t now. We certainly agree here:
Yes, yes you should learn about these contradictions of your worldview from summaries of the insights that go against it.
But you also say:
But what’s the difference? If I’m already so lacking as to need to read (more) classics, how would I even know which classics are worth it? He gives no advice in this respect, and if he did, I wouldn’t be so critical. But then it would be an issue about whether people should read this or that book, not about “classics” as such.
Did you regard gay rights as really, really important?
And at times we also discover that the eponymous mascot’s actual ideas are quite a lot different to those that we are rejecting. Then at least we know to always direct the criticisms at “Taylorism” and never “Taylor” (depending on whether the mascot in question shares the insanity).
You’d need to spell out more precisely what he’s doing that you think deserves criticism.
Interestingly I seem to have read quite a few of the “classics” that come up in that discussion on “what science does”. Polanyi’s Personal Knowledge, Feyerabend’s Against Method, Lakatos’ Proofs and Refutations, Kuhn’s Structure of Scientific Revolutions. Not Popper however—I’ve read The Open Society but not his other works.
Given your stance on “explaining”, those strike me as good examples of the kind of stuff you might want to have read, because that would leave you in a better position to criticize what you’re criticizing: less prone to misrepresenting it. (As for me, I’m now investing a lot of time and energy into this “Bayesian” stuff, which definitely is sort of a counterpoint to my prior leanings.)
Exactly what I referred to in the previous paragraph.
Callahan is, supposedly, aware of these classics’ insights. Did he present them where applicable? Show evidence he understands them? No. Every time he drops the name of a great author or a classic, he fails to put the argument in his own words, sketch it out, or show its applicability to the arguments under discussion.
For example, he drops the remark that “Polanyi showed that crystallography is an a priori science [in the sense that Austrian economics is]” as if it were conclusively settled. Then, when I explain why this can’t possibly be the case, Callahan is unable to provide any further elaboration of why that is (and I couldn’t find a reference to it anywhere).
The problem, I contend, is therefore on his end. To the extent that Callahan’s list of classics is relevant, and that he is a majestic bearer of this deep, hard-won knowledge, he is unable to actually show how the classics are relevant, and what amazing arguments are presented in them that obviate our discussion. The duty falls on him to make them relevant, not for everyone else to just go out and read everything he has, just because he thinks, in all his gullible wisdom, that it will totally convince us.
Note: I wasn’t alone in noticing Callahan’s refusal to engage. Another poster remarked:
Also, people like to make a big deal about how clever Quine’s holism argument is, but if you’re at all familiar with Bayesianism, you roll your eyes at it. Yes, theories can’t be tested in isolation, but Bayesian inference can tell you which beliefs are most strongly weakened by which evidence, showing that you have a basis for saying which theory was, in effect, tested by the observations.
Things like these make me skeptical of those who claim that these philosophers have something worthwhile to say to me about science. I would rather focus on reading the epistemology of those who are actually making real, unfakeable, un-groupthinkable progress, like Sebastian Thrun and Judea Pearl.
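To make the “which belief was actually tested” point concrete, here is a toy Bayesian sketch (my own illustration with made-up numbers, not anything from Quine or the linked discussion): a failed prediction depends jointly on a well-supported theory and a shakier auxiliary assumption, and the update lands almost entirely on the shakier one.

```python
# Toy example: a negative result tests theory T and auxiliary assumption A jointly,
# but Bayes still says where the blame falls. All numbers are made up.

def posterior_given_negative(p_T=0.99, p_A=0.70, p_neg_if_both=0.05):
    # Joint priors over the four truth-value combinations, assuming independence.
    prior = {
        (True, True):   p_T * p_A,
        (True, False):  p_T * (1 - p_A),
        (False, True):  (1 - p_T) * p_A,
        (False, False): (1 - p_T) * (1 - p_A),
    }
    # The prediction only succeeds when both hold; otherwise a negative result is certain.
    likelihood = {k: (p_neg_if_both if k == (True, True) else 1.0) for k in prior}
    joint = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(joint.values())
    post_T = sum(v for (t, _), v in joint.items() if t) / z
    post_A = sum(v for (_, a), v in joint.items() if a) / z
    return post_T, post_A

post_T, post_A = posterior_given_negative()
print(f"P(T): 0.99 -> {post_T:.3f}")  # ~0.971: the established theory barely moves
print(f"P(A): 0.70 -> {post_A:.3f}")  # ~0.122: the auxiliary assumption takes the hit
```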
I think Lakatos’s Proofs and Refutations is a fun book, but the chief thing I learned from it is that mathematical proofs aren’t absolutely true, even when there is no error in reasoning. It’s about mathematics, not science. It’s also quite short, particularly if you skip the second, much more mathematically involved dialogue.
I learned the opposite: that mathematical proofs can be and should be absolutely true. When they fall short, it is a sign that some confusion still remains in the concepts.
I see no contradiction between these interpretations. :P
If they’re never absolutely true (your interpretation), how can they ever be absolutely true (my interpretation)?
I said mathematical proofs aren’t absolute because mathematical proofs and refutations are subject to philosophical, linguistic debate—argument about whether the proof fits the concept being played with, argument which can result in (for example) proof-constructed definitions. During this process, one might say that the original proof or refutation is correct, but no longer appropriate, or that the original proof is incorrect. Neither statement implies different behavior.
You’re basically doing the same when you name-drop “a Bayesian revival in the sciences”. I’ve been here for months trying to figure out what the hell people mean by “Bayesian” and frankly feel little the wiser. It’s interesting to me, so I keep digging, but clearly explained? Give me a break. :)
I found Polanyi somewhat obscure (all that I could conclude from Personal Knowledge was that I was totally devoid of spiritual knowledge), so I won’t defend him. But one point that keeps coming up is that if you look closely, anything that people have so far come up with that purports to be a “methodological rule of science”, can be falsified by looking at one scientist or another, doing something that their peers are happy to call perfectly good science, yet violates one part or another of the supposed “methodology”.
As an example, being impartial certainly isn’t required to do good science; you can start out having a hunch and being damn sure your hunch is correct, and the energy to devise clever ways to turn your hunch into a workable theory lets you succeed where others don’t even acknowledge there is a problem to be solved. Semmelweis seems to be a good example of an opinionated scientist. Or maybe Seth Roberts.
What’s your take on string theorists? ;)
That’s not remotely the same thing—I wasn’t bringing that up as some kind of substantiation for any argument, while Callahan was mentioning the thing about “a priori crystallography” (???) as an argument.
So? I was arguing about what deserves to be called science, not what happens to be called science. And yes, people practice “ideal science” imperfectly, but that’s no evidence against the validity of the ideal, any more than it’s a criticism of circles that no one ever uses a perfect one. Furthermore, every time someone points to one of these counterexamples, it happens to be at best a strawman view. Like what you do here:
The claim isn’t that you have to be impartial, but that you must adhere to a method that will filter out your partiality. That is, there has to be something that can distinguish your method from groupthink, from decreeing something true merely because you have a gentleman’s agreement not to contradict it.
This is something interesting: Perceptions of distance seem to change depending on whether an object is desirable or undesirable.
A recent comment about Descartes inspired this thought: the simplest possible utility function for an agent is one that only values survival of mind, as in “I think therefore I am”. This function also seems to be immune to the wireheading problem because it’s optimizing something directly perceivable by the agent, rather than some proxy indicator.
But when I started thinking about an AI with this utility function, I became very confused. How exactly do you express this concept of “me” in the code of a utility-maximizing agent? The problem sounds easy enough: it doesn’t refer to any mystical human qualities like “consciousness”, it’s purely a question about programming tricks, but still it looks quite impossible to solve. Any thoughts?
You want the program to keep running in the context of the world. To specify what that means, you need to build on top of an ontology that refers to the world. But figuring out such an ontology is a very difficult problem, and you can’t even in principle refer to the whole world as it really is: you’ll always have uncertainty left, even in a general ontological model.
The program will have to know what tradeoffs to make, for example whether it’s important to survive in most possible worlds with fair probability, or in at least one possible world with high probability. These would lead to very different behavior, and the possibility of such tradeoffs exemplifies how much data such a preference would require. If additionally you want to keep most of the world as it would be if the AI was never created, that’s another complex counterfactual for you to bake into its preference.
It’s a very difficult problem, probably more difficult than FAI, since for FAI we at least have some hope of cheating and copying formal preference from an existing blueprint, and here you have to build that from scratch, translating your requirements from human-speak into a formal specification.
An agent’s “me” is its model of itself. This is already a fairly complicated thing for an agent to have, and it need not have one.
Why do you say that an agent can “directly perceive” its own mind? Or anything else? A perception is just a signal somewhere inside the agent: a voltage, a train of neural firings, or whatever. It can never be identical to the thing that caused it, the thing that it is a perception of. People can very easily have mistaken ideas of who they are.
The program must have something to preserve. My first thought is preservation of declarative memory: ensure that the future contains a chain of systems implementing the same goal, with overlapping declarative memory.
I haven’t done an analysis; this is just a first thought.
It refers to mystical human qualities like “me” and “think”. Basically I put it in the exact same category as ‘consciousness’.
No it doesn’t. I’m not interested in replicating the inner experience of humans. I’m interested in something that can be easily noticed and tested from the outside: a program that chooses the actions that allow the program to keep running. It just looks like a trickier version of the quine problem; do you think that one’s impossible as well?
If you want this to work in the real world, not a just much simpler computational environment, then for starters: what counts as a “program” “running”? And what distinguishes “the” program from other possible programs? These seem likely to be in the same category as (not to mention subproblems of) consciousness, whatever that category is.
Right now I’d be content with an answer in some simple computational environment. Let’s solve the easy problem before attempting the hard one.
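For reference, the bare self-reference trick alluded to above, the classic quine, is the easy part; here is a minimal Python one. Everything hard in this thread is what comes after this step: giving such self-reference a world model and a utility function.

```python
# A minimal quine: a program whose output is exactly its own source code.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```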
My observation is just that the process you’re going through here, taking the “I think therefore I am” and making it into a descriptive and testable system, is similar to the process others may go through to find the simplest way to have a ‘conscious’ system. In fact many people would resolve ‘conscious’ into a very similar kind of system!
I do not think either are impossible to do once you make, shall we say, appropriate executive decisions regarding resolving the ambiguity in “me” or “conscious” into something useful. In fact, I think both are useful problems to look at.
It’s not hard to design a program with a model of the world that includes itself (though actually coding it requires more effort). The first step is to forget about self-modeling, and just ask, how can I model a world with programs? Then later on you put that model in a program, and then you add a few variables or data structures which represent properties of that program itself.
None of this solves problems about consciousness, objective referential meaning of data structures, and so on. But it’s not hard to design a program which will make choices according to a utility function which refers in turn to the program itself.
Well, I don’t want to solve the problem of consciousness right now. You seem to be thinking along correct lines, but I’d appreciate it if you gave a more fleshed out example—not necessarily working code, but an unambiguous spec would be nice.
Getting a program to represent aspects of itself is a well-studied topic. As for representing its relationship to a larger environment, two simple examples:
1) It would be easy to write a program whose “goal” is to always be the biggest memory hog. All it has to do is constantly run a background calculation of adjustable computational intensity, periodically consult its place in the rankings, and if it’s not number one, increase its demand on CPU resources.
2) Any nonplayer character in a game which fights to preserve itself is also engaged in a limited form of self-preservation. And the computational mechanisms for this example should be directly transposable to a physical situation, like robots in a gladiator arena.
All these examples work through indirect self-reference. The program or robot doesn’t know that it is representing itself. This is why I said that self-modeling is not the challenge. If you want your program to engage in sophisticated feats of self-analysis and self-preservation—e.g. figuring out ways to prevent its mainframe from being switched off, asking itself whether a particular port to another platform would still preserve its identity, and so on—the hard part is not the self part. The hard part is to create a program that can reason about such topics at all, whether or not they apply to itself. If you can create an AI which could solve such problems (keeping the power on, protecting core identity) for another AI, you are more than 99% of the way to having an AI that can solve those problems for itself.
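Since an unambiguous spec was requested upthread, here is a hedged sketch of example (1). The “self” is nothing deeper than the process’s own PID, and the “utility check” is whether that PID currently tops the CPU rankings (the parent comment says memory but then describes CPU demand, so CPU is used here). It assumes the third-party psutil library; treat it as an illustrative toy, not a canonical implementation.

```python
# Sketch: a process whose only "goal" is to top the CPU rankings.
# The self-model is just os.getpid(); everything else is ordinary code.
import os
import time
import psutil  # third-party; pip install psutil

def my_cpu_rank():
    """Return this process's rank (1 = biggest CPU user) among all processes.

    Note: psutil's cpu_percent(interval=None) compares against the previous
    call, so the very first poll is a warm-up with meaningless values.
    """
    me = os.getpid()
    usage = []
    for p in psutil.process_iter():
        try:
            usage.append((p.cpu_percent(interval=None), p.pid))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    usage.sort(reverse=True)
    return next(i for i, (_, pid) in enumerate(usage, start=1) if pid == me)

def burn_cycles(units):
    """Background calculation of adjustable intensity."""
    x = 0
    for _ in range(units * 100_000):
        x += 1
    return x

intensity = 1
while True:                      # runs indefinitely; the "goal" never expires
    burn_cycles(intensity)
    if my_cpu_rank() > 1:        # not number one: "utility" unsatisfied
        intensity += 1           # demand more CPU
    time.sleep(0.1)              # let the rankings refresh
```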
This concept is extremely complex (for example, which “outside” are you talking about?).
You seem to be reading more than I intended into my original question. If the program is running in a simulated world, we’re on the outside.
Yes, using a formal world simplifies this a lot.
Statistical Analysis Overflow is trying to start up. If you’d be a regular contributor, go over and commit; if enough people commit, it’ll go into beta.
It’s a “Proposed Q&A site for statistics, data analysis, data mining and data visualization”, like Stack Overflow or Math Overflow.
A question for LW regulars: is there a rule of thumb for how often it is acceptable to make top-level posts?
There was some discussion of that here. Suggestions include once or twice per week, let karma be your guide, and don’t worry about posting too much.
Fabulous, thanks.
Not sure what the others will say, but for me it depends on the quality. I’d be overjoyed to see a new post by Yvain, Nesov or Wei Dai every morning. (Yep, I consider these three posts to be the gold standard for LW. Not to say that there weren’t others like them, of course.) Your own first post was exceptionally good for a first post, but the topic is kinda controversial, so I’d be extra cautious and wait another day or two to avoid being seen as “spamming” or “hijacking the agenda”.
Why thank you (I’m blushing like a schoolgirl). I don’t imagine I’ll have anything ready for at least another day or two, but it seemed like a good question to ask just in case.
My next post will hopefully be a little less controversial and a little more practical. Managing jealousy isn’t simple by any means, but it’s a little less tied up with people’s value systems.
I don’t know if I’m exactly a regular, but I’d naively think that if one makes posts that are well-written, relevant, and not redundant, the total number won’t be an issue.
Stating P=NP Without Turing Machines
http://rjlipton.wordpress.com/2010/06/26/stating-pnp-without-turing-machines/
Mindfulness meditation improves cognition: Evidence of brief mental training
If a being presented itself to you and claimed to be omni(potent/scient/present/benevolent), what evidence would you require to accept its claim?
(EDIT: On a second reading, this sounds like a typical theist opening a conversation. I assure you, this is not the case. I am genuinely interested in the range of possible answers to this question.)
There’s an interesting article in the New York Times on warfare among chimpanzees. One problem, though, is that they attempt to explain the level of coordination necessary in warfare with group selection. This, of course, will not do. I’m under-read in evolutionary biology, but it seems like kin selection accounts for this phenomenon just fine. You are more likely to be related to members of your group than an opposing group, so taking territory from a rival group doesn’t just increase your fitness directly, but indirectly through your shared genes among group members.
What do you think, LessWrong?
edit: Some commentary on the article.
Group selection has been vilified, but irrationally so. Group selection has been observed many times in human groups, so dismissing it is silly.
From the LWwiki:
So, can you point to one of these observations? (and if so, update the wiki!)
I can point to observations of groups being eliminated, and in some of these cases, it seems obvious that elimination was attributable to a behavior, a biological phenotype, or a social phenotype. For instance, there was a group of related tribes in South America, described IIRC in “Life among the Yanomamo”, who were very aggressive and kept killing and raping members of neighboring tribes. Eventually, the neighboring tribes got together and killed every last man of the aggressive tribe that they could find. The book “Black Robe” fictionalizes a real-life account of another group selection incident, in which one North American tribe adopted Christianity, and (the book implies) as a result became less violent and were wiped out by neighboring non-Christian tribes. The villages of the Christianized natives of Papua New Guinea are at this moment being razed by the (Muslim) Indonesian army (not that you’ll hear anything about it in the news), which you could relate to either the religious or the technological difference between the groups.
I don’t know what counts as an “adaptation”. When Spanish genes spread rapidly among the natives of central America due to the superior technology of Spain, was that an adaptation?
What I do know is that social norms lead to differential reproductive success. There is obvious group selection going on in the world right now that favors cultures that place a high value on a high birth rate, or that prohibit birth control.
But group selection is a more specific idea, the idea that a trait can become widespread due to its positive effects on group success, regardless of the effects on individual fitness. An example of group selection would be a trait such that: (1) groups in which it is widespread win, (2) lacking the trait doesn’t lower the reproductive success of an individual member of such a group. While your examples show (1), it is not clear that they satisfy (2).
Then I must admit confusion here: when human groups have norms that punish “defectors”, genes that predispose someone to play a “tit for tat” strategy (or, to some extent, altruism) rather than defection are rewarded and spread through the gene pool faster. Is that not a case where group-favoring genes become widespread? To the extent it diverges from the definition you gave, that’s because of pretty arbitrary caveats.
I thought that counted as group selection, but was regarded as a “special case” because it requires enforcement of norms to an extent that has only been observed in humans.
Edit: And what other species has anything like China’s one-child policy?
The definition of group selection, from Wikipedia:
The key is that the benefit to the group is at least part of what is driving the adaptation. Now an adaptation (like tit-for-tat) can certainly benefit the group, but that doesn’t mean there is group selection going on—the benefit to the group has to be part of the cause of the trait’s spread, apart from the benefit to the individual.
Tit-for-tat is individually fitness-maximizing in many situations. In fact, it’s an evolutionarily stable strategy. So in a population of tit-for-tat players, it’s fitness-maximizing to play tit-for-tat. So tit-for-tat is not an example of group selection, or at least its existence doesn’t imply group selection has occurred.
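A quick numerical illustration of that point, using a toy iterated prisoner’s dilemma with the standard textbook payoffs (my own example, not from the parent comment): in a population of tit-for-tat players, a would-be defector does strictly worse than another tit-for-tat player, so no appeal to group benefit is needed to explain the strategy holding its ground.

```python
# Toy iterated prisoner's dilemma: can defection invade a tit-for-tat population?
# Standard payoffs: temptation 5, mutual cooperation 3, mutual defection 1, sucker 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(ha, hb), strategy_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

tft_payoff, _ = play(tit_for_tat, tit_for_tat)
defector_payoff, _ = play(always_defect, tit_for_tat)
print("Tit-for-tat among tit-for-tat players:", tft_payoff)       # 300
print("Defector among tit-for-tat players:   ", defector_payoff)  # 104
```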
That’s a decision of a small group of people imposed on a much larger group of people. If each person was individually choosing to have only one child, then it might be group selection. With that being said, the changing birth patterns of developed countries are an interesting phenomenon to consider. It’s probably just a case of external conditions changing faster than evolution changes us, though.
Gravitomagnetism—what’s up with that?
It’s a formulation of how gravity works with equations that have the same form as Maxwell’s equations. And frankly, it’s pretty neat: writing the laws for gravity this way gets you mechanics while approximately accounting for general relativity (how approximate, and what it leaves out, I’m not sure of).
When I first found out about this, it blew my mind to know that gravity acts just like electromagnetism, but for different properties. We all know about the parallel between Coulomb’s law and Newton’s law of gravitation, but the gravitoelectromagnetism (GEM) equations show that it goes a lot deeper.
Besides being a good way to ease into an intuitive understanding of the Einstein field equations, to me, it’s basically saying that gravity and EM are both obeying some more general law. Anyone know if work has been done in unifying gravity and EM this way? All I hear about is that it’s easy to unify strong, weak, and EM forces, but gravity is the stumbling block, so this should be something they’d want to explore more.
Yet when you go investigate “gravitational induction” to find out how the gravitic parallel to magnetic fields works, you find that this gravitomagnetic field is called the torsion field, and its existence is (at least approximately) implied by general relativity, but then the Wikipedia page says that the torsion field is a pseudoscientific concept. Hm...
So, anyone have an understanding of the GEM analogy and can make sense of this? Does it suggest a way to unify gravity and EM? Or how to create a coil of mass flow that can “gravitize” a region (as a coil of current magnetizes a metal bar)?
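For reference, the GEM equations that are usually quoted look like this in one common convention (weak fields, slow motion); the exact signs and the stray factor of 4, a trace of gravity being a spin-2 field, get shuffled around between sources, so take the constants with a grain of salt:

```latex
\nabla \cdot \mathbf{E}_g = -4\pi G \rho_g
\qquad
\nabla \cdot \mathbf{B}_g = 0

\nabla \times \mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t}
\qquad
\nabla \times \mathbf{B}_g = -\frac{4\pi G}{c^2}\,\mathbf{J}_g
  + \frac{1}{c^2}\frac{\partial \mathbf{E}_g}{\partial t}

% Force on a test mass m (one common placement of the factor of 4):
\mathbf{F} = m\left(\mathbf{E}_g + \mathbf{v} \times 4\,\mathbf{B}_g\right)
```

The minus signs on the source terms are where “like charges attract” shows up relative to Maxwell’s equations.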
No, what’s happening is that under certain approximations the two are described by similar math. The trick is to know when the approximations break down and what the math actually translates to physically.
No.
Keep in mind that for EM there are 2 charges while gravity has only 1. Also, like electric charges repel while like gravitic charges attract. This messes with your expectations about the sign of an interaction when you go from one to the other. That means your intuitive understanding of EM doesn’t map well to understanding gravity.
True, but what got me the most interested is the gravitic analog of magnetic fields. It shows that masses can produce something analogous to magnetism by their rotation. Rotate one way, you drag the object closer; rotate the other way, you push it away. This allows both attraction and repulsion in the equations for gravity, and suggests something similar is going on that generates magnetism.
Your link to “torsion field” talks about a completely different concept than the one in GEM. That concept is indeed a notorious example of pseudoscience here in Russia.
I’d mostly like to echo what mindviews said—similar math is not unification—and point out that there was an actual attempt at unification in Kaluza-Klein theory. But I don’t actually know anything about that, I should note...
I’m intrigued by the notion and would like to hear more from someone who can tell me whether I can take this seriously. That ‘approximately accounting for’ part scares me. Is that just word choice that makes it sound scary? Or perhaps an approximation in the way that Newtonian physics is an approximation? Or maybe it is only an approximation inasmuch as it suffers the same problem all our theories do of being unable to unify all of our physics at once… I’d need someone several levels ahead of me to figure that out.
It’s definitely a better approximation than Newtonian physics. This paper might help, as it derives the GEM equations from GR and specifically states what simplifying assumptions it uses, which look to be basically “for greater-than-subatomic distances”. And that’s exactly where you care about gravity anyway. (At subatomic distances, the other three forces dominate.)
(At least, they do when the other forces are configured to counter each other.)
Be careful, you are near the fringe-science domain.
Is there an on-line ‘rationality test’ anywhere, and if not, would it be worth making one? (testing for various types of biases, etc.) Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of their blind spots.
Chimps copy high status individuals in their groups
Schroedinger’s Cat is dead. Maybe it’s time to update the plausibility of the classic many-worlds interpretation in spite of “Einselection, Envariance, Quantum Darwinism”.
I am not sufficiently competent to analyze the work of W. H. Zurek, but I think that work can be a great source of insights.
Edit: Abstract. Zurek derived Born’s rule.
The “derivation” is on page 12.
The repeated problem for many worlds is that if the quantum state is 1⁄2 |dead cat> + sqrt(3)/2 |live cat>, then (squaring the coefficients) the probability of dead cat is 1⁄4, the probability of live cat is 3⁄4, and so there should be three times as many live cats compared to dead cats (for such a wavefunction); but the decomposition into wavefunction components just produces one dead-cat world and one live-cat world, which naively suggests equal probabilities. The problem is, how do you interpret a superposition like that, in terms of coexisting, equally real worlds, so as to give the right probabilities.
It looks like part of what Zurek does is to pick a basis (Schmidt decomposition) where the components all have the same amplitude—which means they all have the same probability, so the naive branch-counting method works! A potential problem with this way of proceeding is that, expressed in the position basis, the branches end up being complicated superpositions of spatial configurations. (The space of quantum states, the Hilbert space, is a large abstract vector space with a coordinate basis formally labeled by spatial configurations, so the basis vectors of a different basis will be sums of those position-basis vectors.) Explaining complicated superpositions which don’t look like reality by positing the existence of many worlds, each of which is itself a complicated superposition that doesn’t look like reality, is not very promising. It’s sort of okay to do this for microscopic entities because we don’t have apriori knowledge about what their reality is like, and we might suppose that the abstract Hilbert-space vector is the actual reality; but somewhere between microscopic and macroscopic, you have to produce an actual live cat, and not just a live cat summed with an epsilon-amplitude dead cat. I have no idea how Zurek deals with this.
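To spell out the arithmetic behind the equal-amplitude move (a toy illustration of the idea, not Zurek’s actual derivation): if the uneven superposition can be fine-grained into orthogonal components of equal amplitude, say by entangling the cat with extra environmental degrees of freedom, then naive branch counting recovers the Born weights.

```latex
% The uneven cat state and its Born-rule probabilities:
|\psi\rangle = \tfrac{1}{2}\,|\text{dead}\rangle + \tfrac{\sqrt{3}}{2}\,|\text{alive}\rangle,
\qquad
p_{\text{dead}} = \left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4},
\quad
p_{\text{alive}} = \left(\tfrac{\sqrt{3}}{2}\right)^{2} = \tfrac{3}{4}.

% Fine-grain the "alive" branch into three orthonormal, equal-amplitude pieces,
% with |\text{alive}\rangle = \tfrac{1}{\sqrt{3}}\left(|\text{alive}_1\rangle
%   + |\text{alive}_2\rangle + |\text{alive}_3\rangle\right):
|\psi\rangle = \tfrac{1}{2}\Big(|\text{dead}\rangle + |\text{alive}_1\rangle
  + |\text{alive}_2\rangle + |\text{alive}_3\rangle\Big).

% Counting equal-weight branches now gives 1 dead branch out of 4, i.e. 1/4,
% matching the Born rule once all amplitudes are equal.
```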
Actually, Zurek has a lot of background assumptions which make his reasoning obscure to me and I really don’t expect it to make sense in the end, though it’s impossible to be sure until you have decoded his outlook. His philosophy is a weird mixture of Bohr’s antirealism and Everett’s multirealism, and in other papers he says things like
(thanks to DZS for the quote). And of course it’s nonsense to say that something doesn’t exist until there are multiple copies of it (how many is the magic number? how can you make an existing copy of a nonexistent original?). Zurek is using the words “objective existence” in some twisted way. I’m sure the reason is that he doesn’t have the answer to QM, but he wants to believe he does; that is how smart people end up writing nonsense. But I would have to understand his system to offer a more precise diagnosis.
There’s a skepticism stack overflow site proposed. If enough people follow it, it will go into beta. So if that’s your thing, go here.
Max More writes about biases that treat natural chemicals as safer than man-made chemicals, natural hazards as safer than man-made hazards, and the status quo as preferable to possible futures, in The proactionary principle.