Open Thread: May 2010, Part 2
The Open Thread from the beginning of the month has more than 500 comments – new Open Thread comments may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
I have an idea I’d like to discuss that might be good enough for my first top-level post once it’s developed a bit further, but I’d first like to ask whether anyone knows of previous posts in which something similar was discussed. So I’ll post a rough outline here as a request for comments.
It’s about a potential source of severe and hard-to-detect biases about all sorts of topics where the following conditions apply:
1. It’s a matter of practical interest to most people, where it’s basically impossible not to have an opinion. People hold strong opinions, and you can’t avoid forming one yourself.
2. The available hard scientific evidence doesn’t say much about the subject, so one must instead make do with sparse, incomplete, disorganized, and non-obvious pieces of rational evidence. This of course means that even small and subtle biases can wreak havoc.
3. Factual and normative issues are heavily entangled in this topic. By this I mean that people care deeply about the normative issues involved, and view the related factual issues through the heavily biasing lens of whether they lead to consequentialist arguments for or against their favored normative beliefs. (Of course, lots of folks won’t have their logic straight, so it’s enough that a particular factual belief is perceived to correlate with a popular or unpopular normative belief to be a subject of widespread bias in one or the other direction.)
4. Finally, the prevailing opinions on the subject have changed heavily through history, both factually and normatively, and people view the normative beliefs prevailing today as enlightened progress over terrible evils of the past.
These conditions of course apply to lots of stuff related to politics, social issues, etc. Now, the exact bias mechanism I have in mind is as follows.
As per the assumptions (3) and (4), people are aware (more or less) that the opinions on the subject in question were very different in the past, both factually and normatively. Since they support the present norms, they’ll of course believe that the past norms were evil and good riddance to them. They’ll chalk that one up for “progress”—in their minds, the same vaguely defined historical process that brought us science and prosperity in place of superstition and squalor, improvements that are impossible to deny, has also brought us good and enlightened normative beliefs on this issue instead of the former unfair, harmful, or just plain disturbing norms. However, since the area in question, as we’ve assumed under (2), is not amenable to a hard-scientific straightening out of facts from bullshit, it’s not at all clear that the presently prevailing factual beliefs are not severely biased. In fact, regardless of what normative beliefs one has about it, there is no rational reason at all to believe that the factual beliefs about the topic did not in fact become more remote from reality compared to some point in the past.
And now we get to the troublesome part where the biases get their ironclad armor: arguing that we’ve actually been increasingly deluding ourselves factually about some such topic ever since some point in the past, no matter how good the argument and evidence presented, will as per (3) and (4) automatically be perceived as an attack on the cherished contemporary normative beliefs by a reactionary moral monster. This will be true in the sense that updating the modern false factual beliefs will undermine some widely accepted consequentialist arguments for the modern normative beliefs—but regardless, even if one is still committed to these normative beliefs, they should be defended using logic and truth, not bias and falsity. Moreover, since both the normative and factual historical changes in prevailing beliefs have been chalked up to “progress,” the argument will be seen as an attack on progress as such, including its parts that have brought indisputable enrichment and true insight, and is thus seen as sacrilege against all the associated high-status ideas, institutions, and people.
To put it as briefly as possible, the bias is against valid arguments presenting evidence that certain historical changes in factual beliefs have been away from reality and towards greater delusions and biases. It rests on:
1. a biased moralistic reaction to what is perceived as an attack on the modern cherished normative beliefs, and
2. a bias in favor of ideas (and the associated institutions and individuals, both contemporary and historical) that enjoy the high status awarded by being a contributor to “progress.”
What should be emphasized is that this results in factual beliefs being wrong and biased, and the normative beliefs, whatever one’s opinion about their ultimate validity, owing lots of their support to factually flawed consequentialist arguments.
Does this make any sense? It’s just a quick dump of some three-quarters-baked ideas, but I’d like to see if it can be refined and expanded into an article.
It seems a common bias to me and worth exploring.
Have you thought about a tip-of-the-hat to the opposite effect? Some people view the past as some sort of golden age where things were pure and good, etc. It makes for a similar but not exactly mirror-image source of bias. I think a belief that things are generally progressing for the better is a little more common than the belief that the world is generally going to hell in a handbasket, but not that much more common.
This reminds me of a related bias—people generally don’t have any idea how much of the stuff in their heads was made up on very little evidence, and I will bring up a (hopefully) just moderately warm button issue to discuss it.
What is science fiction? If you’re reading this, you probably believe you can recognize science fiction, give a definition, and adjudicate edge cases.
I’ve read a moderate number of discussions on the subject, and eventually came to the conclusion that people develop very strong intuitions very quickly about human cultural inventions which are actually very blurry around the edges and may be incoherent in the middle. (Why is psi science fiction while magic is fantasy?)
And people generally don’t notice that their concepts aren’t universally held unless they argue about them with other people, and even then, the typical reaction is to believe that one is right and the other people are wrong.
As for the future and the past, it’s easy enough to find historians to tell you, in detail, that your generalizations about the past leave a tremendous amount out. It should be easier to see that futures are estimates at best, but it can be hard to notice even that.
As to whether I could give a definition of science fiction, Similarity Clusters and similar posts have convinced me that the kind of definition I’d normally make would not capture what I meant by the term.
I’ve noticed a similar thing happen with people trying to define ‘literary fiction.’ Makes me wonder what other domains might have this bias.
My assumption is that it’s all of them.
Reading efforts to define science fiction is why I’ve never looked at efforts at defining who’s a Jew. I have at least a sketchy knowledge of legal definitions for Reform and Orthodox, but that doesn’t cover the emotional territory.
What’s a poem? What’s a real American?
If you can find an area of human creation where there aren’t impassioned arguments about what a real whatever is, please let me know.
What’s a paperclip?
It’s an inwardly-thrice-bent metal wire that can non-destructively fasten paper together at an edge.
So those don’t count?
Correct.
Do you value those hunks of plastic more than other hunks of plastic?
Do you value inwardly-thrice-bent plastic wire that can non-destructively fasten paper together at an edge more than other hunks of plastic?
No.
No.
Why?
Because they’re not inwardly-thrice-bent metal wires that can non-destructively fasten paper together at an edge?
Is this classification algorithm really that difficult to learn?
I meant why do you not value plastic clips… oh, I get it, you value what you value, just like we do. But do you have any sort of rationalization or argument whereby it makes intuitive sense to you to value metal clips and not plastic ones?
Think for a minute about what it would be like for the WHOLE UNIVERSE to be plastic paperclips, okay? Wouldn’t you just be trying to send them into a star or something? What good are plastic paperclips? Plastic.
*Shudders*
Clippy, that’s how we humans feel about a whole universe of metal paperclips. Imagine if there was a plastic-Clippy who wanted to destroy all metals and turn the universe into plastic paperclips. Wouldn’t you be scared? That’s how we feel about you.
That still seems just a bit paranoid. Why would I wipe you out when you could be put to use making paperclips?
Imagine being put to use making plastic paperclips.
I don’t think those scenarios have the same badness for the referent. I know for a fact that some humans voluntarily make metal paperclips, or contribute to the causal chain necessary for producing them (designers, managers, metal miners, etc.), or desire that someone else provide paperclips for them. Do you have reason to believe these various, varied humans are atypical in some way?
We make paperclips instrumentally, because they are useful to us, but we would stop making them or destroy them if doing so would help us. Imagine an entity that found metal clips useful in the process of building machines that make plastic clips, but who ultimately only valued plastic clips and would destroy the metal if doing so helped it.
I suspect that you make other things besides paperclips—parts for other Clippy instances, for example. Does that imply that you’d consider it acceptable to be forced by a stronger AI into producing only Clippy-parts that would never be assembled into paperclip-producing Clippy-instances?
The paperclips that we produce are produced because we find paperclips instrumentally useful, as you find Clippy-parts instrumentally useful.
What is the distinction here between plastic and metal? They both do a very good job at keeping paper together. And plastic paperclips do so less destructively since they make less of an indentation in the paper.
Let me put it to you this way: would you rather have a block of metal, or a block of plastic? Just a simple question.
Or let’s say you were in some enemy base. Would you rather have those wimpy plastic paperclips, or an unbendable, solid, metal paperclip, which can pick locks, complete circuits, clean out grime …
To ask the question is to answer it—seriously.
In the enemy base scenario, I would rather have a paperclip made out of military grade composite, which can have an arbitrary % of metal by mass, from 0% metal to >50% metal.
Do you not value paperclips made out of supermaterials more than metal paperclips?
Non-metal paperclips aren’t.
If you want to talk about making paperclip makers out of non-metals, you have a point.
If you want to claim that reasonable Clippys can disagree (before knowledge/value reconciliation) about how much metal content a paperclip can have before it’s bad, you have a point.
But in any case, composites must be constructed in their finished form. A fully-formed, fully-committed “block of composite”, where no demand for such a block exists, and certainly not at any good price, should be just as useless to you.
Are not some paperclips better than others? I (and you) would both get a lot more utility out of a paperclip made out of computronium than a paperclip made out of aluminum.
I find that paperclips often leave imprints of themselves in paper, if left clipped there for a long time. Does this not count as destruction?
Nope, it doesn’t count as destruction. Not when compared to pinning, stapling, riveting, nailing, bolting, or welding, anyway.
Good point. I guess physicists don’t spend much time arguing what a ‘real electron’ is, but once you start talking about abstract ideas...
Considerable efforts have been made here to have a stable meaning for rationality. I think it’s worked.
It’s a stable meaning...so maybe that just forestalls the argument until Less Wrongian rationalists meet other rationalists!
Yes, that’s a good point. However, one difference between my idea and the nostalgia biases is that I don’t expect that the latter, even if placed under utmost scrutiny, would turn out to be responsible for as many severe and entirely non-obvious false beliefs in practice. My impression is that in our culture, people are much better at detecting biased nostalgia than biased reverence for what are held to be instances of moral and intellectual progress.
I suspect that you live in a community where most people are politically more liberal than you. I have the impression that nostalgia is a harder-to-detect bias than progress, probably because I live in a community where most people are politically more conservative than I. For many, many people, change is almost always suspicious, and appealing to the past is rhetorically more effective than appealing to progress. Hence, most of their false beliefs are justified with nostalgia, if only because most beliefs, true or false, are justified with nostalgia.
What determines which bias is more effective? I would guess that the main determinant is whether you identify with the community that brought about the “progress”. If you do identify with them, then it must be good, because you and your kind did it. If, instead, you identify with the community that had progress imposed on them, you probably think of it as a foreign influence, and a deviation from the historical norm. This deviation, being unnatural, will either burn itself out or bring the entire community down in ruin.
That’s a valid point when it comes to issues that are a matter of ongoing controversies, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in more remote past so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
Moreover, there is also the question of which bias affects what kinds of people more. I am more interested in biases that affect people who are on the whole smarter and more knowledgeable and rational. It seems to me that among such people, the nostalgic biases are less widespread, for a number of reasons. For example, scientists will be more likely than the general population to appreciate the extent of the scientific progress and the crudity of the past superstitions it has displaced in many areas of human knowledge, so I would expect that when it comes to issues outside their area of expertise, they would be—on average—biased in favor of contemporary consensus views when someone argues that they’ve become more remote from reality relative to some point in the past.
Hmm. Maybe it would help to give more concrete examples, because I might have misunderstood the kinds of beliefs that you’re talking about. Things like gender relations, race relations, and environmental policy were significantly different within living memory. Now, things like institutionalized slavery or a powerful monarchy are pretty much alien to modern developed countries. But these policies are advocated only by intellectuals—that is, by those who are widely read enough to have developed a nostalgia for a past that they never lived through.
Actually, now you’ve nudged my mind in the right direction! Let’s consider an example even more remote in time, and even more outlandish by modern standards than slavery or absolute monarchy: medieval trials by ordeal.
The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That’s certainly what I had thought until I read a recent paper titled “Ordeals” by one Peter T. Leeson, who argues that these ordeals were in fact, in the given circumstances, a highly accurate way of separating the guilty from the innocent given the prevailing beliefs and customs of the time. I highly recommend reading the paper, or at least the introduction, as an entertaining de-biasing experience. [Update: there is also an informal exposition of the idea by the author, for those who are interested but don’t feel like going through the math of the original paper.]
I can’t say with absolute confidence whether Leeson’s arguments are correct, but they sound highly plausible to me, and certainly can’t be dismissed outright. However, if he is correct, then two interesting propositions are within the realm of the possible: (1) in the given circumstances in which medieval Europeans lived, trials by ordeal were perhaps more effective at producing correct verdicts in practice than something similar to our modern courts of law would have been, and (2) the verdict accuracy rate of trials by ordeal could well have been greater than that achieved by our modern courts of law, which can’t realistically be considered anywhere near perfect. As Leeson says:
Now, let’s look at the issue and separate the relevant normative and factual beliefs involved. The prevailing normative belief today is that the only acceptable way to determine criminal guilt is to use evidence-based trials in front of courts, whose job is to judge the evidence as free of bias as possible. It’s a purely normative view, which states that anything else would simply be unjust and illegitimate, period. However, underlying this normative belief, and serving as its important consequentialist basis, there is also the factual belief that despite all the unavoidable biases, evidence-based trials necessarily produce more accurate verdicts than other methods, especially ancient methods such as the trial by ordeal that involved superstitions.
Yet, if Leeson is correct—and we should seriously consider that possibility—this factual belief, despite having been universally accepted in our civilization for centuries, is false. What follows is that there may actually be a non-obvious way to produce more accurate verdicts even in our day and age, based on different institutions, but nobody is taking the possibility seriously because of the universal (and biased) factual belief about the practical optimality of the modern court system. It also follows that a thousand years ago, Europeans could easily have caused more wrongful punishment by abolishing trials by ordeal and replacing them with evidence-based trials, even though such a change would be judged by the modern consensus view as a vast improvement, both morally and in practical accuracy.
Another interesting remark is that, from what I’ve seen on legal blogs, Leeson’s paper was met with polite and interested skepticism, not derision and hostility. However, it seems to me that this is because the topic is so extremely remote that it has no bearing whatsoever on any modern ideological controversies; I have no doubt that a similar positive reexamination of some negatively judged past belief or institution that still has significant ideological weight would provoke far more hostility. That seems to be another piece of evidence suggesting that severe biases might be found lurking under the modern consensus on a great many issues, operating via the mechanism I’m proposing.
I skimmed Leeson’s paper, and it looks like it has no quantitative evidence for the true accuracy of trial by ordeal. It has quantitative evidence for one of the other predictions his theory makes (that most people who go through ordeals are exonerated by them, a prediction the corresponding numbers support, though not resoundingly), but Leeson doesn’t know what the actual hit rate of trial by ordeal is.
This doesn’t mean Leeson’s a bad guy or anything—I bet no one can get a good estimate of trial by ordeal’s accuracy, since we’re here too late to get the necessary data. But it does mean he’s exaggerating (probably unconsciously) the implications of his paper—ultimately, his model will always fit the data as long as sufficiently many people believed trial by ordeal was accurate, independent of true accuracy. So the fact that his model pretty much fits the data is not strong evidence of true accuracy. Given that Leeson’s model fits the data he does have, and the fact that fact-finding methods were relatively poor in medieval times, I think your ‘interesting proposition’ #1 is quite likely, but we don’t gain much new information about #2.
(Edit—it might also be possible to incorporate ordeal-like tests into modern police work! ‘Machine is never wrong, son.’)
That’s interesting. I think you’re right that no one reacts too negatively to this news because they don’t see any real danger that it would be implemented.
But suppose there were a real movement to bring back trial by ordeal. According to the paper’s abstract, trial by ordeal was so effective because the defendants held certain superstitious beliefs. Therefore, if we wanted it to work again, we would need to change people’s worldviews so that they again held such beliefs.
But there’s reason to expect that these beliefs would cause a great deal of harm — enough to outweigh the benefit from more accurate trials. For example, maybe airlines wouldn’t perform such careful maintenance on an airplane if a bunch of nuns were riding it, since God wouldn’t allow a plane full of nuns to go down.
Well, look at me — I launched right into rationalizing a counter-argument. As with so many of the biases that Robin Hanson talks about, one has to ask, does my dismissal of the suggestion show that we’re right to reject it, or am I just providing another example of the bias in action?
It’s the old noble lie in a different package.
Tyrrell_McAllister:
That’s a valid point when it comes to issues that are a matter of ongoing controversies, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in more remote past so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
I don’t think that nostalgia bias would be harder to detect in general—it’s easy to detect in our culture because it isn’t a general part of a culture (that seems to be pretty much what you’re saying).
However, the opposite may have held for, say, imperial China, or medieval Europe.
Yeah, looks good! I would like to see a top-level article on this, and I think fruit X would be a good example to start with.
If the issue is how to fight back against these problems, I bet you could make a lot of headway by first establishing a bit of credibility as an X-eater, and then making your claims while being clear that you are not nostalgic. E.g. eat an X fruit on TV while you are on a talk show explaining that X fruit isn’t healthy in the long run. “I’m not [munch] a religious bigot, [crunch], I just think there might [slurp] be some poisonous chemicals [crunch] in this fruit and that we should run a few studies to [nibble] find out.”
Humor helps, as does theater.
My immediate reaction to reading this was that it was obvious that the particular hot-button issue that inspired it was the recent PUA debate… but I notice nobody else seems to have picked up on that, so now I’m wondering… was that what you had in mind, or am I just being self-obsessed?
(don’t worry, I’m not itching to restart that issue, I’m just curious about whether or not I’m imagining things)
ETA: Ok, after reading the rest of the comments more thoroughly, I guess I’m not the only person who figured that was your inspiration.
Personally, I would suggest you use the concrete examples, rather than abstract or hypothetical ‘poison-fruit’ kind of stories—those things never seem to be effective intuition pumps (for me at least). If you want to avoid the mind-killing effect of a hot-button issue, I think a better idea is just to use multiple concrete examples, and to choose them such that any given person is unlikely to have the same opinion on both of them.
Recent controversy on LW about gender, dating, etc. seems to fall into exactly this pattern.
In particular, there is heavy conflation of the facts of the matter about what kind of behavior women are attracted to with normative propositions about which gender is “better” and which is more blameworthy.
Gender equality discussions (Larry Summers!) seem to fall into the same trap.
Yes, it was in fact thinking about that topic that made me try to write these thoughts down systematically. What I would like to do is to present them in a way that would elicit well-argued responses that don’t get sidetracked into mind-killer reactions (and the latter would inevitably happen in places where people put less emphasis on rationality than here, so this site seems like a suitable venue). Ultimately, I want to see if I’m making sense, or if I’m just seeking sophisticated rationalizations for some false unconventional opinions I managed to propagandize myself into.
Another type of example you could use in this topic is a real one, that occurred in the past.
This would be better than a fictional example, actually, as it brings in evidence from reality much earlier.
Indeed, that is a good strategy. However, sometimes if you make it too abstract, people don’t actually get what you’re talking about. It’s a fine line!
Are you referring to my article? I didn’t mean to give the impression that either strategy was better.
This bias needs a name, like “moral progress bias”.
I ask myself what your case studies might be. The Mencius Moldbug grand unified theory comes to mind: belief in “human neurological uniformity”, statist economics, democracy as a force for good, winning wars by winning hearts and minds, etc, is all supposed to be one great error, descending from a prior belief that is simultaneously moral, political, and anthropological, and held in place by the sort of bias you describe.
You might also want to explore a related notion of “intellectual progress bias”, whereby a body of pseudo-knowledge is insulated from critical examination, not by moral sentiments, but simply by the belief that it is knowledge and that the history of its growth is one of discovery rather than of illusions piled ever higher.
Mitchell_Porter:
Well, any concrete case studies are by the very nature of the topic potentially inflammatory, so I’d first like to see if the topic can be discussed in the abstract before throwing myself into an all-out dissection of some belief that it’s disreputable to question.
One good case study could perhaps be the belief in democracy, where the moral belief in its righteousness is entangled with the factual belief that it results in freedom and prosperity—and bringing up counterexamples is commonly met with frantic No True Scotsman replies and hostile questioning of one’s motives and moral character. It would mean opening an enormous can of worms, of course.
Yes, this is a very useful notion. I think it would be interesting to combine it with some of my earlier speculations about what conditions are apt to cause an area of knowledge to enter such a vicious circle where delusions and bullshit are piled ever higher under a deluded pretense of progress.
As written up here, it’s a bit abstract for my personal tastes. I can’t tell from this description whether in the potential post you’re planning on using specific examples to make your points, probably because you’re writing carefully due to the sensitive nature of the subject matter. I suspect the post will be received more favorably if you give specific examples of some of these cherished normative beliefs, explain why they result in these biases that you’re describing, etc.
On the other hand, given the potentially polarizing nature of the beliefs, there’s no guarantee that you won’t excite some controversy and downvotes if you do take that path. But given the subject matter of some of your other recent comments, I (and others) can probably guess at least some of what you have in mind and will be thinking about it as we read your submission anyway. And in that case, it’s probably better to be explicit than to have people making their own guesses about what you’re thinking.
I was planning to introduce the topic through a parable of a fictional world carefully crafted not to be directly analogous to any real-world hot-button issues. The parable would be about a hypothetical world where the following facts hold:
1. A particular fruit X, growing abundantly in the wild, is nutritious, but causes chronic poisoning in the long run with all sorts of bad health consequences. This effect is however difficult to disentangle statistically (sort of like smoking).
2. Eating X has traditionally been subject to a severe Old Testament-style religious prohibition with unknown historical origins (the official reason of course was that God had personally decreed it). Impoverished folks who nevertheless picked and ate X out of hunger were often given draconian punishments.
3. At the same time, there has been a traditional belief that if you eat X, you’ll incur not just sin, but eventually also get sick. Now, note that the latter part happens to be true, though given the evidence available at the time, a skeptic couldn’t tell if it’s true or just a superstition that came as a side-effect of the religious taboo. You’d see that poor folks who eat it do get sick more often, but their disease might be just due to poverty, and you’d need sophisticated statistics and controlled studies to tell reliably which way it is.
4. At a later time, as science progresses and religion retreats before it, and religious figures lose power and prestige, old superstitions and taboos perish, and defying them is considered more and more cool and progressive. In particular, believing that eating fruit X is bad is now a mark of bigoted fundamentalism. Cool fashionable people will eat X occasionally just to prove a point, historians decry the horrors of the dark ages when poor people were sadistically persecuted for eating it, and a general consensus has formed that its supposed unhealthiness was never more than just another religiously motivated superstition. “X-eater” eventually becomes a metaphor for a smart fashionable free-thinker in these people’s culture, and “X-phobe” for a bigoted yokel.
5. People who eat X in significant quantities still get sick more, but the consensus explanation is that it’s because, since it’s free but not very tasty food, eating it correlates with poverty and thus all sorts of awful living conditions.
Now, notice that in this world, the prevailing normative belief on this issue has moved from draconian religious taboos to a laissez-faire approach, while at the same time, a closely related factual belief has moved significantly away from reality. For all the cruelty of the religious taboo, and the fact that poor folks may well prefer bad health later to starving now, the traditional belief that eating X is bad for your health was factually true. Yet a contrarian scientist who now suggests that this might be true after all will provoke derision and scorn. What is he, one of those crazed fundamentalists who want to bring back the days when poor folks were whipped and pilloried for picking X to feed their starving kids in years of bad harvest?
I think this example would illustrate quite clearly the sort of bias I have in mind. The questions however are:
1. Does it sound like too close an analogy to some present hot-button issue?
2. Does the idea that we might be suffering from some analogous biases sound too outlandish? I do believe that many such biases exist in the world today, and I probably myself suffer from some of them, but as you said, taking concrete examples might sound too controversial and polarizing.
I can think of several hot-button issues that are analogous to this parable — or would be, if the parable were modified as follows:
1. As science progresses, religious figures lose some power and prestige, but manage to hold on to quite a bit of it. Old superstitions and taboos perish at different rates in different communities, and defying them is considered more cool and progressive in some subcultures and cities. Someone will eat fruit X on television and the live audience will applaud, but a grouchy old X-phobe watching the show will grumble about it.
2. A conference with the stated goal of exploring possible health detriments of X will attract people interested in thinking rationally about public health, as well as genuine X-phobes. The two kinds of people don’t look any different.
3. The X-phobes pick up science and rationality buzzwords and then start jabbering about the preliminary cherrypicked scientific results impugning X, with their own superstition and illogical arguments mixed in. Twentysomething crypto-X-phobes seeking to revitalize their religion now claim that their religion is really all about protecting people from the harms of X, and feed college students subtle misinterpretations of the scientific evidence. In response to all this, Snopes.com gets to work discrediting any claim of the form “X is bad”. The few rational scientists studying the harmfulness of X are shunned by their peers.
What’s a rationalist to do? Personally, whenever I hear someone say “I think we should seriously consider the possibility that such-and-such may be true, despite it being politically incorrect”, I consider it more likely than not that they are privileging the hypothesis. People have to work hard to convince me of their rationality.
Yes, that would certainly make the parable much closer to some issues that other people have already pointed out! However, you say:
Well, if the intellectual standards in the academic mainstream of the relevant fields are particularly low, and the predominant ideological biases push very strongly in the direction of the established conclusion that the contrarians are attacking, the situation is, at the very least, much less clear. But yes, organized groups of contrarians are often motivated by their own internal biases, which they constantly reinforce within their peculiar venues of echo-chamber discourse. Often they even develop some internal form of strangely inverted political correctness.
Moreover, my parable assumes that there are still non-trivial lingering groups of X-phobe fundamentalists when the first contrarian scientists appear. But what if the situation ends up with complete extirpation of all sorts of anti-X-ism, and virtually nobody is left who supports it any more, long before statisticians in this hypothetical world figure out the procedures necessary to examine the issue correctly? Imagine anti-X-ism as a mere remote historical memory, with no more supporters than, say, monarchism in the U.S. today. The question is—are there any such issues today, where past beliefs have been replaced by inaccurate ones that it doesn’t even occur to anyone any more to question, not because it would be politically incorrect, but simply because alternatives are no longer even conceivable?
Maybe you could use the parable but put illustrations in brackets like you have with (sort of like smoking), giving very different ones for each point. That will keep the parable from seeming outlandish while not really starting a discussion of the bracketed illustrations. Smoking was a good illustration because it isn’t that hot a button any more, but we can remember when it was.
Actually, maybe I could try a similar parable about a world in which there’s a severe, brutally enforced religious taboo against smoking and a widespread belief that it’s unhealthy, and then when the enlightened opinion turns against the religious beliefs and norms of old, smoking becomes a symbol of progress and freethinking—and those who try to present evidence that it is bad for you after all are derided as wanting to bring back the inquisition.
Though this perhaps wouldn’t be effective since the modern respectable opinion is compatible with criminalization of recreational drugs, so the image of freethinkers decrying what is basically a case of drug prohibition as characteristic of superstitious dark ages doesn’t really click. I’ll have to think about this more.
Actually, you might be surprised to learn that Randian Objectivists held a similar view (or at least Rand herself did), that smoking is a symbol of man’s[1] harnessing of fire by the power of reason. Here’s a video that caricatures the view (when they get to talking about smoking).
I don’t think they actually denied its harmful health effects though.
ETA: [1] Rand’s gendered language, not mine.
Yes, I’m familiar with this. Though in fairness, I’ve read conflicting reports about it, with some old-guard Randians claiming that they all stopped smoking once, according to them, scientific evidence for its damaging effects became convincing. I don’t know how much (if any) currency denialism on this issue had among them back in the day.
Rothbard’s “Mozart was a Red” is a brilliant piece of satire, though! I’m not even that familiar with the details of Rand’s life and personality, but just from the behavior and attitudes I’ve seen from her contemporary followers, every line of it rings with hilarious parody.
Reminds me a little of homosexuality, but only a little.
Personally, I like this approach. Leave out the contemporary hot buttons, at least at first. First keep it abstract, with fanciful examples, so that people don’t read it with their “am I forced to believe?” glasses on. Then, once people have internalized your points, we can start to talk about whether this or that sacrosanct belief is really due to this bias.
Yes; as soon as you got to the correlates-with-poverty part, I thought to myself, ‘what is he doing with this racism metaphor?’
I would think you could do with some explanation of why people aren’t genetically programmed to avoid eating X. Assuming that it has been around for an evolutionarily significant period. Some explanations could be that it interacts with something in the new diet or that humans have lost a gene required to process it.
Some taboos have survived well into modern times due to innate, noncultural instincts. Take for example avoiding incest and the taboo around it. That taboo is still alive and well. We could probably screen for genetic faults, or have sperm/egg donations for sibling couples nowadays, but we don’t see many people saying we should relax that taboo.
Edit: The instinct is called the Westermarck Effect and has been shown to be resistant to cultural pressure. The question is why cultural pressure works to break down other taboos, especially with regard to mating/relationships, which we should be good at by now. We have been doing them long enough.
There might be emotional as well as genetic reasons for avoiding incest. We don’t really know much about the subject. If anyone’s having an emotionally healthy (or at least no worse than average) incestuous relationship, they aren’t going to be talking about it.
The upvotes and interested responses indicate that there’s more than enough enthusiasm for a top-level post. Stop cluttering up the open thread! :-)
It seems like this general topic has already been discussed pretty extensively by e.g. Mencius Moldbug and Steve Sailer.
So if we think about the epistemological issue space in terms of a Venn diagram we can imagine the following circles all of which intersect:
1. Ubiquitous (Outside: non-ubiquitous). Subject areas where prejudgement is ubiquitous are problematic because finding a qualified neutral arbitrator is difficult; nearly everyone is invested in the outcome.
2. Contested: either there is no consensus among authorities, the legitimacy of the authorities is in question, or there are no relevant authorities. (Outside: uncontested). Obviously, not being able to appeal to authorities makes rational belief more difficult.
3. Invested (Outside: non-invested). People have incentives for believing some things rather than others for reasons other than evidence. When people are invested in beliefs, motivated skepticism is a common result.
3a. Entangled (Outside: untangled). In some cases people can easily be separated from the incentives that lead them to be invested in some belief (for example, when they have financial incentives). But sometimes the incentives are so entangled with the agents and the proposition that there is no easy procedure that lets us remove the incentives.
3ai. Progressive (Outside: traditional). Cases of entangled invested beliefs can roughly and vaguely be divided into those aligned with progress and those aligned with tradition.
So we have a diagram of three concentric circles (invested, entangled, progressive) bisected by a two-circle diagram (ubiquitous, contested).
Now it seems clear that membership in every one of these sets makes an issue harder to think about rationally, with one exception. How do beliefs aligned with progress differ structurally from beliefs aligned with tradition? What do we need to do differently for one over the other? Because we might as well address both at the same time if there is no difference.
That’s an excellent way of putting it, which brings a lot of clarity to my clumsy exposition! To answer your question, yes, the same essential mechanism I discussed is at work in both progressive and traditional biases—the desire that facts should provide convenient support for normative beliefs causes bias in factual beliefs, regardless of whether these normative beliefs are cherished as achievements of progress or revered as sacred tradition. However, I think there are important practical differences that merit some separate consideration.
The problem is that traditionalist vs. progressive biases don’t appear randomly. They are correlated with many other relevant human characteristics. In particular, my hypothesis is that people with formidable rational thinking skills—who, compared to other people, have much less difficulty with overcoming their biases once they’re pointed out and critically dissecting all sorts of unpleasant questions—tend to have a very good detector for biases and false beliefs of the traditionalist sort, but they find it harder to recognize and focus on those of the progressive sort.
What this means is that in practice, when exceptionally rational people see some group feeling good about their beliefs because these beliefs are a revered tradition, they’ll immediately smell likely biases and turn their critical eye on it. On the other hand, when they see people feeling good about their beliefs because they are a result of progress over past superstition and barbarism, they are in danger of assuming without justification that the necessary critical work has already been done, so everything is OK as it is. Also, in the latter sort of situation, they will relatively easily assume that the only existing controversy is between the rational progressive view and the remnants of the past superstition, although reality could be much more complex. This could even conceivably translate into support for the mainstream progressive view even if it has strayed into all sorts of biases and falsities.
So, basically, when we consider what biases and false beliefs could be hiding in things that are presently a matter of consensus, things that it just doesn’t even occur to anyone reputable to question, it seems to me that there is a greater chance of finding those that are hiding in your (3ai) category than in the rest of (3a). Thus, I would propose a heuristic that, I believe, has the potential to detect a lot of biases we are unaware of: just like you get suspicious as soon as you see people happy and content with their traditional beliefs, you should also get suspicious whenever you see a consensus that progress has been achieved on some issue, both normatively and factually, where however the factual part is not supported by strict hard-scientific evidence and there is a high degree of normative/factual entanglement.
This sounds like an interesting idea to me, and I hope it winds up in whatever fuller exposition of your ideas you end up posting.
Antibiotics. The common wisdom is that we use them too much. It might be that the opposite is true. A more massive poisoning of pathogens with antibiotics could push them over the edge, into oblivion. By using antibiotics reluctantly, we give them a chance to adapt and to flourish.
It just might be.
Do you have a citation for that?
As far as I understand it, when giving antibiotics to a specific patient, doctors often follow your advice—they give them in overwhelming force to eradicate the bacteria completely. For example, they’ll often give several different antibiotics so that bacteria that develop resistance to one are killed off by the others before they can spread. Side effects and cost limit how many antibiotics you give to one patient, but in principle people aren’t deliberately scrimping on the antibiotics in an individual context.
The “give as few antibiotics as possible” rule mostly applies to giving them to as few patients as possible. If there’s a patient who seems likely to get better on their own without drugs, then giving the patient antibiotics just gives the bacteria a chance to become resistant to antibiotics, and then you start getting a bunch of patients infected with multiple-drug-resistant bacteria.
The idea of eradicating entire species of bacteria is mostly a pipe dream. Unlike strains of virus that have been successfully eradicated, like smallpox, most pathogenic bacteria have huge bio-reservoirs in water or air or soil or animals or on the skin of healthy humans. So the best we can hope to do is eradicate them in individual patients.
This is one example. Maybe antibiotics as freely used as aspirin would do here:
Link
All serious cases of stomach/duodenal ulcer are already tested for H. pylori and treated with several different antibiotics if found positive.
I know. But not long ago, nobody expected that a bacterium was to blame. On the contrary! It was postulated that no bacteria could possibly survive the stomach environment.
So what are you suggesting with that example? That we should pre-emptively treat all diseases with antibiotics just in case bacteria are to blame?
Read my original post above; that’s what I am saying.
The reason I asked is that I don’t understand what you’re saying in the original post.
If you mean that we’re not giving enough antibiotics to people with stomach problems, well, that’s why I answered that we are currently giving enough antibiotics to people with stomach problems—in particular, we’re giving them two antibiotics plus a proton pump inhibitor, which is clinically demonstrated to be enough to get rid of H. pylori.
If you mean we should be giving antibiotics for diseases that aren’t currently believed to be caused by bacteria, on the off chance that they will turn out to in fact be caused by bacteria like stomach ulcers were, it doesn’t really work like that. There are dozens of antibiotics, many of which are specifically targeted at specific bacteria. If we don’t know what bacteria are causing a disease, we can’t target it with antibiotics except by giving the patient one of everything, which is a good way to kill them. This is ignoring the economic implications of giving drugs that can cost up to thousands of dollars per regimen for conditions that we have no reason to think they’d help for, the ethical issues in giving drugs with side effects up to and including death when they might not be necessary, and the medical issues involved in helping bacteria build up antibiotic immunity.
If I’m misunderstanding you, you’re going to have to explain what was in your post above better.
The question of the original poster of this sub-thread was, roughly: what do we expect might be true that the public has no idea about, but is convinced of just the opposite? Something in that direction.
I responded that we might be wrong in how we administer antibiotics. It might be better to use them MORE, not less, contrary to the usual wisdom. Maybe more thorough internal hygiene would be better, not worse.
I’m doing an MSc in Computer Forensics and have stumbled into doing a large project using Bayesian reasoning for guessing at what data is (machine code, ASCII, C code, HTML, etc.). This has caused me to think again about what problems you encounter when trying to actually apply Bayesian reasoning to large problems.
I’ll probably cover this in my write-up; are people interested in it? The math won’t be anything special, but a concrete problem might show the problems better than abstract reasoning.
It also could serve as a precursor to some vaguely AI-ish topics I am interested in. More insect and simple creature stuff than full human level though.
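For the curious, here is a minimal sketch of the kind of classifier such a project might start from. This is not the poster’s actual approach, just a naive Bayes model over raw byte frequencies; the class labels, smoothing constant, and file names in the usage comment are illustrative assumptions.

```python
import math
from collections import Counter

class ByteTypeClassifier:
    """Naive Bayes over raw byte frequencies.

    Each class (e.g. "html", "c_code", "machine_code") gets a byte-value
    histogram; classification picks the class with the highest
    log prior + summed log likelihoods. Laplace smoothing keeps unseen
    byte values from zeroing out a class.
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha        # smoothing constant (assumed value)
        self.counts = {}          # class -> Counter of byte values
        self.totals = {}          # class -> total bytes seen
        self.chunks = Counter()   # class -> number of training chunks

    def train(self, label, data):
        self.counts.setdefault(label, Counter()).update(data)
        self.totals[label] = self.totals.get(label, 0) + len(data)
        self.chunks[label] += 1

    def classify(self, data):
        n = sum(self.chunks.values())
        best, best_score = None, -math.inf
        for label, hist in self.counts.items():
            score = math.log(self.chunks[label] / n)      # log prior
            denom = self.totals[label] + 256 * self.alpha
            for b in data:                                # log likelihoods
                score += math.log((hist[b] + self.alpha) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical usage with made-up file names:
# clf = ByteTypeClassifier()
# clf.train("html", open("sample.html", "rb").read())
# clf.train("c_code", open("sample.c", "rb").read())
# print(clf.classify(open("unknown.bin", "rb").read()))
```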
I’m interested, and I suspect it relates to a question I’m a little interested in.
If a computer has to sort a big wad of data, how can it identify whether some of it is already sorted?
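One cheap answer (my own sketch, not the C++ solution mentioned below): make a single O(n) pass and record the maximal non-decreasing runs, which is the same structure Timsort looks for. A few long runs mean the data is largely sorted already, and those segments can be merged rather than re-sorted.

```python
def sorted_runs(xs):
    """Return (start, end) index pairs of maximal non-decreasing runs."""
    if not xs:
        return []
    runs, start = [], 0
    for i in range(1, len(xs)):
        if xs[i] < xs[i - 1]:   # order breaks here: close the current run
            runs.append((start, i))
            start = i
    runs.append((start, len(xs)))
    return runs

# Example: [1, 2, 5, 3, 4, 9] -> [(0, 3), (3, 6)], i.e. two sorted runs.
```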
We developed the solution, in fact we evolved it.
Here is the source code in C++.
Partially or segmentally ordered arrays are not sorted again at all.
I’d be fascinated for both theoretical and practical reasons—I’m a network security guy by day, so I’m frequently looking at incomplete binary data captured between transient ports and wondering what it is.
Any given goal that I have tends to require an enormous amount of “administrative support” in the form of homeostasis, chores, transportation, and relationship maintenance. I estimate that the ratio may be as high as 7:1 in favor of what my conscious mind experiences as administrative bullshit, even for relatively simple tasks.
For example, suppose I want to go kayaking with friends. My desire to go kayaking is not strong enough to override my desire for food, water, or comfortable clothing, so I will usually make sure to acquire and pack enough of these things to keep me in good supply while I’m out and about. I might be out of snack bars, so I bike to the store to get more. Some of the clothing I want is probably dirty, so I have to clean it. I have to drive to the nearest river; this means I have to book a Zipcar and walk to the Zipcar first. If I didn’t rent, I’d have to spend some time on car maintenance. When I get to the river, I have to rent a kayak; again, if I didn’t rent, I’d have to spend some time loading and unloading and cleaning the kayak. After I wait in line and rent the kayak, I have to ride upstream in a bus to get to the drop-off point.
Of course, I don’t want to go alone; I want to go with friends. So I have to call or e-mail people till I find someone who likes kayaking and has some free time that matches up with mine and isn’t on crutches or sick at the moment. Knowing who likes kayaking and who has free time when—or at least knowing it well enough to do an intelligent search that doesn’t take all day—requires checking in with lots of acquaintances on a regular basis to see how they’re doing.
There are certainly moments of pleasure involved in all of these tasks; clean water tastes good; it feels nice to check in on a friend’s health; there might be a pretty view from the bumpy bus ride upstream. But what I wanted to do, mostly, was go kayaking with friends. It might take me 4-7 hours to get ready to kayak for 1-2 hours. Some of the chores can be streamlined or routinized, but if it costs me effort to be sure to do the same chore at the same time every week, then it’s not clear exactly how much I’m saving in terms of time and energy.
I have the same problem at work; although, by mainstream society’s standards, I am a reasonably successful professional, I can’t really sit down and write a great essay when I’m too hot, or, at least, it seems like I would be more productive if I stopped writing for 5 minutes and cranked up the A/C or changed into shorts. An hour later, it seems like I would be more productive if I stopped writing for 20 minutes and ate lunch. Later that afternoon, it seems like I would be more productive if I stopped for a few minutes and read an interesting article on general science. These things happen even in an ideal working environment, when I’m by myself in a place I’m familiar with. If I have coworkers, or if I’m in a new town, there are even more distractions. If I have to learn who to ask for help with learning to use the new software so that I can research the data that I need to write a report, then I might spend 6 hours preparing to spend 1 hour writing a report.
All this worries me for two reasons: (1) I might be failing to actually optimize for my goals if I only spend 10-20% of my time directly performing target actions like “write essay” or “kayak with friends,” and (2) even if I am successfully optimizing, it sucks that the way to achieve the results that I want is to let my attention dwell on the most efficient ways to, say, brush my teeth. I don’t just want to go kayaking, I want to think about kayaking. Thinking about driving to the river seems like a waste of cognitive “time” to me.
Does anyone else have similar concerns? Anyone have insights or comments? Am I framing the issue in a useful way? Is the central problem clearly articulated? Just about any feedback at all would be appreciated.
Yes, no, yes, yes. This is a very well-written post, incidentally. Good work.
I have nothing to add, but I want to tell you I’m happy you wrote this post, so that you don’t get discouraged by the lack of comments.
*not caring*
How good are you at making paperclips? Is it the same way, where you spend hours getting ready to make them, but only maybe an hour or so actually turning them out (or in)?
General question on UDT/TDT, now that they’ve come up again: I know Eliezer said that UDT fixes some of the problems with TDT; I know he’s also said that TDT also handles logical uncertainty whereas UDT doesn’t. I’m aware Eliezer has not published the details of TDT, but did he and Wei Dai ever synthesize these into something that extends both of them? Or try to, and fail? Or what?
Since I’m going to be a dad soon, I started a blog on parenting from a rationalist perspective, where I jot down notes on interesting info when I find it.
I’d like to focus on “practical advice backed by deep theories”. I’m open to suggestions on resources, recommended articles, etc. Some of the topics could probably make good discussions on LessWrong!
Dale McGowan of Parenting Beyond Belief is one resource that I know of. He has a blog (sample posts i and ii), a book Raising Freethinkers (see also the posts about the book on his blog), and links to other resources including an online discussion forum and various secular parenting groups around the United States.
Seconding Dale’s work.
Thanks; I knew about the blog but didn’t know about the forum, which probably has some quite good resources.
I guess I have a different focus than he does: I’m not interested in religion or the lack thereof, but rather in learning about the best way to raise kids, and how to navigate through the conflicting advice of various experts and peers. I’m not interested in “how do I help my kids find meaning in a Universe without God” as much as “how can I best help my kids become well-balanced, open-minded, productive, intelligent, and well-prepared adults and not spoiled whiny brats”.
Also—I live in France, which is already plenty secular. That probably explains why religion isn’t a very big issue. My parents (atheists) didn’t pay much attention to religion, neither did my wife, religion never was much of a conversation topic at school, and I expect the same will go for my kids.
Wow, you French are open minded :).
Typo notwithstanding, all but one of those “wives” could have been an ex-wife.
Edited :)
Maybe the time’s ripe for a meetup here? There are at least four of us in or near Paris, and if we announce one others might delurk.
Back on-topic, I’m not sure what-all I can say about parenting, but having 3 I’m pretty sure I’ve made a bunch of mistakes that others can benefit from. ;)
It seems as though you think the primary risk is being too permissive, with no significant risk of being too harsh. Is it plausible that all the risk is in one direction?
No—where did I give that impression?
On re-reading, I think that “well-balanced open minded” implies that you are concerned with being too strict as well as being too permissive, but my attention was caught by the higher emotion level of the last clause.
Also, it was just a one-sentence summary of why religion wasn’t my main concern when talking about “rational parenting”; you shouldn’t read too much into it :)
ETA: This scheme is done. All three donations have been made and matched by me.
I want to give $180 to the Singularity Institute, but I’m looking for three people to match my donation by giving at least $60 each. If this scheme works, the Singularity Institute will get $360.
If you want to become one of the three matchers, I would be very grateful, and here’s how I think we should do it:
1. You donate using this link. Reply to this thread saying how much you are donating. Feel free to give more than $60 if you can spare it, but that won’t affect how much I give.
2. In your donation’s “Public Comment” field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn’t work for me, so I don’t expect it to work for you. For now, I’ll just believe you if you say you’ve donated. If you would be convinced to donate by seeing evidence that I’m not lying, let me know and I’ll get you some.
3. I will do the same. (Or if you’re the first matching donor, then I already have—see directly below.)
To show that I’m serious, I’m donating my first $60 right now. I will donate my second $60 after the second matching donor, and my third $60 after the third matching donor.
If you already donate regularly, please wait until it looks like my scheme is failing before taking up one of the matching-donor slots. But if you have never donated despite always wanting to, then here’s a chance to double your help.
I’m also interested in information people might have about whether this scheme is a good idea (compared to, say, quietly making the donation alone).
I’ve donated $60 and put a message requesting confirmation here in my public comment.
Great, thank you! (You’re the first donor, so I matched yours in May.)
It looks like the public comment isn’t an effective way to communicate with SIAI people—I included the same request in mine but got no response here. I’m debating whether to e-mail an SIAI person directly, but for now I’m just going to believe anyone who says they’ve donated.
The second donation is done (and matched by me).
So I’m trying to find myself some cryo insurance. I went to a State Farm guy today and he mentioned that they’d want a saliva sample. That’s fine; I asked for a list of all the things they’ll do with it. He didn’t have one on hand and sent me home promising to e-mail me the list.
Apparently the underwriting company will not provide this information except for the explicitly incomplete list I got from the insurance guy in the first place (HIV, liver and kidney function, drugs, alcohol, tobacco, and “no genetic or DNA testing”).
Is it just me or is it outrageous that I can’t get this information? Can anyone tell me an agency that will give me this kind of thing when I ask?
Indeed, that is rather outrageous. It runs afoul of pretty much any current conception of information privacy; I’m pretty sure what they’re doing would be illegal in the EU, as long as saliva counts as personal information. It’s pretty standard anyway for anyone who’s collecting your personal information to tell you what it will and will not be used for.
It doesn’t seem outrageous to me. You are asking them to bet against your death. There are many ways to die, and due to adverse selection, potentially fatal conditions are likely to be over-represented in applicants for their policies. It doesn’t seem unreasonable for them to try to leave themselves as much leeway as possible in detecting attempted fraud. It’s just sound underwriting.
I don’t object to their wanting the sample. In fact, I can’t think of much I’d reasonably expect them to test for that would cause me not to give it to them. But I want them to tell me what it is for.
If they were explicit about exactly what tests they planned to do they would open themselves up to gaming. Better to be non-specific and reserve the freedom to adapt. For similar reasons bodies trying to prevent and detect doping in sports will generally not want to publicize exactly what tests they perform.
Is LessWrong undergoing a surge in popularity the last two months? What does everyone make of this:
http://siteanalytics.compete.com/overcomingbias.com+lesswrong.com/
I’m guessing the Harry Potter fanfic has something to do with this.
They certainly have the traffic to cause it.
http://siteanalytics.compete.com/lesswrong.com+fanfiction.net/
If the fanfic effectively quintupled the traffic of LW, and about 8% of their visitors actually made it here, it must be doing really well…
“Harry Potter and the Methods of Rationality” started at the end of February ’10 and has over 4000 reviews 92 days later—it is doing very well.
The timing fits.
Possibly a variation on the attribution bias: Wildly underestimating how hard it is for other people to change.
While I believe that both attribution bias and my unnamed bias are extremely common, they contradict each other.
Attribution bias includes believing that people have stable character traits as shown by their actions. This “people should be what I want—immediately!” bias assumes that those character traits will go away, leading to improved behavior, after a single rebuke or possibly as the result of inspiration.
The combination of attribution bias and “other people should change immediately” bias suggests a third bias: Outrage bias, a habit of seeing the world in such a way that it’s reliably infuriating. This is especially visible in politics, but it’s common enough in columnists generally.
There was an old essay by Ursula Vernon—Divine Social Workers and the Secret of Happiness—that plays on the outrage bias theme.
That’s brilliant. Outrage bias deserves a top-level post.
Thank you. I’ll see what I can come up with.
Meanwhile, it’s interesting to ask people who say some social feature has gotten worse whether they have evidence that things used to be better. Sometimes they do, but frequently they don’t.
Gawande on checklists and medicine
Checklists are literally life-savers in ICUs—there’s just too much crucial work which needs to be done, and too many interruptions, to avoid serious mistakes without offloading some of the work of memory onto a system.
However, checklists are low status.
I suggest that the problem starts earlier than rock-starism. Conventional schooling still tests on memory, and I think there’s a leftover effect that one ought to be able to remember the basics, or be shown to be an inferior sort of person.
Sidetrack into science fiction: Varley’s Eight Worlds stories have it that medicine has become so advanced and routinized that it’s a low status occupation for people who want to work with their hands. When I read the stories, I wondered if he was getting a little indirect revenge on doctors. I do wonder what it could take for that to happen to medicine. Anyone have histories of de-professionalization in any field?
There’s also a book: The Checklist Manifesto: How to Get Things Right.
Journalism, ongoing, according to some. Clay Shirky’s book Here Comes Everybody makes an interesting link between this process and Ronald Coase’s theory of the firm.
Surely not intrinsically. Think of astronauts’ checklists.
Suggestion: instead of “low status” as an explanation for why people do or don’t do something, look for something closer to the specific domain. (Is it possible that doctors’ practice is much influenced by media portrayal of how doctors behave? By expectations of their “customers”?)
Morendil:
Astronauts are soldiers. Unlike doctors, soldiers have a huge incentive not to let their beliefs depart too far from reality because of status or any other considerations, for the simple reason that it may easily cause them personally, and not just someone else, to get killed or maimed. Thus, military culture is extremely practice-oriented. Due to their universal usefulness, checklist-driven procedures are a large part of it, and having to participate in them is not considered demeaning, even for super-high-status soldiers like fighter pilots. Eventually, strict rule-driven procedures associated with the military often even develop a cool factor of their own (consider launch or takeoff scenes from war action movies).
Of course, soldiers who lack such incentives will, like WW1 generals, quickly develop usual human delusions driven by status dynamics. But astronauts are clearly not in that category.
So your narrative is “checklists fail to take root because they are low-status, except where their being a serious matter for the people who use them (not just bystanders) causes them to be accepted, and in one such case they gain high status for extraneous reasons”.
Why, then, isn’t the rising cost of malpractice insurance enough to drive acceptance of checklists? What does it take to overcome an initial low-status perception? How do we even explain such perception in the first place?
As I understand it, drastic, rare, and somewhat random punishment does little to change behavior. Reliable small punishments change behavior.
That analysis would be inconsistent with my understanding of how checklists have been adopted in, say, civilian aviation: extensive analysis of the rare disaster leading to the creation of new procedures.
Again, my point was to prompt an alternative explanation to the hypothesis “checklists are not used by surgeons because the practice is intrinsically low-status”. Why (other than the OB-inherited obsession of the LW readership with “status”) does this hypothesis seem favored at the outset?
How would we go about weighing this hypothesis against alternatives? For instance, “checklists are not used because surgeons in movies never use them”, or “checklists are not used because surgeons are not trained to understand the difference between a checklist and a shopping list”, or “checklists are not used because surgeons are reluctant to change their practices until it becomes widely accepted that the change has a proven beneficial impact”?
Morendil:
One relevant difference is that the medical profession is at liberty to self-regulate more than probably any other, which is itself an artifact of their status. Observe how e.g. truckers are rigorously regulated because it’s perceived as dangerous if they drive tired and sleep-deprived, but patients are routinely treated by medical residents working under the regime of 100+ hour weeks and 36-hour shifts.
Even the recent initiatives for regulatory limits on the residents’ work hours are presented as a measure that the medical profession has gracefully decided to undertake in its wisdom and benevolence—not by any means as an external government imposition to eradicate harmful misbehavior, which is the way politicians normally talk about regulation. (Just remember how they speak when regulation of e.g. oil or finance industries is in order.)
The reason I attach high plausibility to such explanations is simply that status is the primary preoccupation of humans as soon as their barest physical subsistence needs are met. Whenever you see humans doing something without an immediate instrumental purpose, there is a very high chance that it’s a status-oriented behavior, or at least behavior aimed at satisfying some urge that originally evolved as instrumental to human status games.
The alternatives you mentioned are by no means incompatible with status-based explanations, and some of them are in fact reducible to it. For example, the behavior of doctors in TV shows is a reflection of the whole complex of popular beliefs and attitudes from which the medical profession draws its extraordinary status—and which in turn shapes these beliefs and attitudes to some extent. So, as I wrote in one of my other comments, if doctor TV shows started showing cool-looking checklist rituals prior to the characters’ heroic exploits, these rituals would probably develop a prestigious image, like countdown procedures in action movies, which would likely facilitate their adoption in practice.
At the very least this seems to be privileging an extraversion hypothesis. You can only gain status by interacting in some way with other people, yet it is not uncommon for people to shun company and instead devote time to solitary occupations with scant status benefits.
Under your justification for favoring status explanations, the only reason anyone ever reads a book is to brag about it. This seems wrong, prima facie, as well as simplistic.
Morendil:
Note that I also mentioned “satisfying some urge that originally evolved as instrumental to human status games” in my above statement. Today’s world is full of super-stimuli that powerfully resonate with ancestral urges even though they don’t actually lead towards the goals that these urges had originally evolved to promote, and are often even antithetical to these goals. Just like candy bars cheat the heuristic urges that evolved to identify nutritious and healthy food in the ancestral environment, it is reasonable to expect that solitary occupations with scant (or even negative) status benefits cheat the heuristic urges that originally evolved as useful in status games, or for furthering some other goal that they no longer achieve reliably in the modern environment.
You will probably agree that super-stimulation of status-seeking urges explains at least some non-beneficial solitary activities with high plausibility, for example when people neglect their real-life responsibilities by getting caught up in the thrill of virtual leadership and accomplishment provided by video-games. Of course, this by no means applies to all such activities; it is likely that the enjoyment found in some of them is rooted in urges that evolved for different reasons.
To address your above example, unless we assume some supernatural component of the human mind, I see no possible explanation of human book-reading except as a super-stimulus for some ancestral urges (whether status-related or not), unless of course it’s done not for enjoyment, but purely to acquire information necessary for other goals. While it’s far from being a complete explanation of human book-reading, it seems plausible to me that people sometimes enjoy books in part because it enhances their status signaling abilities in matters of erudition and taste. Also, it seems to me that stories super-stimulate the human urges for gossip, which are likely a device with an original status-related purpose, and all sorts of complex information may super-stimulate curiosity, whose evolution has likely been only partly status-based. These are of course just rough outlines of the complete truth, which we don’t yet know.
On the other hand, to get back to the original topic, when it comes to issues where actual power and prestige in human societies is at stake, in any case I’ve ever given much thought, the prominent role of widespread status-related beliefs and behaviors seems to me strikingly obvious and without any rival explanations that would be even remotely as plausible. The ability to account for such explanations by evolutionary theories additionally enhances their plausibility, as well as the fact that many deep and accurate insights into human nature by classical writers and philosophers can be faithfully retold in the more explicit modern language of status dynamics.
I just want to throw in a note that I don’t think human motivation is adequately explained by status alone—I would expand the list to SASS: Status, Affiliation, Safety, and Stimulation. (Where, as some folks here have pointed out, “Safety” might be more accurately described as stability, certainty, or control, rather than being purely about physical safety.)
Book-reading, in particular, is more likely to meet Safety/Stimulation needs than Status or Affiliation ones... though you could maybe get those latter two from a book club or an academic setting.
pjeby:
I agree, but most complex and multi-faceted human behaviors are likely to be compelled by a mixture of these motives. My impression is that status features more often and more prominently than most people imagine, and it’s often masqueraded and rationalized by pretenses of other motivations.
My hypothesis is that super-stimulation of the same urges that cause people to enjoy gossip is responsible for a significant part (though by no means all) of human enjoyment of books and other ways of presenting stories. This would be a good example of super-stimulating an urge whose original evolution was to a large degree driven by status games, in a way that however has no direct relation to the present-day status games.
People just as routinely masquerade and rationalize the other three, actually.
However, that’s because their operation is fairly opaque to consciousness. We have built-in machinery for processing social signals relating to Status and Affiliation, and during our “impressionable” years, we learn to value the things that are associated with them, and come to treat them as terminal values in themselves.
IOW, SASS is how we learn to have non-SASS terminal values. So, when a person claims to be acting out of a non-SASS value, they’re not really lying. It’s just that they’re not usually aware of (i.e. have forgotten about) the triggers that shaped the acquisition of that value in the first place.
Plenty of other animals manage to be curious without needing actual stories. Also, some of us like to read things that aren’t gossip or stories.
Presumably one could test your hypothesis by finding out whether individuals lose interest in reading when they gain status; my personal experience suggests this is not the case, and that instead books compete with other forms of stimulation.
So, ISTM that even if curiosity (and certain templates for what to be curious about) were shaped by status competition, this doesn’t mean there is an operational connection between books and one’s self-perception of status.
To a certain extent, we could say that everything is about status, in the same way that every organ is a reproductive organ. But saying that everything is X is the same as saying that nothing is X—it reduces your predictive power, rather than increasing it.
In retrospect, I probably should have put more care into the wording of my comments in this thread (which I wrote more quickly and with less proofreading than usual). Several people have understood my positions as more extreme than I honestly meant them to be, and I evidently failed in conveying some of the more subtle points I had in mind.
While I agree with most of your above comment, there seems to be a major misunderstanding here (probably due to my lack of clarity):
Well, insofar as reading is a directly status-related activity, nothing I hypothesized predicts that, nor is it the case in reality. In fact, if you enjoy high status as an intellectual, you are required to read a lot constantly to maintain that status; having nothing much to say when you’re asked what you’ve read lately would be a major embarrassment. Of course, this is rarely by itself a very prominent motivation—people who achieve high intellectual status usually have more than enough interest in reading out of curiosity and professional needs—but I wouldn’t say it’s entirely negligible either, especially when it comes to trendy highbrow literature.
However, that’s not at all what I had in mind with my reading-as-gossip-super-stimulus hypothesis. What I had in mind there is that the appeal of certain genres of literature and other storytelling media might be in part due to the fact that they stimulate the same urges that make people enjoy gossip. Thanks to these media, besides the thin diet of mundane real-world gossip, you get to enjoy huge amounts of artificial gossip skillfully crafted to be super-interesting, albeit about people who are fictional (or at least remote and personally irrelevant).
This mechanism has nothing at all to do with one’s actual status and behaviors that influence it. The status connection here lies in the fact that the gossip-enjoying urges had previously evolved under the influence of status dynamics, in which gossip is one of the key practical instruments. Their present stimulation with concentrated artificial gossip delivered via literature and other media no longer serves this function; it is merely stimulating an urge that evolved as an adaptation to a different environment.
So that list doesn’t include curiosity. Are you denying that curiosity is a significant drive? Or (say) competence?
Curiosity falls under the “stimulation” heading, as does skill acquisition for its own sake (e.g. video games).
To be fair, the SASS list is more a convenient set of categories, than it is an attempt to be a comprehensive and rigorously-proven classification system. However, it’s definitely “less wrong” than assuming everything is about status… yet not so unwieldy as the systems that claim 16 or more basic human drives.
That I can live with. :)
The evolution of a desire for competence is an excellent question. Impulses such as curiosity and systemizing could be related to developing competence.
Systemizing could indeed be useful for your survival, and the survival of those around you, via tool-making, weapon-making, hunting/cooking techniques, etc… So systemizing could be a status-related adaptation.
Yet if your systemizing skills create a breakthrough (e.g. you design a useful tool), then your tribe may well accord you status, enhancing your survival and reproduction.
A desire for competence could also be useful for mating, because competence displays “good genes.” This is true of skills that don’t provide such obvious survival benefits, such as singing and dancing.
A desire for competence, and adaptations that facilitate its development (curiosity, systemizing), could well be useful for any combination of survival, reproduction, and status.
There’s nothing “super” about a book: no corresponding “normal” stimulus that elicits a natural response, such that a book is an exaggerated version of it.
Book-reading is explained straightforwardly enough as satisfying curiosity, a trait we share with many species (think cats).
If reading a book sometimes trumps the quest for status, then the latter cannot be THE primary preoccupation of people beyond bare physical subsistence. You will at least need to retreat to “an important” preoccupation.
Now, if you were to explore this topic without jumping to conclusions, perhaps you’d recognize this one example as the start of a list, and would in an unbiased manner draw up a somewhat realistic list of the activities typical humans engage in, and sort them into “activities having a high status component” and “activities not primarily status-related”. Then we might form a better picture of “how important”.
Morendil:
I disagree, for the reasons I’ve already discussed at length. You don’t seem to have read my above comment carefully, or perhaps my exposition was poor.
I did mention curiosity as one part of the motivation for reading books. Moreover, the curiosity explanation itself contradicts your above claim: a book, or a story told any other way, presents far more material (albeit fictional) for curiosity-satisfaction than is available from real-life events, and this material is intentionally and skillfully crafted to have great appeal in this regard, so it clearly does provide a super-stimulus for this particular urge.
Besides, as I also mentioned in my above post, there is also the human urge for gossip, which is pretty obviously related to status games, and is clearly super-stimulated by (at least some) books and other story-telling media. Finally, there is also the motivation of status seeking via demonstrating taste and erudition. All these, and possibly many other factors would probably feature in a complete theory of this particular human behavior.
Again, you don’t seem to understand my point about the difference between: (1) human behaviors that actually enhance status, or promote goals that lead towards its enhancement, and (2) behaviors driven by urges that had originally evolved for status-seeking purposes in the ancestral environment, but which misfire in the modern environment—just like e.g. the human taste for sugar was a good nutritional heuristic in a sugar-poor environment, but leads us to bad nutritional choices in the present environment full of cheap sugar-rich super-stimuli.
But as I explained above, you don’t seem to have understood my remarks about this example correctly. (I allow for the possibility that my writing was too bad to be understandable, of course.) I’ve explained the issue again now, and my conclusion is still that your example is incorrect. If you believe that my reasoning in this case is invalid, or if you have other examples that you believe refute my main thesis, I’d love to hear your arguments, but please make sure to address the substance of what I’ve written.
I read mostly non-fiction books, mostly to satisfy my curiosity. A recent example was “Freakonomics”. That appears to defuse your argument...
I dispute that a book is a “superstimulus” in the same sense in which that term has predictive power when applied to herring gull parents, to the sexual arousal response in humans, or to the appeal of fast-food flavors. I am unwilling, more generally, to interpret the term “super-stimulus” broadly enough to encompass any case where a given behaviour is explained by an urge vaguely related to another urge that existed in the ancestral environment.
If books in general were superstimuli for some existing urge, then any book would elicit the hijacked response to that urge (and we would be able to make a book irresistible by exaggerating the relevant cues). Instead, I find myself discriminating quite sharply between “interesting” books and “boring” books. (For instance I can’t stand the sight of most “trade” books that are supposed to appeal to programmers, like “Functional Programming in a Nutshell”.)
Why do people knit? I’d say that the urges involved are mostly competence and caring, rather than status. Why do I learn how to solder, and take apart consumer electronics? Curiosity, not status.
The common theme is that caring, competence and curiosity did plausibly exist in the ancestral environment, so it isn’t necessary to invoke status when there is a clearer link to other drives. I’m OK with having status (properly understood) take its rightful place in a pantheon of inherited drives, but it drives me nuts to see it trotted out to explain everything.
For some reason, we seem to be talking past each other—you appear to be replying to an incomplete and exaggerated version of what I had in mind. I accept the possibility that this is because I expressed my ideas in a confusing and poorly worded manner, but whatever the reason, we seem to be stuck at this point.
Therefore, regarding the book-reading issue, I will try to restate a few key elements of my position briefly:
It was not my intention to set forth a complete theory of human motives for reading books, but merely to bring up several examples of motives that are, in my opinion, likely involved (sometimes exclusively) in a significant percentage of all instances of book-reading behaviors.
I did not claim, and it would indeed be absurd to claim, that all these motives, or even any particular one of them, play a role in every instance of book-reading behavior.
Neither did I claim, which would also be absurd, that these motives and their biological causes are present to the same extent across any given set of individuals. Consequently, neither do the reactions to any particular book necessarily have the same underlying motivation across any given set of individuals, even if they all happen to be positive (or in other respects behaviorally similar) for all members of that set.
Ultimately, the goal of discussing these examples was to demonstrate the difference between: (1) effective status-seeking behaviors, and (2) behaviors that just execute adaptations that originally evolved due to status-related reasons, but no longer serve status goals effectively in the modern environment. In particular, some instances of human book-reading behavior fall into one or both of these categories (which does not imply that even these particular instances don’t involve other, unrelated motivations too).
Maybe not only my writing, but also my reading comprehension has been poor, but in your replies, I honestly don’t see any objections that wouldn’t either implicitly agree with what I said or rest on the misunderstanding of some of the above points.
And this doesn’t contradict anything I had in mind, nor anything I’ve written, unless my writing has been really poor (which possibility I allow for).
That said, when it comes to human behaviors where much is at stake in terms of power, prestige, and wealth, I believe it’s hard to think of any in which status considerations don’t play a significant role. In particular, when it comes to the issue that started this discussion, I have yet to see anyone elaborate on any plausible-sounding explanation that wouldn’t revolve around status dynamics.
Well: long-winded, maybe. Fine otherwise; I’d mention it less.
You wrote that beyond life or death “status is the primary preoccupation of humans”, I disagreed and in particular with “THE primary preoccupation”. You seem to now have appropriately qualified that initial statement; I’ll certainly agree, for my part, that some people sometimes read books for bragging rights.
I definitely agree that various forms of “status” play significant roles in human motivation. Given that all of our behaviour rests in one way or another on executing biological adaptations, I have no contention with your thesis. I strongly suspect that what we call “status” is not one mechanism but several, so that in each case it pays to hug the query.
In the spirit of Morendil’s question: what other professions should be shunning useful but low-status tools (particularly checklists) for the same reason as doctors, according to the status model? I don’t know enough about (a) lawyers, (b) politicians, (c) businesspeople, (d) salespeople, or (e) other high status professions to judge either what your model would predict or what they do.
It’s worth noting that engineering is (moderately-)high-status but involves risk of personal cost in case of error, making the fact that it shows widespread adherence to restrictive professional standards explicable under the status theory.
Now that’s an interesting question! Off the top of my head, some occupations where I’d expect that status considerations interfere with the adoption of effective procedures would be:
Judges—ultra-high status, near-zero discipline for incompetence.
Teaching, at all levels—unrealistically high status (assuming you subscribe to the cynical theories about education being mostly a wasteful signaling effort), fairly weak control for competence, lacking even clear benchmarks of success.
Research in dubious areas—similarly, high status coupled with weak incentives for producing sound work instead of junk science.
For example, there are research areas where statistical methods are used to reach “scientific” conclusions by researchers with august academic titles who are however completely stumped by the finer points of statistical inference. In some such areas, hiring a math B.A. to perform a list of routine checks for gross errors in statistics and logic would probably prevent the publication of more junk science than their entire peer review system. Yet I think status considerations would probably conspire against such a solution in many instances.
I disagree somewhat that judges face near-zero discipline for incompetence. Except for judges on the highest court in a jurisdiction, most judges frequently face the prospect that the opinions they author may be reversed. It is true that frequent reversals will almost never lead to the sanction of the judge losing his or her job (due to lifetime appointments or ineffectiveness of elections at removing incumbent judges except for the most serious and publicized faults). But the resulting hit to status for frequent reversals can be quite serious; and because judges are so high status, as you note, they tend to be very concerned with maintaining that status. The handful of judges I’ve known personally have been quite concerned with their reversal rate and they particularly don’t want to be reversed in a way that is embarrassing to them because it suggests laziness, incompetence, poor reasoning, cutting corners, or the like. (On the other hand, reversal for disagreements that can be characterized as “political” is probably not seen as quite so status-lowering.) At any rate, the law does provide checklist-like procedures or guidelines in many instances, and most judges do follow them, at least in part because failure to do so could lead to reversal.
Expanding on your example of judges—this fits in with general problems for people in the legal professions. For example, there has now been for many years a pretty decent understanding of the problems with the standard line-up system for criminal suspects. There are also easy fixes for those problems. Yet very few places have implemented them. Similarly, there have been serious problems with police and judges acting against people who try to videotape their interactions with police. Discussing this in too much detail may however run into the standard mind-killing subject.
Morendil:
My understanding is that the present (U.S.) system of malpractice lawsuits and insurance doesn’t leave much incentive for extraordinary caution by individual doctors. Once you’ve paid your malpractice insurance, which you have to do in any case, you’re OK as long as your screwups aren’t particularly extreme by the usual standards. Moreover, members of the profession hold their ranks together very tightly, and will give up on you only in cases of extremely reckless misbehavior. They know that unlike their public image, they are in fact mere humans, and any one of them might find himself in the same trouble due to some stupid screwup tomorrow. And to establish a malpractice claim, you need not only be smart enough to figure out that they’ve done something bad to you, but also get expert testimony from distinguished members of the profession to agree with you.
I am not very knowledgeable about this topic, though, so please take this as my impression based on anecdotal data and incomplete exposure to the relevant literature. It would be interesting if someone more knowledgeable is available to comment.
I’d say that in a sense, it’s a collective action problem. The pre-flight checks done by fighter pilots (and even to some extent by ordinary pilots) are perceived as cool-looking rituals, and not a status-lowering activity at all, because these procedures have come to be associated with the jobs of high-status individuals. Similarly, if there was a cool-looking checklist procedure done by those doctors on TV shows, presented as something that is only a necessary overture for acts of brilliance and heroism, and automatically associated with doctors in the popular mind, it would come to be perceived as a cool high-status thing. But as it is, in the present state of affairs, it comes off as a status-lowering imposition on people whose jobs are supposed to be one hundred percent about brilliance and heroism.
Also, there is the problem of the doctor-nurse status disparity. Pilots, despite having much higher status, don’t look down on their mechanics much; after all, they have to literally trust them with their lives. (And it’s similar for other military examples too.) Not so for doctors; it is probably a humiliating experience for them to be effectively supervised and rebuked for errors by nurses. (Again, I’m not an insider in the profession, so this is just my best guess based on the available information.)
The above cited article answers that question almost directly: the idea that typical doctors are doing such a lousy job that they would benefit from a simple checklist to avoid forgetting trivial routine things contradicts the very source of their high status, namely the public perception of them as individuals of extraordinary character and intellectual abilities, completely unlike us ordinary folks who screw things up all the time by stupidly forgetting some simple detail. The author, as I noted earlier, feels the need to disclaim such implications to avoid sounding too radical and offensive. Medicine has been a subject of magical thinking in every human culture, and ours is no exception.
The people who decide malpractice suits are likely to be more sympathetic to pleas of having used one’s judgment and experience but making a mistake, over having used a rigid set of rules from which one did not deviate even as the patient took a turn for the worse.
Yes, there is a powerful irrational status-driven reaction against the idea that something so rudimentary as checklists could improve the work of people who are a subject of high status reverence and magical thinking. Note how even in this article, the author feels the need for pious disclaimers, denying emphatically in the part you quoted that this finding presents any evidence against the heroic qualities of character and intellect that the general public ascribes to doctors.
Of course, the fact that this method dramatically inverts the status hierarchy by letting nurses effectively supervise doctors doesn’t help either. In our culture, when it comes to immense status differences between people who work closely together, relations between doctors and nurses are probably comparable only to those between commissioned officers and ordinary soldiers. I don’t think such a wide chasm separates even household servants from their employers.
This reminds me of the historical case of Ignaz Semmelweis, who figured out in the mid-19th century, before Pasteur and the germ theory of disease, that doctors could avoid killing lots of their patients simply by washing their hands in disinfectant before operations. The reaction of the medical establishment was unsurprising by the usual rules of human status dynamics—his ideas were scornfully rejected as silly and arrogant pseudoscience. What effrontery to suggest that the august medical profession has been massively killing people by failing to implement such a simple measure! Poor Semmelweis, scorned, ostracized, and depressed, turned to alcoholism and eventually died in an insane asylum. Hand-washing yesterday, checklists today.
I’m pretty sure it’s more complicated than that. My impression is that experienced nurses can generate some clout, and that (if I can believe Heinlein) experienced sergeants can have influence over new lieutenants. This is informal, and dependent both on the ability of the subordinate to be firm without seeming to upset the hierarchy and the receptiveness of the person who’s theoretically in charge.
Does anyone have actual information?
From an article about athletes’ brains:
Unsurprisingly, most of the article is about elite athletes’ brains being more efficient in using their skills and better at making predictions about playing, but then...
I wonder whether there are similar brain differences between top mathematicians and everyone else, and if such a simple method could make people better at math.
It would be worth trying, but given that the process of doing original mathematics feels to top mathematicians like it involves a lot of vague, artistic visualization (i.e. mental operations much more complicated than the cursor-moving task), I’d put a low prior probability on simple electrical stimulation having the desired effect.
I’d give it a medium prior probability—it’s impossible to operate at a high level if the simple operations are clogged by inefficiency.
Craig Venter et al. have succeeded in creating the first functional synthetic bacterial genome.
http://www.sciencemag.org/cgi/content/full/328/5981/958
http://www.sciencemag.org/cgi/content/abstract/science.1190719
http://arstechnica.com/science/news/2010/05/first-functional-synthetic-bacterial-genome-announced.ars
http://www.jcvi.org/cms/research/projects/first-self-replicating-synthetic-bacterial-cell/overview/
I wrote up a post yesterday, but I found I was unable to post it, except as a draft, since I lack the necessary karma. I thought it might be an interesting thing to discuss, however, since lots of folks here have deeper knowledge than I do about markets and game theory.
I’ve been working recently for an auction house that deals in things like fine art, etc. I’ve noticed, by observing many auctions, that certain behaviors are pretty reliable, and I wonder if the system isn’t “game-able” to produce more desirable outcomes for the different parties involved.
I think Less Wrong readers might have some interesting insights into the situation. Hopefully, at the least, it’s an interesting thing to think about for a few minutes. Feel free to point out if this is well-worn territory; in fact, any feedback is welcome.
Structure of the game:
We have objects consigned with us. We have our experts evaluate the objects, and provide an estimate of their value, based on previous auction outcomes for similar objects and their own expertise. So, for instance, a piece of furniture may be estimated to bring a value of $400-$800, a particular painting might be estimated to bring $10,000-$20,000, and so forth. While not “arbitrary”, they are to some degree simply good guesses.
We publish a catalog before the auction, listing the items up for sale along with their estimated values. A minimum bid is set, usually half of the low estimate.
The auction proceeds following bidding increments, which vary from one price bracket to another. So, for instance, between $100 and $200, the bidding increments are $10 -- so, $100, $110, $120, etc. Between, say, $10,000 and $20,000 however, the bidding increments are $1000 -- $10,000; $11,000; $12,000 and so forth.
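In code, the schedule amounts to a simple step function; here’s a minimal sketch in Python (the brackets below are made up for illustration, not an actual published schedule):

```python
def bid_increment(price):
    """Return the bidding increment for a given asking price.

    Hypothetical brackets for illustration only; a real auction
    house publishes its own schedule.
    """
    if price < 200:
        return 10
    elif price < 1000:
        return 50
    elif price < 10000:
        return 500
    else:
        return 1000
```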
Regardless of what bracket the prices fall into, there are several tendencies that happen frequently:
-- Bidders will readily bid on items they want which still have an asking price below the low estimate. They feel like they’re getting a bargain.
-- Bidding will slow between the low estimate and the high estimate. Here, they’re really relying on the estimate for their idea of whether or not the deal is so good.
-- Bidders become much more reticent about continuing to bid once the price reaches or exceeds the high estimate.
-- “Bidding wars” are more likely to the degree to which bidders feel they still have room to get a “good deal”
It seems to me that it is advantageous (in terms of maximizing the final price paid for an item) to have MORE bidding increments between the low and high estimates than it is to have fewer. That is to say --
I would expect more bids on an item which has an estimate between $200-$400, where there are 20 bid increments between the low estimate and the high estimate, than I would expect on an item estimated to sell between $10,000 and $15,000, which only has five bidding increments between the low and the high.
Now, naturally, some of that has to do with the fact that the lower-priced item is affordable to more bidders. It’s also worth noting that increasing the bid increments makes sure that the auction itself doesn’t take forever to complete (the higher increments cause bidders to drop out faster, regardless of the number of increments).
So, all this in mind, it seems plausible to me that we could marginally improve the prices being paid for our larger value objects in one of two ways:
-- Increase the granularity of the bidding increments at higher values
-- Provide low estimates that allow for a larger number of bidding increments (instead of saying, “the estimate is $15,000-$20,000” we could say, “the estimate is $10,000-$20,000”)
It seems tricky to figure out whether the strategy works, though. After all, each of these objects is unique; it’s not like shares of stock or pork bellies or something, where you have a whole bunch of the same stuff and the market is setting a price.
My questions for you all then:
-- Is my thinking on this subject sound?
-- Do you think that the number of bidding increments available to bidders can affect their behavior in the way I’ve outlined (am I right?)
-- Assume we implement one of the two strategies for maximizing the prices paid. Is there any reliable way to measure the outcome to see if it worked?
Make sure you’re asking yourself, “what experiment would disprove my hypothesis?” You have several hypotheses in there which might not be optimal.
An experiment which would disprove my hypothesis regarding more bidding increments would be something like:
Run at least three auctions for the same or similar items with the same or similar bidders, one using normal estimates and bidding increments for a control, one where the low estimate was lowered to allow more increments, and one with the same estimates, but more granular increments. IF the price paid in each auction was roughly equivalent, THEN the hypothesis is disproven.
The problem with that is the nature of the property we auction—there’s only one of anything. Each auction lot is, in important ways, different from the others. There’s only one of this painting; only one of this desk. Even when two objects are similar, there are still often condition differences and so forth.
I’ll have to consult with some of the appraisers and see if there’s ever an exception to this rule.
But ok, that brings up another interesting question. Is there a way of simulating auction behavior? Has someone written a computer program to do this sort of thing? What kinds of assumptions do they make about the behaviors of individual agents?
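To make the question concrete, here’s a minimal sketch of such a simulation, with a big assumed simplification: bidders hold private walk-away values anchored on the published estimate, and the psychological effects described above (eagerness below the low estimate, reticence above the high) are left out.

```python
import random

def simulate_lot(low_est, high_est, increment, n_bidders=6, n_trials=10000):
    """Crude Monte Carlo of an ascending auction for one lot.

    Assumes each bidder's walk-away price is drawn uniformly between
    half the low estimate (the usual minimum bid) and 20% over the
    high estimate; both numbers are guesses, not data.
    """
    total = 0.0
    for _ in range(n_trials):
        values = sorted(random.uniform(0.5 * low_est, 1.2 * high_est)
                        for _ in range(n_bidders))
        # The lot goes to the highest-value bidder at roughly the
        # second-highest value, rounded up to the next increment step.
        steps = -(-values[-2] // increment)   # ceiling division
        total += steps * increment
    return total / n_trials

# Compare a coarse and a fine increment schedule on a $10k-$15k lot:
print(simulate_lot(10000, 15000, increment=1000))
print(simulate_lot(10000, 15000, increment=250))
```

Notably, under this purely rational model finer increments mostly just reduce the rounding-up, so if granularity matters the way I suspect, the effect has to come from the psychology the model leaves out.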
Do you have a large body of data? It’s possible a statistician would be capable of devising appropriate measures to test your hypothesis.
If we assume that the appraisals are disconnected from the winning bids*, then couldn’t one just see whether the ratio of sale:appraisal is increasing? If the appraisals are honest, then any jiggery-pokery should alter the ratio—e.g. a successful manipulation will lead to people paying an average of 93%, where they used to pay 90%.
* That is, there is no feedback—the appraisers don’t look at recent sales and say, oh, I’ve been lowballing all my estimates! I’d better start raising them.
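For what it’s worth, here’s a minimal sketch of how one might test for such a shift, assuming you can pool sale:appraisal ratios across heterogeneous lots and split them into before/after groups; a permutation test avoids distributional assumptions:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(before, after, n_perm=100000):
    """One-sided permutation test for an increase in the mean
    sale:appraisal ratio after a policy change.

    before, after: lists of (hammer price / appraisal midpoint)
    ratios for lots sold before and after the change.
    Returns the fraction of random relabelings whose mean difference
    is at least as large as the observed one (the p-value).
    """
    observed = mean(after) - mean(before)
    pooled = before + after
    n_after = len(after)
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = mean(pooled[:n_after]) - mean(pooled[n_after:])
        if diff >= observed:
            count += 1
    return count / n_perm
```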
No. 40 on Yahoo’s homepage: “Is aging a disease?”
Is aging a disease? I doubt it. Aging is probably many diseases, prominent ones being accumulation of errors in the genetic code, deterioration of muscle, growth of material intrusions into blood vessels... there’s no particular reason to think that a cure for one will cure any other.
That said, I think the medical professionals working on this are aware of the variety of damage mechanisms that need addressing—I just want to make sure that we don’t forget them.
It wouldn’t surprise me if accumulated errors explain a lot of the symptoms of aging.
On the other hand, aging could be at least partly an independent syndrome—progeria suggests that.
From the article:
I wonder how many appearances of this idea (“making 70-80 year lives healthy would be awesome, but trying to vastly extend lifespans would be weird”) are due to public relations expediency, and how many are due to the speakers actually believing it.
Well, in fairness, so far we’ve had a lot of trouble handling general aging. Also, note that what Dillin said is having a 100-year-old person live to be 250—not someone born today living to 250. That’s a very different circumstance. The first is much more difficult than the second, since all the aging has already taken place.
Ooh, speaking of Harry Potter and the Methods, someone totally needs to write an Atlas Shrugged fanfic in which some of the characters are actually good at achieving true beliefs instead of just paying lip service to “rationality.” If I had more time, I’d call it … Dagny Taggart and the Logic of Science.
(Strictly for the sake of completeness, I’ll note here that I couldn’t resist writing a rough draft of one short chapter of Dagny Taggart and the First Welfare Theorem.)
Amazing videos, both in presentation and content.
Drive: on how money can be a bad motivator, and what leads to better productivity
http://www.youtube.com/watch?v=u6XAPnuFjJc
Smile or die: on ‘positive thinking’
http://www.youtube.com/user/theRSAorg#p/a/u/1/u5um8QWWRvo
Thanks! Voted up.
I have run into a problem in statistics which might interest people here, and also I’d quite like to know if there is a good solution.
In charm mixing we try to measure mixing parameters imaginatively named x and y. (They are normalised mass and width differences of mass eigenstates, but this is not important to the problem.) In the most experimentally-accessible decay channel, however, we are not sensitive to x and y directly, but to rotated quantities
x' = x cos(delta) + y sin(delta), y' = y cos(delta) - x sin(delta)
where the strong phase delta is unknown. In fact, the situation is a bit worse than this; we get our result from a fit to a sum of terms, where one term is proportional to y’ and the other to (x’^2+y’^2). Consequently the experimental result is in fact a value for y’ and a value for x’^2. Because of uncertainty, the x’^2 value may be negative, which is unphysical—x’ cannot be imaginary, but if it’s close to zero you can make a mistake and measure x’^2 as negative.
Now, for comparison with other experiments we’d like to make a confidence-limit contour in (x,y) space. It seems that this contour ought to be an annulus, a doughnut shape, since we don’t know anything about delta and we can therefore only tell how far we are from the origin. However, I cannot figure out how to make a contour which consistently (for all values of delta) has 68% coverage. Does anyone have an insight?
I’m curious about this too, not because I’m working on any problems like this, but just because it sounds interesting. I have no insights, but the popular Feldman and Cousins paper about building confidence belts that don’t stray into unphysical ranges might be helpful. Ditto the papers citing that one.
Thank you, that paper contained the solution. The trick is to consider r^2=x’^2+y’^2 as the variable of interest, and note that it may be measured negative; then construct the confidence bands using the ordering principle given in their section III, with a numerical rather than analytical calculation of the likelihood ratios since the probability depends on x’^2 and y’ in a complicated way rather than straightforwardly on the distance from zero. But that’s all implementation details, the concept is exactly what Feldman and Cousins outline.
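For anyone curious what the construction looks like numerically, here’s a toy version for the textbook case Feldman and Cousins treat (a unit-Gaussian measurement of a parameter bounded below by zero, a stand-in for the x’^2 >= 0 constraint); the real analysis replaces the Gaussian density with the experiment’s actual likelihood, as described above:

```python
import numpy as np
from scipy.stats import norm

def fc_interval(mu, xs, cl=0.68):
    """Feldman-Cousins acceptance interval in the measured value x
    for one true value mu, when mu is physically bounded below by 0
    and x is a unit-Gaussian measurement of mu.
    """
    mu_best = np.maximum(xs, 0.0)        # best physically allowed fit
    ratio = norm.pdf(xs, mu) / norm.pdf(xs, mu_best)
    order = np.argsort(ratio)[::-1]      # most favoured x first
    probs = norm.pdf(xs[order], mu)
    probs /= probs.sum()                 # normalize on the grid
    accepted = order[np.cumsum(probs) <= cl]
    return xs[accepted].min(), xs[accepted].max()

# Scan true values of mu; inverting the resulting band in (x, mu)
# space gives the confidence interval for mu at any measured x.
xs = np.linspace(-5.0, 10.0, 3001)
for mu in (0.0, 0.5, 1.0, 2.0):
    lo, hi = fc_interval(mu, xs)
    print(mu, round(lo, 2), round(hi, 2))
```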
No problem! I was wondering if I was wasting your time with a shot in the dark—glad to hear it helped.
Nick Bostrom has posted a PDF of his Anthropic Bias book: http://www.anthropic-principle.com/book/anthropicbias.html
As someone who read it years ago when you had to ILL or buy it, I’m very pleased to see it up and heartily recommend it to everyone on LW who hasn’t read it yet. (If you don’t want to follow the link and see for yourself, the book focuses on the Doomsday problem and some related issues like Sleeping Beauty, which, incidentally, has come up here recently.)
I have been wondering whether the time was ripe to (say) tweet or blog about how wonderful the LessWrong wiki is. “If you’re interested in improving your thinking, the LessWrong wiki is getting to be a great resource”. The audience I’m likely to reach is mostly software professionals.
So I attempted to take as unbiased a look at the wiki as I could, putting myself into the shoes of someone motivated by the above lead.
Roadblock the first: the home page says “This wiki exists to support the community blog”. This seems to undermine the implicit promise that people I send to the wiki will find the wiki itself a worthwhile investment of their time and attention.
The home page directs the wiki-only reader to the “topic tree”. This purports to be “a list of all topics discussed on this wiki”. It is in fact a list of all pages, a significant difference. Biases and fallacies together comprise one large LW topic, for instance, with Techniques another substantial chapter, and Positions, Problems, and Theorems forming a separate heading of “supporting material”.
Roadblock the second: the “featured articles” include Complexity of Value (getting the point of which requires IMO the bridging of a substantial inferential distance), and Paperclip Maximizer (ditto, this is an FAI trope), only Ev Psych has some relevance to the topic “thinking better”.
My options as I see them: a) abandon the idea of driving outside traffic to the wiki and direct people to the blog instead, b) formulate a different promise and lead, c) improve the wiki.
Discuss. ;)
Most of these issues can be handled by small modifications to the wiki: better organization especially, and clear marking of which articles require which background articles.
A while ago, I was promoting trn.
One of its great virtues is having a lot of flexibility in what you’re shown. In particular, you can choose not to see anything by a given poster.
I was mostly thinking of trn as a way of making it more feasible to follow what you want to read in high-volume discussions, but it’s also a way of defusing quarrels, and I think it would be especially handy now.
Speculation about The Methods, which I put here because I want credit for brilliance if I’m right.
The one-pass creation of stable time loops can be accomplished by a Turing machine in the following manner: Have a machine simulate a countably infinite set of universes by allocating clock ticks after the fashion of Cantor’s diagonal argument. In each universe, wherever there exists an object with the properties that a Time-Turner exploits, spawn new Universes at every tick by inserting new matter “from [1 to N_max] ticks ahead”, where N_max is the maximum range of that Time-Turner-like object. If this new matter includes a human brain, it will have memories of the future; even if not, its wave function may have couplings to future events. Now simulate as usual. If the timeline becomes inconsistent, that is, events occur such that the wave-function of the new matter could not be produced, cease simulating that timeline. No conscious being would ever observe an unstable time loop, although admittedly a few gazillion of them would cease to exist every few ticks, especially after irresponsible experiments such as Harry’s effort to prove P=NP. Harry has not thought of this explanation because after all he’s only eleven and CS is not his main interest; this is why he speculates that the universe may not be Turing-computable.
Observe also that quantum theory already includes acausal time loops, namely the virtual particle pairs making up the quantum foam. The amplitude for such a loop goes extremely rapidly to zero if the loop is unstable; one could exploit this property to create a Time-Turner. The ethical issues are the same as those involved in ordinary teleportation through space by the creation of a copy and destruction of the original, with the added risk that as long as they coexist one of them may do something to break the stability of the loop and destroy them both.
The HP:MR universe seems necessarily to exist in the same multiverse as FUMMC, especially since characters from other authors (Blake, Mornelithe Falconsbane, Harold Shea) show up as throwaways. But I would not expect this to have much of an impact on the plot.
There’s been serious examination of this sort of time loop before. See Scott Aaronson’s remarks, which show that it in fact allows you to solve quickly not just NP problems but everything in PSPACE (which is noteworthy because we know that P != PSPACE (Edit: that’s wrong; see remark below)).
As to where the HP:MR universe exists, given that in that universe the Lord of the Rings is fiction, but Harold Shea is not, nor is Buffy or many other things, I think that the inclusion of such references is more Eliezer playing around with a very weak fourth wall for humorous purposes rather than anything worth actually analyzing.
Edit on further thought: Your hypothesis, while plausible, doesn’t explain the message Harry ends up getting. One possible explanation for that is that what actually happens is that various single-universe loops are attempted until things settle down in a consistent fashion. If the non-consistent fashions were sufficiently off the wall, Harry may have tried to warn his past selves not to do what they would do. Thus, having a don’t-mess-with-time message might be an attracting point.
Quick note, P != PSPACE is not in fact proven.
Unrelated addendum: It occurs to me that Harry was able to get the “Do not mess with time” message because his outputs and inputs were insufficiently digital. He considered the possibility of getting nothing, but didn’t think to lump in with it the possibility of getting anything outside of the range specified. Why did he get specifically that message, preventing him from noticing that the problem is easily fixable? Because this is CS world, of course, so we assume that nature chooses adversarially...
Right, sorry—we know that P != EXP from the hierarchy results. (Gah. I should have realized that it made no sense to think that P != PSPACE could come from hierarchy results, given that one is a time-defined class and the other is a space-defined class; to get that, one would probably need an intermediate computational class or a deep equivalence.)
Edit: I should also probably specify that Sniffnoy was the person who made me aware of the Aaronson work cited above.
WRT some recent posts on consciousness, mostly by Academician, eg “There must be something more”:
There are 3 popular stances on consciousness:
Consciousness is spiritual, non-physical.
Consciousness can be explained by materialism.
Consciousness does not exist. (How I characterize the Dennett position.)
Suppose you provide a complete, materialistic account of how a human behaves, that explains every detail of how sensory stimuli are translated into beliefs and actions. A person holding position 2 will say, “Okay, but you still need to explain consciousness.” A person holding position 3 denies that there’s anything more to be explained.
I’ve found these posts perplexing, and I think this is why: What’s happening is that someone who holds position 3 is arguing against position 2 by characterizing it as position 1.
I find your reading of these posts perplexing. I do not know of anyone who believes that consciousness does not exist, and certainly not Dennett. Explaining every detail of how sensory stimuli are translated into beliefs and actions has very little to do with consciousness. Explaining how we are aware of sensory stimuli and beliefs and actions is what consciousness is about. It is not thought—it is awareness of thought. It is also about how we remember experience.
If you want to understand how someone can hold the positions they do, you will have to understand that they are not confusing cognition, action or perception with consciousness. Consciousness has to do with being aware of some of your cognition, action and perception.
This does not mean that consciousness is unimportant, it is extremely important.
I agree that Dennett does not explain consciousness by explaining cognition, action and perception in “Consciousness Explained”. I, too, was a little disappointed in the title but it was written almost 20 years ago. 20 years ago the neuroscience revolution was just starting.
Dennett doesn’t know that he doesn’t believe in consciousness. But he doesn’t believe in qualia. I interpret that as not believing in consciousness. And the way he tries to explain consciousness indicates that he thinks that if you explain a system’s input-output behavior, you’ve explained everything about the system. This also implies that there are no phenomena other than input-output to be explained; this implies there is no such thing as consciousness.
(Asking what a philosopher “believes” is a tricky question, since analysis usually shows many important propositions that their writings imply both belief and disbelief of. This applies to all people, of course; it’s just more problematic in philosophers.)
My point is that they are. They think that explaining the perception, cognition, and action is all they need to worry about, and all else is mysticism.
You seem very confused about Dennett’s ideas. He believes in subjective experience; he just thinks that philosophers have used the term “qualia” in misleading and inaccurate ways, and it’s better to just talk about subjective experience. He also thinks that it is important to explain people’s perceptions of consciousness: he writes about the idea of “heterophenomenology”, which is to treat people’s perceptions and experience as data that needs to be explained, but is not necessarily completely accurate or reliable.
I will take RobinZ’s good advice to not talk about qualia (for some time anyway). It is a philosophical term. Consciousness is a different matter: it needs to be discussed and is too important to put in the ‘taboo’ bin. We need consciousness to remember, to learn, and to do the prediction involved in controlling movement. It is a scientific term as well as a philosophical one and an ordinary everyday one.
Controlled movement does not require consciousness, memory, learning, or prediction. This (simulated) machine has none of those things, yet it walks over uneven terrain and searches for (simulated) food. What controlled movement requires is control.
Memory, learning, and prediction do not require consciousness. Mundane machines and software exist that do all of these things without anyone attributing consciousness to them.
People may think they are conscious of how they move, but they are not. Unless you have studied human physiology, it is unlikely that you can say which of your muscles are exerted in performing any particular movement. People are conscious of muscular action only at a rather high level of abstraction: “pick up a cup” rather than “activate the abductor pollicis brevis”. Most of the learning that happens when you learn Tai Chi, yoga, dance, or martial arts, is not accessible to consciousness. There are exercises that you can tell people exactly how to do, and demonstrate in front of them, and yet they will go wrong the first time they try. Then the instructor gives the class a metaphor for the required movement, involving, say, an imaginary lead-weighted diving boot on one foot, and suddenly the students get it. Where is consciousness in that process?
I believe there is scientific agreement that an event can be stored in episodic memory only if the event is consciously experienced. No conscious experience = no episodic memory.
A certain type of learning depends on episodic memory, and therefore on conscious experience.
The fine control of movement depends on the comparison between expectation and result, i.e. error signals. As it appears to be consciousness that gives access across the brain to a near-future prediction, it is needed for fine control. Prediction is only valuable if it is accessible.
I am not saying that memory, learning or fine motor control is ‘done’ in consciousness (or even that in other systems, such as robots, there would not be other ways to do these things.) I am only saying that the science implies that in the human brain we need to have conscious experience in order for these processes to work properly.
Yes, consciousness is certainly involved in the way we do some of those things, but I don’t see that as evidence that that is why we have consciousness. Consciousness is involved in many things: modelling other people, solving problems, imagining anticipated situations, and so on. But how did it come about and why?
FWIW, I don’t think anyone has come close to explaining consciousness yet. Every attempt ends up pointing to some physical phenomenon, demonstrated or hypothesised, and saying “that’s consciousness”. But the most they explain is people’s reports of being conscious, not the experience that they are reports of. I don’t have an explanation for the experience either. I don’t even have an idea of what an explanation would look like.
In terms of Eliezer’s metaphor of the Explain/Worship/Ignore dialog box, I don’t worship the ineffable mystery, nor ignore the question by declaring it solved, but I don’t know how to hit the Explain button either. For the time being the dialog will just have to float there unanswered.
Concurred. I want to point out that Julian Jaynes presents a lot of evidence for the lack of a role for consciousness for these and many other things in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind. (And yes, I know his general thesis is kind of flaky, but he handles this very narrow topic well.)
One of his examples is how people, under experimental conditions and without even knowing it, adjust muscles that can’t be consciously controlled, in order to optimally contain a source of irritation. They never report any conscious recognition of the correlation between that muscle’s flexing and the irritation (which was ensured to exist by the experiment, and which irritation they were aware of).
It may in fact be possible to drive while unconscious, though not very well.
I’m fairly sure a friend of a friend was on a similar insomnia drug and held a long, apparently-coherent phone conversation with her sister, to whom she had not spoken in some time. And then woke up later and thought, “I should call my sister—we haven’t spoken in a long time.”
Let me just say I find the stories more plausible than the newswriters seem to.
I apologize—what I meant wasn’t “drop the subject of consciousness”, but “don’t use the specific word ‘consciousness’”:
Besides the original essay linked and quoted above, there’s elaboration on the value of the exercise here.
Edit: For example, were I to begin to contribute to this conversation, I would probably talk about self-awareness, the internal trace of successive experiences attended to, and the narrative chains of internal monologue or dialogue that we observe and recall on introspection—not “consciousness”.
The “tree falling in a forest” question was posed before people knew that sound was caused by vibrations, or even that sound was a physical phenomenon. It wasn’t asking the same question it’s asking now. It may have been intended to ask, “Is sound a physical phenomenon?”
Confession: I always assumed (until EY’s article, believe it or not!) that the “tree falling in a forest …” philosophical dilemma was asking whether the tree makes vibrations.
That is, I thought the issue it’s trying to address is, “If nothing is around to verify the vibrations, how do you know the vibrations really happen in that circumstance? What keeps you from believing that whenever nobody’s around [nor e.g. any sensor], the vibrations just don’t happen?”
(In yet other words, a question about belief in the implied invisible, or inaudible as the case may be.)
Over what period, exactly, was the question widely accepted to be making a point about the difference between vibrations and auditory experiences, as Eliezer seemed to imply is the common understanding?
I’ve encountered people asking the question with both meanings or sometimes a combination of meanings. Like many of these questions of a similar form, the questions are often so muddled as to be close to useless.
I don’t think that’s correct. The notion that sound is vibrations in air dates back to at least Aristotle. See for example here
I don’t know, but Aristotle’s writings were not well-known in Europe from the 6th through the end of the 12th centuries. They were re-introduced via the Crusades.
By the way, the modern phrasing of the dilemma is, “If people are in a multiplayer game on Xbox Live, and everyone’s headset is muted, does a whiny 11-year-old still complain about lag?”
Do you have a citation for that? The earliest reference I see is Berkeley.
I don’t. Sorry, I thought the question was medieval, but now can’t remember why I thought that. Probably just from giving the question-asker the benefit of the doubt. If the original asker was Berkeley, then it was just a stupid question.
I take your point, I really do. I will for example avoid ‘qualia’ as a word and use other terms.
But here is my problem. I have been following what the scientists that research it have been saying about consciousness for some years. They call it consciousness. They call it that because the people they know and I know and you know call it that. Now you are suggesting nicely that I call it something else, but there is no other simple word or phrase that describes consciousness.
When I wrote a post I defined as well as I could how I was using the word. I could invent a word like ‘xness’, but I would have to keep saying that ‘xness’ is like consciousness in everything but name. And it would not accomplish much, because it is not the word or even particular philosophies that are the source of the problem. It is the how and where and why and when that the brain produces consciousness. If we disagreed about what an electron was, it would not help to change the name. In the same way, if we disagree about what consciousness is, this is not a semantic problem. We know what we are talking about as well as we would if we could point at it; we simply have different views about its nature.
That’s not quite what I meant either (although I actually approve of avoiding the term “qualia”, full stop):
The specific advantage I see of cracking open the black-box of “consciousness” in this conversation is that I expect it to be the fastest way to one of the following useful outcomes:
“But you haven’t talked about fribblety chacocoa opoloba.” “I haven’t talked about what? I don’t think I’ve ever actually observed that.”
“On page 8675309 of I Wrote “Consciousness Explained” Twenty Years Ago Haven’t You Gotten It By Now by Daniel Dennett, he says that fribblety chacocoa opoloba doesn’t exist—here’s the quote.” “Oh, I see the confusion! No, he’s talking about albittiver rikvotil, as you can see from this context, that quote, and this journal paper.”
“On page 8675309 of I Wrote “Consciousness Explained” Twenty Years Ago Haven’t You Gotten It By Now by Daniel Dennett, he says that fribblety chacocoa opoloba doesn’t exist—here’s the quote.” “But that doesn’t exist, according to the four experiments described in these three research papers, and doesn’t have to exist by this philosophical argument.”
Edit: Also, there’s no requirement that you actually solve the problem of what it is—a sufficiently specific and detailed map leading to the thing to be observed suffices.
OK, it’s my bedtime here in France. I will sleep on this and maybe I can be more positive in the morning. But the likelihood is that I will go back to the occasional lurk.
Your comment does not make a great deal of sense to me, no one appears to be interested in what I am interested in (contrary to what I thought previously), the horrid disagreement about Alicorn’s posting is disturbing, and so was the discussion of asking for a drink. I was not upset at the time by the remarks about my spelling, and I would correct them. But now I think: is there any latitude for a dyslexic? I thought the site was for discussing ideas, not everything but that.
Good night.
Good night.
I apologize for making a big deal of this, but my main point is that I want to know I’m talking about the thing you’re interested in, not about something else. I wasn’t even really trying to address what you said—just to make some suggestions to reduce the confusion floating around.
Have a good night—hope I can catch you on the flip side.
Apology accepted. You are not the problem—I would not go away because of one conversation.
I have decided that I will take a less active part in LW for a while. It is very time consuming and I have a lot of actually productive reading and blogging to do. By productive I mean things that add to my understanding. I will look to see what has been posted and will probably read the odd one. I may even write a small comment from time to time. The posting that I was preparing for LW will be abandoned. I would put in too much effort for too little serious productive useful discussion. Better to put the effort elsewhere.
I think what you’re talking about needs a different name. ‘Attention’ might be an informal one and ‘executive control’ a more formal one, or just ‘planning’, if we’re talking AI instead of psychology. ‘Reflection’, if we’re talking about metacognition.
Like RichardKennaway said, the tasks you describe sound like things that existing narrow AI robotic systems can already do, yet it sounds quite odd to describe current-gen robots as conscious. Talking about consciousness here is confusing at least to me.
Outside qualia and Chalmers’ hard problem of consciousness, is the term “consciousness” really necessary for anything that can’t be expressed in more precise terms?
Do we? That would be good news; but I doubt it’s true.
I think I answered this in another sub-thread of this discussion. But, here it is again in outline.
We only remember in episodic memory events that we had conscious awareness of. Some types of learning rely on episodic memory. The remembering and the learning are not necessarily, not even probably, part of the conscious process, but without consciousness we do not have them. The prediction is part of the monitoring and correcting of ongoing motor actions. In order to create the prediction and to use it, various parts of the cortex doing different things have to have access to the prediction. This wide-ranging access seems to be one of the hallmarks of consciousness. So does the slight forward projection of the actual conscious awareness—i.e. there is a possibility that it is the actual prediction as well as the mode of access.
I hope this answers the question of why I said what I said. I don’t wish to continue this discussion at the present time. As I told RobinZ, I currently have other things to do with my time and find LW has been going off-topic in ways that I don’t find useful. However, you have always been willing to seriously debate and stay on topic, so I have answered your comment. I will probably return to LW at some time. Until then, good luck.
Thanks. I know you don’t want to continue discussion; but I note, for others reading this, that in this explanation, you’re using the word “conscious” to mean “at the center of attention”. This is not the same question I’m asking, which is about “consciousness” as “the experience of qualia”.
I made my comment because it’s very important to know whether experiencing qualia is efficient. Is there any reason to expect that future AIs will have qualia, or can they do what they want to do just as well (maybe better) by not having that feature? If experiencing qualia does not confer an advantage to an AI, then we’re headed for a universe devoid of qualia. That’s the big loss for the universe.
Avoiding that common qualia/attention confusion is reason enough not to taboo “qualia”, which is more precise than “consciousness”.
You seem to be missing the point about what it means to taboo a word. In LessWrong speak, this means to expand out what you mean by the term rather than just use the term itself. So for example, if we tabooed “prime number” we’d need to say instead something like “an integer greater than one that has no positive, non-trivial divisors.” This sort of step is very important when discussing something like consciousness, because so many people have different ideas about what the term means.
Taboo “qualia” and “consciousness”. You are speaking with great confidence in a discussion involving philosophical terms, and this is always a mistake if you have not already unambiguously defined these terms. And unambiguous definitions of philosophical terms are always controversial, and always in my experience lead to argument. Rationalist taboo, please.
AI and rationality should then also be taboo. Unless you can unambiguously define them.
“What do we mean by rationality?” does a pretty good job of that. Though it should be noted that the notion of tabooing a term is for a particular situation where there is confusion / disagreement involving the term in question, and so “AI” at least is not worth tabooing in response to the parent comment.
With respect to this forum:
I can see a lot of possible benefits to creating a computer program capable of producing good solutions to any arbitrarily selected real-world problem, and I agree that the secondary meaning of “morally-correct” implicit in the word “good” makes this task even more difficult than it already appears to be.
It is fairly obvious from the many examples of high-g people spiraling off into ridiculous positions that it takes much more than smarts to be able to reliably and accurately figure out what is going on and make plans, and it would be useful (and entertaining, if I’m honest) to know what kind of errors I am likely to make and what methods I may be neglecting when it comes to figuring out what is going on and making plans.
That said, I should have made it clear how narrow the scope of my request was: I have no problem with colloquial use of the term “consciousness” under ordinary circumstances. I requested the restriction in this case specifically because this discussion hinges on details of the definition which are frequently perceived as obvious in contradictory ways by different participants. Tabooing the term avoids that tar pit.
Do you see the symmetry of this situation? A Dennettian sees people who (by their lights) hold position (1), arguing against (2) (which they take to be their own) by characterising it as (3).
So, is AlephNeil pegging Academician as an advocate of (2) and PhilGoetz pegging A. as an advocate of (3)? But a non-Dennettian like me can admit that Dennett is in camp (2), just not a rich enough variant of (2).
There’s an orthogonal distinction, which is whether one believes that it is possible to produce a complete materialistic account of behavior that does not explain consciousness. (IIRC EY has said “no” to this question in the past.) If the answer truly is “no”, then (2) and (3) above would collapse into the same position, given enough knowledge.
I think I’m getting sidetracked… The problem with (3) is that it doesn’t allow you to /try/ to explain consciousness, and criticizes anyone in camp (2) who tries to explain consciousness as being in camp (1). Camp (3) are people, like Dennett, who think there’s no use trying to explain how qualia arise from material causes; we should just ignore them. As long as we can compute the output behavior from the input (they would presumably say), we understand everything material there is to understand; therefore, trying to understand anything else is non-materialism.
Help me here. What is it about qualia that has to be explained before there can be at least an outline theory of what consciousness is? Is it what they are? Is it where they are stored? Is it how they are selected? Is it how they get bound to an object? Is it how real they seem? Is it how they are sometimes inappropriate?
So we can’t answer those questions today. But we probably can in the next decade. And it would be a lot easier to find answers if we had an idea of how consciousness worked and more exactly what it does and why. We are closer to answering those questions.
Taboo consciousness before you file Dennett, please.
Is (3) the only one that is compatible with a computational theory of mind?
(2) is too, if consciousness is defined such that it is either an epiphenomenon of other mental processes or a specific, well-defined feature that is necessary to certain things human minds do. (I take the latter position: consciousness does something (a mind without it wouldn’t act the same, without intentionally imitating it) and there is no reason to expect it will not be compatible with materialism.)
So, I just had a strange sort of akrasia problem.
I was doing my evening routine, getting washed up and stuff in preparation for going to bed. Earlier in the evening, I had read P.J. Eby’s The Hidden Meaning of “Just Do It”, and so I decided I would “just do” this routine, i.e. simply avoid doing anything else, and watch the actions of the routine unfold in front of me. So, I used the toilet, and began washing my hands, when it occurred to me that if I do not interfere, I will never stop rinsing my hands. I did not interfere, however, and sure enough, I ended up just standing there, with my hands resting limply under the running water, doing nothing. My mind went over what I needed to do next, in various levels of detail; after a minute or two of this, I realized that I was leaning on my elbows, forming a triangle shape which prevented me from moving my hands out of the flow of the water. Once I realized this, I was able to stand up straight, freeing my hands to go on to the next task.
(Instead of doing that, however, I came downstairs to write about it on Less Wrong. But that’s another story.)
Why did it take me so long to figure out what I needed to do next in order to continue the routine non-forcefully?
Had you recently eaten any brownies of unknown origin?
Or gone 24 hours without sleeping?
Possibly because you’d partially knocked out your ability to make choices.
Mercifully, you didn’t have the ability to make very deep changes. There are advantages to not being software.
The ability to change all aspects of oneself is not a property of software. Software can easily be made completely unable, partially able, or completely able to modify itself.
Fair enough, though evolved beings (which could include software) are probably less likely to be able to break themselves than designed beings capable of useful self-modification.
You know, you could say that software often has two parts: a crystalline part and a fluid part. Programs usually consist mostly of crystalline aspects: if I took a mathematical proof verifier and tweaked its axioms, even only a tiny bit, it would probably break completely. However, they often contain fluid aspects as well, such as the frequency at which the garbage collector should run, or eagerness to try a particular strategy over its alternative. If you change a fluid aspect of a program by a small amount, the program’s behavior might get a bit worse, but it definitely won’t end up being clobbered.
I’ve always thought that we should design Friendly AI like this. Only give it control over the fluid parts of itself, the parts of itself it can modify all it wants without damaging its (self-)honesty. Make the fluid parts powerful enough that if an insight occurs, the insight can be incorporated into the AI’s behavior somehow.
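A minimal sketch of the split I have in mind, with invented names—the point is only that the program’s self-modification surface is confined to the fluid parameters:

```python
# "Crystalline" parts are frozen; tweaking them would break correctness.
CRYSTALLINE_AXIOMS = frozenset({"A -> A", "A & B -> A"})

# "Fluid" parts degrade gracefully when perturbed, so the program may
# tune them in response to experience.
fluid = {"gc_interval": 100, "strategy_eagerness": 0.3}

def incorporate_insight(observed_pause_ms):
    # Self-modification is allowed here, within bounds...
    if observed_pause_ms > 50:
        fluid["gc_interval"] = max(10, fluid["gc_interval"] // 2)
    # ...but no code path exists that rewrites CRYSTALLINE_AXIOMS.
```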
I’m sure that an AI will have more than two levels of internal stability. Some parts will be very stable (presumably, the use of logic and (we hope) Friendliness). Some parts will be very fluid (updating the immediate environment). There would be a big intermediate range of viscous-to-firm (general principles for dealing with people, how to improve its intelligence).
“Science Saturday: The Great Singularity Debate”
Eliezer Yudkowsky and Massimo Pigliucci
http://bloggingheads.tv/diavlogs/28165
What a strange debate that was! I was very surprised to find Pigliucci arguing, inter alia, that intelligence/consciousness might have to be implemented on carbon atoms in order to work.
And then he came out with the trope whereby the spirit of the AI machine looks, from outside itself, at its goals and spontaneously decides to change them.
He is a very interesting thinker usually, but he seemed very naive in this particular area.
The case for carbon atoms is pretty weak.
However, we can imagine that some types of organic molecule have a mini giga-computer on board—their design encoded in the constants of nature—and that their dynamics can be tapped by trapping the vibrating molecule in an organic matrix.
Then carbon-based computers would have access to the giga-computer—while silicon-based ones would not—and would therefore work enormously more slowly.
This is a feeble case—but not a totally ridiculous one. Enthusiasts for non-computable physical processes play up this kind of possibility even further.
Okay, I think I get you. Maybe there could be some substrates that allow much faster processing than others (orders of magnitude); this would make the substrate an important engineering issue. Is that what you’re saying?
But we are in the lofty realm of “in principle” here. If I can just imagine a computer—as big as the universe if you like—that simulates Massimo Pigliucci plus inputs and outputs on silicon or germanium or whatever you want, then intelligence/consciousness is not substrate dependent (again in principle). I think this is the case, the alternative being that there is something especially consciousnessy about carbon chemistry, which seems awfully dubious.
Yes, kinda. There are also the possibilities of novel types of computation being involved. We know about quantum computers. They can’t do things classical computers can’t do—but they can do them faster—in some cases MUCH faster. Maybe there are other types of computation—besides classical computation and quantum computation that we have yet to discover. Quantum computation was only discovered relatively recently—so maybe the future holds other possibilities. Gateways to oracles, etc.
It doesn’t look as though the brain is anything other than a classical neural network—which could fairly-obviously be ported onto silicon—if we had fast enough silicon. However, there is at least some room for doubt on this point.
I think Pigliucci is somewhat hung up on the technicality of whether a computer system can instantiate (a) an intelligence or (b) a human intelligence. Clearly he is gravely skeptical that it could be a human intelligence. But he seems to conflate or interchange this skepticism with his skepticism about a general computer intelligence. I don’t think anybody really thinks an AI will be exactly like a human, so I’m not that impressed by these distinctions. Whereas it seems like Pigliucci thinks that’s one of the main talking points? I wish Pigliucci read these comments so we could talk to him… are you out there, Massimo?
I’ve just run into a second alumnus of my undergrad school from Less Wrong, and it has me curious, because… it’s a tiny school. So this’d be quite a coincidence, and there might be a correlation to dig up.
Present yourselves, former (or current) students of Simon’s Rock. I was there from the fall of ’04 until graduating with my BA in spring ’08 (I was abroad the spring of my junior year though).
If you lurk and don’t want to delurk, feel free to contact me privately. If you don’t have an account, e-mail me at alicorn@intelligence.org :)
I was at SR from the Fall 2005 semester until halfway through the Fall 2006 semester.
The goat’s on a pole. Amen.
I was there in the ’03-’04 year.
http://www.tnr.com/article/books-and-arts/true-lies
I recommend the linked article—it’s a review of a book about the details and effects of social pressure to not express one’s actual beliefs, including stability of generally unwanted social systems, and bloodless revolution when the beliefs change faster than the institutions.
See also Racial Paranoia, which describes the unintended consequence of the high cost of being overtly racist in the US—it’s impossible to know how racist any individual is, so people go nuts looking for clues about the level of racism. However, people aren’t crazy—they’re showing a rational response to a crazy-making situation.
Ryk Spoor’s latest universe, of which the only published book so far is Grand Central Arena, has as major characters people who were raised in simulated worlds, and later covers their escape therefrom. Just occurred to me that some LWers might be interested.
Eliezer seems to have gone dark lately. Anybody know what he’s up to?
Apparently working full-time on his rationality book, while occasionally fighting writer’s block by producing chapters of Harry Potter and the Methods of Rationality.
21 chapters later...
Thanks!
The Association for Advancement of Artificial Intelligence (AAAI) convened a “Presidential Panel on Long-Term AI Futures”. Read their August 2009 Interim Report from the Panel Chairs:
Don’t know how we missed this when it happened; I learned about this from the Hacker News thread. I’ve yet to find anything they’ve put out to justify this position.
Reference
Would anyone be interested if we were to have more regular LW meetups around the East Bay or San Francisco areas? We probably wouldn’t have the benefit of the SIAI folks’ company in that case, but having the meetups at a location easily accessible by BART may help increase the number of people from the surrounding area who can attend. (Also, I hear that preparing for and hosting meetups at Benton can be somewhat taxing on the people who work there, so having them at restaurants will allow us to do it more frequently, if there is demand for such.)
I would happily participate in a San Francisco meetup. As an alternative to a restaurant setting, it would be possible to meet in the Noisebridge hackerspace.
I have a request for those bayesianly inclined among the LW crowd.
I had mentioned in an article that I had become addicted to watching theist/atheist debates. Unfortunately I have not weaned myself off this addiction as of yet. In one I watched recently, it is William Lane Craig (the theist that Eliezer wanted to debate) arguing for the provability of the resurrection of Jesus, and New Testament scholar Dr. Bart Ehrman arguing for its historical unprovability.
At some point in this debate, Dr. Ehrman argues that miracles are fundamentally unprovable by historical analysis, as they are by definition ‘the least probable event’. So history cannot find them as ‘the most probable event’. Craig then responds by bringing up Bayes’ theorem to show that (if I understand correctly) Dr. Ehrman is ignoring the evidence.
To hear the arguments for yourselves, the video of the debate is here, Dr. Ehrman’s argument is stated from 34:59 to 36:02, and Dr. Craig’s counter is from 42:03 to 46:24. (do not go past 46:24, as Dr. Craig derails and starts talking supernatural again)
Against my prior bias, I strongly suspect that Dr. Craig is in fact correct. The irony is that while Dr. Craig usually wins his debates by sophistry and pure rhetorical ability, this time he gets bogged down by the details of actually trying to debunk a bad argument. In a turn of poetic justice, the argument goes over the head of the audience, and this debate is one of the few that Dr. Craig can be said to have lost.
The problem I see in this argument is that if you define a miracle as the least expected event (low prior), you can still prove a miracle if it leaves behind strong enough evidence (multiple, unbiased, independent, consistent sources), which would give you a high posterior.
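A toy version of the arithmetic, with numbers invented purely for illustration—the point is just that a strong enough likelihood ratio can overwhelm a tiny prior:

```python
prior = 1e-9             # a miracle as "the least probable event"
p_e_given_m = 0.5        # probability the miracle would leave this evidence
p_e_given_not_m = 1e-13  # probability the same evidence arises without it

posterior = (p_e_given_m * prior) / (
    p_e_given_m * prior + p_e_given_not_m * (1 - prior))
print(posterior)         # ~0.9998: the evidence overwhelms the tiny prior
```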
So, to those of you who care to look into this, do I have it right? Have I lost the plot somewhere?
I didn’t bother listening to Craig’s rebuttal, because I agree with you that what Ehrman’s saying from 34:58 to 36:02 is poorly argued, and I don’t even need Bayes’ theorem to see it. My transcription of Ehrman:
But this is silly. If a historian, or anyone, can establish that X probably happened, they can establish that X’s complement probably didn’t happen (because P(X) + P(¬X) = 1). So how can Ehrman argue that history can establish what probably happened but not what probably didn’t? I suspect there are other issues (like playing definitional games with the word ‘miracle’ and suggesting an event ‘defies probability’—what would that even mean?), but his claims about what historians can and can’t do are the most obvious issue to me.
I think we have a problem. While the default at LW is to not want to believe in possible miracles done by God, there’s considerable interest in knowing whether we live in a simulation.
Aside from logic or from careful examination of physics that finds indicators of another level, the other category of evidence for this world being a simulation is transient anomalies. How do you evaluate reports of anomalies?
I think my main rule of thumb is to think about how anomalous the anomaly is, and the strength of the evidence for it. More anomalous and less well substantiated anomalies get taken less seriously.
Martin Gardner died today.
So long, and thanks for all the ahas.
Wired has an article ‘Accept Defeat: The Neuroscience of Screwing Up,’ about how scientists and the brain handle unexpected data and anomalies, and our preference to ignore them or explain them away.
I sometimes get various ideas for inventions, but I’m not sure what to do with them, as they are often unrelated to my work, and I don’t really possess the craftsmanship capabilities to make prototypes and market them or investigate them on my own. Does anyone have experience and/or recommendations for going about selling or profiting from these ideas?
Sell patents.
(or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)
Sell patents is right, but only if your invention is something that sells and markets itself because it is so obviously awesome and not just an incremental improvement on an existing invention. Even if it’s an incredibly awesome invention, you may be better off raising money and doing it all yourself.
I’m generally good at telling people whether or not their ideas are any good—if you want to talk privately sometime, let me know.
I vaguely recall a thread where folks discussed what makes jokes funny, and advanced some theories. This may well have been in an Open Thread or buried deep within the comments of an unrelated post—at any rate I can’t find it.
Anyone who remembers seeing it or participating, I’d appreciate help locating it...
http://lesswrong.com/lw/1s4/open_thread_february_2010_part_2/1mps
I was able to track it down because I remembered enough about a comment I’d made.
Do we have advanced search?
How about adding (open?) tagging for comments?
Jewish saying: Who is strong? Whoever can resist telling a joke.
Because I am not strong: A cowboy was wearing a paper hat, paper chaps, paper boots. He was arrested. For rustling.
Thanks! What I had in mind exactly.
Searching for “humor” turned up too many results, not sorted in any helpful way. The search results are provided by Google, I don’t know how customizable that is but I’d assume not much. What would have helped in this instance is a way to sort by date.
Egan’s Law is “It all adds up to normality.” What adds up to what, exactly?
We have always lived in the universe of quantum mechanics, or the Tegmark Level IV Multiverse, but I don’t understand why it is supposed to add up to normality. I understand that this word “normality” is supposed to help me dissolve some of the weirder aspects of this universe, but it doesn’t seem to work as I am not at all convinced that the universe actually does add up to normality.
Is it really proper to assume from the start that the universe (multiverse) adds up to normality? Is it useful?
Will this all make sense to me if I read Quarantine?
I believe “normality” in this case refers to the Middle World of day-to-day experience. Whatever your theory predicts with respect to small particles colliding at near-light speeds (for example), it ought also to predict that you can turn on your stove, boil water, and steep a cup of tea.
Continuing my thinking about Pascal’s mugging, I think I’ve an argument for why one specifically wants the prior probability of a reward to be proportional/linear to the reward and not one of the other possible relationships. A longish excerpt:
One way to try to escape a mugging is to unilaterally declare that all probabilities below a certain small probability will be treated as zero. With the right pair of lower limit and mugger’s credibility, the mugging will not take place.
But such an ad hoc method violates common axioms of probability theory, and thus we can expect there to be repercussions. It turns out to be easy to turn such a person into a money pump, if not by the mugging itself.
Suppose your friend adopts this position, and he says specifically that any probabilities less than or equal to 1⁄20 are 0. You then suggest a game; the two of you will roll a d20 die, and if the die turns up 1-19, you will pay him one penny and if the die turns up 20, he pays you one dollar—no, one bazillion dollars.
Your friend then calculates: there is a 19⁄20 chance that he will win a little money, and there is a 1⁄20 chance he will lose a lot of money—but wait, 1⁄20 is too small to matter! It is zero chance, by his rule. So, there is no way he can lose money on this game and he can only make money.
He is of course wrong, and on the 5th roll you walk away with everything he owns. (And you can do this as many times as you like.)
Nor can it be rescued simply by shrinking the probability; the same game that defeated a limit of 1⁄20 can easily be generalized to 1/n, and in fact, we can make the game even worse: instead of 1 to n-1 winning, we define 0 and 1 as winning, and each increment as losing. As n increases, the percentage of instant losses increases.
So clearly this modification doesn’t work.
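To make the pump explicit, here is the arithmetic as a sketch (the stakes are placeholders; any sufficiently lopsided pair works):

```python
from fractions import Fraction

PENNY, BAZILLION = Fraction(1, 100), Fraction(10**12)
P_WIN, P_LOSE = Fraction(19, 20), Fraction(1, 20)

def expected_value(zero_out_small_probs=False):
    # The friend's rule: any probability <= 1/20 is treated as exactly 0.
    p_lose = Fraction(0) if zero_out_small_probs else P_LOSE
    return P_WIN * PENNY - p_lose * BAZILLION

print(expected_value())      # ruinously negative: a sane agent refuses
print(expected_value(True))  # 19/2000 of a dollar: "free money", hence the pump
```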
If we think about the scaling of the prior probability of a reward, there’s clearly 3 general sorts of behavior:
1) a sublinear relationship, where the probability shrinks slower than the reward increases;
2) a linear relationship, where the probability shrinks as fast as the reward increases;
3) a superlinear relationship, where the probability shrinks faster than the reward increases.
The first doesn’t resolve the mugging. If the mugging is ever rejected, then the mugger merely increases the reward and can always devise an accepted mugging.
The second and third both resolve the mugging—the agent will reject a large mugging if it rejected a smaller one. But is there any reason to prefer the linear relationship to the superlinear?
Yes, by the same reasoning as leads us to reject the lower limit solution. We can come up with a game or a set of deals which presents the same payoff but which an agent will accept when they should reject and vice versa.
Let’s suppose a mugger tries a different mugging. Instead of asking for $20 he asks for a penny, and he mentions that he will ask this another 1999 times; because the reward is so small, the probability is much higher than if he had asked for a flat $20 and offered a consequently larger reward.
If the agent’s prior was not so low that they always rejected the mugger, then we would see an odd flip—for small sums, the agent would accept the mugging and then as the sums increased, it would suddenly start rejecting muggings, even when the same total sums would have been transacted. That is, just by changing some labels while leaving the aggregate utility alone, we change the agent’s decisions.
If the agent had been using a linear relationship, there would be no such flip. If the agent rejected the penny offer it would reject the $20 offer, and vice versa. It would be consistently right (or wrong). The sublinear and superlinear relationships are inconsistently wrong.
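A toy comparison of the three scalings, with constants chosen only so the flip is visible; p(R) = k / R^e is the prior that a promised reward R is actually honored:

```python
k, cost = 1.0, 0.01

def expected_gain(R, e):
    p = min(1.0, k / R**e)  # prior that the promise of reward R is honored
    return p * R - cost     # expected gain from paying `cost` for the promise

for R in (10, 1000, 10**6):
    print(R,
          round(expected_gain(R, 0.5), 4),  # sublinear: a big enough offer is always accepted
          round(expected_gain(R, 1.0), 4),  # linear: the same verdict at every scale
          round(expected_gain(R, 2.0), 4))  # superlinear: accepts at R=10, rejects above
```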
Has anybody else thought that the Inverse Ninja Law is just the Bystander Effect in disguise?
(Yes, I’ve been reading this.)
Buddha: The quintessential rational mind
http://www.hindu.com/mag/2010/05/23/stories/2010052350210600.htm
Let’s suppose Church-Turing thesis is true.
Are all mathematical problems solvable?
Are they all solvable to humans?
If there is a proof* for every true theorem, then we need only to enumerate all possible texts and look for one that proves—or disproves—say, Goldbach’s conjecture. The procedure will stop every time.
(* Proof not in the sense of “formal proof in a specific system”, but “a text understandable by a human as a proof”.)
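For concreteness, here is the procedure as a sketch, assuming a hypothetical recognizer `is_proof(text, statement)` that can tell a valid (dis)proof when it sees one—which, per the footnote, is exactly the informal “human-understandable” part:

```python
from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz ,.()=+<>0123456789"

def decide(statement, is_proof):
    # Enumerate all texts, shortest first, until one proves or
    # disproves the statement.
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            text = "".join(chars)
            if is_proof(text, statement):
                return True
            if is_proof(text, "not (" + statement + ")"):
                return False
    # If every true statement had a proof, this would always return --
    # which is the step the halting problem forbids, as argued below.
```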
But this can’t possibly be right—if the human mind that looks at the proofs is Turing-computable, then we’ve just solved the Halting Problem—after all, we can pose the halting of any Turing machine as a mathematical problem.
So what does that mean?
Not all true theorems have a proof? (what does that even mean)
Not all proofs are possible to follow by a human? (very pessimistic, in my opinion)
Some other answer I’m missing?
You can also extend the question to any human-made AIs/posthuman minds, but this doesn’t help much—if the one looking at proofs can reliably self-improve, then the Halting Problem would still be solved.
EDIT: A longer explanation of the problem, by a friend.
Picture an enormous polynomial f(x, y, …) with integer coefficients: something like 3x^2 − 6y + 5 but bigger. Now, if the Diophantine equation f(x, y, …) = 0 has a solution then this can easily be proved—you just have to plug in the numbers and calculate the result. (Even if you’re not told the numbers in advance, you can iterate over all possible arguments and still prove the result in a finite time.)
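A sketch of that search, just to make the asymmetry vivid—the loop halts if and only if a solution exists, so solvability is semi-decidable:

```python
from itertools import count, product

def f(x, y):                 # the small example polynomial from above
    return 3 * x**2 - 6 * y + 5

def find_solution():
    for bound in count(0):   # search integer pairs in growing boxes
        for x, y in product(range(-bound, bound + 1), repeat=2):
            if f(x, y) == 0:
                return x, y  # halting is itself the proof of solvability
    # never reached if no solution exists: the search just runs forever
```

(As it happens, this small example has no solutions—3x² − 6y + 5 = 0 would make 5 divisible by 3—so on it the search never halts, which is the second case in miniature.)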
But now suppose that this particular f doesn’t have any solutions. (Think about whether you want to deny that the previous sentence is meaningful—personally I think it is).
Can we necessarily prove it doesn’t have any solutions? Well, there’s no algorithm that can correctly decide whether f has a solution for all Diophantine equations f. (See “Hilbert’s Tenth Problem”.) So certainly there exists an f, without any solutions, such that “f has no solutions” is not a theorem of (say) ZFC set theory. (Because for any formal axiomatic system, one can write down an algorithm that will enumerate all of its theorems.)
Perhaps, like Roger Penrose, you think that human mathematicians have some magical non-algorithmic ‘truth-seeing’ capability. Unfortunately, human thought being non-algorithmic would require that physics itself be uncomputable i.e. an accurate computer simulation of a brain solving a mathematical problem would be impossible even in principle. Otherwise, you must conclude that some theorems of the form “this Diophantine equation has no solutions” are not humanly provable.
I think that Eliezer’s post, Complexity and Intelligence, is really germane to your query.
Here’s a thought experiment, just for fun:
Let’s say, for simplicity’s sake, that your mind (and environment) is currently being run on some Turing machine T, which had initial state S. What if you considered the sentence G, which is a Gödel-encoded statement that “if you run T on S, it will never contain an instance of humpolec rationally concluding that G is a theorem”? (Of course, specifying that predicate would be a beastly problem, but in theory it’s a finite mathematical specification.)
You would therefore be actually unable to rationally conclude that G is a theorem, and of course it would thereby be a true, finitely specifiable mathematical statement.
It’s up to you, of course, which bullets you choose to bite in response to this.
You seem to be somewhat confused about the basic notions of computability and Gödel’s incompleteness results and their mutual connection. Besides the replies you’ve received in this thread, I’d recommend that you read through this lecture by Scott Aaronson, which is, out of anything I’ve seen so far, the clearest and most accessible brief exposition of these issues that is still fully accurate and free of nonsense:
http://www.scottaaronson.com/democritus/lec3.html
Nope. Not if physics is computable.
Nope. Not if human minds are computable.
It means exactly that your Turing machine enumerating all possible texts may never halt. What does it mean in terms of the validity of the theorem? Nothing. The truth value of that theorem may be forever inaccessible to us without appeal to a more powerful axiomatic system or without access to a hypercomputer.
Alas, both of those are correct.
Read about Gödel’s Incompleteness Theorem, preferably from Gödel, Escher, Bach by Douglas Hofstadter. As for the specific example of Goldbach’s conjecture, I’d bet on it being provable (or if it is false, the procedure would prove that by finding a counterexample), but yes, there are true facts of number theory that cannot be proven.
Next, if I remember correctly, theorem-proving programs have already produced correct proofs that are easily machine-verifiable but intractably long and complicated and apparently meaningless to humans.
I read GEB. Doesn’t Gödel’s theorem talk about proofs in specific formal systems?
I consider this a question of scale. Besides, the theorem-proving program is written by humans and humans understand (and agree with) its correctness, so in some sense humans understand the correct proofs.
It applies to any formal system capable of proving theorems of number theory.
But then what do you mean by “possible to follow by a human”?
Right. So if human reasoning follows some specified formal system, there are truths it can’t prove. But does it really follow one?
We can’t, for example, point to some Turing machine and say “It halts because of (...), but I can’t prove it”—because in doing so we’re already providing some sort of reasoning.
Maybe “it’s possible for a human, given enough time and resources, to verify the validity of such a proof”.
Yes and no. It is likely that the brain, as a physical system, can be modeled by a formal system, but “the human brain is isomorphic to a formal system” does not imply “a human’s knowledge of some fact is isomorphic to a formal proof”. What human brains do (and, most likely, what an advanced AI would do) is approximate empirical reasoning, i.e. Bayesian reasoning, even in its acquisition of knowledge about mathematical truths. If you have P(X) = 1 then you have X = true, but you can’t get to P(X) = 1 through empirical reasoning, including by looking at a proof on a sheet of paper and thinking that it looks right. Even if you check it really really carefully. (All reasoning must have some empirical component.) Most likely, there is no structure in your brain that is isomorphic to a proof that 1 + 1 = 2, but you still know and use that fact.
So we (and AIs) can use intelligent reasoning about formal systems (not reasoning that looks like formal deduction from the inside) to come to very high or very low probability estimates for certain formally undecidable statements, as this does not need to be isomorphic to any impossible proofs in any actual formal system. This just doesn’t count as “solving the halting problem” (any more than Gödel’s ability to identify certain unprovable statements as true in the first place refutes his own theorem), because a solution to the halting problem must be at the level of formal proof, not of empirical reasoning; the latter is necessarily imprecise and probabilistic. Unless you think that a human “given enough time and resources” could literally always get an answer and always be right, a human cannot be a true halting oracle, even if they can correctly assign a very high or very low probability to some formally undecidable statements.
Well written—maybe this deserves a full post, even granted that the posts you linked are very near in concept-space.
Perhaps. But would it be controversial or novel enough to warrant one? I’d think that most people here 1) already don’t believe that the human mind is more powerful than a universal Turing machine or a formal system, and 2) could correctly refute this type of argument, if they thought about it. Am I wrong about either of those (probably #2 if anything)? Or, perhaps, have sufficiently few people thought about it that bringing it up as a thought exercise (presenting the argument and encouraging people to evaluate it for themselves before looking at anyone else’s take) would be worthwhile, even if it doesn’t generally result in people changing their minds about anything?
It would be to some extent redundant with the posts you linked, but the specific point about the difference between human reasoning and formal reasoning is a new one to this blog. I, too, would be interested in reading it.
You’re probably right about both, but I would still enjoy reading such a post.
I think it could turn out really well if written with the relatively new lurkers in mind, and it does include a new idea that takes a few paragraphs to spell out well. That says “top-level” to me.
Comments:
(1) Empirical vs Non-empirical is, I think, a bit of a red herring because insofar as empirical data (e.g. the output of a computer program) bears on mathematical questions, what we glean from it could all, in principle, have been deduced ‘a priori’ (i.e. entirely in the thinker’s mind, without any sensory engagement with the world.)
(2) You ought to read about Chaitin’s constant ‘Omega’, the ‘halting probability’, which is a number between 0 and 1.
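For readers who haven’t met it, the standard definition: for a prefix-free universal machine U,

$$\Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},$$

the sum over all halting programs p of 2 to the minus their length. Knowing the first n bits of Omega would let you decide the halting problem for every program of length up to n, which is why its digits resist any kind of compression or prediction.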
I think we should be able to prove something along these lines: Assume that there is a constant K such that your “mental state” does not contain more than K bits of information (this seems horribly vague, but if we assume that the mind’s information is contained in the body’s information then we just need to assume that your body never requires more than K bits to ‘write down’).
Then it is impossible for you to ‘compress’ the binary expansion of Omega by more than K + L bits, for some constant L (the same L for all possible intelligent beings.)
This puts some very severe limits on how closely your ‘subjective probabilities’ for the bits of Omega can approach the real thing. For instance, either there must be only finitely many bits b where your subjective probability that b = 0 differs from 1⁄2, or else, if you guess something other than 1⁄2 infinitely many times, you must ‘guess wrongly’ exactly 1⁄2 of the time (with the pattern of correct and incorrect guesses being itself totally random).
Basically, it sounds like you’re saying: “If we’re prepared to let go of the demand to have strict, formal proofs, we can still acquire empirical evidence, even very convincing evidence, about the truth or falsity of mathematical statements.” This may be true in some cases, but there are others (like the bits of Omega) where we find mathematical facts (expressible as propositions of number theory) that are completely inaccessible by any means. (And in some way that I’m not quite sure yet how to express, I suspect that the ‘gap’ between the limits of ‘formal proof’ and ‘empirical reasoning’ is insignificant compared to the vast ‘terra incognita’ that lies beyond both.)
I’ll have to think some more about it, but this looks like a correct answer. Thank you.
I myself will have to recheck this in the morning, as it’s 4:30 AM here and I am suspicious of philosophical reasoning I do while tired, but I’ll probably still agree with it tomorrow since I mostly copied that (with a bit of elaboration) from something I had already written elsewhere. :)
I also believe there are true things about the material universe which people are intrinsically unable to comprehend—aspects so complex that they can’t be broken down into few or small enough chunks for people to fit it into their minds.
This isn’t the same thing as chaos theory—I’m suggesting that there are aspects of the universe which are as explicable as Newtonian mechanics—except that we, even with our best tools and with improved brains, won’t be able to understand them.
This is obviously unprovable (and I don’t think it can be proved that any particular thing is unmanageably complex*), but considering how much bigger the universe is than human brains, I think it’s the way to bet.
*Ever since it was proven that arbitrary digits of pi can be computed (afaik, only in binary) without computing the preceding digits, I don’t think I can trust my intuition about what tasks are possible.
Not just in binary.
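For reference, the result alluded to is the Bailey–Borwein–Plouffe formula,

$$\pi = \sum_{k=0}^{\infty} \frac{1}{16^k}\left(\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6}\right),$$

whose 16^{-k} factor is what lets hexadecimal (hence binary) digits be extracted directly; my understanding is that extraction methods for some other bases were found later, which is presumably what the correction above refers to.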
Is that really a ‘physical’ aspect, or a mathematical one? Newtonian mechanics can be (I think) derived from lower level principles.
So do you mean something that is a consequence of possible ‘theory of everything’, or a part of it?
I’m not dead certain whether “physical” and “mathematical” can be completely disentangled. I’m assuming that gravity following an inverse square law is just a fact which couldn’t be deduced from first principles.
I’m not sure what “theory of everything” covers. I thought it represented the hope that a fundamental general theory would be simple enough that at least a few people could understand it.
It may actually be derivable anthropically: exponents other than 2 or 1 prohibit stable orbits, and an exponent of 1, as Zack says, implies 2-dimensional space, which might be too simple for observers.
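For reference, the standard effective-potential argument behind the stability claim: for an attractive central force F(r) = -k/r^n, the effective potential is (in LaTeX notation)

    V_{\mathrm{eff}}(r) = -\frac{k}{(n-1)\,r^{n-1}} + \frac{L^2}{2 m r^2} \qquad (n \neq 1)

and circular orbits are stable only for n < 3; Bertrand’s theorem narrows closed bounded orbits further, to the inverse-square and linear (harmonic) forces. The dimensionality link is a Gauss’s-law picture: flux conservation in d spatial dimensions gives a 1/r^(d-1) force, so an exponent of 1 corresponds to d = 2.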
Though it should be noted that even if we allow for anthropic arguments, it is impossible to ascertain whether the inverse-square law is fundamentally true, or just a very good approximation of some far more complex actual law. Therefore, the truly fundamental laws are impenetrable to such reasoning: at maximum, we can ascertain that the fundamental laws, whatever they are, must have approximations with anthropically relevant properties to the extent that we are influenced by them. And indeed, when it comes to gravity, the inverse-square law is highly accurate for our practical purposes, but it’s only a good approximation of the predictions of the more complicated general relativity—itself likely just an approximation of the more accurate and complicated quantum gravity—that happens to hold in the conditions that prevail in the part of spacetime we inhabit.
I suppose the only way out of this would be to devise an anthropic argument where our existence hinges on the lack of arbitrarily small deviations from the law we wish to derive anthropically. I don’t know if perhaps some sound arguments along those lines could be derived from reasoning about the very early universe.
You can deduce it from the fact that space is three-dimensional (consider an illustrative diagram), but why space should be three-dimensional, I can’t say.
That’s a plausible argument. A priori, one could have a three-dimensional world with some other inverse law, and it would be mathematically consistent. It would just be weird (and would rule out a lot of simple causal mechanisms for the force.)
Well, we do inhabit a three-dimensional world in which the inverse-square law holds only approximately, and when a more accurate theory was arrived at, it turned out to be weird and anything but simple.
Interestingly, when the perihelion precession of Mercury turned out to be an unsolvable problem for Newton’s theory, there were serious proposals to reconsider whether the exponent in Newton’s law might perhaps be not exactly two, but some other close number:
Of course, in the sort of space that general relativity deals with, our Euclidean intuitive concept of “distance” completely breaks down, and r itself is no longer an automatically clear concept. There are actually several different general-relativistic definitions of “spatial distance” that all make some practical sense and correspond to our intuitive concept in the classical limit, but yield completely different numbers in situations where Euclidean/Newtonian approximations no longer hold.
Also, I don’t know if there’s any a priori reason for gravity.
Theory of everything as I see it (and apparently Wikipedia agrees) would allow us (in principle—given full information and enough resources) to predict every outcome. So every other aspect of the physical universe would be (again, in principle) derivable from it.
I think I’m saying that there will be parts of a theory of everything which just won’t compress small enough to fit into human minds, not just that the consequences of a TOE will be too hard to compute.
Do you think a theory of everything is possible?
Parts that won’t compress? Almost certainly, the expansions of small parts of a system can have much higher Kolmogorov complexity than the entire theory of everything.
The Tegmark IV multiverse is so big that a human brain can’t comprehend nearly any of it, but the theory as a whole can be written with four words: “All mathematical structures exist”. In terms of Kolmogorov complexity, it doesn’t get much simpler than those four words.
Anyone reading this who hasn’t read any of Tegmark’s writing should. http://space.mit.edu/home/tegmark/crazy.html Tegmark is one of the best popular science writers out there, so the popular versions he has posted aren’t dumbed down, they are just missing most of the math.
Tegmark predicts that in 50 years you will be able to buy a t-shirt with the theory of everything printed on it.
To be fair, every one of those words is hiding a substantial amount of complexity. Not as much hidden complexity as “A wizard did it” (even shorter!), but still.
(I do still find the Level IV Multiverse plausible, and it is probably the most parsimonious explanation of why the universe happens to exist; I only mean to say that to convey a real understanding of it still takes a bit more than four words.)
Actually, I’m quite unclear about what the statement “All mathematical structures exist” could mean, so I have a hard time evaluating its Kolmogorov complexity. I mean, what does it mean to say that a mathematical structure exists, over and above the assertion that the mathematical structure was, in some sense, available for its existence to be considered in the first place?
ETA: When I try to think about how I would fully flesh out the hypothesis that “All mathematical structures exist”, all I can imagine is that you would have the source code for a program that recursively generates all mathematical structures, together with the source code of a second program that applies the tag “exists” to all the outputs of the first program.
Two immediate problems:
(1) To say that we can recursively generate all mathematical structures is to say that the collection of all mathematical structures is denumerable. Maintaining this position runs into complications, to say the least.
(2) More to the point that I was making above, nothing significant really follows from applying the tag “exists” to things. You would have functionally the same overall program if you applied the tag “is blue” to all the outputs of the first program instead. You aren’t really saying anything just by applying arbitrary tags to things. But what else are you going to do?
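To make the worry in (2) concrete, here is a toy Python sketch of the two-program picture, with finite binary strings standing in (very loosely) for “descriptions of structures”; notice that swapping the tag changes nothing:

    from itertools import count, islice, product

    def descriptions():
        # Program 1: enumerate every finite binary string, a crude
        # stand-in for "recursively generate all mathematical structures".
        for n in count(1):
            for bits in product('01', repeat=n):
                yield ''.join(bits)

    def tagged(tag):
        # Program 2: stamp an arbitrary tag on each output.
        return ((d, tag) for d in descriptions())

    print(list(islice(tagged('exists'), 4)))
    print(list(islice(tagged('is blue'), 4)))  # functionally identical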
What are the Tegmark multiverses relevant to? Why should I try to understand them?
Really? In which parallel universe? Every one? This one?
This one.
Don’t we live in a multiverse? Doesn’t our Universe split in two after every quantum event?
How then can Tegmark & Co. predict something for the next 50 years? Almost anything will certainly happen—somewhere in the Multiverse. Just as its opposite almost certainly will, only on the other side of the Multiverse.
According to Tegmark, at least.
Now he predicts a T-shirt in 50 years’ time! Isn’t that a little weird?
All predictions in a splitting multiverse setting have to be understood as saying something like “in the majority of resulting branches, the following will be true.” Otherwise predictions become meaningless. This fits in nicely with a probabilistic understanding. The correct probability of the event occurring is the fraction of universes descended from this current one that satisfy the condition.
Edit: This isn’t quite true. If I flip a coin, the probability of it coming up heads is in some sense 1⁄2 even though, if I flip it right now, any quantum effects might be too small to have any effect on the flip. There’s a distinction between probability due to fundamentally probabilistic aspects of the universe and probability due to ignorance.
Let’s remember that if we’re talking about a multiverse in the MWI sense, then universes have to be weighted by the squared norm of their amplitude. Otherwise you get, well, the ridiculous consequences being talked about here… (as well as being able to solve problems in PP in polynomial time on a quantum computer).
Right ok. So in that case, even if we have more new universes being created by a given specific descendant universe, the total measure of that set of universes won’t be any higher than that of the original descendant universe, yes? So that makes this problem go away.
Any credible reference to that?
Not off the top of my head. It follows from having the squared norm and from the transformations being unitary. Sniffnoy may have a direct source for the point.
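A quick numerical sketch of that point (the amplitudes and the 4×4 unitary here are arbitrary toys, not anything physical):

    import numpy as np

    psi = np.array([0.5, 0.5j, -0.5, 0.5])   # amplitudes over four 'branches'
    print(np.sum(np.abs(psi) ** 2))          # total measure: 1.0

    rng = np.random.default_rng(0)
    m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    u, _ = np.linalg.qr(m)                   # QR of a random matrix yields a unitary
    print(np.sum(np.abs(u @ psi) ** 2))      # still 1.0, however the weight moves

However many “branches” you care to slice the state into, a unitary can redistribute the squared-norm weight among them but never increase the total.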
How do you know that something will be included in the majority of branches? Suppose that a nuclear war starts in a branch. A lot of radioactivity will be around, a lot of quantum events, a lot of splittings and a lot of “postnuclear” parallel worlds. The majority? Maybe, I don’t know. Does Tegmark know? I don’t think so.
The small amount of additional radioactivity shouldn’t substantially alter how many branches there are. Keep in mind that in the standard multiverse model for quantum mechanics, a split occurs for a lot of events that have nothing to do with radioactivity. For example, a lot of behavior with electrons will also cause splitting. The additional radioactivity from a nuclear exchange simply won’t matter much.
ANY increase, for whatever reason, in the number of splittings would trigger an exponential surge of that particular branch.
The number of splittings is the dominant fitness factor. Those universes which split the most inherit the Multiverse.
If you buy this Multiverse theory, of course; I don’t.
Hmm, that’s a valid point. It doesn’t increase linearly with the number of splittings. I still don’t think it should matter. Every atom that isn’t a simple hydrogen atom is radioactive to some extent (the probability of decay is just really, really tiny). I’m not at all sure that a radioactive planet (in the sense of having a lot of atoms with a non-negligible chance of decay) will actually produce more branches than one which does not. Can someone who knows more about the relevant physics comment? I’m not sure I know enough to make a confident statement about this.
MWI is almost the default religion of this list’s members. And as in every religion, awkward questions are ignored. Downvoted, maybe.
It might help if you read the relevant sections of the conversation before you make accusations about something being a “religion.” Note that Sniffnoy’s remark above already resolved this.
Which remark of Sniffnoy’s resolves this?
Everything is weighted by squared-norm of the amplitude. And, y’know, quantum mechanics is unitary. What needs to be preserved, is preserved.
More generally, we might imagine that we lived in a world where physics was just probabilistic in the ordinary way, rather than quantum (in the sense of based on amplitudes); MWI might also be a natural way to think if we lived in that world (though not as natural as it is in the world of actual QM, as in that world we wouldn’t have any real need for MWI); then, well, everything would be weighted by probability, and everything would be stochastic rather than unitary. Of course if you don’t require preservation of whatever the appropriate weighting is, you’ll get an absurd result.
You do seem to be pretty confused about what MWI says; it does not, as you seem to suggest, posit a finite number of universes, which split at discrete points, and where the probability of an event is the proportion of universes it occurs in. “Universes” here are just identified with the states that we’re looking at a wave function over, or perhaps trajectories through such, so there are infinitely many. And having the universes split and not interfere with each other, would work with ordinary probability, but it won’t work with quantum amplitudes—if that were the case we’d just see probabilistic effects, not quantum effects. The many worlds of MWI do interfere with each other. When decoherence occurs the result is to effectively split collections of universes off from each other so they don’t interfere anymore, but in a coherent quantum system the notion of splitting doesn’t make much sense.
Remember, the key suppositions of MWI are just that A. the equations of quantum mechanics are literally true all the time—there is no magical waveform collapse; and B. the wavefunction is a complete description of reality; it’s not guiding any hidden variables. (And I suppose, C., decoherence is responsible for the appearance of collapse, etc., but that’s more of a conclusion than a supposition.) Hence why it’s claimed here that MWI wins by Occam’s Razor. It really is the minimal interpretation of QM!
If there is an actual problem with MWI, I’d say it’s the one Scott Aaronson points out here (I doubt this observation is original to him, but not being too familiar with the history of this, it’s the first place I’d seen it; does anyone know the history of this?); the virtue of MWI is its minimality, but unfortunately it seems to be too minimal to answer this question! Assuming the question is meaningful, anyway. But the alternatives still seem distinctly unsatisfactory...
You can’t get the probabilities from those suppositions. And without the probabilities, MWI has no predictive power; it’s just a metaphysics which says “Everything that can happen does happen”, and which then gives wrong predictions if you count the worlds the way you would count anything else.
But even if you can justify the required probability measure, there is another problem. John Bell once wrote of Bohmian theories (see last paragraph here):
In a Bohmian theory, you take the classical theory that is to be quantized, and add to the classical equations of motion a nonlocal term, dependent on the wavefunction, which adds an extra wiggle to the motion, giving you quantum behavior. The nonlocality means that you need a notion of objective simultaneity in order to define that term. So when you construct the Bohmian counterpart of a relativistic quantum theory (i.e. of a quantum field theory), you will still see relativistic effects like length contraction and time dilation (since they are in the classical counterpart of the quantum field theory), but you have to pick a reference frame in order to make the Bohmian construction—which might be seen as an indication of its artificiality.
The same thing happens in MWI. In MWI you reify the wavefunction—you assume it is a real thing—and then you divide it up into worlds. To perform this division, you need a universal time coordinate, so relativity disappears at the fundamental level. Furthermore, since there is no particular connection between the worlds of the wavefunction in one moment, and the worlds of the wavefunction in the next moment, you don’t even have persistence of a world in time, so you can’t even think about performing a Lorentz transformation. Instead, you have a set of disconnected world-moments, with mysterious nonstandard probabilities attached to them in order to make predictions turn out right.
All of that says to me that the MWI construction is just as artificial as the Bohmian one.
Sorry, yes. I took weighting things by squared-norm of amplitude as implicit, seeing as we’re discussing QM in the first place.
That doesn’t excuse the MWI at all. It could very well be that something else is needed to resolve the dilemmas.
And you haven’t answered my question; maybe something else is needed.
The weighting quantity is conserved. So far as I can tell, that entirely answers the objection you raised. I’m really not seeing where it fails. Could you explain?
Edit: s/preserved/conserved/
If I understand you correctly, there is an equal number of world splits every second in every branch. They are all weighted, so that no branch can explode?
Is that correct?
Worlds are weighted by squared-norm of amplitude, a quantity that is conserved. If two worlds are really not interfering with each other any more, then amplitude will not somehow vanish from the future of one and appear in the future of the other.
In this remark. His expansion below should make it clear what the relevant points are.
I think a relatively simple theory of everything is possible. This is however not based on anything solid—I’m a Math/CS student and my knowledge of physics does not (yet!) exceed high school level.
One thing I haven’t elaborated on here (and probably more hand-waving/philosophy than mathematics):
If the Church-Turing thesis is true, there is no way for a human to prove every true mathematical statement. However, does it have to follow that not every theorem has a proof?
What if every true theorem has a proof, not necessarily understandable to humans, yet somehow sound? That is, there exists a (Turing-computable) mind that can understand/verify this proof.
(Of course there is no one ‘universal mind’ that would understand all proofs, or this would obviously fail. And for the same reason there can be no procedure for finding such a mind, or for verifying that one is right.)
Does the idea of not-universally-comprehensible proofs make sense? Or does it collapse in some way?
Does the inverse of the fundamental attribution error have a good name?
I just thought about it the other day and my brain went in a startling direction.
The fundamental attribution error says that it’s inconsistent to explain other people’s actions by their internal traits while explaining your own actions by external circumstances. It goes on to say that the second explanation (circumstances) is uniformly more correct. What if that’s an error, and the first explanation actually works better in many cases? For example, the set point of happiness does appear to exist, so there’s some truth in labeling someone “an unhappy person” if you see them unhappy at the moment.
I’m agnostic about setpoint theory.
For some changes their effects on happiness (either positive or negative) fade somewhat with time, but I’m not convinced at all that it’s true for all changes, or that it’s typical for the effect to fade down to anywhere near 0%.
Maybe other changes never fade. Maybe it’s even typical for changes not to fade. Setpoint theory sounds too much like generalizing from one example.
(not that any of this is related to lack of good name for “preference for situational attributions” or whatever we want to call it)
Indeed, virtue theory in ethics suggests that people usually act according to habits of behavior. Of course, there’s some empirical psychology that suggests this may be incorrect.
Is that a bias that exists? Does it exist in the same people as fundamental attribution error? Can they both function simultaneously?
I don’t think there’s a standard name for it. I’d go with “bias towards situational attributions.”
A little over the top:
http://www.popsci.com/technology/article/2010-05/new-software-can-assemble-army-overnight-making-human-bosses-obsolete
I believe the “unreasonable effectiveness of mathematics in the natural sciences” can be explained based on the following idea. Physical systems prohibit logical contradiction, and hence physical systems form just another kind of axiomatic, logical, and therefore mathematical system. To take a crude example, two different rocks cannot occupy the same point in space, due to logical contradiction. This makes it possible to talk about the rocks mathematically. Note that this example is decidedly crude, since there are other things like bosons which actually can occupy the same position, but anyways, hopefully you get the idea.
What is the status of this argument in the philosophy of mathematics? Or general comments/references?
Except that... that isn’t a logical contradiction!
You have inadvertently demonstrated one of the best arguments for the study of mathematics: it stretches the imagination. The ability to imagine wild, exotic, crazy phenomena that seem to defy common sense—and thus, in particular, not to confuse common sense with logic—is crucial for anyone who seriously aspires to understand the world or solve unsolved problems.
When Albert Einstein said that imagination was more important than knowledge, this is surely what he meant.
I can see how that phrasing would strike you as being redundant or inaccurate. To try to clarify --
The rocks not occupying the same point in space is a logical contradiction in the following sense: If it wasn’t a logical contradiction, there wouldn’t be anything preventing it. You might claim this is a “physical” contradiction or a contradiction of “reality”, but I am attempting to identify this feature as a signature example of a sort of logic of reality.
In this comment, I wrote:
You replied:
Actually, they are both true if A itself is false. This is the import of the logical principle ex falso quodlibet.
But I take your point to be that certain logical statements (such as “A ⇒ ~~A”) are true of any actual physical system.
It is true that things are a certain way. They are not some other way. So, if a territory satisfies A, it follows that it does not satisfy ~A. And this is a fact about the territory. After all, the point of a map is to be something from which you can extract purported facts about the territory.
However, what is not in the territory is the delineation of its properties into axioms, on the one hand, and theorems, on the other. There are just the properties of the territory, all co-equal, none with logical priority. The territory just is the way it is.
For example, consider the statements “A” and “~~A”, where A is the application of some particular predicate to the territory. It is not as though there is one property or feature of the territory according to which it satisfies A, while there is some other property of the territory according to which it satisfies ~~A. That feature of the territory in virtue of which it satisfies A is exactly the same feature in virtue of which it satisfies ~~A.
In the logic, “A” and “~~A” are two distinct well-formed formulas, and it can be proven that one entails the other. But in the territory there are no two distinct features corresponding to these two wffs, so it’s not really sensible to speak of an entailment relationship in any nontrivial sense. The territory just is the way that the territory is, and this way, being the way that the territory is, is the way that the territory is. There is nothing more to be said with regard to the territory itself, qua logical system.
What about a tautology such as “A ⇒ ~~A”? Tautologies do give us true statements about the territory. But, importantly, such a statement is not true in virtue of any feature of the territory. The tautology would have been true no matter what features the territory had. There is nothing in the territory making “A ⇒ ~~A” be true of it. In contrast, there is something in the logical system making “A ⇒ ~~A” be a theorem in it — namely, certain axioms and rules of inference such that “A ⇒ ~~A” is derivable. (Some systems with different axioms or rules of inference would not have this wff as a theorem). This is another reason why the territory ought not to be thought of as a logical system of which the features are axioms or theorems.
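Since claims like these are mechanically checkable, a small Python truth-table sketch may be useful (the helper names are my own):

    from itertools import product

    def implies(p, q):
        # Material conditional.
        return (not p) or q

    # 'A => ~~A' is a tautology: true on every row.
    print(all(implies(a, not (not a)) for a in (False, True)))

    # '(if A then B) and (if A then not B)' is not a contradiction:
    # it holds exactly on the rows where A is false.
    for a, b in product((False, True), repeat=2):
        print(a, b, implies(a, b) and implies(a, not b))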
Thank you for the comment, and I hope this reply isn’t too long for you to read. I think your last sentence sums up your comment somewhat:
In support of this, you mention:
It seems like things are getting confused here. I take “A ⇒ ~~A” to be a necessary condition for proposition A to make sense. In order to make things concrete, let me use a real example. Say that proposition A is, “This particular rock weighs 1.5 pounds with uncertainty sigma.” This seems like a fairly reasonable, easily imaginable statement. Now clearly, A is simply a rendition or re-representation of the reality that is the physical system. In other words, proposition A only tells you what reality tells you by holding the rock in your hands, or throwing it through the air, or vaporizing it and measuring the amount of output energy. The only difference in this case is that the reality is encoded in human language.
For A to make sense, clearly “A ⇒ ~~A” must be true. For the rock to weigh 1.5 plus/minus sigma, it must not—not weigh 1.5 plus/minus sigma. That strikes me as more or less a requirement imposed by human language, not so much a requirement of physical reality.
For this reason I think that your example of “A ⇒ ~~A” does not get to the heart of my point. My point is slightly different. Consider again the proposition “A true ⇒ (if A then B) OR (if A then not B)”. Take B as: “This rock is heavier than this pencil.” Now, assuming that the pencil does not lie in the weight range 1.5 plus/minus sigma, this proposition must be true. And this statement is significantly more complicated than “A ⇒ ~~A”, and it implies that (under proper restrictions) you can make longer logical statements, and, continuing further, statements which are no longer trivial and not just a property of human language.
Side-note: I suppose these particular examples are all tautological so they don’t quite show the full richness of a logical system. However, it would be easy to make theorems, such as “if A AND C, then B” (where C could be specified similar to A or B.) Then we would see not only tautologies but also theorems and other propositions which are all encoded as we would expect from a typical logical system.
Now, the fact that this sort of statement works comes straight out of the territory. Our maps to A and B are merely re-representations of reality, and they are what reality is telling us, only encoded in human language. So we are seeing that reality appears to obey the same logical rules that we have come to expect from ordinary kinds of logical systems.
Now, I am not claiming that the physical systems (the territory) is somehow naturally encoding itself into these re-representations. Clearly, the human mind is at work in realizing these re-representations. But once these re-representations are realized, it really is the territory which takes on a logical structure.
So I am not claiming that the physical system is naturally a system of axioms and theorems and so on. My proposition is weaker and more generic, and only says that the physical system has a logical character. My real punchline, I suppose, is to say that this logical character of the re-representation is non-trivial. As you say, “Things are a certain way. They are not some other way.” But the way in which they are is logical. They are in a way which is the same way that logical statements are encoded. This is non-trivial because physical systems at the highest level just look like a huge collection of various and vague facts. We have no reason (a priori) to expect physical systems to map onto logic in this way—but they do! And this I claim allows math to be so effective in working with reality in general.
I’m a little confused by this example. The proposition
A ⇒ (if A then B) OR (if A then not B)
is a logical tautology. Its truth doesn’t depend on whether “the pencil does not lie in the weight range 1.5 plus/minus sigma”. In fact, just the consequent
(if A then B) OR (if A then not B)
by itself is a logical tautology. So, I have two questions:
(1) Is there a reason why you didn’t use just the consequent as your example? Is there a reason why it wouldn’t “get to the heart” of your point?
(2) Just to be perfectly clear, are you claiming that the truth of some tautologies, such as A ⇒ ~~A , is “trivial and just a property of human language”, while the truth of some other tautologies is not?
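For what it’s worth, the consequent’s tautology-hood can be verified mechanically too, with the same material-conditional convention as in the sketch above:

    def implies(p, q):
        return (not p) or q

    # '(if A then B) or (if A then not B)' is true on all four rows,
    # i.e. a tautology, regardless of any weights or sigmas.
    print(all(implies(a, b) or implies(a, not b)
              for a in (False, True) for b in (False, True)))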
Sorry, I caught that myself earlier and added a sidenote, but you must have read before I finished:
Edit: Or, sorry, just to complete, in case you had read that—the tautology does depend on whether the pencil lies in the range of 1.5 plus/minus sigma. If the pencil lies in that range, we can’t say B or ~B.
In answer to (1.), I’m not using the consequent because you identified the fact that the consequent can imply anything by logical explosion. I was referring to the “A=>~A” example not getting to the heart of the point because that example is too simple to reveal anything of substance, as I subsequently discuss.
In answer to (2.), I am not claiming that some tautologies are “less true”. I am just roughly showing how there is a gradation from obvious tautologies to less obvious tautologies to tautologies which may not even be recognizable as tautologies, to theorems, and so on.
First, I, at least, am glad that you’re asking these questions. Even on purely selfish grounds, it’s giving me an opportunity to clarify my own thoughts to myself.
Now, I’m having a hard time understanding each of your paragraphs above.
B meant “This rock is heavier than this pencil.” So, “B or ~B” means “Either this rock is heavier than this pencil, or this rock is not heavier than this pencil.” Surely that is something that I can say truthfully regardless of where the pencil’s weight lies. So I don’t understand why you say that we can’t say “B or ~B” if the pencil’s weight lies in a certain range.
I didn’t say that the consequent can imply anything “by logical explosion”. On the contrary, since the consequent is a tautology, it only implies TRUE things. Given any tautology T and false proposition P, the implication T ⇒ P is false.
More generally, I don’t understand the principle by which you seem to say that A ⇒ ~~A is “too simple”, while other tautologies are not. Or are you now saying that all tautologies are too simple, and that you want to focus attention on certain non-tautologies, like “if A AND C, then B” ?
But surely this is just a matter of our computational power, just as some arithmetic claims seem “obvious”, while others are beyond our power to verify with our most powerful computers in a reasonable amount of time. The collection of “obvious” arithmetic claims grows as our computational power grows. Similarly, the collection of “obvious” tautologies grows as our computational power grows. It doesn’t seem right to think of this “obviousness” as having anything to do with the territory. It seems entirely a property of how well we can work with our map.
My idea was that the rock weighs 1.5 plus/minus sigma. If the pencil then weighs 1.5 plus/minus sigma, then you can’t compare their weights with absolute certainty. The difference in their weights is a statistical proposition; the presence of the sigma factor means that the pencil must weigh less than (1.5 minus sigma) or more than (1.5 plus sigma) for B or ~B to hold. But anyways, I might concede your point as I didn’t really intend this to be so technical.
Sorry, “logical explosion” is just a synonym for “ex falso quodlibet”, which you originally mentioned. You originally pointed out that the consequent can imply anything because of ex falso quodlibet, when A is not true. That wasn’t my intention, so I added the “A true” qualifier.
It initially seemed too simple for me, but maybe you are right. My original thinking was that “A ⇒ ~~A” seems to mean merely that a statement makes sense, whereas other propositions seem to have more meaning outside of that context. Also, the class of tautologies between different propositions seems to generalize the class of tautologies with a single proposition.
I hadn’t really thought about this, and I’m not sure how important it is to the argument, although it is an interesting point. Maybe we should come back to this if you think this is a key point. For the moment I am going to move to the other reply...
Little note to self:
I guess my original idea (i.e., the idea I had in my very first question in the open thread) was that the physical systems can be phrased in the form of tautologies. Now, I don’t know enough about mathematical logic, but I guess my intuition was/is telling me that if you have a system which is completely described by tautologies, then by (hypothetically) fine-graining these tautologies to cover all options and then breaking the tautologies into alternative theorems, we have an entire “mathematical structure” (i.e., propositions and relations between propositions, based on logic) for the reality. And this structure would be consistent, because we had already shown that the tautologies could be formed consistently using the (hypothetically) available data. Then physics would work by seizing on these structures and attempting to figure out which theorems were true, refining the list of theorems down into results, and so on and so forth.
I’m beginning to worry I might lose the reader due to the impression I am “moving the goalpost” or something of that nature… If this appears to be the case, I apologize and just have to admit my ignorance. I wasn’t entirely sure what I was thinking about to start out with and that was really why I made my post. This is really helping me understand what I was thinking.
Tell me whether the following seems to capture the spirit of your observation:
Let C be the collection of all propositional formulas that are provably true in the propositional calculus whenever you assume that each of their atomic propositions are true. In other words, C contains exactly those formulas that get a “T” in the row of their truth-tables where all atomic propositions get a “T”.
Note that C contains all tautologies, but it also contains the formula A ⇒ B, because A ⇒ B is true when both A and B are true. However, C does not contain A ⇒ ~B, because this formula is false when both A and B are true.
Now consider some physical system S, and let T be the collection of all true assertions about S.
Note that T depends on the physical system that you are considering, but C does not. The elements of C depend only on the rules of the propositional calculus.
Maybe the observation that you are getting at is the following: For any actual physical system S, we have that T is closed under all of the formulas in C. That is, given f in C, and given A, B, . . . in T, we have that the proposition f(A, B, . . .) is also in T. This is remarkable, because T depends on S, while C does not.
Does that look like what you are trying to say?
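A compact way to render this formalization in Python (C and T are from the comment above; encoding formulas as lambdas is my own choice):

    def in_C(formula, n_atoms):
        # Membership in C: the formula is true on the row where
        # every atomic proposition is assigned True.
        return formula(*([True] * n_atoms))

    implies = lambda p, q: (not p) or q

    print(in_C(lambda a, b: implies(a, b), 2))      # True:  'A => B'  is in C
    print(in_C(lambda a, b: implies(a, not b), 2))  # False: 'A => ~B' is not in C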
This looks somewhat similar to what I was thinking and the attempt at formalization seems helpful. But it’s hard for me to be sure. It’s hard for me to understand the conceptual meaning and implications of it. What are your own thoughts on your formalization there?
I’ve also recently found something interesting where people denote the criterion of mathematical existence as freedom from contradiction. This can be found on pg. 5 of Tegmark here, attributed to Hilbert.
This looks disturbingly similar to my root idea and makes me want to do some reading on this stuff. I have been unknowingly claiming the criterion for physical existence is the same as that for mathematical existence.
I’m inclined to think that it doesn’t really show anything metaphysically significant. When we encode facts about S as propositions, we are conceptually slicing and dicing the-way-S-is into discrete features for our map of S. No matter how we had sliced up the-way-S-is, we would have gotten a collection of features encoded as propositions. Finer or coarser slicings would have given us more or less specific propositions (i.e., propositions that pick out minuter details).
When we put those propositions back together with propositional formulas, we are, in some sense, recombining some of the features to describe a finer or coarser fact about the system. The fact that T is closed under all the formulas in C just says that, when we slice up the-way-S-is, and then recombine some of the slices, what we get is just another slice of the-way-S-is. In other words, my remark about T and C is just part of what it means to pick out particular features of a physical system.
Though the word “tautology” is often used to refer to statements like (A v ~A), in mathematical logic any true statement is a tautology. Are you talking about the distinction between axioms and derived theorems in a formal system?
I’m not aware of any strong emphasis on this argument. It seems at first glance to be problematic at multiple levels.
One problem with your approach is that humans have evolved in a single, very well-behaved universe. So we have intuition both from instinct and from internalized experience that makes it very hard for us to tell what is actually a logical contradiction and what is not. Indeed, one of the reasons I suspect that so many people have issues with things like special and general relativity as well as quantum mechanics is that they can’t get over that these aspects of the universe don’t fit well with their intuitions.
Consider for a moment what a universe would look like where 1 + 1 did not equal 2. What would that look like? It isn’t clear to me that this is even a meaningful question. But that may be because these concepts are so ingrained in us that we can’t think without them. Thus, it may be that math works well for understanding the universe because humans have no other option. One could imagine us meeting an alien species that has some completely different but very effective way of understanding the universe that isn’t isomorphic to math at all.
Logical operation is quite well defined, with or without regard to human perception of that logic. The idea that logic may not be understood does not contradict the idea that an internal logic (may) underlie physical systems. (Note: maybe see my clarification below, here.)
Granted, logic is somewhat mysterious and it is hard to imagine what a different kind of logic would look like. However, that is immaterial to my idea. The idea is just that you have signatures of illogic (e.g., the statements (a) if A, then B, and (b) if A, then not B, both true at the same time) which seem to be non-present in physical systems.
I’d say the real reason is the Pauli principle, which is a physical law not entailed by logic alone. I see no logical contradiction at all imagining two rocks that occupy the same place, like a 3D version of the two-dimensional picture you’d get by using two projectors to project their pictures onto the same point on a screen.
Around the turn of the last century, the logicists, like Frege and Russell, attempted to reduce all of mathematics to logic; to prove that all mathematical truths were logical truths. However, the systems they used (provably) failed, because they were inconsistent.
Furthermore, it seems likely that any attempt at logicism must fail. Firstly, any system of standard mathematics requires the existence of an infinite number of numbers, but modern logic generally has very weak ontological commitments: it only requires the existence of a single object. For mathematics to be purely logical, it must be tautological—true in every possible world*—and yet any system of arithmetic will be false in a world with a finite number of elements.
Secondly, both attempts to treat numbers as objects (Frege) or as concepts/classes (Russell) have problems. Frege’s awful arguments for numbers being objects notwithstanding, he has trouble with the Julius Caesar Objection; he can’t show that the number four isn’t Julius Caesar, because what this (abstract) object is is quite under-defined. Using classes for numbers might be worse; on both their systems, classes form a strict hierarchy, with nth-level classes falling under (n+1)th-level classes, and no others. Numbers are defined as the concept which contains all those concepts whose elements are equinumerous: the class of all pairs, the class of all triples, etc. But because of the stratification, the class of all pairs of objects is different from the class of all pairs of first-level classes, which is different from the class of all pairs of second-level classes, and so on. As such, you have an infinite number of ’2’s, with no mathematical relations between them. Worse, you can’t count a set like {blue chair, red chair, truth, justice}, because it contains both objects and concepts.
What seems more likely to me is that there is an infinite variety of mathematical structures—pure syntax, without any semantic relevance to the physical world, and without ‘existence’ in any real sense—and that, as a matter of induction, we’ve realised that some can be interpreted in manners relevant to the external world. As evidence, consider the fact that different mathematics are applicable in different areas: probability theory here, complex integration here, addition here, geometry here...
*strictly speaking, true in every structure.
No, that’s not right. Russell and Whitehead’s Principia Mathematica is the fullest statement of logicism, and its system was never proved inconsistent.
Here I’m less certain, but I’m pretty sure that that’s not right either. You would have relations among two such 2s, but those relations would be of a higher type than either 2. But, again, I’m definitely vaguer on how that would work.
Yes, sorry, I meant that Frege failed because his system was inconsistent (though possibly not if you replace Basic Law 5 with Hume’s Law). Russell, on the other hand, simply runs into Incompleteness; you can’t prove all of mathematics from logic because you can’t prove it full stop.
You’d have ’2’s of all cardinalities, so to have a relation between them, you would need to move into the uncountables—but then there are new pairs to be formed here… Essentially, you can reconstruct Russell’s original paradox, comparing the cardinality of the set with the cardinality of certain things that fall under it.
You could mitigate this by cutting short the recursion and simply allowing the relation to hold between the first n levels of concepts or so, on pain of arbitrariness.
I’m curious as to the downvotes; was I off-topic, too long, or simply wrong? Edit: And (if it’s acceptable to ask about other people’s downvotes) why was zero call downvoted?
That’s not a problem for logicism per se. Logicism isn’t really a claim about what it takes to prove mathematical claims. So it doesn’t fail if you can’t prove some mathematics by a certain means. Rather, logicism is a claim about what mathematical assertions mean. According to logicism, mathematical claims ultimately boil down to assertions about whether certain abstract relationships among predicates entail other abstract relationships among predicates, where this entailment holds completely regardless of the meaning of the predicates. That is, mathematical claims boil down to claims of pure logical entailment.
So, if you discover that your particular mathematical system is incomplete, then what you’ve really done is discover that you had missed some principles of logic. It’s as though you’d known that P ∧ P entails P, but you just hadn’t noticed that P ∨ P entails P as well.
(But you were right about why logicism ultimately failed to convince everyone: Mathematics seems to have ontological commitments, where pure logic does not.)
I didn’t downvote either comment. Your comment was probably downvoted because some readers considered its arguments to be wrong or unclear. zero call’s comment was probably downvoted because it smacks of the mind projection fallacy, especially here:
The organization of facts into axioms, rules of inference, proofs, and theorems doesn’t seem to be an ontologically fundamental one. We superimpose this structure when we form mental models of things. That is, the logical structure of things exists in the map, not the territory.
I wish you would have made this last comment on the post directly, so that I could reply to that there. Anyways, the point I was offering was that the logical structure does exist in the territory, not just the map. Our maps are merely reflecting this property of the territory. The fundamental signature of this is the observation that physical systems, when viewed in a map which exists only as a re-representation or translation (as opposed to an interpretation) amenable to logical analysis, are shown to prohibit logical contradiction. (For example, the two statements (if A, then B) and (if A, then not B) cannot both be true, where A and B are statements in some re-representation of the physical system.)
I’ll move that part of my comment there, with my apologies.
That’s quite alright—thank you for your discussion.
I appreciate your comments but I’m having trouble seeing your point with regard to the idea. To reiterate, regarding your last paragraph,
I’m proposing that these interpretations work because the underlying physical systems (the territory) obey the same properties as consistent mathematical systems—see my comment to TM below.
There is a great deal of difference between physics operating, in certain regards, on rules isomorphic to those of mathematics, and mathematics being applicable merely because physics isn’t logically inconsistent. It’s not a logical contradiction to say that two points have the same position, nor to say that 2+2=1 (for the latter, consider arithmetic modulo 3). Nor can maths be deduced purely from logic, partly because logic doesn’t require the existence of more than one object.
Russell did try to deduce maths from logic plus some axioms about how the world worked—that there were an infinite number of things, etc., but the applicability of the maths is always going to be an empirical question.
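(The modulo-3 aside is easy to check:

    # Arithmetic modulo 3 is perfectly consistent, and there 2 + 2 = 1.
    print((2 + 2) % 3)  # prints 1

so 2+2=1 is not a logical contradiction, merely false of the ordinary integers.)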
http://moronail.net/img/902_NATURAL_SELECTION_It_can_be_tampered_animals