Justifiable Erroneous Scientific Pessimism
In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile-1 was, functionally, empty space with a scattering of U-235 dust). If this is the case then Fermi’s estimate of a “ten percent” probability of nuclear weapons may actually have been justifiable, because nuclear weapons were almost impossible (at least without particle accelerators) - though it’s not totally clear to me why “10%” rather than “2%” or “50%”, but then I’m not Fermi.
We’re all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory. We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight. Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin’s careful estimate from multiple sources that the Sun was around sixty million years of age. This was wrong, but because of new physics—though you could make a case that new physics might well be expected in this case—and there was some degree of contrary evidence from geology, as I understand it—and that’s not exactly the same as technological skepticism—but still. Where there are sort of two, there may be more. Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could’ve seen coming?
I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is “justifiable” if you try hard enough to find excuses and then not question them further, so I’ll phrase it more carefully this way: I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would’ve been weaker if carried out strictly with contemporary knowledge, after exploring points and counterpoints. (So that relaxed standards for “justifiability” will just produce even more justifiable cases for the technological possibility.) We probably should also not accept as “erroneous” any prediction of technological impossibility where it required more than, say, seventy years to get the technology.
“Continental drift” is usually the go-to example. For one, the mechanism originally proposed was complete nonsense...
They didn’t have a mechanism at all until subduction, and hence plate tectonics, was discovered. The expanding-earth theory was actually considered not implausible by geologists for quite a while—it didn’t have anything like a plausible mechanism, but neither did continental drift. I was surprised to discover how recent this was.
There was a pretty solid basis for believing that 2-dimensional crystals were thermodynamically unstable and thus couldn’t exist. Then in 2004 Geim and Novoselov did it (isolated graphene for the first time) and people had to re-scrutinize the theory, since it was obviously wrong somehow. It turns out that the previous theory was correct for 2D crystals of essentially infinite size, but does not apply to finite crystals. At least that is how it was explained to me once by a theorist on the subject.
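The finite-size loophole can be stated compactly (a standard harmonic-crystal result I'm adding for context; it is not quoted in the comment): in two dimensions the mean-square thermal displacement grows only logarithmically with the crystal's linear size L,

```latex
\langle u^2 \rangle \;\sim\; \frac{k_B T}{4\pi \kappa}\,\ln\!\left(\frac{L}{a}\right),
```

where a is the lattice constant and κ an effective elastic modulus. The divergence as L → ∞ is what forbids an infinite flat 2D crystal, but for any laboratory-sized flake the logarithm stays small enough for the lattice to hold together.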
The opening paragraph of this paper cites the relevant literature: http://cdn.intechopen.com/pdfs/40438/InTech-The_cherenkov_effect_in_graphene_like_structures.pdf
Single-layer graphene is really, really unstable: if you let it sit free, it readily scrolls up and is very hard to get unstuck. In this sense, Landau’s impossibility proof is entirely correct.
And that’s why we don’t use free-standing graphene without a frame, for just about anything. The closest we get is graphene oxide dissolved in a liquid, or extremely extremely tiny platelets that don’t really deserve to be called crystals.
The pessimism about non-usefulness of graphene lay entirely in forgetting that you could put it on a backing or stretch it out (or thinking that it would lose its interesting properties if you did the former), and that was not justifiable at all.
Lord Kelvin was wrong but was he pessimistic? He wasn’t saying we could never know the answer, or visit the sun, or anything like that. Yes, he guessed wrongly, and too low, but it doesn’t seem to be the case that ‘underestimating a quantity’ is pessimism. If nothing else, the quantity might be ‘number of babies killed’.
It was pessimistic in the sense that under his estimate the Sun was steadily cooling, and so we’d all freeze to death long before the real Sun would present us any trouble.
Did he give an estimate of when we’d all freeze to death?
He estimated the sun was no more than 20 million years old, and presumably did not expect it to last for more than a few tens of millions of years more.
Not that I know of. Gravitational collapse is a really lousy, short-term source of energy, which is why he gave such a short estimate. Still on the scale of millions of years, I think.
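To see why gravitational contraction gives "millions, not billions," here is a back-of-envelope Kelvin–Helmholtz timescale using modern constants (my own illustrative arithmetic, not from the thread):

```python
# Kelvin-Helmholtz timescale: how long gravitational contraction alone
# could power the Sun at its present luminosity.
#   t_KH ~ G * M^2 / (R * L)   (order of magnitude; ignores structure factors)

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
L_SUN = 3.828e26     # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

t_kh_seconds = G * M_SUN**2 / (R_SUN * L_SUN)
t_kh_years = t_kh_seconds / SECONDS_PER_YEAR
print(f"Kelvin-Helmholtz timescale: ~{t_kh_years:.0e} years")  # roughly 3e7 years
```

About thirty million years: the same order as Kelvin's answer, and a thousand times too short once fusion is on the table.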
The claim that the Sun revolves around the Earth. If the Earth revolved around the Sun, there would have been a parallax in the observations of stars from different positions in the orbit. There was no observable parallax, so Earth probably didn’t revolve around the Sun.
I thought that parallax argument was applied to the stars, not the Sun?
Yeah, that’s what I meant. (No parallax in star observations → the Earth isn’t moving → the Sun is revolving around the Earth.)
*there would have been a parallax given assumptions at the time regarding the distance of the stars.
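To put numbers on why no parallax was seen (my own illustrative arithmetic, using the modern distance to the nearest star): annual parallax in arcseconds is the reciprocal of distance in parsecs, and naked-eye astronomers could resolve roughly an arcminute at best.

```python
# Annual stellar parallax in arcseconds = 1 / distance in parsecs.
# Even the nearest star's parallax sits far below naked-eye resolution.

def parallax_arcsec(distance_parsecs):
    return 1.0 / distance_parsecs

proxima_distance_pc = 1.30        # Proxima Centauri, roughly
naked_eye_limit_arcsec = 60.0     # ~1 arcminute, a generous pre-telescope limit

p = parallax_arcsec(proxima_distance_pc)
print(f"Proxima's parallax: {p:.2f} arcsec")   # ~0.77 arcsec
print(f"Detectable by naked eye? {p > naked_eye_limit_arcsec}")
```

So the pre-telescopic observation "no parallax" was genuine evidence; what failed was the assumed prior on how far away the stars could be.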
I’ve wondered though: if there were no planets besides Earth would we have persisted as geocentrists until the 19th century?
If there were no celestial bodies but Earth and the sun, we would have been just as correct as heliocentrists.
I don’t think that’s right.
The center of mass for the Earth-sun system is inside the sun; so, yeah, the heliocentrists wouldn’t be “just as correct”.
If the two masses were equal, then Earth and Sun would orbit a point that was equidistant to them; and in that scenario heliocentrists and geocentrists would be equally wrong....
Why privilege the center of mass as the reference point? Do we need to find the densest concentration of mass in the known universe to determine what we call the punctum fixum and what we call the punctum mobile?
As far as I can tell, most of the local universe revolves around me. That may be a common human misconception, seeing as I’m not a black hole, if we only go by centers of mass. But do we have to?
(Also, “densest concentration of mass” would probably be in the bible belt.)
I think the center of mass thing is a bit of a red herring here. While velocity and position are all relative, rotation is absolute. You can determine if you’re spinning without reference to the outside world. For example, imagine a space station you spin for “gravity”. You can tell how fast it’s spinning without looking outside by measuring how much gravity there is.
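That measurement is just centripetal acceleration, a = ω²r: measure the apparent gravity at a known radius and you recover the spin rate with no outside reference. A minimal sketch (my own illustrative numbers, not from the comment):

```python
import math

# Artificial "gravity" in a spinning station is centripetal acceleration:
#   a = omega^2 * r
# Inverting: measure apparent gravity g at radius r, recover the spin rate.

def spin_rate(g_apparent, radius):
    """Angular velocity (rad/s) producing apparent gravity g_apparent at radius."""
    return math.sqrt(g_apparent / radius)

r = 100.0    # station radius in meters (illustrative)
g = 9.81     # desired Earth-like apparent gravity, m/s^2

omega = spin_rate(g, r)
period = 2 * math.pi / omega
print(f"Spin rate: {omega:.3f} rad/s, one rotation every {period:.1f} s")
```

A bathroom scale bolted to the floor is the whole experiment: the reading fixes ω without ever looking out a window.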
You can work in earth-stationary coordinates, there will just be some annoying odd terms in your math as a result (it’s a non-inertial reference frame).
Technically, no you can’t. Per EY’s points on Mach’s principle, spinning yourself around (with the resulting apparent movement of stars and feeling of centrifugal stresses) is observationally equivalent to the rest of the universe conspiring to rotate around you oppositely.
The center of gravity of the Earth/Sun system would likewise lack a privileged position in such a world.
Is that correct? Spinning implies rotation implies acceleration, which I’d always thought could be detected without external reference points.
Without taking a stance on Mach’s principle or that specific question of observational equivalence, what about a spinning body in an otherwise empty universe? As an extreme example, my own body could spin only so fast before tearing itself apart. Surely this holds even if I’m floating in an otherwise utterly empty universe?
This is addressed later in the article, very well IMHO. Let me just give the relevant excerpts:
I worry I’m missing something obvious, but that EY quote doesn’t seem to address my belief (namely, that detecting acceleration doesn’t need an external reference point). It just argues there’s no absolute origin to use as an external reference point.
Silas is talking about this:
Edit: You are correct from a classical physics standpoint that if you are in a windowless room on a merry-go-round, you can tell whether the merry-go-round is standing still versus spinning at a constant speed. (For instance, you could shoot a billiard ball and see whether its path is straight or curved.) This contrasts with the analogous situation in a windowless train car, where you cannot tell whether the train is standing still versus moving with a constant velocity.
Right, that (a small portion of it) was what I quoted first, one exchange upthread, and satt still held to the intuition that there are rotational stresses in the absence of the universe’s background matter. So I went back/up/down[1] a level to the basic question of when you can rule out a certain “absolute” in nature: when the simplest laws stop requiring it.
The point I was trying to make (which I should have been more specific on) was that, just as the Galilean observation set sufficed to rule out “special” velocities and leave only relative ones, our observation set now has, as an optimal description, laws that give no privilege to any non-relative motion, including higher derivatives of velocity.
[1] whichever preposition would be least offensive
Ah, sorry. Upthread reading fail on my part.
As far as I can tell, what I’m saying holds even for non-spinning accelerating objects, and under quantum physics. According to QFT, a sufficiently sensitive thermometer accelerating through a vacuum detects a higher temperature than a non-accelerating thermometer would. This appears to be a way for a thermometer to tell whether it’s accelerating without having to “look” at distant stars & such.
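For scale, plugging numbers into the standard Unruh formula, T = ħa / (2πc k_B) (my own arithmetic; the thread gives no figures), shows why "sufficiently sensitive thermometer" is doing a lot of work here:

```python
import math

# Unruh temperature seen by a uniformly accelerating detector:
#   T = hbar * a / (2 * pi * c * k_B)

HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
K_B = 1.381e-23     # Boltzmann constant, J/K

def unruh_temperature(acceleration):
    return HBAR * acceleration / (2 * math.pi * C * K_B)

T = unruh_temperature(9.81)   # a detector accelerating at 1 g
print(f"Unruh temperature at 1 g: {T:.1e} K")   # ~4e-20 K
```

Roughly 4 × 10⁻²⁰ kelvin at one g: perfectly absolute in principle, hopeless in practice.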
Hm, I’m not sure the thermometer can conclude that it’s accelerating from seeing the black-body radiation. I think it’s equivalent to there being an event horizon behind it emitting Hawking radiation (this happens when you accelerate at a constant rate). The thermometer can’t tell whether it’s next to a black hole or accelerating. Could be wrong though, but I vaguely remember something along these lines.
I don’t see anything incorrect in what you say. (Sounds to me like a direct consequence of the equivalence principle, although I’m no GR expert.) But I’m assuming away the possibility of rogue black holes in this hypothetical, since I’m wondering whether a sufficiently sensitive sensor could detect its own acceleration even inside an otherwise empty universe (or at least without reference to the rest of the cosmos).
I think I misunderstood what you and Silas were talking about. (Note though that my train thought experiment was about a train with a constant velocity. The billiard ball technique works to detect acceleration of the train even if no rotation is involved.)
Yes, all acceleration is absolute, not relative. You don’t need hypothetical esoteric effects to detect it, a usual weighing scale will do. Gravity throws a bit of a quirk in it, of course.
I’m simultaneously reassured (that my intuition’s correct) & confused (about SilasBarta & Eliezer’s remarks, since they read to me like they contradict my intuition). Maybe I should post a comment on the Sequences post rather than continuing to press the point here, though.
[Edit: originally linked the wrong Sequences post, fixed that.]
I agree that it’s at least quite plausible (as per your post, it’s not proven to follow from GR) that if the universe spun around you, it might be exactly the same as if you were spinning. However, if there’s no background at all, then I’m pretty sure the predictions of GR are unambiguous. If there’s no preferred rotation, then what do you predict to happen when you spin Newton’s bucket at different rates relative to each other?
EDIT: Also, although now I’m getting a bit out of my league, I believe that even in the massive-external-rotating-shell case, the effect is minuscule.
EDIT 2: See this comment.
Are you sure you linked the right comment? That’s just someone talking about centripetal vs centrifugal.
No, I didn’t. It’s fixed now, thanks.
That’s a justifiable error, but I don’t see how it’s pessimistic.
“Pessimistic” is a loaded term and I’m not sure if it’s all that useful in the context of this discussion in the first place.
It’s crucial to the original point that Eliezer was making, which was differentiating technological pessimism from technological optimism.
This isn’t technology, and though it makes a difference to the universe as a whole, it wouldn’t be better or worse for us either way.
Off the top of my head, how about the Landau pole? A famous and usually right genius calculated that the gauge theories of quantum fields were a dead end, and set Soviet and, to some degree, Western physics back a few years, if I recall correctly. His calculation was not wrong; he simply missed the alternate possibilities.
EDIT: hmm, I’m having trouble locating any links discussing the negative effects of the Landau pole discovery on the QED research.
This isn’t what you asked for, but I might as well enumerate a few of these examples, for everyone’s benefit. For the field of AI research:
George Pólya (1954), ch. 15 — a few decades before the probabilistic revolution in AI.
Mortimer Taube (1960) — not long before computers began to regularly dominate amateur and then expert chess players. (Edit: this one seems wrong)
Satosi Watanabe (1974) — a couple decades before both supervised and unsupervised machine learning took off.
Also, Hubert Dreyfus mocked the capabilities of chess computers, and compared AI to alchemy, in Dreyfus (1965) — a mere two years before he was defeated by the chess computer Mac Hack.
Technically, he was correct.
I like the idea of football (soccer) played by quadrupeds.
Taube did not mean “Machines cannot be made to choose good chess moves” (a claim that has, indeed, been amply falsified). Here’s a bit more context, from the linked paper.
Taube’s point, if I’m not misunderstanding him grossly, is that part of what it means to play a game of chess is (not merely to choose moves repeatedly until the game is over, but) to have something like the same experience as a human player has: seeing the spatial relationships between the pieces, for example. He thinks that’s something machines fundamentally cannot do, and that is why he thinks machines cannot play chess.
Now, for the avoidance of doubt, I think he was badly wrong about all that. Someone blind from birth can learn to play chess, and I hope Taube wouldn’t really want to say that such a player isn’t really playing chess because she isn’t having the same visual/spatial experiences as a sighted player. And most likely one day computers (or some other artificially constructed machines) will be having experiences every bit as rich and authentic as humans have. (Taube wrote a book claiming this was impossible. I haven’t seen it myself, but from what little I’ve read about it its arguments were very weak.)
But his main claim about machines here isn’t one that’s been nicely falsified by later events. We have machines that do a very good job of evaluating positions and choosing moves, but he never claimed that that was impossible. We don’t yet have machines that play chess in the very strong sense he’s demanding, or even the weaker sense of using anything closely analogous to human visual perception to play. (I suppose you might say that programs using a “bitboard” representation are doing something a little along those lines, but somehow I doubt Taube would have been convinced.)
… Also, Taube wasn’t a scientist or a computer expert or a chess expert or even a philosopher. He was a librarian. A librarian is a fine thing to be, but it doesn’t confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.
You accuse lukeprog of being misleading in taking a quote from a mere “librarian”, and as we all know, a librarian is a harmless drudge who just shelves books, hence
I accuse you of being highly misleading in at least two ways here:
in 1960, a librarian is one of the occupations—outside actual computer-based occupations—most likely to have hands-on familiarity with computers & things like Boolean logic, for the obvious reason that being a librarian is often about research where computers are invaluable. A librarian could well have extensive experience, and so it’s not much of a mark against him.
Mortimer Taube turns out to be the kind of ‘librarian’ who exemplifies this; the little byline to his letter about “Documentation Incorporated” should have been an indicator that maybe he was more than just a random schoolhouse librarian stamping in kids’ books, but because you did not see fit to add any background on what sort of ‘librarian’ Taube was, I will:
So to summarize: he was a trained philosopher and tech startup co-founder who invented new information technology and handled documentation tasks who was familiar with the cybernetics literature and traveled in the same circles as people like Vannevar Bush.
And you write
!
An upvote for correctly contextualizing what Taube wrote, and a mental downvote for being lazy or deceptive in your final paragraph.
I really can’t think of a polite way to say this, so:
Bullshit.
I wasn’t accusing Luke of anything; I was disagreeing with him. Disagreement is not accusation. When I want to make an accusation, I will make an accusation, like this one: You have mischaracterized what I wrote, and made totally false insinuations about my opinions and attitudes, and I have to say I’m pretty shocked to see someone as generally excellent as you behaving in such a way.
I do not think, and I did not say, and I had not the slightest intention of implying, that “a librarian is a harmless drudge who just shelves books”.
Allow me to remind you how Luke’s comment begins. The boldface emphasis is mine.
Taube was, despite his many excellent qualities, not a scientist as that term is generally understood, and he was, despite his many excellent qualities, not working in “the field of AI research”.
(Yes, I know the Wikipedia page says he was “a true innovator in the field of science”. Reading what it says he did, though, I really can’t see that what he did was science. For the avoidance of doubt, and in the probably overoptimistic hope that saying this will stop you pulling the same what-a-snob-this-person-is move as you already did above, I don’t think that “not science” is in any way the same sort of thing as “not valuable” or “not important” or “not difficult”. What the creators of (say) the Firefox web browser did was important and valuable and difficult, but happens not to be science. What Beethoven did was important and valuable and difficult, but happens not to be science. What Martin Luther King did was important and valuable and difficult, but happens not to be science.)
Pointing this out doesn’t mean I think there’s anything wrong with being a librarian. When I said “a librarian is a fine thing to be”, I meant it. (And, for the avoidance of doubt, it is my opinion both when “librarian” means “someone who shelves books in a library” and when it means “a world-class expert on organizing information in catalogues”.)
Now, having said all that, I should add that you are quite right about one thing: when I said that Taube was neither a computer expert nor a philosopher, I was oversimplifying. (Not least because I hadn’t looked deeply into Taube’s career.) He was an important innovator in the use of punched cards for document indexing, which is quite a bit like being a computer expert; and he was a PhD in philosophy, which is quite a bit like being a philosopher. None the less, I stand by what I said: neither being a world-class expert in document indexing, nor knowing a lot about punched-card reading machinery, nor being a PhD in philosophy, seems to me to be the kind of expertise that makes it particularly startling if one’s wrong about whether machines can play chess.
(And, once again, for the avoidance of doubt, I am not in the least trying to belittle his expertise and creativity. I just don’t see that they were the kind of expertise and creativity that make it startling for someone to be wrong about the possibilities of computer chess-playing.)
[EDITED to clarify a bit of wording and add some emphasis. … And again, later, to add a missing negative; oops. Also, while I’m here, two other remarks. 1: I regret the confrontational tone this exchange has taken; but I don’t see any way I could have responded sufficiently forcefully to the accusations levelled at me without perpetuating it. 2: I see a lot of downvotes are flying around in this subthread. For the record, I haven’t cast any.]
You were claiming he cherrypicked the example; I’ll quote again:
If that were true, Luke would be seriously cherrypicking and that is not a harmless error but the sort of biased selection and lying which one would rightly take into account in considering flipping the bozo bit on someone and henceforth ignoring anything they said. This isn’t a harmless mistake of attribution or minor peccadillo that might hurt a single clause or subpoint or tangential argument, this is the sort of thing that discredits an entire line of thought. Maybe you didn’t mean it as an accusation, but I treat it as one since if it was true it would be very serious; in much the same way maybe someone bringing up the fact that the lead author on a drug study has taken millions of dollars from the drug company doesn’t mean anything serious by it, hey, they’re just discussing the paper, but I would take it very seriously indeed and maybe even ignore the study entirely.
Duly noted, but see above, I don’t especially care what you actually think, I care just what you wrote and whether it is a serious issue with Luke’s comment.
Right. I’m sure you actually meant “I think librarians are fantastic smart people who know everything about everything and have many valid and expert opinions, however it just so happens that chess and AI and cybernetics happens to be one of the few areas where their informed commentary is worthless and ‘it doesn’t confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here’”.
If working on key organization schemes and pushing forward the field of information science cannot be construed as ‘science’ no matter how broadly defined, then I guess we’d better exempt computer science and AI from that moniker too.
ಠ_ಠ Actually, that doesn’t quite convey my impression of your no-true-Scotsmanning, I’ll try that again: ಠ_ಠ ಠ_ಠ ಠ_ಠ A PhD in philosophy is not enough to be called a philosopher? zomgwtfbbq.
This appears to me to be an instance of a common error: assuming that when someone says something, they intended every inference you find it natural to make from it. It doesn’t appear to me, at all, that for Luke to have been wrong in the way I say he was he needs to have been a liar or bozo or whatever else you’re trying to suggest I accused him of being.
(I’m puzzled, too. We seem to be agreed that Luke’s quotation gives a misleading impression about what claim Taube was making, and—rightly, in my opinion—you don’t appear to have concluded from this that Luke was dishonestly cherrypicking and needs the bozo bit flipped. But I don’t understand, at all, why giving a misleading impression about Taube’s relevant expertise is a worse thing to “accuse” him of than giving a misleading impression about what Taube was claiming. Either of them means that the quotation from Taube fails to serve the purpose Luke put it there for.)
If you don’t especially care what I actually think, then what the hell are you doing putting words into my mouth about how librarians are uninteresting low-status unintellectual drudges? (Which, just in case it needs saying again, in no way resemble my actual opinion.)
I meant what I said. I did not mean what you said. I also did not mean the particular equally-ridiculous thing you now sarcastically suggest I could have meant. I honestly have no idea what I’ve done to bring forth all this hostility, but if you want an actual reasoned discussion then I politely suggest that you stop flinging shit at me and then we can have one.
Those last five words are yours, not mine. I’m sure you can find definitions according to which Taube’s work was “science”. I’m also sure you can quickly and easily think of plenty of instances where “no matter how broadly defined” ends up meaning “way too broadly defined for most purposes”. (Here’s an extreme example: Richard Dawkins is on record as accepting the term “cultural Christian” as applying to him. I would accordingly not say that RD cannot be construed as ‘Christian’ no matter how broadly defined—but, none the less, for most purposes describing him as a Christian would be silly. Taube’s work is certainly nearer to being science than Richard Dawkins is to being a Christian; the point of the example is to clarify my point, not to be a perfect analogy.)
Ian Bostridge has a doctorate in history, and spent some time as an academic historian. However, I would not now call him a historian but a singer. (Or, more specifically, a tenor.) Angela Merkel has a PhD in physics, but I wouldn’t now call her a physicist but a politician (or, perhaps, some more august term along those lines). George Soros has a PhD in philosophy but I wouldn’t call him a philosopher.
So: no, the fact that someone got a PhD in philosophy in 1935 is not sufficient reason to call them a philosopher in 1960. As I say, having a PhD in philosophy is certainly quite like being a philosopher; it’s certainly not wholly irrelevant; I oversimplified and I shouldn’t have. But it’s not the same thing.
It’s a common error indeed, and one that is justifiable when enough other people make that error. Yeah, Hitler said to kill all the Jews, but he really meant to kill the Jew inside, not real Jews. If I may quote your other comment:
Indeed.
Right, because you just threw that in for no reason...
And I even gave several. Feel free to deal with the examples; do you think computer science and AI are not ‘science’?
I don’t see what’s the least bit silly about describing him as a “cultural Christian”, especially if he accepts the label. He was indeed raised in a Christian culture and implicitly accepts a lot of the background beliefs like belief in guilt and sin (heck, I still think in those terms to some degree and say things like ‘goddamn it’); even if we don’t go quite as far as Moldbug in diagnosing Dawkins as holding to a puritanical secular Christianity, the influence is ineradicable. There is no view from nowhere.
Wow, so not only is he a trained historian who has published & defended his doctorate of original research, you describe him as actually having been in academia post-graduate school, and you still won’t describe him as a historian? Would I describe him as a historian? Heck yes. Because if I won’t even grant that description to Bostridge, I don’t know who the heck I would grant it to. You know, describing someone as a historian is not committing to describing him as a ‘great historian’ or a ‘ground-breaking historian’ or a ‘famous historian’. You don’t need to be Marvin Minsky to be called ‘an AI researcher’ and you don’t need to be a pre-eminent figure to be described as a worker in a field. Even a bad programmer is still a ‘programmer’; someone who has moved up into management is still a programmer even if they haven’t written a large program in years.
From Wikipedia: “After being awarded a doctorate (Dr. rer. nat.) for her thesis on quantum chemistry,[17] she worked as a researcher and published several papers.”
But no, all that is chopped liver because gjm doesn’t think she’s a physicist/chemist.
I imagine Soros would be disappointed to hear that; his Popperian philosophy grounds his ‘reflexivity’ on which he has written extensively and believes can significantly influence economics as it’s currently practiced.
It is more than sufficient: Taube had excellent training (the University of Chicago, especially in the 1930s thanks to Adler & Hutchins, was a philosophy powerhouse, and still is to some extent—ranked #24 in the Anglosphere by Leiter), received his PhD, kept up with the issues both as a practitioner and commenter, and was reportedly working on a philosophy book when he died. He was a philosopher. And your other examples were hardly better.
On flipping the bozo bit
Before you bother to read any of what follows, I would be grateful if you would answer the following question: Have you, in fact, bozo-bitted me? Because I’ve been proceeding on the assumption that it is in principle possible for us to have a reasoned discussion, but that’s looking less and less true, and if I’m wasting my time here then I’d prefer to stop.
On librarians and librarianship
Unless I misunderstand you badly, you are arguing either that I have been lying constantly about this or that I am appallingly unaware of my own opinions and attitudes and you know them better than I do. And, if I understand this remark correctly …
… your basis for this is that you can’t think of any reason why I might have mentioned that Taube was a librarian other than that I have “contempt for librarians” and that I wanted to put Taube down by calling him names.
So, allow me to propose a very simple alternative explanation (which is, in fact, the correct explanation, so far as I can tell by introspection): I said it because, having listed a bunch of things that weren’t Taube’s profession, it seemed appropriate to say what his profession actually was.
On the basis of this thread so far, I’m guessing that you still don’t believe me; so let me ask: Is there, in fact, anything I could possibly say or do that would convince you that I do not hold librarians in contempt? Because it looks to me as if there isn’t, and it seems rather odd that describing someone who was in fact a librarian as a librarian could be such strong evidence of contempt for librarians as to outweigh all future testimony from the person in question.
On professions and the like
There are at least three things you can mean by saying someone is, e.g., “a biologist”. (1) That they know something about biology and think about it from time to time. (2) That doing biology is their job, or at least that they do it as much and as well as you could reasonably expect if it were. (3) That, regardless of how much biology they actually do, they have at least some (fairly high) threshold level of expertise in it.
Angela Merkel is surely a physicist(1). She is not a physicist(2) now, although she used to be. Whether she’s a physicist(3) depends on what threshold we pick and on the extent to which she’s kept up her expertise. Similarly, Ian Bostridge is a historian(1), not a historian(2) so far as I know, and might or might not be a historian(3), and similarly for George Soros and philosophy.
In general, being an X PhD is a guarantee of being an Xer(1) and (at least for a while; knowledge decays) of being an Xer(3) for some plausible choices of threshold; it is of course no guarantee of being an Xer(2).
You appear to be taking the position that it is never reasonable to deny that someone with an X PhD is “an Xer”. That seems like excessive credentialism to me.
The relevant notion of “scientist”, “philosopher”, etc., here was never made explicit. I think I’ve had meaning 2 in mind sometimes and meaning 3 in mind sometimes. Eliezer’s original post about Pascalian wagers takes Enrico Fermi as its leading example, and talks about “famous scientists” and “prestigious scientists” in general. The present post takes Lord Kelvin as another example, but also points to skepticism about flying machines (which was not generally from famous scientists). So I don’t know what the “right” threshold for meaning 3 would be here, but it seems like it should be fairly high.
Bostridge, Merkel and Soros seem to me like pretty decent examples of people who are no longer Xers(2), and probably aren’t Xers(3) with a high threshold. I could be wrong about some or all of them, though; I mentioned them only to make the more general point that holding a doctoral degree is no guarantee of being an Xer(2) or Xer(3) with high threshold.
On Taube and his qualifications
Taube was an expert in the indexing of documents, and an innovator in that field. In your opinion, does that amount to expertise in computer chess-playing comparable to, say, Fermi’s expertise in nuclear fission?
Taube was (I think; perhaps it was actually others in his company who were concerned with this) an expert in automated punched-card reading machines. Does that amount to expertise in computer chess-playing comparable to, etc.?
Taube held a PhD in philosophy; I think his thesis was on the history of philosophical thought about causality. Does that amount to, etc., etc.?
I repeat: Mortimer Taube was an impressive person. He was clearly very smart. He accomplished more than I am ever likely to. I do not hold him in contempt. Still less do I hold him in contempt for having been a librarian. I simply don’t think that his opinions on computer chess-playing are the same kind of thing as Fermi’s opinions on nuclear fission, or Kelvin’s on the age of the earth.
I haven’t yet, but if you’re going to persist in claiming that people with PhDs in philosophy are not even allowed the description ‘philosopher’, it’s tempting because why should I bother with people who abuse language and redefine words so abysmally?
Which was pursuant to your belief that a mere librarian could have nothing to say about the issue, could not be any sort of authority or indicator of the times, and so does not belong in the list lukeprog presented. Yes, I’ve said all this before.
The obvious reading of your concluding paragraph was obvious, before you started trying to defend it.
Indeed. And I think it’s absurd to restrict usage of descriptions to the rarefied and elevated #2s (how many biologists get tenure?) and even more absurd to restrict it to the even more rarefied and elevated #3s.
(Merkel & Bostridge were both #2s at some point, but seem likely to never be #3s in those fields; whether we could consider Soros a #3 - because he claims his philosophical approach of reflexivity guides his philanthropy & investing and so his inarguably historic roles there are part and parcel of philosophy—is an interesting question, but getting a bit far afield.)
It gives him a great deal of expertise in organizing and searching data mechanically, which is relevant to AI; and inasmuch as chess-playing falls under AI… No, he didn’t write his thesis on chess-playing, but here again I would say it’s absurd to insist on such doctrinaire rigidity that no one can have respectable expertise without being the expert on a topic. (I would note in passing that Fermi’s laurea thesis was not on fission, but X-ray imaging; is that close enough? Well, probably, but then why is indexing and search so out of bounds? Search at Google involves a great deal of AI work, so clearly there is a real connection at some point in time...)
I’m afraid I have shocking news for you, many respected philosophers in AI may not have written their theses directly on AI: Dennett’s dissertation on consciousness was etc. etc. etc? Or consider John Searle’s early work on speech acts, was it etc etc etc? Keeping in mind the recent praise on LW for his work...
All I can do is point to my previous summary and observe that Taube was one of the few contemporaries who grappled with the cybernetics issues, was trained philosophically, built a tech career on primitive computers etc etc. His observations are not chopped liver.
(I’m going to be brief, because I’m losing hope that you’re going to pay any attention to anything I say. I haven’t the least intention of bozo-bitting you globally because you have been consistently extremely impressive elsewhere, but in this particular discussion it seems that at least one of us—and I’m perfectly willing to consider that it may be me—is being sufficiently irrational that we’re doomed to produce more heat than light. More specifically, what it looks like to me is that you’re treating me as an enemy combatant who needs to be defeated, rather than a person who disagrees with you who needs to be either taught or learned from or both.)
[EDITED to add: well, it turns out I wasn’t so brief. But I tried.]
What’s annoying here is not so much your evident belief that I am lying through my teeth about my own opinion about librarians (why on earth would I even do that?) as your refusal even to acknowledge that your fantasy about that opinion is anything other than a mutually-agreed truth.
I’m sorry, was that meant to be an answer to the question I asked?
I wasn’t asking the question just to make a rhetorical point. Your behaviour in this thread suggests to me that as soon as you read the last sentence of what I wrote you leapt to a conclusion, got angry about it, and came out fighting, and that ever since you’ve refused even to consider the possibility that you leapt to the wrong conclusion.
It’s just as well that I’m not insisting on any such thing. So far as I know, there were other people around who were about as expert on nuclear physics as Fermi. I am not an expert on the history, so maybe that’s wrong, but I haven’t been assuming it’s wrong and when I say “comparable to Fermi’s expertise in nuclear fission” I don’t mean “expertise as of the world’s greatest expert”, I mean “expertise as of someone very expert in the field”. Because it seems to me that that’s the level of expertise that’s actually relevant to Eliezer’s original point and his more recent question.
Of course. But what makes them respected philosophers in AI, and means that if they make pronouncements about AI that turn out to be very wrong then they might be examples of the phenomenon Eliezer was talking about, is not the fact that they are philosophy PhDs but their further body of work in the field that is related to AI.
(“Might be” rather than “are” because I have the impression that a sizeable number of people around here hold that philosophy is so terribly diseased a discipline that being a respected philosopher in AI is no ground for paying much attention to their opinions on AI.)
On the other hand, you’ve been arguing (I think) that Taube’s philosophical expertise made him an expert in the nascent field of AI, and the only evidence we have for his philosophical expertise is that he was a philosophy PhD. So it’s of some relevance what stuff this tells us he’d studied and thought about in depth. The stuff in question seems pretty interesting, but I don’t see how it could have shed much light on the prospects for computer chess-playing.
I was slightly wrong about the topic of (the book I think was derived from) Taube’s thesis, by the way; it wasn’t only a historical study of other philosophers’ thinking about causation but also “an attempt to solve the causal problem”, as the title puts it. Apparently his solution involved saying that causation and determination are incompatible and hence that causation implies freedom.
Doing a perfect post on this topic would be hitting a dead horse right between the eyes at a thousand paces.
Doing X for a living is a lower bar than being tenured.
Peccadillo. (Sorry; couldn’t resist the temptation to flag that accidental autology for posterity.)
Thank you for your research. I was misled by the grandparent.
“Eliezer” should be “lukeprog”.
Hah, whups. And so it goes—you correct Eliezer’s lack of examples, gjm corrects your description of Taube, I correct gjm’s description of Taube, and you correct my description of gjm’s description...
Would a chess program that has a table of all the lines on the board that keeps track of whether they are empty or not and that uses that table as part of its move choosing algorithm qualify? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn’t.
Yup. I strongly suspect that Taube was in fact “into qualia territory”, or something along those lines, when he wrote that.
Crackpots frequently build lists of scientists being wrong, misrepresent quotes, and the like, but doing that for librarians? That’s quite outstanding.
I dislike gjm’s and your contempt for librarians. My favorite writer, Jorge Luis Borges, was a librarian. Being a librarian does not disqualify one from commenting.
His rebuttal letter to Norbert Wiener was apparently thought worth publishing by Science, right after a Harvard physicist’s letter and before another letter. This indicates that his thinking was hardly marginal, considered low-quality, or uninformed, and tells us what the thinking was like at the time—which is all the quote was supposed to do!
Taube is hardly ‘just’ a librarian. See my reply to gjm.
My “contempt for librarians” is sufficiently fictional that I am happy to pay the 5 karma point penalty I am currently paying (on account of the very negative comment upthread) to reiterate: I do not have contempt for librarians, nor did I express contempt for librarians; you are drawing an incorrect inference and should update whatever mental model led you to draw it.
(I agree that private_messaging’s comment is extremely silly, and I regret the fact that what I wrote seems to have encouraged it.)
I think you misunderstood. No contempt for librarians—merely a surprise that a librarian would be honoured with a misinterpretation by crackpots.
Here is another famous example: Chandrasekhar’s limit. Eddington rejected the idea of black holes (“I think there should be a law of Nature to prevent a star from behaving in this absurd way!”). Says Wikipedia:
I guess this is not quite what you are asking for, since the math was on Chandrasekhar’s side, and Eddington was pinning his hopes on “new physics”. To be fair, recent discussions about horizon firewalls could be such new physics.
Eddington erroneously dismissed M_(white dwarf) > M_limit ⇒ “a black hole”, but didn’t he correctly anticipate new physics?
Do event horizons (Finkelstein, 1958) not prevent nature from behaving in “that absurd way”, so far as we can ever observe?
http://en.wikipedia.org/wiki/Cosmic_censorship_hypothesis
It’s hard to know what Eddington meant by “absurd way”. Presumably he meant that this hypothetical law would prevent matter from collapsing into nothing. Possibly if Chandrasekhar had figured out the strange properties of the event horizon back in 1935 and had emphasized that whatever weird stuff is happening beyond the final Chandrasekhar limit is hidden from view, Eddington would not have reacted as harshly. But that took another 20-30 years, even though the relevant calculations require at most 3rd year college math. Besides, Chandrasekhar’s strength was in mathematics, not physics, and he could not compete with Eddington in physics intuition (which happened to be quite wrong in this particular case).
The general success rate of breakthroughs is pretty damn low. So I’d argue that most examples of “invalid” pessimism (excluding some stupid ones coming from scientists you never heard of before coming across a quote, and excluding things like PR campaigning by Edison), viewed in the context of almost all breakthroughs failing for some reason you can’t anticipate, are not irrational; they simply reflect the absence of strong evidence in favour of success (and the absence of strong evidence against unknown obstacles) at the time of assessment, and the corresponding regression towards the mean rate of success. They’re merely not as hindsight-resistant as Fermi’s example. You look back at history seeing things that succeeded. Go read the archive of some old journals, and note the zillions of amazing breakthroughs that did not pan out.
If the bomb did not rely on unusual U-235, Fermi would not have been irrational about a 10% probability of emission of secondary neutrons from fission—it is something that most likely either happens for all fissions, or does not happen for any fissions, so the clever “there would be one” argument doesn’t work irrespective of U-235. U-235 is not the most general valid objection; it’s just the objection for which sources are easiest to find. No one did the silly task of writing out that production of secondary neutrons is not a statistically independent fact across different nuclei, and we’re lucky that there’s effectively just one starting nucleus, so we don’t have to, either.
I’m having trouble understanding your second paragraph. This is probably just due to missing background knowledge on my part, but would you mind explaining what you mean by:
and
Thanks!
There was a really silly argument about Fermi’s 10% estimate, scattered over several threads (which the OP talks about). Yudkowsky had been arguing that Fermi’s estimate was too low. He came up with the idea that surely there would have been one element (out of many) that would have worked, so the probability should have been higher. That was wrong because (a) it’s not as if some elements’ fissions released neutrons and some didn’t, and (b) there was only one isotope to start from (U-235), not many.
Do all elements’ fissions release neutrons?
Yes. The issue is that the argument “look at the periodic table, it’s so big, there would be at least one” requires that the fact of fission releasing neutrons be assumed independent across nuclei.
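The difference the independence assumption makes can be sketched with a toy calculation (both the per-element probability and the number of candidates are illustrative assumptions, not historical figures):

```python
# Toy illustration: "surely at least one element works" only boosts the odds
# if the per-element events are statistically independent.

p = 0.10   # assumed chance, per element, that fission emits secondary neutrons
n = 20     # assumed number of candidate heavy elements

# If the events were independent, trying many elements would help a lot:
p_independent = 1 - (1 - p) ** n

# But if the underlying physics is shared, the events are perfectly
# correlated: either all elements emit secondary neutrons or none do.
p_correlated = p

print(f"independent: {p_independent:.3f}")  # independent: 0.878
print(f"correlated:  {p_correlated:.3f}")   # correlated:  0.100
```

Under perfect correlation, the size of the periodic table contributes nothing: the probability stays at whatever the single shared event’s probability is.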
Gotcha, thanks.
I’m not sure if this is justifiable or just an old-fashioned blunder...
-- Auguste Comte, 1835
I’m leaning towards “blunder” myself...
Yeah, blunder. Wikipedia says:
Well, the first half seems approximately correct. The second sentence should have begun with “And by clever application of this means we shall...”.
Even if you interpret “visual” as ‘mediated by photons’, there’s such a thing as neutrino astronomy.
It wasn’t until the 1850s that Ångström discovered that elements both emit and absorb light at characteristic wavelengths, which is what spectroscopic analysis of stars is based on, so I’m leaning toward justifiable.
This has interesting repercussions for Fermi’s paradox.
Yes, particularly in the context that you and I discussed earlier that intelligent life arising earlier might have had an easier time wiping itself out. Although the consensus there seemed to be that it wouldn’t be a large enough difference to matter for serious filtration issues.
I posted the following in a quotes page a few months back. I don’t know how justifiable these were, and these are only questionably pessimism, but there may be some interesting examples in this. In particular, my light knowledge of the subject suggests that there really were extremely compelling reasons to disregard Feynman’s formulation of QED for many years after it was first introduced.
[Footnote to: “This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy”. The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-David Griffiths, Introduction to Elementary Particles, 2008, p. 24
Here’s an example of the ‘opposite’ - a case of unjustifiable correct optimism:
Columbus knew the Earth was round, but he should also have known the radius of the Earth and the size of Eurasia well enough to know that reaching Asia by sailing west was simply impossible with the ships and supplies he went with. It seems to have turned out OK for him, though.
This is probably not a very useful example and I wouldn’t be surprised to see that there were plenty more of these examples.
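The back-of-envelope arithmetic available in Columbus’s day can be sketched roughly as follows (all figures are loose modern approximations of values then knowable, used only to show the order of magnitude):

```python
import math

# Rough figures plausibly available circa 1492 (approximate throughout).
earth_circumference_km = 40_000   # Eratosthenes-style estimate, roughly right
eurasia_span_deg = 130            # longitude from Iberia east to East Asia
westward_span_deg = 360 - eurasia_span_deg

latitude_deg = 28                 # roughly the latitude Columbus sailed along
km_per_deg = earth_circumference_km / 360 * math.cos(math.radians(latitude_deg))

westward_distance_km = westward_span_deg * km_per_deg
print(round(westward_distance_km))  # ~22,500 km of open ocean westward
```

On numbers like these, the westward route is several times longer than any voyage his ships could provision for; Columbus got a survivable answer only by using a much smaller Earth and a much larger Asia.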
Kuhn’s Structure of Scientific Revolutions is all about how an old scientific approach is often more right than the new school—fits the data better, at least in the areas widely acknowledged to be central. Only later does the new approach become refined enough to fit the data better.
For Kuhn, it is not evidence that maintains an old paradigm’s status quo, but persuasion: the old guard keeps remarking on the virtues of their theory, while newcomers in academia have to convince a good number of people before the new theory becomes relevant.
Yes, “Science advances one funeral at a time”, but this, from Wikipedia, is a pretty good summary of a typical “scientific revolution”:
“...Copernicus’ model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, Copernicus’s model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus’ contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus’ cosmology lacked credibility.”
Thomas Malthus’ view that in the long run we will always be stuck in (what we now call) the Malthusian trap. He would have been right if not for the sustained growth given to us by the industrial revolution.
Not clear his view is erroneous given suitable values for “long run”.
How so? Last I checked, human populations could still, if they wanted to, produce children faster than the ~2% average real global growth rate since the Industrial Revolution.
What’s relevant to whether we are in a Malthusian trap is the actual birth rate, not what the birth rate would be if people wanted to have far more children.
I’ll be more explicit then: the ‘sustained growth’ is almost irrelevant since per the usual Malthusian mechanisms it is quickly eliminated. What made Malthus wrong, what he was pessimistic about, was whether people would exercise “moral restraint”—in other words, he didn’t think the demographic transition would happen. It did, and that’s why we’re wealthy.
But how do you know it’s the “moral restraint” that averted the Malthusian catastrophe, rather than the innovations (by the additional humans) that amplified the effective carrying capacity of available resources? In fact, the moral restraint could be keeping us closer to the catastrophe than if we had been producing more humans.
Because population growth can outpace innovation growth. This is not a hard concept.
I know. But your post seemed to be taking the position in favor of population growth (change) as the relevant factor rather than innovation. I was asking why you (seemed to have) thought that.
Population growth and innovation are two sides of a scissor: innovation drives potential per capita up, population growth drives it down. But the blade of population growth is far bigger than the blade of innovation growth, because everyone can pump out children and few can pump out innovation.
Hence, innovation can be seen as necessary—but it is not sufficient, in the absence of changes to reproductive patterns.
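The scissor metaphor above can be sketched numerically (both growth rates are illustrative assumptions: ~2%/yr is the rough long-run output growth figure cited in this thread, and ~4%/yr stands in for unconstrained population growth under high fertility and low mortality):

```python
# Minimal sketch of the "two blades": total output grows at a steady
# innovation-driven rate while population grows at its biological maximum.

output_growth = 0.02       # assumed ~2%/yr growth in total output
population_growth = 0.04   # assumed ~4%/yr unconstrained population growth

per_capita = 1.0           # per capita income, normalized to 1 at the start
for year in range(100):
    per_capita *= (1 + output_growth) / (1 + population_growth)

print(f"{per_capita:.2f}")  # 0.14 -- per capita income falls ~86% in a century
```

With any sustained gap in favor of population growth, per capita income decays geometrically toward subsistence, which is the Malthusian point: the level of innovation sets the population the trap supports, not the income people enjoy inside it.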
Okay, that’s where I disagree: Each additional person is also another coin toss (albeit heavily stacked against us) in the search for innovators. The question then is whether the possible innovations, weighted by probability of a new person being an innovator (and to what extent) favors more or fewer people.
There’s no reason why one effect is necessarily greater than the other and hence no reason for the presumption of one blade being larger.
There is no a priori reason, of course. We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.
Yet, the world we actually live in doesn’t look like that. A woman can (and historically, many have) spend her life in the kitchen making no such technological contributions but having 10 kids. (In fact, one of my great-grandmothers did just that.) It was not China or India which launched the Scientific and Industrial Revolutions.
The ability to produce lots of children does not at all work against the ability of innovators and innovator probability to overcome their resource-extraction load. In order for your strategy to actually work against the potential innovation, you would have to also suppress the intelligence (probability) of your children to the point where the innovation blade is sufficiently small. And you would have to do it without that action itself causing the die-off, and while ensuring they can continue to execute the strategy on the next generation. And keep in mind, you’re working against the upper tail of the intelligence bell curve, not the mode.
Innovation in this context needn’t be revolution-size. China and India (and the Islamic Empire) did innovate faster than the West, and averted many Malthusian overtakings along the way (probably reaching 800 years ahead at their zenith). Malthus would have known about this at the time.
I’m not following your terms here. Obviously the ability to produce lots of children does in fact sop up all the additional production, because that’s why per capita incomes on net essentially do not change over thousands of years and instead populations may get bigger. So you can’t mean that, but I don’t know what you mean.
They innovated faster at some points, arguably. And the innovation such as in farming techniques helped support a higher population—and a poorer population. Malthus would have known this about China, did, and used China as an example of a number of things, for example, the consequences of a subsistence wage which is close to starvation http://en.wikisource.org/wiki/An_Essay_on_the_Principle_of_Population/Chapter_VII :
That’s not even required, though. What we’re looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it’s not clear which way that swings in general.
Sure, it’s just an example which does not seem to be impossible but where the blade of innovation is clearly bigger than the blade of population growth. But the basic empirical point remains the same: the world does not look like one where population growth drives innovation in a virtuous spiral or anything remotely close to that*.
* except, per Miller’s final reply, in the very wealthiest countries post-demographic-transition where reproduction is sub-replacement and growth maybe even net negative like Japan and South Korea are approaching, then in these exceptional countries some more population growth may maximize innovation growth and increase rather than decrease per capita income.
I can’t prove this, but I believe that in the United States and Western Europe we would still be rich (in the sense that calorie deprivation wouldn’t pose a health risk to the vast majority of the population) if the birth rate had stayed the same since Malthus’s time.
That makes no sense to argue: Malthus’s time was part of the demographic transition. Of course I would agree that if the demographic transition continued post-Malthus—as it did—we would see higher per capita (as we did).
But look up the extremely high birth rates of some times and places (you can borrow some figures from http://www.marathon.uwc.edu/geography/demotrans/demtran.htm ), apply modern United States & Western Europe infant and child mortality rates, and tell me whether the population growth rate is merely much higher than the real economic growth rates of ~2% or extraordinarily higher. You may find it educational.
But I believe that from the point of view of maximizing the per person wealth of the United States and Western Europe the population growth rate has been much, much too low since the industrial revolution. (I admittedly have no citations to back this up.)
Maybe. That’s not the same thing as what you said initially, though.
I was always under the impression that what thwarted his hypothesis was the rise of effective and widespread birth control. I remember reading one of his works and noting that it was operating on the assumption that, to reduce birthrate to sustainable levels, sex would have to be reduced, and that was unlikely. It is unlikely, but it’s also mostly decoupled from childbirth now, at least in the developed world.
Have I misinterpreted something here?
I believe he considered the possibility of birth control, referring to it as “immorality”.
We’ll just evolve for restraint not to work any more.
Yes, that’s the question: is the demographic transition temporary? I’ve brought it up before: http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/
(Was there an SMBC comic or something about men evolving a condom-breaking mechanism in their penis?)
We’re rapidly evolving a condom-not-putting-on mechanism in the brain.
“Watch out for that cliff!”
“It looks pretty far off, and besides, we’re turning left soon anyway.”
“But we could keep accelerating!”
Your reply seems completely irrelevant to the Malthusian point that population growth can always exceed total factor production, and so it is population growth—or lack of growth—which dominates and determines per capita.
This blog post claims that only a few years before the Wright brothers’ success, the consensus was that flying machines would necessarily have to be less dense than air (like hot-air balloons).
All is such a strong word unless supplemented with qualifiers. I question the plausibility of the arguments supporting that absolute. The route “wait for an extra century or two of particle physics research and spend a few trillion producing the initial seed stock” would still be available.
In context, Fermi was considering something rather more short-term: WW2.
That said, he may not have scoped his statement to such a small scale.
One of many suitable and sufficient qualifiers that could make the arguments plausible.