The Popularization Bias
I noticed that most recommendations in the recent recommended readings thread consist of either fiction or popularizations of specific scientific disciplines. This introduces a potential bias: aspiring rationalists may never learn about some fields or ideas that are important for the art of rationality, just because they’ve never been popularized.
In my recent post on the fair division of black-hole negentropy, I tried to introduce two such ideas/fields (which may be one too many for a single post :). One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines). This is a well-known result in thermodynamics, plus an obvious application of it. Some have complained that the idea is too sci-fi, but actually the opposite is true. Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I’ve never read or watched a piece of science fiction that explored this one. (BTW, in case it’s not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.)
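(For the record, the standard Bekenstein–Hawking formulas make the quadratic scaling explicit. For a black hole of mass M with horizon area A,

    S = \frac{k_B c^3 A}{4 \hbar G} = \frac{4 \pi G k_B}{\hbar c} M^2, \qquad T = \frac{\hbar c^3}{8 \pi G k_B M}.

Since (M_1 + M_2)^2 > M_1^2 + M_2^2, a black hole built from two parties’ pooled matter can absorb strictly more entropy than two separate holes built from their individual shares, which is where the gains from cooperation enter.)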
Similarly, there are many popularizations of topics such as the Prisoner’s Dilemma and the Nash Equilibrium in non-cooperative game theory (and even a blockbuster movie about John Nash!), but I’m not aware of any for cooperative game theory.
Much of Less Wrong, and Overcoming Bias before it, can be seen as an attempt to correct this bias. Eliezer’s posts have provided fictional treatments or popular accounts of probability theory, decision theory, MWI, algorithmic information theory, Bayesian networks, and various ethical theories, to name a few, and others have continued the tradition to some extent. But since popularization and writing fiction are hard, and not many people have both the skills and the motivation to do them, I wonder if there are still other important ideas/fields that most of us don’t know about yet.
So here’s my request: if you know of such a field or idea, just name it in a comment and give a reference for it, and maybe say a few words about why it’s important, if that’s not obvious. Some of us may be motivated to learn about it for whatever reason, even from a textbook or academic article, and may eventually produce a popular account for it.
The No Free Lunch theorems of search could do with a popular write-up.
Basically, to tell people making AIs that they need to tailor their methods to the world/problems they are trying to deal with.
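For reference, here is the Wolpert–Macready statement (my paraphrase of their 1997 paper): for any two search algorithms a_1 and a_2,

    \sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2),

where the sum runs over all possible objective functions f and d_m^y is the sequence of cost values observed after m evaluations. Averaged over every possible problem, all algorithms perform identically, so any real performance guarantee has to come from assumptions about which problems the AI will actually face.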
There are an awful lot of caveats that apply to the No Free Lunch theorem. Is it really very applicable in practice? If you’re just going to use it as a hand-wave concept, I think it’s more honest to use TANSTAAFL and make your lack of rigorous mathematical backing clear.
So, can anybody list a few lessons we can draw from the NFL theorem?
Occam’s razor means that the no free lunch theorems are practically irrelevant.
One must select what’s important; there is too much science to tell about it all. “Correcting” popularization bias must consist in steering the selection according to some specific criteria, different from the sum total of popularization in the world. Since what’s important to specific people depends heavily on their interests, it’s unlikely there is a magic bullet that more or less universally improves on the available popularized material.
The valid way out of this debacle seems to be to acquire general knowledge, to learn to see what science knows and understand it for yourself, given enough effort. Popularizing this skill, instead of popularizing specific content, may be a better strategy.
To start things off, here are my entries:
recursion theory. Theory of Recursive Functions and Effective Computability by Hartley Rogers. The theory of what’s computable is part of recursion theory, but it turns out that a lot more is known about the realm of the uncomputable than one might expect.
hypercomputation. Various papers by Toby Ord. This is related to the above, but more about how one might practically realize the higher forms of computation. I just noticed there’s a recent book about this topic. Hypercomputation: Computing Beyond the Church-Turing Barrier by Apostolos Syropoulos.
Hypercomputation seems like a misguided attack on the Church-Turing thesis to me. If nobody can build a hypercomputer—and there’s no evidence that anyone ever will be able to—then I am not sure I can see what the point is.
I guess it’s because there is no proof that someone won’t find a way of computing the uncomputable. It seems unlikely to me—but I suppose there is not much harm in philosophers speculating.
Re: Toby’s “Regardless of the actual computational limits of our universe, I have no doubt that the study of hypercomputation will lead to many important theoretical results across computer science, philosophy, mathematics and physics.”
Hmm. What have we got so far out of Omegas and Oracles? I expect what we will get out of Hypercomputation will be mostly confusion—since it sounds as though it is a field with a real object of study.
Well, one practical result we’ve got is that we shouldn’t program AIs to assume (either implicitly or explicitly) that the universe must be computable. See this discussion between Eliezer and me about this.
Building agents around assumptions whose truth we are not confident of seems like a dubious strategy.
We are fairly confident of the Church-Turing thesis, though: “Today the thesis has near-universal acceptance” - http://en.wikipedia.org/wiki/Church–Turing_thesis
The Theory of Bayesian Aggregation—Bayesian Group Agents and Two Modes of Aggregation by Mathias Risse.
ABSTRACT: Suppose we have a group of Bayesian agents, and suppose that they would like for their group as a whole to be a Bayesian agent as well. Moreover, suppose that those agents want the probabilities and utilities attached to this group agent to be aggregated from the individual probabilities and utilities in reasonable ways. Two ways of aggregating their individual data are available to them, viz., ex ante aggregation and ex post aggregation. The former aggregates expected utilities directly, whereas the latter aggregates probabilities and utilities separately. A number of recent formal results show that both approaches have problematic implications. This study discusses the philosophical issues arising from those results. In this process, I hope to convince the reader that these results about Bayesian aggregation are highly significant to decision theorists, but also of immense interest to theorists working in areas such as ethics and political philosophy.
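Here is a toy illustration of how the two modes can come apart; the numbers are my own invention, not from the paper (a sketch in Python):

    # Two agents, two acts, two states (sun/rain). All numbers are hypothetical.
    p_sun = [0.2, 0.8]                      # each agent's probability of "sun"
    u = {                                   # u[act][agent] = (utility if sun, utility if rain)
        "picnic": [(10, 0), (2, 0)],
        "stay":   [(4, 4), (1, 1)],
    }

    def eu(act, i):
        """Agent i's expected utility for an act."""
        u_sun, u_rain = u[act][i]
        return p_sun[i] * u_sun + (1 - p_sun[i]) * u_rain

    # Ex ante: aggregate expected utilities directly (here, by averaging).
    ex_ante = {a: (eu(a, 0) + eu(a, 1)) / 2 for a in u}

    # Ex post: average probabilities and utilities separately, then compute EU.
    p = sum(p_sun) / 2
    ex_post = {a: p * (u[a][0][0] + u[a][1][0]) / 2
                  + (1 - p) * (u[a][0][1] + u[a][1][1]) / 2 for a in u}

    print(ex_ante)  # {'picnic': 1.8, 'stay': 2.5} -- group prefers "stay"
    print(ex_post)  # {'picnic': 3.0, 'stay': 2.5} -- group prefers "picnic"

The same individual data yield opposite group rankings depending on the mode of aggregation, which is the kind of problem the formal results generalize.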
Wasn’t as enlightening as the abstract made it sound.
The results seem quite significant, even if it’s not clear what they mean. One possible interpretation is that expected utility maximization is not the correct ideal for group rationality.
Or they just do it totally wrong.
Good find, thanks!
I wonder if I over-corrected upon learning about cooperative game theory. Based on the relative lack of responses here, perhaps there aren’t that many nuggets of knowledge left to be picked off the street, so to speak.
I’m curious, was anyone else aware of cooperative game theory, before I mentioned it here?
I had vaguely heard of it and the main result you presented, but I didn’t find it very interesting—and I still don’t, even after your post. (The black hole material was much more interesting.)
In comparison, the first time I read about the Prisoner’s Dilemma and the Tragedy of the Commons, my reaction was: ‘this is amazing! It provides a new way to look at just about everything—littering on sidewalks, war, traffic & SUVs, cheating on taxes...’ For a year or two, I saw everything through that lens.
Yes. Not to sound like a jerk, but I didn’t realize it was so poorly known.
On the issue of nuggets of knowledge left, I think it’s more that we just don’t know where to find them, or which ones aren’t already well known. It will take someone who knows the details of some field realizing that a popular account is needed, because even his/her fellow smart people don’t know about it.
I’d read the Wikipedia page before; for some reason it didn’t seem interesting enough to pursue further.
Yup. Although I think that the core is possibly a more useful concept than the Shapley value. (I actually had a vague suspicion it could be useful for Toby and Nick Bostrom’s work on dealing with moral uncertainty, but never bothered to follow up.)
Yes, when I first learned about the Shapley value, I bothered everyone I knew by telling them all excited-like about it when they obviously didn’t much care. :)
Complexity theory. Back when I learned it, Garey and Johnson was the standard book, but there must be more up to date sources—perhaps even popular ones (for some less than Harry Potter-sized value of popular).
Michael Sipser’s Introduction to the Theory of Computation is an extremely friendly introduction to theory of computation, including complexity theory and computability theory. As opposed to Garey and Johnson, it is broader and shallower, covering computability theory as well as complexity theory (incl. space complexity and other non-NP-complete topics), and probably in a much friendlier fashion. It’s one of the few compsci books I’ve ever read that I would describe as a “page turner”: it was so interesting and readable that I couldn’t put it down, and I still like to pick it up from time to time just to reread sections for pleasure.
[The 1st edition is much cheaper than the 2nd edition for anybody interested in buying ($10-$20 used, versus >$55 used for the 2nd edition or $115 new).]
“The Gravity Mine” by Stephen Baxter. http://www.infinityplus.co.uk/stories/gravitymine.htm
That’s not a bad story, but the author seems more interested in using black holes as exotic locales with cool “special effects”, rather than exploring the implications of their physics. The reader walks away entertained, but not really having learned anything about black-hole thermodynamics.
Re: One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines).
What would anyone want a black hole entropy dump for? If you are in orbit around a star, you can just let entropy radiate off as heat. Compared to that, sending it into the nearest black hole would probably require a lot of energy. This seems like a bad idea—so what is the proposed point?
The point is that a black hole is much colder than interstellar space, and its temperature decreases as its mass increases. This coldness implies that it takes much less energy to dump a certain amount of entropy into a black hole than into interstellar space. Of course you probably don’t want to ship that entropy across interstellar distances before dumping. That would likely wipe out any savings. You’d create a black hole close by, or build your civilization around an existing one.
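To put a number on “much colder” (a quick sketch in Python, using standard SI constants):

    # Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B)
    from math import pi
    hbar, c, G, k_B = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
    M_sun = 1.989e30                                  # kg
    T = hbar * c**3 / (8 * pi * G * M_sun * k_B)
    print(T)  # ~6.2e-8 K, versus ~2.7 K for the cosmic microwave background

And since T scales as 1/M, a more massive hole is colder still.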
It still doesn’t seem to make sense. Building a black hole anywhere near a sentient agent seems like a really, really bad idea. Orbiting around one doesn’t help you drop things into it much—because of orbital inertia. The suggestion seems rather like proposing that we dump the planet’s excess heat into the Sun—as opposed to radiating it off in all directions. Yes, we could build a heat ray and point it at the sun—but if you think about that for a moment, you will realise why it wouldn’t help get rid of entropy, and would actually just make things worse.
The tiny relative temperature difference between the surface of the hole and interstellar space hardly makes much difference if you are many millions of miles away from it. Also, the hole is likely to be surrounded by extremely hot stuff in orbit around it. Are you sure that you have thought this idea through?
By the time your civilisation is taking advantage of black holes, it’s large enough that even a small temperature difference can scale to quite a bit of negentropy. Further, you don’t have to be in orbit: you can build a Dyson shell around the hole at such a distance that the surface gravity is one g. (Or several shells, if people prefer different levels of gravity.) Then there’s no orbital velocity to deal with. (And in any case, you could brake by tidal friction and extract some entropy that way.) Or, more briefly: why are you objecting to the practical details of a thought experiment? Nothing about the game theory relies on black holes or the particular exponent 2; it could just as well be mass^1.5, and the analysis would remain the same, although the numbers would change a bit.
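For concreteness, the 1 g shell around a solar-mass hole isn’t even that large; a quick Newtonian estimate, ignoring the shell’s own mass:

    # Radius at which surface gravity G*M/r^2 equals 1 g
    G, M_sun, g = 6.674e-11, 1.989e30, 9.81
    r = (G * M_sun / g) ** 0.5
    print(r)  # ~3.7e9 m, i.e. about 5 solar radii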
How is a Dyson sphere anything other than “in orbit”? Do you not know how they are supposed to work? Incidentally, Dyson spheres are a pretty silly idea as well. Slightly more realistic are rings—e.g. see my http://timtyler.org/the_rings_of_earth/
There are multiple types of Dyson sphere. Dyson’s original vision, a swarm of satellites, would be in orbit, but the popular version more commonly seen in fiction—a solid shell—would not, any more than the Earth orbits its own core (although any one point on the shell could plausibly be said to orbit the centre, provided the sphere is spinning).
A solid Dyson sphere is a dumb idea; the dynamics are unstable. See Niven’s essay on the dynamics of Ringworld for the problems, and realize a sphere would be even worse. I don’t remember whether he discussed that in “Bigger than Worlds” or in an essay specifically on building Ringworld; he did discuss the dynamics problems in the novels as well.
So you have to expend a bit of energy moving it back to the midpoint every so often. What are attitude jets for?
In fantasy novels, you mean?
Regarding this discussion, I’m totally confused what people are talking about. It sounds like you want to take some of your excess energy and throw it into a black hole. Wouldn’t it be smarter to give it to me? How can energy be “excess”?
Eliezer has a post that explains some of the background assumed here: http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/.
I have just finished reading this article. I still have no idea what it is that you intend to do with the black hole, or why it’s useful. Seriously, not even an inkling. And I seem to be unique in this regard, which sucks.
The only way that I can think of for a black hole to reduce entropy is if you throw things into it. Give them to me.
Tilba, Wei’s earlier post pointed to this article:
http://weidai.com/black-holes.txt
You might also need to know that computation can be done in principle almost without expending energy, and the colder you do the computation, the less energy is wasted. Hence being cold is a good thing, and black holes are very cold.
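The key result here is Landauer’s principle: erasing one bit of information costs at least

    E = k_B T \ln 2

of free energy, where T is the temperature of the heat sink you dump the resulting entropy into. Against the ~2.7 K microwave background that is about 2.6e-23 J per bit; against a solar-mass hole at ~6e-8 K it is tens of millions of times cheaper.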
I didn’t get it right away, but now that I do, it’s pretty ingenious. Let me see if I got it right. Build a big ball in space. If the ball was empty, starlight and cosmic background would heat it up, the inner surface would emit photons, and they would bounce around the shell—so you’re back to square one. But the black hole at the center can absorb those photons without becoming hot. And the photons are unusable because they are ambient.
On the other hand, there is now a temperature difference between the inside and the outside. Can it be used to make usable energy?
Not energy, entropy. Energy is useful—entropy is useless.
+1; indeed, this is interesting from a sci-fi-itch-scratching viewpoint, but I guess we have the next 10^6 years to worry about the details.
Anyway, I like LW for bringing such things to my attention (thanks Wei_Dai!), but apart from being interesting, this doesn’t seem like an idea that needs mass-popularization, does it?
You ask a fair question, I think. Here are some potential short-term implications of black-hole negentropy:
The far future will most likely not be dominated by an everyone-for-himself type of scenario (like Robin Hanson’s Burning the Cosmic Commons). Knowing that, and possibly having a chance to see the far future for yourself, does that affect your short-term goals?
There is less need to adopt drastic policies to prevent the Burning the Cosmic Commons scenario.
The universe is capable of supporting much more life than we might intuit, even after seeing calculations like the one in Nick Bostrom’s Astronomical Waste, which fail to take into account quadratic negentropy. What are the ethical implications of that? I’m not sure yet, but I’d be surprised if there weren’t any.
I’d like to see a more popular discussion of Aumann’s disagreement theorem (and its follow-ons), and of what I believe is called Kripkean possible-world semantics, an alternative formulation of Bayes’ theorem, used in Aumann’s original proof. The proof is very short, just a couple of sentences, but explaining the possible-world formalism is a big job.
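For reference, the core statement from Aumann’s 1976 paper “Agreeing to Disagree” is: if two agents have a common prior P, and their posteriors for an event E,

    q_i = P(E \mid \mathcal{I}_i), \quad i = 1, 2,

are common knowledge (each knows the other’s posterior, knows that the other knows theirs, and so on), then q_1 = q_2. They cannot agree to disagree.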
I believe the Silent Ones in the Golden Age trilogy used black holes for this purpose.
In Dr. Who, the Time Lords used a black hole as a ‘mysterious energy source’.
That has as much relevance to black-hole negentropy as Demolition Man does to cryonics. In science fiction, the inability to explain something is indistinguishable from attributing it to magic.
Meh. Given that the impression was that no science fiction deals with it, I’d count it, just as I’d count Demolition Man as relevant to cryonics.
As far as I can recall, the last time we saw a black hole in Doctor Who, the TARDIS pulled another spaceship across its event horizon to safety. Just prior to that, they faced off against the actual literal Devil, who was chained in a hellish inferno inside a moon serviced by telepathic squid-people. I love Doctor Who, but I have a hard time calling it science fiction.
Aha. You’re referring to that other show, also coincidentally called Doctor Who. But yes, the original series was just about that silly.
As for the implausibility of telepathic squid people, just stay out of the dark places of the world and you should be fine for now. Until then, Cthulhu fhtagn.
Same for the Ori in the SG-1 episode Beachhead (transcript here; summary and transcript of prior black-hole episode here and here, which may partly explain the writers’ thinking).
Re: if it’s not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.
That is supposed to help clear up the issue?!? It has rather the opposite effect here.
If anyone else would like to read up on maximum entropy thermodynamics—particularly Dewar’s recent work—that would be cool. This material explains much about why self-organising systems (including living ones) behave as they do—in thermodynamic terms. I discuss this here now and again, but—despite the links to Bayes and Jaynes—no-one seems to know very much about it.
A primer: http://en.citizendium.org/wiki/Life/Signed_Articles/John_Whitfield
That looked to be interesting until I glanced down at Figure 1, which shows tropical forests as the most entropy-exporting environments.
Eeek! Tropical forests the most entropy-exporting? Not, say, the 1000 °C regions below the earth’s surface? Not volcanoes or geysers?
Volcanoes and geysers are mostly uncommon, intermittent phenomena. Some volcano craters do stay pretty hot, for extended periods, though—it’s true.
I’m not sure how to measure the rate of entropy dissipation within the Earth, but I suspect it doesn’t radiate as much heat from the surface as ultimately comes from the sun.
The insides of nuclear reactors and other power plants are probably the most entropy-exporting places of all, again per unit area. Whether those count as “environments” could be debated.