Yes, and I did look at something like four of the individual studies of depression, focusing on the ones testing pills so they would be comparable to the Prozac trial. As I said in the post, they all gave me the same impression: I didn’t see a difference between the placebo and no-pill groups. So it was surprising to see the summary value of −0.25 SMD. Maybe it’s some subtle effect in the studies I looked at which you can see once you aggregate. But maybe it’s heterogeneity, and the effect is coming from the studies I didn’t look at. As I mentioned in the post, not all of the placebo interventions were pills.
From a quick look on Wikipedia I don’t see anything, except for patients who report side effects from a placebo. But of course those could be symptoms they would have had in any case, which they incorrectly attribute to the placebo.
I don’t see how you could get an accurate measure of a nocebo effect from misdiagnoses. I don’t think anyone is willing to randomize patients to be misdiagnosed. And if you try to do it observationally, you run into the problem of distinguishing the effects of the misdiagnosis from whatever brought them to the doctor seeking diagnosis.
I took a look at The Secret of Our Success, and saw the study you’re describing on page 277. I think you may be misremembering the disease. The data given is for bronchitis, emphysema and asthma (combined into one category). It does mention that similar results hold for cancer and heart attacks.
I took a look at the original paper. They checked 15 diseases, and bronchitis, emphysema and asthma was the only one that was significant after correction for multiple comparisons. I don’t agree that the results for cancer and heart attacks are similar. They seem within the range of what you can get from random fluctuations in a list of 15 numbers. The statistical test backs up that impression.
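To put a number on that intuition (mine, not from the paper): under the null, the chance that at least one of 15 independent tests comes in below p = 0.05 is already better than a coin flip.

```python
# Chance of at least one nominally "significant" result among 15 null tests,
# and the Bonferroni-style corrected threshold each test would have to clear instead.
p_single, n_tests = 0.05, 15
print(1 - (1 - p_single) ** n_tests)   # about 0.54
print(p_single / n_tests)              # about 0.0033
```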
If this is a real difference, I would expect it to have something to do with lifestyle changes rather than stress. But it goes in the opposite direction of what I would expect. That is, I would expect those with a birth year predisposing them to lung problems to avoid smoking (I did find some Chinese astrology advice to that effect), and therefore to live longer, when in fact they live shorter. So that doesn’t really make sense.
This result seems suspicious to me. First of all because it’s just a couple of diseases selected out of a list of mostly non-significant tests, but also because people probably do lots of tests of astrology that they don’t publish. I wouldn’t bet on it replicating in another population.
It’s okay, the post is eight pages long and not super internally consistent, basically because I had to work on Monday and didn’t want to edit it. I don’t make a post like that expecting everyone to read every paragraph and get a perfectly clear idea of what I’m saying.
Observation of the cosmic microwave background was a simultaneous discovery, according to James Peebles’ Nobel lecture. If I’m understanding this right, Bob Dicke’s group at Princeton was already looking for the CMB based on a theoretical prediction of it, and was doing experiments to detect it, with relatively primitive equipment, when the Bell Labs publication came out.
“Norman Borlaug’s Green Revolution” seems to give the credit for turning India into a grain exporter solely to hybrid wheat, when rice is just as important to India as wheat.
Yuan Longping, the “father of hybrid rice”, is a household name in China, but in English-language sources I only seem to hear about Norman Borlaug, who people call “the father of the Green Revolution” rather than “the father of hybrid wheat”, which seems more appropriate. It seems that the way people tell the story of the Green Revolution has as much to do with national pride as the facts.
I don’t think this affects your point too much, although it may affect your estimate of how much change to the world you can expect from one person’s individual choices. It’s not just that if Norman Borlaug had died in a car accident hybrid wheat might still have been developed, but also that if nobody had developed hybrid wheat, there would still have been a Green Revolution in India’s rice farming.
Green fluorescent protein (GFP). A curiosity-driven marine biology project (how do jellyfish produce light?), which was later adapted into an important and widely used tool in cell biology. You splice the GFP gene onto another gene, and you’ve effectively got a fluorescent tag so you can see where the protein product is in the cell.
Jellyfish luminescence wasn’t exactly a hot field, and I don’t know of any near-independent discoveries of GFP. However, when people were looking for protein markers visible under a microscope, multiple labs tried GFP simultaneously, so its use as a tool was overdetermined by that point. If GFP hadn’t been discovered, would they have done the marine biology as a subtask, or just used their next best option?
Fun fact: The guy who discovered GFP was living near Nagasaki when it was bombed. So we can consider the hypothetical where he was visiting the city that day.
I like it a lot. I’m mainly a tumblr user, and on tumblr we’re all worried about the site being shut down because it doesn’t make any money. I love having LessWrong as a place for writing up my thoughts more carefully than I would on tumblr, and it also feels like a sort of insurance policy if tumblr goes under, since LessWrong seems able to maintain performance and usability with a small team. The mods seem active enough that they frontpage my posts pretty quickly, which helps connect them with an audience that’s not already familiar with me, whereas on tumblr I haven’t gotten any readers through the tag system in years and I’m coasting on inertia from the followers I already have.
As someone who grew up with Greg Egan on the shelf, I want to note that Greg Egan said basically the same thing about “Neuromancer” (that it cares more about being fashionable than having the characters think through their situation), and “Quarantine” and “Permutation City” were in part responses to cyberpunk, so perhaps all is not lost.
Backing that up with Greg Egan interview quotes.
From the Karen Burnham interview, on hating “Neuromancer”, and on the influence of cyberpunk on “Quarantine”:
I read Neuromancer in 1985, because I was voting for the Hugos that year and I thought I ought to read all the nominated novels. I really hated it; aside from the style and the characters, which definitely weren’t to my taste, a lot of things about the technology in the book seemed very contrived and unlikely, especially the idea that anyone would plug in a brain-computer interface that they knew a third party could use to harm them.
Over the next few years I read some Rucker and Sterling novels, which I definitely enjoyed more than Gibson. So there was some reasonable stuff written under the cyberpunk banner, but none of it felt very groundbreaking to anyone who’d been reading Dick and Delany, and if it hadn’t been wrapped in so much hype I probably would have enjoyed it more. In fact, the way cyberpunk as a movement influenced me most was a sense of irritation with its obsession with hipness. I don’t think there’s much doubt that “Axiomatic” and the opening sections of Quarantine have a kind of cyberpunk flavour to them, but my thinking at the time would have been less “Maybe I can join the cyberpunk club!” and more “Maybe I can steal back private eyes and brain-computer interfaces for people who think mirror shades are pretentious, and do something more interesting with them.”
From the Marisa O’Keeffe interview, something that corroborates what Eliezer Yudkowsky said about “Neuromancer” characters worrying how things look on a t-shirt:
A lot of cyberpunk said, in effect: “Computers are interesting because cool, cynical men (or occasionally women) in mirrorshades do dangerous things with them.” If that really is the most interesting thing you can imagine about a computer, you shouldn’t be writing SF.
From the Russell Blackford interview, on the influence of cyberpunk on “Permutation City”:
I recall being very bored and dissatisfied with the way most cyberpunk writers were treating virtual reality and artificial intelligence in the ’80s; a lot of people were churning out very lame noir plots that utterly squandered the philosophical implications of the technology. I wrote a story called “Dust”, which was later expanded into Permutation City, that pushed very hard in the opposite direction, trying to take as seriously as possible all the implications of what it would mean to be software. In the case of Permutation City that included some metaphysical ideas that I certainly wouldn’t want to repeat in everything I wrote, but the basic notions about the way people will be able to manipulate themselves if they ever become software, which I developed a bit further in Diaspora, seem logically unavoidable to me.
Something depressing is certainly going on in mainstream culture, since for example “The New York Times” hasn’t had a review of a Greg Egan book since “Diaspora” in 1998, except to suggest “that Egan doesn’t fully understand how oppression works — or that he is trying to make an inappropriate point”.
But science fiction seems alright, if it reacted to “Neuromancer” exactly along the lines of Eliezer Yudkowsky’s reaction to this post, producing some of the most beloved (by sci-fi fans) science fiction of the 90s. And I still see every new Alastair Reynolds book in the sci-fi sections of non-specialty bookstores.
I’m looking into the history of QM interpretations, and there are some interesting deviations from the story as told in the quantum sequence. So, of course, single-world was the default from the 1920s onward and many-worlds came later. But the strangeness of a single world was not realized immediately. The concept of wavefunction collapse seems to originate with von Neumann in the 1930s, as part of his mathematization of quantum mechanics, which makes sense in a way: imagine trying to talk about collapse without the concept of operators acting on a Hilbert space. I haven’t read von Neumann’s book, but the discussions of a measurement problem in the 50s, 60s, and 70s seem to draw on him directly.

The idea that QM requires fundamental, irreducible minds seems to date to Wigner’s “Remarks on the Mind-Body Question”, published in 1961. Wigner mentions that Heisenberg thought QM was fundamentally describing our knowledge of the world, but that seems different from consciousness specifically causing collapse, though I don’t know Heisenberg’s views well. What makes this weird is that this is after many-worlds! Notably, DeWitt’s 1970 article which popularized many-worlds seems to associate the “consciousness-causes-collapse” idea with Wigner specifically, giving Wigner more credit for it than Wigner gives himself. It’s not quite correct to say that “consciousness-causes-collapse” originated with Wigner’s article, since the “Wigner’s friend” thought experiment was actually discussed by Everett. Unsurprising, since Wigner was a professor at Everett’s school, and they likely discussed these issues.

So the deviation from the story in the quantum sequence is that “consciousness-causes-collapse” was not the default theory which many-worlds had to challenge. Instead, they were contemporary competitors, introduced at basically the same time, with the same motivation. Of course, it remains the case that single-world was the default, and Wigner was arguably just following that where it led. But the real “Copenhagen” opinion, it seems to me, was against talking about a picture of the world at all. To say that there is some non-linear, irreversible, consciousness-initiated collapse, actually occurring in the world, is already a heresy in Copenhagen.
Yes, Sleeping Beauty has to account for the fact that, when the coin flip comes up such that she’s woken on both Monday and Tuesday, betting each time that it’s Monday means she will surely lose one of the two bets. But she is also sure to win the other one, and since “it’s Monday” is the more probable proposition at any given awakening, she’s the one who has to put the extra dollar in the pot: betting $2 to $1 rather than $1 to $1. The guaranteed win on Monday pays for the loss when she makes the same bet on Tuesday. In expectation this is a fair bet: she either puts $2 in the pot and collects the $3 pot, coming out $1 ahead, or collects $3 on Monday and then loses her $2 stake on Tuesday, coming out $1 behind, and the two branches are equally likely.
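To make the arithmetic concrete, here’s a quick sketch (my own, assuming the standard setup: Heads means she’s woken on Monday only, Tails means Monday and Tuesday, and she makes the same “today is Monday” bet at every awakening):

```python
# Expected profit per run of the experiment, when Beauty stakes her_stake
# against the counterparty's their_stake on "today is Monday" at every awakening.
def expected_profit(her_stake, their_stake):
    pot = her_stake + their_stake
    heads = pot - her_stake                    # one awakening (Monday): she wins the pot
    tails = (pot - her_stake) - her_stake      # wins on Monday, loses her stake on Tuesday
    return 0.5 * heads + 0.5 * tails

print(expected_profit(1, 1))  # 0.5 -- even odds favor her
print(expected_profit(2, 1))  # 0.0 -- fair once she is the one offering the extra dollar
```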
Anyway, feeling something is an action. I think it’s a mistake when people take “anticipation” as primary. Sure, “Make Beliefs Pay Rent (In Anticipated Experiences)” is good advice, in a similar way as a guide to getting rich is good advice. Predictive beliefs, like money, are good to pursue on general principle, even before you know what you’re going to use them for. But my anticipation of something is good for me to the extent that the consequences of anticipating it are good for me. Like any other action.
I think as usual with rationality stuff there’s a good analogy to statistics.
I’m very happy that I never took Stats 101, and instead learned what a p-value was in a math department “Theory of Statistics” class. Because as I understood it, Stats 101 teaches recipes, rules for when a conclusion is allowed. In the math department, I instead learned properties of algorithms for estimation and decision. There’s a certain interesting property of an estimation algorithm for the size of an effect: how large will that estimate be, if the effect is not there? Of a decision rule, you can ask: how often will the decision “effect is there” be made, if the effect is not there?
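For example, here’s a toy simulation (mine, not from any textbook) of that last property for the most basic decision rule, a two-sided test at the 5% level:

```python
import numpy as np

# How often does the rule say "effect is there" when the true effect is zero?
rng = np.random.default_rng(0)
n, trials, threshold = 30, 10_000, 1.96
false_positives = 0
for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)        # no effect: true mean is 0
    z = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    if abs(z) > threshold:
        false_positives += 1
print(false_positives / trials)   # roughly 0.05, by construction of the rule
```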
Frequentist statistical inference is based entirely on properties like these, and sometimes that works and sometimes it doesn’t. But frequentist inference is just a set of guidelines; whether or not you agree with those guidelines, the properties themselves exist. And if you understand what they mean, you can tell when frequentist inference will work decently and when it will act insanely.
I think what statistics and LessWrong-style rationality have in common is taking the procedure itself as an object of study. In statistics, it’s some algorithm you can run on a spreadsheet. On LessWrong, it tends to be something more vague, a pattern of human behavior.
My experience as a statistician among biologists was, honestly, depressing. One problem was power calculations. People want to know what power to plug into the sample size calculator. I would ask them: what probability are you willing to accept that you do all this work and find nothing, even though the effect is really there? Maybe the problem is me, but I don’t think I ever got any engagement on this question. Eventually people look up what other people are doing, which is 80% power. If I ask, are you willing to accept a 20% probability that your work comes to nothing, even though the effect you’re looking for is actually present, I never really get an answer. What I wanted was not for them to follow any particular rule, like “only do experiments with 80% power”, especially since that can always be achieved by plugging a high enough effect size into the calculation they put in their grant proposal. I wanted them to think through whether their experiment would actually work.
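For what it’s worth, the quantity I was trying to get them to think about is easy enough to compute. A minimal sketch, using a normal approximation rather than any particular sample size calculator, with made-up numbers:

```python
from scipy.stats import norm

# Power of a two-group comparison (normal approximation): the probability of
# detecting a standardized effect of the given size with n_per_group per group.
def power(effect_size, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)

print(power(0.8, 20))  # about 0.72: even a large effect is missed roughly 30% of the time
```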
Another problem: whenever they had complex data but were still just testing for a difference between groups, my answer was always “make up a measure of difference, then do a permutation test”. Nobody ever took me up on this. They were looking for a guideline to get it past the reviewers. It doesn’t matter that the made-up test has exactly the same guarantee as whatever test they eventually find: positive only 5% of the time when it’s used in the absence of a real difference. But they don’t even know that’s the guarantee that frequentist tests come with.
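Here’s roughly what I had in mind; a minimal sketch, with a placeholder difference measure (distance between group means), since the whole point is that any made-up measure works:

```python
import numpy as np

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Shuffle the group labels many times and ask how often the made-up
    difference measure is at least as large as the one actually observed."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = abs(np.mean(group_a) - np.mean(group_b))
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        stat = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
        if stat >= observed:
            count += 1
    return (count + 1) / (n_permutations + 1)

# Used on data with no real group difference, this p-value falls below 0.05
# about 5% of the time -- the same guarantee as any off-the-shelf test.
```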
I don’t really get what was going on. I think the biologists saw statistics as some confusing formality where people like me would yell at them if they got it wrong. Whereas if they follow the guidelines, nobody will yell at them. So they come to me asking for the guidelines, and instead I tell them some irrelevant nonsense about the chance that their conclusion will be correct.
I just want people to have the resources to think through whether the process by which they’re reaching a conclusion will reach the right conclusion. And use those resources. That’s all I guess.
I think this thought has analogues in Bayesian statistics.
We choose a prior. Let’s say, for the effect size of a treatment. What’s our prior? Let’s say, Gaussian with mean 0, and standard deviation equal to the typical effect size for this kind of treatment.
But how do we know that typical effect size? We could actually treat this prior as a posterior, updated from a uniform prior by previous studies. This would be a Bayesian meta-analysis.
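In a conjugate setup the update is simple enough to write down. A toy sketch with made-up numbers, ignoring between-study heterogeneity (which a real meta-analysis would model):

```python
# Combine previous studies' effect estimates into a Gaussian prior for the
# new study, by a normal-normal update starting from a nearly flat prior.
previous_studies = [(0.30, 0.15), (0.10, 0.20), (0.45, 0.25)]  # (estimate, standard error)

mean, precision = 0.0, 1 / 100.0**2   # starting prior: mean 0, SD 100
for estimate, se in previous_studies:
    study_precision = 1 / se**2
    mean = (precision * mean + study_precision * estimate) / (precision + study_precision)
    precision += study_precision

prior_mean, prior_sd = mean, precision**-0.5
print(prior_mean, prior_sd)   # use these as the prior for the new study's effect size
```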
I’ve never seen anyone formally do a meta-analysis just to get a prior. At some point, you decide your assumed probability distributions are close enough that more effort wouldn’t change the final result. Really, all mathematical modeling is like this. We model the Earth as a point, or a sphere, or a more detailed shape, depending on what we can get away with in our application. We make this judgment informally, but we expect that a formal analysis would back it up.
As for these ranges and bounds… that reminds me of the robustness analysis done in Bayesian statistics: vary the prior and see how it affects the posterior. It’s generally done within a parametric family of priors, so you just vary the parameters. The hope is that you get about the same results within some reasonable range of priors, but you don’t get strict bounds.
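Something like this, in the same toy conjugate setup as above, with made-up numbers:

```python
# Vary the prior within a parametric family (here, just its SD) and watch
# how much the posterior for the effect size moves.
data_estimate, data_se = 0.4, 0.2   # hypothetical observed effect and its standard error

for prior_sd in [0.1, 0.25, 0.5, 1.0, 5.0]:
    prior_prec, data_prec = 1 / prior_sd**2, 1 / data_se**2
    post_mean = (data_prec * data_estimate) / (prior_prec + data_prec)  # prior mean 0
    post_sd = (prior_prec + data_prec) ** -0.5
    print(f"prior sd {prior_sd}: posterior {post_mean:.3f} +/- {post_sd:.3f}")
```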
Very interesting. A few comments.
I think you mentioned something like this, but Drexler expected a first generation of nanotechnology based on engineered enzymes. For example, in “Engines of Creation”, he imagines using enzymes to synthesize airplane parts. Of course the real use of enzymes is much more restricted: cleaning products such as dishwasher detergent, additives in food, pharmaceutical synthesis. It has always seemed to me that someone who really believed Drexler and wanted to bring his imagined future about would actually not be working on anything like the designs in “Nanosystems”, but on bringing down the cost of enzyme manufacturing. From that perspective it’s interesting that you note that the most promising direction in Drexlery mechanosynthesis is DNA origami. Not quite what Drexler imagined (nucleic acid rather than protein), but still starting with biology.
Also, I think it’s very interesting that silicon turned out to be easier than diamond. While I agree with Yudkowsky that biology is nowhere near the limits of what is possible at the nanometer scale, due to constraints imposed by historical accidents, I disagree with Yudkowsky’s core example of this, the weak interactions holding proteins in their folded configuration. Stronger bonds make things harder, not easier. Maybe the switch from diamond to silicon is an illustration of that.
Editing to add one more comment… Drexler’s definition of “diamondoid” is indeed strange. If we take it literally, it seems that glass is “diamondoid”. But then, “diamondoid” microbes already exist, that is, diatoms. Or at least, microbes with “diamondoid” cell walls.
Another consideration, though maybe not a fundamental one, is that past and future selves are the only beings we know for sure we have lots of subjunctive dependence with, just from “structural” similarity, like calculators from the same factory (to use an example from the TDT paper). Tumblr user somnilogical elaborated on this a bit, concluding “future selves do help past selves!” An upload is a future self in the way that matters for this conclusion.