Having thought about it a little longer and updated based on your evidently broader knowledge of bras, my original guess for why the market failure exists does seem pretty unlikely.
what interpretation of the word “probability” does allow you to think that the probability of something is 1 and then change to something other than 1?
Any interpretation where you can fix a broken model. I can imagine a conversation like this...
Prankster: I’m holding a die behind my back. If I roll it, what probability would you assign to a 1 coming up?
cupholder: Is it loaded?
Prankster: No.
cupholder: Are you throwing it in a funny way, like in one of those machines that throws it so it’s really likely to come up a 6 or something?
Prankster: No, no funny tricks here. Just rolling it normally.
cupholder: Then you’ve got a 1⁄6 probability of rolling a 1.
Prankster: And what about rolling a 2?
cupholder: Well, the same.
Prankster: And so on for all the other numbers, right?
cupholder: Sure.
Prankster: So you assign a probability of 1 to a number between 1 and 6 coming up?
cupholder: Yeah.
Prankster: Surprise! It’s 20-sided!
cupholder: Huh. I’d better change my estimate from 1 to 6⁄20.
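The arithmetic behind that last update, as a quick sketch (the function name is my own invention):

```python
from fractions import Fraction

def p_face_in_1_to_6(num_faces):
    # Probability that a fair die with `num_faces` faces shows a number
    # from 1 to 6, assuming every face is equally likely.
    favorable = sum(1 for face in range(1, num_faces + 1) if face <= 6)
    return Fraction(favorable, num_faces)

print(p_face_in_1_to_6(6))   # under the assumed d6 model: 1
print(p_face_in_1_to_6(20))  # after the model is fixed: 3/10, i.e. 6/20
```

The probability of 1 was never wrong *within* the d6 model; it's the model that got replaced.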
You’re right, I should’ve thought of that. I expect it’s easier (maybe therefore cheaper?) to manufacture little silicone blobs or whatever than a half-bra, which must partly be why there’s a market for the first and not the second.
There’s an even more compelling market: women who have had a single mastectomy. I’d be surprised if there weren’t medical half-bras out there already for them.
These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.
Presumably there’s heterogeneity in people’s reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.
The negotiation of where LW threads should be on the 4chan-colloquium continuum is something I would let users handle by interacting with each other in discussions, instead of trying to force it to fit the framework of the karma system. I especially think letting people hide their posts from lurkers and other subsets of the Less Wrong userbase could set a bad precedent.
So would it be right to say your objection is based on the expected utility of working cryonics instead of its probability?
For being a cryonics facility? Is there enough evidence to determine whether it could’ve been just a random drive-by?
But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity!
In this example, what information are we Bayesian updating on?
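If it’s the observed throws, here’s a minimal sketch of my own (assuming a symmetric Dirichlet prior over the six face probabilities) of what such updating would look like:

```python
import random

random.seed(0)
# Symmetric Dirichlet(1,...,1) prior over the six face probabilities.
# Updating on observed throws just adds counts: the posterior mean of
# P(face i) is (1 + count_i) / (6 + n).
counts = [0] * 6
n_throws = 600
for _ in range(n_throws):
    counts[random.randrange(6)] += 1  # throws from a genuinely fair die

posterior_mean = [(1 + c) / (6 + n_throws) for c in counts]
print(posterior_mean)
```

Because the empirical frequencies are almost never exactly uniform, the posterior mean tracks them rather than perfect fairness, which may be the sense in which the posterior ‘diverges from fairness’ as throws accumulate.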
yes
OK. I agree with that insofar as agents having the same prior entails them having the same model.
aaahhh.… I changed the language of that sentence at least three times before settling on what you saw. Here’s what I probably should have posted (and what I was going to post until the last minute):
There’s no model checking because there is only one model—the correct model.
That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.
Ah, I think I get you; a PB (perfect Bayesian) doesn’t see a need to test their model because whatever specific proposition they’re investigating implies a particular correct model.
For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don’t consider me representative.
Yeah, I figured you wouldn’t have trouble with it since you talked about taking classes in this stuff—that footnote was intended for any lurkers who might be reading this. (I expected quite a few lurkers to be reading this given how often the Gelman and Shalizi paper’s been linked here.)
Are you asserting that this is a catch for my position? Or for the “never look back” approach to priors? What you are saying seems to support my argument.
It’s a catch for the latter, the PB. In reality, most scientists don’t have a wholly unambiguous proposition worked out that they’re testing, or the proposition they are testing is not a good representation of the real situation.
My implicit definition of a perfect Bayesian is characterized by these propositions:
There is a correct prior probability (as in, before you see any evidence; e.g., Occam priors) for every proposition
Given a particular set of evidence, there is a correct posterior probability for any proposition
OK, this is interesting: I think our ideas of perfect Bayesians might be quite different. I agree that #1 is part of how a perfect Bayesian thinks, if by ‘a correct prior...before you see any evidence’ you have the maximum entropy prior in mind.
I’m less sure what ‘correct posterior’ means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?
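If that reading is right, the point can be made mechanical: Bayes’ rule is a deterministic function of the prior and the evidence, so two agents sharing both must agree. A toy sketch (the coin-bias hypotheses are my own invention):

```python
def posterior(prior, likelihood, data):
    # Bayes' rule over a discrete hypothesis space: the posterior is fully
    # determined by the prior and the likelihood of the observed data.
    unnorm = {h: p * likelihood(h, data) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

# Hypotheses: the coin's bias is 0.3, 0.5, or 0.7, equally likely a priori.
prior = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}

def lik(bias, flips):
    p = 1.0
    for f in flips:
        p *= bias if f == 'H' else 1 - bias
    return p

data = ['H', 'H', 'T', 'H']
# Two 'agents' sharing this prior and evidence must compute the same posterior.
print(posterior(prior, lik, data) == posterior(prior, lik, data))  # True
```

Any disagreement between two perfect Bayesians would then have to trace back to differing priors or differing evidence, never to the updating step itself.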
If we knew exactly what our priors were and exactly how to calculate our posteriors, then your steps 1-6 are exactly how we should operate. There’s no model checking because there is no model.
There has to be a model because the model is what we use to calculate likelihoods.
The rationale for model checking should be pretty clear …
Agree with this whole paragraph. I am in favor of model checking; my beef is with (what I understand to be) Perfect Bayesianism, which doesn’t seem to include a provision for stepping outside the current model and checking that the model itself—and not just the parameter values—makes sense in light of new data.
I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be the obvious answer came to me (while discussing it with my professor, actually): we’re modeling our uncertainty.
The catch here (if I’m interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn’t good enough if that sub-model gets blindsided with unmodeled uncertainty that can’t be accounted for just by juggling probability density around in our parameter space.* From page 8 of their preprint:
If nothing else, our own experience suggests that however many different specifications we think of, there are always others which had not occurred to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered.
* This must be one of the most dense/opaque sentences I’ve posted on Less Wrong. If anyone cares enough about this comment to want me to try and break down what it means with an example, I can give that a shot.
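Here’s one way that shot might go (a construction of my own): fit an i.i.d. coin model to data that alternate perfectly. No amount of juggling the bias parameter captures the serial dependence, but a posterior predictive check on a statistic the model ignores exposes the mismatch.

```python
import random

random.seed(1)
# Data that alternate perfectly: T H T H ... An i.i.d. Bernoulli model's
# single parameter (the bias) can be juggled all we like, but no value of
# it captures this serial dependence: that uncertainty lives outside the
# parameter space.
data = [i % 2 for i in range(40)]
heads = sum(data)
a, b = 1 + heads, 1 + len(data) - heads  # Beta(21, 21) posterior on the bias

def switches(seq):
    # Check statistic: how often consecutive flips differ.
    return sum(x != y for x, y in zip(seq, seq[1:]))

observed = switches(data)  # 39: every one of the 39 consecutive pairs differs
replicated = []
for _ in range(1000):
    p = random.betavariate(a, b)  # draw a bias from the posterior
    sim = [1 if random.random() < p else 0 for _ in range(len(data))]
    replicated.append(switches(sim))

# Replications under the fitted model essentially never reach 39 switches,
# flagging the model itself (not its parameter) as wrong.
print(observed, max(replicated))
```

The posterior here is perfectly confident and perfectly calibrated about the *bias*, yet the model generating the predictions is badly wrong, which is the kind of blindsiding I had in mind.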
I’m not sure about this. It’s most likely that anything your kid does in life will get done by someone else instead.
True—we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from me having a child is nil, it doesn’t contribute to the net expected value, but neither does it make that value less positive.
There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).
It sounds as though that data’s based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)
But even if this is true, it’s still not enough for antinatalism. Increasing total utility is not enough justification to create a life.
That’s a good point, I know of nothing in utilitarianism that says whose utility I should care about.
The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)
Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn’t make any entity that has a chance of suffering negative personal utility.
Points taken.
Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn’t think the antinatalism position has legs.
After-the-fact model checking is completely incompatible with perfect Bayesianism, if we define perfect Bayesianism as
Define a model with some parameters.
Pick a prior over the parameters.
Collect evidence.
Calculate the likelihood using the evidence and model.
Calculate the posterior by multiplying the prior by the likelihood.
When new evidence comes in, set the prior to the posterior and go to step 4.
There’s no step for checking if you should reject the model; there’s no provision here for deciding if you ‘just have really wrong priors.’ In practice, of course, we often do check to see if the model makes sense in light of new evidence, but then I wouldn’t think we’re operating like perfect Bayesians any more. I would expect a perfect Bayesian to operate according to the Cox-Jaynes-Yudkowsky way of thinking, which (if I understand them right) has no provision for model checking, only for updating according to the prior (or previous posterior) and likelihood.
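For illustration, here’s how that six-step loop might look for a one-parameter model (a Beta-Bernoulli sketch of my own; note the conspicuous absence of any model-checking step):

```python
# A toy rendition of steps 1-6: a Bernoulli model (step 1) whose posterior
# is folded back in as the next prior. Nothing in the loop ever questions
# the Bernoulli model itself.
def update(prior_a, prior_b, evidence):
    # Conjugate update: under the Bernoulli likelihood, the Beta(a, b)
    # prior becomes Beta(a + heads, b + tails).
    heads = sum(evidence)
    tails = len(evidence) - heads
    return prior_a + heads, prior_b + tails

a, b = 1, 1  # step 2: Beta(1, 1) prior over the model's one parameter
for batch in ([1, 1, 0], [1, 0, 0, 1]):  # steps 3-5 for each batch of evidence
    a, b = update(a, b, batch)           # step 6: posterior becomes the prior
print(a, b)  # Beta(5, 4) after 4 heads and 3 tails
```

However wildly the data clash with the i.i.d. coin-flip assumption, this procedure can only ever shuffle probability mass over the bias parameter.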
Do people here really think that antinatalism is silly?
A data point: I don’t think antinatalism (as defined by Roko above - ‘it is a bad thing to create people’) is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child’s life would be equally bad, it’d be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?
A good illustration of multiple discovery (not strictly ‘discovery’ in this case, but anyway) too:
While Ettinger was the first, most articulate, and most scientifically credible person to argue the idea of cryonics,[citation needed] he was not the only one. In 1962, Evan Cooper had authored a manuscript entitled Immortality, Scientifically, Physically, Now under the pseudonym “N. Durhing”.[8] Cooper’s book contained the same argument as did Ettinger’s, but it lacked both scientific and technical rigor and was not of publication quality.[citation needed]
In the long run, it’s all good—I think it’s a decent paper, and I suppose this way more eyeballs see it than if I was the only one to post it. (Not to say that we should make a regular habit of linking things four times :-)
My understanding is that historically, schizophrenia has been presumed to have a partly genetic cause since around 1910, out of which grew an intermittent research program of family and twin studies to probe schizophrenia genetics. An opposing camp that emphasized environmental effects emerged in the wake of the Nazi eugenics program and the realization that complex psychological traits needn’t follow trivial Mendelian patterns of inheritance. Both research traditions continue to the present day.
Edit to add—Franz Josef Kallman, whose bibliography in schizophrenia genetics I somewhat glibly linked to in the grandparent comment, is one of the scientists who was most firmly in the genetic camp. His work (so far as I know) dominated the study of schizophrenia’s causes between the World Wars, and for some time afterwards.
Not one of the downvoters, but the tone of these paragraphs was so overcooked I did consider it for a couple seconds:
Those words and your presumptuous ‘are you going to take back your pretense of ignorance about shoe prejudice?’ question came across to me as quite obnoxious.