Science: Best Sans Booty.
Schrödinger disagreed. (So did Einstein… and Feynman… I could mention Kinsey, but that would be cheating, I suppose.)
Jaynes was a really smart guy, but no one can be a genius all the time. He did make at least one notable blunder in Bayesian probability theory—a blunder he could have avoided if only he’d followed his own rules for careful probability analysis.
Eliezer, I think you have dissolved one of the most persistent and venerable mysteries: “How is it that even the smartest people can make such stupid mistakes?”
Michael Shermer wrote about that in “Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time”. On the question of smart people believing weird things, he essentially describes the same process Eliezer experienced: once smart people decide to believe a weird thing, for whatever reason, it’s much harder to convince them that their beliefs are flawed, because they are that much better at poking holes in counterarguments.
One disturbing thing about the Petrov issue that I don’t think anyone mentioned last time is that by praising nuclear non-retaliators, we could be making future nuclear attacks more likely by undermining MAD.
Petrov isn’t praised for being a non-retaliator. He’s praised for doing good probable inference—specifically, for recognizing that the detection of only 5 missiles pointed to malfunction, not to a U.S. first strike, and that a “retaliatory” strike would initiate a nuclear war. I’d bet counterfactually that Petrov would have retaliated if the malfunction had caused the spurious detection of a U.S. first strike with the expected hundreds of missiles.
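As a toy illustration of that inference (every number below is invented purely for the example, not a historical estimate), here is the Bayes’-rule comparison Petrov was implicitly making:

```python
# Toy Bayes'-rule calculation for the Petrov scenario.
# Every probability here is invented purely for illustration.

p_strike = 0.001        # prior probability of a U.S. first strike
p_malfunction = 0.01    # prior probability of a detection malfunction

# A real first strike would involve hundreds of missiles, so detecting
# only 5 is very unlikely under that hypothesis; a glitch could easily
# produce a handful of spurious tracks.
p_five_given_strike = 0.001
p_five_given_malfunction = 0.1

posterior_odds = (p_strike * p_five_given_strike) / (
    p_malfunction * p_five_given_malfunction
)
print(f"posterior odds, strike vs. malfunction: {posterior_odds:.0e}")
# -> 1e-03: the 5-missile observation pushes strongly toward malfunction
```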
You’ve got to be almost as smart as a human to recognize yourself in a mirror...
Quite recently, research has shown that the above statement may not actually be true.
Eliezer, I think you meant to say that “19 * 103 might not be 1957” instead of 1947. Either that or I’m misunderstanding that entire paragraph.
The setup’s a little opaque, but I believe the correct reading is that the other person (characterized as honest) is correcting the faulty multiplication of the notional reader (“you”).
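(For what it’s worth, the arithmetic itself checks out: 19 × 103 = 19 × 100 + 19 × 3 = 1900 + 57 = 1957, so 1957 is the correct product and 1947 is the error.)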
Barkley Rosser, there definitely is something a little hinky going on in those infinite dimensional model spaces. I don’t have the background in measure theory to really grok that stuff, so I just thank my lucky stars that other people have proven the consistency of Dirichlet process mixture models and Gaussian process models.
Barkley Rosser, what I have in mind is a reality which is, in principle, predictable given enough information. So there is a “true” distribution—it’s conditional on information which specifies the state of the world exactly, so it’s a delta function at whatever the observables actually turn out to be. Now, there exist unbounded sequences of bits which don’t settle down to any particular relative frequency over the long run, and likewise, there is no guarantee that any particular sequence of observed data will lead to my posterior distribution getting closer and closer to one particular point in parameter space—if my model doesn’t at least partially account for the information which determines what values the observables take. Then I wave my hands and say, “That doesn’t seem to happen a lot in practical applications, or at least, when it does happen, we humans don’t publish until we’ve improved the model to the point of usefulness.”
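For a concrete sketch of such a sequence (my own toy construction, not one from the literature): emit bits in blocks of doubling length, and the running relative frequency of 1s swings between roughly 1/3 and 2/3 forever instead of converging.

```python
# Toy construction: a bit sequence whose running relative frequency of 1s
# never settles down. Blocks of identical bits double in length, so the
# running mean oscillates between about 1/3 and 2/3 indefinitely.

def oscillating_bits(n_blocks):
    bit, block_len = 0, 1
    for _ in range(n_blocks):
        yield from [bit] * block_len
        bit, block_len = 1 - bit, 2 * block_len

ones = total = 0
for b in oscillating_bits(20):
    ones += b
    total += 1
print(ones / total)  # value depends on where you stop; there is no limit
```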
I didn’t follow your point about a distribution for which Bayes’ Theorem doesn’t hold. Are you describing a joint probability distribution for which Bayes’ Theorem doesn’t hold, or are you talking about a Bayesian modeling problem in which Bayes estimators are inconsistent à la Diaconis and Freedman, or do you mean something else again?
Barkley Rosser, it’s a strong assumption in principle, but in practice, humans seem to be pretty good at obtaining enough information to put in the model such that the posterior does in fact converge to some point in the parameter space.
Barkley Rosser, I think the difference between objective Bayesians and subjective Bayesians has more to do with how they treat prior distributions than how they view asymptotic convergence.
I’m personally not an objective Bayesian by your definition—I don’t think there are stable, true probability distributions “out there.” Nevertheless, I do find the asymptotic convergence theorems meaningful. In my view, asymptotic convergence gets you to the most informative distribution conditional on some state of information, but that state of information need not be maximal, i.e., it need not determine the observables exactly. When the best you can do is some frequency distribution over observables, it’s because you’ve left out essential details, not because the observables are random.
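As a minimal sketch of what those convergence theorems promise in the simplest possible setting (a conjugate Beta-Bernoulli model, with illustrative numbers of my own choosing): as observations accumulate, the posterior concentrates around the data-generating rate.

```python
# Minimal sketch of posterior convergence in a Beta-Bernoulli model.
# With a Beta(1, 1) prior and k heads in n flips, the posterior is
# Beta(1 + k, 1 + n - k); its mean approaches the generating rate and
# its standard deviation shrinks roughly like 1/sqrt(n).

import random

random.seed(0)
true_rate = 0.3

for n in (10, 100, 1000, 10000):
    heads = sum(random.random() < true_rate for _ in range(n))
    a, b = 1 + heads, 1 + n - heads
    mean = a / (a + b)
    sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    print(f"n={n:>5}  posterior mean={mean:.3f}  posterior sd={sd:.4f}")
```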
We can only speculate about what actually happened.
I—err—what? What actually happened is that the Wachowski brothers made a movie. No humans were enslaved in the making of this film.
Tim Tyler, the thermodynamically problematic part of the Matrix is the fact that humans had induced something like nuclear winter to deny the machines the energy of the sun. Morpheus states that the machines then used humans as a source of energy. Humans get their energy from food: no sun implies no food implies no humans.
You cannot decide to make salad taste better to you than cheeseburgers...
Tangentially, if Seth Roberts’s Shangri-La diet theory or something like it turns out to be correct, it may indeed be possible to enact a plan that ends with salad tasting better to you than cheeseburgers.
I lost 25 lb on it, so I think something’s going on there.
Richard Hollerith, I thought of that—as far as I can tell, it’s equivalent to giving them the money.
To those who would destroy the money, I have to ask: how would that be possible? Money on that scale is not available in physical form to destroy, and if you buy valuable goods and then destroy them, you’ve actually destroyed wealth, not increased the value of other people’s money. I just can’t conceive of a way to vanish $10 trillion.
Laura ABJ, I was thinking that you could include only cases where the opportunity to trade sexual favours for more material gains was not sought after or welcomed by the women concerned. The logic here is that those are the cases where the opportunity for gains was not an advantage but rather an imposition.
Why do you consider these to be disadvantages? They seem like powerful advantages to me.
I suppose these are advantages to precisely the degree that the opportunity to trade sexual favours for more material gains is sought after or welcomed by the women concerned. So the question becomes: to what degree were these offered exchanges unwelcome and/or imposed with some degree of implicit or explicit coercion? Usually there’s a carrot and a stick—some benefit for complying with a sexual request and some penalty for declining.
Not being a woman, I can’t put numbers on situations the way Laura ABJ did, but perhaps she could offer refined conservative estimates?
You mean like the distinction between competence and performance?
Yeah, something like that. But I wouldn’t choose “competence” because colloquially it means “possession of the desired skill” (although I know in some contexts it means something more restricted).
“Consider the horror of America in 1800, faced with America in 2000. The abolitionists might be glad that slavery had been abolished. Others might be horrified, seeing federal law forcing upon all states a few whites’ personal opinions on the philosophical question of whether blacks were people, rather than the whites in each state voting for themselves. Even most abolitionists would recoil in disgust from interracial marriages—questioning, perhaps, if the abolition of slavery were a good idea, if this were where it led. Imagine someone from 1800 viewing The Matrix, or watching scantily clad dancers on MTV. I’ve seen movies made in the 1950s, and I’ve been struck by how the characters are different—stranger than most of the extraterrestrials, and AIs, I’ve seen in the movies of our own age. Aliens from the past.
Something about humanity’s post-Singularity future will horrify us...
Let it stand that the thought has occurred to me, and that I don’t plan on blindly trusting anything…
This problem deserves a page in itself, which I may or may not have time to write.”
- Eliezer S. Yudkowsky, Coherent Extrapolated Volition