This is an amusing empirical test for zombiehood—do you agree with Daniel Dennett?
Gray_Area
“The idea that Bayesian decision theory being descriptive of the scientific process is very beautifully detailed in classics like Pearl’s book, Causality, in a way that a blog or magazine article cannot so easily convey.”
I wish people would stop bringing up this book to support arbitrary points, like people used to bring up the Bible. There’s barely any mention of decision theory in Causality, let alone an argument for Bayesian decision theory being descriptive of all scientific process (although Pearl clearly does talk about decisions being modeled as interventions).
“Would you care to try to apply that theory to Einstein’s invention of General Relativity? PAC-learning theorems only work relative to a fixed model class about which we have no other information.”
PAC-learning settings are, if anything, far easier than general scientific induction. So shouldn't the latter require more samples, not fewer?
“Eliezer is almost certainly wrong about what a hyper-rational AI could determine from a limited set of observations.”
Eliezer is being silly. People invented computational learning theory, which, among other things, bounds the number of samples needed to achieve a given error rate.
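As a concrete illustration (my notation, not anything from this thread): for a finite hypothesis class in the realizable PAC setting, the standard bound says that m ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice to get error at most ε with probability at least 1 − δ. A minimal sketch:

```python
import math

def pac_sample_bound(epsilon: float, delta: float, hypothesis_count: int) -> int:
    """Samples sufficient to PAC-learn a finite hypothesis class (realizable case).

    Guarantees error <= epsilon with probability >= 1 - delta:
        m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    """
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# For example: 1000 hypotheses, 10% error, 95% confidence.
print(pac_sample_bound(0.1, 0.05, 1000))  # 100
```

Note the bound is only logarithmic in the size of the hypothesis class, which is part of why fixing the model class in advance makes the problem so much easier than open-ended induction.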
Eliezer, why are you concerned with untestable questions?
Richard: Cox’s theorem is an example of a particular kind of result in mathematics, where you have some particular object in mind to represent something, you come up with very plausible, very general axioms that you want this representation to satisfy, and then you prove that this object is unique in satisfying them. There are equivalent results for entropy in information theory. The problem with these results is that they are almost always constructed with hindsight, so a lot of the time an axiom gets snuck in that only SEEMS plausible in hindsight. For instance, Cox’s theorem assumes that plausibility is a real number. Why should it be a real number?
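For reference, here is a loose paraphrase of Cox's desiderata (my numbering and wording; the regularity conditions needed for the theorem are omitted):

```latex
% A loose paraphrase of Cox's desiderata for a plausibility $p(A \mid B)$:
\begin{enumerate}
  \item Plausibility is a single real number: $p(A \mid B) \in \mathbb{R}$.
  \item Negation: $p(\lnot A \mid B)$ is a fixed function of $p(A \mid B)$.
  \item Conjunction: $p(A \land B \mid C)$ is a fixed function of
        $p(A \mid C)$ and $p(B \mid A \land C)$.
\end{enumerate}
% From these (plus regularity conditions), $p$ is forced to be isomorphic
% to ordinary probability.
```

The first desideratum is exactly the one questioned above: it rules out, for instance, interval-valued or partially ordered plausibilities from the start.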
“The probability of two events equals the probability of the first event plus the probability of the second event.”
Mutually exclusive events.
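That is, the quoted sum rule needs the disjointness qualifier; the general identity (standard, stated here for reference) subtracts the overlap:

```latex
P(A \cup B) = P(A) + P(B)
  \quad \text{only if } A \cap B = \emptyset,
\qquad
P(A \cup B) = P(A) + P(B) - P(A \cap B)
  \quad \text{in general.}
```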
It is interesting that you insist that beliefs ought to be represented by classical probability. Given that we can construct multiple kinds of probability theory, on what grounds should we prefer one over the other to represent what ‘belief’ ought to be?
“the real reason for the paradox is that it is completely impossible to pick a random integer from all integers using a uniform distribution: if you pick a random integer, on average lower integers must have a greater probability of being picked”
Isn’t there a simple algorithm which samples uniformly from a list without knowing its length in advance? Keywords: ‘reservoir sampling.’
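A sketch of the single-item version (Vitter's Algorithm R; variable names are mine). Each item seen so far ends up chosen with probability 1/n, where n is the final, initially unknown, length of the stream:

```python
import random

def reservoir_sample(stream):
    """Uniformly sample one item from an iterable of unknown length (Algorithm R)."""
    chosen = None
    for i, item in enumerate(stream, start=1):
        # Replace the current choice with probability 1/i; a short induction
        # shows every item ends up selected with probability 1/n.
        if random.randrange(i) == 0:
            chosen = item
    return chosen
```

Note this doesn't contradict the quoted paradox: the stream must still be finite, so there is no uniform distribution over all integers here, only over however many items actually arrive.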
People don’t maximize expectations. Expectation-maximizing organisms, if they ever existed, died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.
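A toy illustration of the robustness point (the numbers are invented for the example): a single outlier drags the mean arbitrarily far, while a robust statistic like the median barely moves.

```python
import statistics

data = [1.0, 2.0, 3.0, 4.0, 5.0]
with_outlier = data + [1000.0]  # one extreme observation

print(statistics.mean(data), statistics.median(data))                  # 3.0 3.0
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # ~169.17 3.5
```

An agent whose behavior tracks the mean payoff swings wildly on the outlier; one tracking the median is nearly unaffected.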
If people’s behavior doesn’t agree with the axiom system, the fault may not be with them, perhaps they know something the mathematician doesn’t.
Finally, the ‘money pump’ argument fails because it changes the rules of the game. The original question, I assume, asks whether you would play the game once, whereas the money pump is presumably iterated until the pennies turn into millions. The problem is that if you asked people to make the original choices a million times, they would, correctly, maximize expectations. When you are talking about a million tries, expectation is the appropriate framework; when you are talking about one try, it is not.
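The one-shot versus repeated distinction can be made concrete with a small simulation (the gamble and its numbers are invented for illustration): a sure 9 beats a 10%-chance-of-100 gamble in most single trials, yet the gamble's average payoff wins over many repetitions because it has the higher expectation.

```python
import random

random.seed(0)

SURE_THING = 9.0

def gamble() -> float:
    # 10% chance of 100, else 0; expectation is 10 > 9.
    return 100.0 if random.random() < 0.1 else 0.0

# One try: the gamble pays nothing 90% of the time, so the sure thing usually wins.
single = gamble()

# Many tries: the sample average converges to the expectation of 10.
n = 100_000
average = sum(gamble() for _ in range(n)) / n
print(average)  # close to 10.0
```

By the law of large numbers the average concentrates around 10 as n grows, which is exactly why expectation becomes the right decision criterion only in the repeated setting.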
Paul Gowder said:
“We can go even stronger than mathematical truths. How about the following statement?
~(P &~P)
I think it’s safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true.”
Amusingly, this is one of the more controversial-sounding tautologies to bring up, because constructivist mathematicians reject the closely related law of the excluded middle, P ∨ ¬P. (Non-contradiction itself, ¬(P ∧ ¬P), actually remains provable constructively.)
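For what it's worth, ¬(P ∧ ¬P) does go through constructively; for instance, in Lean 4 the following proof is accepted with no classical axioms:

```lean
-- Non-contradiction, proved without classical axioms:
-- given h : P ∧ ¬P, apply the second component to the first.
theorem no_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1
```

What constructivists refuse is P ∨ ¬P, which has no such direct proof term.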
“Sometimes I can feel the world trying to strip me of my sense of humor.”
If you are trying to be funny, the customer is always right, I am afraid. The post wasn’t productive, in my opinion, and I have no emotional stake in Christianity at all (not born, not raised, not currently).
Eliezer, where do your strong claims about the causal structure of scientific discourse come from?
“As long as you’re wishing, wouldn’t you rather have a genie whose prior probabilities correspond to reality as accurately as possible?”
Such a genie might already exist.
Every computer programmer, and indeed anybody who uses computers extensively, has been surprised by computers. Despite being deterministic, a personal computer taken as a whole (hardware, operating system, software running on top of the operating system, network protocols creating the internet, and so on) is too large for a single mind to understand. We have partial theories of how computers work, but of course partial theories sometimes fail, and this produces surprise.
This is not a new development. I have only a partial theory of how my car works, but in the old days people only had a partial theory of how a horse works. Even a technology as simple and old as a knife still follows non-trivial physics and so can surprise us (can you predict when a given knife will shatter?). Ultimately, most objects, man-made or not, are ‘black boxes.’
“It seems contradictory to previous experience that humans should develop a technology with “black box” functionality, i.e. whose effects could not be foreseen and accurately controlled by the end-user.”
Eric, have you ever been a computer programmer? That technology becomes more and more of a black box is not only in line with previous experience; I dare say it is a trend that strengthens as technological complexity increases.
On further reflection, the wish as expressed by Nick Tarleton above sounds dangerous, because all human morality may either be inconsistent in some sense, or ‘naive’ (failing to account for important aspects of reality we aren’t aware of yet). Human morality changes as our technology and understanding changes, sometimes significantly. There is no reason to believe this trend will stop. I am afraid (genuine fear, not figure of speech) that the quest to properly formalize and generalize human morality for use by a ‘friendly AI’ is akin to properly formalizing and generalizing Ptolemean astronomy.
Sounds like we need to formalize human morality first, otherwise you aren’t guaranteed consistency. Of course formalizing human morality seems like a hopeless project. Maybe we can ask an AI for help!
Well, shooting randomly is perhaps a bad idea, but I think the best we can do is shoot systematically, which is hardly better (it takes exponentially many bullets). So you either have to be lucky, or hope the target isn’t very far so you don’t need a wide cone to take pot shots at, or hope P=NP.
billswift said: “Prove it.”
I am just saying that ‘being unpredictable’ isn’t the same as free will, which I think is pretty intuitive (most complex systems are unpredictable, but presumably very few people would grant them all free will). As for the relationship between randomness and free will, that’s clearly a large discussion with a large literature, but again it’s not clear what the relationship is, and there is room for a lot of strange explanations. For example, some panpsychists might argue that ‘free will’ is the primitive notion and randomness is just an effect, not the other way around.
For what it’s worth, I find plenty to disagree with Eliezer about, on points of both style and substance, but on death I think he has it exactly right. Death is a really bad thing, and while humans have diverse psychological adaptations for dealing with death, it seems the burden of proof is on people who do NOT want to make the really bad thing go away in the most expedient way possible.