The FAQ states that omega has (or is) a computer the size of the moon: that’s huge, but finite. I believe it’s possible, with today’s technology, to create a randomizer that an omega of this size cannot predict. However smart omega is, one can always create a randomizer that omega cannot break.
linas
Yes. I was confused, and perhaps added to the confusion.
Hmm, the FAQ, as currently worded, does not state this. It simply implies that the agent is human, that omega has made 1000 correct predictions, and that omega has billions of sensors and a computer the size of the moon. That’s large, but finite. One may assign some finite complexity to Omega: say, 100 bits per atom times the number of atoms in the moon, whatever. I believe one can devise pseudo-random number generators that defy this kind of compute power. The relevant point here is that Omega, while powerful, is still not “God” (infinite, infallible, all-seeing), nor is it an “oracle” in the computer-science sense, viz., a machine that can decide otherwise-undecidable computational problems.
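To make that concrete, here is a minimal sketch of the kind of randomizer I mean (my own illustration, not anything from the FAQ), in Python, using the standard `secrets` module, which draws on the operating system’s entropy pool (hardware noise, interrupt timing, and so on): the choice is not a function of any state omega could have scanned ahead of time.

```python
# A minimal sketch: pick one-box vs. two-box from an entropy source that is
# not derivable from a snapshot of the agent's prior state.  Python's
# `secrets` module reads the OS entropy pool (hardware noise, timings, etc.).
import secrets

def choose_boxes() -> str:
    """Return 'one-box' or 'two-box', each with probability 1/2."""
    return "one-box" if secrets.randbits(1) else "two-box"

if __name__ == "__main__":
    print(choose_boxes())
```

Whether this truly escapes a moon-sized computer is exactly the physics question at issue; the sketch only shows that building such a device costs us essentially nothing.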
Huh? Can you explain? Normally, one says that a mechanical device is “predictable”: given its current state and some effort, one can discover its future state. Machines don’t have the ability to choose. Normally, “choice” is something that only a system possessing free will can have. Is that not the case? Is there some other “standard usage”? Sorry, I’m a newbie here; I honestly don’t know more about this subject than what I can deduce by my own wits.
There needs to be an exploration of addiction and rationality. Gamblers are addicted; we know some of the brain mechanisms of addiction: some neurotransmitter A is released in brain region B, causing C to deplete, causing a dependency on the reward that A provides. This particular neuro-chemical circuit derives great utility from the addiction, thus driving the behaviour. By this argument, perhaps one might argue that addicts are “rational”, because they derive great utility from their addiction. But is this argument faulty?
A mechanistic explanation of addiction says the addict has no control, no free will, no ability to break the cycle. But is it fair to say that a “machine has a utility function”? Or do you need to have free will before you can discuss choice?
The collision I’m seeing is between formal, mathematical axioms and English-language usage. It’s clear that Benelliot is thinking of the axiom in mathematical terms: dry, inarguable, much like the independence axioms of probability, i.e. some statements about abstract sets. This is correct; the proper formulation of VNM is abstract and mathematical.
Kilobug is right in noting that information has value and ignorance has a cost. But that doesn’t subvert the axiom; the axioms are, by definition, mathematically correct. It is the way they were mapped onto the example that was incorrect: the choices aren’t truly independent.
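For reference, here is the standard textbook statement of the VNM independence axiom (the usual form, not a quote from the FAQ), which is what “truly independent” has to mean for the mapping to go through:

```latex
% VNM independence axiom: for all lotteries L, M, N
% and every mixing probability p in (0, 1],
\[
  L \succ M \;\Longrightarrow\; pL + (1-p)N \;\succ\; pM + (1-p)N .
\]
```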
It’s also become clear that risk-aversion is essentially the same idea as “information has value”: people who are risk-averse are people who value certainty. This observation alone may well be enough to ‘explain’ the Allais paradox: the certainty of the ‘sure thing’ is worth something. All the Allais experiment does is measure the value of certainty.
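Here is a quick back-of-the-envelope version of that claim (the dollar figures and the size of the certainty premium are my own illustrative assumptions, not numbers from the FAQ): if certainty itself is worth some fixed premium to the chooser, the usual Allais pattern of choices comes out consistent.

```python
# A rough sketch of "certainty has a price", using one common presentation
# of the Allais gambles.  The dollar amounts and the premium are illustrative
# assumptions, not the FAQ's figures.
def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

# Pair 1: a sure thing vs. a slightly risky, higher-EV gamble.
g1a = [(1.00, 24_000)]                    # certain $24,000
g1b = [(33/34, 27_000), (1/34, 0)]        # ~97% chance of $27,000
# Pair 2: the same two gambles, each diluted by a 66% chance of nothing.
g2a = [(0.34, 24_000), (0.66, 0)]
g2b = [(0.33, 27_000), (0.67, 0)]

certainty_premium = 2_500   # assumed dollar value the chooser puts on a sure thing

print(expected_value(g1a) + certainty_premium, expected_value(g1b))  # 26500.0 vs ~26205.9
print(expected_value(g2a), expected_value(g2b))                      # 8160.0 vs 8910.0
```

With the premium, the sure thing wins in pair 1; in pair 2 nothing is certain, so the premium never applies and 2B wins on expected value alone, which is exactly the ‘paradoxical’ pattern people exhibit.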
Hmm. I just got a −1 on this comment … I thought I posed a reasonable question; I would even have thought it to be a “commonly asked question”. So why the −1? Am I misunderstanding something, or am I being unclear?
How many times in a row will you be mugged, before you realize that omega was lying to you?
OK, but this can’t be a “minor detail”; it’s rather central to the nature of the problem. The back-and-forth with incogn above tries to deal with this. Put simply: either omega is able to predict, in which case EDT is right, or omega is not able to predict, in which case CDT is right.
The source of entropy need not be a fair coin: even fully deterministic systems can have behavior so complex that predictability is untenable. Either omega can predict, and knows it can predict, or omega cannot predict, and knows that it cannot predict. The possibility that it cannot predict, yet is erroneously convinced that it can, seems ridiculous.
I’m with incogn on this one: either there is predictability or there is choice; one cannot have both.
Incogn is right in saying that, from omega’s point of view, the agent is purely deterministic, i.e. more or less equivalent to a computer program. Incogn is slightly off the mark in conflating determinism with predictability: a system can be deterministic but still not predictable; this is the foundation of cryptography. A deterministic system may be either predictable or not. (Perhaps Newcomb’s problem is meant to explicitly allow the agent to be non-deterministic, but this is unclear.)
The only way a deterministic system becomes unpredictable is if it incorporates a source of randomness that is stronger than the ability of a given intelligence to predict. There are good reasons to believe that rather simple sources of entropy exist that are beyond the predictive power of any fixed super-intelligence: this is not just the foundation of cryptography, it is also studied generically under the rubric of ‘chaotic dynamical systems’. I suppose you also have to believe that P is not NP. Or maybe I should just mutter ‘Turing halting problem’. (Unless omega is taken to be a mythical comp-sci “oracle”, in which case you’ve pushed decision theory into that branch of set theory that deals with cardinal numbers larger than the continuum, and I’m pretty sure you are not ready for the dragons that lie there.)
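A toy illustration of the ‘deterministic yet practically unpredictable’ point (my own sketch, not from the FAQ): the logistic map is completely deterministic, but errors in knowledge of the initial state grow exponentially, so any predictor that knows the state to only finite precision eventually loses the trajectory.

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, yet two starting
# points that differ by one part in 10^12 diverge to completely different
# trajectories within a few dozen steps.
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

exact = logistic_trajectory(0.123456789)
guess = logistic_trajectory(0.123456789 + 1e-12)   # omega's estimate, off by 1e-12

for step in (0, 20, 40, 60):
    print(step, abs(exact[step] - guess[step]))
# The gap grows from 1e-12 to order 1: by ~40 steps the prediction is useless.
```

Whether a brain actually contains such a dynamical element is an empirical question; the point is only that determinism alone does not buy predictability.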
If the agent incorporates such a source of non-determinism, then omega is unable to predict, and the whole paradox falls apart. Either omega can predict, in which case EDT; or omega cannot predict, in which case CDT. Duhhh. I’m sort of flabbergasted, because these points seem obvious to me … the Newcomb paradox, as given, seems poorly stated.
Yes, exactly, and in our modern marketing-driven culture, one almost expects to be gamed by salesmen or sneaky game-show hosts. In this culture, it’s a prudent, even ‘rational’, response.
I’m finding the “counterfactual mugging” challenging. At this point, the rules of the game seem to be: “design a thoughtless, inert, unthinking algorithm, such as CDT or EDT or BT or TDT, which will always give the winning answer.” Fine. But across the entire range of Newcomb’s problems, we are pitting this dumb-as-a-rock algorithm against a super-intelligence. By the time we get to the counterfactual mugging, we seem to have a scenario where omega is saying “I will reward you only if you are a trusting rube who can be fleeced.” Now, if you are a trusting rube who can be fleeced, then you can be pumped, a la the pumping examples in previous sections: how many times will omega ask you for $100 before you wise up and realize that you are being extorted?
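To put a number on that worry, here is a toy model of repeated counterfactual muggings (the $100 / $10,000 payoffs follow the usual statement of the problem; the ‘is omega honest?’ parameter is my own addition): everything hinges on whether omega ever actually pays out.

```python
# Toy model: an agent that always hands over the $100 when asked, facing
# either an honest omega (pays $10,000 on heads) or a lying omega (never pays).
# The honesty flag is my own free parameter, not part of the FAQ's setup.
import random

def run_muggings(omega_is_honest, rounds=1000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        if rng.random() < 0.5:            # tails: omega asks for the $100
            total -= 100
        elif omega_is_honest:             # heads: an honest omega pays out
            total += 10_000
    return total

print(run_muggings(True))    # honest omega: always paying is hugely profitable
print(run_muggings(False))   # lying omega: you are pumped for ~$100 per tails, forever
```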
This shift of focus to pumping also shows up in the Prisoner’s Dilemma, specifically in the recent results from Freeman Dyson and William Press. They point out that an intelligent agent can extort any evolutionary algorithm. Basically, if you know the zero-determinant strategy and your opponent doesn’t, then you can mug the opponent (repeatedly). I think the same applies to the counterfactual mugging: omega has a “theory of mind”, while the idiot decision algorithm fails to have one. If your decision algorithm tries to learn from history (i.e. from repeated muggings) using basic evolutionary algorithms, then it will continue to be mugged (forever): it can’t win.
To borrow Press & Dyson’s vocabulary: if you want an algorithmic decision theory that can win in the presence of (super-)intelligences, then you must endow that algorithm with a “theory of mind”: your algorithm has to start modelling omega, to determine what omega’s actions will be.
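For the curious, here is a rough sketch of the Press–Dyson point (my own simulation; the χ = 3 extortionate strategy below is the worked example from their 2012 paper, as I recall it, with standard payoffs T=5, R=3, P=1, S=0). Against a memory-less opponent who simply cooperates with some fixed probability q, the extortioner’s surplus over the punishment payoff stays pinned at about three times the opponent’s, no matter what q the opponent ‘learns’ to play.

```python
# Sketch of an extortionate zero-determinant strategy (chi = 3 example,
# standard payoffs T=5, R=3, P=1, S=0) playing a naive opponent that
# cooperates with a fixed probability q.
import random

T, R, P, S = 5, 3, 1, 0
# Extortioner's probability of cooperating, given last round's (my, their) moves.
ZD = {('C', 'C'): 11/13, ('C', 'D'): 1/2, ('D', 'C'): 7/26, ('D', 'D'): 0.0}
PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def play(q, rounds=200_000, seed=1):
    rng = random.Random(seed)
    prev = ('C', 'C')
    x_total = y_total = 0
    for _ in range(rounds):
        me = 'C' if rng.random() < ZD[prev] else 'D'
        them = 'C' if rng.random() < q else 'D'
        x, y = PAYOFF[(me, them)]
        x_total += x
        y_total += y
        prev = (me, them)
    return x_total / rounds, y_total / rounds

for q in (0.2, 0.5, 0.9):
    sx, sy = play(q)
    print(q, round((sx - P) / (sy - P), 2))   # hovers near 3, the extortion factor
```

The opponent can raise its own score only by handing the extortioner three units of surplus for every one it gains, which is exactly the sense in which a learner without a theory of mind keeps getting mugged.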
The conclusion to section “11.1.3. Medical Newcomb problems” raises a question that remains unanswered: “So just as CDT ‘loses’ on Newcomb’s problem, EDT will ‘lose’ on Medical Newcomb problems (if the tickle defense fails) or will join CDT and ‘lose’ on Newcomb’s Problem itself (if the tickle defense succeeds).”
If I were designing a self-driving car and had to provide an algorithm for what to do during an emergency, I might choose to hard-code CDT or EDT into the system, as seems appropriate. However, as an intelligent being rather than a self-driving car, I am not bound to always use EDT or always use CDT: I have the option to carefully analyse the system and, upon discovering its acausal nature (as the medical researchers do in the second study), choose to use CDT; otherwise I should use EDT.
So the real question is: “Under what circumstances should I use EDT, and when should I use CDT?” Section 11.1.3 suggests a partial answer: use CDT when the evidence shows that the system really is acausal, and perhaps use EDT the rest of the time.
The presentation of Newcomb’s problem in section 11.1.1 seems faulty. What if the human flips a coin to determine whether to one-box or two-box? (Or uses any suitable source of entropy that is beyond the predictive powers of the super-intelligence.) What happens then?
This point is danced around in the next section, but never stated outright: EDT gives exactly the right answer if humans are fully deterministic and predictable by the super-intelligence, while CDT gives the right answer if the human employs an unpredictable entropy source in their decision-making. It is the entropy source that makes the decision causally independent of the super-intelligence’s acts.
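A toy calculation of that split (my own model: the usual $1,000 / $1,000,000 boxes, with a made-up predictor accuracy that collapses to 50% against a genuine coin flip):

```python
# Expected winnings in Newcomb's problem ($1,000 transparent box,
# $1,000,000 opaque box) as a function of the predictor's accuracy,
# i.e. the probability that its prediction matches the actual choice.
def expected_payoff(one_box: bool, predictor_accuracy: float) -> float:
    p_opaque_filled = predictor_accuracy if one_box else 1 - predictor_accuracy
    opaque = p_opaque_filled * 1_000_000      # expected contents of the opaque box
    return opaque if one_box else opaque + 1_000

# Deterministic, transparent-to-omega agent (accuracy ~99.9%): one-boxing wins.
print(expected_payoff(True, 0.999), expected_payoff(False, 0.999))   # 999000.0 vs 2000.0
# Agent deciding by an entropy source omega cannot crack (accuracy 50%):
# the prediction is independent of the act, and two-boxing dominates.
print(expected_payoff(True, 0.5), expected_payoff(False, 0.5))       # 500000.0 vs 501000.0
```

Which decision theory is ‘right’ thus depends entirely on which of these two regimes the problem statement puts us in, and that is the ambiguity I am complaining about.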
There is one rather annoying subtext that recurs throughout the FAQ: the very casual and carefree use of the words “rational” and “irrational”, with the rather flawed idea that following some axiomatic system (e.g. VNM) and Bayes is “rational” and not doing so is “irrational”. I think this is a disservice and, what’s more, fails to look into the effects of intelligence, experience, training, and emotion. The Allais paradox scratches the surface, as do various psych experiments. But …
The real question is: “Why does this or that model differ from human nature?” This question never seems to be asked overtly, but it does seem to get an implicit answer: because humans are irrational. I don’t like that answer. I doubt that humans are irrational per se; rather, they are reacting to certain learned truths about their environment and incorporating those truths into their judgments.
So, for example: every day we are bombarded with advertisers forcing us to make judgments: “if you buy our product, you will benefit in this way”, which is a decision-theoretic choice based on incomplete information. I usually make a different choice: “you can choose to pay attention to this ad, or to ignore it: if you pay attention, you trade away some of your attention span in return for something that might be good; if you ignore it, you sacrifice nothing, but win nothing.” I make this last choice hundreds of times a day. Maybe thousands. I am one big optimized mean green decision machine.
The advertisers have trained us in certain ways: in particular, they have trained us to disbelieve their propositions, since they have a bad habit of lying, of over-selling and under-delivering. So when I see offers like “a jar contains red, blue and yellow balls...” my knee-jerk reaction is “bullshit, I know that you guys are probably trying to scam me, and I’d be an idiot for picking blue instead of yellow, because I know that most typical salespeople have already removed all the blue marbles from the jar. Only a gullible fool would believe otherwise, so cut it out with that Bayesian-prior snow-job. We’re not country bumpkins, you know.”
The above argument, even if made ex post facto, is an example of the kind of thinking that humans engage in regularly. Humans make thousands of decisions a day (Should I watch TV now? Should I go to the bathroom? Should I read this? What should I type as the next word of this sentence?) and it seems awfully naive to claim that whenever any of these decisions fails to follow VNM+Bayes, it is “irrational”. I think this discounts intelligence far more than it should.
There are numerous typos throughout the document; someone needs to proofread it. The math in “8.6.3. The Allais paradox” is wrong: option 2A is not actually 34% of 1A and 66% of nothing, etc.
I will come, unless I utterly space it off and forget.