Shutting up and multiplying, the answer is clearly to save eliezer...and to do so versus a lot more people than just three...the question is more interesting if you ask people what n (probably greater than 3) is their cutoff point.
robzahra
Due to chaotic / non-linear effects, you’re not going to get anywhere near the compression you need for 33 bits to be enough...I’m very confident the answer is much much higher...
you’re right. speaking more precisely, by “ask yourself what you would do”, I mean “engage in the act of reflecting, wherein you realize the symmetry between you and your opponent which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)”, as you’ve outlined above. Note though that even when the reduction is not complete (for example, b/c you’re fighting a similar but inexact clone), there can still be added incentive to cooperate...
Agreed that in general one will have some uncertainty over whether one’s opponent is the type of algorithm who one boxes / cooperates / whom one wants to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, such that you decide to cooperate or defect based on your degree of uncertainty about your opponent.
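This "plug the uncertainties into your expected utility calculation" step can be sketched concretely. A minimal sketch, assuming standard prisoner's dilemma payoffs and a single illustrative parameter `p_mirror` for your credence that the opponent's choice mirrors yours (all numbers and names here are my own assumptions, not from the discussion):

```python
# Standard PD payoffs for the row player: T > R > P > S
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def eu_cooperate(p_mirror):
    # If the opponent mirrors you with probability p_mirror, cooperating
    # yields (C,C) that often and (C,D) otherwise.
    return p_mirror * R + (1 - p_mirror) * S

def eu_defect(p_mirror):
    # Defecting yields (D,D) when mirrored, (D,C) otherwise.
    return p_mirror * P + (1 - p_mirror) * T

def should_cooperate(p_mirror):
    return eu_cooperate(p_mirror) > eu_defect(p_mirror)

# Rearranging the inequality: cooperate when
# p_mirror > (T - S) / ((T - S) + (R - P)), i.e. > 5/7 with these payoffs.
print(should_cooperate(0.5))  # False
print(should_cooperate(0.8))  # True
```

So under this toy model, "ask yourself what you would do" only tips you into cooperation once your credence in the symmetry is high enough relative to the payoff spread.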
However, in some cases at least, you don’t need to be Omega-superior to predict whether another agent one-boxes....for example, if you’re facing a clone of yourself; you can just ask yourself what you would do, and you know the answer. There may be some class of algorithms non-identical to you but which are still close enough to you to make this self-reflection increased evidence that your opponent will cooperate if you do.
Agreed with tarleton, the prisoner's dilemma questions do look under-specified... e.g., eliezer has said something like he'd cooperate if he thinks his opponent one-boxes on newcomb-like problems. Maybe you could have some write-in box here and figure out how to map the votes to simple categories later, depending on the variety of survey responses you get.
On the belief in god question, rule out simulation scenarios explicitly...I assume you intend “supernatural” to rule out a simulation creator as a “god”?
On marital status, distinguish “single and looking for a relationship” versus “single and looking for people to casually romantically interact with”
Seems worth mentioning: I think a thorough treatment of what “you” want needs to address extrapolated volition and all the associated issues that raises.
To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (seems to me very unlikely, and this looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions... similarly, whether a single simulated instance of oneself might itself not converge or not provably converge on one utility function as simulated time goes to infinity (seems quite likely; moreover, provable, in a simplified model), etc., etc.
If conclusive work has been done of which I’m unaware, it would be great if someone wants to link to it.
It seems unlikely to me that we can satisfactorily answer these questions without at least a detailed model of our own brains linked to reductionist explanations of what it means to “want” something, etc.
Wh- I definitely agree with the point you're making about knives etc., though I think one interpretation of the NFL theorems as applying not just to search but also to optimization makes your observation an instance of one type of NFL. Admittedly, there are some fine-print assumptions, which I think go under the term "almost no free lunch" when discussed.
Tim: Good, your distinction sounds correct to me.
Annoyance, I don’t disagree. The runaway loop leading to intelligence seems plausible, and it appears to support the idea that partially accurate modeling confers enough advantage to be incrementally selected.
Yes, the Golden Gate Bridge is a special case of deduction in the sense meant here. I have no problem with anything in your comment; I think we agree.
I think we’re probably using some words differently, and that’s making you think my claim that deductive reasoning is a special case of Bayes is stronger than I mean it to be.
All I mean, approximately, is:
Bayes theorem: p(B|A) = p(A|B)*p(B) / p(A)
Deduction : Consider a deductive system to be a set of axioms and inference rules. Each inference rule says: “with such and such things proven already, you can then conclude such and such”. And deduction in general then consists of recursively turning the crank of the inference rules on the axioms and already generated results over and over to conclude everything you can.
Think of each inference rule “i” as i(A) = B, where A is some set of already established statements and B corresponds to what statements “i” lets you conclude, if you already have A.
Then, by deduction we’re just trying to say that if we have generated A, and we have an inference rule i(A) = B, then we can generate or conclude B.
The connection between deduction and Bayes is to take the generated “proofs” of the deductive system as those things to which you assign probability 1 using Bayes.
So, the inference rule corresponds to the fact that p(B | A) = 1. The fact that A has been already generated corresponds to p(A) = 1. Also, since A has already been generated independently of B, p(A | B) = 1, since A didn’t need B to be generated. And we want to know what p(B) is.
Well, plugging into Bayes:
p(B|A) = p(A|B)*p(B) / p(A), i.e. 1 = 1*p(B) / 1, i.e. p(B) = 1. In other words, B can be generated, which is what we wanted to show.
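The same plug-in can be written out numerically. A trivial sketch (the function name is just illustrative), rearranging Bayes' theorem to solve for p(B) given the three certainties described above:

```python
def posterior_p_b(p_b_given_a, p_a, p_a_given_b):
    # Rearranged Bayes: p(B|A) = p(A|B) * p(B) / p(A)
    #               =>  p(B)   = p(B|A) * p(A) / p(A|B)
    return p_b_given_a * p_a / p_a_given_b

# Inference rule gives p(B|A) = 1; A already generated gives p(A) = 1;
# A was generated without B, so p(A|B) = 1.
print(posterior_p_b(1.0, 1.0, 1.0))  # 1.0
```

With any of the inputs pulled away from 1, the same formula gives the uncertain-reasoning case, which is the "limiting case" point made below.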
So basically, I think of deductive reasoning as just reasoning with no uncertainty, and I see that as popping out of Bayes in the limiting case. If a certain formal interpretation of this leads me into Gödelian problems, then I would just need to weaken my claim somewhat, because some useful analogy is clearly there in how the uncertain reasoning of Bayes reduces to certain conclusions in various limits of the inputs (p=0, p=1, etc.).
Ciphergoth, I agree with your points: that if your prior over world-states were not induction-biased to start with, you would not be able to reliably use induction, and that this is a type of circularity. Also, of course, the universe might just be such that the Occam prior doesn’t make you win; there is no free lunch, after all.
But I still think induction could meaningfully justify itself, at least in a partial sense. One possible, though speculative, pathway: Suppose Tegmark is right and all possible math structures exist, and that some of these contain conscious sub-structures, such as you. Suppose further that Bostrom is right and observers can be counted to constrain empirical predictions. Then it might be that there are more beings in your reference class that are part of simple mathematical structures as opposed to complex mathematical structures, possibly as a result of some mathematical fact about your structure and how that logically inter-relates to all possible structures. This might actually make something like induction true about the universe, without it needing to be a direct assumption. I personally don’t know if this will turn out to be true, nor whether it is provable even if true, but this would seem to me to be a deep, though still partially circular, justification for induction, if it is the case.
We’re not fully out of the woods even if all of this is true, because one still might want to ask Tegmark “Why does literally everything exist rather than something else?”, to which he might want to point to an Occam-like argument that “Everything exists” is algorithmically very simple. But these, while circularities, do not appear trivial to my mind; i.e., they are still deep and arguably meaningful connections which seem to lend credence to the whole edifice. Eli discusses in great detail why some circular loops like these might be ok/necessary to use in Where Recursive Justification Hits Bottom.
I agree with Jimmy’s examples. Tim, the Solomonoff model may have some other fine print assumptions {see some analysis by Shane Legg here}, but “the earth having the same laws as space” or “laws not varying with time” are definitely not needed for the optimality proofs of the universal prior (though of course, to your point, uniformity does make our induction in practice easier, and time and space translation invariance of physical law do appear to be true, AFAIK.). Basically, assuming the universe is computable is enough to get the optimality guarantees. This doesn’t mean you might not still be wrong if Mars in empirical fact changes the rules you’ve learned on Earth, but it still provides a strong justification for using induction even if you were not guaranteed that the laws were the same, until you observed Mars to have different laws, at which point, you would assign largest weight to the simplest joint hypothesis for your next decision.
Tim: To resolve your disagreement: Induction is not purely about deduction, but it nevertheless can be completely modelled by a deductive system.
More specifically, I agree with your claim about induction (see point 4 above). However, in defense of Eliezer’s claim that induction is a special case of deduction, I think you can model it in a deductive system even though induction might require additional assumptions. For one thing, deduction in practice seems to me to require empirical assumptions as well (i.e., the “axioms” and “inference rules” are chosen based on how right they seem), so the fact that induction needs some axioms should not itself prevent deductive-style proofs using an appropriately formalized version of it. So, once one decides on various axioms, such as the various desiderata I list above for a Solomonoff-like system, one CAN describe via a mathematical deduction system how the process of induction would proceed. So, induction can be formalized and proofs can be made about the best thing for an agent to do; the AIXI model is basically an example of this.
I agree with the spirit of this, though of course we have a long way to go in cognitive neuroscience before we know ourselves anywhere near as well as we know the majority of our current human artifacts. However, it does seem like relatively more accurate models will help us comparatively more, most of the time. Presumably the fact that human intelligence was able to evolve at all is some evidence in favor of this.
It looks to me like those uniformity of nature principles would be nice but that induction could still be a smart thing to do despite non-uniformity. We’d need to specify in what sense uniformity was broken to distinguish when induction still holds.
Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam’s Razor and Inductive Bias), and that they can make conclusions about metaphysical things?
yes, I don’t think you can get far at all without an induction principle. We could make a meta-model of ourselves and our situation and prove we need induction in that model, if it helps people, but I think most people have the intuition already that nothing observational can be proven “absolutely”, that there are an infinite number of ways to draw curved lines connecting two points, etc. Basically, one needs induction to move beyond skeptical arguments and do anything here. We’re using induction implicitly in all or most of our applied reasoning, I think.
The current best answer we know seems to be to write each consistent hypothesis in a formal language, and weight longer explanations inverse-exponentially, renormalizing such that your total probability sums to 1. Look up AIXI and the universal prior.
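The weighting scheme described can be sketched in a few lines. A toy illustration (the hypothesis names and description lengths are made-up stand-ins for actual program lengths in the formal language):

```python
def universal_weights(hypothesis_lengths):
    """Weight each consistent hypothesis by 2**(-description_length),
    then renormalize so the weights sum to 1."""
    raw = {h: 2.0 ** (-length) for h, length in hypothesis_lengths.items()}
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}

# Three consistent hypotheses with illustrative program lengths in bits:
weights = universal_weights({"h_simple": 2, "h_medium": 3, "h_complex": 5})
print(weights)  # shorter programs get exponentially more weight
```

Each extra bit of description length halves a hypothesis's unnormalized weight, which is the Occam-style bias the comment is pointing at.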