P: 0 ≤ P ≤ 1
Part of The Contrarian Sequences.
Reply to infinite certainty and 0 and 1 are not probabilities.
Introduction
In Infinite Certainty, Eliezer argues that you can never be absolutely sure of a proposition. I have disagreed with that argument for a long time, but due to akrasia (acedia) I never got around to writing up my objection. I think I have a more coherent counterargument now, and I present it below. Because the post I am replying to and Infinite Certainty are linked, I address both of them in this post.
This doesn’t mean, though, that I have absolute confidence that 2 + 2 = 4. See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place. I could have hallucinated all that previous evidence, or I could be misremembering it. In the annals of neurology there are stranger brain dysfunctions than this.
This is true. That a statement is true does not mean that you have absolute confidence in the veracity of the statement. It is possible that you may have hallucinated everything.
Suppose you say that you’re 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once.
I am not so sure of this. If I am well calibrated and I say I have X% confidence in each of K statements, then you would expect about ((100-X)/100)*K of those statements to be wrong and the remainder to be right. It does not follow that, because I have X% confidence in a single belief, I must be able to produce K statements in which I repose equal confidence and be wrong only ((100-X)/100)*K times.
The implication runs one way: X% confidence implies that if you made K such statements, then about ((100-X)/100)*K of those statements would be wrong.
A well calibrated agent does not have to be able to actually produce K statements with only ((100-X)/100)*K of them wrong in order to possess X% confidence in a proposition. Calibration only says that, in a hypothetical world in which they did make K such statements, roughly ((100-X)/100)*K of them would be wrong. To assert that a well calibrated agent must be able to produce those statements before they can have X% confidence is to treat the hypothetical as a given fact: either an honest mistake, or deliberate malice.
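To make the direction of the implication concrete, here is a minimal simulation sketch (my own illustration; the function name and parameters are mine, not anything from either post):

import random

# An agent whose "99.99% confident" statements are each independently
# true with probability 0.9999 is wrong, on average, about once per
# 10,000 such statements -- whether or not it can enumerate 10,000 of
# them on demand.
def average_errors(confidence=0.9999, K=10_000, trials=1_000):
    total_wrong = 0
    for _ in range(trials):
        total_wrong += sum(1 for _ in range(K) if random.random() > confidence)
    return total_wrong / trials   # expected value: (1 - confidence) * K = 1

print(average_errors())           # prints roughly 1.0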
As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now! If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That’s around a solid year’s worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.
Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to talk for a hundred human lifetimes, and not be wrong even once?
Assert a confidence of (1 − 1/googolplex) and your ego far exceeds that of mental patients who think they’re God.
And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3^^^3.
All based on the same flawed premise, and equally flawed.
I am Infinitely Certain
There is one proposition that I start with and assign a probability of 1. Not 1 − 1/googolplex, not 1 − 1/3^^^^3, not 1 − epsilon (where epsilon is an arbitrarily small number), but a probability of exactly 1.
I exist.
René Descartes presents a wonderful argument for the veracity of this statement:
Accordingly, seeing that our senses sometimes deceive us, I was willing to suppose that there existed nothing really such as they presented to us; And because some men err in reasoning, and fall into Paralogisms, even on the simplest matters of Geometry, I, convinced that I was as open to error as any other, rejected as false all the reasonings I had hitherto taken for Demonstrations; And finally, when I considered that the very same thoughts (presentations) which we experience when awake may also be experienced when we are asleep, while there is at that time not one of them true, I supposed that all the objects (presentations) that had ever entered into my mind when awake, had in them no more truth than the illusions of my dreams. But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be something; And as I observed that this truth, I think, therefore I am, was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the Sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search.
Eliezer quotes Rafal Smigrodski:
“I would say you should be able to assign a less than 1 certainty level to the mathematical concepts which are necessary to derive Bayes’ rule itself, and still practically use it. I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. I don’t like the idea of not being able to change my mind, ever.”
I am alright with accepting as an axiom that I exist. I see no reason why I should be cautious of assigning a probability of 1 to this statement. I am infinitely certain that I exist.
If you accept Descartes’s argument, then this is very important. You are accepting that we can be infinitely certain about a proposition, and not just that: that it is sensible to be infinitely certain about a proposition. Usually only one counterexample is necessary, but there are several other statements to which you may assign a probability of 1.
I believe that I exist.
I believe that I believe that I exist.
I believe that I believe that I believe that I exist.
And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer’s (fatuous) requirements for assigning a certain level of confidence to a proposition. If you feel that it is not sensible to assign probability 1 to the first statement, then consider this argument. I assign a probability 1 to the proposition “I exist”. This means that the proposition “I exist” exists (pun intended) in my mental map of the world, and is therefore a belief of mine. By deduction, if I assign a probability of 1 to the statement “I exist”, then I must assign a probability of 1 to the proposition “I believe that I exist”. By induction, I must assign a probability of 1 to all the infinite statements, and all of them are true.
(I assign a probability of 1 to deduction being true).
Generally, using the power of recursion, we can pick any statement to which we assign a probability of 1 and generate infinitely more statements to which we (by deduction) also assign a probability of 1.
Let X be a proposition to which we assign a probability of 1.
# f takes a proposition var (a string) to which we assign probability 1
# and prints an unending stream of further propositions, each of which
# (by the deduction argument above) must also receive probability 1.
def f(var):
    statement = var
    while True:                          # never terminates, by design
        statement = "I believe that " + statement
        print(statement + ".")

# Example: f(X) with X = "I exist" prints
#   "I believe that I exist."
#   "I believe that I believe that I exist."
#   ... and so on, forever.
f(X), for any proposition X to which we assign a probability of 1, prints an infinite number of statements to which we also assign a probability of 1.
While I’m at it, I can show that there are an uncountably infinite number of such statements with a probability of 1.
Let S be the list of all propositions produced by f(X) (for some X to which we assigned a probability of 1).
import random

# g(var) takes var, the list S of propositions produced by f, conjoins a
# randomly chosen subset of it into a single new proposition, prints that
# proposition, and feeds it back into f to spawn infinitely many more.
# (S is infinite in principle; any finite prefix works for illustration.)
def g(var):
    k = random.randrange(len(var))       # size of the random subset
    j = random.randrange(len(var))
    conjunction = "I believe " + var.pop(j)
    for _ in range(k):
        j = random.randrange(len(var))
        conjunction += " and " + var.pop(j)
    print(conjunction)
    f(conjunction)                       # generate infinitely many more

# Example: g(S)
Assuming #S = Aleph_null, there are 2^#S possible conjunctions that g can produce, and each of them can be used to generate an infinite sequence of true propositions. By Cantor’s diagonal argument, the number of propositions to which we assign a probability of 1 is uncountable. For each of those propositions, we assign a probability of 0 to its negation. That is, if you accept Descartes’s argument, or accept any single proposition as having a probability of 1 (or 0), then you accept uncountably many propositions as having a probability of 1 (or 0). Either we can never be certain of any proposition ever, or we can be certain of uncountably many propositions (you can also use the outlined method to construct the K statements with arbitrary accuracy).
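For the cardinality step, a sketch (assuming S is countably infinite): by Cantor's theorem,

\[
|S| = \aleph_0 \;\Rightarrow\; |\mathcal{P}(S)| = 2^{\aleph_0} > \aleph_0 ,
\]

so S has uncountably many subsets, and each subset corresponds to a distinct candidate conjunction.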
Personally, I see no problem with accepting “I exist” (and deduction) as having P of 1.
When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.
Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.
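For concreteness (my addition, not part of the quoted post): the log odds of a probability p is

\[
\text{log-odds}(p) = \log\frac{p}{1-p} ,
\]

which tends to +∞ as p → 1 and to −∞ as p → 0. That is the sense in which, on this scale, reaching certainty requires infinitely strong evidence.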
This ignores the fact that you can assign priors of 0 and 1. In fact, it is for this very reason that I argue that 0 and 1 are probabilities: Eliezer is right that we can never update up to 1 (or down to 0) without starting from a prior of 1 (or 0), but we can, and I argue we sometimes should, start with priors of 0 and 1.
0 and 1 as Priors
Consider Pascal’s Mugging. Pascal’s Mugging is a breaker (“breaker” is a name I coined for decision problems which break decision theories). Let us reconceive the problem so that the person doing the mugging is me.
I walk up to Eliezer and tell him that he should pay me $10,000 or I will grant him infinite negative utility.
Now, I cannot (as a matter of fundamental physical law) inflict infinite negative utility on Eliezer. However, if Eliezer is rational (maximising his expected utility), then Eliezer must pay me the money. No matter how much money I demand from Eliezer, Eliezer must pay me, because Eliezer does not assign a probability of 0 to me carrying out my threat, and no matter how small the probability is, as long as it’s not 0, paying me the ransom I demanded is the choice which maximises expected utility.
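Spelled out (my framing of the standard expected-utility argument, with R the ransom and p > 0 the probability assigned to the threat being carried out):

\[
\mathbb{E}[U(\text{refuse})] = p \cdot (-\infty) + (1-p) \cdot 0 = -\infty ,
\qquad
\mathbb{E}[U(\text{pay})] = -R ,
\]

so for any p > 0, however small, and any finite ransom R, paying maximises expected utility.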
(If you claim that it is impossible for me to grant you infinite negative utility, that infinite negative utility is incoherent, or that “infinite negative utility” is a category error, then you are assigning a probability of 0 to the existence of infinite negative utility, and therefore, implicitly, a probability of 0 to my granting you infinite negative utility: P(A and B) <= P(A), where A is “infinite negative utility exists” and B is “I can grant infinite negative utility”.)
I have no problems with decision problems which break decision theories, but when a problem breaks the very formulation of rationality itself, then I’m pissed. There is a trivial solution to Pascal’s mugging using classical decision theory: accept the objective definition of probability; once you do, the probability of my carrying out my threat becomes zero and the problem disappears. Only the insistence on clinging to an (unfounded) subjective notion of probability that forbids 0 and 1 as probabilities leads to this mess.
If anything, Pascal’s mugging should be a definitive proof that 0 and 1 are perfectly legitimate priors (if you accept a prior of 0 that I will grant you infinite negative utility, then trivially you accept a prior of 1 that I will not). Pascal’s mugging only “breaks” expected utility theory if you forbid priors of 0 and 1, an inane commandment.
I’ll expand more on breakers, rationality, etc. in my upcoming paper (several tens of pages).
Conclusion
So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.
The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.
However, in the real world, when you roll a die, it doesn’t literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write “37” on one side.
If you made a magical symbol to stand for “all possibilities I haven’t considered”, then you could marginalize over the events including this magical symbol, and arrive at a magical symbol “T” that stands for infinite certainty.
But I would rather ask whether there’s some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn’t believe in absolute certainty.
Eliezer presents a shaky basis for rejecting 0 and 1 as probabilities. His model leads to absurd conclusions (a proof by contradiction, I would argue, that 0 and 1 are indeed probabilities); he offers no benefits to rejecting the standard model and replacing it with his (only multiple demerits); and he does not formalise an alternative model of probability that is free of absurdities and has more benefits than the standard model.
“0 and 1 are not probabilities” is a solution in search of a problem.
Epistemic Hygiene
This article may have come across as overly vicious and confrontational; I adopted such an attitude to minimise the bias in my perception of the original article based on the halo effect.
The search keyword for reasoning that uses “I exist” as a derivation step in arguments is “anthropic reasoning”. This is squarely in the middle of a thicket of very hard, mostly unsolved research problems, and unless you have a research-level understanding of that field, you probably shouldn’t assign ordinary confidence, let alone axiom-level confidence, in anything whatsoever about it.
I choose to accept as an axiom that I exist. For if I do not exist, everything else is meaningless. If I doubt my own existence, what then can I believe in? What belief can I have if not my own existence?
I exist.
(I’ve taken note of anthropic reasoning, and may read up on it later, but it wouldn’t change the axiom).
There are decision-theoretic contexts in which you don’t exist, but your (counterfactual) actions still matter because you’re being simulated or reasoned about. These push on corner cases of the definitions of “I” and of “exist”, and as far as I know are mostly not written up and published because they’re still poorly understood. But I’m pretty sure that, for the most obvious ways of defining “I” and “exist”, adding your own existence as an axiom will lead to incorrect results.
I’m tempted to agree with DragonGod on a weaker form (or phrasing) of the “I exist” proposition:
I would defend the proposition that my feeling of subjective experience (independent of whether or not I am mistaken about literally everything I think and believe) really does exist with a probability of 1. And even if my entire experience was just a dream or simulated on some computer inside a universe where 2+2=3 actually holds true, the existence of my subjective experience (as opposed to whatever “I” might mean) seems beyond any possible doubt.
Even if every single one of my senses and my entire map of reality (even including the concept of reality itself) was entirely mistaken in every possible aspect, there would still be such a thing as having/being my subjective experience. It’s the one and only true axiom in this world that I think we can assign P=1 to.
Especially if you don’t conceive of the word “exist” as meaning “is a thing within the base level of reality as opposed to a simulation.”
A simulation exists. A simulation of me is me.
I am my information, and a simulation of me is still me.
I think this version of Pascal’s mugging could be rejected if you think that “infinite negative utility” is not a phrase that means anything, without appealing to probability of 0.
However, I still accept 0 and 1 as valid probabilities, because that is how probability is defined in the mathematical structures and proofs that underpin all of the probability theory we use, and as far as I know no other foundation of probability (up to isomorphism) has been rigorously defined and explored.
The fact that a measure is nonnegative, rather than strictly positive, is a relevant fact, and if you’re going to claim 0 and 1 are not probabilities, you had better be ready to re-define all of the relevant terms and re-derive all of the relevant results in probability theory in this new framework. Since no such exposition exists, you should feel free to treat any claims that 0 and 1 are not probabilities as, at best, speculation.
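For reference, a sketch of the standard (Kolmogorov) setup this comment appeals to: a probability measure on a sample space Ω with a σ-algebra of events is a function

\[
P : \mathcal{F} \to [0,1], \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \ \text{for pairwise disjoint } A_i ,
\]

and nonnegativity is the only lower-bound constraint on a measure, so P(A) = 0 and P(A) = 1 are perfectly well-formed values.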
Now, I know those of you who have read Eliezer’s post are about to go “But wait! What about Cox’s Theorem! Doesn’t that imply that odds have to be finite?” No, it does no such thing. If you look at the Wikipedia article on Cox’s Theorem, you will see that probability must be represented by real numbers, and that this is an assumption, rather than a result. In other words, any “way of representing uncertainties” must map them to real numbers in order for Cox’s Theorem to apply, and so Cox’s Theorem only applies to odds or log odds if you assume that odds and log odds are finite to begin with. Obviously, this is circular reasoning, and no more of an argument than simply asserting that probability must be in (0,1) and stopping there.
Moreover, if you look down the page, you will see that the article explicitly states that one of Cox’s results is that probability is in… wait for it… [0,1].
I accept that this is likely the best thing for you to do for debugging your own world-view, but there’s a problematic group-epistemic question: it would be bad if a person could always justify arguing in a way that’s biased against X by saying “I’m biased toward X, so I have to argue in a way that’s biased against X.”
To the extent that you can, I’d suggest that you steer toward de-biasing in a way that’s closer to “re-deriving things from first principles”; IE, try to figure out how one would actually answer the question involved, and then do that, without particularly steering toward X or against X.
With respect to the object-level question: the same type of argument which supports the laws of probability also supports non-dogmatism (see theorem 4), IE, the rejection of probabilities zero or one for non-logical facts. So, I put this principle on the same level as the axioms of probability theory, but I do not extend it to things like “P(A or not A)=1”, which don’t fall to the same arguments.
I reject 0 and 1 for non-logical facts as well.
”I think therefore I am” is a logical proof of my own existence, and as such, I assign a probability of 1 to the proposition: “I exist”.
I’m not at all sure it’s any such thing. It depends a little on how broadly you’re prepared to construe “my own existence”.
You aren’t really entitled to say “I think”. You know that some thought is happening, but you don’t really know that what’s having that thought is the right sort of thing to be labelled “I” because that word carries a lot of baggage (e.g., the assumption of persistence through time) that you aren’t entitled to when all you have is the knowledge that some thinking is going on. So, for instance, if you go on—as Descartes does—to draw further inferences involving “I” from “I exist”, and if you assume at different times that you’re referring to the same “I”, then you are cheating.
For more about this stuff, see the Stanford Encyclopedia of Philosophy.
Also I mentioned that even if you actually have a logical proof of something, you cannot assign a probability of 1 to the conclusion, because you might have made a mistake in the argument. You are pointing out some ways that might have happened here. Even if it did not, no one can reasonably assign a probability of 1 to the claim that they did not make such a mistake, and hence to the conclusion.
Right; sorry for not phrasing that in a way that sounded like agreement with you. We should be less than totally certain about mathematical statements in real life, but when setting up the formalism for probability, we’re “inside” math rather than outside of it; there isn’t going to be a good argument for assigning less than probability 1 to logical truths. Only bad things happen when you try.
This does change a bit when we take logical uncertainty into account, but although we understand logical uncertainty better these days, there’s not a super strong argument one way or the other in that setting—you can formulate versions of logical induction which send probabilities to zero immediately when things get ruled out, and you can also formulate versions in which probabilities rapidly approach zero once something has been logically ruled out. The version which jumps to zero is a bit better, but no big theoretical advantage comes out of it afaik. And, in some abstract sense, the version which merely rapidly approaches zero is more prepared for “mistakes” from the deductive system—it could handle a deductive system which occasionally withdrew faulty proofs.
I found the fact that Eliezer did not mention the classic “I think, therefore I am” argument in these essays odd as well. It does seem as though nothing I could experience could convince me that I do not exist, because by experiencing it, I am existing. Therefore, assigning a probability of 1 to “I exist” seems perfectly reasonable.
Thanks for the vote of confidence.
https://en.wikipedia.org/wiki/Cotard_delusion
That’s a very interesting condition, and I will agree that it indicates that it is possible I could come to the belief that I did not exist if some event of brain damage or other triggering event occurred to cause this delusion. However, I would only have that belief because my reasoning processes had been somehow broken. It would not be based on a Bayesian update because the only evidence for not existing would be ceasing to have experiences, which it seems axiomatic that I could not update upon. People with this condition seem to still have experiences, they just strangely believe that they are dead or don’t exist.
If you could come to the wrong belief because of brain damage, you could come to the other belief because of brain damage too; this is a general skeptical attack on the possibility of knowledge or using a priori proofs to convince yourself of something without making a lot of other assumptions about your intactness and sanity (akin to how it’s hard to come up with good arguments to believe anything about the world without including some basic assumptions like “induction works”), related to the Kripke/Wittgenstein attack on memory or Lewis Carroll’s rule-following paradox. So while the cogito may be true in the sense of ‘a person thinking implies their existence’, you can’t use it to bootstrap yourself out of total Cartesian doubt & immunity to the evil genius, much less into being a Bayesian reasoner who can assign P=1 to things.
Fair enough, I suppose there is a possibility that there is some way I could have experiences and somehow also not exist, even though I cannot imagine how. My inability to imagine how such evidence could be logically consistent does not mean that it is actually, certainly impossible that I will observe such evidence.
Which of Rossin’s statements was your “Cotard delusion” link intended to address? It does seem to rebut the statement that “nothing I could experience could convince me that I do not exist”, since experiencing the psychiatric condition mentioned in the link could presumably cause Rossin to believe that he/she does not exist.
However, the link does nothing to counter the overall message of Rossin’s post which is (it seems to me) that “I think, therefore I am” is a compelling argument for one’s own existence.
BTW, I agree with the general notion that from a Bayesian standpoint, one should not assign p=1 to anything, not even to “I exist”. However, the fact of a mental condition like the one described in your link does nothing (IMO) to reduce the effectiveness of the “I think, therefore I am” argument.
My phenomenological leanings are well known, and even I am epistemically troubled at the idea of assigning probability 1 to the proposition that my experience of my own existence implies my existence. I am willing to go further and say I can’t even assign probability 1 to the proposition that because I experience, then experience must exist.
I’ve not quite worked out how I might explain this precisely, but it seems I should not be willing to be completely certain, because even my experience of experience-in-itself is experience, and, lacking an outside view of my own ontology, I have no way to completely verify that experience as I experience it is what it appears to be. To be fair, I’m not sure I can meaningfully place a number on this uncertainty, but to give it up for certainty seems to say something stronger about my belief in experience than I can justify, since I do not have access to what we call the metaphysical. This is to say that existence may be something quite different from what we think it is based on our experience of it, perhaps in a way utterly incomprehensible from the inside of experience, and we may not even be able to meaningfully say that experience exists.
I can at best say I am forced to operate as if I were certain about such basic propositions, because I lack the ability to know how uncertain I might be, or whether uncertainty is even a meaningful construct in this context, and so the only way forward is to assume certainty of existence. But in doing this I am admitting the fallibility of my reasoning, and so should concede some non-zero probability that I am entirely failing to experience experience in a coherent way.
The problem isn’t that it might be possible for someone to think without existing, or to experience things without experience existing. The problem is that assigning a probability of 1 to something means “this way of knowing is such that it cannot fail.” And just because you have to exist in order to think, without fail, does not mean that the “way of knowing” which is involved is a kind which cannot fail. In other words, 13 cannot fail to be prime, but the way we know that “13 is prime” is a way that in principle can fail. “I exist” is no different. If you think that, you exist without fail, but not by reason of your way of knowing. So you cannot attribute infinite certainty to it.
How can you not exist?
I think the best possible argument against “I think, therefore I am” is that there may be something either confused or oversimplified about either your definition of “I”, your definition of “think”, or your definition of “am”.
“I” as a concept might turn out to not really have much meaning as we learn more about the brain, for example, in which case the most you could really say would be “Something thinks therefore something thinks” which loses a lot of the punch of the original.
I have a coherent definition of “I”.
Fair.
I actually think a bigger weakness in your argument is here:
That can’t actually be infinite. If nothing else, your brain cannot possibly store an infinite regression of beliefs at once, so at some point, your belief in belief must run out of steps.
I do not need to actually store those beliefs—it is only necessary to be able to state them—and I wrote a program that outputs those beliefs.
Except by their nature, if you’re not storing them, then the next one is not true.
Let me put it this way.
Step 1: You have a thought that X is true. (Let’s call this 1 bit of information.)
Step 2: You notice yourself thinking step 1. Now you say “I appear to believe that X is true.” (Now this is 2 bits of information: X, and belief in X.)
Step 3: You notice yourself thinking step 2. Now you say “I appear to believe that I believe that X is true.” (3 bits of information: X, belief in X, and belief in belief in X.)
If at any point you stop storing one of those steps, the next step becomes untrue; if you are not storing, say, step 11 in your head right now (belief in belief in belief....) then step 12 would be false, because you don’t actually believe step 11. After all, “belief” is fundamentally a question of your state of mind, and if you don’t have state X in your mind, if you’ve never even explicitly considered state X, it can’t really be a belief, right?
I see.
I thought that you don’t actually have to store those beliefs in your head.
My idea was:
Do you disagree?
I disagree, but in any case you are not (nor is anyone else) an agent with a consistent set of beliefs, so it doesn’t matter if that is true or not.
The real issue here is that there is clearly some possibility that you are mistaken enough about the nature of belief that it is impossible for you to have a belief that consists of a million “I believe that I believe… that I exist.” And if that is the case, and you cannot have that belief, then you mistakenly assigned a probability of 1 to a false statement (since the statement that you believe that string would be false.). Which explains why you should not assign a probability of 1 in the first place, since this is supposed to never happen.
(Also, “I believe that I believe X” cannot be logically deduced from “I believe X” in any case.)
To deduce something you need to use two premises. Nothing follows from “I believe X” without something additional. The other premise would have to be, “In every case when someone believes X, they also believe that they believe X.” But that is not obviously true.
“In the case where I (Dragon God) believe X, I believe that I believe X.”
I am reasonably confident that is true.
Most people would call that begging the question. I will refrain from that particular accusation since ordinarily people just mean if you agree with the premises of course you would agree with the conclusion. But there is something to the point they would be making: if you are sure of that in general, that is already claiming more than the claim that you believe that you believe that you exist. In other words, it can be “logically deduced” only by assuming even more stuff.
Also, in order to have a probability of 1 for the conclusion, you would need a probability of 1 for that claim, not just reasonable confidence.
I don’t think that’s actually true.
Even if it were, I don’t think you can say you have a belief if you haven’t actually deduced it yet. Even taking something simple like math, you might believe theorem A, theorem B, and theorem C, and it might be possible to deduce theorem D from those three theorems, but I don’t think it’s accurate to say “you believe D” until you’ve actually figured out that it logically follows from A, B, and C.
If you’ve never even thought of something, I don’t think you can say that you “believe” it.
The value of the threat becomes zero times infinity, and so undefined. This definitely improves the situation, but I’m not sure it’s a full solution.
There can’t be uncountably many propositions to which you assign probability 0, because you can only express countably many propositions.
Regarding your Pascal’s mugging argument, VNM-rational agents don’t assign infinite or negative infinite utility to anything. The variant using utility that is vast but finite in magnitude need not convince an agent that assigns the extreme outcome comparably tiny but nonzero probability. And it doesn’t work for agents with bounded utility functions, because they don’t assign such high utilities to any outcome, and thus there aren’t any outcomes that they must assign extremely tiny probabilities to in order to avoid weird behavior.
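Concretely (my gloss on the bounded-utility point, with M an assumed bound on the magnitude of utility and R the finite ransom):

\[
\mathbb{E}[\text{loss from ignoring the threat}] \le p \cdot M < R \quad \text{whenever } p < R / M ,
\]

so an agent with bounded utility can coherently assign the threat a small but nonzero probability and still refuse to pay.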
Do you agree that there are (uncountably) infinitely many propositions to which we can assign a probability of 1? Then we assign a probability of 0 to the negations of those propositions.
No, of course not. As I said, there are only countably many propositions you can express at all.
I showed a method for constructing uncountably many propositions using recursion.
It appears that you’re starting with some countably infinite set S of propositions, and then trying to make a proposition for each subset of S by taking the conjunction of all propositions in the subset. But all but countably many of those subsets are infinite, and you can’t take the conjunction of infinitely many propositions.
Why can’t you take the conjunction of infinitely many propositions?
You can’t write it down in any finite amount of time.
Must I write them down? I wrote a program that could write them down.
No, you didn’t. A program can’t write down an infinite amount of information either.
Grant the program an infinite amount of time. I didn’t say the program must terminate, did I?
A program can’t pick out arbitrary subsets of an infinite set either. Programs can’t do uncountably many things, even if you give them an infinite amount of time to work with.
As written, g(var) picks out one arbitrary subset of the infinite set. There are 2^Aleph_null possible subsets g(var) can produce; thus, g(var) can (not does) produce uncountably many true propositions.
Ok, I see what you’re trying to do now (though the pseudocode you wrote still doesn’t do it successfully). It’s true that with randomness, there are uncountably many infinite strings that could be produced. But you still have no way of referring to each one individually, so there’s little point in calling them “propositions”, which typically refers to claims that can actually be stated.
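For context, the countability point being relied on here is standard; a sketch, assuming propositions are finite strings over some finite alphabet Σ:

\[
|\Sigma^{*}| = \Big|\bigcup_{n=0}^{\infty} \Sigma^{n}\Big| = \aleph_0 ,
\]

so there are only countably many finitely expressible propositions, even though the collection of subsets of them (and hence of infinite “conjunctions”) has cardinality 2^Aleph_null.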
Vaniver and Math_viking, if you claim that it is impossible for me to grant you infinite negative utility, that infinite negative utility is incoherent, or that “infinite negative utility” is a category error, then you are assigning a probability of 0 to the existence of infinite negative utility, and therefore, implicitly, a probability of 0 to my granting you infinite negative utility (because P(A and B) <= P(A), where A is “infinite negative utility exists” and B is “I can grant infinite negative utility”).
Consider the logical proposition “A xor”.
Does it makes sense to call it true or false? Not really; when I try to call it a proposition the response should be “type error; that’s a string that doesn’t parse into a proposition.”
Ah, but what probability do we assign to the statement “‘A xor’ results in a type error because it’s a string that doesn’t parse into a proposition”? 1 − epsilon, and we’re done. Remember, the probabilistic model of utility came from somewhere, and has an associated level of evidence and support. It’s not impossible to convince me that it’s wrong.
But does this make me vulnerable to Pascal’s mugging? However low I make epsilon, surely infinity is larger. It does not, because of the difference between inside-model probabilities and outside-model probabilities.
Suppose I am presented with a dilemma. Various different strategies all propose different actions; the alphabetical strategy claims I should pick the first option, the utility-maximizing strategy claims I should pick the option with highest EV, the satisficing strategy claims I should pick any option that’s ‘good enough’, and so on. But the epsilon chance that the utility is in fact infinite is not within the utility-maximizing strategy; it refers to a case where the utility-maximizing strategy’s assumptions are broken, and thus needs to be handled by a different strategy—presumably one that doesn’t immediately choke on infinities!
I understand your argument about breaking the assumptions of the strategy. What do inside-model probabilities and outside-model probabilities mean? I don’t want to just blindly guess.
See here.
The basic problem with this kind of argument is that you are taking the math too seriously. Probability theory leads to absurdities if you assume that you do not have a probability of 0 or 1 for “the trillionth digit of pi is greater than 5.” In reality normal people will neither be certain that is true nor certain it is false. In other words, probability is a formalism of degrees of belief, and it is an imperfect formalism, not a perfect one.
If we consider the actual matter at hand, rather than the imperfect formalism, we actually have bounded utility. So we do not care about very low probability threats, including the one in your example. But although we have bounded utility, we are not infinitely certain of the fact that our utility is bounded. Thus we do not assign a probability of zero to “we have unbounded utility.” Nonetheless, it would be a misuse of a flawed formalism to conclude that we have to act on the possibility of the infinite negative utility. In reality, we act based on our limited knowledge of our bounded utility, and assume the threat is worthless.
Threatening to inflict “infinite negative utility” is qualitative rather than quantitative. You have not yet said how much you would inflict. Contrast this with saying “I am going to inflict finite negative utility on you”.
If you know about transfinite amounts, it is possible to make a threat of infinite magnitude that it is rational, on expectation-maximisation grounds, to reject as implausible. If you threaten me with omega negative utility but want only finite rewards, and I think your plausibility is 1 per omega per omega, I would still be losing infinitely by handing over the finite amount. While this makes the technical claim false, it is in essence true. If the “ransom” is finite and the threat transfinite, then the plausibility will need to be (sufficiently) infinitesimal to be rejectable.
However, there might be room for the view that infinitesimal doubt is a different thing than a probability of 0. “Almost never” allows finitely many occurrences while having 0 as the closest real approximation (any positive real number would be grossly inappropriate).
Here’s a question. As humans, we have the inherent flexibility to declare that something has either a probability of zero or a probability of one, and then the ability to still change our minds later if somehow that seems warranted.
You might declare that there’s a zero probability that I have the ability to inflict infinite negative utility on you, but if I then take you back to a warehouse where I have banks of computers that I can mathematically demonstrate contain uploaded minds which are going to suffer in the equivalent of hell for an infinite amount of subjective time, you would likely at that point change your estimate to something greater than zero. But if you actually set the probability to zero, you can’t do that without violating Bayesian rules, right?
It seems there’s a disconnect here; it may be better, in terms of actually using our minds to reason, to be able to treat a probability as if it were 0 or 1, but only because we can later change our minds if we realize we made an error; in which case it probably wasn’t actually 0 or 1 to start with in the strictest sense.
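This is the standard formal point (a sketch, assuming the observed evidence E has P(E) > 0): under Bayes’ rule,

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} = \frac{P(E \mid H) \cdot 0}{P(E)} = 0 ,
\]

so a hypothesis once assigned probability zero stays at zero no matter what is observed; any later change of mind has to happen outside the update rule.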
Vaniver and math_viking, it doesn’t have to be infinite; I can just name a ridiculously big number like G_64 or something.
So then the probability you assign to their ability to actually effect that change can be assigned a correspondingly small, nonzero number, even if you don’t want to assign 0 probability.
I have no counter argument to this.
You’re correct that a statement at a confidence level of p does not imply the existence of other statements at the level of p. But given that the statement is illustrative, that seems fine.
Do note that there is the question of where the confidence level came from in the first place. If I don’t have a set of 10k statements that look the same to me, of which about one is wrong, from where comes my confidence level of .9999? How did I distinguish it from a confidence level of .999?
Do you count a Boltzmann brain as ‘existing’ in the meaningful sense?
Note that a utility value of infinity implies a probability of 1, because of the probabilistic interpretation of utilities. Rather than assigning probability 0 to the statement, you can simply return a type error when they say “infinite utility”, just as you would return a type error if they said “probability 1”, and the rejection would work fine.
I may have updated from priors to arrive at that level.
It may be my prior.
I can make infinitely many true statements (using the method I outlined above). If I want to reach a certain number of true statements, say Y, I can make Y true statements, and make false statements for the rest to reach the desired confidence level.
If you’re doing the thing correctly, you view the statements as equally probable; otherwise it doesn’t make sense to group them together. It’s not “A=A, B=B, C=C, and D=E, all at confidence level 75%” because I can tell the difference, and would be better off saying “the first three at confidence level 1-epsilon, the last at confidence level epsilon.”
I see.