If the goal here is to make a statement to which one can assign probability 1, how about this: something exists. That would be quite difficult to contradict (though it has been done by non-realists).
Is “exist” even a meaningful term? My probability on that is highish but nowhere near unity.
My attempts to taboo “exist” led me to instrumentalism, so beware.
Is instrumentalism such a bad thing, though? It seems like instrumentalism is a better generalization of Bayesian reasoning than scientific realism, and it approaches scientific realism asymptotically as your prior for “something exists” approaches 1. (Then again, I may have been thoroughly corrupted in my youth by the works of Robert Wilson).
Is instrumentalism such a bad thing, though? It seems like instrumentalism is a better generalization of Bayesian reasoning than scientific realism
If you take instrumentalism seriously, then you remove external “reality” as meaningless, and only talk about inputs (and maybe outputs) and models. Basically, in the diagram from Update then Forget (not reproduced here), you remove the top row of W’s, leaving dangling arrows where “objective reality” used to be. This is not very aesthetically satisfying, since the W’s link current actions to future observations, and without them the causality is not apparent or even necessary. This is not necessarily a bad thing, if you take care to avoid the known AIXI pitfalls of wireheading and anvil dropping. But this is certainly not one of the more popular ontologies.
“Exist” is meaningful in the sense that “true” is meaningful, as described in EY’s The Simple Truth. I’m not really sure why anyone cares about saying something with probability 1 though; no matter how carefully you think about it, there’s always the chance that in a few seconds you’ll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.
Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.
I must raise an objection to that last point: there is at least one domain on which this does not hold. For instance, my belief that A→A is easily 100%, and there is no way for this to be a mistake. If you don’t believe me, substitute A=“2+2=4”. Similarly, I can never be mistaken in saying “something exists” because for me to be mistaken about it, I’d have to exist.
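For what it’s worth, within a formal system the A→A claim really is a one-line theorem; here is a sketch in Lean 4 (which, of course, does nothing to address the worry raised below that the brain checking the proof might itself be malfunctioning):

```lean
-- A → A is provable with no assumptions about any particular world:
theorem a_implies_a (A : Prop) : A → A :=
  fun a => a  -- given a proof of A, return it
```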
You could be mistaken about logic, a demon might be playing tricks on you etc.
Similarly, I can never be mistaken in saying “something exists” because for me to be mistaken about it, I’d have to exist.
You can say “Sherlock Holmes was correct in his deduction.” That does not rely on Sherlock Holmes actually existing, it’s just noting a relation between one concept (Sherlock Holmes) and another (a correct deduction).
You could be mistaken about logic, a demon might be playing tricks on you etc.
What would you say, if asked to defend this possibility?
You can say “Sherlock Holmes was correct in his deduction.” That does not rely on Sherlock Holmes actually existing, it’s just noting a relation between one concept (Sherlock Holmes) and another (a correct deduction).
This is true, but (at least if we’re channeling Descartes) the question is whether or not we can raise a doubt about the truth of the claim that something exists. Our ability to have this thought doesn’t prove that it’s true, but it may well close off any doubts.
What would you say, if asked to defend this possibility?
The complexity based prior for living in such a world is very low, but non-zero. Consequently, you can’t be straight 1.0 convinced it’s not the case.
A teapot could actually be an alien spaceship masquerading as a teapot-lookalike. That possibility is heavily, heavily discounted against using your favorite version of everyone’s favorite heuristic (Occam’s Razor). However, since it can be formulated (with a lot of extra bits), its probability is non-zero. Enough to reductio the “easily 100%”.
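To make the “lots of extra bits” point concrete, here is a minimal Python sketch of a toy description-length prior; the hypotheses and bit counts are invented purely for illustration, not taken from anyone’s actual model:

```python
from fractions import Fraction

# Toy complexity-based prior: weight each hypothesis by 2^(-description length),
# then normalize. Bit counts are made up for illustration only.
bits = {
    "ordinary teapot": 1_000,
    "alien ship disguised as a teapot": 1_050,  # 50 extra bits of specification
}
weights = {h: Fraction(1, 2 ** n) for h, n in bits.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

# Heavily discounted, but never exactly zero:
print(float(posterior["alien ship disguised as a teapot"]))  # ~8.9e-16
```

The extra bits push the probability down exponentially, but no finite description length ever drives it to zero, which is all the reductio of “easily 100%” needs.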
The complexity based prior for living in such a world is very low, but non-zero. Consequently, you can’t be straight 1.0 convinced it’s not the case.
Well, this is a restatement of the claim that it’s possible to be deceived about tautologies, not a defense of that claim. But your post clarifies the situation quite a lot, so maybe I can rephrase my request: how would you defend the claim that it is possible (with any arbitrarily large number of bits) to formulate a world in which a contradiction is true?
I admit I for one don’t know how I would defend the contrary claim, that no such world could be formulated.
formulate a world in which a contradiction is true?
Probably heavily depends on the meaning of “formulate”, “contradiction” and “true”. For example, what’s the difference between “imagine” and “formulate”? In other words, with “any arbitrarily large number of bits” you can likely accurately “formulate” a model of the human brain/mind which imagines “a world in which a contradiction is true”.
I mean whatever Kawoomba meant, and so he’s free to tell me whether or not I’m asking for something impossible (though that would be a dangerous line for him to take).
In other words, with “any arbitrarily large number of bits” you can likely accurately “formulate” a model of the human brain/mind which imagines “a world in which a contradiction is true”.
Is your thought that unless we can (with certainty) rule out the possibility of such a model or rule out the possibility that this model represents a world in which a contradiction is true, then we can’t call ourselves certain about the law of non-contradiction? I grant that the falsity of that disjunct seems far from certain.
[in] a world in which a contradiction is true, then we can’t call ourselves certain about the law of non-contradiction?
I am not a mathematician, but to me the law of non-contradiction is something like a theorem in propositional calculus, unrelated to a particular world. A propositional calculus may or may not be a useful model, depends on the application, of course. But I suppose this is straying dangerously close to the discussion of instrumentalism, which led us nowhere last time we had it.
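For concreteness, here is the “theorem of the propositional calculus, unrelated to a particular world” reading as a short Lean 4 sketch (as the next reply points out, any such derivation presupposes the logic it is carried out in):

```lean
-- The law of non-contradiction as a theorem: from a proof of P ∧ ¬P
-- we can derive False, with no reference to any particular world.
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1
```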
It seems more like an axiom to me than a theorem: I know of no way to argue for it that doesn’t presuppose it. So I kind of read Aristotle for a living (don’t laugh), and he takes an interesting shot at arguing for the LNC: he seems to say it’s simply impossible to formulate a contradiction in thought, or even in speech. The sentence ‘this is a man and not a man’ just isn’t a genuine proposition.
That doesn’t seem super plausible, however interesting a strategy it is, and I don’t know of anything better.
he seems to say it’s simply impossible to formulate a contradiction in thought, or even in speech. The sentence ‘this is a man and not a man’ just isn’t a genuine proposition.
This seems like a version of “no true Scotsman”. Anyway, I don’t know much about Aristotle’s ideas, but what I do know, mostly physics-related, either is outright wrong or has been obsolete for the last 500 years. If this is any indication, his ideas on logic have probably long been superseded by first-order logic or something, and his ideas on language and meaning by something else reasonably modern. Maybe he is fun to read from the historical or literary perspective, I don’t know, but I doubt that it adds anything to one’s understanding of the world.
Well, his argument consists of more than the above assertion (he lays out a bunch of independent criteria for the expression of a thought, and argues that contradictions can never satisfy them). However, I can’t disagree with you on this: no one reads Aristotle to learn about physics or logic or biology or what-have-you. To say that modern versions are more powerful, more accurate, and more useful is a massive understatement. People still read Aristotle as a relevant ethical philosopher, though I have my doubts as to how useful he can be, given that he was an advocate for slavery, sexism, infanticide, etc. Not a good start for an ethicist.
On the other hand, almost no contemporary logicians think contradictions can be true, but no one I know of has an argument for this. It’s just a primitive.
You can say “Sherlock Holmes was correct in his deduction.” That does not rely on Sherlock Holmes actually existing, it’s just noting a relation between one concept (Sherlock Holmes) and another (a correct deduction).
This is true, but (at least if we’re channeling Descartes) the question is whether or not we can raise a doubt about the truth of the claim that something exists. Our ability to have this thought doesn’t prove that it’s true, but it may well close off any doubts.
Sure, it sounds pretty reasonable. I mean, it’s an elementary facet of logic, and there’s no way it’s wrong. But, are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering into any given state for no good reason at all due to quantum effects. Ridiculously unlikely, but not literally 0. Unless you believe with absolute certainty that it is impossible to have the subjective experience of believing that A implies not A in the same way you currently believe that A implies A, then you can’t say that you are literally 100% certain. You will feel 100% certain, but this is a very different thing than actually literally possessing 100% certainty. Are you certain, 100%, that you’re not brain damaged and wildly misinterpreting the entire field of logic? When you posit certainty, there can be literally no way that you could ever be wrong. Literally none. That’s an insanely hard thing to prove, and subjective experience cannot possibly get you there. You can’t be certain about what experiences are possible, and that puts some amount of uncertainty into literally everything else.
So by that logic I should assign a nonzero probability to ¬(A→A). And if something has nonzero probability, you should bet on it if the payout is sufficiently high. Would you bet any amount of money or utilons at any odds on this proposition? If not, then I don’t believe you truly believe 100% certainty is impossible. Also, 100% certainty can’t be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?
Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form though, that it is impossible—I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.
As to what probability you assign; I do not find it in the slightest improbable that you claim 100% certainty in full honesty. I do question, though, whether you would make literally any bet offered to you. Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain—you’d be indifferent on the bet, and you get free signaling from it.
Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain—you’d be indifferent on the bet, and you get free signaling from it.
Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited, I may be misusing the term “money-pump”). Let’s say you assign a 1/10^100 probability that your mind has a critical logic error of some sort, causing you to bound probabilities to the range [1/10^100, 1 - 1/10^100]. You can now be Pascal’s mugged if the payoff offered is greater than the amount asked for by a factor of at least 10^100. If you claim the probability is less than 1/10^100 due to a leverage penalty or any other reason, you are admitting that your brain is capable of being more certain than the aforementioned number (and such a scenario can be set up for any such number).
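As a rough numerical illustration of the mugging arithmetic (the price and payoff are made-up numbers, and the reply below disputes whether this naive expected-value step is the right decision theory):

```python
# If probabilities are clamped to at least 1/10^100, any offer whose payoff
# exceeds its price by more than a factor of 10^100 gets positive expected
# value under naive expected-utility maximization.
floor = 1e-100            # smallest probability the bounded reasoner can assign
price = 5.0               # utilons the mugger demands
payoff = price * 1e110    # promised utilons (a factor of 10^110 larger)

expected_gain = floor * payoff - (1 - floor) * price
print(expected_gain)      # ~5e10 > 0: the clamp alone forces a positive EV
```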
That’s not how decision theory works. The bounds on my probabilities don’t actually apply quite like that. When I’m making a decision, I can usefully talk about the expected utility of taking the bet, under the assumption that I have not made an error, and then multiply that by the odds of me not making an error, adding the final result to the expected utility of taking the bet given that I have made an error. This will give me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I’ve made a logic error; after all, given that my entire reasoning is wrong, I shouldn’t expect taking the bet to be any better or worse than not taking it. In shorter terms: EU(action) = EU(action & ¬error) + EU(action & error); also EU(action & error) = EU(anyOtherAction & error), meaning that when I compare any 2 actions I get EU(action) - EU(otherAction) = EU(action & ¬error) - EU(otherAction & ¬error). Even though my probability estimates are affected by the presence of an error factor, my decisions are not. On the surface this seems like an argument that the distinction is somehow trivial or pointless; however, the critical difference comes in the fact that while I cannot predict the nature of such an error ahead of time, I can potentially recover from it iff I assign >0 probability to it occurring. Otherwise I will never ever assign it anything other than 0, no matter how much evidence I see. In the incredibly improbable event that I am wrong, given extraordinary amounts of evidence I can be convinced of that fact. And that will cause all of my other probabilities to update, which will cause my decisions to change.
Your calculations aren’t quite right. You’re treating EU(action) as though it were a probability value (like P(action)). EU(action) would be more logically written E(utility | action), which itself is an integral over utility * P(utility | action) for utility∈(-∞,∞), which, due to linearity of * and integrals, does have all the normal identities, like E(utility | action) = E(utility | action, e) * P(e | action) + E(utility | action, ¬e) * P(¬e | action).
In this case, if you do expand that out, using p<<1 for the probability of an error, which is independent of your action, and assuming E(utility|action1,error) = E(utility|action2,error), you get E(utility | action) = E(utility | error) * p + E(utility | action, ¬error) * (1 - p). Or for the difference between two actions, EU1 - EU2 = (EU1' - EU2') * (1 - p) where EU1', EU2' are the expected utilities assuming no errors.
Anyway, this seems like a good model for “there’s a superintelligent demon messing with my head” kind of error scenarios, but not so much for the everyday kind of math errors. For example, if I work out in my head that 51 is a prime number, I would accept an even odds bet on “51 is prime”. But, if I knew I had made an error in the proof somewhere, it would be a better idea not to take the bet, since less than half of numbers near 50 are prime.
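A quick numerical check of the decomposition above, plus a count backing the “fewer than half of numbers near 50 are prime” aside; all the utility numbers are arbitrary, and the only structural assumption is that expected utility given an error is the same for either action:

```python
# E(u | action) = E(u | error) * p + E(u | action, no error) * (1 - p)
p = 1e-6                           # probability of a fatal reasoning error
eu_error = 0.0                     # E(u | error): identical for both actions
eu1_noerr, eu2_noerr = 10.0, 3.0   # E(u | action_i, no error), arbitrary values

eu1 = eu_error * p + eu1_noerr * (1 - p)
eu2 = eu_error * p + eu2_noerr * (1 - p)
# The error term cancels when comparing actions:
assert abs((eu1 - eu2) - (eu1_noerr - eu2_noerr) * (1 - p)) < 1e-9

# And the aside about 51: primes really are a minority near 50.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

nearby = range(41, 61)
print(sum(map(is_prime, nearby)), "of", len(nearby), "numbers near 50 are prime")  # 5 of 20
```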
Right, I didn’t quite work all the math out precisely, but at least the conclusion was correct. This model is, as you say, exclusively for fatal logic errors; the sorts where the law of non-contradiction doesn’t hold, or something equally unthinkable, such that everything you thought you knew is invalidated. It does not apply in the case of normal math errors for less obvious conclusions (well, it does, but your expected utility given no errors of this class still has to account for errors of other classes, where you can still make other predictions).
In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited, I may be misusing the term “money-pump”)
The usage of “money-pump” is correct.
(Do note, however, that using 1 and 0 as probabilities when you in fact do not have that much certainty also implies the possibility for exploitation, and unlike the money pump scenario you do not even have the opportunity to learn from the first exploitation and self correct.)
A lot of this is a framing problem. Remember that anything we’re discussing here is in human terms, not (for example) raw Universal Turing Machine tape-streams with measurable Kolmogorov complexities. So when you say “what probability do you assign to me being able to assign 100% probability”, you’re abstracting a LOT of little details that otherwise need to be accounted for.
I.e., if I’m computing probabilities as a set of propositions, each of which is a computable function that might predict the universe and a probability that I assign to whether it accurately does so, and in all of those computable functions my semantic representation of ‘probability’ is encoded as log odds with finite precision, then your question translates into a function which traverses all of my possible worlds, looks to see whether the probability that refers to your self-assigned probability is encoded as the number ‘INFINITY’, multiplies that by the probability I assigned to that world being the correct one, and then tabulates.
Since “encoded as log odds with finite precision” and “encoded as the number ‘INFINITY’” are not simultaneously possible given certain encoding schemes, this really resolves itself to “do I encode floating-point numbers using a mantissa notation or other scheme that allows for values like +INF/-INF/+NaN/-NaN?”
Which sounds NOTHING like the question you asked, but the answers do happen to perfectly correlate (to within the precision allowed by the language we’re using to communicate right now).
Did that make sense?
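A minimal sketch of the log-odds point above; the encoding is hypothetical, just to show where “probability 1” would have to live in such a representation:

```python
import math

def log_odds(p):
    """Log-odds (logit) encoding of a probability."""
    return math.log(p / (1.0 - p))

print(log_odds(0.5))       # 0.0
print(log_odds(0.999999))  # ~13.8: large but finite
# log_odds(1.0) would mean dividing by zero, i.e. +infinity: a finite-precision
# log-odds scheme with no +INF value simply has no encoding for certainty,
# whereas an IEEE-754 float can represent math.inf.
```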
Also, 100% certainty can’t be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?
When I say 100% certainty is impossible, I mean that there are no cases where assigning 100% to something is correct, but I have less than 100% confidence in this claim. It’s similar to the claim that it’s impossible to travel faster than the speed of light.
If any agent within a system were able to assign a 1 or 0 probability to any belief about that system being true, that would mean that the map-territory divide would have been broken.
However, since that agent can never rule out being mistaken about its own ontology or its reasoning mechanism, following an invisible (if vanishingly unlikely) internal failure, it can never gain final certainty about any feature of the territory, although it can get arbitrarily close.
What evidence convinces you now that something exists? What would the world look like if it were not the case that something existed?
Imagine yourself as a brain in a jar, without the brain and the jar. Would you remain convinced that something existed if confronted with a world that had evidence against that proposition?