A couple of further assumptions...
1) I assume that what’s actually necessary for “agency” on your account is that I’m the sort of system whose actions cannot be deterministically predicted, not merely that I have not been predicted… creating Predictor doesn’t eliminate my “agency,” it merely demonstrates that I never had any such thing, and destroying Predictor doesn’t somehow provide me with or restore my “agency”.
2) I assume that true randomness doesn’t suffice for “agency” on your account… that Schrodinger’s Cat doesn’t involve an “agent” who “decides” to do anything in particular, even though it can’t be deterministically predicted.
Yes?
So, OK. Assuming all of that:
Suppose Sam performs three actions: (A1) climbs to the roof of a high building, (A2) steps off the edge, and (A3) accelerates toward the ground. Suppose further that A1-A3 were predictable, and therefore on your account not “decisions.”
Is there any useful distinction to be made between A1, A2, and A3?
For example, predicting A3 only requires a knowledge of ballistics, whereas predicting A1 and A2 requires more than that. Would you classify them differently on those grounds?
If I was classifying things based on how well a given predictor could predict them, I’d give all three events numbers within a range; I suspect that A1 and A2 would be less predictable for most predictors (but more predictable for the class of predictors which can see a short distance into the future, since they happen sooner).
If I was classifying things based on the upper limit of how accurately they could be predicted, I’d give them all the same value, but I would give a different value to an action which I consider a decision, or to the outcome of a decision which has not yet been made.
2: I don’t deny the possibility that there is an agent involved in anything nondeterministic; I think it is very unlikely that unstable atoms are (or contain) agents, but the world would probably look identical to me either way. It’s also possible that things which appear deterministic are in fact determined by agents with a value function entirely foreign to me; again, the world would look the same to me if there were one or more “gravity agents” that pulled everything toward everything. That postulate has a prior so low that I don’t think ‘epsilon’ adequately describes it, and I have no reports of evidence that would support it but not the standard theory of gravitation (Wingardium Leviosa working, for example).
It’s not possible to confirm an infinite number of accurate predictions, and any event which has happened as predicted only a finite number of times (e.g. a number of times equal to the age of the universe in Planck time) is not proof that it can always be accurately predicted.
Just to be sure: I do not believe that this dragon in my garage exists. I also think it more likely that I am not a magician with the power to do something that matter in general does not do. It’s just that the expected utility of believing that the future is mutable (that I can affect things) is higher than the expected utility of believing that the state of the universe at a given time is time-invariant, regardless of the probability distribution between the two possibilities.
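To put a rough number on “the age of the universe in Planck time” mentioned above, and on how far short of 1 a finite run of confirmations leaves you, here is a minimal sketch; it uses Laplace’s rule of succession as one way (not the only way) to formalize the point, and the constants are standard approximate values.

```python
# Back-of-the-envelope for the finite-confirmation point above.
# Even N confirmations with zero failures don't license confidence 1;
# Laplace's rule of succession, for example, gives (N + 1) / (N + 2).

AGE_OF_UNIVERSE_S = 4.35e17   # ~13.8 billion years, in seconds
PLANCK_TIME_S     = 5.39e-44  # one Planck time, in seconds

N = AGE_OF_UNIVERSE_S / PLANCK_TIME_S  # ~8e60 Planck times
shortfall = 1 / (N + 2)                # how far below 1 the confidence stays

print(f"N ~ {N:.1e} confirmations")         # ~8.1e+60
print(f"confidence ~ 1 - {shortfall:.1e}")  # ~1 - 1.2e-61, still short of 1
```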
Thanks for the clarification.
I wasn’t asking whether your probability of an agent being involved in, say, unstable atom decay was zero. I was just trying to confirm that the mere fact of indeterminacy did not suffice to earn something the label “agent” on your account. That is, confirm that an agent being involved in unstable atom decay was not a certainty on your account.
Which I guess you’ve confirmed. Thanks.
I agree that infinite confidence in a prediction is impossible.
Did you mean that there was an upper bound less than 1 on the proper confidence of any nontrivial prediction? That’s contrary to materialism, isn’t it?
Yes. Trivial ones, too. And no, not as far as I can tell, merely consistent with the existence of error rates.
For that matter, I would also say that infinite confidence in a non-prediction is impossible. That is, I’m pretty damned sure I have toenails, but my confidence that I have toenails is not infinite.
What do you suppose that upper bound is?
If I generate a statement at the same confidence level as “I have toenails” every day for a century, I’d be unsurprised to get a few wrong just because my brain glitches every once in a while, I’d be surprised if I got as many as ten wrong, and I’d be only slightly surprised to get them all right.
So call that .99998 confidence. Which in practice I refer to as certainty. Of course, better-designed brains are capable of higher confidence than that.
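As a rough sanity check on that figure, here is a minimal sketch of how “a few errors over a century of daily statements” translates into a per-statement confidence; the specific error counts are illustrative assumptions, not anyone’s actual numbers.

```python
# Minimal sketch: translate "k errors expected over a century of daily
# statements" into an implied per-statement confidence.  The error
# counts below are illustrative assumptions.

STATEMENTS = 365.25 * 100  # one statement per day for a century, ~36,525

for expected_errors in (1, 3, 10):
    confidence = 1 - expected_errors / STATEMENTS
    print(f"{expected_errors:>2} expected errors -> confidence ~ {confidence:.6f}")

# One expected error per century of daily statements comes out around
# 0.99997, the same ballpark as the .99998 quoted above.
```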
What’s your confidence that you have toenails?
Is there anything that anyone can be more certain about than your belief that you have toenails, or is .99998 the upper bound for confidence in any prediction?
I am more confident that I have toenails than I am that there is no accurate claim of a confidence of exactly 1.
Not at all. For example, as I already said, better-designed brains are capable of higher confidence than that.
There may also be other classes of statements for which even my brain is capable of higher confidence, though off-hand I’m not sure what they might be… perception and recognition of concrete familiar objects is pretty basic.
Thinking about it now, I suppose the implication of ownership adds some unnecessary complexity and correspondingly lowers the MTBF (mean time between failures); my confidence in “there are toenails on that foot” might be higher… maybe even as much as an order of magnitude higher. Then again, maybe not… we’re really playing down at the level of organic brain failure here, so the semantic content may not matter at all.
(nods) Mine, too.
What’s your confidence that you have toenails?
You can get pretty darn high confidences with negation and conjunctions. I can say with great confidence that I am not a 15 story tall Triceratops with glowing red eyes, and I can say with even greater confidence that I am not a 15 story tall Triceratops with glowing red eyes who is active in the feminist movement.
(Incidentally, now you have me wondering how “Linda is a Triceratops and a bank teller” would work in the classic conjunction fallacy example.)
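For what it’s worth, the pure-logic point can be illustrated with a minimal sketch; the probabilities below are invented for illustration. A conjunction is never more probable than its least probable conjunct, so denying the conjunction warrants at least as much confidence as denying the strangest single conjunct.

```python
# Sketch with invented probabilities: P(A and B) <= min(P(A), P(B)),
# so confidence in "not (A and B)" is at least confidence in "not A".

p_triceratops = 1e-12  # P("I am a 15-story Triceratops with glowing red eyes") -- made up
p_feminist    = 0.1    # P("I am active in the feminist movement") -- made up

p_both_at_most = min(p_triceratops, p_feminist)  # upper bound on the conjunction

print("confidence in 'not a Triceratops':                 ", 1 - p_triceratops)
print("confidence in 'not a feminist Triceratops': at least", 1 - p_both_at_most)
```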
So, as a matter of pure logic, you’re of course correct… but in this particular context, I’m not sure. As I say, once I get down to the five-nines level, I’m really talking about brain failures, and those can affect the machinery that evaluates negations and conjunctions as readily as they can anything else (perhaps more so, I dunno).
If I made a statement in which I have as much confidence as I do in “I am not a 15 story tall Triceratops with glowing red eyes” every day for a hundred years, would I expect to get them all correct? I guess so, yes. So, agreed, it’s higher than .99998. A thousand years? Geez. No, I’d expect to screw up at least once. So, OK, call it .999999 confidence instead for that class.
What about “I am not a 15 story tall Triceratops with glowing red eyes who is active in the feminist movement”? Yeesh. I dunno. I don’t think I have .9999999 confidence in tautologies.
Within noise of 1. I couldn’t list things that I am that certain of for long enough to expect one of them to be wrong, and I’m bad in general at dealing with probabilities outside of [0.05, 0.95].
In one of the ancestors, I asked whether there was an upper bound less than 1 on the maximum permissible accurate confidence in something (e.g. some number 0 < x < 1 such that confidence always falls into either (1-x, x) or [1-x, x]).
I’m happy to say “within noise of 1” (aka “one minus epsilon”) is the upper limit for maximum permissible accurate confidence. Does that count as an answer to your question?
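To make the question being answered concrete, here is one possible formalization; the function name and numbers are hypothetical, my own sketch rather than anything stated in the thread. An upper bound x on permissible confidence would force every honestly reported probability into the band [1 - x, x].

```python
# Hypothetical formalization (names and numbers are mine): is there an
# x with 0 < x < 1 such that any permissible confidence lies in [1 - x, x]?
# "Within noise of 1" says x isn't a fixed constant; it is set by the
# reporting mind's own error rate.

def clamp_confidence(p: float, x: float) -> float:
    """Clamp a raw confidence p into the permissible band [1 - x, x]."""
    assert 0.5 < x < 1.0, "x is the upper bound on permissible confidence"
    return max(1.0 - x, min(x, p))

print(clamp_confidence(0.9999999, x=0.999999))  # -> 0.999999
print(clamp_confidence(0.0000001, x=0.999999))  # -> about 1e-06
```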
What you said is an answer, but the manner in which you said it indicates that it isn’t the answer you intend.
I’m asking if there is a lower bound above zero for epsilon, and you just said yes, but you didn’t put a number on it.
I didn’t, it’s true.
I don’t know any way to put a number to it. For any given mind, I expect there’s an upper limit to how confident that mind can be about anything, but that upper limit increases with how well-designed the mind is; I have no idea what the upper limit is to how well-designed a mind can be, and I don’t know how to estimate the level of confidence an unspecified mind can have in that sort of proposition. (Though as at least one data point, a mind basically as fallible as mine, but implementing error-checking algorithms, can increase that maximum by many orders of magnitude.)
I’d initially assumed that meant I couldn’t answer your question, but when you gave me “within noise of 1” as your confidence about toenails, that suggested you considered it an acceptable answer to questions about confidence levels, and it was an accurate answer to your question as well, so I gave it.
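The error-checking point can be made quantitative with a toy model, which is an assumption of mine rather than something stated in the thread: if each of k roughly independent checks fails with probability e, the residual error rate is about e^k, so each added check buys several orders of magnitude of maximum confidence.

```python
# Toy model (my assumption, not from the thread): a judgment with raw
# error rate e is re-checked by k roughly independent checks; all of
# them must fail together, so the residual error rate is about e**k.

e = 2e-5  # raw error rate corresponding to ~.99998 confidence

for k in (1, 2, 3):
    residual = e ** k
    print(f"{k} independent check(s): max confidence ~ {1 - residual}")

# 1 check : ~0.99998
# 2 checks: ~0.9999999996
# 3 checks: ~0.999999999999992
```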
So… you wouldn’t be able to tell the difference between an epsilon > 0 and an epsilon >= 0?
I’m not sure how I could tell the difference between two upper bounds of confidence at all. I mean, it’s not like I test them in practice. I similarly can’t tell whether the maximum speed of my car is 120 mph or 150 mph; I’ve never driven above 110.
But, to answer your question… nope, I wouldn’t be able to tell.

OK, thanks for clarifying your position.