Well, it was first introduced into the philosophical literature by Nozick explicitly as a challenge to the principle of dominance in traditional decision theories. So it’s probably about decision theory, at least a little bit.
From the context, I would presume “about” in the sense of “this is why it’s fascinating to the people who make a big deal about it”. (I realise the stated reason for LW interest is the scenario of an AI whose source code is known to Omega having to make a decision, but the people being fascinated are humans.)
Given that your source code is known to Omega, your decision cannot be ‘made’.
Yes it can.
Perhaps it would sound better: Once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made. A Customer Service Representative who follows company policy regardless of the outcome isn’t making decisions; he’s abdicating the decision-making to someone else.
It’s probable that free will doesn’t exist, in which case decisions don’t exist and agenthood is an illusion; that would be consistent with the line of thinking that has produced the most accurate observations to date. I will continue to act as though I am an agent, because on the off chance that I do have a choice, that is the choice I want to make.
Really?
Oddly enough, those are about programming. There’s nothing in there that is advice to robots about what decisions to make.
It is all about robots—deterministic machines—performing activities that everyone unproblematically calls “making decisions”. According to what you mean by “decision”, they are inherently incapable of doing any such thing. Robots, in your view, cannot be “agents”; yet a similar Google search shows that no one who works with robots has any problem describing them as agents.
So, what do you mean by “decision” and “agenthood”? You seem to mean something ontologically primitive that no purely material entity can have; and so you conclude that if materialism is true, nothing at all has these things. Is that your view?
It would be better to say that materialism being true requires determinism being true, in which case “decisions” do not have the properties we’re disputing.
Still not true. The prediction capability of other agents in the same universe does not make the decisions made by an agent into not-decisions. (This is a common confusion that often leads to bad decision-theoretic claims.)
If free will is not the case, there are no agents (anymore?)
If it is the case that the universe in the past might lead to an agent making one of two or more decisions, then free will is the case and perfect prediction is impossible; if it is not the case that an entity can take any one of two or more actions, then free will is not the case and perfect prediction is possible.
Note that it is possible for free will to exist but for me to not be one of the agents. Sometimes I lose sleep over that.
A starting point.
The scale does not decide the weight of the load.
A sufficiently intelligent and informed AI existing in the orbit of Alpha Centauri but in no way interacting with any other agent (in the present or future) does not by its very existence remove the capability of every agent in the galaxy to make decisions. That would be a ridiculous way to carve reality.
The characteristic of the universe that allows or prevents the existence of such an AI is what is being carved.
Can you clarify what you mean by “agent”?
One of the necessary properties of an agent is that it makes decisions.
I infer from context that free will is necessary to make decisions on your model… confirm?
Yeah, the making of a decision (as opposed to a calculation) and the influence of free will are coincident.
OK, thanks for clarifying your position.
A couple of further assumptions...
1) I assume that what’s actually necessary for “agency” on your account is that I’m the sort of system whose actions cannot be deterministically predicted, not merely that I have not been predicted… creating Predictor doesn’t eliminate my “agency,” it merely demonstrates that I never had any such thing, and destroying Predictor doesn’t somehow provide me with or restore my “agency”.
2) I assume that true randomness doesn’t suffice for “agency” on your account… that Schrödinger’s Cat doesn’t involve an “agent” who “decides” to do anything in particular, even though it can’t be deterministically predicted.
Yes?
So, OK. Assuming all of that:
Suppose Sam performs three actions: (A1) climbs to the roof of a high building, (A2) steps off the edge, and (A3) accelerates toward the ground. Suppose further that A1-A3 were predictable, and therefore on your account not “decisions.”
Is there any useful distinction to be made between A1, A2, and A3?
For example, predicting A3 only requires a knowledge of ballistics, whereas predicting A1 and A2 requires more than that. Would you classify them differently on those grounds?
If I was classifying things based on how well a given predictor could predict them, I’d give all three events numbers within a range; I suspect that A1 and A2 would be less predictable for most predictors (but more predictable for the class of predictors which can see a short distance into the future, since they happen sooner).
If I was classifying things based on the upper limit of how accurately they could be predicted, I’d give them all the same value, but I would give a different value to an action which I consider a decision, or to the outcome of a decision which has not yet been made.
2: I don’t deny the possibility that there is an agent involved in anything nondeterministic; I think it is very unlikely that unstable atoms are (or contain) agents, but the world would probably look identical to me either way. It’s also possible that things which appear deterministic are in fact determined by agents with a value function entirely foreign to me; again, the world would look the same to me if there was one or more “gravity agents” that pulled everything toward everything. That postulate has a prior so low that I don’t think ‘epsilon’ adequately describes it, and I have no reports of the evidence which would support it but not the standard theory of gravitation (Wingardium Leviosa working, for example).
It’s not possible to confirm an infinite number of accurate predictions, and any event which has happened as predicted only a finite number of times (e.g. a number of times equal to the age of the universe in Planck time) is not proof that it can always be accurately predicted.
Just to be sure, I do not believe that this dragon in my garage exists. I also think it more likely that I am not a magician with the power to do something that matter in general does not do. It’s just that the expected utility of believing that the future is mutable (that I can affect things) is higher than the expected utility of believing that the state of the universe at a given time is time-invariant, regardless of the probability distribution between the two possibilities.
Thanks for the clarification.
I wasn’t asking whether your probability of an agent being involved in, say, unstable atom decay was zero. I was just trying to confirm that the mere fact of indeterminacy did not suffice to earn something the label “agent” on your account. That is, confirm that an agent being involved in unstable atom decay was not a certainty on your account.
Which I guess you’ve confirmed. Thanks.
I agree that infinite confidence in a prediction is impossible.
Did you mean that there was an upper bound less than 1 on the proper confidence of any nontrivial prediction? That’s contrary to materialism, isn’t it?
Yes. Trivial ones, too. And no, not as far as I can tell, merely consistent with the existence of error rates.
For that matter, I would also say that infinite confidence in a non-prediction is impossible. That is, I’m pretty damned sure I have toenails, but my confidence that I have toenails is not infinite.
What do you suppose that upper bound is?
If I generate a statement at the same confidence level as “I have toenails” every day for a century, I’d be unsurprised to get a few wrong just because my brain glitches every once in a while, I’d be surprised if I got as many as ten wrong, and I’d be only slightly surprised to get them all right.
So call that .99998 confidence. Which in practice I refer to as certainty. Of course, better-designed brains are capable of higher confidence than that.
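To make the arithmetic behind that figure explicit, here is a rough sketch (the “one statement per day for a century” framing is from the comment above; the error counts are just the range mentioned, so the numbers are illustrative):

```python
# Rough calibration check: one toenail-grade statement per day for a century,
# with somewhere between "a few" and ten of them wrong.
days = 100 * 365.25  # statements over a century
for errors in (1, 2, 5, 10):
    confidence = 1 - errors / days
    print(f"{errors:2d} wrong out of {days:.0f} -> implied confidence ~ {confidence:.6f}")
# 1 -> ~0.99997, 2 -> ~0.99995, 5 -> ~0.99986, 10 -> ~0.99973
```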
What’s your confidence that you have toenails?
Is there anything that anyone can be more certain about than your belief that you have toenails, or is .99998 the upper bound for confidence in any prediction?
My confidence that I have toenails is higher than my confidence that there is no accurate claim of a confidence of exactly 1.
Not at all. For example, as I already said, better-designed brains are capable of higher confidence than that.
There may also be other classes of statements for which even my brain is capable of higher confidence, though off-hand I’m not sure what they might be… perception and recognition of concrete familiar objects is pretty basic.
Thinking about it now, I suppose the implication of ownership adds some unnecessary complexity and correspondingly lowers MTBF; my confidence in “there are toenails on that foot” might be higher… maybe even as much as an order of magnitude higher. Then again, maybe not… we’re really playing down at the level of organic brain failure here, so the semantic content may not matter at all.
(nods) Mine, too.
What’s your confidence that you have toenails?
You can get pretty darn high confidences with negation and conjunctions. I can say with great confidence that I am not a 15 story tall Triceratops with glowing red eyes, and I can say with even greater confidence that I am not a 15 story tall Triceratops with glowing red eyes who is active in the feminist movement.
(Incidentally, now you have me wondering how “Linda is a Triceratops and a bank teller” would work in the classic conjunction fallacy example.)
So, as a matter of pure logic, you’re of course correct… but in this particular context, I’m not sure. As I say, once I get down to the 5-9s level, I’m really talking about brain failures, and those can affect the machinery that evaluates negations and conjunctions as readily as they can anything else (perhaps more so, I dunno).
If I made a statement in which I have as much confidence as I do in “I am not a 15 story tall Triceratops with glowing red eyes” every day for a hundred years, would I expect to get them all correct? I guess so, yes. So, agreed, it’s higher than .99998. A thousand years? Geez. No, I’d expect to screw up at least once. So, OK, call it .999999 confidence instead for that class.
What about “I am not a 15 story tall Triceratops with glowing red eyes who is active in the feminist movement”? Yeesh. I dunno. I don’t think I have .9999999 confidence in tautologies.
Within noise of 1. I couldn’t list things that I am that certain of for long enough to expect one of them to be wrong, and I’m bad in general at dealing with probabilities outside of [0.05, 0.95].
In one of the ancestors, I asked if there was an upper bound less than 1 on the maximum permissible accurate confidence in something (e.g. some number 0 < x < 1 such that confidence always fell into either (1-x, x) or [1-x, x]).
I’m happy to say “within noise of 1” (aka “one minus epsilon”) is the upper limit for maximum permissible accurate confidence. Does that count as an answer to your question?
What you said is an answer, but the manner in which you said it indicates that it isn’t the answer you intend.
I’m asking if there is a lower bound above zero for epsilon, and you just said yes, but you didn’t put a number on it.
I didn’t, it’s true.
I don’t know any way to put a number to it. For any given mind, I expect there’s an upper limit to how confident that mind can be about anything, but that upper limit increases with how well-designed the mind is; I have no idea what the upper limit is to how well-designed a mind can be, and I don’t know how to estimate the level of confidence an unspecified mind can have in that sort of proposition (though as at least one data point, a mind basically as fallible as mine but implementing error-checking algorithms can increase that maximum by many orders of magnitude).
I’d initially assumed that meant I couldn’t answer your question, but when you gave me “within noise of 1” as an answer for your confidence about toenails, that suggested you considered it an acceptable answer to questions about confidence levels; it was an accurate answer to your question about confidence levels as well, so I gave it.
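A minimal sketch of the error-checking point above, assuming (optimistically) that each check fails independently of the others; correlated failure modes would make this an overestimate:

```python
# Independent re-checks multiply down the chance of an undetected error.
# p is a per-check error rate roughly at the ".99998" level discussed above.
p = 2e-5
for checks in (1, 2, 3):
    undetected = p ** checks  # every check has to fail for the error to survive
    print(f"{checks} independent check(s): achievable confidence ~ {1 - undetected}")
```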
So… you wouldn’t be able to tell the difference between an epsilon > 0 and an epsilon ≥ 0?
I’m not sure how I could tell the difference between two upper bounds of confidence at all. I mean, it’s not like I test them in practice. I similarly can’t tell whether the maximum speed of my car is 120 mph or 150 mph; I’ve never driven above 110.
But, to answer your question… nope, I wouldn’t be able to tell.
So… hrm.
How do I tell whether something is a decision or not?
By the causal chain that goes into it. Does it involve modeling the problem and considering values and things like that?
So if a programmable thermostat turns the heat on when the temperature drops below 72 degrees F, whether that’s a decision or not depends on whether its internal structure is a model of the “does the heat go on?” problem, whether its set-point is a value to consider, and so forth. Perhaps reasonable people can disagree on that, and perhaps they can’t, but in any case if I turn the heat on when the temperature drops below 72 degrees F most reasonable people would agree that my brain has models and values and so forth, and therefore that I have made a decision.
(nods) OK, that’s fair. I can live with that.
The thermostat doesn’t model the problem. The engineer who designed the thermostat modeled the problem, and the thermostat’s gauge is a physical manifestation of the engineer’s model.
It’s in the same sense that I don’t decide to be hungry—I just am.
ETA: Dangit, I could use a sandwich.
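To make the thermostat point concrete, here is a minimal sketch of what the device itself computes; the 72 °F set-point is just the number used in the comments above, and the function name is hypothetical:

```python
# The thermostat only compares a reading to a stored set-point. Everything that
# made 72 a sensible number -- heat loss, comfort, cost -- was modeled by the
# engineer and by whoever chose the set-point, not by this comparison.
def heat_should_be_on(temperature_f: float, set_point_f: float = 72.0) -> bool:
    """Return True when the measured temperature is below the set-point."""
    return temperature_f < set_point_f

print(heat_should_be_on(70.5))  # True: heat goes on
print(heat_should_be_on(73.0))  # False: heat stays off
```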
Combining that assertion with your earlier one, I get the claim that the thermostat’s turning the heat on is a decision, since the causal chain that goes into it involves modeling the problem, but it isn’t the thermostat’s decision, but rather the designer’s decision.
Or, well, partially the designer’s.
Presumably, since I set the thermostat’s set-point, it’s similarly not the thermostat’s values which the causal chain involves, but mine.
So it’s a decision being made collectively by me and the engineer, I guess.
Perhaps some other agents, depending on what “things like that” subsumes.
This seems like an odd way to talk about the situation, but not a fatally odd way.
It’s still more about magic and time-reversed causation than it is about deciding which box to take.
Particularly since it rewards the reflectively inconsistent agent that, at the time the money was placed, was going to one-box when it had the chance, but at the time the decision was made two-boxed. (At time A, when Omega makes the prediction, it is the case that the highest-performing decision model will at time B select one box; at time B, the highest-performing model selects both boxes.)
You’re effectively calling the concept of determinism “magic”, arguing that merely being able to calculate the outcome of a decision process is “magical” or requires time-reversal.
Look, I have your source code. I can see what you’ll decide, because I have your source code and know how you decide. Where’s the magic in that? Start thinking like a programmer. There’s nothing magical when I look at the source of a method and say “this will always return ‘true’ under such-and-such conditions”.
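As a concrete (and entirely hypothetical) illustration of the “just read the source” point, here is a decision procedure whose output anyone holding the source can predict without running it:

```python
# A deterministic "decision" procedure. Reading the source is enough to know
# what it will return for any input -- no magic, no time-reversed causation.
def will_one_box(estimated_accuracy: float) -> bool:
    """One-box whenever the predictor is believed to beat the break-even accuracy."""
    return estimated_accuracy > 0.5005  # break-even worked out later in the thread

# Omega, holding this source, knows the agent one-boxes for any estimate above 0.5005.
print(will_one_box(0.99))  # True
print(will_one_box(0.50))  # False
```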
What is the physical analogue of looking at the source code of physics? You, the programmer, can assume that no bit rot will occur during the period that the program is running, and that no other program will engage in memory access violations, but the computer cannot.
The compiler will (barring interference) always produce the same executable from the same source, but it can’t use that fact to shortcut compiling the code; even if my decision is deterministic, there is no way within the universe to, without some loss of precision or accuracy, determine in general what outcome someone will have before the universe does. (Special case: people who have already considered the problem and already decided to use cached decisions)
Sure. So what? If Omega is correct just 99.99999999999% of the time, how does that change in practice whether you should one-box or two-box?
Why? Knowledge of the problem is just another sensory input. Pass it through the same deterministic brain in the same state and you get the same result. “I’ve just been informed of this type of problem, let me think about it right now” and “I’ve just been informed that I’m now involved in this type of problem I have already considered, I’ll use my predetermined decision.” are both equally deterministic.
The latter seems to you more predictable, because as a human being you’re accustomed to people making up their minds and following through with predetermined decisions. As a programmer, I’ll tell you it’s equally deterministic whether you multiply 3*5 every time, or if you only multiply it once, store it in a variable and then return it when asked about the product of 3*5...
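The cached-versus-recomputed point, as code; a small sketch (functools.lru_cache is just one convenient way to memoize):

```python
from functools import lru_cache

def decide_fresh(problem: str) -> str:
    # "Let me think about it right now": recompute the answer on every call.
    return "one-box" if problem == "newcomb" else "undecided"

@lru_cache(maxsize=None)
def decide_cached(problem: str) -> str:
    # "Use my predetermined decision": compute once, then return the stored result.
    return decide_fresh(problem)

# Same deterministic mapping from input to output either way; caching changes
# when the work happens, not what the answer is.
assert decide_fresh("newcomb") == decide_cached("newcomb") == "one-box"
```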
If I estimate a probability of less than ~99.9001% ($1,000,000/$1,001,000) that Omega will be correct in this specific instance, I one-box; otherwise I two-box. With a prior of 13 nines, getting down to three would require ten decades of evidence; if I shared any feature with one person who Omega wrongly identified as a one-boxer but not with the first 10^10 people who Omega correctly identified as a two-boxer, I think that would be strong enough evidence.
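The “thirteen nines down to three” step, in odds form; a sketch of the arithmetic being gestured at, taking “decades” to mean factors of ten (the prior and threshold are the commenter’s, the odds-form update is just standard Bayes):

```python
import math

prior = 1 - 1e-13   # "a prior of 13 nines" that Omega calls this instance correctly
target = 1 - 1e-3   # the ~99.9% threshold discussed above

prior_odds = prior / (1 - prior)     # about 10^13 : 1
target_odds = target / (1 - target)  # about 10^3 : 1

# In odds form, evidence multiplies the odds, so the required likelihood ratio is:
ratio = prior_odds / target_odds
print(f"evidence must shift the odds by roughly a factor of 10^{math.log10(ratio):.0f}")
```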
Unless you are doing the math on a Pentium processor...
??? Your calculation seems to be trying to divide the wrong things.
One boxing gives you $1,000,000 if Omega is right, gives you $0 if Omega is wrong.
Two boxing gives you $1,001,000 if Omega is wrong, gives you $1,000 if Omega is right.
So with Omega having probability X of being right:
- the estimated payoff for one-boxing is X * $1,000,000
- the estimated payoff for two-boxing is (1 - X) * $1,001,000 + X * $1,000.
One-boxing is therefore superior (assuming linear utility of money) when X * $1,000,000 > (1 - X) * $1,001,000 + X * $1,000.
One-boxing is therefore superior (always assuming linear utility of money) when Omega has a higher than 50.05% likelihood of being right, i.e. when X > 0.5005.
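A quick check of the arithmetic above (linear utility of money assumed, as stated; X is Omega’s probability of being right):

```python
# Expected values for one-boxing vs. two-boxing as a function of Omega's accuracy.
def ev_one_box(x: float) -> float:
    return x * 1_000_000                    # $1,000,000 if Omega was right, else $0

def ev_two_box(x: float) -> float:
    return (1 - x) * 1_001_000 + x * 1_000  # $1,001,000 if wrong, $1,000 if right

# Break-even: x * 1,000,000 = (1 - x) * 1,001,000 + x * 1,000  =>  x = 0.5005.
for x in (0.50, 0.5005, 0.51, 0.999):
    print(f"x = {x:.4f}: one-box EV = {ev_one_box(x):,.0f}, two-box EV = {ev_two_box(x):,.0f}")
```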
Yeah, looking at it as a $500,000 bet at almost even money, a break-even of about 50% sounds right.