Of course! Since all the choices of all the actors are predetermined, so is the future. So what exactly would be the “purpose” of acting as if the future were not already determined and we could choose an optimising function based on the possible consequences of different actions?
Since the consequences are determined by your algorithm, whatever your algorithm will do, will actually happen. Thus, the algorithm can contemplate what would be the consequences of alternative choices and make the choice it likes most. The consideration of alternatives is part of the decision-making algorithm, which gives it the property of consistently picking goal-optimizing decisions. Only these goal-optimizing decisions actually get made, but the process of considering alternatives is how they get computed.
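As a toy sketch (the option names and payoff numbers are invented here, not anything specified above), the algorithm being described is just a deterministic argmax over simulated consequences: it is fully determined, yet computing its one actual output requires it to contemplate every alternative.

```python
# A fully deterministic chooser still "considers" every alternative,
# because that consideration is how its one actual output gets computed.

def consequences(option):
    # Toy deterministic world-model: each option's outcome is fixed.
    outcomes = {"fettucini": 7, "eggplant": 9, "linguine": 4}
    return outcomes[option]

def choose(options):
    # Deterministic, yet it evaluates every counterfactual branch.
    return max(options, key=consequences)

print(choose(["fettucini", "eggplant", "linguine"]))  # eggplant
```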
Sure. So consequentialism is the name for the process that happens in every programmed entity, making it useless as a way to distinguish between different approaches.
In a deterministic universe, the future is logically implied by the present—but you’re in the present. The future isn’t fated—if, counterfactually, you did something else, then the laws of physics would imply very different events as a consequence—and it isn’t predictable—even ignoring computational limits, if you make any error, even on an unmeasurable level, in guessing the current state, your prediction will quickly diverge from reality—it’s just logically consistent.
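The unpredictability point can be illustrated with a standard toy system, the logistic map at r = 4, a textbook example of sensitive dependence on initial conditions (the particular numbers below are only for illustration):

```python
# Two estimates of "the present" that differ by one part in 10^12
# agree for a while, then cease to agree about the future.

def logistic(x, steps, r=4.0):
    # Iterate the chaotic logistic map x -> r * x * (1 - x).
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

true_now, guess = 0.3, 0.3 + 1e-12
print(abs(logistic(true_now, 10) - logistic(guess, 10)))  # still tiny
# After 60 steps the error has been amplified roughly 2^60-fold,
# saturating at the size of the system itself:
print(abs(logistic(true_now, 60) - logistic(guess, 60)))
```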
How could it happen? Each component of the system is programmed to react in a predetermined way to the inputs it receives from the rest of the system. The inputs are predetermined, as is the processing algorithm. How can you or I do anything that we have not been preprogrammed to do?
Consider an isolated system with no biological agents involved. It may contain preprogrammed computers. Would you or would you not expect the future evolution of the system to be completely determined? If you would expect its future to be completely determined, why would things change when the system, such as ours, contains biological agents? If you do not expect the future of the system to be completely determined, why not?
I said “counterfactual”. Let me use an archetypal example of a free-will hypothetical and query your response:
Suppose that there are two worlds, A and A’, which are at a certain time indistinguishable in every measurable way. They differ, however, and differ most strongly in the nature of a particular person, Alice, who lives in A versus the nature of her analogue in A’, whom we shall call Alice’ for convenience.
In the two worlds at the time at which A and A’ are indistinguishable, Alice and Alice’ are entering a restaurant. They are greeted by a server, seated, and given menus, and the attention of both Alice and Alice’ rapidly settles upon two items: the fettucini alfredo and the eggplant parmesan. As it happens, the previously-indistinguishable differences between Alice and Alice’ are such that Alice orders fettucini alfredo and Alice’ orders eggplant parmesan.
What dishes will Alice and Alice’ receive?
I’m off to the market, now—I’ll post the followup in a moment.
Now: I imagine most people would say that Alice would receive the fettucini and Alice’ the eggplant. I will proceed on this assumption.
Now suppose that Alice and Alice’ are switched at the moment they entered the restaurant. Neither Alice nor Alice’ notice any change. Nobody else notices any change, either. In fact, insofar as anyone in universe A (now containing Alice’) and universe A’ (now containing Alice) can tell, nothing has happened.
After the switch, Alice’ and Alice are seated, open their menus, and pick their orders. What dishes will Alice’ and Alice receive?
I’m missing the point of this hypothetical. The situation you described is impossible in a deterministic universe. Since we’re assuming A and A’ are identical at the beginning, what Alice and Alice’ order is determined from that initial state. The divergence has already occurred once the two Alices order different things: why does it matter what the waiter brings them?
I’m not sure exactly how these universes would work: the setup seems to be a dualistic one. Before the Alices order, A and A’ are physically identical, but the Alices have different “souls” that can somehow magically change the physical makeup of the universe in strangely predictable ways. The different natures of Alice and Alice’ have changed the way two identical sets of atoms move around.
If this applies to the waiter as well, we can’t predict what he’ll decide to bring Alice: for all we know he may turn into a leopard, because that’s his nature.
The requirement is not that there is no divergence, but that the divergence is small enough that no-one could notice the difference. Sure, if a superintelligent AI did a molecular-level scan five minutes before the hypothetical started it would be able to tell that there was a switch, but no such being was there.
And the point of the hypothetical is that the question “what if, counterfactually, Alice ordered the eggplant?” is meaningful—it corresponds to physically switching the molecular formation of Alice with that of Alice’ at the appropriate moment.
I understand now. Sorry; that wasn’t clear from the earlier post.
This seems like an intuition pump. You’re assuming there is a way to switch the molecular formation of Alice’s brain to make her order one dish, instead of another, but not cause any other changes in her. This seems unlikely to me. Messing with her brain like that may cause all kinds of changes we don’t know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn’t order eggplant). While it’s intuitively pleasing to think that there’s a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.
Also, suppose I ask “what if Alice ordered the linguine?” Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?
I understand now. Sorry; that wasn’t clear from the earlier post.
I know—I didn’t phrase it very well.
Messing with her brain like that may cause all kinds of changes we don’t know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn’t order eggplant). While it’s intuitively pleasing to think that there’s a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.
Yes, yes it is.
Also, suppose I ask “what if Alice ordered the linguine?” Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?
I’m not sure. My instinct is to try to minimize the amount by which the universes differ (maybe taking some sort of sample weighted by a decreasing function of the magnitude of the change), but I don’t have a coherent philosophy built around the construction of counterfactuals. My only point is that determinism doesn’t make counterfactuals automatically meaningless.
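That instinct can be phrased as an optimization: among all modified states that make the counterfactual antecedent true, take one that minimizes some distance to the actual state. A toy rendering follows; the state space, the order function, and the candidate perturbations are all invented for illustration, and defining a real distance between brain states is of course the hard, unsolved part.

```python
# Pick the "nearest" counterfactual: the least-perturbed state
# that still satisfies the antecedent ("Alice orders the linguine").

actual = (0.0, 0.0)  # stand-in for Alice's actual state

def order(state):
    # Hypothetical deterministic mapping from brain state to order.
    x, y = state
    return "linguine" if x + y > 1 else "fettucini"

# Candidate perturbed states, each of which would order linguine:
candidates = [(1.2, 0.0), (0.0, 3.0), (2.0, 2.0)]

def distance(s):
    # Toy metric: how far the perturbed state is from the actual one.
    return abs(s[0] - actual[0]) + abs(s[1] - actual[1])

nearest = min((s for s in candidates if order(s) == "linguine"), key=distance)
print(nearest)  # (1.2, 0.0)
```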
The elaborate hypothetical is the equivalent of saying: what if the programming of Alice had been altered in some minor way that nobody notices, so that she orders eggplant parmesan instead of the fettucini alfredo her earlier programming would have made her order? Since there is no agent external to the world that can do it, there is no possibility of that happening. Or it could mean that any minor changes from the predetermined program are possible in a deterministic universe as long as nobody notices them, which would imply an incompletely determined universe.
Ganapati, the counterfactual does not happen. That’s what “counterfactual” means—something which is contrary to fact.
However, the laws of nature in a deterministic universe are specified well enough to calculate the future from the present, and therefore should be specified well enough to calculate the future* from some modified present*, even if no such present* occurs. The answer to “what would happen if I added a glider here to this frame of a Conway’s Life game?” has a defined answer, even though no such glider will be present in the original world.
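The glider counterfactual can be run directly (the particular patterns below are my choice): the same update rule maps the actual present and the modified present* to different futures.

```python
from collections import Counter

def step(live):
    """One Conway's Life generation on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is live next generation if it has 3 neighbours,
    # or 2 neighbours and is currently live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

block = {(10, 10), (10, 11), (11, 10), (11, 11)}    # a still life
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}   # the glider we "add"

world = set(block)                # the factual present
world_star = block | glider       # present*, with the glider added

for _ in range(4):                # a glider repeats every 4 generations
    world, world_star = step(world), step(world_star)

print(world == block)       # True: the still life never changes
# True: the glider has moved one cell diagonally, so future* differs:
print(world_star == block | {(r + 1, c + 1) for (r, c) in glider})
```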
“what would happen if I added a glider here to this frame of a Conway’s Life game?” has a defined answer, even though no such glider will be present in the original world.
Why would you be interested in something that can’t occur in the real world?
In the “free will” case? Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.
Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.
Not prove, implement. You are not rationalizing the best option as being the actual one, you are making it so. When you consider all those options, you don’t know which of them are contrary to fact, and which are not. You never consider something you know to be counterfactual.
Yes, that’s a much better phrasing than mine.
(p.s. you realize that I am having an argument with Ganapati about the compatibility of determinism and free will in this thread, right?)
Actually you brought in the counterfactual argument to attempt to explain the significance (or “purpose”) of an approach called consequentialism (as opposed to others) in a determined universe.
Allow me the privilege of stating my own intentions.
You brought up the counterfactualism example right here, so I assumed it was in response to that post.
I’m sorry, do you have an objection to the reading of “counterfactual” elaborated in this thread?
Sorry for the delay in replying. No, I don’t have any objection to the reading of the counterfactual. However I fail to connect it to the question I posed.
In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it.
Determinism, like solipsism, is a logically consistent system of belief. It cannot be proven wrong any more than solipsism can be, since the only “evidence” disproving it, if any, lies with the entity believing it, not outside.
Do you feel that you are a purposeless entity whose actions and beliefs have no significance whatsoever on the future? If so, your feelings are very much consistent with your belief in determinism. If not, it may be time to take into consideration the evidence in the form of your feelings.
Thank you all for your time!
In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it. [emphasis added]
Wrong. If Alice orders the fettucini in world A, she gets fettucini, but if Alice’ orders eggplant in world A, she gets eggplant. The future is not fixed in advance—it is a function of the present, and your acts in the present create the future.
There’s an old Nozick quote that I found in Daniel Dennett’s Elbow Room: “No one has ever announced that because determinism is true thermostats do not control temperature.” Our actions and beliefs have exactly the same ontological significance as the switching and setting of the thermostat. Tell me in what sense a thermostat does not control the temperature.
Ganapati is partially right. In a deterministic universe (DU), the initial conditions define all of history, from beginning to end, by definition. If it is predetermined that Alice will order fettucini, she will order fettucini. But it doesn’t mean that Alice must order fettucini. I’ll elaborate on that further.
1. No one inside a DU can precisely predict the future. Proof: Suppose we can exactly predict the future; then either A) we can change it, thus proving that the prediction was incorrect, or B) we can’t change it a bit. How can case B be the case? It can’t. A prediction brings information about the future, and so it changes our actions. Let p be a prediction, and let F(p) be the prediction that holds given that we know prediction p. For case B to be possible, the function F must have a fixed point p’ = F(p’), but information from the future brings entropy, which causes future entropy to increase, increasing the prediction’s entropy in turn, and so on. Thus there cannot be a fixed point. QED.
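Setting aside the entropy step, the no-fixed-point claim has a simpler diagonal core, which can be shown with a toy agent (invented here) that always acts contrary to any prediction it is shown:

```python
# If an agent inside the universe is shown a prediction p of its own
# next act and can always do otherwise, no prediction is a fixed point.

def act_given_prediction(p):
    """The agent's response F(p) to hearing the prediction p."""
    return "stay" if p == "leave" else "leave"

# Search for a fixed point p with p == F(p): there is none.
fixed_points = [p for p in ("stay", "leave") if act_given_prediction(p) == p]
print(fixed_points)  # []
```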
No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it.
Given 1, no one can be sure that his/her actions are predetermined to vanish. On the other hand, if one decides to abstain from acting, then it is more likely that he/she is predetermined to fail. Thus, his/her actions (if any) have less probability of affecting the future. On the third hand, if one stands up and wins, then and only then will one know that one was predetermined to win, not a second earlier.
If Alice cannot decide what she likes more, she cannot just say “Oh! I must eat fettucini. It is my fate.”; she doesn’t have, and cannot have, such information, even in principle. She must decide for herself, determinism or not. And if an external observer (let’s call him god) were to come down and say to Alice “It’s your fate to eat fettucini.” (thus effectively making the deterministic universe non-deterministic), no physical law would force Alice to do it.
I’d like to dispute your usage of “predetermined” there: like “fated”, it implies an establishment in advance, rather than by events. A game of Agricola is predetermined to last 14 turns, even in a nondeterministic universe, because no change to gameplay at any point during the game will cause it to terminate before or after the 14th turn. The rules say 14, and that’s fixed in advance. (Factors outside the game may cause mistakes to be made or the game not to finish, but those are both different from the game lasting 13 or 15 turns.) On the opposite side, an arbitrary game of chess is not predetermined to last (as that one did) 24 turns, even in a deterministic universe, because a (counterfactual) change to gameplay could easily cause it to last fewer or more.
If one may determine without knowing Alice’s actions what dish she will be served (e.g. if the eggplant is spoiled), then she may be doomed to get that dish, but in that case the (deterministic or nondeterministic) causal chain leading to her dish does not pass through her decision. And that makes the difference.
I’m not sure that I sufficiently understand you. “Fated” implies that no matter what one does, one will end up as fate dictates, right? In other words: in all counterfactual universes one’s fate is the same. The predetermination I speak of is different. It is a property of a deterministic universe: all events are determined by the initial conditions only.
When Alice decides what she will order, she can construct in her mind a bunch of different universes, and predetermination doesn’t mean that in all those constructed universes she will get fettucini; predetermination means that only one constructed universe will be factual. As I proved in the previous post, Alice cannot know in advance which constructed universe is factual. Alice cannot know that she’s in universe A, where she’s predetermined to eat fettucini, or in universe B, where she’s predetermined to eat eggplant. And her decision process is an integral part of each of these universes.
Without her decision universe A cannot be universe A.
So her decision is a crucial part of the causal chain.
Did I answer your question?
Edit: spellcheck.
I don’t like the connotations, but sure—that’s a mathematically consistent definition.