It was an interesting read. I am a little confused about one aspect, though: determinist consequentialism.
From what I read, it appears a determinist consequentialist believes it is ‘biology all the way down’, meaning all actions are completely determined biologically. So where does choice enter the equation, including the optimising function for the choice, namely the consequences?
Or are there some things that are not biologically determined, like whether to approve of someone else’s actions or not, while actions physically impacting others are themselves completely determined biologically? It doesn’t appear to be the case, since the article states that even something like taste in music, not an action physically impacting others, is completely determined biologically.
From what I read, it appears a determinist consequentialist believes it is ‘biology all the way down’, meaning all actions are completely determined biologically. So where does choice enter the equation, including the optimising function for the choice, namely the consequences?
I think you might be confused on the matter of free will—it’s not obvious that there is any conflict between determinism and choice.
I used the word choice, but ‘free will’ does just as well.
Was your response to my question biologically determined or was it a matter of conscious choice?
Whether there is going to be another response to this comment of mine or not, would it have been completely determined biologically or would it be a matter of conscious choice by some?
If all human actions are determined biologically, the ‘choice’ is only an apparent one, like a tossed coin having a ‘choice’ of turning up heads or tails. Whether someone is a determinist or not should itself have been determined biologically, including all discussions of this nature!
Was your response to my question biologically determined or was it a matter of conscious choice?
The correct answer to this is “both” (and it is a false dichotomy). My consciousness is a property of a certain collection of matter which can be most compactly described by reference to the regularities we call “biology”. Choosing to answer (or not to answer) is the result of a decision procedure arising out of the matter residing (to a rough approximation) in my braincase.
The difference between me and a coin is that a coin is a largely homogenous lump of metal and does not contain anything like a “choice mechanism”, whereas among the regularities we call “biology” we find some patterns that reliably allow organisms (and even machines) to steer the future toward preferred directions, and which we call “choosing” or “deciding”.
Do your choices have causes? Do those causes have causes?
Determinism doesn’t have to mean epiphenomenalism. Metaphysically, epiphenomenalism—the belief that consciousness has no causal power—is a lot like belief in true free will—consciousness as an uncaused cause—in that it places consciousness half outside the chain of cause and effect, rather than wholly within it. (But subjectively they can be very different.)
Increase in consciousness increases the extent to which the causes of one’s choices and actions are themselves conscious in origin rather than unconscious. This may be experienced as liberation from cause and effect, but really it’s just liberation from unconscious causes. Choices do have causes, whether or not you’re aware of them.
Whether someone is a determinist or not should itself have been determined biologically, including all discussions of this nature!
This is a point which throws many people, but again, it comes from an insufficiently broad concept of causality. Reason itself has causes and operates as a cause. We can agree, surely, that absurdly wrong beliefs have a cause; we can understand why a person raised in a cult may believe its dogmas. Correct beliefs also have a cause. Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.
Nonetheless, despite this limited evolutionary gift, it may be true that we are deterministically doomed to fundamental error or ignorance in certain matters. Since the relationship of consciousness, knowledge, and reality is not exactly clear, it’s hard to be sure.
Do your choices have causes? Do those causes have causes?
Determinism doesn’t have to mean epiphenomenalism. Metaphysically, epiphenomenalism—the belief that consciousness has no causal power—is a lot like belief in true free will—consciousness as an uncaused cause—in that it places consciousness half outside the chain of cause and effect, rather than wholly within it. (But subjectively they can be very different.)
I don’t equate determinism with epiphenomenalism; I only hold that even when consciousness acts as a cause, it is itself completely determined, meaning the apparent choice is simply our inability, at the current level of knowledge, to predict exactly what choice will be made.
Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.
Not sure how that follows. Evolutionary survival can say nothing about emergence of sentient species, let alone some capacity for correct cognition in that species. If the popular beliefs and models of the universe until a few centuries ago are incorrect, that seems to point in the exact opposite direction of your claim.
The problem appears to be one of ‘generalisation from one example’. There exist beings with a consciousness that is not biologically determined, and there exist those whose consciousness is completely biologically determined. The former may choose determinism as a ‘belief in belief’, while the latter will see it as a fact, much like a self-aware AI.
… the apparent choice is simply our inability, at the current level of knowledge, to predict exactly what choice will be made.
That’s true. And there is no problem with that.
Evolutionary survival can say nothing about emergence of sentient species, let alone some capacity for correct cognition in that species.
If the cognition were totally incorrect, leading to beliefs unrelated to the outside world, it would be only a waste of energy to maintain such cognitive capacity. Correct beliefs about certain things (like the locations of food and predators) are without doubt a great evolutionary advantage.
If the popular beliefs and models of the universe until a few centuries ago are incorrect, that seems to point in the exact opposite direction of your claim.
Yes, but it is very weak evidence (more so if current models are correct). The claim stated that there was at least some capacity for correct cognition, not that the cognition is perfect.
There exist beings with a consciousness that is not biologically determined, and there exist those whose consciousness is completely biologically determined.
Can you explain the meaning? What are the former and what are the latter beings?
If the cognition were totally incorrect, leading to beliefs unrelated to the outside world, it would be only a waste of energy to maintain such cognitive capacity. Correct beliefs about certain things (like the locations of food and predators) are without doubt a great evolutionary advantage.
Not sure what kind of cognitive capacity the dinosaurs held, but the fact that they roamed around for millions of years and then became extinct seems to indicate that evolution itself doesn’t care much about cognitive capacity beyond a point (which you already mentioned).
Can you explain the meaning? What are the former and what are the latter beings?
You are already familiar with the latter, those whose consciousness is biologically determined. How do you expect to recognise the former, those whose consciousness is not biologically determined?
Not sure what kind of cognitive capacity the dinosaurs held...
At least they probably didn’t have a deceptive cognitive capacity. That is, they had few beliefs, but those few were more or less correct. I am not saying that an intelligent species is universally better at survival than a dumb species. I said that of two almost identical species with the same quantity of cognition (measured by brain size, or better, by its energy consumption or the number of distinct beliefs held) which differ only in the quality of cognition (i.e. the correspondence of beliefs and reality), the one which is easily deluded is at a clear disadvantage.
How do you expect to recognise the former, those whose consciousness is not biologically determined?
Well, what I know about nature indicates that any physical system evolves in time respecting rigid deterministic physical laws. There is no strong evidence that living creatures form an exception. Therefore I conclude that consciousness must be physically, and therefore biologically, determined. I don’t expect to distinguish “deterministic creatures” from “non-deterministic creatures”; I simply expect the latter can’t exist in this world. Or maybe I can’t even imagine what it could possibly mean for consciousness to be not biologically determined. From my point of view, it could mean either a very bizarre form of dualism (consciousness is separated from the material world, but by chance it reflects correctly what happens in the material world), or it could mean that the natural laws aren’t entirely deterministic. But I don’t call the latter possibility “free will”, I call it “randomness”.
Your line of thought reminds me of a class of apologetics which claim that if we have evolved by random chance, then there is no guarantee that our cognition is correct, and if our cognition is flawed, we are not able to recognise that we have evolved by random chance; therefore, holding a position that we have evolved by random chance is incoherent and God must have been involved in the process. I think this class of arguments is called “presuppositionalist”, but I may be wrong.
Whatever the name, the argument is a fallacy. That our cognition is correct is an assumption we must make, otherwise we had better not argue about anything. Although a carefully designed cognitive algorithm may have better chances of working correctly than a cognitive algorithm evolved by chance, i.e. it is acceptable that p(correct|evolved)<p(correct|designed), it doesn’t necessarily follow that p(evolved|correct)<p(designed|correct), which is the conclusion the presuppositionalists essentially draw.
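To make the probabilistic point concrete, here is a toy Bayes calculation with invented numbers (purely illustrative, and assuming a large prior for “evolved”): the likelihood inequality holds, yet the posterior inequality goes the other way.

```python
# Toy numbers, invented for illustration only: they assume most cognitive
# algorithms arise by evolution rather than by deliberate design.
p_evolved = 0.99
p_designed = 0.01
p_correct_given_evolved = 0.6    # design is assumed more reliable...
p_correct_given_designed = 0.9   # ...so p(correct|evolved) < p(correct|designed)

p_correct = (p_correct_given_evolved * p_evolved
             + p_correct_given_designed * p_designed)

# Bayes' theorem for the posteriors:
p_evolved_given_correct = p_correct_given_evolved * p_evolved / p_correct
p_designed_given_correct = p_correct_given_designed * p_designed / p_correct

print(p_evolved_given_correct)    # about 0.985
print(p_designed_given_correct)   # about 0.015
# Despite the weaker likelihood, p(evolved|correct) > p(designed|correct).
```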
Back to your argument, you seem to implicitly hold about cognition that p(correct|deterministic)<p(correct|indeterministic), for which I can’t see any reason, but even if that is valid, it isn’t automatically a strong argument for indeterminism.
I said that of two almost identical species with the same quantity of cognition (measured by brain size, or better, by its energy consumption or the number of distinct beliefs held) which differ only in the quality of cognition (i.e. the correspondence of beliefs and reality), the one which is easily deluded is at a clear disadvantage.
Unless the delusions are related to survival and procreation, I don’t see how they would present any evolutionary disadvantage.
Well, what I know about nature indicates that any physical system evolves in time respecting rigid deterministic physical laws. There is no strong evidence that living creatures form an exception.
Actually there is plenty of evidence to show that living creatures require additional laws to be predicted. Darwinian evolution itself is not required to describe the physical world. However what you probably meant was that there is no evidence that living creatures violate any physical laws, meaning laws governing the living are potentially reducible to physical laws. Someone else looking at the exact same evidence can come to an entirely different conclusion: that we are actually on the verge of demonstrating what we always felt, that the living are more than physics. Both positions are based on something that has not yet been demonstrated, the only “evidence” for either lying with the individual, a case of generalisation from one example.
Back to your argument, you seem to implicitly hold about cognition that p(correct|deterministic)<p(correct|indeterministic),...
Not at all. I was only questioning the logical consistency of an approach called ‘determinist consequentialism’. Determinism implies a future that is predetermined and potentially predictable. Consequentialism would require a future that is not predetermined and dependent on choices that we make now either because of a ‘free will’ or ‘randomness’.
Unless the delusions are related to survival and procreation, I don’t see how they would present any evolutionary disadvantage.
Forming and holding any belief is costly. The time and energy you spend forming delusions can be used elsewhere.
Actually there is plenty of evidence to show that living creatures require additional laws to be predicted.
An example would be helpful. I don’t know what evidence you are speaking about.
However what you probably meant was that there is no evidence that living creatures violate any physical laws, meaning laws governing the living are potentially reducible to physical laws.
What is the difference between respecting physical laws and not violating them? Physical laws (and I am speaking mainly about the microscopical ones) determine the time evolution uniquely. Once you know the initial state in all detail, the future is logically fixed, there is no freedom for additional laws. That of course doesn’t mean that predictions of the future are practically feasible or even easy.
Consequentialism would require a future that is not predetermined and dependent on choices that we make now either because of a ‘free will’ or ‘randomness’.
Consequentialism doesn’t require either. The choices needn’t be unpredictable in principle to be meaningful.
Forming and holding any belief is costly. The time and energy you spend forming delusions can be used elsewhere.
Perhaps. But I do not see why that should present an evolutionary disadvantage if they do not impact survival and procreation. On the contrary, it could present an evolutionary advantage. A species that deluded itself into believing that it has been the chosen species might actually work energetically towards establishing its hegemony and gain an evolutionary advantage.
An example would be helpful. I don’t know what evidence you are speaking about.
The evidence was stated in the very next line, the Darwinian evolution, something that is not required to describe the evolution of non-biological systems.
What is the difference between respecting physical laws and not violating them?
Of course, none. The distinction I wanted to make was one between respecting/not-violating and being completely determined by.
Physical laws (and I am speaking mainly about the microscopical ones) determine the time evolution uniquely. Once you know the initial state in all detail, the future is logically fixed, there is no freedom for additional laws. That of course doesn’t mean that predictions of the future are practically feasible or even easy.
Nothing to differ there as a definition of determinism. It was exactly the point I was making too. If biological systems are, like us, are completely determined by physical laws, the apparent choice of making a decision by considering consequences is itself an illusion.
Consequentialism doesn’t require either. The choices needn’t be unpredictable in principle to be meaningful.
In which case every choice every entity makes, regardless of how it arrives at it, is meaningful. In other words there are no meaningless choices in the real world.
But I do not see why that should present an evolutionary disadvantage if they do not impact survival and procreation.
A large useless brain consumes a lot of energy, which means more dangerous hunting and faster consumption of supplies when food is insufficient. The relation to survival is straightforward.
A species that deluded itself into believing that it has been the chosen species might actually work energetically towards establishing its hegemony and gain an evolutionary advantage.
Sounds like group selection to me. And not much in accordance with observation. Although I don’t believe the Jews believe in their chosenness on genetic grounds, even if they did, they haven’t been all that successful after all.
the Darwinian evolution, something that is not required to describe the evolution of non-biological systems.
Depends on interpretation of “required”. If it means that practically one cannot derive useful statements about trilobites from Schrödinger equation, then yes, I agree. If it means that laws of evolution are logically independent laws which we would need to keep even if we overcome all computational and data-storage difficulties, then I disagree. I expect you meant the first interpretation, given your last paragraph.
A large useless brain consumes a lot of energy, which means more dangerous hunting and faster consumption of supplies when food is insufficient. The relation to survival is straightforward.
Peacock tails reduce their survival chances. Even so peacocks are around. As long as the organism survives until it is capable of procreation, any survival disadvantages don’t pose an evolutionary disadvantage.
Sounds like group selection to me. And not much in accordance with observation.
I am more inclined towards gene selection theory, not group selection. About the only species whose delusions we can observe is ourselves. So it is difficult to come up with any significant objective observational data.
Although I don’t believe the Jews believe in their chosenness on genetic grounds, even if they did, they haven’t been all that successful after all.
I didn’t mean the Jews, I meant the human species. If delusions are not genetically determined, what would be their source, from a deterministic point of view?
Peacock tails reduce their survival chances. Even so peacocks are around. As long as the organism survives until it is capable of procreation, any survival disadvantages don’t pose an evolutionary disadvantage.
The peacock tail’s survival disadvantage isn’t limited to the post-reproduction period. In order to explain the existence of the tails, it must be shown that their positive effect is greater than the negative.
I don’t dispute that a (probably large) part of the human brain’s capacity is used in the peacock-tail manner, as a signal of fitness. What I am saying is only that of two brains with the same energetic demands, the one with more correct cognition is at an advantage; their signalling value is the same, so no peacock mechanism should favour the deluded one.
This doesn’t constitute proof of the correctness of human cognition; perhaps (almost certainly) some parts of our brain’s design are wrong in a way that no single mutation can repair, like the blind spot on the human retina. But the evolutionary argument for correctness can’t be dismissed as irrelevant.
If delusions presented only survival disadvantages and no advantages, you would be right. However, that need not be the case.
The delusion of an afterlife can co-exist with correct cognition in matters affecting immediate survival, and when it does, it can enhance survival chances. So evolution doesn’t automatically lead to or enhance correct cognition. I am not saying correctness plays no role, but it isn’t the sole deciding factor, at least not in the case of evolutionary selection.
Consequentialism would require a future that is not predetermined and dependent on choices that we make now either because of a ‘free will’ or ‘randomness’.
Not sure what kind of cognitive capacity the dinosaurs held, but the fact that they roamed around for millions of years and then became extinct seems to indicate that evolution itself doesn’t care much about cognitive capacity beyond a point (which you already mentioned).
Huh? Presumably if the dinosaurs had the cognitive capacity and the opposable thumbs to develop rocket ships and divert incoming asteroids they would have survived. They died out because they weren’t smart enough.
I will side with Ganapati on this particular point. We humans are spending much more cognitive capacity, with much more success, on inventing new ways to make ourselves extinct than we do on asteroid defense. And dinosaurs stayed around much longer than us anyway. So the jury is still out on whether intelligence helps a species avoid extinction.
prase’s original argument still stands, though. Having a big brain may or may not give you a survival advantage, but having a big non-working brain is certainly a waste that evolution would have erased in mere tens of generations, so if you have a big brain at all, chances are that it’s working mostly correctly.
ETA: disregard that last paragraph. It’s blatantly wrong. Evolution didn’t erase peacock tails.
The asteroid argument aside, it seems to me bordering on obvious that general intelligence is adaptive, even if taken to an extreme it can get a species into trouble. (1) Unless you think general intelligence is only helpful for sexual selection, it has to be adaptive or we wouldn’t have it (since it is clearly the product of more than one mutation). (2) Intelligence appears to use a lot of energy, such that if it weren’t beneficial it would be a tremendous waste. (3) There are many obvious causal connections between general intelligence and survival. It enabled us to construct axes and spears, harness fire, communicate hunting strategies, pass down hunting and gathering techniques to the next generation, navigate status hierarchies, etc. All technologies that have fairly straightforward relations to increased survival.
And the fact that we’re doing more to invent new ways to kill ourselves instead of protect ourselves can be traced pretty directly to collective action problems and a whole slew of evolved features other than intelligence that were once adaptive but have ceased to be—tribalism most obviously.
The fact that only a handful of species have high intelligence suggests that there are very few niches that actually support it. There’s also evidence that human intelligence is due in large part to runaway sexual selection (like a peacock’s tail). See Norretranders’s “The Generous Man”, for example. A number of biologists such as Dawkins take this hypothesis very seriously.
There’s also evidence that human intelligence is due in large part to runaway sexual selection (like a peacock’s tail).
That’s an explanation for the increase in intelligence from apes to humans, and my comment was largely about that, but the original disputed claim was:
Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.
And there are less complex adaptive behaviors that require correct cognition: identifying prey, identifying predators, identifying food, identifying cliffs, path-finding, etc. I guess there is an argument to be had about what counts as a ‘conscious species’, but that doesn’t seem worthwhile. Also, there is a subtle difference between what human intelligence is due to and what the survival benefits of it are. It may have taken sexual selection to jump-start it, but our intelligence has made us far less vulnerable than we once were (with the exception of the problems we created for ourselves). Humans are rarely eaten by giant cats, for one thing.
The fact that only a handful of species have high intelligence suggests that there are very few niches that actually support it.
No species has as high intelligence as humans, but lots of species have high intelligence relative to, say, clams. Okay, that’s a little facetious, but tool use has arisen independently throughout the animal kingdom again and again, not to mention the less complex behaviors mentioned above.
Are people really disputing whether or not accurate beliefs about the world are adaptive? Or that intelligence increases the likelihood of having accurate beliefs about the world?
Are people really disputing whether or not accurate beliefs about the world are adaptive? Or that intelligence increases the likelihood of having accurate beliefs about the world?
Well, having more accurate beliefs only matters if you are an entity intelligent enough to actually act on those beliefs. To make an extreme case, consider the hypothetical of, say, an African Grey Parrot able to do calculus problems. Is that going to actually help it? I would suspect generally not. Or consider a member of a species that gains the accurate belief that it can sexually self-stimulate and then engages in that rather than mating. Here we have a non-adaptive trait (masturbation is a very complicated trait and so isn’t non-adaptive in all cases, but one can easily see situations where it seems to be). Or consider a pair of married humans, Alice and Bob, who have kids that Bob believes are his. Then Bob finds out that his wife had an affair with Bob’s brother Charlie and the kids are all really Charlie’s. If Bob responds by cutting off support for the kids, this is likely non-adaptive. Indeed, one can take it a step further and suppose that Bob and Charlie are identical twins, so that Bob’s actions are completely anti-adaptive.
Your second point seems more reasonable. However, I’d suggest that intelligence increases the total number of beliefs one has about the world but that it may not increase the likelihood of those beliefs being accurate. Even if it does, the number of incorrect beliefs is likely to increase as well. It isn’t clear that the average ratio of correct beliefs to total beliefs is actually increasing (I’m being deliberately vague here in that it would likely be very difficult to measure how many beliefs one has without a lot more thought). A common ape may have no incorrect beliefs even as the common human has many incorrect beliefs. So it isn’t clear that intelligence leads to more accurate beliefs.
Edit: I agree that overall intelligence has been a helpful trait for human survival over the long haul.
Are people really disputing whether or not accurate beliefs about the world are adaptive?
That seems a likely area of dispute. Having accurate beliefs seems, ceteris paribus, to be better for you than inaccurate beliefs (though I can make up as many counterexamples as you’d like). But that still leaves open the question of whether it’s better than no beliefs at all.
Mammals are a clade while reptiles are paraphyletic. Well, dinosaurs are too when birds are excluded, but I would gladly leave the birds in. In any case, dinosaurs win over mammals, so it probably wasn’t a good nitpick after all.
No dinosaur species lived alongside humans, so direct competition never took place.
they roamed around for millions of years and then became extinct
I don’t think one should compare humans and dinos. Maybe mammals and dinos, or something like that. Many dinosaur species went extinct during that era, while our ancestors were many different “species”, successful enough that we are still around. As were some dinos, which gave birds to the Earth.
In other words, the ‘choices’ you make are not really choices, but already predetermined. You didn’t really choose to be a determinist, you were programmed to select it once you encountered it.
Yep, kind of. But your view of determinism is too depressing :-)
My program didn’t know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one. Like a program that receives an array as input and finds the maximum element in it, the output is “predetermined”, but it’s still useful. Likewise, the worldview I chose was “predetermined”, but that doesn’t mean my choice is somehow “wrong” or “invalid”, as long as my inner program actually implements valid common sense.
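For concreteness, here is roughly the kind of program I have in mind (a trivial sketch, nothing more): its output is fully determined by its input, yet running the computation is exactly what makes it useful.

```python
def find_max(values):
    """Deterministically return the largest element of a non-empty list."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# Given this input, the output is "predetermined" -- but it still has to be
# computed, and the computing is what does the useful work.
print(find_max([3, 7, 2, 9, 4]))   # 9
```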
My program didn’t know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one.
You couldn’t possibly know that! Someone programmed to pick the mystical worldview would feel exactly the same and would have been programmed not to recognise his/her own programming too :-)
Like a program that receives an array as input and finds the maximum element in it, the output is “predetermined”, but it’s still useful.
Of course the output is useful, for the programmer, if any :-)
Likewise, the worldview I chose was “predetermined”, but that doesn’t mean my choice is somehow “wrong” or “invalid”, as long as my inner program actually implements valid common sense.
It appears that regardless of what someone has been programmed to pick, the ‘feelings’ don’t seem to be any different.
If my common sense is invalid and just my imagination, then how in the world do I manage to program computers successfully? That seems to be the most objective test there is, unless you believe all computers are in a conspiracy to deceive humans.
Just to clarify, in a deterministic universe, there are no “invalid” or “wrong” things. Everything just is. Every belief and action is just as valid as any other because that is exactly how each of them has been determined to be.
No, this belief of yours is wrong. A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
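A minimal illustration (hypothetical code, of course): both programs below are equally deterministic, but only one is correct, and the difference can be checked from inside the same deterministic world.

```python
def add_correct(a, b):
    return a + b        # deterministic and correct

def add_broken(a, b):
    return a + b + 1    # just as deterministic, but wrong

# No outside-the-universe vantage point is needed: any agent inside the world
# can compare the outputs against the arithmetic the programs claim to implement.
assert add_correct(2, 2) == 4
assert add_broken(2, 2) == 5    # it really does return 5 where 4 was specified
```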
A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
Sure it can. But it is possible to declare one of them as valid only because you are outside of both and you have a notion of what the result should be.
But to avoid the confusion over the use of words I will restate what I said earlier slightly differently.
In a deterministic universe, neither of a pair of opposites like valid/invalid, right/wrong, true/false etc has more significance than the other. Everything just is. Every belief and action is just as significant as any other because that is exactly how each of them has been determined to be.
I thought about your argument a bit and I think I understand it better now. Let’s unpack it.
First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent’s belief is correct. So no need to be outside the world to define “correctness”.
Another matter is verifying the correctness of beliefs if you’re within the world. You seem to argue that a verifier can’t trust its own conclusion if it knows itself to be a deterministic program. This is debatable—it depends on how you define “trust”—but let’s provisionally accept this. From this you somehow conclude that the world and your mind must be in fact non-deterministic. To me this doesn’t follow. Could you explain?
So your argument against determinism is that certain things in your brain appear to have “significance” to you, but in a deterministic world that would be impossible? Does this restatement suffice as a reductio ad absurdum, or do I need to dismantle it further?
I’m kind of confused about your argument. Sometimes I get a glimpse of sense in it, but then I notice some corollary that looks just ridiculously wrong and snap back out. Are you saying that the validity of the statement 2+2=4 depends on whether we live in a deterministic universe? That’s a rather extreme form of belief relativism; how in the world can anyone hope to convince you that anything is true?
the ‘choices’ you make are not really choices, but already predetermined
The only way that choices can be made is by being predetermined (by your decision-making algorithm). Paraphrasing the familiar wordplay, choices that are not predetermined refer to decisions that cannot be made, while the real choices, that can actually be made, are predetermined.
Of course! Since all the choices of all the actors are predetermined, so is the future. So what exactly would be the “purpose” of acting as if the future were not already determined and we could choose an optimising function based on the possible consequences of different actions?
Since the consequences are determined by your algorithm, whatever your algorithm will do, will actually happen. Thus, the algorithm can contemplate what would be the consequences of alternative choices and make the choice it likes most. The consideration of alternatives is part of the decision-making algorithm, which gives it the property of consistently picking goal-optimizing decisions. Only these goal-optimizing decisions actually get made, but the process of considering alternatives is how they get computed.
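A bare-bones sketch of such a decision procedure (illustrative only; the options, the consequence model, and the utility function are all invented): the algorithm simulates each alternative, scores the predicted consequences, and deterministically enacts the option it scores highest.

```python
def choose(options, predict_consequence, utility):
    """Deterministically pick the option whose predicted consequence scores
    highest. Considering the alternatives is part of the computation; only
    the winning option actually gets enacted."""
    best_option, best_score = None, float("-inf")
    for option in options:
        outcome = predict_consequence(option)   # "what would happen if I did this?"
        score = utility(outcome)
        if score > best_score:
            best_option, best_score = option, score
    return best_option

# Toy usage with made-up options and payoffs:
payoffs = {"stay home": 1, "go hiking": 4, "work late": 2}
best = choose(payoffs,
              predict_consequence=lambda option: payoffs[option],
              utility=lambda outcome: outcome)
print(best)   # "go hiking" -- fully determined by the inputs, yet still a choice
```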
Sure. So consequentialism is the name for the process that happens in every programmed entity, making it pointless to distinguish between the two different approaches.
In a deterministic universe, the future is logically implied by the present—but you’re in the present. The future isn’t fated—if, counterfactually, you did something else, then the laws of physics would imply very different events as a consequence—and it isn’t predictable—even ignoring computational limits, if you make any error, even on an unmeasurable level, in guessing the current state, your prediction will quickly diverge from reality—it’s just logically consistent.
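To see how fast an unmeasurably small error in the guessed state destroys a prediction, here is a toy demonstration using the logistic map as a stand-in for any chaotic deterministic system (my own example; nothing special about the choice):

```python
def logistic(x, steps, r=4.0):
    """Iterate the deterministic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

true_state = 0.3
guessed_state = 0.3 + 1e-12   # an "unmeasurably" small error in the initial state

for t in (10, 30, 50):
    print(t, logistic(true_state, t), logistic(guessed_state, t))
# By step 50 the two trajectories bear no resemblance to each other, even though
# each is perfectly determined by its own initial state.
```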
How could it happen? Each component of the system is programmed to react in a predetermined way to the inputs it receives from the rest of the system. The inputs are predetermined, as is the processing algorithm. How can you or I do anything that we have not been preprogrammed to do?
Consider an isolated system with no biological agents involved. It may contain preprogrammed computers. Would you or would you not expect the future evolution of the system to be completely determined? If you would expect its future to be completely determined, why would things change when the system, such as ours, contains biological agents? If you would not expect the future of the system to be completely determined, why not?
I said “counterfactual”. Let me use an archetypal example of a free-will hypothetical and query your response:
Suppose that there are two worlds, A and A’, which are at a certain time indistinguishable in every measurable way. They differ, however, and differ most strongly in the nature of a particular person, Alice, who lives in A versus the nature of her analogue in A’, whom we shall call Alice’ for convenience.
In the two worlds at the time at which A and A’ are indistinguishable, Alice and Alice’ are entering a restaurant. They are greeted by a server, seated, and given menus, and the attention of both Alice and Alice’ rapidly settles upon two items: the fettucini alfredo and the eggplant parmesan. As it happens, the previously-indistinguishable differences between Alice and Alice’ are such that Alice orders fettucini alfredo and Alice’ orders eggplant parmesan.
What dishes will Alice and Alice’ receive?
I’m off to the market, now—I’ll post the followup in a moment.
Now: I imagine most people would say that Alice would receive the fettucini and Alice’ the eggplant. I will proceed on this assumption.
Now suppose that Alice and Alice’ are switched at the moment they entered the restaurant. Neither Alice nor Alice’ notice any change. Nobody else notices any change, either. In fact, insofar as anyone in universe A (now containing Alice’) and universe A’ (now containing Alice) can tell, nothing has happened.
After the switch, Alice’ and Alice are seated, open their menus, and pick their orders. What dishes will Alice’ and Alice receive?
I’m missing the point of this hypothetical. The situation you described is impossible in a deterministic universe. Since we’re assuming A and A’ are identical at the beginning, what Alice and Alice’ order is determined from that initial state. The divergence has already occurred once the two Alices order different things: why does it matter what the waiter brings them?
I’m not sure exactly how these universes would work: the setup seems to be a dualistic one. Before the Alices order, A and A’ are physically identical, but the Alices have different “souls” that can somehow magically change the physical makeup of the universe in strangely predictable ways. The different natures of Alice and Alice’ have changed the way two identical sets of atoms move around.
If this applies to the waiter as well, we can’t predict what he’ll decide to bring Alice: for all we know he may turn into a leopard, because that’s his nature.
The requirement is not that there is no divergence, but that the divergence is small enough that no-one could notice the difference. Sure, if a superintelligent AI did a molecular-level scan five minutes before the hypothetical started it would be able to tell that there was a switch, but no such being was there.
And the point of the hypothetical is that the question “what if, counterfactually, Alice ordered the eggplant?” is meaningful—it corresponds to physically switching the molecular formation of Alice with that of Alice’ at the appropriate moment.
I understand now. Sorry; that wasn’t clear from the earlier post.
This seems like an intuition pump. You’re assuming there is a way to switch the molecular formation of Alice’s brain to make her order one dish, instead of another, but not cause any other changes in her. This seems unlikely to me. Messing with her brain like that may cause all kinds of changes we don’t know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn’t order eggplant). While it’s intuitively pleasing to think that there’s a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.
Also, suppose I ask “what if Alice ordered the linguine?” Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?
I understand now. Sorry; that wasn’t clear from the earlier post.
I know—I didn’t phrase it very well.
Messing with her brain like that may cause all kinds of changes we don’t know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn’t order eggplant). While it’s intuitively pleasing to think that there’s a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.
Yes, yes it is.
Also, suppose I ask “what if Alice ordered the linguine?” Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?
I’m not sure. My instinct is to try to minimize the amount the universes differ (maybe taking some sort of sample weighted by a decreasing function of the magnitude of the change), but I don’t have a coherent philosophy built around the construction of counterfactuals. My only point is that determinism doesn’t make counterfactuals automatically meaningless.
The elaborate hypothetical is the equivalent of saying: what if the programming of Alice had been altered in a minor way that nobody notices, to order eggplant parmesan instead of the fettucini alfredo which her earlier programming would have made her order? Since there is no agent external to the world that can do it, there is no possibility of that happening. Or it could mean that any minor changes from the predetermined program are possible in a deterministic universe as long as nobody notices them, which would imply an incompletely determined universe.
Ganapati, the counterfactual does not happen. That’s what “counterfactual” means—something which is contrary to fact.
However, the laws of nature in a deterministic universe are specified well enough to calculate the future from the present, and therefore should be specified well enough to calculate the future* from some modified present*, even if no such present* occurs. The answer to “what would happen if I added a glider here to this frame of a Conway’s Life game?” has a defined answer, even though no such glider will be present in the original world.
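A minimal sketch of that Life counterfactual in code (just the standard update rule; the particular patterns are arbitrary): the same deterministic rule answers both “what will happen?” and “what would happen if a glider were added?”, even though only one of the two initial states is ever actual.

```python
from collections import Counter

def step(live):
    """One Conway's Life step; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def run(live, steps):
    for _ in range(steps):
        live = step(live)
    return live

world = {(10, 10), (10, 11), (11, 10), (11, 11)}   # a block: the "factual" frame
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the counterfactual addition

print(run(world, 20))            # what will happen
print(run(world | glider, 20))   # what would happen with the glider added
```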
“what would happen if I added a glider here to this frame of a Conway’s Life game?” has a defined answer, even though no such glider will be present in the original world.
Why would you be interested in something that can’t occur in the real world?
In the “free will” case? Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.
Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.
Not prove, implement. You are not rationalizing the best option as being the actual one, you are making it so. When you consider all those options, you don’t know which ones of them are contrary to fact, and which ones are not. You never consider something you know to be counter-factual.
Actually you brought in the counterfactual argument to attempt to explain the significance (or “purpose”) of an approach called consequentialism (as opposed to others) in a determined universe.
Sorry for the delay in replying. No, I don’t have any objection to the reading of the counterfactual. However I fail to connect it to the question I posed.
In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it.
Determinism, like solipsism, is a logically consistent system of belief. It cannot be proven wrong any more than solipsism can be, since the only “evidence” disproving it, if any, lies with the entity believing it, not outside.
Do you feel that you are a purposeless entity whose actions and beliefs have no significance whatsoever on the future? If so, your feelings are very much consistent with your belief in determinism. If not, it may be time to take into consideration the evidence in the form of your feelings.
In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it. [emphasis added]
Wrong. If Alice orders the fettucini in world A, she gets fettucini, but if Alice’ orders eggplant in world A, she gets eggplant. The future is not fixed in advance—it is a function of the present, and your acts in the present create the future.
There’s an old Nozick quote that I found in Daniel Dennett’s Elbow Room: “No one has ever announced that because determinism is true thermostats do not control temperature.” Our actions and beliefs have exactly the same ontological significance as the switching and setting of the thermostat. Tell me in what sense a thermostat does not control the temperature.
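If the thermostat analogy needs unpacking, here is a deliberately crude simulation (all the constants are invented): the controller’s switching is fully determined by the temperature it reads, and yet that switching is exactly what keeps the temperature near the setpoint.

```python
def simulate(hours=24, setpoint=20.0, outside=5.0):
    """Toy deterministic thermostat: heater on below the setpoint, off above it."""
    temp, history = 15.0, []
    for _ in range(hours):
        heater_on = temp < setpoint          # the "decision", fully determined
        temp += 2.0 if heater_on else 0.0    # heating effect (invented constant)
        temp -= 0.1 * (temp - outside)       # heat loss toward the outside temperature
        history.append(round(temp, 1))
    return history

print(simulate())
# The room hovers near 20 degrees *because of* the thermostat's switching;
# without the controller it would deterministically drift toward 5 degrees.
```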
Ganapati is partially right. In a deterministic universe (DU), the initial conditions define all of history from beginning to end, by definition. If it is predetermined that Alice will order fettucini, she will order fettucini. But it doesn’t mean that Alice must order fettucini. I’ll elaborate on that further.
No one inside a DU can precisely predict the future. Proof: suppose we can exactly predict the future; then either A) we can change it, thus proving that the prediction was incorrect, or B) we can’t change it a bit. How can case B be the case? It can’t. A prediction brings information about the future, and so it changes our actions. Let p be a prediction, and let F(p) be the prediction made once we know prediction p. For case B to be possible, the function F(p) must have a fixed point p’=F(p’), but information from the future brings entropy, which causes future entropy to increase, thus increasing the prediction’s entropy, and so on. Thus, there cannot be a fixed point. QED.
No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it.
Given 1, no one can be sure that his/her actions are predetermined to vanish. On the other hand, if one decides to abstain from acting, then it is more likely he/she is predetermined to fail. Thus, his/her actions (if any) have less probability of affecting the future. On the third hand, if one stands up and wins, only then will one know that one was predetermined to win, not a second earlier.
If Alice cannot decide what she likes more, she cannot just say “Oh! I must eat fettucini. It is my fate.”; she hasn’t got, and cannot have, such information in principle. She must decide for herself, determination or not. And if an external observer (let’s call him god) comes down and says to Alice “It’s your fate to eat fettucini” (thus effectively making the deterministic universe non-deterministic), no single physical law will force Alice to do it.
I’d like to dispute your usage of “predetermined” there: like “fated”, it implies an establishment in advance, rather than by events. A game of Agricola is predetermined to last 14 turns, even in a nondeterministic universe, because no change to gameplay at any point during the game will cause it to terminate before or after the 14th turn. The rules say 14, and that’s fixed in advance. (Factors outside the game may cause mistakes to be made or the game not to finish, but those are both different from the game lasting 13 or 15 turns.) On the opposite side, an arbitrary game of chess is not predetermined to last (as that one did) 24 turns, even in a deterministic universe, because a (counterfactual) change to gameplay could easily cause it to last fewer or more.
If one may determine without knowing Alice’s actions what dish she will be served (e.g. if the eggplant is spoiled), then she may be doomed to get that dish, but in that case the (deterministic or nondeterministic) causal chain leading to her dish does not pass through her decision. And that makes the difference.
I’m not sure that I sufficiently understand you. “Fated” implies that no matter what one does, one will end up as fate dictates, right? In other words: in all counterfactual universes one’s fate is the same. The predetermination I speak of is different. It is a property of a deterministic universe: all events are determined by the initial conditions only.
When Alice decides what she will order, she can construct in her mind a bunch of different universes, and predetermination doesn’t mean that in all those constructed universes she will get fettucini; predetermination means that only one constructed universe will be factual. As I proved in the previous post, Alice cannot know in advance which constructed universe is factual. Alice cannot know that she’s in universe A where she’s predetermined to eat fettucini, or that she’s in universe B where she’s to eat eggplant. And her decision process is an integral part of each of these universes.
Without her decision universe A cannot be universe A.
I didn’t read them in one day, and not all of them either.
I ‘stumbled upon’ this article on the night of June 1 (GMT +5.30) and did a bit of research on the site, looking to check if my question had been previously raised and answered. In the process I did end up reading a few articles and sequences.
It was an interesting read. I am a little confused about one aspect, though, that is determinist consequentialism.
From what I read, it appears a determinist consequentialist believes it is ‘biology all the way down’ meaning all actions are completely determined biologically. So where does choice enter the equation, including the optimising function for the choice, the consequences?
Or are there some things that are not biologically determined, like whether to approve someone else’s actions or not, while actions physically impacting others are themsleves completely determined biologically? It doesn’t appear to be the case, since the article states that even something like taste for music, not an action physically impacting the others, is completely determined biologically.
I think you might be confused on the matter of free will—it’s not obvious that there is any conflict between determinism and choice.
I used the word choice, but ‘free will’ do as well.
Was your response to my question biologically determined or was it a matter of conscious choice?
Whether there is going to be another response to this comment of mine or not, would it have been completely determined biologically or would it be a matter of conscious choice by some?
If all human actions are determined biologically the ‘choice’ is only an apparent one, like a tossed up coin having a ‘choice’ of turning up heads or tails. Whether someone is a determinist or not should itself have been determined biologically including all discussions of this nature!
The correct answer to this is “both” (and it is a false dichotomy). My consciousness is a property of a certain collection of matter which can be most compactly described by reference to the regularities we call “biology”. Choosing to answer (or not to answer) is the result of a decision procedure arising out of the matter residing (to a rough approximation) in my braincase.
The difference between me and a coin is that a coin is a largely homogenous lump of metal and does not contain anything like a “choice mechanism”, whereas among the regularities we call “biology” we find some patterns that reliably allow organisms (and even machines) to steer the future toward preferred directions, and which we call “choosing” or “deciding”.
Do your choices have causes? Do those causes have causes?
Determinism doesn’t have to mean epiphenomenalism. Metaphysically, epiphenomenalism—the belief that consciousness has no causal power—is a lot like belief in true free will—consciousness as an uncaused cause—in that it places consciousness half outside the chain of cause and effect, rather than wholly within it. (But subjectively they can be very different.)
Increase in consciousness increases the extent to which the causes of one’s choices and actions are themselves conscious in origin rather than unconscious. This may be experienced as liberation from cause and effect, but really it’s just liberation from unconscious causes. Choices do have causes, whether or not you’re aware of them.
This is a point which throws many people, but again, it comes from an insufficiently broad concept of causality. Reason itself has causes and operates as a cause. We can agree, surely, that absurdly wrong beliefs have a cause; we can understand why a person raised in a cult may believe its dogmas. Correct beliefs also have a cause. Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.
Nonetheless, despite this limited evolutionary gift, it may be true that we are deterministically doomed to fundamental error or ignorance in certain matters. Since the relationship of consciousness, knowledge, and reality is not exactly clear, it’s hard to be sure.
I don’t equate determinism with epiphenomenalism, but that even when it acts as a cause, it is completely determined meaning the apparent choice is simply the inability, at current level of knowledge, of being able to predict exactly what choice will be made.
Not sure how that follows. Evolutionary survival can say nothing about emergence of sentient species, let alone some capacity for correct cognition in that species. If the popular beliefs and models of the universe until a few centuries ago are incorrect, that seems to point in the exact opposite direction of your claim.
It appears that the problem seems to be one of ‘generalisation from one example’. There exist beings with a consciousness that is not biologically determined and there exist those whose consciousness is completely biologically detemined. The former may choose determinism as a ‘belief in belief’ while the latter will see it as a fact, much like a self-aware AI.
That’s true. And there is no problem within it.
If the cognition was totally incorrect, leading to beliefs unrelated to the outside world, it would be only a waste of energy to maintain such cognitive capacity. Correct beliefs about certain things (like locations of food and predators) are without doubt great evolutionary advantage.
Yes, but it is a very weak evidence (more so, if current models are correct). The claim stated that there was at least some capacity for correct cognition, not that the cognition is perfect.
Can you explain the meaning? What are the former and what are the latter beings?
Not sure what kind of cognitive capacity the dinosaurs held, but that they roamed around for millions of years and then became extinct seems to indicate that evolution itself doesn’t care much about cognitive capacity beyond a point (that you already mentioned)
You are already familiar with the latter, those whose consciousness is biologically determined. How do you expect to recognise the former, those whose consciousness is not biologically determined?
At least they probably hadn’t a deceptive cognitive capacity. That is, they had few beliefs, but that few were more or less correct. I am not saying that an intelligent species is universally better at survival than a dumb species. I said that of two almost identical species with same quantity of cognition (measured by brain size or better its energy consumption or number of distinct beliefs held) which differ only in quality of cognition (i.e. correspondence of beliefs and reality), the one which is easy deluded is in a clear disadvantage.
Well, what I know about nature indicates that any physical system evolves in time respecting rigid deterministic physical laws. There is no strong evidence that living creatures form an exception. Therefore I conclude that consciousness must be physically and therefore bilogically determined. I don’t expect to recognise “deterministic creatures” from “non-determinist creatures”, I simply expect the latter can’t exist in this world. Or maybe I even can’t imagine what could it possibly mean for consciousness to be not biologically determined. From my point of view, it could mean either a very bizarre form of dualism (consciousness is separated from the material world, but by chance it reflects correctly what happens in the material world), or it could mean that the natural laws aren’t entirely deterministic. But I don’t call the latter possibility “free will”, I call it “randomness”.
Your line of thought reminds me of a class of apologetics which claim that if we have evolved by random chance, then there is no guarantee that our cognition is correct, and if our cognition is flawed, we are not able to recognise that we have evolved by random chance; therefore, holding a position that we have evolved by random chance is incoherent and God must have been involved in the process. I think this class of arguments is called “presuppositionalist”, but I may be wrong.
Whatever is the name, the argument is a fallacy. That our cognition is correct is an assumption we must take, otherwise we may better not argue about anything. Although a carefully designed cognitive algorithm may have better chances to work correctly than by chance evolved cognitive algorithm, i.e. it is acceptable that p(correct|evolved)<p(correct|designed), it doesn’t necessarily mean that p(evolved|correct)<p(designed|correct), which is the conclusion the presuppositionalists essentially make.
Back to your argument, you seem to implicitly hold about cognition that p(correct|deterministic)<p(correct|indeterministic), for which I can’t see any reason, but even if that is valid, it isn’t automatically a strong argument for indeterminism.
Unless the delusions are related to survival and procreation, don’t see how they would present any evolutionary disadvantage.
Actually there is plenty of evidence to show that living creatures require additional laws to be predicted. Darwinian evolution itself is not required to describe the physical world. However what you probably meant was that there is no evidence that living creatures violate any physical laws, meaning laws governing the living are potentially reducible to physical laws. Someone else looking at the exact same evidence, can come to an entirely different conclusion, that we are actually on the verge of demonstrating what we always felt, that the living are more than physics. Both the positions are based on something that has not yet been demonstrated, the only “evidence” for either lying with the individual, a case of generalisation from one example.
Not at all. I was only questioning the logical consistency of an approach called ‘determinist consequentialism’. Determinism implies a future that is predetermined and potentially predictable. Consequentialism would require a future that is not predetermined but dependent on the choices we make now, whether through ‘free will’ or ‘randomness’.
Forming and holding any belief is costly. The time and energy you spend forming delusions can be used elsewhere.
An example would be helpful. I don’t know what evidence you are speaking about.
What is the difference between respecting physical laws and not violating them? Physical laws (and I am speaking mainly about the microscopic ones) determine the time evolution uniquely. Once you know the initial state in all detail, the future is logically fixed; there is no freedom for additional laws. That of course doesn’t mean that predictions of the future are practically feasible or even easy.
Consequentialism doesn’t require either. The choices needn’t be unpredictable in principle to be meaningful.
Perhaps. But I do not see why that should present an evolutionary disadvantage if they do not impact survival and procreation. On the contrary, it could present an evolutionary advantage. A species that deluded itself into believing that it is the chosen species might actually work energetically towards establishing its hegemony and gain an evolutionary advantage.
The evidence was stated in the very next sentence: Darwinian evolution, something that is not required to describe the evolution of non-biological systems.
Of course, none. The distinction I wanted to make was one between respecting/not-violating and being completely determined by.
Nothing to dispute there as a definition of determinism. It was exactly the point I was making too. If biological systems like us are completely determined by physical laws, the apparent choice of making a decision by considering consequences is itself an illusion.
In which case every choice every entity makes, regardless of how it arrives at it, is meaningful. In other words, there are no meaningless choices in the real world.
A large useless brain consumes a lot of energy, which means more dangerous hunting and faster consumption of supplies when food is insufficient. The relation to survival is straightforward.
Sounds like group selection to me. And not much in accordance with observation. Although I don’t believe the Jews believe in their chosenness on genetic grounds, even if they did, they haven’t been especially successful after all.
Depends on the interpretation of “required”. If it means that practically one cannot derive useful statements about trilobites from the Schrödinger equation, then yes, I agree. If it means that the laws of evolution are logically independent laws which we would need to keep even if we overcame all computational and data-storage difficulties, then I disagree. I expect you meant the first interpretation, given your last paragraph.
Peacock tails reduce their survival chances. Even so, peacocks are around. As long as the organism survives until it is capable of procreation, any survival disadvantages don’t pose an evolutionary disadvantage.
I am more inclined towards the gene selection theory, not group selection. About the only species whose delusions we can observe is our own. So it is difficult to come up with any significant objective observational data.
I didn’t mean the Jews, I meant the human species. If delusions are not genetically determined, what would be their source, from a deterministic point of view?
The peacock tail’s survival disadvantage isn’t limited to the post-reproduction period. In order to explain the existence of the tails, it must be shown that their positive effect is greater than the negative.
I don’t dispute that a (probably large) part of the human brain’s capacity is used in the peacock-tail manner as a signal of fitness. What I say is only that, of two brains with the same energetic demands, the one with more correct cognition is at an advantage; their signalling value is the same, so any peacock mechanism shouldn’t favour the deluded one.
This doesn’t constitute proof of the correctness of human cognition; perhaps (almost certainly) some parts of our brain’s design are wrong in a way that no single mutation can repair, like the blind spot on the human retina. But the evolutionary argument for correctness can’t be dismissed as irrelevant.
If delusions presented only survival disadvantages and no advantages, you would be right. However, that need not be the case.
A delusion about an afterlife can co-exist with correct cognition in matters affecting immediate survival, and when it does, it can enhance survival chances. So evolution doesn’t automatically lead to or enhance correct cognition. I am not saying correctness plays no role, but it isn’t the sole deciding factor, at least not in the case of evolutionary selection.
This post is relevant.
Huh? Presumably if the dinosaurs had the cognitive capacity and the opposable thumbs to develop rocket ships and divert incoming asteroids they would have survived. They died out because they weren’t smart enough.
I will side with Ganapati on this particular point. We humans are spending much more cognitive capacity, with much more success, on inventing new ways to make ourselves extinct than on asteroid defense. And dinosaurs stayed around much longer than we have anyway. So the jury is still out on whether intelligence helps a species avoid extinction.
prase’s original argument still stands, though. Having a big brain may or may not give you a survival advantage, but having a big non-working brain is certainly a waste that evolution would have erased in mere tens of generations, so if you have a big brain at all, chances are that it’s working mostly correctly.
ETA: disregard that last paragraph. It’s blatantly wrong. Evolution didn’t erase peacock tails.
The asteroid argument aside, it seems to me bordering on obvious that general intelligence is adaptive, even if taken to an extreme it can get a species into trouble. (1) Unless you think general intelligence is only helpful for sexual selection, it has to be adaptive or we wouldn’t have it (since it is clearly the product of more than one mutation). (2) Intelligence appears to use a lot of energy, such that if it weren’t beneficial it would be a tremendous waste. (3) There are many obvious causal connections between general intelligence and survival. It enabled us to construct axes and spears, harness fire, communicate hunting strategies, pass down hunting and gathering techniques to the next generation, navigate status hierarchies, etc. All technologies that have fairly straightforward relations to increased survival.
And the fact that we’re doing more to invent new ways to kill ourselves than to protect ourselves can be traced pretty directly to collective action problems and a whole slew of evolved features other than intelligence that were once adaptive but have ceased to be, tribalism most obviously.
The fact that only a handful of species have high intelligence suggests that there are very few niches that actually support it. There’s also evidence that human intelligence is due in large part to runaway sexual selection (like a peacock’s tail). See Norretranders’s “The Generous Man” for example. A number of biologists such as Dawkins take this hypothesis very seriously.
That’s an explanation for the increase in intelligence from apes to humans, and my comment was largely about that, but the original disputed claim was
And there are less complex adaptive behaviors that require correct cognition: identifying prey, identifying predators, identifying food, identifying cliffs, path-finding, etc. I guess there is an argument to be had about what counts as a ‘conscious species’, but that doesn’t seem to be worthwhile. Also, there is a subtle difference between what human intelligence is due to and what the survival benefits of it are. It may have taken sexual selection to jump-start it, but our intelligence has made us far less vulnerable than we once were (with the exception of the problems we created for ourselves). Humans are rarely eaten by giant cats, for one thing.
No species has intelligence as high as humans’, but lots of species have high intelligence relative to, say, clams. Okay, that’s a little facetious, but tool use has arisen independently throughout the animal kingdom again and again, not to mention the less complex behaviors mentioned above.
Are people really disputing whether or not accurate beliefs about the world are adaptive? Or that intelligence increases the likelihood of having accurate beliefs about the world?
Well, having more accurate beliefs only matters if you are an entity intelligent enough to generally act on those beliefs. To make an extreme case, consider the hypothetical of, say, an African Grey Parrot able to do calculus problems. Is that actually going to help it? I would suspect generally not. Or consider a member of a species that gains the accurate belief that it can sexually self-stimulate and then engages in that rather than mating. Here we have a non-adaptive trait (masturbation is a very complicated trait and so isn’t non-adaptive in all cases, but one can easily see situations where it seems to be). Or consider a pair of married humans, Alice and Bob, who have kids that Bob believes are his. Then Bob finds out that his wife had an affair with Bob’s brother Charlie and the kids are all really Charlie’s. If Bob responds by cutting off support for the kids, this is likely non-adaptive. Indeed, one can take it a step further and suppose that Bob and Charlie are identical twins, so that Bob’s actions are completely anti-adaptive.
Your second point seems more reasonable. However, I’d suggest that intelligence increases the total number of beliefs one has about the world but that it may not increase the likelihood of beliefs being accurate. Even if it does, the number of incorrect beliefs is likely to increase as well. It isn’t clear that the average ratio of correct beliefs to total beliefs is actually increasing (I’m being deliberately vague here, in that it would likely be very difficult to measure how many beliefs one has without a lot more thought). A common ape may have no incorrect beliefs even as the common human has many incorrect beliefs. So it isn’t clear that intelligence leads to more accurate beliefs.
Edit: I agree that overall intelligence has been a helpful trait for human survival over the long haul.
That seems a likely area of dispute. Having accurate beliefs seems, ceteris paribus, to be better for you than inaccurate beliefs (though I can make up as many counterexamples as you’d like). But that still leaves open the question of whether it’s better than no beliefs at all.
Dinosaurs weren’t a single species, though. Maybe better to compare dinosaurs to mammals than to humans.
Or we could pick a particular species of dinosaur that survived for a few million years and compare it to humans.
Do you expect any changes to the analysis if we did that?
Nitpicking huh? Two can play at that game!
Maybe better to compare mammals to reptiles than to dinosaurs.
Many individual species of dinosaurs have existed for longer than humans have.
Dinosaurs as a whole probably didn’t go extinct; we see their descendants every day as birds.
Okay, this isn’t much to argue about :-)
I love nitpicking!
Mammals are a clade while reptiles are paraphyletic. Well, dinosaurs are too when birds are excluded, but I would gladly leave the birds in. In any case, dinosaurs win over mammals, so it probably wasn’t a good nitpick after all.
No dinosaur species lived alongside humans, so direct competition didn’t take place.
I can’t find a nit to pick here.
Are you claiming that the human species will last a million years or more and not become extinct before then? What are the grounds for such a claim?
I don’t think one should compare humans and dinos. Maybe mammals and dinos, or something like that. Many dinosaur species went extinct during that era; our ancestors were many different “species”, successful enough that we are still around, as were some dinos, which gave birds to the Earth.
Just a side note,
Yep, your view is confused.
The optimizing function is implemented in your biology, which is implemented in physics.
In other words, the ‘choices’ you make are not really choices, but already predetermined. You didn’t really choose to be a determinist; you were programmed to select it once you encountered it.
Yep, kind of. But your view of determinism is too depressing :-)
My program didn’t know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one. Like a program that receives an array as input and finds the maximum element in it, the output is “predetermined”, but it’s still useful. Likewise, the worldview I chose was “predetermined”, but that doesn’t mean my choice is somehow “wrong” or “invalid”, as long as my inner program actually implements valid common sense.
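To make the analogy concrete, here is a minimal sketch (Python, purely illustrative):

```python
def find_max(values):
    """Deterministically 'chooses' the largest element of a non-empty list."""
    best = values[0]
    for v in values[1:]:
        if v > best:   # this comparison is the program's whole 'decision procedure'
            best = v
    return best

# The output is fully determined by the input, yet the program is still useful.
print(find_max([3, 7, 2]))  # 7
```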
You couldn’t possibly know that! Someone programmed to pick the mystical worldview would feel exactly the same and would have been programmed not to recognise his/her own programming too :-)
Of course the output is useful, for the programmer, if any :-)
It appears that, regardless of what someone has been programmed to pick, the ‘feelings’ don’t seem to be any different.
If my common sense is invalid and just my imagination, then how in the world do I manage to program computers successfully? That seems to be the most objective test there is, unless you believe all computers are in a conspiracy to deceive humans.
I program computers successfully too :-)
Just to clarify, in a deterministic universe, there are no “invalid” or “wrong” things. Everything just is. Every belief and action is just as valid as any other because that is exactly how each of them has been determined to be.
No, this belief of yours is wrong. A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
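A minimal sketch of the point (Python, purely illustrative): both functions below are perfectly deterministic, yet one is correct about arithmetic and the other is not.

```python
def add_correct(a, b):
    return a + b       # deterministic and correct: add_correct(2, 2) == 4

def add_broken(a, b):
    return a + b + 1   # just as deterministic, but wrong: add_broken(2, 2) == 5
```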
Sure it can. But it is possible to declare one of them as valid only because you are outside of both and you have a notion of what the result should be.
But to avoid confusion over the use of words, I will restate what I said earlier slightly differently.
In a deterministic universe, neither of a pair of opposites like valid/invalid, right/wrong, or true/false has more significance than the other. Everything just is. Every belief and action is just as significant as any other because that is exactly how each of them has been determined to be.
I thought about your argument a bit and I think I understand it better now. Let’s unpack it.
First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent’s belief is correct. So no need to be outside the world to define “correctness”.
Another matter is verifying the correctness of beliefs if you’re within the world. You seem to argue that a verifier can’t trust its own conclusion if it knows itself to be a deterministic program. This is debatable—it depends on how you define “trust”—but let’s provisionally accept this. From this you somehow conclude that the world and your mind must be in fact non-deterministic. To me this doesn’t follow. Could you explain?
So your argument against determinism is that certain things in your brain appear to have “significance” to you, but in a deterministic world that would be impossible? Does this restatement suffice as a reductio ad absurdum, or do I need to dismantle it further?
I’m kind of confused about your argument. Sometimes I get a glimpse of sense in it, but then I notice some corollary that looks just ridiculously wrong and snap back out. Are you saying that the validity of the statement 2+2=4 depends on whether we live in a deterministic universe? That’s a rather extreme form of belief relativism; how in the world can anyone hope to convince you that anything is true?
The only way that choices can be made is by being predetermined (by your decision-making algorithm). Paraphrasing the familiar wordplay, choices that are not predetermined refer to decisions that cannot be made, while the real choices, that can actually be made, are predetermined.
I like this phrasing; it makes things very clear. Are you alluding to this quote, or something else?
Yes.
Of course! Since all the choices of all the actors are predetermined, so is the future. So what exactly would be the “purpose” of acting as if the future were not already determined and we could choose an optimising function based on the possible consequences of different actions?
Since the consequences are determined by your algorithm, whatever your algorithm does will actually happen. Thus, the algorithm can contemplate what the consequences of alternative choices would be and make the choice it likes most. The consideration of alternatives is part of the decision-making algorithm, which gives it the property of consistently picking goal-optimizing decisions. Only these goal-optimizing decisions actually get made, but the process of considering alternatives is how they get computed.
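A toy sketch of such an algorithm (Python; the world model and the utility numbers are invented purely for illustration):

```python
# A deterministic agent that 'considers' alternatives by simulating their
# consequences and picking the one it prefers. The output is fixed by the
# inputs, but considering the alternatives is how that output gets computed.

def predicted_consequence(action):
    # hypothetical world model mapping actions to predicted outcomes
    return {"order fettucini": "pleasant dinner",
            "order eggplant": "mediocre dinner",
            "skip dinner": "hunger"}[action]

def utility(outcome):
    # hypothetical preferences
    return {"pleasant dinner": 10, "mediocre dinner": 4, "hunger": -5}[outcome]

def choose(actions):
    # evaluate every alternative, then pick the goal-optimizing one
    return max(actions, key=lambda a: utility(predicted_consequence(a)))

print(choose(["order fettucini", "order eggplant", "skip dinner"]))
# -> order fettucini
```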
Sure. So consequentialism is the name for the process that happens in every programmed entity, making it useless to distinguish between two different approaches.
In a deterministic universe, the future is logically implied by the present, but you’re in the present. The future isn’t fated: if, counterfactually, you did something else, then the laws of physics would imply very different events as a consequence. And it isn’t predictable: even ignoring computational limits, if you make any error, even on an unmeasurable level, in guessing the current state, your prediction will quickly diverge from reality. It’s just logically consistent.
How could it happen? Each component of the system is programmed to react in a predetermined way to the inputs it receives from the rest of the system. The inputs are predetermined, as is the processing algorithm. How can you or I do anything that we have not been preprogrammed to do?
Consider an isolated system with no biological agents involved. It may contain preprogrammed computers. Would you or would you not expect the future evolution of the system to be completely determined? If you would expect its future to be completely determined, why would things change when the system, such as ours, contains biological agents? If you do not expect the future of the system to be completely determined, why not?
I said “counterfactual”. Let me use an archetypal example of a free-will hypothetical and query your response:
I’m off to the market, now—I’ll post the followup in a moment.
Now: I imagine most people would say that Alice would receive the fettucini and Alice’ the eggplant. I will proceed on this assumption.
Now suppose that Alice and Alice’ are switched at the moment they entered the restaurant. Neither Alice nor Alice’ notice any change. Nobody else notices any change, either. In fact, insofar as anyone in universe A (now containing Alice’) and universe A’ (now containing Alice) can tell, nothing has happened.
After the switch, Alice’ and Alice are seated, open their menus, and pick their orders. What dishes will Alice’ and Alice receive?
I’m missing the point of this hypothetical. The situation you described is impossible in a deterministic universe. Since we’re assuming A and A’ are identical at the beginning, what Alice and Alice’ order is determined from that initial state. The divergence has already occurred once the two Alices order different things: why does it matter what the waiter brings them?
I’m not sure exactly how these universes would work: it seems to be a dualistic one. Before the Alices order, A and A’ are physically identical, but the Alices have different “souls” that can somehow magically change the physical makeup of the universe in strangely predictable ways. The different nature of Alice and Alice’ has changed the way two identical sets of atoms move around.
If this applies to the waiter as well, we can’t predict what he’ll decide to bring Alice: for all we know he may turn into a leopard, because that’s his nature.
The requirement is not that there is no divergence, but that the divergence is small enough that no-one could notice the difference. Sure, if a superintelligent AI did a molecular-level scan five minutes before the hypothetical started it would be able to tell that there was a switch, but no such being was there.
And the point of the hypothetical is that the question “what if, counterfactually, Alice ordered the eggplant?” is meaningful—it corresponds to physically switching the molecular formation of Alice with that of Alice’ at the appropriate moment.
I understand now. Sorry; that wasn’t clear from the earlier post.
This seems like an intuition pump. You’re assuming there is a way to switch the molecular formation of Alice’s brain to make her order one dish, instead of another, but not cause any other changes in her. This seems unlikely to me. Messing with her brain like that may cause all kinds of changes we don’t know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn’t order eggplant). While it’s intuitively pleasing to think that there’s a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.
Also, suppose I ask “what if Alice ordered the linguine?” Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?
I know—I didn’t phrase it very well.
Yes, yes it is.
I’m not sure. My instinct is to try to minimize the amount the universes differ (maybe taking some sort of sample weighted by a decreasing function of the magnitude of the change), but I don’t have a coherent philosophy built around the construction of counterfactuals. My only point is that determinism doesn’t make counterfactuals automatically meaningless.
The elaborate hypothetical is the equivalent of asking: what if the programming of Alice had been altered in a minor way that nobody notices, so that she orders eggplant parmesan instead of the fettucini alfredo her earlier programming would have made her order? Since there is no agent external to the world that can do it, there is no possibility of that happening. Or it could mean that any minor changes from the predetermined program are possible in a deterministic universe as long as nobody notices them, which would imply an incompletely determined universe.
...
Ganapati, the counterfactual does not happen. That’s what “counterfactual” means—something which is contrary to fact.
However, the laws of nature in a deterministic universe are specified well enough to calculate the future from the present, and therefore should be specified well enough to calculate the future* from some modified present*, even if no such present* occurs. The question “what would happen if I added a glider here to this frame of a Conway’s Life game?” has a defined answer, even though no such glider will be present in the original world.
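A minimal sketch of that Life example (Python; the starting pattern and the added glider are arbitrary):

```python
from collections import Counter

def life_step(cells):
    """One step of Conway's Life; `cells` is a set of live (x, y) coordinates."""
    neighbour_counts = Counter((x + dx, y + dy)
                               for (x, y) in cells
                               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                               if (dx, dy) != (0, 0))
    return {c for c, n in neighbour_counts.items()
            if n == 3 or (n == 2 and c in cells)}

actual = {(1, 0), (1, 1), (1, 2)}                                    # a blinker
counterfactual = actual | {(5, 4), (6, 5), (4, 6), (5, 6), (6, 6)}   # plus a glider

print(life_step(actual))          # the factual next frame
print(life_step(counterfactual))  # the equally well-defined counterfactual next frame
```

The counterfactual frame never occurs in the actual run, but the same update rule assigns it a definite successor all the same.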
Why would you be interested in something that can’t occur in the real world?
In the “free will” case? Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.
What?
Not prove, implement. You are not rationalizing the best option as being the actual one; you are making it so. When you consider all those options, you don’t know which of them are contrary to fact and which are not. You never consider something you know to be counterfactual.
Yes, that’s a much better phrasing than mine.
(p.s. you realize that I am having an argument with Ganapati about the compatibility of determinism and free will in this thread, right?)
Actually you brought in the counterfactual argument to attempt to explain the significance (or “purpose”) of an approach called consequentialism (as opposed to others) in a determined universe.
Allow me the privilege of stating my own intentions.
You brought up the counterfactualism example right here, so I assumed it was in response to that post.
I’m sorry, do you have an objection to the reading of “counterfactual” elaborated in this thread?
Sorry for the delay in replying. No, I don’t have any objection to the reading of the counterfactual. However I fail to connect it to the question I posed.
In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, or beliefs of any entity have any more significance for the future than those of another, simply because they cannot alter it.
Determinism, like solipsism, is a logically consistent system of belief. It cannot be proven wrong any more than solipsism can be, since the only “evidence” disproving it, if any, lies with the entity believing it, not outside.
Do you feel that you are a purposeless entity whose actions and beliefs have no significance whatsoever for the future? If so, your feelings are very much consistent with your belief in determinism. If not, it may be time to take into consideration the evidence in the form of your feelings.
Thank you all for your time!
Wrong. If Alice orders the fettucini in world A, she gets fettucini, but if Alice’ orders eggplant in world A, she gets eggplant. The future is not fixed in advance—it is a function of the present, and your acts in the present create the future.
There’s an old Nozick quote that I found in Daniel Dennett’s Elbow Room: “No one has ever announced that because determinism is true thermostats do not control temperature.” Our actions and beliefs have exactly the same ontological significance as the switching and setting of the thermostat. Tell me in what sense a thermostat does not control the temperature.
Correction.
Ganapati is partially right. In a deterministic universe (DU), the initial conditions define all history from beginning to end, by definition. If it is predetermined that Alice will order fettucini, she will order fettucini. But it doesn’t mean that Alice must order fettucini. I’ll elaborate on that further.
No one inside a DU can precisely predict the future. Proof: suppose we can exactly predict the future; then either A) we can change it, thus proving the prediction was incorrect, or B) we can’t change it a bit. How can case B hold? It can’t. A prediction brings information about the future, and so it changes our actions. Let p be a prediction, and let F(p) be the correct prediction of the future given that we know prediction p. For case B to be possible, the function F must have a fixed point p’ = F(p’), but information from the future brings entropy, which causes future entropy to increase, which increases the prediction’s entropy, and so on. Thus, there cannot be a fixed point. QED.
Given the above, no one can be sure that his/her actions are predetermined to vanish. On the other hand, if one decides to abstain from acting, then it is more likely that he/she is predetermined to fail; thus his/her actions (if any) have less probability of affecting the future. On the third hand, if one stands up and wins, then and only then will one know that one was predetermined to win, not a second earlier.
If Alice cannot decide what she likes more, she cannot just say “Oh! I must eat fettucini. It is my fate.”; she hasn’t and cannot have such information in principle. She must decide for herself, determinism or not. And if an external observer (let’s call him god) comes down and says to Alice “It’s your fate to eat fettucini.” (thus effectively making the deterministic universe non-deterministic), no single physical law will force Alice to do it.
I’d like to dispute your usage of “predetermined” there: like “fated”, it implies an establishment in advance, rather than by events. A game of Agricola is predetermined to last 14 turns, even in a nondeterministic universe, because no change to gameplay at any point during the game will cause it to terminate before or after the 14th turn. The rules say 14, and that’s fixed in advance. (Factors outside the game may cause mistakes to be made or the game not to finish, but those are both different from the game lasting 13 or 15 turns.) On the opposite side, an arbitrary game of chess is not predetermined to last (as that one did) 24 turns, even in a deterministic universe, because a (counterfactual) change to gameplay could easily cause it to last fewer or more.
If one may determine without knowing Alice’s actions what dish she will be served (e.g. if the eggplant is spoiled), then she may be doomed to get that dish, but in that case the (deterministic or nondeterministic) causal chain leading to her dish does not pass through her decision. And that makes the difference.
I’m not sure that I sufficiently understand you. “Fated” implies that no matter what one does, one will end up as fate dictates, right? In other words: in all counterfactual universes one’s fate is the same. The predetermination I speak of is different. It is a property of a deterministic universe: all events are determined by the initial conditions only.
When Alice decides what she will order, she can construct in her mind a bunch of different universes, and predetermination doesn’t mean that in all those constructed universes she will get fettucini; predetermination means that only one constructed universe will be factual. As I argued in the previous post, Alice cannot know in advance which constructed universe is factual. Alice cannot know that she’s in universe A, where she’s predetermined to eat fettucini, or that she’s in universe B, where she’s to eat eggplant. And her decision process is an integral part of each of these universes.
Without her decision, universe A cannot be universe A.
So her decision is a crucial part of the causal chain.
Did I answer your question?
Edit: spellcheck.
I don’t like the connotations, but sure—that’s a mathematically consistent definition.
P.S. Welcome to Less Wrong! Besides posts linked from the “free will” Wiki page (particularly How An Algorithm Feels From Inside), you may be interested in browsing the various Sequences. The introductory sequence on Map and Territory is a good place to start.
Edit: You may also try browsing the backlinks from posts you like—that’s how I originally read through EY’s archive.
Thanks! I read the links and sequences.
Not in one day you didn’t.
I didn’t read them in one day, and I didn’t read all of them either.
I ‘stumbled upon’ this article on the night of June 1 (GMT + 5.30) and did a bit of research on the site to check if my question had been previously raised and answered. In the process I did end up reading a few articles and sequences.