The cute answer is actually more revealing than you think and might help resolve this conversation.
Astronomy lets you predict the way celestial objects move in the sky. You can trivially extend this to weak!Astrology, which just asserts that the movement of the celestial bodies has some kind of systematic causal impact on the way humans behave. However, you would quite reasonably take issue with strong!Astrology, which makes specific, detailed, wrong claims about the nature and extent of these interactions, as well as the general sloppy standards of the field of strong!Astrology.
Evolution lets you predict the way natural selection affects a population over time. You can trivially extend this to weak!Evpsych, which just asserts that evolution will have some impact in shaping the mental faculties of the population. But you can still disagree with some specific claims of evolutionary psychology, as well as the methodologies used to generate them, and the practices of the field as a whole.
I think that Benito thinks you’re saying weak!Evpsych is wrong (that evolution didn’t shape our minds at all), when you’re actually just critiquing aspects of strong!Evpsych—e.g. that evolutionary psychologists are too quick to generalize from WEIRD college students to the rest of humanity, and so on. At least, my usual kneejerk response to critics of evpsych is “what, you think evolution stops above the neck?”
It doesn’t even construct plausible stories about evolution. In the time it takes rather simple morphological changes to bone shapes to make only minor progress, we supposedly evolve whole new instincts, whose morphological complexity (in terms of wiring adjustments in the brain), if innate, would be comparable to entire new organs, if not higher.
Where evolutionary biology predicts that X won’t evolve (and thus doesn’t exist as an innate quality), evolutionary psychology claims X evolved from scratch and exists.
we supposedly evolve whole new instincts, whose morphological complexity (in terms of wiring adjustments in the brain), if innate, would be comparable to entire new organs, if not higher.
I meant: can you cite a specific example of an evolutionary psychologist proposing an evolved instinct “whose morphological complexity (in terms of wiring adjustments in the brain), if innate, would be comparable to entire new organs, if not higher”? The wikipedia article doesn’t seem to include any.
Looking at the criticisms: at best, some are valid criticisms of a few stupid evolutionary explanations that some people cite, e.g., the claim that homosexuality is adaptive. Most, however, are either simply incoherent, like the “Disjunction and grain problems” section; based on false premises, like the “ethnocentrism” section; or straw men, like the section on “rape”.
The synaptic structure of even parts of your brain is vastly more complex than the organisational structure of any other organ. You can draw diagrams of how various hepatic cells go together, and that’s all there is to it (caveats apply, though the rules are kind of simple). A diagram which tells you where the synapses go, so that you’d get e.g. a fight-or-flight response? Vastly more complex.
Now consider the evopsych innate module explanation for improved performance on Wason selection task when it’s specified verbally in terms of social rules, just to pick a specific and common example. (How the hell would such a module even interface to a bunch of learned language circuitry? That’s a question which would have to be answered first).
There’s a good overview of the clashes between evopsych modules and neuroscience: http://www.niu.edu/phil/~buller/publications/_pdf/epmdn.pdf
Note, by the way, that evopsych proposes a very specific explanation—modules performing very specific tasks—not something like e.g. higher general arousal improving general cognitive performance depending on the context, improved clarity, greater involvement of mental visualization, or the like. Edit: that is, without trying to explain it in terms of the use or evolution of existing traits (such as specialized mental visualization), they skip straight to postulating a new module.
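For readers who haven’t seen the task mentioned above, here is a minimal sketch of what “improved performance on the social framing” refers to. The rule contents below are standard textbook examples, not taken from this thread; the point is that the normatively correct choice is identical in both framings, yet subjects reliably do better on the social one.

```python
# Toy illustration of the Wason selection task (no model of the brain implied):
# to test "if P then Q", the only informative cards are the P card and the
# not-Q card, regardless of how the rule is framed.

def cards_to_flip(cards, shows_p, shows_not_q):
    """Return the cards that can falsify 'if P then Q'."""
    return [c for c in cards if shows_p(c) or shows_not_q(c)]

# Abstract framing: "if a card has a vowel on one side, it has an even number
# on the other".
abstract = cards_to_flip(
    ["E", "K", "4", "7"],
    shows_p=lambda c: c in "AEIOU",
    shows_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,
)

# Social framing: "if a person drinks beer, they must be over 18".
social = cards_to_flip(
    [("drink", "beer"), ("drink", "coke"), ("age", 25), ("age", 16)],
    shows_p=lambda c: c == ("drink", "beer"),
    shows_not_q=lambda c: c[0] == "age" and c[1] < 18,
)

print(abstract)  # ['E', '7']
print(social)    # [('drink', 'beer'), ('age', 16)]
```

The logic being tested is the same function call both times; only the surface content differs.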
(How the hell would such a module even interface to a bunch of learned language circuitry?).
This rhetorical question suggests confusion. “How?” Don’t much care, it probably involves synapses. This is nothing remotely like the intended conclusion of “it is implausible that”.
Humans ‘learn’ how to walk. We consistently ‘learn’ how to speak and hear language, except when that is not viable, in which case, unless the disability is extreme, we ‘learn’ other ways to communicate. We learn all sorts of emotional skills and habits, we learn how to model 3D physics with gravity. We learn which signals to send to our muscles to produce which results. These all involve various degrees of learning and predisposition, and each of those modules interfaces with the other ‘learned’ circuitry without difficulty. Or, rather, with difficulty that was ironed out over a couple of billion years.
Well, scientists care about that sort of thing; pseudoscientists don’t. The issue is that the learned circuitry is fairly different across individuals, and there is no known or plausible mechanism by which a mutation could make such highly specific modifications to the learned circuitry (as required for the evopsych explanation of improved performance on the Wason selection task concerning people, to give a specific example).
There’s simply no known way for a gene to connect learned concepts so specifically as to give rise to improved performance on the Wason selection task when it discusses social interactions, but not otherwise.
I’m not so much disagreeing as giving a different perspective. Even in utero, most organs already fulfill their intended function. It takes little time for kidneys to produce urine, or for a proto-heart to beat. There are exceptions, such as certain liver functions not being available until late in the pregnancy or postnatally. The point is that it doesn’t take all that much information to describe which layer of cells goes where. It’s an astounding process: cells inducing other cells to specialize in certain ways, and gradually creating 3D structures by doing so.
However, contrast that to how much longer it takes for a brain to learn simple functions. The exact 3D structure of the brain cannot be stored in DNA in the same way that the structure of a glomerulus is stored. Only a framework is provided. The actual connectome is only created in response to information from the environment, external data stores becoming necessary because the DNA carrier cannot handle that much information. Of course, there are upsides, namely that by reacting to outside information, the resulting structure is better adapted to its specific environment than if it relied only on DNA information (which is much less flexible).
Human babies in their first months of life can be considered to be in a final, external stage of pregnancy. The cost of birthing and providing for a baby which is astoundingly helpless (“altricial”) compared to other mammals at that stage of life is enormous. Of course, the ultimate cognitive power of their brains outweighs the investment, but I wouldn’t exactly call a years-long process which awkwardly circumvents various obstacles (e.g. a bigger and better brain wouldn’t fit through the female pelvis and would exceed the mother’s nutritional capacity) free of difficulty. But I get your meaning.
Indeed. Thing is, we actually know a lot about how it is built. We see genetically predetermined specialization—the hippocampus is very structurally distinct from the neocortex, for example.
We also see learned specialization: initially morphologically homologous areas that acquire increased specialization through synaptic pruning, which is known to be driven by the specific electrical signalling within the brain rather than by specific genetic instructions targeting those synapses. We see universality (within those brain regions), in the sense that if one brain region is missing from an early enough time, other brain regions will learn to perform the same function (thus proving, at least, that learning can account for said functionality).
What we don’t see is innate specialization in those regions, as proposed by Tooby/Cosmides/Pinker (‘hundreds or thousands of distinct mental organs’). They’re somehow below any detection, and work absolutely identically to learning.
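The activity-driven pruning described above can be caricatured in a few lines. This is a deliberately crude sketch with made-up numbers and no biological fidelity claimed: every synapse starts from the same “innate” state, and which ones survive is decided entirely by the activity they happened to carry.

```python
import random

# Toy sketch of activity-dependent pruning: identical starting weights,
# strengthening driven by (here, random) activity, and pruning of whatever
# ends up weakest. No gene addresses any particular synapse.
rng = random.Random(42)
weights = [1.0] * 20                      # same "genetic" starting point
for _ in range(200):
    for i in rng.sample(range(20), 5):    # co-active subset this round
        weights[i] += 0.05                # use strengthens a synapse
threshold = sorted(weights)[len(weights) // 2]
survivors = [i for i, w in enumerate(weights) if w > threshold]
print(0 < len(survivors) < 20)  # True — activity, not genetics, picked them
```

Run it with a different activity history (a different seed) and a different set of synapses survives, even though the “genome” (the program) is unchanged.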
They’re somehow below any detection, and work absolutely identically to learning.
And what do you think learning looks like? You seem to be envisioning some kind of blank-slate, priorless learning. The problem is that learning without priors is logically incoherent. Now that we’ve established that our brains have built-in priors, why is it implausible that said priors were the results of evolution? In fact, it would be implausible for them not to be.
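The “learning without priors is incoherent” point has a standard Bayesian rendering. A minimal sketch with arbitrary numbers: the posterior depends on the prior as much as on the data, and a zero prior can never be updated no matter how strong the evidence.

```python
# Minimal Bayesian sketch: learning is prior + evidence, never evidence alone.
def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | data) from a prior P(H) and the likelihoods of the observed data."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

# Same evidence each time (data 4x as likely under H), different priors:
print(posterior(0.5, 0.8, 0.2))   # 0.8
print(posterior(0.1, 0.8, 0.2))   # ≈ 0.308
print(posterior(0.0, 0.8, 0.2))   # 0.0 — a zero prior is immovable
```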
A piece of neocortex consisting of approximate repeats of the same structure (made from the same genetic code) is not prior-less.
Now that we’ve established that our brains have built-in priors, why is it implausible that said priors were the results of evolution? In fact, it would be implausible for them not to be.
It’s not C. elegans. It is individual-specific which neuron groups end up learning a concept. And the genome is not a blueprint: there’s no short way to create a gene that would target a specific region even on a location basis. In humans there’s no gene controlling the strength of one specific synapse anywhere in the neocortex, and if you wanted to genetically engineer an alteration to a specific synapse of your choice you would have to set up an incredibly complex framework for expressing that gene in just that neuron and affecting just that synapse.
And if we are to compare humans to other animals, there’s also a far larger brain that does far more distinct things, without a massive increase in the number of genes.
There’s the root of the problem, really: environment drives evolution of genes, genes interact with neurobiology, neurobiology gives rise to psychology. And “evolutionary psychologists”, of the Tooby/Cosmides/Pinker kind, skip the middle links in the chain (where a lot of high quality but complicated science is done).
Now that we’ve established that our brains have built-in priors, why is it implausible that said priors were the results of evolution? In fact, it would be implausible for them not to be.
It’s not C. elegans. It is individual-specific which neuron groups end up learning a concept. And the genome is not a blueprint: there’s no short way to create a gene that would target a specific region even on a location basis.
So your claim is that each human’s priors are assigned not by genetics, but randomly (according to some distribution)? I don’t even see how to phrase your position coherently.
In humans there’s no gene controlling the strength of one specific synapse anywhere in the neocortex,
But there are genes that control how strong synapses are under what conditions, and there are genes that control the conditions in different parts of the neocortex.
and if you wanted to genetically engineer an alteration to a specific synapse of your choice you would have to set up an incredibly complex framework for expressing that gene in just that neuron and affecting just that synapse.
Or he could, you know, try tweaking a bunch of genes and see which ones produced effects close to the effect he wanted. Then try more similar tweaks to those and see which get him closer.
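The blind-tweaking search described here can be sketched in a few lines. This is an abstract toy: the “genes” are just numbers and the “desired effect” is an arbitrary target vector; nothing below models real genetics.

```python
import random

# Toy greedy search: mutate one "gene" at a time, keep whatever lands closer
# to the desired effect, repeat. No mutation ever "targets" anything.
random.seed(0)

TARGET = [0.9, 0.1, 0.5, 0.7]        # desired "effect" (arbitrary)
genome = [0.0, 0.0, 0.0, 0.0]        # starting "genes"

def error(g):
    return sum((a - b) ** 2 for a, b in zip(g, TARGET))

for _ in range(2000):
    candidate = genome.copy()
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-0.1, 0.1)   # blind tweak to one gene
    if error(candidate) < error(genome):        # keep it only if closer
        genome = candidate

print(error(genome) < 0.1)  # True — greedy tweaking converges here
```

Whether real evolution can reach a given effect this way is exactly what the rest of the thread disputes; the sketch only shows what the proposed search procedure is.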
There’s the root of the problem, really: environment drives evolution of genes, genes interact with neurobiology, neurobiology gives rise to psychology.
Just because there are many intermediate steps in the causal diagram from genes to psychology doesn’t mean that much of psychology isn’t based on genes.
To use an analogy to a computer, I would argue that studying the properties of neurons will get you about as far in understanding psychology as studying the properties of circuits will get you in understanding software.
I don’t even see how to phrase your position coherently.
There are far more hypotheses which a human updates on when learning in early life than there are genes, so there are simply not enough genes to assign priors to hypotheses individually. A lot in the human body (minor blood vessels, details of the fingerprint patterns, etc.) is not set by genes; most of the fine detail isn’t individually controlled by genes.
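The counting argument can be made explicit with rough, commonly cited order-of-magnitude figures. Both numbers below are approximations, not taken from this thread.

```python
# Back-of-envelope version of the counting argument; figures are rough
# order-of-magnitude estimates, not precise measurements.
protein_coding_genes = 2e4   # ~20,000 protein-coding genes in humans
synapses = 1e14              # ~100 trillion synapses in the adult brain
print(synapses / protein_coding_genes)  # 5000000000.0 — ~5e9 synapses per gene
```

Whatever the exact figures, the ratio is many orders of magnitude too large for per-synapse (or per-hypothesis) genetic addressing.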
But there are genes that control how strong synapses are under what conditions, and there are genes that control the conditions in different parts of the neocortex.
The fidelity is very low; it’s not a blueprint. The thing is, you can’t make predictions about what would evolve just from what’s beneficial. It’d be beneficial for many mammals to have extra eyes in the back, but not a single mammal has those, because the developmental process doesn’t provide for a simple mutation that yields such eyes.
To use an analogy to a computer, I would argue that studying the properties of neurons will get you about as far in understanding psychology as studying the properties of circuits will get you in understanding software.
Not when the guys who speculate about the software keep insisting that Microsoft Windows is added to the computers at the semiconductor chip factory… that’s probably the best analogy. Hardware is what determines how and where the software can be loaded from. For example, from hardware considerations we can see that RAM comes in blank, and the hard drive comes in with head-positioning tracks and some firmware, but not the OS.
There are far more hypotheses which a human updates on when learning in early life than there are genes, so there are simply not enough genes to assign priors to hypotheses individually.
Was anyone claiming they do?
The fidelity is very low; it’s not a blueprint. The thing is, you can’t make predictions about what would evolve just from what’s beneficial. It’d be beneficial for many mammals to have extra eyes in the back, but not a single mammal has those, because the developmental process doesn’t provide for a simple mutation that yields such eyes.
Yes, and it would also be beneficial to correctly apply the Wason selection principle to all problems, not just ethical ones, but because the relevant circuitry is in the ethics module, our brains only apply it to ethics.
Yes, and it would also be beneficial to correctly apply the Wason selection principle to all problems, not just ethical ones, but because the relevant circuitry is in the ethics module, our brains only apply it to ethics.
Or do they really?
A large fraction of people get the problem right even in its more abstruse form, where it is harder to leverage experience to correctly understand the problem statement.
The evopsych you describe is actually a very fringe interpretation of how we get improved performance on the Wason selection task. Source. There just isn’t a reason to believe that the improved performance has anything to do with ethics, rather than, for example, a general improvement in comprehension when problems are described in ways that make more sense. Something as simple as replacing ‘married’ with something patently irrelevant, such as hair colour, throws people off. A proposed explanation has been that most people just flip the cards that seem relevant to them, and that’s it; they aren’t actually solving a logical problem.
A large fraction of people get the problem right even in its more abstruse form, where it is harder to leverage experience to correctly understand the problem statement.
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
There just isn’t a reason to believe that the improved performance has anything to do with ethics, rather than, for example, a general improvement in comprehension when problems are described in ways that make more sense.
Do you have an example of an experiment that distinguishes this from the ethics hypothesis?
Something as simple as replacing ‘married’ with something patently irrelevant, such as hair colour, throws people off.
That is also what the ethics theory would predict.
A proposed explanation has been that most people just flip the cards that seem relevant to them, and that’s it; they aren’t actually solving a logical problem.
That’s a non-explanation. It doesn’t explain why some cards and not others seem relevant. It just relates a black box in the subjects’ brains to the corresponding black box in the explainer’s brain without saying anything about how either of them works.
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
They aren’t designed; they’re shaped by evolution, and evolution is driven by differential reproduction, which is larger for more widely applicable improvements.
That is also what the ethics theory would predict.
How so? Did you read my link? Two people are making a bet on a trait. If the results had gone the other way, would the ethics theory predict that too?
It doesn’t explain why some cards and not others seem relevant.
The relevance is complicated and highly dependent on context and prior experience of the subject.
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
They aren’t designed; they’re shaped by evolution, and evolution is driven by differential reproduction, which is larger for more widely applicable improvements.
It’s also larger for improvements that do really well in situations that commonly come up.
How so? Did you read my link? Two people are making a bet on a trait. If the results had gone the other way, would the ethics theory predict that too?
Sorry, I didn’t. Now taking a look at it, that is indeed not directly moral. Although I suspect it might be implicitly moral since there is a presumption in our culture that unmarried men are not to be trusted with children.
Now consider the evopsych innate module explanation for improved performance on Wason selection task when it’s specified verbally in terms of social rules, just to pick a specific and common example. (How the hell would such a module even interface to a bunch of learned language circuitry? That’s a question which would have to be answered first).
The same way the hunger module interfaces with the learned language circuitry when someone tells you there is cake in the fridge.
Also, the “Wason module”, if you want to call it that, is a submodule of the ethics/social-rules module, which is why it only gets involved in social-rules-type problems. Are you trying to argue that it’s implausible that the social-rules module interfaces with learned language circuitry?
A diagram which tells you where the synapses go, so that you’d get e.g. a fight-or-flight response? Vastly more complex.
Type error. The complexity of the details of a specific phenotype is not the same as the complexity of the adaptation. This is especially the case when considering the organ that is explicitly designed to adapt to complexity via bulk application of an adaptive neural algorithm. The diagram of synapses involved in a given process will be completely different between genetically identical individuals.
Analogy: I don’t know whether there is more complexity in apples or watermelons. I do know that a lot more can be said about an individual watermelon than about an individual apple, if I represent enough detail.
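The point can be sketched with a toy “developmental program” (illustrative only; the growth rule is invented). The program, the analogue of the genome, is short and identical across runs; the detailed wiring it grows is reproducible for a fixed noise source, but differs between “twins” developing under different noise.

```python
import random

# Toy developmental rule: each new "neuron" connects to two random earlier
# ones. The rule is tiny and fixed; the resulting diagram is detailed and
# noise-dependent. No claim of biological fidelity.
def grow_wiring(seed, n=50):
    rng = random.Random(seed)
    return {i: tuple(sorted(rng.sample(range(i), 2))) for i in range(2, n)}

twin_a = grow_wiring(seed=1)   # "genetically identical" twins,
twin_b = grow_wiring(seed=2)   # different developmental noise

print(twin_a == grow_wiring(seed=1))  # True — same rule + same noise, same diagram
print(sum(twin_a[i] != twin_b[i] for i in twin_a))  # number of connections that differ
```

A lot can be said about each twin’s wiring diagram, but the adaptation that produced both is the same few lines.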
A diagram which tells you where the synapses go, so that you’d get e.g. a fight-or-flight response? Vastly more complex.
I don’t think it weakens your point (given a charitable reading), but using ‘fight-or-flight response’ as the illustration of how the brain is more complex than other organs has difficulties, given just how many of those other organs are involved in the response. Especially when a diagram “so that you’d get e.g. a fight-or-flight response” could plausibly be an electric wire stabbed into either the pituitary or adrenal glands.
How, precisely, is astrology an “extension of a good theory”?
There is a cute answer—that movements of celestial bodies (e.g. the Sun and the Moon) certainly affect people’s lives and fates.
And there is a historical answer—that for centuries astronomy and astrology were, basically, inseparable.
Does it? I don’t think evopsych predicts anything; I think it only constructs plausible stories after the fact.
No, I’m not saying that, it would be pretty silly.
No, and not only that. I am not critiquing certain aspects; I’m critiquing the whole field for failing the usual criteria of a science.
[citation please]
Read http://en.wikipedia.org/wiki/Criticism_of_evolutionary_psychology , or go explain massive modularity to almost any neurobiologist and see what they say about it.
Note, by the way, that evopsych proposes a very specific explanation—modules performing very specific tasks—not something like e.g. higher general arousal improving general cognitive performance depending on the context, improved clarity, greater involvement of mental visualization, or the like. edit: that’s is, without trying to explain it in terms of use or evolution of existing traits (specialized mental visualization) they skip to postulating a new module.
This rhetorical question suggests confusion. “How?” Don’t much care, it probably involves synapses. This is nothing remotely like the intended conclusion of “it is implausible that”.
Humans ‘learn’ how to walk. We consistently ‘learn’ how to speak and hear language, except when that is not viable in which case, unless the disability is extreme, we ‘learn’ other ways to communicate. We learn all sorts of emotional skills and habits, we learn how to model 3d physics with gravity. We learn which signals to send to our muscles to produce which results. These all have various degrees of learning and predisposition and each of those modules interface with the other ‘learned’ circuitry without difficulty. Or, rather, with difficulty that was ironed out over a couple of billion years.
Well, scientists care about that sort of things, pseudoscientists don’t. The issue is that the learned circuitry is fairly different across individuals, and there is no known or plausible mechanism by which a mutation could make such highly specific modifications to the learned circuitry (as required for evopsych explanation of improved performance at Wason selection task concerning people, to give a specific example).
There’s simply no known way how a gene would connect learned concepts in a very specific way as to give rise to improved performance on Wason selection task when it is discussing social interactions, but not otherwise.
I’m not so much disagreeing as giving a different perspective. Even in utero, most organs already fulfill their intended function. It takes little time for kidneys to produce urine, or for a proto-heart to beat. There are exceptions, such as certain liver functions not being available until late into the pregnancy or post natal. The point is that it doesn’t take all that much information to describe which layer of cells goes where. It’s an astounding process, cells inducing other cells to specialize in certain ways, and gradually creating 3D structures by doing so.
However, contrast that to how much longer it takes for a brain to learn simple functions. The exact 3D structure of the brain cannot be stored in DNA in the same way that the structure of a glomerulus is stored. Only a framework is provided. The actual connectome is only created in response to information from the environment, external data stores becoming necessary because the DNA carrier cannot handle that much information. Of course, there are upsides, namely that by reacting to outside information, the resulting structure is better adapted to its specific environment than if it relied only on DNA information (which is much less flexible).
Human babies in their first months of live can be considered to be in a final, external stage of pregnancy. The cost of birthing and providing for a baby which is astoundingly helpless (“altricial”) compared to other mammals at that stage of life is enourmous. Of course, the ultimate cognitive power of their brains outweighs the investment, but I wouldn’t exactly call a years-long process which awkwardly circumvents various obstacles (e.g. a bigger and better brain wouldn’t fit through the female pelvis and would exceed the mother’s nutritional capacity) to be without difficulty. But I get your meaning.
Indeed. Thing is, we actually know a lot about how it is built. We see genetically predetermined specialization—hippocampus is very structurally distinct from the neocortex, for example.
We also see learned specialization: initially morphologically homologous areas that acquire increased specialization through synaptic pruning, which is known to be driven by the specific electrical signalling within the brain rather than specific genetic instructions targeting those synapses. We see universality (within those brain regions), in the sense that if one brain region is missing from an early enough time, other brain regions will learn to perform same function (thus proving, at least, that learning can account for said functionality).
What we don’t see is innate specialization in those regions, as proposed by Tooby/Cosmides/Pinker (‘hundreds or thousands distinct mental organs’). They’re somehow below any detection, and work absolutely identical to what learning works like.
And what do you think learning looks like? You seem to be envisioning some kind of blank slate priorless learning. The problem is that learning without priors is logically incoherent. Now that we’ve established that our brains have built in priors, why is it implausible that it said priors were the results of evolution? In fact, it would be implausible for them not to be.
A piece of neocortex consisting of approximate repeats of same structure (made from same genetic code) is not prior-less.
It’s not C. Elegans. It is individual-specific which neuron groups end up learning a concept. And genome is not a blueprint, there’s no short way to create a gene that would target a specific region even on the location basis. In humans there’s no gene controlling specifically the strength of a specific synapse any place in neocortex, and if you wanted to genetically engineer an alteration to a specific synapse of your choice you would have to set up an incredibly complex framework for expressing that gene in just that specific neuron and affecting that specific synapse.
And if we are to compare humans to animals, there’s also a far larger brain that does far more distinct things, without a massive increase in the number of genes.
There’s the root of the problem, really: environment drives evolution of genes, genes interact with neurobiology, neurobiology gives rise to psychology. And “evolutionary psychologists”, of the Tooby/Cosmides/Pinker kind, skip the middle links in the chain (where a lot of high quality but complicated science is done).
So your claim is that each human’s priors are assigned not by genetics, but randomly? (according to some distribution?). I don’t even see how to phrase your position coherently.
But there are genes that control how strong synapses are under what conditions, and there are genes that control the conditions in different parts of the neocotrex.
Or he could, you know, try tweaking a bunch of genes and see which ones produced effects close to the effect he wanted. Then try more similar tweaks to those and see which get him closer.
Just because there are many intermediate steps in the causal diagram from genes to psychology doesn’t mean that much of psychology isn’t based on genes.
To use an analogy to a computer, I would argue that studying the properties of neurons will get you about as far in understanding psychology as studying the properties of circuits will get you in understanding software.
There’s far more hypotheses which a human updates on when learning in the early life than there are genes, so there’s simply not enough genes to address priors to hypotheses individually. A lot in the human body (minor blood vessels, details in the fingerprint patterns, etc) is not set by genes—most of the fine detail isn’t individually controlled by genes.
The fidelity is very low; it’s not a blueprint. The thing is, you can’t predict what will evolve just from what would be beneficial. It would be beneficial for many mammals to have extra eyes in the back, but not a single mammal has them, because the developmental process doesn’t provide a simple mutation that yields such eyes.
Not when the guys who speculate about the software keep insisting that Microsoft Windows is added to the computers at the semiconductor chip factory… that’s probably the best analogy. Hardware determines how and where the software can be loaded from. For example, from hardware considerations we can see that RAM comes in blank, and a hard drive comes in with head-positioning tracks and some firmware, but not the OS.
Was anyone claiming they do?
Yes, and it would also be beneficial to reason correctly on the Wason selection task for all problems, not just ethical ones, but because the relevant circuitry is in the ethics module, our brains only apply it to ethics.
Or do they really?
A large fraction of people get the problem right even in its more abstruse forms, where it is harder to leverage experience to correctly understand the problem statement.
The evopsych you describe is actually a very fringe interpretation of how we get improved performance on the Wason selection task. Source. There just isn’t a reason to believe that the improved performance has anything to do with ethics rather than, for example, a general improvement in comprehension when the problem is described in a way that makes more sense. Something as simple as replacing ‘married’ with something patently irrelevant, such as hair colour, throws people off. One proposed explanation has been that most people just flip the cards that seem relevant to them, and that’s it; they aren’t actually solving a logical problem.
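For reference, the abstract version of the task under discussion: given the rule “if a card has a vowel on one side, it has an even number on the other”, only the vowel card and the odd-number card can falsify the rule. A minimal sketch using the standard E/K/4/7 illustration (the card values are the textbook example, not from this thread):

```python
def must_flip(face):
    """A card needs flipping only if its hidden side could violate
    'if vowel then even': a vowel might hide an odd number, and an
    odd number might hide a vowel. Consonants and evens are safe."""
    if face.isalpha():
        return face.upper() in "AEIOU"
    return int(face) % 2 == 1

cards = ["E", "K", "4", "7"]
to_flip = [c for c in cards if must_flip(c)]   # the logically correct picks
```

The classic finding is that many subjects pick ‘E’ and ‘4’ instead; the thread is arguing about *why* performance improves when the same logical structure is framed as a social rule.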
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
Do you have an example of an experiment that distinguishes this from the ethics hypothesis?
That is also what the ethics theory would predict.
That’s a non-explanation. It doesn’t explain why some cards and not others seem relevant. It just relates a black box in the subjects’ brains to the corresponding black box in the explainer’s brain without saying anything about how either of them works.
They aren’t designed, they’re shaped by evolution, and evolution is driven by differential reproduction, which is larger for more widely applicable improvements.
How so? Did you read my link? Two people are making a bet on a trait. If the results had gone the other way, would the ethics theory have predicted that too?
The relevance is complicated and highly dependent on context and prior experience of the subject.
It’s also larger for improvements that do really well in situations that commonly come up.
Sorry, I didn’t. Taking a look at it now: that is indeed not directly moral, although I suspect it might be implicitly moral, since there is a presumption in our culture that unmarried men are not to be trusted with children.
Ok, so your theory is even less falsifiable.
The same way the hunger module interfaces with the learned language circuitry when someone tells you there is cake in the fridge.
Also, the “Wason module”, if you want to call it that, is a submodule of the ethics/social-rules module, which is why it only gets involved on social-rules-type problems. Are you trying to argue that it’s implausible that the social rules module interfaces with learned language circuitry?
There’s no specific “eat cake” module there, it’s learned that cake reduces hunger, that’s the whole point.
These are two quite distinct claims. Only one of them is insane.
No, he is arguing that the concept of very specific modules/massive modularity is implausible.
Type error. The complexity of the details of a specific phenotype is not the same as the complexity of the adaptation. This is especially the case for the organ that is explicitly designed to adapt to complexity via bulk application of an adaptive neural algorithm. The diagram of synapses involved in a given process will be completely different between genetically identical individuals.
Analogy: I don’t know whether there is more complexity in apples or watermelons. I do know that a lot more can be said about an individual watermelon than about an individual apple, if I represent enough detail.
I don’t think it weakens your point (given a charitable reading), but using the fight-or-flight response to illustrate that the brain is more complex than other organs has difficulties, given just how many of those other organs are involved in the response. Especially when a diagram “so that you’d get e.g. a fight-or-flight response” could plausibly be an electric wire stabbed into either the pituitary or adrenal gland.