A piece of neocortex consisting of approximate repeats of the same structure (made from the same genetic code) is not prior-less.
Now that we’ve established that our brains have built-in priors, why is it implausible that said priors are the result of evolution? In fact, it would be implausible for them not to be.
It’s not C. elegans. Which neuron groups end up learning a concept is individual-specific. And the genome is not a blueprint: there’s no short way to create a gene that would target a specific region, even by location. In humans there is no gene that controls the strength of a specific synapse anywhere in the neocortex, and if you wanted to genetically engineer an alteration to a specific synapse of your choice, you would have to set up an incredibly complex framework for expressing that gene in just that neuron and affecting just that synapse.
And if we compare humans to other animals, there’s also a far larger brain doing far more distinct things, without a massive increase in the number of genes.
There’s the root of the problem, really: the environment drives the evolution of genes, genes interact with neurobiology, and neurobiology gives rise to psychology. And “evolutionary psychologists” of the Tooby/Cosmides/Pinker kind skip the middle links in that chain (where a lot of high-quality but complicated science is done).
Now that we’ve established that our brains have built-in priors, why is it implausible that said priors are the result of evolution? In fact, it would be implausible for them not to be.
It’s not C. elegans. Which neuron groups end up learning a concept is individual-specific. And the genome is not a blueprint: there’s no short way to create a gene that would target a specific region, even by location.
So your claim is that each human’s priors are assigned not by genetics but randomly (according to some distribution?). I don’t even see how to phrase your position coherently.
In humans there is no gene that controls the strength of a specific synapse anywhere in the neocortex,
But there are genes that control how strong synapses are under what conditions, and there are genes that control those conditions in different parts of the neocortex.
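To make that concrete: one reading is that genes set the plasticity rule and its region-specific parameters, while experience sets the individual weights. A toy sketch of that division of labour (the region names and parameter values below are invented for illustration, not taken from any real data):

```python
import random

# Toy sketch: genes don't set individual synaptic weights; they set the *rule*
# by which weights change, with different parameters in different regions.
# Region names and parameter values below are invented for illustration.

GENE_SET_PLASTICITY = {
    "sensory_area":     {"learning_rate": 0.10, "decay": 0.010},
    "associative_area": {"learning_rate": 0.02, "decay": 0.001},
}

def hebbian_update(weight, pre, post, region):
    # "Cells that fire together wire together", with gene-set constants.
    p = GENE_SET_PLASTICITY[region]
    return weight + p["learning_rate"] * pre * post - p["decay"] * weight

# The weight any particular synapse ends up with depends on the activity it
# happens to see (experience), not on a gene addressing that synapse directly.
w = 0.0
for _ in range(100):
    pre, post = random.random(), random.random()
    w = hebbian_update(w, pre, post, "sensory_area")
print(round(w, 3))
```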
and if you wanted to genetically engineer an alteration to a specific synapse of your choice, you would have to set up an incredibly complex framework for expressing that gene in just that neuron and affecting just that synapse.
Or he could, you know, try tweaking a bunch of genes and see which ones produce effects close to the effect he wanted, then try more tweaks similar to those and see which get him closer.
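That is essentially an iterative tweak-and-keep search. A minimal sketch of the idea, with a made-up numeric ‘genome’ and a toy fitness function standing in for real genetics:

```python
import random

# Toy sketch of "tweak, keep what helps, repeat". A 'genome' is just a vector
# of numbers and the 'effect' is scored by a made-up fitness function; nothing
# here models real genetics, it only illustrates the iterative search described.

def fitness(genome, target):
    # Closer to the desired effect = higher fitness (negative squared error).
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def tweak(genome, rate=0.1):
    # Mutate a random subset of "genes" by a small amount.
    return [g + random.gauss(0, rate) if random.random() < 0.3 else g
            for g in genome]

def hill_climb(target, n_genes=10, steps=2000):
    current = [random.uniform(-1, 1) for _ in range(n_genes)]
    for _ in range(steps):
        candidate = tweak(current)
        # Keep a tweak only if it moves the effect closer to what was wanted.
        if fitness(candidate, target) > fitness(current, target):
            current = candidate
    return current

target_effect = [0.5] * 10
result = hill_climb(target_effect)
print(round(fitness(result, target_effect), 4))  # approaches 0 as the search converges
```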
There’s the root of the problem, really: the environment drives the evolution of genes, genes interact with neurobiology, and neurobiology gives rise to psychology.
Just because there are many intermediate steps in the causal diagram from genes to psychology doesn’t mean that much of psychology isn’t based on genes.
To use an analogy to a computer, I would argue that studying the properties of neurons will get you about as far in understanding psychology as studying the properties of circuits will get you in understanding software.
I don’t even see how to phrase your position coherently.
There are far more hypotheses that a human updates on when learning in early life than there are genes, so there are simply not enough genes to assign priors to hypotheses individually. A lot in the human body (minor blood vessels, the details of fingerprint patterns, etc.) is not set by genes; most of the fine detail isn’t individually controlled by genes.
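To put rough numbers on that counting argument (commonly cited orders of magnitude; only the size of the gap matters):

```python
# Back-of-envelope version of the counting argument. The figures are rough,
# commonly cited orders of magnitude, not precise measurements.
protein_coding_genes = 2e4     # roughly 20,000 in the human genome
genome_base_pairs    = 3e9     # roughly 3 billion bp (well under a gigabyte of raw data)
cortical_synapses    = 1.5e14  # roughly 10^14 synapses in the neocortex

print(cortical_synapses / protein_coding_genes)  # ~7.5e9 synapses per gene
print(cortical_synapses / genome_base_pairs)     # ~5e4 synapses per base pair
# Even treating every base pair as a free parameter, the genome cannot specify
# synapse-level (or hypothesis-level) priors one by one; at best it sets
# learning rules and regional biases.
```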
But there are genes that control how strong synapses are under what conditions, and there are genes that control those conditions in different parts of the neocortex.
The fidelity is very low; it’s not a blueprint. The thing is, you can’t make predictions about what would evolve just from what’s beneficial. It’d be beneficial for many mammals to have extra eyes in the back of the head, but not a single mammal has them, because the developmental process doesn’t provide for a simple mutation that yields such eyes.
To use an analogy to a computer, I would argue that studying the properties of neurons will get you about as far in understanding psychology as studying the properties of circuits will get you in understanding software.
Not when the guys who speculate about the software keep insisting that Microsoft Windows is installed in computers at the semiconductor chip factory… that’s probably the best analogy. Hardware is what determines how, and from where, the software can be loaded. For example, from hardware considerations we can see that RAM comes blank, and the hard drive comes with head-positioning tracks and some firmware, but not the OS.
There are far more hypotheses that a human updates on when learning in early life than there are genes, so there are simply not enough genes to assign priors to hypotheses individually.
Was anyone claiming they do?
The fidelity is very low; it’s not a blueprint. The thing is, you can’t make predictions about what would evolve just from what’s beneficial. It’d be beneficial for many mammals to have extra eyes in the back of the head, but not a single mammal has them, because the developmental process doesn’t provide for a simple mutation that yields such eyes.
Yes, and it would also be beneficial to correctly apply the logic of the Wason selection task to all problems, not just ethical ones, but because the relevant circuitry is in the ethics module, our brains only apply it to ethics.
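For readers who don’t have the task in front of them, here is a minimal sketch of its logical structure; the abstract letters-and-numbers card set below is just a typical textbook version, and the “social” rule in the final comment is likewise only an illustrative example:

```python
# Classic Wason selection task: each card has a letter on one side and a number
# on the other. Rule to check: "if a card shows a vowel, the other side is even."
# Which visible faces force you to turn the card over?

def must_turn(face: str) -> bool:
    """A card must be turned iff its visible face could be part of a
    counterexample: a vowel (the rule's antecedent P) or an odd number
    (the negation of the rule's consequent, not-Q)."""
    if face.isalpha():
        return face.lower() in "aeiou"   # P: the antecedent might apply
    return int(face) % 2 == 1            # not-Q: the consequent might fail

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # -> ['E', '7']

# The structurally identical "social" version ("if drinking beer, then over 18",
# with cards 'beer', 'coke', '25', '16') is solved far more often; that gap is
# the empirical puzzle both sides of this exchange are arguing about.
```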
Yes, and it would also be beneficial to correctly apply the logic of the Wason selection task to all problems, not just ethical ones, but because the relevant circuitry is in the ethics module, our brains only apply it to ethics.
Or do they really?
A large fraction of people get the problem right even in its more abstruse form, where it is harder to leverage prior experience to correctly understand the problem statement.
The evopsych you describe is actually a very fringe interpretation of how we get improved performance on the Wason selection task. Source. There just isn’t a reason to believe that the improved performance has anything to do with ethics rather than, for example, a general improvement in comprehension when the problem is described in a way that makes more sense. Something as simple as replacing ‘married’ with something patently irrelevant, such as hair colour, throws people off. A proposed explanation has been that most people just flip the cards that seem relevant to them, and that’s it; they aren’t actually solving a logical problem.
A large fraction of people get the problem right even in its more abstruse form, where it is harder to leverage prior experience to correctly understand the problem statement.
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
There just isn’t a reason to believe that the improved performance has anything to do with ethics rather than, for example, a general improvement in comprehension when the problem is described in a way that makes more sense.
Do you have an example of an experiment that distinguishes this from the ethics hypothesis?
Something as simple as replacing ‘married’ with something patently irrelevant, such as hair colour, throws people off.
That is also what the ethics theory would predict.
A proposed explanation has been that most people just flip the cards that seem relevant to them, and that’s it; they aren’t actually solving a logical problem.
That’s a non-explanation. It doesn’t explain why some cards and not others seem relevant. That just relates a black box in the subjects’ brains to the corresponding black box in the explainer’s brain, without saying anything about how either of them works.
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
They aren’t designed; they’re shaped by evolution, and evolution is driven by differential reproduction, which is larger for more widely applicable improvements.
That is also what the ethics theory would predict.
How so? Did you read my link? Two people are making a bet on a trait. If the results had gone the other way, would the ethics theory have predicted that too?
It doesn’t explain why some cards and not others seem relevant.
The relevance is complicated and highly dependent on context and on the subject’s prior experience.
Yes, our brain also has a general-purpose module, but it’s not as effective as the special-purpose ones on the problems they are designed for.
They aren’t designed; they’re shaped by evolution, and evolution is driven by differential reproduction, which is larger for more widely applicable improvements.
It’s also larger for improvements that do really well in situations that commonly come up.
How so? Did you read my link? Two people are making a bet on a trait. If the results had gone the other way, would the ethics theory have predicted that too?
Sorry, I didn’t. Now, taking a look at it, that is indeed not directly moral, although I suspect it might be implicitly moral, since there is a presumption in our culture that unmarried men are not to be trusted with children.
The relevance is complicated and highly dependent on context and on the subject’s prior experience.
Ok, so your theory is even less falsifiable.