I can’t presume to answer for Eliezer, but I don’t think he’s yet claimed to know how the brain works. He’s also paid considerable attention to the nonsensical nature of some attempts to say that we might “already” know how, i.e. “emergence”, “complexity”, and other non-explanations. I’d go so far as to say that it follows directly from the fact that we can’t build a brain from first principles that we don’t really understand the ones currently in circulation.
That said, it would be a serious defiance of all precedent if brains somehow had a magical, non-reducible quality by which they refused to comply with empirical observation. It’s true that the past success of such study can’t reliably predict future trends. By the same logic, however, we couldn’t expect gravity to continue in the future either, since past trends and consistency are of a different substance than future ones. Until gravity and reductionism actually do give out, we can say reasonably well that gravity is likely to continue and things are likely to be explicable. Following this line of reasoning, that the past may not predict the future at all, would kill any plotted course of action relying upon gravity or causality equally well, so why apply it only to cognitive science?
As pertains to brains, we have reasonable inferences that the mind is strictly anchored in a physical substrate. Among the oldest I’m aware of is Heraclitus’ observation that hitting someone in the head causes stupor, confusion, etc., so the mind probably resides there. More modern versions include research into brain lesions, neurotransmitters, psychoactive drugs, and the like, if you prefer. The only way I can imagine to actually rule out a purely “physical” brain, especially against the weight of current evidence, would be if we could finally map the brain to perfection, watch all the computation it’s carrying out, understand it all, and still demonstrate that there’s a mysterious magic term in the input or output that definitely comes from nowhere at all. It sounds ridiculous spelled out this way, but that’s essentially what postulating “non-reducibility” comes down to: that while monitoring an entire brain physically, you could actually watch things come out of nowhere. Certain physicists would find this rather disturbing, for one.
Additionally, “God did it” and “Energy is conserved” are not isomorphic. One explains nothing; it does not provide any way to predict future events, even assuming causality and a fairly stable universe. The other does provide a way to predict future events, assuming causality and a fairly stable universe. Again, if you want to chuck out causality and a fairly stable universe, I have to wonder why you bother finishing sentences, seeing as sound and information propagation are bound to stop working at any time. If we can agree that causality and stability are to remain in play, however, it follows that certain models will correspond to predictable reality and others will not. Going against this doesn’t just undermine AI or cognitive science; it undermines empiricism in general, which is funny, because empiricism has a pretty good track record in spite of it.