It’s all well and good to say that we (neuroscientists) should endeavour to understand consciousness better before proceeding to the creation of an AI, but how realistic do you really believe this approach to be? Mike Vassar pointed out that once we have technology advanced enough to emulate a significant number of neurons in a brain circuit, we’d also have technology good enough to create an AGI. If you’re arguing for some kind of ‘quantum tensor factor,’ and need quantum-level emulations of the brain for consciousness, AGIs will have been created long before we’ve even put a dent in identifying the ineffable essence of consciousness. This is not to say you are wrong, just that what you ask is impossible.
I am more optimistic than you are about science achieving the sort of progress I called for, in time to be relevant. For one thing, this is largely about basic concepts. One or two reconceptualizations, as big as what relativity did to time, and the topic of consciousness will look completely different. That conceptual progress is a matter of someone achieving the right insights and communicating them. (Evidently my little essay on quantum monadology isn’t the turning point, or my LW karma wouldn’t be sinking the way it is...) The interactions between progress in neuroscience, classical computing, quantum computing, and programming are complicated, but when it comes to solving the ontological problem of consciousness in time for the Singularity, I’d say the main factor is the rate of conceptual progress regarding consciousness, and that’s internal to the field of consciousness studies.
More engaging, less whining, please.
Hey, it was a joke.
To help readers distinguish between self-deprecating jokes and whining, the Internet has provided us with a palette of emoticons. I recommend ";-)" for this particular scenario.
Well, it may be that once we actually know that much more about the brain’s wiring, that additional knowledge will be enough to help untangle some of the mysteriousness of consciousness.