Shawn—firstly, congratulations on your BROPA research and publication; it is likely to have high future impact.
Universal computation necessarily implies/requires multiple realizability of causal systems, and thus functionalism: any universal computer can run the program implementing a given causal structure, so that structure cannot be tied to any one physical substrate.
Part of the confusion stems from the use of the term ‘consciousness’ and all of its associated baggage. So let us taboo the word and use ‘self-awareness’ instead. Self-awareness conveys most of the same meaning, but without the connotations (just as we may prefer the term ‘mind’ over ‘soul’).
Self-awareness is a specific key information-processing capability that some intelligent systems/agents possess. Some animals (dolphins, monkeys, humans, etc.) demonstrate general self-awareness through their ability to recognize themselves in mirrors. Self-recognition in a mirror test requires a specific ability to construct a predictive model of oneself as an object embedded in the world.
The other day while on a walk I came upon a songbird that was repeatedly attacking a car (with a short hop ramming maneuver). I was puzzled until I realized that the bird was specifically attacking the side view mirror. I watched it for about 10 minutes and it just did the same attack over and over again. The next day I saw it attacking a different car in about the same location.
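To make that concrete, here is a minimal sketch of one way a predictive self-model could support mirror self-recognition, assuming a simple motor-visual contingency account (the function names and numbers are illustrative inventions, not a published model): an agent that checks how tightly an observed object’s movements track its own motor commands can tell its reflection apart from another animal. Presumably this is roughly the machinery the songbird lacks.

```python
import random

def observe(command, is_self, noise=0.1):
    """Observed movement of one object for one time step: a mirror
    image copies the agent's motor command (plus noise), while an
    unrelated animal moves independently."""
    if is_self:
        return command + random.gauss(0, noise)
    return random.gauss(0, 1.0)

def contingency_score(is_self, steps=1000):
    """Fraction of steps where the observed movement matches the
    issued command in sign -- a crude proxy for motor-visual
    correlation."""
    hits = 0
    for _ in range(steps):
        command = random.choice([-1.0, 1.0])
        movement = observe(command, is_self)
        hits += (movement > 0) == (command > 0)
    return hits / steps

print("mirror image :", contingency_score(is_self=True))   # ~1.0 -> "that's me"
print("other animal :", contingency_score(is_self=False))  # ~0.5 -> "not me"
```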
Humans possess a more advanced form of self-awareness related to our ability to use language to communicate. Natural linguistic communication is very complex: it requires a sophisticated capability to model not only oneself but other self-aware agents as well, along with those other agents’ models of oneself and of other agents, and so on recursively.
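A toy data structure makes that recursion concrete (a hypothetical sketch; the AgentModel class and its fields are inventions for illustration): each agent’s model can itself contain a model of the other agent, nested to whatever depth the conversation demands.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """One agent's model of an agent, possibly containing that
    agent's model of someone else -- the recursion ends at None."""
    name: str
    beliefs: dict = field(default_factory=dict)
    model_of_other: "AgentModel | None" = None

    def describe(self, depth=0):
        pad = "  " * depth
        lines = [f"{pad}{self.name} believes {self.beliefs}"]
        if self.model_of_other is not None:
            lines.append(f"{pad}and models:")
            lines += self.model_of_other.describe(depth + 1)
        return lines

# Bob models Alice, who (in Bob's model) models Bob in turn.
bob_in_alice = AgentModel("Bob", {"topic": "weather"})
alice_in_bob = AgentModel("Alice", {"knows_topic": True}, bob_in_alice)
bob = AgentModel("Bob", {"topic": "weather"}, alice_in_bob)
print("\n".join(bob.describe()))
```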
Self-awareness isn’t a binary concept; it comes in many varieties and flavours, such that individual humans are not self-aware in exactly the same way. Nonetheless, these differences are tiny in comparison to those that separate typical human self-awareness from feline self-awareness or the rudimentary self-awareness of current artificial agents.
Self-awareness is just a computational capability, and once we scale up ANNs to the size and complexity of the human brain, we will prove beyond any doubt that machines can possess human-level self-awareness.
> I am already assuming the computer simulation is mimicking the brain’s activity and computations. My point is that a computer works very differently from a brain, which is evident in differences in its underlying causal structure.
If a computer simulation could actually mimic all of the key algorithmic computations in the brain necessary for human-level cognition, intelligence, self-awareness, attention, and so on, then the computer simulation would be essentially indistinguishable from a human mind (as embodied, say, in virtual reality).
The ‘causal structure’ is just the key algorithmic computations. It is the ‘what’. The ‘how’ of those computations is the implementation issue, and there are, provably, an infinite number of implementations/realizations for any specific causal computational structure.
An Nvidia GPU works differently from an AMD GPU, and both are quite different from an Intel CPU, a hybrid analog/digital neural ASIC (more similar to the brain), or a memristor neural ASIC. Nonetheless, a brain simulation could run on any of those architectures and it would work just the same.
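A toy example of this kind of multiple realizability (the two-realization framing and the tiny network are mine, purely for illustration): the same computational structure, a small neural layer’s forward pass, realized in two physically different ways with identical results.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # weights of a 3-input, 4-unit layer
x = rng.normal(size=3)        # an input vector

def forward_looped(W, x):
    """Realization 1: scalar arithmetic, one multiply at a time."""
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            acc += w * xi
        out.append(max(acc, 0.0))          # ReLU
    return np.array(out)

def forward_vectorized(W, x):
    """Realization 2: one matrix-vector product, dispatched to
    whatever backend numpy happens to use."""
    return np.maximum(W @ x, 0.0)          # same ReLU

# Physically different sequences of operations, same function:
assert np.allclose(forward_looped(W, x), forward_vectorized(W, x))
```

The scalar loop and the matrix product trace completely different sequences of physical operations, yet they implement the same causal structure; the GPU/CPU/ASIC differences above are the same point at vastly larger scale.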