Forgive me, but how do “information flows” solve the binding problem?
“Information flow” is a real term—no need for quotes.
The binding problem asks how it is possible that we have a unified perception if different aspects of our perception are processed in different parts of our brain. The answer is that those different parts talk to each other, which integrates the information.
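To make that concrete, here is a toy sketch (illustrative only; the module names are made up and this is not a model of any actual neural pathway): two processors each handle one aspect of a stimulus, and a downstream step that receives both combines them into a single bound representation.

```python
from dataclasses import dataclass

# Toy illustration only: "color" and "shape" stand in for aspects of a percept
# that are processed separately but end up bound in one representation.

@dataclass
class Percept:
    color: str
    shape: str

def color_module(stimulus: dict) -> str:
    # Handles only the color aspect of the stimulus.
    return stimulus["color"]

def shape_module(stimulus: dict) -> str:
    # Handles only the shape aspect of the stimulus.
    return stimulus["shape"]

def integrate(color: str, shape: str) -> Percept:
    # The integration step: both partial results flow into one place and are
    # combined into a single bound representation.
    return Percept(color=color, shape=shape)

stimulus = {"color": "red", "shape": "square"}
print(integrate(color_module(stimulus), shape_module(stimulus)))
# Percept(color='red', shape='square')
```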
In defense of David’s point, consciousness research is currently pre-scientific, loosely akin to alchemy in the 1400s. Fields become scientific as they settle on a core ontology and a methodology for generating predictions from that ontology; consciousness research presently has neither.
Most current arguments about consciousness and uploading are thus ultimately arguments by intuition. Certainly an intuitive story can be told about why uploading a brain and running it as a computer program would simply transfer consciousness along with it, but we can also tell stories where intuition pulls in the opposite direction: see Scott Aaronson’s piece here https://scottaaronson.blog/?p=1951 ; my former colleague Andres also has a relevant paper arguing against computationalist approaches here https://www.degruyter.com/document/doi/10.1515/opphil-2022-0225/html
Of the attempts to formalize the concept of information flow and its relevance to consciousness, the most notable is probably Tononi’s IIT (currently on version 4.0). However, Tononi himself believes computers could be only minimally conscious, and only in a highly fragmented way, for technical reasons relating to his theory. Excerpted from Principia Qualia:
>Tononi has argued that “in sharp contrast to widespread functionalist beliefs, IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing” (Tononi and Koch 2015). However, he hasn’t actually published much on why he thinks this. When pressed on this, he justified this assertion by reference to IIT’s axiom of exclusion – this axiom effectively prevents ‘double counting’ a physical element to be part of multiple virtual elements, and when he ran a simple neural simulation on a simple microprocessor and looked at what the hardware was actually doing, a lot of the “virtual neurons” were being run on the same logic gates (in particular, all virtual neurons extensively share the logic gates which run the processor clock). Thus, the virtual neurons don’t exist in the same causal clump (“cause-effect repertoire”) like they do in a real brain. His conclusion was that there might be small fragments of consciousness scattered around a digital computer, but he’s confident that ‘virtual neurons’ emulated on a Von Neumann system wouldn’t produce their original qualia.
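To illustrate the “shared logic gates” point, here is a toy sketch of my own (not code from Principia Qualia or from IIT 4.0; the element names are hypothetical): when many virtual neurons are all updated by the same few physical elements, the mapping from physical parts to virtual parts overlaps heavily, which is the kind of double counting the exclusion axiom is said to rule out.

```python
from collections import defaultdict

# Hypothetical toy: 8 "virtual neurons" simulated on hardware with a handful
# of shared physical elements (an adder, a register file, a clock).

PHYSICAL_ELEMENTS = ["adder", "register_file", "clock"]

def update_virtual_neuron(state, inputs, usage):
    # Every virtual neuron's update passes through the same physical elements.
    for element in PHYSICAL_ELEMENTS:
        usage[element] += 1
    return max(0.0, sum(inputs) - 0.5 * state)  # arbitrary toy dynamics

usage = defaultdict(int)
states = [0.1 * i for i in range(8)]           # 8 virtual neurons
new_states = [update_virtual_neuron(s, states, usage) for s in states]

# Each physical element took part in every virtual neuron's update, i.e. the
# virtual elements are not implemented by disjoint clumps of physical parts.
for element, count in usage.items():
    print(element, "used in", count, "of", len(states), "virtual-neuron updates")
```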
At any rate, there are many approaches to formalizing consciousness across the literature, each pointing to a slightly different set of implications for uploads, and no clear winner yet. I assign more probability mass than David or Tononi do to computers generating nontrivial amounts of consciousness (see here https://opentheory.net/2022/12/ais-arent-conscious-but-computers-are/), but find David’s thesis entirely reasonable.
I wish the binding problem could be solved so simply. Information flow alone isn’t enough. Compare Eric Schwitzgebel (“If Materialism Is True, the United States Is Probably Conscious”). Even if 330 million skull-bound American minds reciprocally communicate by fast electromagnetic signalling, and implement any computation you can think of, a unified continental subject of experience doesn’t somehow switch on—or at least, not without spooky “strong” emergence.
The mystery is why 86 billion-odd membrane-bound, effectively decohered classical nerve cells should be any different. Why aren’t we merely aggregates of what William James christened “mind dust”, rather than unified subjects of experience supporting local binding (individual perceptual objects) and global binding (the unity of perception and the unity of the self)?
Science doesn’t know.
What we do know is that the phenomenal binding of organic minds is insanely computationally powerful, as rare neurological deficit syndromes (akinetopsia, integrative agnosia, simultanagnosia, etc.) illustrate.
I could now speculate on possible explanations.
But if you don’t grok the mystery, they won’t be of any interest.
The second kind of binding problem, i.e. not the physical one (how the processing of different aspects of our perception comes together) but the philosophical one (how a composite object feels like a single thing), is solved by defining us to be the state machine implemented by that object, and our mental states to be states of that state machine (a toy sketch of this framing is below).
I.e. the error of people who believe there is a philosophical binding problem comes from the assumption that only ontologically fundamental objects can have a unified perception.
More here: Reductionism.
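A toy sketch of this framing (illustrative only; the names are made up): a composite object built from many simple parts implements a single abstract state machine, and the “unified state” is a state of that machine, not a property any individual part has on its own.

```python
# Illustrative toy only: a composite object made of many simple parts that
# jointly implement one abstract state machine.

class Part:
    """One of many components; by itself it only holds a single bit."""
    def __init__(self):
        self.bit = 0

class CompositeAgent:
    """The state machine implemented by the whole collection of parts.

    The 'unified state' is the machine's state, defined over all the parts
    at once; no individual part has it on its own.
    """
    def __init__(self, n_parts):
        self.parts = [Part() for _ in range(n_parts)]

    def state(self):
        # The abstract state is a function of all the parts together.
        return tuple(p.bit for p in self.parts)

    def step(self, stimulus):
        # A transition rule: each part's next bit depends on the joint state
        # (the parity of all bits) plus the input, so the parts are updated
        # as one machine rather than independently.
        parity = sum(p.bit for p in self.parts) % 2
        for i, part in enumerate(self.parts):
            part.bit = (parity + stimulus + i) % 2
        return self.state()

agent = CompositeAgent(n_parts=5)
print(agent.state())   # (0, 0, 0, 0, 0)
print(agent.step(1))   # (1, 0, 1, 0, 1) -- a new joint state
```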
But (as far as I can tell) such a definition doesn’t explain why we aren’t micro-experiential zombies. Compare another fabulously complicated information-processing system, the enteric nervous system (“the brain in the gut”). Even if its individual membrane-bound neurons are micro-pixels of experience, there’s no phenomenally unified subject. The challenge is to explain why the awake mind-brain is different—to derive the local and global binding of our minds and the world-simulations we run (ultimately) from physics.
A physical object implementing the state-machine-which-is-us and being in a certain state is what we mean by having a unified mental state.
Seemingly, we can ask: but why does that feel like something, instead of only the individual microqualia feeling like something? But that question fails to appreciate that there is an identity here, much like thinking it’s conceptually possible for there to be fingers arranged in the shape of a hand but no hand.
It would be meaningless to talk about a phenomenally unified subject there, since it can’t describe its perception to anyone (it can’t talk to us) and we can’t talk to it either. On top of that, it doesn’t implement the right kind of state machine (it’s not a coherent entity of the sort we’d call something-that-has-a-unified-mental-state).
You remark that “A physical object implementing the state-machine-which-is-us and being in a certain state is what we mean by having a unified mental state.” You can stipulatively define a unified mental state in this way. But this definition is not what I (or most people) mean by “unified mental state”. Science doesn’t currently know why we aren’t (at most) just 86 billion membrane-bound pixels of experience.
There is nothing else to be meant by that—if someone means something else by it, then what they mean doesn’t exist.