Since the Universe’s computational accuracy appears to be infinite, in order for the mind to be omniscient about a human brain it must be running the human brain’s quark-level computations within its own mind; any approximate computation would
yield imperfect predictions. In the act of running this computation, the brain’s qualia are generated, if (as we have assumed)
the brain in question experiences qualia. Therefore the omniscient mind is fully aware of all of the qualia that are experienced within the volume of the Universe about which it has perfect knowledge.
Suppose an entity with qualia emerges in the Game of Life. Surely the omniscient being doesn’t have to have those qualia to predict perfectly (and, it seems, to have perfect “physical” knowledge of the simulation)?
I don’t see any difference. If the superintelligence is just watching the dots move around, then it isn’t predicting anything. If it knows exactly what the simulation is going to do, then it must be performing the same computations that the computer running the cellular automaton is doing. Amongst these computations must be the computations that generate the qualia that the entity in Game of Life experiences. Therefore the superintelligence also experiences the qualia.
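As a concrete, if toy, illustration of that point, here is a minimal sketch in Python (the names are mine, nothing from the original discussion): a fully general, perfectly accurate predictor of the Game of Life has nothing to lean on except the update rule itself, so “predicting the next state” and “running the simulation” turn out to be the same operation.

```python
from collections import Counter

# Minimal sketch (illustrative names): perfect prediction of a cellular
# automaton just is execution of its update rule.

def life_step(live_cells):
    """One Game of Life update on a set of (x, y) live-cell coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

def perfect_prediction(live_cells, steps):
    """The only fully general way to say what the grid will look like after
    `steps` updates is to perform those updates."""
    for _ in range(steps):
        live_cells = life_step(live_cells)
    return live_cells

# A glider: the predictor's answer is, by construction, the output of the very
# computation the simulating computer would have performed.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(perfect_prediction(glider, 4))  # the same glider, shifted diagonally
```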
Amongst these computations must be the computations that generate the qualia that the entity in Game of Life experiences. Therefore the superintelligence also experiences the qualia.
To take it down to something easier for me to grasp than a superintelligence: suppose there is a color a rat associates with something bad, say green with foul food pellets, and a human has an enhancement enabling them to model that rat’s brain perfectly, and does so; and suppose that human associates green with something positive, like money. What predictions can we derive from the claim “the superintelligence also experiences the qualia” when the human simulates the rat seeing green?
Firstly, it would no longer be a “human” as most people would understand the term. In fact, if it could model the rat’s brain perfectly it would be just as unphysical as our hypothetical omniscient being.
This “human” would be running ordinary typical-human-brain computations in one part of its massive brain, and rat-brain computations in another. Therefore different parts of the brain would be experiencing different qualia separately. It cannot “mix up” the computations; otherwise they would obviously no longer be the same computations, so they do not interfere with one another.
The problem is really that we have no conception of a mind that experiences multiple qualia at once—but in fact this is what the superintelligence problem entails. We might very well view the superintelligence as an umbrella mind, incorporating different minds within itself whilst presumably possessing some centralised intelligence running computations that assess the different sub-units. I have no idea what that would be like!
Nonetheless, we see that perfect knowledge of the physical brain produces perfectly reproduced qualia in the observer. Whether we want to define this as the observer being split into different beings (so the human has simply become a human+rat) does not change our conclusion regarding “extra-physicalism”.
It cannot “mix up” the computations; otherwise they would obviously no longer be the same computations, so they do not interfere with one another.
How do you know that the only system that could carry on a conversation about the weather and predict the next move of every rat molecule, or atom, or whatever the relevant bit is, would be one that kept the processes separate?
Alternatively, considering that the only requirement for computation is locality, can’t it get ahead in one part and then take a break there, unlike a rat’s mushy brain? If the time gap doesn’t seem important enough, just scale up the physical size of the mind in question, not necessarily its complexity.
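For what it’s worth, the “get ahead and then take a break” idea can be sketched too (again in Python, with invented stand-ins, purely as an assumption-laden illustration): nothing ties the simulator’s schedule to the simulated system’s own clock, so one sub-computation can be advanced several steps, cached, and left idle while another catches up.

```python
# Sketch (illustrative stand-ins): the simulator's schedule is decoupled from
# the simulated system's clock. One sub-simulation is run ahead and parked;
# its cached states are read back later while the other proceeds at its own pace.

def make_counter_sim(start=0):
    """Stand-in for any step-by-step simulation; each step produces a new state."""
    return (lambda state: state + 1), start

def run_ahead(step, state, steps):
    """Advance a sub-simulation several steps now and cache every result."""
    cache = []
    for _ in range(steps):
        state = step(state)
        cache.append(state)
    return cache, state

rat_step, rat_state = make_counter_sim(0)
human_step, human_state = make_counter_sim(100)

# Run the "rat" sub-simulation ten steps ahead, then let that part rest.
rat_cache, rat_state = run_ahead(rat_step, rat_state, 10)

# The "human" sub-simulation now catches up at its own pace; the rat's
# already-computed states are simply read from the cache (no rat computation here).
for i in range(10):
    human_state = human_step(human_state)
    print(human_state, rat_cache[i])
```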
The problem is really that we have no conception of a mind that experiences multiple qualia at once—but in fact this is what the superintelligence problem entails.
I’m not really sure what a single quale is supposed to be. I had thought that there would be at least one for color and another for smell, but I can accept the idea that they are so specific that there is one per human mind state, in which case I’m not sure why something smarter than a human would necessarily need more than just one bigger one, where a cutoff in size might lie, or why there would be a cutoff in the first place.
There are some interesting papers out there relating the idea of data compression to intelligence.
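To gesture at why that connection is plausible, here is a tiny, hedged illustration (standard-library Python, made-up data): a stream that is easy to predict is easy to compress, and a stream that resists prediction resists compression.

```python
import random
import zlib

# Illustration of the compression/prediction intuition: predictable data
# compresses far better than unpredictable data. (The example data is made up.)
predictable = ("the cat sat on the mat. " * 200).encode()
random.seed(0)
unpredictable = bytes(random.randrange(256) for _ in range(len(predictable)))

for name, data in [("predictable", predictable), ("unpredictable", unpredictable)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compresses to {ratio:.1%} of its original size")
```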
How do you know that the only system that could carry on a conversation about the weather and predict the next move of every rat molecule, or atom, or whatever the relevant bit is, would be one that kept the processes separate?
It might not have to separate the processes completely, I suppose, if there were exact similarities in the computations somewhere. But I meant that if the human+rat started experiencing a composite of rat and human qualia, like seeing green money and feeling scared of it, then the human+rat could not be using that composite to predict the behaviour of either the rat or the human. Or, insofar as it is predicting the behaviours in a “distributed” way, it is also experiencing all of the qualia in a “distributed” way, and should be able to reintegrate these, gaining full knowledge of the qualia.
I have no idea what the “rules of qualia” are, although it seems like the sort of thing we could potentially obtain “subjective” knowledge about. But I don’t see a real objection to my article emerging here.
There are a bunch of different sets, and they have their pros and cons, but whatever you do, don’t use the 4th Edition rules of Qualia.
The more refining that is done to the concept, the simpler it gets and the harder it is to suspend disbelief. It’s as if someone thought it would be possible to make rich characters out of WoW avatars that are empty shells, as if the essence of a character’s richness could be stripped away from its complex, causal interactions with the environment. That’s not magic, it’s hand waving. So what if those interactions would otherwise be computationally expensive?
I think you are making the assumption of strong AI: that a simulation of consciousness must necessarily be consciousness. Consider an omniscient being predicting the results of an H-bomb down to the quark level. Must the omniscient mind containing the simulation reach temperatures of 10,000 °F? Must it reach overpressures sufficient to collapse its own mind? I think not. So presumably there is a way to simulate consciousness without experiencing it, if it is material.
Nonetheless, we see that perfect knowledge of the physical brain produces perfectly reproduced qualia in the observer. Whether we want to define this as the observer being split into different beings (so the human has simply become a human+rat) does not change our conclusion regarding “extra-physicalism”.
human+(rat brain predictor)