It’s about trying to figure out what’s implied about your brain by knowing that you exist.
It’s also about trying to draw some kind of boundary with “unknown environment to interact with and reason about” on one side and “physical system that is thinking and feeling” on the other side. (Well, only sort of.)
Treating a merely larger brain as more anthropically important is equivalent to saying that you can draw this boundary inside the brain (e.g. dividing big neurons down the middle), so that part of the brain is the “reasoner” and the rest of the brain, along with the outside, is the environment to be reasoned about.
This boundary can be drawn, but I think it doesn’t match my self-knowledge as well as drawing the boundary based on my conception of my inputs and outputs.
My inputs are sight, hearing, proprioception, etc. My outputs are motor control, hormone secretion, etc. The world is the stuff that affects my inputs and is affected by my outputs, and I am the thing doing the thinking in between.
If I tried to define “I” as the left half of all the neurons in my head, suddenly I would be deeply causally connected to this thing (the right halves of the neurons) I have defined as not-me. These causal connections are like a huge new input and output channel for this defined-self—a way for me to be influenced by not-me, and influence it in turn. But I don’t notice this or include it in my reasoning—Paper and Scissors in the story are so ignorant about it that they can’t even tell which of them has it!
So I claim that I (and they) are really thinking of themselves as the system that doesn’t have such an interface, and just has the usual suite of senses. This more or less pins down the thing doing my thinking as the usual lump of non-divided neurons, regardless of its size.
Treating a merely larger brain as more anthropically important is equivalent to saying that you can draw this boundary inside the brain
I really can’t understand where this is coming from. When we weigh a bucket of water, this imposes no obligation to distinguish between individual water molecules. For thousands of years we did not know water molecules existed, and we thought of water as continuous. I can’t tell whether this addresses what you’re trying to convey.
Where I’m at is… I guess I don’t think we need to draw strict boundaries between different subjective systems. I’ll probably end up mostly agreeing with Integrated Information theories. Systems of tightly causally integrated matter are more likely as subjectivities, but at no point are supersets of those systems completely precluded from having subjectivity; for example, the system of me plus my cellphone also has some subjectivity. At some point, the universe experiences the precise state of every transistor and every neuron at the same time. (This does not mean that any conscious-acting system is cognisant of both of those things at the same time. Subjectivity is not cognisance. It is possible to experience without remembering or understanding. Humans do it all of the time.)
I’ll probably end up mostly agreeing with Integrated Information theories
Ah… x.x Maybe check out Scott Aaronson’s blog posts on the topic (here and here)? I’m definitely more of the Dennettian “consciousness is a convenient name for a particular sort of process built out of lots of parts with mental functions” school.
Anyhow, the reason I focused on drawing boundaries to separate my brain into separate physical systems is mostly historical—I got the idea from the Ebborians (further rambling here. Oh, right—I’m Manfred). I just don’t find mere mass all that convincing as a reason to think that some physical system’s surroundings are what I’m more likely to see next.
Intuitively it’s something like a symmetry of my information—if I can’t tell anything about my own brain mass just by thinking, then I shouldn’t assign my probabilities as if I have information about my brain mass. If there are two copies of me, one on Monday with a big brain and one on Tuesday with a small brain, I don’t see much difference in sensibleness between “it should be Monday because big brains are more likely” and “I should have a small brain because Tuesday is an inherently more likely day.” It just doesn’t compute as a valid argument for me without some intermediate steps that look like the Ebborians argument.
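To make the contrast concrete (a sketch with made-up numbers, and the notation is my own): suppose the Monday copy’s brain has mass $2m$ and the Tuesday copy’s has mass $m$. The mass-weighted rule assigns

$$P(\text{Monday}) = \frac{2m}{2m + m} = \frac{2}{3}, \qquad P(\text{Tuesday}) = \frac{1}{3},$$

while the information-symmetry view says that, since nothing in my experience distinguishes the two cases, I should assign $P(\text{Monday}) = P(\text{Tuesday}) = 1/2$. The entire disagreement sits in the choice of weighting, not in any observation either copy could make.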
I’m definitely more of the Dennettian “consciousness is a convenient name for a particular sort of process built out of lots of parts with mental functions” school.
I’m in that school as well. I’d never call correlates of anthropic measure, like integrated information, “consciousness”; there’s too much confusion there. I’m reluctant to call the purely mechanistic perception-encoding-rumination-action loop consciousness either. For that I try to stick, very strictly, to “conscious behaviour”. I’d prefer something like “sentience” to take us even further from that mire of a word.
(But when I thought of the mirror chamber it occurred to me that there was more to it than “conscious behaviour isn’t mysterious, it’s just machines”. Something here is both relevant and mysterious. And so I have to find a way to reconcile the schools.)
Anthres ∝ mass is not supposed to be intuitive. Anthres ∝ number is very intuitive; what about the path from there to anthres ∝ mass didn’t work for you?
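For reference, here is a rough sketch of the path I have in mind (the shorthand is mine): write $\mu(S)$ for the anthropic measure of a system $S$. The intuitive premise, anthres ∝ number, is additivity over copies:

$$\mu(\text{two separate brains of mass } m) = 2\,\mu(\text{one brain of mass } m).$$

If an Ebborian-style brain of mass $2m$ can be split down the middle into two functioning brains of mass $m$, and the split should not produce a discontinuous jump in measure, then

$$\mu(\text{one brain of mass } 2m) \approx 2\,\mu(\text{one brain of mass } m),$$

and iterating over finer splits gives anthres ∝ mass. The load-bearing assumption is the no-discontinuity step; rejecting it is, as far as I can tell, exactly what your boundary-drawing argument does.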