you don’t want them to satisfy their own definition—that would be too easy—you want them to satisfy your definition
How could I say either way when they don’t offer any definition to begin with? My original complaint was precisely that consciousness is not sufficiently well understood to allow anyone to be cavalier about these things in either direction.
Demanding that they clarify something to the satisfaction of your “visceral level” is still hand-waving.
The only one who has demanded that a concept be defined to his satisfaction here is you, when you explicitly requested a definition of suffering in terms of literal significance.
If you already have some idea of what the word “consciousness” means, you want to be reassured that the brain tissue in question is not conscious according to your idea.
I doubt you will let “them” define consciousness any way they wish. For example, I can say “X suffers iff X can communicate to me that it wants the current condition to stop”. Will you be happy with that? Probably not.
More importantly, I want there to be a serious recognition of the ethical boundaries that are being pushed against by this kind of research due to the fact that neither I nor anyone else can yet offer any satisfactory theory of consciousness. That’s the whole motivation behind my original comment, rather than the desire to advance a philosophical dogma, which seems to be what you want to impute to me.
You can’t talk about ethical boundaries being pushed unless you place that ethical boundary somewhere first. Otherwise we’re back to hand-waving: Can I say that because no one “can yet offer any satisfactory theory of consciousness”, chewing on a salad is ethically problematic?
Basically, you can’t be both worried and unhappy, and completely unspecific :-/
Is there any particular reason to believe that a salad might be capable of consciousness? No.

Is there any particular reason to believe that brains might be capable of consciousness? Yes—namely the fact that most brains insist on describing themselves as such. Does this imply brains are conscious if and only if they insist on describing themselves as such? No. No more than a bird is only capable of flight when it’s actually literally soaring in the air.
How can you tell without “any satisfactory theory of consciousness”?

The same way I don’t need to understand aerodynamics to know that I have no reason to believe that turtles might be capable of flight. I’ve never seen a turtle do anything that sits in the neighbourhood of the notion of “flight” in the network of concepts in my head. This type of argument doesn’t work against the putative consciousness of foetal brains, since we have good reason to believe that at least brains at a certain stage of development are in fact conscious. To argue that this means we can only have an ethical problem with running dubious experiments on brains at that stage of development is rather like arguing that since you’ve only ever seen white swans fly, the supposition that black swans might fly too is not justified.
The same way I don’t need to understand aerodynamics to know that I have no reason to believe that turtles might be capable of flight.
You don’t need to know the underlying mechanics, but you do need to know what flight is.
You’re saying we don’t even know what consciousness is.
To argue that this means we can only have an ethical problem with running dubious experiments on brains at that stage of development
No one is arguing that. I am saying that if you claim to have a problem, you have to be more specific about what your problem is and what might convince you that it is not a problem.
“Prove to me something, I don’t know what” is not a useful attitude.
You’re saying we don’t even know what consciousness is.
Not in the least. I know what consciousness is because I am a consciousness. A theory of consciousness is needed to tie the concept to the material world, so that you can make statements like “a rock cannot be conscious, in principle”.
I am saying that if you claim to have a problem, you have to be more specific about what your problem is and what might convince you that it is not a problem
What might convince me is a satisfactory theory of consciousness. Do I have to provide a full specification of what would be “satisfactory” just to recognize an ethical problem? If so, there is hardly anything about which I could raise an ethical concern, since I’d perpetually be working on epistemic aesthetics until all the necessary puzzles were solved. That is just not, in fact, how anyone operates. We proceed with vague concepts, heuristic criteria for satisfactoriness, incomplete theories, etc. To say that this should be disallowed unless you can unfold your theory’s logical substructure in a kind of Principia Ethica is waaay more useless than interpreting ideas through partial theories.
Do I have to provide a full specification of what would be “satisfactory” just to recognize an ethical problem?
Not “full”, but some, yes. Otherwise anyone can squint at anything and say “I think there is an ethical problem here. I can’t quite put my finger on it, but my gut feeling (“visceral level”) is that there is”—and there is no adequate response to that.
As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.
The fuzziest starting point for “consciousness” is “something similar to what I experience when I consider my own mind”. But this doesn’t help much. Someone can still claim “So rocks probably have consciousness!”, and another can respond “Certainly not, but brains grown in labs likely do!”. Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.
For some concepts, we disagree on definitions because we don’t actually know what those concepts refer to (this doesn’t include concepts like “art”, etc.). I’m not sure of the best way to talk about whether an entity possesses such a concept. Are there existing articles/discussions about that?
If I don’t know what I’m referring to when I say “consciousness,” it seems reasonable to conclude that I ought not use the term.

What is it, to know what one is referring to? If I see a flying saucer, I may be wrong in believing it’s an alien spaceship, but I am not wrong about seeing something, a thing I also believe to be an alien spaceship.
pangel says:
The fuzziest starting point for “consciousness” is “something similar to what I experience when I consider my own mind”.
and that is the brute fact from which the conundrum of consciousness starts. The fact of having subjective experience is the primary subject matter. That we have no idea how, given everything else we know about the world, there could be any such thing as experience, is not a problem for the fact. It is a problem for those seeking an explanation for the fact. Ignorance and confusion are in the map, not the territory.
All attempts to solve the problem have so far taken one of two forms:
1. Here is something objectively measurable that correlates with the subjective experience. Therefore that thing is the subjective experience.
2. We can’t explain it, therefore it doesn’t exist.
Discussion mostly takes the form of knocking down everyone else’s wrong theories. But all the theories are wrong, so there is no end to this.
The actual creation of brains-in-vats will certainly give more urgency to the issue. I expect the ethical issues will be dealt with just by prohibiting growing beyond a certain stage.
To know what I’m referring to by a term is to know what properties something in the world would need to have to be a referent for that term.
The ability to recognize such things in the world is beside the point. When I say “my ancestors,” I know what I mean, but in most cases it’s impossible to pick that attribute out empirically—I can’t pick out most of my ancestors now, because they no longer exist to be picked out, and nobody could have picked them out back when they were alive, because the defining characteristic of the category is in terms of something that hadn’t yet been born. (Unless you want to posit atypical time-travel, of course, but that’s not my point.)
So, sure, if by “flying saucer” I refer to an alien spaceship, I don’t necessarily have any way of knowing whether something I’m observing is a flying saucer or not, but I know what I mean when I claim that it is or isn’t.
And if by “consciousness” I refer to anything sufficiently similar to what I experience when I consider my own mind, then I can’t tell whether a rock is conscious, but I know what I mean when I claim it is or isn’t.
Rereading pangel’s comment, I note that I initially understood “we don’t actually know what those concepts refer to” to mean we don’t have the latter thing… that we don’t know what we mean to express when we claim that the concept refers to something… but it can also be interpreted as saying we don’t know which things in the world the concept correctly refers to (as with your example of being wrong about believing something is an alien spaceship).
I’ll stand by my original statement in the original context I made it in, but sure, I also agree that just because we don’t currently know what things in the world are or aren’t conscious (or flying saucers, or accurate blueprints for anti-gravity devices, or ancestors of my great-great-grandchild, or whatever) doesn’t mean we can’t talk sensibly about the category. (Doesn’t mean we can, either.)
And, yes, the fact that I don’t know how subjective experience comes to be doesn’t prevent me from recognizing subjective experience.
As for urgency… I dunno. I suspect we’ll collectively go on inferring that things have a consciousness similar to our own with a confidence proportional to how similar their external behavior is to our own for quite a long time past the development of (human) brains in vats. But sure, I can easily imagine various legal prohibitions like you describe along the way.
I meant it in the sense you understood first. I don’t know what to make of the other interpretation. If a concept is well-defined, the question “Does X match the concept?” is clear. Of course it may be hard to answer.
But suppose you only have a vague understanding of ancestry. Actually, you’ve only recently coined the word “ancestor” to point at some blob of thought in your head. You think there’s a useful idea there, but the best you can do for now is: “someone who relates to me in a way similar to how my dad and my grandmother relate to me”. You go around telling people about this, and someone responds “yes, this is the brute fact from which the conundrum of ancestry starts”. Another tells you that you ought to stop using that word if you don’t know what the referent is. Then they go on to say your definition is fine; it doesn’t matter if you don’t know how someone comes to be an ancestor, you can still talk about an ancestor and make sense. You have not gone through all the tribe’s initiation rituals yet, so you don’t know how you relate to grey wolves. Maybe they’re your ancestors, maybe not. But the other says: “At least, you know what you mean when you claim they are or are not your ancestors.”
Then your little sister drops by and says: “Is this rock one of your ancestors?” No, certainly not. “OK, didn’t think so. Am I one of your ancestors?” You feel about it for a minute and say no. “Why? We’re really close family. It’s very similar to how dad or grandma relate to you.” Well, you didn’t include it in your original definition, but someone younger than you can definitely not be your ancestor. It’s not that kind of “similar”. A bit of time and a good number of family members later, you have a better definition. Your first definition was just two examples, something about “relating”, and the word “similar” thrown in to mean “and everyone else who is also an ancestor.” But similar in what way?
Now the word means “the smallest set such that your parents are in it, and any parent of an ancestor is an ancestor”… “union the elders of the tribe, dead or alive, and a couple of noble animal species.” Maybe a few generations later you’ll drop the second term of the definition and start talking about genes, whatever.
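If it helps, that recursion can be spelled out in a few lines of Python. This is only an illustrative sketch: the family records, the elders, and the parents lookup below are all made-up placeholders, not anything established in this discussion.

```python
# Illustrative sketch only: all names and records here are hypothetical.
# "Ancestor" is the smallest set containing your parents and closed under
# "any parent of an ancestor is an ancestor" (the transitive closure of
# parenthood), with the tribe's second term unioned on afterwards.

FAMILY_TREE = {
    "you": {"dad", "grandmother"},
    "dad": {"grandpa", "great-grandma"},
}

TRIBE_ELDERS = {"dead elder", "living elder"}
NOBLE_SPECIES = {"grey wolf"}


def parents(person):
    """Hypothetical lookup of a person's known parents."""
    return FAMILY_TREE.get(person, set())


def ancestors(person):
    """Transitive closure of parenthood, plus the tribe's extra term."""
    closed = set()
    frontier = set(parents(person))
    while frontier:
        p = frontier.pop()
        if p not in closed:
            closed.add(p)
            frontier |= parents(p)  # a parent of an ancestor is an ancestor
    # The second term of the definition; later generations may drop it:
    return closed | TRIBE_ELDERS | NOBLE_SPECIES


print(ancestors("you"))
# -> {'dad', 'grandmother', 'grandpa', 'great-grandma',
#     'dead elder', 'living elder', 'grey wolf'}  (set order varies)
```

The point of the sketch is just that the first term is a perfectly well-defined closure, while the second term is a cultural appendix that can later be dropped without touching the recursion, which is the trajectory the story describes.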
My “fuzziest starting point” was really fuzzy, and not a good definition. It was one example, something about being able to “experience” stuff, and the word “similar” thrown in to mean “and everyone else who is conscious.” I may (kind of) know what I mean when I say a rock is not conscious, since it doesn’t experience anything, but what do I mean exactly when I say that a dog isn’t conscious?
I don’t think I know what I mean when I say that, but I think it can help to keep using the word.
P.S. The final answer could be as in the ancestor story, a definition which closely matches the initial intuition. It could also be something really weird where you realize you were just confused and stop using the word. I mean, the life force of vitalism was probably a brute fact for a long time.
So, I want to point out explicitly that in your example of ancestry, I intuitively know enough about this concept of mine to know my sister isn’t my ancestor, but I don’t know enough to know why not. (This isn’t an objection; I just want to state it explicitly so we don’t lose sight of it.)
And, OK, I do grant the legitimacy of starting with an intuitive concept and talking around it in the hopes of extracting from my own mind a clearer explicit understanding of that concept. And I’m fine with the idea of labeling that concept from the beginning of the process, just so I can be clear about when I’m referring to it, and don’t confuse myself.
So, OK. I stand corrected here; there are contexts in which I’m OK with using a label even if I don’t quite know what I mean by it.
That said… I’m not quite so sanguine about labeling it with words that have a rich history in my language when I’m not entirely sure that the thing(s) the word has historically referred to is in fact the concept in my head.
That is, if I’ve coined the word “ancestor” to refer to this fuzzy concept, and I say some things about “ancestry,” and then someone comes along and says “this is the brute fact from which the conundrum of ancestry starts” as in your example, my reaction ought to be startlement… why is this guy talking so confidently about a term I just coined?
But of course, I didn’t just coin the word “ancestor.” It’s a perfectly common English word. So… why have I chosen that pre-existing word as a label for my fuzzy concept? At the very least, it seems I’m risking importing by reference a host of connotations that exist for that word without carefully considering whether I actually intend to mean them.
And I guess I’d ask you the same question about “conscious.” Given that there’s this concept you don’t know much about explicitly, but feel you know things about implicitly, and about which you’re trying to make your implicit knowledge explicit… how confident are you that this concept corresponds to the common English word “consciousness” (as opposed to, for example, the common English words “mind”, or “soul”, or “point of view,” or “self-image,” or “self,” or not corresponding especially well to any common English word, perhaps because the history of our language around this concept is irreversibly corrupted)?