How can you be 100% confident that a look up table has zero consciousness when you don’t even know for sure what consciousness is?
Why not just define consciousness in a rational, unambiguous, non-contradictory way and then use it consistently throughout? If we are talking thought experiments here, it is up to us to make assumption(s) in our hypothesis. I don’t recall EY giving HIS definition of consciousness for his thought experiment.
However, if the GLUT behaves exactly like a human, and humans are conscious, then by definition the GLUT is conscious, whatever that means.
Things that are true “by definition” are generally not very interesting.

If consciousness is defined by referring solely to behavior (which may well be reasonable, but is itself an assumption), then yes, it is true that something that behaves exactly like a human will be conscious IFF humans are conscious.

But what we are trying to ask, at the high level, is whether there is something coherent in conceptspace that partitions objects into “conscious” and “unconscious” in a way that resembles what we understand when we talk about “consciousness,” and then whether it applies to the GLUT. Demonstrating that it holds for a particular set of definitions only matters if we are convinced that one of the definitions in that set accurately captures what we are actually discussing.
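For concreteness, here is a minimal sketch (made-up names, Haskell purely for illustration, not anything from EY’s post) of what “behaves exactly like a human by table lookup” amounts to, and why a purely behavioral definition cannot separate the GLUT from the human whose behavior it copies:

import qualified Data.Map as Map

type History = [String]   -- everything said to the agent so far
type Reply   = String

-- The GLUT is nothing but a (very large) table from histories to replies.
newtype GLUT = GLUT (Map.Map History Reply)

-- "Behaving" is pure table lookup; the default reply only keeps the sketch total.
respond :: GLUT -> History -> Reply
respond (GLUT table) h = Map.findWithDefault "..." h table

-- A purely behavioral definition of "conscious" can only compare input/output
-- behavior, so it classifies the GLUT and the human it copies identically.
behavesAlike :: (History -> Reply) -> (History -> Reply) -> [History] -> Bool
behavesAlike agent human probes = all (\h -> agent h == human h) probes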
Why not just define consciousness in a rational, unambiguous, non-contradictory way and then use it consistently throughout?
If my goal is to talk about something with a particular definition, then I prefer not to use an existing word to refer to it when that word doesn’t refer unambiguously to the definition I have in mind. That just leads to confusing conversations. I’d rather just make up a new term to go with my new made-up definition and talk about that.
Conversely, if my goal is to use the word “consciousness” in a way that respects the existing usage of the term, coming up with an arbitrary definition that is unambiguous and non-contradictory but doesn’t respect that existing usage won’t quite accomplish that. I mean, I could define “consciousness” as the ability to speak fluent English; that would be unambiguous and non-contradictory and there’s even some historical precedent for it, but I consider it a poor choice of referent.
If my goal is to talk about something with a particular definition, then I prefer not to use an existing word to refer to it when that word doesn’t refer unambiguously to the definition I have in mind. That just leads to confusing conversations. I’d rather just make up a new term to go with my new made-up definition and talk about that.
Well, casual conversation is not the same as using key terms (or words) in a scientific hypothesis, so that’s a different subject, but coining new terms to define new ideas is fine if it’s your hypothesis. In conversation, new definitions for old words would be confusing, and defining old words in a new way could be confusing as well. That’s not what I am saying.

Words can have multiple meanings and the dictionary gives the most popular usages. If we are appealing to the popular use then we still need to define the word. At any rate, whatever key terms we use in our hypothesis must be precise, unambiguous, non-circular, non-contradictory, and used consistently throughout our presentation.
Conversely, if my goal is to use the word “consciousness” in a way that respects the existing usage of the term, coming up with an arbitrary definition that is unambiguous and non-contradictory but doesn’t respect that existing usage won’t quite accomplish that.
I’m saying it is important what EY meant by consciousness. If the person I quoted says we don’t know what it is… then that person doesn’t know what the existing usage of the word is, or it is not well defined.

Anyway, why would you use a poor choice of referent?
At any rate, whatever key terms we use in our hypothesis must be precise, unambiguous, non-circular, non-contradictory, and used consistently throughout our presentation.
I’m personally okay with circular definitions when used appropriately. For instance, there’s the Haskell definition
naturalNumbers = 1 : map (+ 1) naturalNumbers
which tells you how to build the natural numbers in terms of the natural numbers.
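Lazy evaluation is what makes that circular definition productive: each element depends only on earlier elements, so any finite prefix can be computed even though the definition refers to itself. A small usage sketch:

take 5 naturalNumbers    -- evaluates to [1,2,3,4,5]
naturalNumbers !! 9      -- evaluates to 10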
If my goal is to clarify some confusing aspects of what people think about when they use the word “consciousness”, then if I end up talking about something other than what people think about when they use the word “consciousness” (for example, if I come up with some precise, unambiguous, non-circular, non-contradictory definition for the term) there’s a good chance that I’ve lost sight of my goal.
Thanx, TheOtherDave!

The point of defining one’s terms is to avoid confusion in the first place. It doesn’t matter what anyone else thinks consciousness means. Only the meaning as defined in the theorist’s hypothesis is important at this stage of the scientific method.
“there’s a good chance that I’ve lost sight of my goal”
That’s something I don’t understand (with instrumental rationality: “The art of choosing actions that steer the future toward outcomes ranked higher in your preferences”).

This is fine when a person is making personal choices on how to act, but when it comes to knowledge (and especially the scientific method), it seems like ultimately one would be interested in increasing one’s understanding regardless of an individual’s goals, preferences, or values.
Oh well, at least we aren’t using Weber’s affectual rationality involving feelings here.
I would agree that if what I want to do is increase my understanding regardless of my ability to communicate effectively with other people (which isn’t true of me, but might be true of others), and if communicating effectively with others doesn’t itself contribute significantly to my understanding (which isn’t true of me, but might be true of others), then choosing definitions for my words that maximize my internal clarity without reference to what those words mean to others is a pretty good strategy.
You started out by asking why EY doesn’t do that, and I was suggesting that perhaps it’s because his goals weren’t the goals you’re assuming here.
Reading between the lines a bit, I infer that the question was rhetorical in the first place, and your point is that maximizing individual understanding without reference to other goals, preferences, values, or communication with others should be what EY is doing… or perhaps that it is what he’s doing, and he’s doing a bad job of it.

If so, I apologize for misunderstanding.
@TheOtherDave:

Anotherblackhat said:

How can you be 100% confident that a look up table has zero consciousness when you don’t even know for sure what consciousness is?

In response Monkeymind said:
Why not just define consciousness in a rational, unambiguous, non-contradictory way and then use it consistently throughout?
Not being 100% confident what consciousness is seemed to be a concern to anotherblackhat. Defining consciousness would have removed that concern.

No need to “read between the lines” as it was a straightforward question. I really didn’t understand why the definition of consciousness wasn’t laid out in advance of the thought experiment.

Defining terms allows one to communicate more effectively with others, which is really important in any conversation but essential in presenting a hypothesis.

I was informed by Dlthomas that conceptspace is different from thingspace, so I think I get the gist of it now.
However, my point was, and is, that the theorist’s definitions are crucial to the hypothesis, and hypotheses don’t care at all about goals, preferences, and values. Hypotheses simply introduce the actors, define the terms in the script, and set the stage for the first act. Now we can move on to the theory and hopefully form a conclusion.

No need to apologize; it is easy to misunderstand me, as I am not very articulate to begin with, and, as usual, I don’t understand what I know about it!
ADDED: And I still need to learn how to narrow the inferential gap!
Agreed that hypotheses don’t care about goals, preferences, or values. Agreed that for certain activities, well-defined terms are more important than anything else.

This seems to exactly contradict your first paragraph. What if I define “conscious” as “made of cells”?
If you don’t know for sure what consciousness is, you define it as best you can and proceed forward to see if your hypothesis is rational and the theory is possible. If you define “conscious” as “made of cells”, then everyone knows right away that a GLUT is not conscious (that is, if it is not made of cells) by YOUR definition, and tells you that you are being irrational: please go back to the drawing board!
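To spell the “made of cells” example out (made-up names, same sketch style as above): once the definition is pinned down, the verdict on the GLUT follows mechanically, and the only remaining argument is whether that definition was a good choice of referent.

-- Sketch: with "conscious" *defined* as "made of cells", classification is
-- immediate and uninteresting; the real dispute is about the definition itself.
data Thing = Thing { name :: String, madeOfCells :: Bool }

consciousByDef :: Thing -> Bool
consciousByDef = madeOfCells

human, glut :: Thing
human = Thing "human" True
glut  = Thing "GLUT"  False   -- a look-up table is not made of cells

-- consciousByDef human  ==  True
-- consciousByDef glut   ==  False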