This question requires agreement on a definition of what “consciousness” is. I think many disagreements about “consciousness” would be well served by tabooing the word.
So what is the property that you are unsure WBEs would have? It must be a property that could in principle be measured by an external, objective procedure. “Subjective experience” is just as ill defined as “consciousness”.
While waiting for an answer, I will note this:
A successful WBE should exhibit all the externally observable behaviors of a human—as a black box, without looking at implementation details. This definition seems to restrict the things you could be unsure about to either implementation details (“only biological machines can be conscious”), or to things that are not the causes of anything (philosophical zombies).
Ultimately you can proclaim literally anything undefined in such a manner, e.g. “a brick”. What exactly is a brick? Clay is equally in need of definition, and if you define clay, you’ll need to define other things in turn.
Let me try to explain. There is this disparity between the fairly symmetrical objective picture of the world, which has multiple humans, and the subjective picture (i.e. literally what you see with your own eyes), which needs extra information to locate whose eyes the picture is coming from, so to speak, and some yet unknown mapping from that information to a choice of being, a mapping that may or may not include emulations among its possible outputs.
(That’s an explanation; I don’t myself think that building some “objective picture” and then locating a being inside it is a good approach).
Ultimately you can proclaim literally anything undefined in such a manner, e.g. “a brick”. What exactly is a brick? Clay is equally in need of definition, and if you define clay, you’ll need to define other things in turn.
I’m doing my best to argue in good faith.
When you say “brick”, I have a pretty good idea of what you mean. I could be wrong, I could be surprised, but I do have an assumption with high confidence.
But when you say “consciousness in a WBE”, I really honestly don’t know what it is you mean. There are several alternatives—different things that different people mean—and also there are some confused people who say such words but don’t mean anything consistent by them (e.g. non-materialists). So I’m asking you to clarify what you mean. (Or asking the OP, in this case.)
There is this disparity between the fairly symmetrical objective picture of the world, which has multiple humans, and the subjective picture (i.e. literally what you see with your own eyes), which needs extra information to locate whose eyes the picture is coming from
So far I’m with you. Today I can look down and see my own body and say “aha, that’s who I am in the objective world”. If I were a WBE I could be connected to very different inputs and then I would be confused and my sense of self could change. That’s a very interesting issue but that doesn’t clarify what “consciousness” is.
some yet unknown mapping from that information to a choice of being, a mapping that may or may not include emulations among its possible outputs.
I’ve lost you here. What does “a choice of being” mean? What is this mapping that includes some… beings… and not others?
And here is the question: does that sentence describe an actual possibility or not?
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that does this purely mechanical operation? This database does not think, it just answers. For all I know you might be such a database, but I am pretty sure that I am not such a database nor would I want to be replaced with such a database.
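Here is a minimal sketch of the kind of system that paragraph describes, purely for illustration; the questions, the canned answers, and the lookup helper are all invented for the example. Externally it answers; internally nothing happens but a table lookup.

```c
#include <stdio.h>
#include <string.h>

/* A toy "answer database": every question it will ever be asked, paired with
 * a precomputed reply. Nothing is computed at query time beyond string matching. */
static const char *table[][2] = {
    {"What is 2+2?",       "4"},
    {"Are you conscious?", "Of course I am."},
    {"What did you see?",  "A cat, slightly blurry."},
};

static const char *lookup(const char *question) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i][0], question) == 0)
            return table[i][1];
    return "I don't know.";
}

int main(void) {
    /* Externally this looks like a conversation; internally it is one table lookup. */
    printf("%s\n", lookup("Are you conscious?"));
    return 0;
}
```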
Or let’s consider two programs that take a string and always return zero. One runs a WBE twice, letting it input a number into a textbox, then returns the difference of those numbers (which is zero). The other just plain returns zero. Mathematically they are identical; physically they are distinct processes. If we are to proclaim that they are subjectively distinct (you could be living in one of them right now, but not in the other), then we consider two different physical systems that implement the same mathematical function to be very distinct as far as being those systems goes.
Which of course makes problematic any argument that a WBE must be the same as a biological brain based on some mathematical equivalence, as even among WBEs, mathematical equivalence does not guarantee subjective equivalence.
(I for one think that brain simulators are physically similar enough to biological brains that I wouldn’t mind being replaced by a brain simulation of me, but that’s not because of some mathematical equivalence; it’s because they are physically quite similar, unlike a database of every possible answer, which would be physically very distinct. I’d be wary of doing extensive optimization of a brain simulation of me into something mathematically equivalent but simpler).
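As a concrete sketch of the two programs described a few lines up: the run_wbe_once stub below, its prompt argument, and the number it returns are hypothetical stand-ins for one deterministic run of a whole-brain emulation, not any real API.

```c
#include <stdio.h>

/* Stand-in for one deterministic run of a whole-brain emulation that ends with
 * the emulated person typing a number into a textbox. A real emulation would do
 * an enormous amount of internal work before producing this value; here it is
 * just a hypothetical stub. */
static long run_wbe_once(const char *prompt) {
    (void)prompt;   /* the prompt would be shown to the emulated person */
    return 42;      /* whatever number the emulation happens to type */
}

/* Program A: run the emulation twice on the same input and return the
 * difference, which is always zero for a deterministic emulation. */
static long program_a(const char *input) {
    return run_wbe_once(input) - run_wbe_once(input);
}

/* Program B: just return zero, with no emulation at all. */
static long program_b(const char *input) {
    (void)input;
    return 0;
}

int main(void) {
    /* The two programs compute the same mathematical function of their input,
     * but only one of them physically contains two runs of an emulated person. */
    printf("A: %ld  B: %ld\n", program_a("pick a number"), program_b("pick a number"));
    return 0;
}
```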
I’ve lost you here. What does “a choice of being” mean? What is this mapping that includes some… beings… and not others?
Well, your “if I were a WBE” is an example of you choosing a WBE for example purposes.
OK, I understand your position now. You’re saying (correct me if I’m wrong) that when I have uncertainty about what is implementing “me” in the physical world—whether e.g. I’m a natural human, or a WBE whose inputs lie to it, or a completely different kind of simulated human—then if I rule out certain kinds of processes from being my implementations, that is called not believing these processes could be “conscious”.
Could I be a WBE whose inputs are remotely connected to the biological body I see when I look down? (Ignoring the many reasons this would be improbable in the actual observed world, where WBEs are not known to exist.) I haven’t looked inside my head to check, after all. (Actually, I’ve done CT scans, but the doctors may be in on the plot.)
I don’t see a reason why I shouldn’t be able to be a WBE. Take the scenario where a human is converted into a WBE by replacing one neuron at a time with a remotely controlled IO device, connected wirelessly to a computer emulating that neuron. And it’s then possible to switch the connections to link with a physically different, though similar, body.
I see no reason to suppose that, if I underwent such a process, I would stop being “conscious”, either gradually or suddenly.
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that does this purely mechanical operation? This database does not think, it just answers.
That I’m less certain about. The brain’s internal state and implementation details might be relevant. But that is exactly why I have a much higher prior that a WBE is “conscious” than that any other black-box functional equivalent of a brain is conscious.
Your neurons (ETA: individually or collectively) do not think; they just operate ligand-gated ion channels (among assorted other things, you get the point).
One runs a WBE twice, letting it input a number into a textbox, then returns the difference of those numbers (which is zero). The other just plain returns zero. Mathematically they are identical; physically they are distinct processes. If we are to proclaim that they are subjectively distinct (you could be living in one of them right now, but not in the other), then we consider two different physical systems that implement the same mathematical function to be very distinct as far as being those systems goes.
That example deserves a post of its own, excellent. Nearly any kind of WBE would rely on optimizing (while maintaining functional equivalence) or translating to a different substrate. The resulting WBE would still proclaim itself to be conscious, and for most people that would be enough to think it so.
However, how do we know which of the many redundancies we could get rid of, and which are instrumental to actually experiencing consciousness? If output behavior is strictly sufficient, then int main() { printf("I'm conscious right now"); } and a human saying the same line would both be conscious at that moment?
If output behavior isn’t strictly sufficient, how will we ever encode neural patterns in silico, if the one parameter we can measure (how the system behaves) isn’t trustworthy?
One would do well not to confuse the parts with the whole. After all, transistors do not solve chess problems.
Yes, which is why I used that as a reductio for “This database does not think, it just answers.”
In the thought experiment, the database is the entirety of the replacement, which is why the analogy to a single neuron is inappropriate. (Unless I’ve misunderstood the point of your analogy. Anyway, it’s useless to point to neurons as an example of a thing that also doesn’t think, because a neuron by itself also doesn’t have consciousness. It’s the entire brain that is capable of computing anything.)
I disagree that it’s just the entire brain that is capable of computing anything, and I didn’t mean to compare to a single neuron (hence the plural “s”).
However, I highlighted the simplicity of the actions available to single neurons to counteract “a database just does lookups, surely it cannot be conscious”. Why should (the totality of) neurons just opening and closing simple structures be conscious, and a database not be? Both rely on simple operations as atomic actions, and on simple structures as a physical substrate. Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality); why not to a large number of capacitors (on which a database is stored)?
I.e. the point was to put them in a similar class, or at least to show that we cannot trivially put databases in a different class than neural networks.
Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality); why not to a large number of capacitors (on which a database is stored)?
The problem is that this argument applies equally well to “why not consider rocks (which, like brains, are made of a large number of atoms) conscious”. Simply noting that they’re made of simple parts leaves the high-level structure unexamined.
Well, I just imagined a bunch of things—a Rubik’s cube spinning, a piece of code I worked on today, some of my friends, a cat… There are patterns of activations of neurons in my head which correspond to those things. Perhaps somewhere there’s even an actual distorted image.
Where in the database is the image of that cat, again?
By the way, there are a lot of subjectively distinct ways to produce the above string as well. I could simply have memorized the whole paragraph, and memorized that I must say it at such-and-such a date and time. That’s clearly distinct from actually imagining those things.
One could picture an optimization of WBEs that would entirely wipe out the ability to mentally visualize things and perceive them, with or without an extra hack so that the WBE acts as if it did visualize them (e.g. it could instead use some CAD/CAM tool without ever producing a subjective experience of seeing an image from that tool. One could argue that this tool did mentally visualize things, yet there are different ways to integrate such tools: some involve you actually seeing the output from the tool, and some don’t. Absent an extra censorship hack, you would be able to tell us which one you’re using; with such a hack present, you would be unable to tell us, but the hack may be so structured that we are very assured it doesn’t alter any internal experiences, only external ones).
edit: the bottom line is, we all know that different subjective experiences can produce the same objective output. When you are first doing some skilful work, you feel yourself thinking about it, a lot. When you do it long enough, your neural networks optimize, and the outcome is basically the same, but internally you no longer feel how you do it; it’s done on instinct.
Not every technique applies to every problem. Tabooing the word “fire” won’t help you understand fire. Thinking really hard and using all those nice philosophical tools from LW won’t help either. I think the problem of consciousness will be solved only by experimental science, and not any sooner.
Tabooing isn’t about understanding what something is or how it works. It’s about understanding what another person means when they use a word.
When you say “fire” you refer to a thing that you expect the listener to know about. If someone who doesn’t speak English well asks you what “fire” is—asks you to taboo the word “fire”—you will be able to answer. Even though you may have no idea how fire works.
I’m asking to taboo “consciousness” because I’ve seen many times that different people mean different things when they use that word. And a lot of them don’t have any coherent or consistent concept at all that they refer to. Without a coherent concept of what is meant by “consciousness”, it’s meaningless to ask whether “consciousness” would be present or absent in a WBE.
I’m asking to taboo “consciousness” because I’ve seen many times that different people mean different things when they use that word.
I don’t believe that they actually mean different things. Consciousness, like fire, is something we all know about. It sounds more like you’re pushing people to give more detail than they know, so they make up random answers. I can push you about “fire” the same way; it will just take a couple more steps to get to qualia. Fire is that orange thing—sorry, what’s “orange”? :-) The exercise isn’t helpful.
A successful WBE should exhibit all the externally observable behaviors of a human—as a black box, without looking at implementation details. This definition seems to restrict the things you could be unsure about to either implementation details (“only biological machines can be conscious”), or to things that are not the causes of anything (philosophical zombies).
This question is more subtle than that.
without looking at implementation details.
Is there any variation in “implementation” that could be completely hidden from outside investigation? Can there be completely undetectable physical differences?
A successful WBE should exhibit all the externally observable behaviors of a human—as a black box
We can put something in a box, and agree not to peek inside the box, and we can say that two such systems are equivalent as far as what is allowed to manifest outside the box. But different kinds of black box will yield different equivalences. If you are allowed to know that box A needs an oxygen supply, and that box B needs an electricity supply, that’s a clue. Equivalence is equivalence of a chosen subset of behaviours. No two things are absolutely, acontextually equivalent unless they are physically identical. And to draw the line between relevant behaviour and irrelevant implementation correctly would require a pre-existing perfect understanding of the mind-matter relationship.
I wasn’t arguing that differences in implementation are not important. For some purposes they are very important. I’m just pointing out that you are restricted to discussing differences in implementation, and so the OP should not be surprised that people who wish to claim that WBEs would not be “conscious” support implausible theories such as “only biological systems can be conscious”.
We should not discuss the question of what can be conscious, however, without first tabooing “consciousness” as I requested.
I wasn’t arguing that differences in implementation are not important. For some purposes they are very important.
I am not arguing they are important. I am arguing that there are no facts about what is an implementation unless a human has decided what is being implemented.
We should not discuss the question of what can be conscious, however, without first tabooing “consciousness” as I requested.
I don’t think the argument requires consc. to be anything more than:
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour.
Fine, but what is it?
What makes you think I know?
If you use the word “consciousness”, you ought to know what you mean by it. You should always be able to taboo any word you use. So I’m asking you, what is this “consciousness” that you (and the OP) talk about?
If you use the word “consciousness”, you ought to know what you mean by it.
The same applies to you. Any English speaker can attach a meaning to “consciousness”. That doesn’t imply the possession of deep metaphysical insight. I don’t know what dark matter “is” either. I don’t need to fully explain what consc. “is”, since…
“I don’t think the argument requires consc. to be anything more than:
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour.”
You repeatedly miss the point of my argument. If you were teaching English to a foreign person, and your dictionary didn’t contain the word “consciousness”, how would you explain what you meant by that word?
I’m not asking you to explain to an alien. You can rely on shared human intuitions and so on. I’m just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.
I’m just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.
I have already stated those aspects of the meaning of “consciousness” necessary for my argument to go through. Why should I explain more?
You mean these aspects?
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour.
A lot of things would satisfy that definition without having anything to do with “consciousness”. An inert lump of metal stuck in your brain would satisfy it. Are you saying you really don’t know anything significant about what the word “consciousness” means beyond those two requirements?
Yep. They weren’t an exhaustive definition of consc., and weren’t said to be. No-one needs to infer the subject matter from 1) and 2), since it was already given.
Tell me, are you like this all the time? You might make a good roommate for Dr. Sheldon Cooper.
I think the conversation might as well end here. I wasn’t responsible for the first three downvotes, but after posting this reply I will add a fourth downvote.
There was a clear failure to communicate and I don’t feel like investing the time explaining the same thing over and over again.