This is my first article on LW, so be gentle.
This is why it’s strongly recommended to try out an article idea on the Open Thread first.
You owe it to your readers to have clearly organized and well-explained thoughts before writing a top-level post, and the best way to get there is to discuss your ideas with veterans first. If you say in advance that you want to write a top-level post, we’ll respect that; I’ve never seen anyone here poach a post idea (though of course others may want to write their own ideas on the topic).
People are welcome to poach my ideas, as I have more of them than time to write.
Right; I meant that people don’t do so without permission.
Oh yes, I agree. I was just making a note of it since otherwise, given your observation, people would not poach my ideas; I would thus be decreasing the number of good LW posts by naming them!
There seems to be a problem with the paragraph formatting at the beginning. More line breaks maybe?
Yes, for some reason the top paragraphs have style="margin-bottom: 0in;", which makes them stick together.
Some other things that would help make the post more readable:
- Breaking it up into sub-sections with titles.
- Adding a short summary at the beginning that tries to whet my appetite.
EDIT: I realise that you asked us to be gentle, and all I’ve done is point out flaws. Feel free to ignore me.
You explore many interesting ideas, but none of them are backed up with enough evidence to be convincing. I doubt that anything you’ve said is correct. The first example of this is this statement:
Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness.
How do you know?
What if tomorrow a biologist worked out what causes consciousness and created a simple scan for it? What evidence do you have that would make you surprised if this happened?
First an entity must have a “self detector”; a pattern recognition computation structure which it uses to recognize its own state of being an entity and of being the same entity over time.
Why? What is it that actually makes it impossible to have a conscious entity (one that has qualia) that is not self-aware (one that knows some things about itself)?
Recommended reading: http://lesswrong.com/lw/jl/what_is_evidence/
We can’t “know for sure” because consciousness is a subjective experience. The only way you could “know for sure” would be if you simulated an entity and so knew from how you put the simulation together that the entity you were simulating did experience self-consciousness.
So how does this hypothetical biologist calibrate his consciousness scanner? Calibrate it so that he “knows for sure” that it is reading consciousness correctly? His degree of certainty in the output of his consciousness scanner is limited by his degree of certainty in his calibration standards, even if the scanner itself works perfectly.
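This calibration point can be made concrete with a toy probability calculation (all numbers here are hypothetical, chosen only for illustration): even a scanner that is perfectly accurate when correctly calibrated cannot deliver more certainty than we have in the calibration itself.

```python
# Toy model: certainty in a scanner reading is capped by calibration certainty.
# All numbers are hypothetical, chosen only for illustration.

p_calibrated = 0.9                # confidence that the calibration standard is right
p_correct_if_calibrated = 1.0     # scanner assumed perfect *given* good calibration
p_correct_if_miscalibrated = 0.5  # a miscalibrated scanner is no better than chance

# Total probability that a given reading is correct, by the law of total probability:
p_reading_correct = (p_correct_if_calibrated * p_calibrated
                     + p_correct_if_miscalibrated * (1 - p_calibrated))

print(p_reading_correct)  # 0.95 — still short of certainty
```

Even with an idealized, perfectly accurate instrument, the residual doubt about the calibration standard keeps the reading's reliability below 1.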
In order to be aware of something, you need to detect something. To detect something you need to receive sensory data and then process that data via pattern recognition into detection or not detection.
To detect consciousness your hypothetical biologist needs a “consciousness scanner”. So does any would-be detector of any consciousness. That “consciousness scanner” has to have certain properties whether it is instantiated in electronics or in meat. Those properties include receipt of sufficient data and then pattern recognition on that data to determine a detection or a not detection. That pattern recognition will be subject to type 1 errors and type 2 errors.
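As a sketch of how those two error types limit any detector's verdict, here is a Bayes'-rule calculation with made-up error rates (none of these numbers come from the thread):

```python
# Toy consciousness scanner with a type 1 (false positive) rate and a
# type 2 (false negative) rate; all numbers are hypothetical.

prior = 0.5            # prior probability the scanned entity is conscious
false_positive = 0.05  # type 1 error: detects consciousness where there is none
false_negative = 0.10  # type 2 error: misses consciousness that is present

# P(detect) = P(detect | conscious)P(conscious) + P(detect | not conscious)P(not conscious)
p_detect = (1 - false_negative) * prior + false_positive * (1 - prior)

# Posterior probability of consciousness given a positive reading (Bayes' rule):
posterior = (1 - false_negative) * prior / p_detect

print(round(posterior, 3))  # 0.947 — a detection is evidence, never proof
```

So long as either error rate is nonzero, a detection shifts the probability but never reaches "knowing for sure."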
A machine is an entirely different kind of being than an animal. It doesn’t need to search for food, it doesn’t have sex, it doesn’t have to fight to survive, etc…
Humans are good at ‘pattern recognition’ but bad at arithmetic. With computers it’s the other way around. If computers ever become fast enough to match our capabilities, they will not suddenly become bad at math like us. They will be even better at it!
So, because we are vastly different, there is no reason to assume that they’re ever going to experience the world like we do. We can program them that way, but then you just end up with a machine ‘pretending’ to be conscious.
I’m not saying machines can’t be conscious, just that their consciousness will be (or already is) entirely different from ours, and that they can only measure it against their own unique standards; it’s pointless to do it with ours.
To be honest you lost me at ‘consciousness’.
The whole question of computational requirements here seems to be just a function of an arbitrary word definition that is never actually given.