I don’t know of a human-independent definition of consciousness, do you? If not, how can one say that “something else is conscious”? So the statement
increasingly complex simulations of humans could be both “obviously” not conscious but be mistaken by others as conscious
will only make sense once there is a definition of consciousness not relying on being a human or using one to evaluate it. (I have a couple ideas about that, but they are not firm enough to explicate here.)
I don’t know of ANY definition of consciousness which is testable, human-independent or not.
Integrated Information Theory is one attempt at a definition. I read about it a little, but not enough to determine if it is completely crazy.
IIT provides a mathematical approach to measuring consciousness. It is not crazy, and there are a significant number of good papers on the topic. It is also human-independent.
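Very roughly, and glossing over details that differ between versions of the theory, the central quantity Φ is meant to capture how much information a system generates as a whole, above and beyond the information its parts generate independently, minimized over the ways of cutting the system apart. Schematically (my paraphrase, not the official definition):

    \Phi(X) \approx \min_{\text{partitions } P} \Big[ I(X \text{ as a whole}) - \sum_{M \in P} I(M) \Big]

where I(·) is “information generated”, defined in terms of how the system’s current state constrains its possible prior states; the real technical content is in how that quantity and the minimum-information partition are defined.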
I don’t understand it, but from reading the Wikipedia summary it seems to me that it measures the complexity of the system. Complexity is not necessarily consciousness.
According to this theory, what is the key difference between a human brain, and… let’s say a hard disk of the same capacity, connected to a high-resolution camera? Let’s assume that the data from the camera are being written in real time to pseudo-random parts of the hard disk. The pseudo-random parts are chosen by calculating a checksum of the whole hard disk. This system obviously is not conscious, but seems complex enough.
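In code, the mechanism I have in mind is something like this (a toy sketch; the sizes, the block size, and the use of SHA-256 as the checksum are arbitrary stand-ins):

    import hashlib
    import os

    DISK_SIZE = 1 << 20          # toy "hard disk": 1 MiB instead of terabytes
    BLOCK = 4096                 # how much of each camera frame gets stored
    disk = bytearray(DISK_SIZE)

    def write_frame(frame: bytes) -> None:
        # Pick a pseudo-random block by checksumming the entire current disk,
        # then overwrite that block with (a slice of) the incoming frame.
        checksum = hashlib.sha256(disk).digest()
        block_index = int.from_bytes(checksum[:8], "big") % (DISK_SIZE // BLOCK)
        offset = block_index * BLOCK
        disk[offset:offset + BLOCK] = frame[:BLOCK].ljust(BLOCK, b"\x00")

    # Stand-in for camera frames arriving in real time.
    for _ in range(10):
        write_frame(os.urandom(BLOCK))

The next state of the disk depends on its entire current contents (through the checksum), so in an informal sense everything is coupled to everything, which is why it looks “complex enough” to me.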
IIT proposes that consciousness is integrated information.
The key difference between a brain and the hard disk is that the disk has no way of knowing what it is actually sensing. A brain can tell the difference between many more kinds of sensory input, and can receive and use more forms of information. The camera is not conscious of the fact that it is sensing light and colour.
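To make “integrated” slightly more concrete, here is a deliberately crude toy, not the real Φ calculation (which is defined over perturbations of a system’s mechanism, not over a static distribution): take the minimum, over all ways of cutting a small system in two, of the mutual information between the two halves. A pile of independent parts scores zero no matter how many parts there are; the score is only high when no cut is cheap.

    from itertools import combinations, product
    from math import log2

    def mutual_information(p, part_a, part_b):
        """Mutual information (bits) between two groups of variable indices
        under a joint distribution p: {state tuple: probability}."""
        def marginal(part):
            m = {}
            for state, prob in p.items():
                key = tuple(state[i] for i in part)
                m[key] = m.get(key, 0.0) + prob
            return m
        pa, pb = marginal(part_a), marginal(part_b)
        mi = 0.0
        for state, prob in p.items():
            if prob > 0.0:
                a = tuple(state[i] for i in part_a)
                b = tuple(state[i] for i in part_b)
                mi += prob * log2(prob / (pa[a] * pb[b]))
        return mi

    def toy_integration(p, n):
        """Minimum mutual information over all bipartitions of n variables.
        Not the real IIT Phi -- just the 'weakest cut' idea."""
        best = float("inf")
        for size in range(1, n // 2 + 1):
            for part_a in combinations(range(n), size):
                part_b = tuple(i for i in range(n) if i not in part_a)
                best = min(best, mutual_information(p, part_a, part_b))
        return best

    # Three independent fair coins: any cut loses nothing, so the score is 0.
    independent = {s: 1 / 8 for s in product((0, 1), repeat=3)}
    # Three perfectly correlated coins: every cut severs 1 bit.
    correlated = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
    print(toy_integration(independent, 3))  # -> 0.0
    print(toy_integration(correlated, 3))   # -> 1.0

The point of the minimum over cuts is the same as in IIT proper: information sitting in parts that do not constrain each other does not count; only information the system carries across every possible cut does.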
This article is a good introduction to the topic, and the photodiode example in the paper is a simple version of your question: http://www.biolbull.org/content/215/3/216.full
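For what it’s worth, the way I read the photodiode example: a photodiode has a repertoire of only two states, so detecting “light” generates at most log2(2) = 1 bit, no matter how rich the scene in front of it is. When you consciously see the same light, you are implicitly discriminating it from an enormous repertoire of alternative experiences (dark, red, a face, a whole street scene…), i.e. roughly log2(N) bits for a very large N, and the paper’s further claim is that this discrimination is performed by one integrated mechanism rather than by a heap of independent one-bit detectors.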
Thanks! The article was good. At this moment, I am… not convinced, but also not able to find an obvious error.