I have to say I agree with Charles’ proposition. I mean, if one thinks “I am thinking,” neurons have to fire in one’s head
A) to think
B) to say “I am thinking”
C) to realize one is saying he or she is thinking
D) to determine the cause and thought process of all of the above, and
E) to rationalize the behavior of our brains in an inductive-reasoning sense of the word.
So, if all of the above are true, along with the aforementioned butterfly effect by which a misplaced neuron can trigger a seizure, then if one’s neurons were replaced by other completely identical neurons, you would have consciousness, but not the same consciousness, and not necessarily a human consciousness.

(The argument also depends on whether one believes randomness is truly random. If it is, then the robot neurons could not replicate that process with an algorithm, since in that case there isn’t one; the randomness of human consciousness would then be exactly what separates the robot consciousness from the human one, assuming the robot consciousness is considered “conscious” at all. That would mean zombies COULD exist, due to the lack of randomness in a robot-neuron-composed brain.)

BUT if randomness isn’t actually random at all, and quantities like the digits of pi just follow a very complex pattern, then who’s to say robots cannot replicate the pattern? In that case human consciousness would be replicable, and there would be no difference between conscious robots and conscious humans, but the unconscious would not be unconscious unless they were dead, thus proving the GAZP. Does anyone else agree, or am I missing something?