I highly doubt this on an intuitive level. If I draw a picture of a man being shot, is it suffering? Naturally not, since those are just ink pigments on a sheet of cellulose. Suffering seems to need a lot of complexity and also seems deeply connected to biological systems. AI/computers are just a “picture” of these biological systems. A pocket calculator appears to do something similar to the brain, but in reality it is far less complex and is doing something completely different. In reality it’s just an electric circuit. Are lightbulbs moral patients?
Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.
A few questions:

Suffering seems to need a lot of complexity

Can you elaborate on this?

and also seems deeply connected to biological systems.

I think I agree. Of course, all of the suffering that we know about so far is instantiated in biological systems. It depends on what you mean by “deeply connected.” Do you mean that you think the biological substrate is necessary, i.e. that you hold a biological theory of consciousness?
AI/computers are just a “picture” of these biological systems.
What does this mean?
Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.
Can you elaborate? Are you saying that, unless we deliberately try to build in some complex stuff that is necessary for suffering, AI systems won’t ‘naturally’ have the capacity for suffering? (i.e. you’ve ruled out the possibility that Steven Byrnes raised in his comment)
1.) Suffering seems to need a lot of complexity, because it demands consciousness, which is the most complex thing that we know of.
2.) I personally suspect that the biological substrate is necessary (of course, I can’t be sure), for the reasons I mentioned, like sleep and death. I can’t imagine a computer that doesn’t sleep and can operate for trillions of years being conscious, at least in any way that resembles an animal. It may be superintelligent but not conscious. Again, just my suspicion.
3.) I think it’s obvious: it means that we are trying to recreate something that biological systems do (arithmetic, image recognition, playing games, etc.) on these electronic systems called computers or AI. Just like we try to recreate a murder scene with pencils and paper. But the murder drawing isn’t remotely a murder; it’s only a basic representation of a person’s idea of a murder.
4.) Correct. I’m not completely excluding that possibility, but like I said, it would take great luck to get there not on purpose. Maybe not the “winning the lottery” luck I mentioned, but perhaps a 1 to 5% probability.
We must understand that suffering takes consciousness, and consciousness takes a nervous system. Animals without one aren’t conscious. The nature of computers is so drastically different from that of a biological nervous system (and, at least until now, much less complex) that I think it would be quite unlikely for us to unintentionally generate this very complex, unique, and unknown property of biological systems that we call consciousness. I think it would be a great coincidence.
Either consciousness is a mechanism that has been recruited by evolution for one of its abilities to efficiently integrate information, or consciousness is a type of epiphenomenon that serves no purpose.
Personally, I think that consciousness, whatever it is, serves a purpose: it is important for systems that need to sort out anecdotal information from information that deserves more extensive consideration. It is possible that this is the only way to process information effectively, and therefore that in trying to program an AGI, one naturally comes across it.
Consciousness definitely serves a purpose, from an evolutionary perspective. It’s definitely an adaptation to the environment, by offering a great advantage, a great leap, in information processing.
But from there to saying that it is the only way to process information is a big leap. I mean, once again, just think of the pocket calculator. Is it conscious? I’m quite sure that it isn’t.
I think that consciousness is a very biological thing. The thing that makes me doubt consciousness in non-biological systems the most (let alone in the current ones, which are still very simple) is that they don’t need to sleep and can function indefinitely. Consciousness seems to have these limits. Can you imagine never sleeping? Never dying? I don’t think that would be possible for any conscious being, at least one remotely similar to us.
to say that [consciousness] is the only way to process information
I don’t think anyone was claiming that. My post certainly doesn’t. If one thought consciousness were the only way to process information, wouldn’t there not even be an open question about which (if any) information-processing systems can be conscious?
I never said you claimed such either, but Charbel did.
“It is possible that this [consciousness] is the only way to effectively process information”
I was replying to his reply to my comment, hence I mentioned it.