The author is overly concerned about whether a creature will be conscious at all, and not concerned enough about whether it will have the kind of experiences that we care about.
My understanding is that if the creature is conscious at all, and it acts observably like a human with the kind of experience we care about, THEN it likely has the kind of experiences we care about.
Do you think it is likely that the creatures will NOT have the experiences we care about? (Just trying to make sure we’re on the same page.)
It depends on how the creatures got there: algorithms or functions? That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed. Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they’re conscious of something, but we have no idea what that may be like.
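To make the algorithm/function distinction concrete, here is a minimal sketch (names such as report_colour_stepwise and wavelength_nm are invented for the illustration, not taken from the thread) of two programs that compute the same coarse-grained input/output function by different internal algorithms:

```python
# Two programs with identical coarse-grained input/output behaviour
# but different internal algorithms.

def report_colour_stepwise(wavelength_nm: float) -> str:
    """Builds an intermediate internal representation before answering."""
    # Stage 1: transduce the input into an internal feature.
    if wavelength_nm >= 620:
        internal_state = "long-wavelength percept"
    elif wavelength_nm >= 495:
        internal_state = "medium-wavelength percept"
    else:
        internal_state = "short-wavelength percept"
    # Stage 2: map the internal feature onto a verbal report.
    return {
        "long-wavelength percept": "red",
        "medium-wavelength percept": "green",
        "short-wavelength percept": "blue",
    }[internal_state]

def report_colour_lookup(wavelength_nm: float) -> str:
    """Skips any intermediate representation: direct threshold tests."""
    return "red" if wavelength_nm >= 620 else "green" if wavelength_nm >= 495 else "blue"

# Externally indistinguishable: same output for every input tested.
assert all(
    report_colour_stepwise(w) == report_colour_lookup(w)
    for w in (450, 530, 700)
)
```

Whether anything about experience could hang on that purely internal difference is exactly what the rest of the thread disputes.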
That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed
You seem to be rather sanguine about the equivalence of thoughts and experiences.
(And are we talking about equivalent experiences or identical experiences? Does a tomato have to be coded as red?)
Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they’re conscious of something, but we have no idea what that may be like.
It’s uncontroversial that the same coarse input-output mappings can be realised by different algorithms... but if you are saying that consciousness supervenes on the algorithm, not the function, then the real possibility of zombies follows, in contradiction to the GAZP (the Generalized Anti-Zombie Principle).
(Actually, the GAZP is rather terrible, because it means you won’t even consider the possibility of a WBE (whole brain emulation) not being fully conscious, rather than refuting it on its own grounds.)
I’m not equating thoughts and experiences. I’m relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.
I’m not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I’d probably object and on others I wouldn’t.
You only get your guarantee if experiences are the only thing that can cause thoughts about experiences. However, you don’t get that by noting that in humans thoughts are usually caused by experiences. Moreover, in a WBE or AI, there is always a causal account of thoughts that doesn’t mention experiences, namely the account in terms of information processing.
You seem to be inventing a guarantee that I don’t need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.
Mentioning something is not a prerequisite for having it.
If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience
That reads like a non sequitur to me. We don’t know what the relationship between algorithms and experience is.
Mentioning something is not a prerequisite for having it.
It’s possible for a description that doesn’t explicitly mention X to nonetheless add up to X, but only possible; you seem to be treating it as a necessity.
I’m convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn’t be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers’s Absent Qualia argument, I’m eager to hear it.
If you mean this sort of thing (http://www.kurzweilai.net/slate-this-is-your-brain-on-neural-implants), then he is barely arguing the point at all. This is miles below philosophy-grade thinking; he doesn’t even set out a theory of selfhood, just appeals to intuitions. Absent Qualia is much better, although still not anything that should be called a proof.
I got started by Sharvy’s “It Ain’t the Meat, It’s the Motion”, but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.
Everyone should care about pain-pleasure spectrum inversion!