Isn’t our objection to A’s position that it doesn’t pay rent in anticipated experience? If one thinks there is a “hard problem of consciousness” such that different answers would cause one to behave differently, then one must take up the burden of identifying what the difference would look like, even if we can’t create a measuring device to find it just now.
If A means that we cannot determine the difference in principle, then there’s nothing we should do differently. If A means that a measuring device does not currently exist, he needs to identify the range of possible outputs of the device.
This may be a situation where that’s a messy question. After all, qualia are experience. I keep expecting experiences, and I keep having experiences. Do experiences have to be publicly verifiable?
If two theories both lead me to anticipate the same experience, the fact that I have that experience isn’t grounds for choosing among them.
So, sure, the fact that I keep having experiences is grounds for preferring a theory of subjective-experience-explaining-but-otherwise-mysterious qualia over a theory that predicts no subjective experience at all, but not necessarily grounds for preferring it to a theory of subjective-experience-explaining-neural-activity.
They don’t necessarily once you start talking about uploads, or the afterlife for that matter.
Different answers to the HP would undoubtedly change our behaviour, because they would indicate that different classes of entity have feelings, which bears on morality. Indeed, it is pretty hard to think of anything more impactful.
The measuring device for conscious experience is consciousness, which is the whole problem.
Sure. But in this sense, false believed answers to the HP are no different from true believed answers… that is, they would both potentially change our behavior the way you describe.
I suspect that’s not what TimS meant.
That is the case for most any belief you hold (unless you mean “in the exact same way”, not just “change behavior”). You may believe there’s a burglar in your house, and that will impact your actions, whether it be false or true. Say you believe that it’s more likely there is a burglar; you are correct in acting upon that belief even if it turns out to be incorrect. It’s not AIXI’s fault if it believes in the wrong thing for the right reasons.
In that sense, you can choose an answer based, for example, on complexity considerations. In the burglar example, the answer you choose (based on data such as crime rate, cat population, etc.) can potentially be further experimentally “verified” (its probability increased) as true or false, but even before such verification, your belief can still be strong enough to act upon.
After all, you do act upon your belief that “I am not living in a simulation which will eventually judge and reward me only for the amount of cheerios I’ve eaten”. It doesn’t lead to different expected experiences at the present time, yet you choose to act as if it were true. A prior based on complexity considerations alone, yet strong enough to act upon. Same when thinking about whether the sun has qualia (“hot hot hot hot hot”).
(Bit of a hybrid fusion answer also meant to refer to our neighboring discussion branch.)
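The burglar reasoning can be made concrete as a toy Bayesian update, with all numbers invented for illustration: a prior taken from base rates, raised by a piece of evidence, can already be strong (or weak) enough to act upon before any further verification.

```python
# Toy Bayesian update for the burglar example (all numbers invented).
# A prior from base rates is updated on hearing a noise downstairs.

def posterior(prior, p_noise_given_burglar, p_noise_given_no_burglar):
    """P(burglar | noise) via Bayes' theorem over two hypotheses."""
    p_noise = (prior * p_noise_given_burglar
               + (1 - prior) * p_noise_given_no_burglar)
    return prior * p_noise_given_burglar / p_noise

prior = 0.001  # base rate of a burglary on a given night, from crime statistics
p = posterior(prior, p_noise_given_burglar=0.9, p_noise_given_no_burglar=0.05)
print(round(p, 4))  # → 0.0177: stronger than the prior, though still small
```

Whether 0.0177 is “strong enough to act upon” then depends on the stakes, not on any further verification.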
Cheerio!
Yes, I agree with all of this.
Well, in the case of “do landslides have qualia”, Occam’s Razor could be used to assign probabilities just the same as we assign probabilities in the “cheerio simulation” example. So we’ve got methodology, we’ve got impact, enough to adopt a stance on the “psychic unity of the cosmos”, no?
I’m having trouble following you, to be honest.
My best guess is that you’re suggesting that, with respect to systems that do not manifest subjective experience in any way we recognize or understand, Occam’s Razor provides grounds to be more confident that they have subjective experience than that they don’t.
If that’s what you mean, I don’t see why that should be.
If that’s not what you mean, can you rephrase the question?
I think it’s conceivable if not likely that Occam’s Razor would favor or disfavor qualia as a property of more systems than just those that seem to show or communicate them in terms we’re used to. I’m not sure which, but it is a question worth pondering, with an impact on how we view the world, and accessible through established methodology, to a degree.
I’m not advocating assigning a high probability to “landslides have raw experience”, I’m advocating that it’s an important question, the probability of which can be argued. I’m an advocate of the question, not the answer, so to speak. And as such opposed to “I really can’t see why anyone should care one way or the other”.
Ah, I see.
So, I stand by my assertion that in the absence of evidence one way or the other, I really can’t see why anyone should care.
But I agree that to the extent that Occam’s Razor type reasoning provides evidence, that’s a reason to care.
And if it provided strong evidence one way or another (which I don’t think it does, and I’m not sure you do either) that would provide a strong reason to care.
I have evidence in the form of my personal experience of qualia. Granted, I have no way of showing you that evidence, but that doesn’t mean I don’t have it.
Agreed that the ability to share evidence with others is not a necessary condition of having evidence. And to the extent that I consider you a reliable evaluator of (and reporter of) evidence, your report is evidence, and to that extent I have a reason to care.
The point has been made that we should care because qualia have moral implications.
Moral implications of a proposition in the absence of evidence one way or another for that proposition are insufficient to justify caring.
If I actually care about the experiences of minds capable of experiences, I do best to look for evidence for the presence or absence of such experiences.
Failing such evidence, I do best to concentrate my attention elsewhere.
It’s possible to have both a strong reason to care and weak evidence, i.e. due to a moral hazard that depends on some doubtful proposition. People often adopt precautionary principles in such scenarios.
I don’t think that’s the situation here, though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes.
Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we’re all agreed (let’s say) that sapience is morally significant.
On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant.
The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
Does the situation change significantly if “the situation with qualia” is instead framed as A) snakes have qualia and B) qualia are morally significant?
Yes, if the implication of (A) is that we’re agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question ‘are qualia real?’. My point was probably put in a confusing way: all I meant to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.
(nods) Makes sense.
What? Are you saying we have weak evidence for qualia even in ourselves?
What I think of the case for qualia is beside the point, I was just commenting on your ‘moral hazard’ argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.
But of course it can. I can be much more confident in
(P → Q)
than I am in P. For instance, I can be highly confident that if I won the lottery, I could buy a yacht.
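In probability terms the asymmetry is mechanical: reading P → Q as the material conditional ¬P ∨ Q, its probability is at least 1 − P(P), so it can sit near certainty while P itself is tiny, whereas a conjunction such as “P and Q” can never be more probable than P alone. A sketch over an explicit toy distribution (weights invented):

```python
# Probabilities of a material conditional vs. a conjunction over a toy
# distribution of possible worlds, keyed by the truth values of (P, Q).
worlds = {
    (True,  True):  0.001,  # win the lottery, can buy a yacht
    (True,  False): 0.000,  # win the lottery, still can't buy a yacht
    (False, True):  0.099,  # no win, could buy a yacht anyway
    (False, False): 0.900,
}

p_P        = sum(w for (p, q), w in worlds.items() if p)
p_P_impl_Q = sum(w for (p, q), w in worlds.items() if (not p) or q)  # P -> Q
p_P_and_Q  = sum(w for (p, q), w in worlds.items() if p and q)

assert p_P_impl_Q > p_P   # near-certain conditional, improbable antecedent
assert p_P_and_Q <= p_P   # but the conjunction can never exceed P
```

So the question for qualia is which reading “qualia are morally significant” gets: as a conditional (“if qualia are real, they matter”) it can be held with high confidence; as a claim presupposing their reality, it cannot exceed the confidence in that reality.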
I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are “objectively real”.
Yes, they often do.
On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?
I don’t think it’s likely my house will catch fire, but I take out fire insurance. OTOH, if I don’t set a lower bound I will be susceptible to Pascal’s muggings.
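That combination — expected-cost reasoning plus an explicit lower bound on probability — can be sketched as follows; the premium, loss, and floor values are all invented for illustration.

```python
# Insure when expected loss exceeds the premium, but ignore hypotheses
# below a probability floor -- one crude guard against Pascal's muggings.

PROB_FLOOR = 1e-6  # below this, decline to act regardless of stakes

def worth_insuring(p_loss, loss, premium, floor=PROB_FLOOR):
    if p_loss < floor:
        return False                  # too doubtful to act upon at all
    return p_loss * loss > premium    # naive expected-value test

# House fire: unlikely, but expected loss (0.002 * 300000 = 600) > premium.
assert worth_insuring(0.002, 300_000, 400)

# A mugger promising astronomical stakes at negligible probability would win
# the bare expected-value test (1e-12 * 1e20 = 1e8 > 5), but the floor blocks it.
assert not worth_insuring(1e-12, 1e20, 5)
```

Where to set the floor is, of course, the unresolved part of the problem.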
He may have meant something like “Qualiaphobia implies we would have no experiences at all”. However, that all depends on what you mean by experience. I don’t think the Expected Experience criterion is useful here (or anywhere else).