Ah, these are much better descriptions now, well done!
And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?
Can you be more specific about what exactly I said that you’re referring to? Forgive me but I actually am not sure which part you mean.
Sure. You said:
I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.
[emphasis added]
The bolded part is what I was referring to; I see no basis for claiming that. Why would solving the Hard Problem not be necessary for reasoning about the consequences of implementing or copying a mind?
Now, realistically, what I think would happen in such a case is that either we’d have solved the Hard Problem before reaching that point (as you suggest), or we’ll simply decide to ignore it… which is not really the same thing as not needing to solve it.
EY’s assertion, and I tend to see his point, is that […]
Yes, I understand that that’s Eliezer’s point. But it’s hardly convincing! If we haven’t solved the Hard Problem, then even if we tell ourselves “copying can’t possibly matter for identity”, we will have no idea what the heck that actually means. It doesn’t, in other words, help us understand what happens in any of the scenarios you describe—and more importantly, why.
As an aside:
… because consciousness is completely emergent-from-the-physical …
No, we can’t assert this. We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical” is something that we’re only licensed to say after we discover how consciousness emerges from the physical.
Until then, perhaps it has to be, but it’s an open question whether it is…
[rest of my response is conceptually separate, so it’s in a separate comment]
Ah, these are much better descriptions now, well done!
Thanks, I sincerely appreciate your help in clarifying :)
We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical”
Can you explain why the former doesn’t imply the latter? I’m under the impression it does, for any reasonable definition of “has to be”, as long as what you’re conditioning on (in this case reductionism) is true. I suppose I don’t see your objection.
Can you explain why the former doesn’t imply the latter?
Sure. Basically, this is the problem:
as long as what you’re conditioning on (in this case reductionism) is true
Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.
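(The “reduce our belief in reductionism” step is just Bayes’ rule. A toy computation, with entirely made-up numbers, shows the direction of the update; none of these figures are anyone’s actual credences.)

```python
# Toy Bayes'-rule update: repeated failure to find a reduction should
# (slightly) lower our credence in reductionism. All numbers here are
# illustrative assumptions, not real estimates.

def posterior(prior, p_fail_given_true, p_fail_given_false):
    """P(reductionism | we failed to find a reduction), via Bayes' rule."""
    numerator = p_fail_given_true * prior
    evidence = numerator + p_fail_given_false * (1 - prior)
    return numerator / evidence

belief = 0.95  # assumed prior credence in reductionism
for attempt in range(1, 6):
    # Failing to reduce is likely either way (the problem is hard), but
    # slightly *more* likely if reductionism is false -- so each failure
    # must nudge our credence down.
    belief = posterior(belief, p_fail_given_true=0.90, p_fail_given_false=0.99)
    print(f"after {attempt} failed attempt(s): {belief:.3f}")
```

The point is purely directional: so long as a failed reduction is even marginally more probable under non-reductionism, each failure lowers the posterior a little, and how much depends on the likelihood ratio (here assumed, for illustration, to be 0.90/0.99).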
Tangentially:
for any reasonable definition of “has to be”
To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.
Consider:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
Here B doesn’t really know where her husband is. He could be stuck in traffic, he could’ve taken a detour to the bar for a few drinks with his buddies to celebrate that big sale, he could’ve been abducted by aliens—who knows? Imagine, after all, the alternative formulation (and let’s say that A is actually a police officer—lying to him is a crime):
A: Is your husband at home right now?
B: Yes, he is.
A: You know that he’s at home?
B: Well… no. But he has to be at home.
A: But you didn’t go home and check, did you? You didn’t call your house and talk to him?
B: No, I didn’t.
And so on. (I imagine you could easily come up with innumerable other examples.)
Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.
Okay, this is entirely fair, and I see your point and agree. I counter with the questions: What numerical strength would you give your belief that reductionism is true? Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?
If your answers to those questions are “well above 50%” and “yes,” why is it so difficult to answer the question:
if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly—pick the hypothesis you have with the highest prior, even if it’s nowhere close to 50% - what would it be
?
To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.
It seems to me that you’re separating (deductive and inductive) reasoning from empirical observation, which I agree is a reasonable separation. But there are different strengths of reasoning. Observe:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
vs.
A: Is your husband at home right now?
B: He has to be; I put him in a straitjacket, in a locked room, submerged the house completely in a crater of concrete, watched it harden without him escaping, and left satisfied, two hours ago.
Neither of these is “is”, i.e. direct, contemporaneous, empirical observation. They are both “has to be”, i.e. chains of induction. But one assumes the best case at every opportunity, and one at least attempts to eliminate all cases that could result in the negation.
I submit that my “has to be” is of the latter type, but even more airtight.
I concede that this is all hypothesis, but it is of the same sort as “the Higgs Boson exists, or else we’re wrong about a lot of things”… before we found it.
I counter with the questions: What numerical strength would you give your belief that reductionism is true?
I have no idea, and indeed am skeptical of the entire practice of assigning numerical strengths to beliefs of this nature. However, I think I am sufficiently certain of this belief to serve our needs in this context.
Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?
Absolutely not, because the whole problem is that even given my assent to the proposition that consciousness is completely emergent from the physical, if I don’t know how it emerges from the physical, I am still unable to reason about the things on the higher parts of the ladder.
That’s the conceptual objection, and it suffices on its own; but I also have a more technical one, which is—
—the laws of conditional probability, you say? But hold on; to apply Bayes’ Rule, I have to have a prior probability for the belief in question. But how can I possibly assign a prior probability to a proposition, when I haven’t any idea what the proposition means? I can’t have a belief in any of those things you list! I don’t even know if they’re coherent!
In short: my answer to the latter half of your query is “no, and in fact you’re asking a wrong question”.
The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you’ve done] approaches infinity is zero. In the grand scheme of things, it doesn’t matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam’s razor.
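(For concreteness, here is a toy simulation of that washing-out, with assumed numbers: two agents holding strongly opposed Beta priors over a coin’s bias end up with nearly identical posterior means after ten thousand flips.)

```python
# Toy illustration of the prior "washing out" under repeated updates:
# two agents with very different Beta priors over a coin's bias converge
# to nearly the same posterior mean. All numbers are illustrative.
import random

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(10_000)]
heads = sum(flips)
n = len(flips)

# Beta(a, b) prior + Bernoulli data -> Beta(a + heads, b + tails) posterior;
# the posterior mean is (a + heads) / (a + b + n).
def posterior_mean(a, b):
    return (a + heads) / (a + b + n)

optimist = posterior_mean(50, 1)  # strong prior that the coin favors heads
skeptic = posterior_mean(1, 50)   # strong prior that it favors tails
print(optimist, skeptic)          # both end up close to the true bias
```

Notice that the gap between the two posterior means here is exactly 49/10051, regardless of the data: the evidence term dominates both priors as the number of observations grows.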
The rest of your objections, if I understand QM and its implications right, fall upon the previously unintuitive and possibly incoherent things that become intuitively true as you understand QM. As I said elsewhere:
I truly do think we can’t move further from this point, in this thread of this argument, without you reading and understanding the sequence :(
I could be mistaken, but it seems to me that the distinction you’re trying to make between what I’m saying and what I’d have to say for my answer to be [coherent] dissolves as you understand QM.
I could, of course, be misunderstanding you completely. But there also isn’t anything you’re linking that I’m unwilling to read :P
The big disconnect here is you are willing to say you’ll take my word for it about QM, but then I say “QM allows us to ‘reason about the things on the higher parts of the ladder’ without ‘knowing how consciousness emerges from the physical.’”
I could be wrong, but if I’m wrong, you’d have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.
The big disconnect here is you are willing to say you’ll take my word for it about QM, but then I say “QM allows us to ‘reason about the things on the higher parts of the ladder’ without ‘knowing how consciousness emerges from the physical.’”
I could be wrong, but if I’m wrong, you’d have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.
Well, in that case, I’m afraid we have indeed hit a dead end. But I will say this: if (as you seem to be saying) you are unable to treat quantum mechanics as a conceptual black box, and simply explain how its claims (those unrelated to consciousness) allow us to reason about consciousness without dissolving the Hard Problem, then… that is very, very suspicious. (The phrase “impossible to conceive of except in hindsight” also raises red flags!) I hope you won’t take it personally if I view this business of “conceptual black swans” with the greatest skepticism.
I will, if I can find the time, try to give the QM sequence a close re-read, however.
I made an attempt to treat it as a black box in a different thread reply, but I still had to use the language of QM. I might be able to sum it up into short sentences as well, but I wanted to start with some amount of formality and explanation.
Indeed, I’ve now read those comments, and I do appreciate it. As I think we’ve agreed now, further progress requires me to have a good understanding of QM, so I don’t think I have much to add past what we’ve already gone over.
I hope, at least, that this back-and-forth has been useful?
I hope, at least, that this back-and-forth has been useful?
Absolutely. Talking to you was refreshing, and it helped me not only flesh out my ladder but also pin down my beliefs. Thank you for taking time to talk about this stuff.
so I don’t think I have much to add past what we’ve already gone over.
I did make an attempt to address your last reply. If you still feel that way after, let me know.
The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you’ve done] approaches infinity is zero. In the grand scheme of things, it doesn’t matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam’s razor.
This doesn’t address my objection. You are responding as if I were skeptical of assigning some particular prior, whereas in fact I was objecting to assigning any prior, or indeed any posterior—because one cannot assign a probability to a string of gibberish! Probability (in the Bayesian framework, anyway—not that any other interpretations would save us here) attaches to beliefs, but I am saying that I can’t have a belief in a statement that is incoherent. (What probability do you assign to the statement that “fish the inverted flawlessly on”? That’s nonsense, isn’t it—word salad? Can the uniform prior help you here?)
(Answering the latter half of your comment first; I’ll respond to the other half in a separate comment.)
I submit that my “has to be” is of the latter type, but even more airtight.
Indeed, there is a sense in which your “has to be” is of the latter type. In fact, we can go further, and observe that even the “is” (at least in this case—and probably in most cases) is also a sort of “has to be”, viz., this scenario:
A: Is your husband at home?
B: Yes, he is. Why, I’m looking at him right now; there he is, in the kitchen. Hi, honey!
A: Now, you don’t know that your husband’s at home, do you? Couldn’t he have been replaced with an alien replicant while you were at work? Couldn’t you be hallucinating right now?
B: Well… he has to be at home. I’m really quite sure that I can trust the evidence of my senses…
A: But not absolutely sure, isn’t that right?
B: I suppose that’s so.
This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?
And that’s all true. The problem, however, comes in when we must deduce specific claims from very general beliefs—however certain the latter may be!—using a complex, high-level, abstract model. And of this I will speak in a sibling comment.
This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?
I agree. At the core, every belief is Bayesian. I don’t recognize a fundamental difference, just one of categorization. We carved up reality, hopefully at its joints, but we still did the carving. You seemed to be the one arguing a material difference between “has to” and “is”.
As an aside, it’s possible you missed my edit. I’ll reproduce it here:
I concede that this is all hypothesis, but it is of the same sort as “the Higgs Boson exists, or else we’re wrong about a lot of things”… before we found it.
Concerning your edit—no, I really don’t think that it is of the same sort. The prediction of the Higgs Boson was based on a very specific, detailed model, whereas—to continue where the grandparent left off—what you’re asking me to do here is to assent to propositions that are not based on any kind of model, per se, but rather on something like a placeholder for a model. You’re saying: “either these things are true, or we’re wrong about reductionism”.
Well, for one thing, “these things” are, as I’ve said, not even clearly coherent. It’s not entirely clear what they mean, because it’s not clear how to reason about this sort of thing, because we don’t have an actual model for how subjective phenomenal consciousness emerges from physics.
And, for another thing, the dilemma is a false one—it should properly be a quatrilemma (is that a word…?), like so:
“Either these things are true, or we’re wrong about reductionism, or we’re wrong about whether reductionism implies that these things are true, or these things are not so much false as ‘not even wrong’ (because there’s something we don’t currently understand, that doesn’t overturn reductionism but that renders much of our analysis here moot).”
“Ah!” you might exclaim, “but we know that reductionism implies these things! That is—we’re quite certain! And it’s really very unlikely that we’re missing some key understanding, that would render moot our reasoning and our scenarios!” To that, I again say: without an actual reduction of consciousness, an actual and complete dissolution of the Hard Problem, no such certainty is possible. And so it is these latter two horns of the quatrilemma which seem to me to be at least as likely as the truth of the higher rungs of the ladder.
This doesn’t address my objection. You are responding as if I were skeptical of assigning some particular prior, whereas in fact I was objecting to assigning any prior, or indeed any posterior—because one cannot assign a probability to a string of gibberish! Probability (in the Bayesian framework, anyway—not that any other interpretations would save us here) attaches to beliefs, but I am saying that I can’t have a belief in a statement that is incoherent. (What probability do you assign to the statement that “fish the inverted flawlessly on”? That’s nonsense, isn’t it—word salad? Can the uniform prior help you here?)
Fair enough. I don’t see them as gibberish, so treating them that way is hard. I admit I didn’t actually see what you meant.
“Ah!” you might exclaim, “but we know that reductionism implies these things! That is—we’re quite certain! And it’s really very unlikely that we’re missing some key understanding, that would render moot our reasoning and our scenarios!” To that, I again say: without an actual reduction of consciousness, an actual and complete dissolution of the Hard Problem, no such certainty is possible. And so it is these latter two horns of the quatrilemma which seem to me to be at least as likely as the truth of the higher rungs of the ladder.
My response here would be the same as my responses to the other outstanding threads.