Fair enough. Would you say that if the discussion reaches this part of the ladder, a digression must then be made to ensure that both parties well and truly understand QM?
I’m not sure it “must” be made, but that’s exactly the route I would go at this point.
suppose one lacks the mathematical aptitude / physics background / whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be?
I guess I haven’t considered this. When I find myself in this position, I try to gain the requisite skills. EY’s QM sequence on this very site isn’t too hard to follow.
Sorry, I guess I was not clear… I take this answer to be just pushing the problem back one irrelevant step, since my question applies to this scenario also!
Well, that’s the thing; I don’t know (after reading this) whether you’re spouting utter nonsense! That’s precisely the problem I was trying to point to: I, for example, couldn’t tell you where on the ladder I am, for the reasons I outlined in the grandparent.
I suppose that’s entirely fair. I’m not sure how to improve the ladder to this end, though.
I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.
Further, EY goes into pretty great depth about how our current understanding of QM gives us an affirmative belief that there’s no identity difference between one copy or another in principle, particularly here and here but also in many posts in the QM sequence. This further suggests the solution to the HPoC isn’t necessary to reason about our identities.
I can speak for myself only, but—I’m not past #4, for exactly the reasons I outlined. (I think #4 is incoherent, for—apparently—the reason you intended it to be incoherent; but the steps above that are problematic, as I said.)
I admit I have no idea what the subjective experience of dying could be, especially in your sleep, but it seems like whatever lack of experience (or ending, or whatever) you’d have when you die normally would also occur here, if you believed this?
But I’m inclined to believe many-worlds implies subjective quantum immortality to an extent, as well, so 4 is possibly even more meaningless. I’m just not sure how to fix it, or if I should even try, because I know people who wouldn’t fork-and-die teleport because they think the person waking up isn’t them; they’re dead.
suppose one lacks the mathematical aptitude / physics background / whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be?
I guess I haven’t considered this. When I find myself in this position, I try to gain the requisite skills. EY’s QM sequence on this very site isn’t too hard to follow.
I beg to differ! I found the QM sequence impenetrable (and I don’t consider myself to entirely lack math aptitude). (Granted, it’s been a while since the last time I gave it a close read, and perhaps if I try again I’ll get through it, but I do not have high hopes for gaining anything like the kind of understanding it would take to base intuitions about consciousness and identity on!)
That said, I think that if your approach relies on your interlocutor having a solid understanding of quantum mechanics, then… I’m afraid it’s even more flawed than I thought at first… :(
I have read this post, though again, it has been a while. I will re-read it and get back to you!
I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.
I, too, am a reductionist, and concur with your strong belief; unfortunately, this doesn’t actually help… it doesn’t move us any closer to a solution. And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?
Further, EY goes into pretty great depth about how our current understanding of QM gives us an affirmative belief that there’s no identity difference between one copy or another in principle, particularly here and here but also in many posts in the QM sequence. This further suggests the solution to the HPoC isn’t necessary to reason about our identities.
I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I don’t see how it entails or implies or even suggests that a solution to the Hard Problem is unnecessary here?
I know people who wouldn’t fork-and-die teleport because they think the person waking up isn’t them; they’re dead.
How do I describe that position on this ladder?
You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)
You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)
Fair, I did my best to fix 3 and 4.
And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?
Can you be more specific about what exactly I said that you’re referring to? Forgive me but I actually am not sure which part you mean.
I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I don’t see how it entails or implies or even suggests that a solution to the Hard Problem is unnecessary here?
EY’s assertion, and I tend to see his point, is that there can’t possibly be anything about [you] that isn’t true of [a perfect copy of you] that would distinguish—to the universe itself or anything in it—between the instances, other than their positions in spacetime.
And, since we change positions in spacetime all the time without claiming we’ve lost our identities or consciousnesses, that method of distinction is not sufficient to threaten any consideration of identity or consciousness. Further, because consciousness is completely emergent-from-the-physical, if there’s nothing physically different about two instances besides spacetime displacements (which do not threaten consciousness), there’s no way in principle that consciousness doesn’t behave this way.
And since these aren’t merely things we think (so long as we don’t find anything in the future that contradicts them), but rather things that are most assuredly true unless the universe is lying to us, it ought not to matter what we don’t yet know; this positive fact about identity (or lack thereof) is sufficient to support the conclusion.
That said, I think that if your approach relies on your interlocutor having a solid understanding of quantum mechanics, then… I’m afraid it’s even more flawed than I thought at first… :(
:(
----
I wanted to say an additional couple of things that may take this in a different direction. To go back to your top level comment:
That is, if you ask me whether I believe the mid-to-high-numbered statements, I won’t say “yes” or “no”; I’ll say “I don’t know, and I don’t know if they even make sense enough to be right or wrong, and really the whole subject is deeply confusing to me”. You can’t convince me to accept (say) rung #7, because I don’t currently reject it; rather, I don’t know whether it even makes sense as a position, or whether there is (as seems to me to be likely) something that we’re missing about the whole matter.
I want to clarify that I consider this position (indeed, any position that isn’t “yes, I believe it could be no other way”) a rejection of that intuition. To be an intuition, it should be intuitively true.
Also:
it’s hard to know what intuition to have
My goal with this ladder, specifically, is not to change anyone’s mind, but rather to provide enough resolution to where everyone can point to some level and say either:
1. That is true, or
2. That could be true
and then point to the level below it and say either:
1. That is clearly not true, or
2. That is not the whole picture.
If you don’t have an adjacent pair of levels like that, I do in fact need to fix my ladder. Namely, I need to include a level that corresponds to your intuition. I suppose I’m looking for the highest level someone thinks is clearly wrong.
That said, with all due respect, I don’t think that implies that levels higher than that point should make intuitive sense to you. The fact that they don’t was legitimately my intention.
I want to clarify that I consider this position (indeed, any position that isn’t “yes, I believe it could be no other way”) a rejection of that intuition. To be an intuition, it should be intuitively true.
and
That said, with all due respect, I don’t think that implies that levels higher than that point should make intuitive sense to you. The fact that they don’t was legitimately my intention.
Fair enough, this is a reasonable way of looking at it.
So, let me go ahead and try to “rate” each of the levels in the way you’re looking for:
1. This seems like obvious nonsense.
2. This seems like slightly less obvious but still nonsense.
3. I don’t really know whether this is true [pending resolution of the Hard Problem], but it seems unlikely.
4. I don’t really know whether this is true; I don’t even know if it’s likely. Without a resolution of the Hard Problem, I can’t really reason about this.
5. Ditto #4.
6. I am very uncertain about this because I don’t understand quantum mechanics. That aside, ditto #4.
7. I have no clue whether this could be true.
8. I have no clue whether this could be true.
9. I have no clue whether this could be true.
I think that fails to satisfy your desired criteria, yes? (Unless I have misunderstood?)
I’m not so sure this fails. I’m inclined to take this to mean you are between 2 and 3 or between 3 and 4, depending on what specifically you object to in 3.
But I’m curious, if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly—pick the hypothesis you have with the highest prior, even if it’s nowhere close to 50%—what would it be, completely independent of what I’ve said in this ladder?
Honestly, I have no idea. I really don’t know how to reason about subjective phenomenal consciousness. That’s the problem. It seems clear to me that anyone who, given the state of our current knowledge, is very certain of anything like the latter part of #3 or of any of the higher numbers, is simply unjustified in their certainty. If you can’t give me a satisfying reduction of consciousness—one which fully and comprehensively dissolves the Hard Problem—then nothing approaching certainty in any of these views is possible.
I wholly agree with this:
Copies had a common brain and memories, which make them indistinguishable from each other in principle, so they believe they’re me, and they’re not wrong in any meaningful sense
But everything beyond that? Everything that deals with subjective experience, anticipation, etc.? I just plain don’t know.
In that case, I would say you’re between 3 and 4. And I can’t say you’re wrong about my relative certainty being unwarranted, but obviously I think you’re wrong, and it’s because I believe that QM leaves only enough wiggle room of uncertainty for the things we don’t yet know to never actually affect the physical consequences of such a procedure (even from the inside).
This is why I think QM is necessary to advance up the ladder; it’s the reason I advanced up the ladder, and it’s the only experimentally true thing we have so far that permits you to advance up the ladder. Trying to go a different route would be dishonest.
I will take your word about QM, I suppose, but I’m afraid that is scant concession. Much like I said in my other comment—how the physical consequences of this or that event translate into subjective experience is exactly what’s at issue!
Right. I agree that we don’t know how, but I submit that we know that they do. We believe strongly in reductionism, so we can condition conclusions on reductionism and on the belief that they do, without conditioning them on how they do; and I submit further that limiting ourselves in this way is still sufficient to advance up the ladder.
We have a black box—in computer science, an interface—but we don’t need to know what’s inside if we know everything about its behavior on the outside. We can still use it in an algorithm, and know what to expect will happen.
It may not seem like we can know this much in principle (without access to the inside of the black box), at least until you understand why we know this much, as EY talks about here.
I’d also like to be clear that “inside the black box” (the answer to the hard problem) is not the same as “the subjective feeling inside the mind” (a physical consequence of whatever the black box is doing).
how the physical consequences of this or that event translate into subjective experience is exactly what’s at issue
I didn’t mean this in the sense of “how is it possible that they do”, but rather in the sense of “in what way do they”. To that formulation, your answer is non-responsive.
We have a black box—in computer science, an interface—but we don’t need to know what’s inside if we know everything about its behavior on the outside.
But we don’t know everything about the black box’s behavior! That’s precisely the problem in the first place! We are, in essence, trying to predict the behavior of the black box. And we’re trying to do it without knowing what’s inside it—which seems futile and ill-advised, given that we can’t exactly observe the box’s behavior, post-hoc!
As for the linked Sequence post—again, I really do take your word for it. I just don’t think that stuff is relevant.
I truly do think we can’t move further from this point, in this thread of this argument, without you reading and understanding the sequence :(
I could be mistaken, but it seems to me that the distinction you’re trying to make between what I’m saying and what I’d have to say for my answer to be responsive dissolves as you understand QM.
I could, of course, be misunderstanding you completely. But there also isn’t anything you’re linking that I’m unwilling to read :P
Well, to be honest, I don’t think there is anywhere further to move.
I mean, suppose I re-read the QM sequence, and this time I understand it. What propositions will I then accept, that I currently reject? What beliefs will I hold, that I currently do not?
If I’ve read your comments correctly thus far, then it seems to me that everything you list, in answer to my above questions, will be things that I have already assented to (at least, for the sake of argument, if not in fact). So what is gained?
1. Everything obeys QM. To wit, nothing can exist anywhere/when that is not describable in the math of QM in principle.
2. If everything obeys QM, consciousness obeys QM.
3. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, there is nothing anywhere that can differentiate between copies in principle in any way besides how we can differentiate between a person in the past and the same person in the future, having taken a mundane trajectory through spacetime. This includes how it “feels from the inside.” If we can claim consciousness is continuous at all, we can claim it is continuous regardless of proximity in spacetime or any other consideration except for a particular change in physical structure across some change in spacetime. There is provably no computable difference, in the structure of our universe, between existing from one second to the next, and blinking out of existence somewhere/when and then into existence somewhere/when else.
4. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, our subjective experience of being in one place at one time is a limitation of our own perceptions. When we exist in multiple places at the same time, each placetime!us perceives a single, unbroken continuity of consciousness in hindsight, but this is an artifact of a failure to perceive or communicate with other placetime!us branches.
5. We constantly exist in multiple places, as decoherence is the rule, and factorable subspaces where a particular placetime!us can exist and identify as a “world” are a significant exception. It just so happens that decoherence of a certain kind tends to create locally factorable subspaces that move away from each other in configuration space, so they can’t meaningfully interact. For any reasonable definition of “we”, “we” are constantly being copied every time there is any detectable (in principle) decoherence anywhere in our “world” (a “universe” that a single placetime!us has access to in principle). By observation, we know that at least every time the “we” we identify as has split, it hasn’t interrupted our continuity of consciousness. As there are decoherence events constantly on scales we have trouble imagining in numbers of places we have trouble imagining at once, we can reason that we didn’t just get absurdly lucky, and every copy of us looks back on these splits with the same feeling of continuity.
6. The splits we’ve observed and cannot interact with are in principle no different from a split in a single “world” where we can interact with our copy.
And possibly more. It’s a lot and I’m doing the best I can from memory.
I grant (for the sake of argument) #1 and #2. I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense. This is in large part because the reasoning has “holes” in it—that is, parts which we currently take essentially on faith, pending a resolution of the Hard Problem.
Some specifics:
in any way besides how we can differentiate between a person in the past and the same person in the future
And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know.
If we can claim consciousness is continuous at all
Can we? It seems like we can, but… is that just an illusion? Somehow? Why does it seem like consciousness is continuous? Or is that a confused question (as some people indeed seem to claim)?
As long as consciousness is not or does not consist of some fundamental element that does not obey QM
Well, and what if it does? We’re back to the “conditioning on reductionism” thing; until we actually have a full reduction, we just can’t blithely toss about assumptions like this!
… actually, we needn’t even go that far. It’s not even certainty of reductionism that you’re suggesting we condition on—it’s certainty of… what? Quantum mechanics applying to everything? But that’s a great deal weaker! I am not nearly as certain of that (in fact, I have no real solid belief about it), so by no means will I condition on a certainty of this claim!
our subjective experience of being in one place at one time is a limitation of our own perceptions. When we exist in multiple places at the same time, each placetime!us perceives a single, unbroken continuity of consciousness in hindsight, but this is an artifact of a failure to perceive or communicate with other placetime!us branches
This part I actually just don’t get the point of. I mean, you’re not wrong, but so what?
As for #5 and #6, well, there I just don’t understand what you’re saying, so I can’t judge whether it’s relevant.
I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense
This should change when you understand QM. I was trying to black box it.
And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know.
Can we? It seems like we can, but… is that just an illusion? Somehow? Why does it seem like consciousness is continuous? Or is that a confused question (as some people indeed seem to claim)?
It doesn’t matter, because we can prove they are the same black box, and thus their behavior is the same, even if we don’t know how it works (or fully what that behavior even is). As long as we have A === B (which QM says we must), we can say (A → C) → (B → C), even if we don’t know whether A → C, or how. To the extent that A gives off some evidence that convinces us of C, B does exactly the same thing.
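To illustrate the inference pattern (a toy sketch only; the box, its stand-in “behavior” function, and the inputs are entirely hypothetical, not anything from QM or the sequence):

```python
# Toy sketch of the black-box argument: if A and B are provably the same
# black box (A === B), then any conclusion C we draw from A's outward
# behavior applies equally to B, with no need to look inside either box.

def make_black_box():
    """Build a box whose internals we treat as opaque."""
    def box(state):
        # Stand-in for "whatever the box does"; we only ever rely on the
        # input -> output behavior, never on this particular implementation.
        return sum(ord(c) for c in state) % 1000
    return box

A = make_black_box()
B = make_black_box()  # a perfect copy: same structure, hence same behavior

# From the outside, no input can distinguish A from B:
for state in ["memory-1", "memory-2", "memory-3"]:
    assert A(state) == B(state)

# So any evidence A gives off that convinces us of some conclusion C,
# B gives off identically: (A -> C) therefore yields (B -> C).
```

The point of the sketch is only the shape of the argument: structural identity licenses behavioral conclusions even with the internals hidden.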
Well, and what if it does? We’re back to the “conditioning on reductionism” thing; until we actually have a full reduction, we just can’t blithely toss about assumptions like this!
… actually, we needn’t even go that far. It’s not even certainty of reductionism that you’re suggesting we condition on—it’s certainty of… what? Quantum mechanics applying to everything? But that’s a great deal weaker! I am not nearly as certain of that (in fact, I have no real solid belief about it), so by no means will I condition on a certainty of this claim!
1 and 2 imply this, and you were willing to give me those. QM supports reductionism independent of all the classical and empirical reasons we believe in reductionism. Like I said above, you asked me to black box it, and I’m claiming these are things that are clear when you understand QM. QM is brazen about being the exclusive language our universe uses to describe everything. It’s physical nonsense to talk about something that exists and isn’t described by QM. That’s what existence means.
This part I actually just don’t get the point of. I mean, you’re not wrong, but so what?
It’s preparation for the point “we do it all the time and maintain our sense of continuity in every branch” in 5 and 6.
As for #5 and #6, well, there I just don’t understand what you’re saying, so I can’t judge whether it’s relevant.
5 is basically:
When Schrödinger’s cat enters a superposition of alive|dead, so does the entire universe, including us. Like the cat, we split into a!us and d!us (us in the world where the cat is alive and us in the world where it is dead). When we observe the cat, and find out it is alive|dead, we are finding out which world we are in, and correlating our brain state with the state of the cat. This is decoherence, and it pushes a!universe and d!universe apart in the mathematical substrate that defines them (so they can’t interact anymore).
If we observe the cat is alive, we realize we are a!us. But there is still a d!us. We split, and they are observing a dead cat. a!us and d!us both can think back to the time before the split and say “that’s me and I had an unbroken chain of time slices that lead me here—my consciousness was continuous”. Both maintain continuity throughout the process.
6 is basically:
The above experience for a person cannot be described differently in QM (which is the language of existence) from the kind of copying that occurs in one branch!universe, except by differences that can’t in principle have an effect as per point 2, so they black box as the same thing, and implications about one are implications about the other.
Ah, these are much better descriptions now, well done!
And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?
Can you be more specific about what exactly I said that you’re referring to? Forgive me but I actually am not sure which part you mean.
Sure. You said:
I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.
[emphasis added]
The bolded part is what I was referring to; I see no basis for claiming that. Why would solving the Hard Problem not be necessary for reasoning about the consequences of implementing or copying a mind?
Now, realistically, what I think would happen in such a case is that either we’d have solved the Hard Problem before reaching that point (as you suggest), or we’ll simply decide to ignore it… which is not really the same thing as not needing to solve it.
EY’s assertion, and I tend to see his point, is that […]
Yes, I understand that that’s Eliezer’s point. But it’s hardly convincing! If we haven’t solved the Hard Problem, then even if we tell ourselves “copying can’t possibly matter for identity”, we will have no idea what the heck that actually means. It doesn’t, in other words, help us understand what happens in any of the scenarios you describe—and more importantly, why.
As an aside:
… because consciousness is completely emergent-from-the-physical …
No, we can’t assert this. We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical” is something that we’re only licensed to say after we discover how consciousness emerges from the physical.
Until then, perhaps it has to be, but it’s an open question whether it is…
[rest of my response is conceptually separate, so it’s in a separate comment]
Ah, these are much better descriptions now, well done!
Thanks, I sincerely appreciate your help in clarifying :)
We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical”
Can you explain why the former doesn’t imply the latter? I’m under the impression it does, for any reasonable definition of “has to be”, as long as what you’re conditioning on (in this case reductionism) is true. I suppose I don’t see your objection.
Can you explain why the former doesn’t imply the latter?
Sure. Basically, this is the problem:
as long as what you’re conditioning on (in this case reductionism) is true
Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.
Tangentially:
for any reasonable definition of “has to be”
To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.
Consider:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
Here B doesn’t really know where her husband is. He could be stuck in traffic, he could’ve taken a detour to the bar for a few drinks with his buddies to celebrate that big sale, he could’ve been abducted by aliens—who knows? Imagine, after all, the alternative formulation (and let’s say that A is actually a police officer—lying to him is a crime):
A: Is your husband at home right now?
B: Yes, he is.
A: You know that he’s at home?
B: Well… no. But he has to be at home.
A: But you didn’t go home and check, did you? You didn’t call your house and talk to him?
B: No, I didn’t.
And so on. (I imagine you could easily come up with innumerable other examples.)
Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.
Okay, this is entirely fair, and I see your point and agree. I counter with the questions: What numerical strength would you give your belief that reductionism is true? Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?
If your answers to those questions are “well above 50%” and “yes,” why is it so difficult to answer the question:
if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly—pick the hypothesis you have with the highest prior, even if it’s nowhere close to 50%—what would it be
?
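To make the conditional-probability question concrete (the numbers below are purely illustrative placeholders, not anyone’s actual credences):

```python
# Illustrative only: how a credence in a ladder level inherits strength
# from a credence in reductionism, via the law of total probability.
p_r = 0.95                 # P(R): hypothetical credence in reductionism
p_level_given_r = 0.90     # P(level | R): credence in the level, given R
p_level_given_not_r = 0.0  # worst case: the level is false if R fails

# P(level) = P(level|R) * P(R) + P(level|not R) * P(not R)
p_level = p_level_given_r * p_r + p_level_given_not_r * (1 - p_r)
# Even taking the worst case for not-R, p_level = 0.855, so a belief
# "well above 50%" survives the conditioning step.
```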
To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.
It seems to me that you’re separating (deductive and inductive) reasoning from empirical observation, which I agree is a reasonable separation. But there are different strengths of reasoning. Observe:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
vs.
A: Is your husband at home right now?
B: He has to be; I put him in a straitjacket, in a locked room, submerged the house completely in a crater of concrete, watched it harden without him escaping, and left satisfied, two hours ago.
Neither of these are “is”, i.e. direct, contemporaneous, empirical observation. They are both “has to be”, i.e. chains of induction. But one assumes the best case at every opportunity, and one at least attempts to eliminate all cases that could result in the negation.
I submit that my “has to be” is of the latter type, but even more airtight.
I concede that this is all hypothesis, but it is of the same sort as “the Higgs Boson exists, or else we’re wrong about a lot of things”… before we found it.
I counter with the questions: What numerical strength would you give your belief that reductionism is true?
I have no idea, and indeed am skeptical of the entire practice of assigning numerical strengths to beliefs of this nature. However, I think I am sufficiently certain of this belief to serve our needs in this context.
Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?
Absolutely not, because the whole problem is that even given my assent to the proposition that consciousness is completely emergent from the physical, if I don’t know how it emerges from the physical, I am still unable to reason about the things on the higher parts of the ladder.
That’s the conceptual objection, and it suffices on its own; but I also have a more technical one, which is—
—the laws of conditional probability, you say? But hold on; to apply Bayes’ Rule, I have to have a prior probability for the belief in question. But how can I possibly assign a prior probability to a proposition, when I haven’t any idea what the proposition means? I can’t have a belief in any of those things you list! I don’t even know if they’re coherent!
In short: my answer to the latter half of your query is “no, and in fact you’re asking a wrong question”.
The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you’ve done] approaches infinity is zero. In the grand scheme of things, it doesn’t matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam’s razor.
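The prior-washout claim above can be illustrated numerically. A minimal sketch in Python, using a made-up biased-coin example (all names here are hypothetical illustrations, not anything from the discussion): two agents start with wildly different priors over the coin’s bias, and repeated Bayesian updates on the same evidence drive their estimates together.

```python
# Sketch: two agents with very different priors converge after many
# Bayesian updates on the same evidence (a biased coin, true p = 0.7).
import random

random.seed(0)
TRUE_P = 0.7

def update(alpha, beta, heads):
    # Beta-Bernoulli conjugate update: increment the matching count.
    return (alpha + 1, beta) if heads else (alpha, beta + 1)

# Agent 1 starts near-certain the coin is tails-biased: Beta(1, 100).
# Agent 2 uses the uniform prior: Beta(1, 1), "equally likely as not".
a1, b1 = 1, 100
a2, b2 = 1, 1

for _ in range(10_000):
    heads = random.random() < TRUE_P
    a1, b1 = update(a1, b1, heads)
    a2, b2 = update(a2, b2, heads)

est1 = a1 / (a1 + b1)  # posterior mean, agent 1
est2 = a2 / (a2 + b2)  # posterior mean, agent 2
print(round(est1, 2), round(est2, 2))  # both estimates land near 0.7
```

The skewed prior still leaves a small residual gap after finitely many updates; only in the limit does its effect vanish entirely, which is the claim being made.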
The rest of your objections, if I understand QM and its implications right, fall upon the previously unintuitive and possibly incoherent things that become intuitively true as you understand QM. As I said elsewhere:
I truly do think we can’t move further from this point, in this thread of this argument, without you reading and understanding the sequence :(
I could be mistaken, but it seems to me that the distinction you’re trying to make between what I’m saying and what I’d have to say for my answer to be [coherent] dissolves as you understand QM.
I could, of course, be misunderstanding you completely. But there also isn’t anything you’re linking that I’m unwilling to read :P
The big disconnect here is you are willing to say you’ll take my word for it about QM, but then I say “QM allows us to ‘reason about the things on the higher parts of the ladder’ without ‘knowing how consciousness emerges from the physical.’”
I could be wrong, but if I’m wrong, you’d have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.
The big disconnect here is you are willing to say you’ll take my word for it about QM, but then I say “QM allows us to ‘reason about the things on the higher parts of the ladder’ without ‘knowing how consciousness emerges from the physical.’”
I could be wrong, but if I’m wrong, you’d have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.
Well, in that case, I’m afraid we have indeed hit a dead end. But I will say this: if (as you seem to be saying) you are unable to treat quantum mechanics as a conceptual black box, and simply explain how its claims (those unrelated to consciousness) allow us to reason about consciousness without dissolving the Hard Problem, then… that is very, very suspicious. (The phrase “impossible to conceive except in hindsight” also raises red flags!) I hope you won’t take it personally if I view this business of “conceptual black swans” with the greatest skepticism.
I will, if I can find the time, try to give the QM sequence a close re-read, however.
I made an attempt to treat it as a black box in a different thread reply, but I still had to use the language of QM. I might be able to sum it up into short sentences as well, but I wanted to start with some amount of formality and explanation.
Indeed, I’ve now read those comments, and I do appreciate it. As I think we’ve agreed now, further progress requires me to have a good understanding of QM, so I don’t think I have much to add past what we’ve already gone over.
I hope, at least, that this back-and-forth has been useful?
I hope, at least, that this back-and-forth has been useful?
Absolutely. Talking to you was refreshing, and it helped me not only flesh out my ladder but also pin down my beliefs. Thank you for taking time to talk about this stuff.
so I don’t think I have much to add past what we’ve already gone over.
I did make an attempt to address your last reply. If you still feel that way after, let me know.
The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you’ve done] approaches infinity is zero. In the grand scheme of things, it doesn’t matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam’s razor.
This doesn’t address my objection. You are responding as if I were skeptical of assigning some particular prior, whereas in fact I was objecting to assigning any prior, or indeed any posterior—because one cannot assign a probability to a string of gibberish! Probability (in the Bayesian framework, anyway—not that any other interpretations would save us here) attaches to beliefs, but I am saying that I can’t have a belief in a statement that is incoherent. (What probability do you assign to the statement that “fish the inverted flawlessly on”? That’s nonsense, isn’t it—word salad? Can the uniform prior help you here?)
(Answering the latter half of your comment first; I’ll respond to the other half in a separate comment.)
I submit that my “has to be” is of the latter type, but even more airtight.
Indeed, there is a sense in which your “has to be” is of the latter type. In fact, we can go further, and observe that even the “is” (at least in this case—and probably in most cases) is also a sort of “has to be”, viz., this scenario:
A: Is your husband at home?
B: Yes, he is. Why, I’m looking at him right now; there he is, in the kitchen. Hi, honey!
A: Now, you don’t know that your husband’s at home, do you? Couldn’t he have been replaced with an alien replicant while you were at work? Couldn’t you be hallucinating right now?
B: Well… he has to be at home. I’m really quite sure that I can trust the evidence of my senses…
A: But not absolutely sure, isn’t that right?
B: I suppose that’s so.
This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?
And that’s all true. The problem, however, comes in when we must deduce specific claims from very general beliefs—however certain the latter may be!—using a complex, high-level, abstract model. And of this I will speak in a sibling comment.
This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?
I agree. At the core, every belief is Bayesian. I don’t recognize a fundamental difference, just one of categorization. We carved up reality, hopefully at its joints, but we still did the carving. You seemed to be the one arguing a material difference between “has to” and “is”.
As an aside, it’s possible you missed my edit. I’ll reproduce it here:
I concede that this is all hypothesis, but it is of the same sort as “the Higgs Boson exists, or else we’re wrong about a lot of things”… before we found it.
Concerning your edit—no, I really don’t think that it is of the same sort. The prediction of the Higgs Boson was based on a very specific, detailed model, whereas—to continue where the grandparent left off—what you’re asking me to do here is to assent to propositions that are not based on any kind of model, per se, but rather on something like a placeholder for a model. You’re saying: “either these things are true, or we’re wrong about reductionism”.
Well, for one thing, “these things” are, as I’ve said, not even clearly coherent. It’s not entirely clear what they mean, because it’s not clear how to reason about this sort of thing, because we don’t have an actual model for how subjective phenomenal consciousness emerges from physics.
And, for another thing, the dilemma is a false one—it should properly be a quatrilemma (is that a word…?), like so:
“Either these things are true, or we’re wrong about reductionism, or we’re wrong about whether reductionism implies that these things are true, or these things are not so much false as ‘not even wrong’ (because there’s something we don’t currently understand, that doesn’t overturn reductionism but that renders much of our analysis here moot).”
“Ah!” you might exclaim, “but we know that reductionism implies these things! That is—we’re quite certain! And it’s really very unlikely that we’re missing some key understanding, that would render moot our reasoning and our scenarios!” To that, I again say: without an actual reduction of consciousness, an actual and complete dissolution of the Hard Problem, no such certainty is possible. And so it is these latter two horns of the quatrilemma which seem to me to be at least as likely as the truth of the higher rungs of the ladder.
… I anticipate my consciousness transferring to that far away not-copy with some probability
I confess to being confused about how the concept of “probability” is being used here (and in similar comments I’ve seen). Can you elaborate?
whatever you, before you undergo a copying procedure, anticipate happening to you (subjectively experiencing ending up in the grown body vs. ending up in the body that walked in, each with some respective probability)
Sorry, I guess I was not clear… I take this answer to be just pushing the problem back one irrelevant step, since my question applies to this scenario also!
I have now re-read the post. I’m afraid it didn’t help. Or rather, to be more precise—I conclude from it that using the concept of “probability” in this way is incoherent.
Basically, I think that Yu’el is wrong and De’da is right. (In fact, I would go further—I don’t even think that De’da’s answer concerning which way to bet makes a whole lot of sense… but this is a tangent, and one which brings in issues unrelated to “probability” per se.)
I’m not sure it “must” be made, but that’s exactly the route I would go at this point.
I guess I haven’t considered this. When I find myself in this position, I try to gain the requisite skills. EY’s QM sequence on this very site isn’t too hard to follow.
This may help you here.
I suppose that’s entirely fair. I’m not sure how to improve the ladder to this end, though.
I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.
Further, EY goes into pretty great depth about how our current understanding of QM gives us an affirmative belief that there’s no identity difference between one copy or another in principle, particularly here and here but also in many posts in the QM sequence. This further suggests the solution to the HPoC isn’t necessary to reason about our identities.
I admit I have no idea what the subjective experience of dying could be, especially in your sleep, but it seems like whatever lack of experience or end or whatever that you’d have when you die normally would occur here if you believed this?
But I’m inclined to believe many-worlds implies subjective quantum immortality to an extent, as well, so 4 is possibly even more meaningless. I’m just not sure how to fix it, or if I should even try, because I know people who wouldn’t fork-and-die teleport because they think the person waking up isn’t them; they’re dead.
How do I describe that position on this ladder?
I beg to differ! I found the QM sequence impenetrable (and I don’t consider myself to entirely lack math aptitude). (Granted, it’s been a while since the last time I gave it a close read, and perhaps if I try again I’ll get through it, but I do not have high hopes for gaining anything like the kind of understanding it would take to base intuitions about consciousness and identity on!)
That said, I think that if your approach relies on your interlocutor having a solid understanding of quantum mechanics, then… I’m afraid it’s even more flawed than I thought at first… :(
I have read this post, though again, it has been a while. I will re-read it and get back to you!
I, too, am a reductionist, and concur with your strong belief; unfortunately, this doesn’t actually help… it doesn’t move us any closer to a solution. And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?
I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I don’t see how it entails or implies or even suggests that a solution to the Hard Problem is unnecessary here?
You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)
Fair, I did my best to fix 3 and 4.
Can you be more specific about what exactly I said that you’re referring to? Forgive me but I actually am not sure which part you mean.
EY’s assertion, and I tend to see his point, is that there can’t possibly be anything about [you] that isn’t true of [a perfect copy of you] that would distinguish—to the universe itself or anything in it—between the instances, other than their positions in spacetime.
And, since we change positions in spacetime all the time without claiming we’ve lost our identities or consciousnesses, that method of distinction is not sufficient to threaten any consideration of identity or consciousness. Further, because consciousness is completely emergent-from-the-physical, if there’s nothing physically different about two instances besides spacetime displacements (which do not threaten consciousness), there’s no way in principle that consciousness doesn’t behave this way.
And since these aren’t things we think, as long as we don’t find anything in the future that contradicts them, but rather are things that are most assuredly true, unless the universe is lying to us, it ought not matter what we don’t yet know, because this positive fact about identity (or lack thereof) is sufficient to make this conclusion.
:(
----
I wanted to say an additional couple of things that may take this in a different direction. To go back to your top level comment:
I want to clarify that I consider this position (indeed, any position that isn’t “yes, I believe it could be no other way”) a rejection of that intuition. To be an intuition, it should be intuitively true.
Also:
My goal with this ladder, specifically, is not to change anyone’s mind, but rather to provide enough resolution to where everyone can point to some level and say either:
1. That is true, or
2. That could be true
and then point to the level below it and say either:
1. That is clearly not true, or
2. That is not the whole picture.
If you don’t have an adjacent pair of levels like that, I do in fact need to fix my ladder. Namely, I need to include a level that corresponds to your intuition. I suppose I’m looking for the highest level someone thinks is clearly wrong.
That said, with all due respect, I don’t think that implies that levels higher than that point should make intuitive sense to you. The fact that they don’t was legitimately my intention.
Re:
and
Fair enough, this is a reasonable way of looking at it.
So, let me go ahead and try to “rate” each of the levels in the way you’re looking for:
This seems like obvious nonsense.
This seems like slightly less obvious but still nonsense.
I don’t really know whether this is true [pending resolution of the Hard Problem], but it seems unlikely.
I don’t really know whether this is true; I don’t even know if it’s likely. Without a resolution of the Hard Problem, I can’t really reason about this.
Ditto #4.
I am very uncertain about this because I don’t understand quantum mechanics. That aside, ditto #4.
I have no clue whether this could be true.
I have no clue whether this could be true.
I have no clue whether this could be true.
I think that fails to satisfy your desired criteria, yes? (Unless I have misunderstood?)
I’m not so sure this fails. I’m inclined to take this to mean you are between 2 and 3 or between 3 and 4, depending on what specifically you object to in 3.
But I’m curious, if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly—pick the hypothesis you have with the highest prior, even if it’s nowhere close to 50%—what would it be, completely independent of what I’ve said in this ladder?
Honestly, I have no idea. I really don’t know how to reason about subjective phenomenal consciousness. That’s the problem. It seems clear to me that anyone who, given the state of our current knowledge, is very certain of anything like the latter part of #3 or of any of the higher numbers, is simply unjustified in their certainty. If you can’t give me a satisfying reduction of consciousness—one which fully and comprehensively dissolves the Hard Problem—then nothing approaching certainty in any of these views is possible.
I wholly agree with this:
But everything beyond that? Everything that deals with subjective experience, anticipation, etc.? I just plain don’t know.
In that case, I would say you’re between 3 and 4. And I can’t say you’re wrong about my relative certainty being unwarranted, but obviously I think you’re wrong, and it’s because I believe that QM leaves only enough wiggle room of uncertainty for the things we don’t yet know to never actually affect the physical consequences of such a procedure (even from the inside).
This is why I think QM is necessary to advance up the ladder; it’s the reason I advanced up the ladder, and it’s the only experimentally true thing we have so far that permits you to advance up the ladder. Trying to go a different route would be dishonest.
I will take your word about QM, I suppose, but I’m afraid that is scant concession. Much like I said in my other comment—how the physical consequences of this or that event translate into subjective experience is exactly what’s at issue!
Right. I agree that we don’t know how, but I submit that we know that they do, and we believe strongly in reductionism, and we can condition conclusions on reductionism and the belief that they do, without conditioning them on how they do, and I submit further that limiting ourselves in this way is still sufficient to advance up the ladder.
We have a black box—in computer science, an interface—but we don’t need to know what’s inside if we know everything about its behavior on the outside. We can still use it in an algorithm, and know what to expect will happen.
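The interface analogy here can be sketched in code. All names in this sketch (`Box`, `count_until`, the two implementations) are hypothetical illustrations, not anything from the discussion: an algorithm written against the interface works identically for any implementation, with no knowledge of what is inside.

```python
# Sketch of the "interface" analogy: an algorithm can use a black box
# through its interface without knowing what is inside it.
from abc import ABC, abstractmethod

class Box(ABC):
    """A black box: we only know its outside behavior (step())."""
    @abstractmethod
    def step(self) -> int: ...

class CounterBox(Box):
    """One possible inside: counts up by one."""
    def __init__(self):
        self.n = 0
    def step(self) -> int:
        self.n += 1
        return self.n

class DoublerBox(Box):
    """A very different inside: doubles each time."""
    def __init__(self):
        self.n = 1
    def step(self) -> int:
        self.n *= 2
        return self.n

def count_until(box: Box, limit: int) -> int:
    # Uses only the interface; works for any implementation.
    steps = 0
    while box.step() < limit:
        steps += 1
    return steps

print(count_until(CounterBox(), 10))  # 9
print(count_until(DoublerBox(), 10))  # 3
```

The objection in the reply below survives the sketch, of course: this only works to the extent that we really do know the box’s outside behavior.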
It doesn’t seem like we can know this much in principle (without access to the inside of the black box) until you understand why we know this much, as EY talks about here.
I’d also like to be clear that “inside the black box” (the answer to the hard problem) is not the same as “the subjective feeling inside the mind” (a physical consequence of whatever the black box is doing).
Sorry, I think I wasn’t clear. When I said:
I didn’t mean this in the sense of “how is it possible that they do”, but rather in the sense of “in what way do they”. To that formulation, your answer is non-responsive.
But we don’t know everything about the black box’s behavior! That’s precisely the problem in the first place! We are, in essence, trying to predict the behavior of the black box. And we’re trying to do it without knowing what’s inside it—which seems futile and ill-advised, given that we can’t exactly observe the box’s behavior, post-hoc!
As for the linked Sequence post—again, I really do take your word for it. I just don’t think that stuff is relevant.
I truly do think we can’t move further from this point, in this thread of this argument, without you reading and understanding the sequence :(
I could be mistaken, but it seems to me that the distinction you’re trying to make between what I’m saying and what I’d have to say for my answer to be responsive dissolves as you understand QM.
I could, of course, be misunderstanding you completely. But there also isn’t anything you’re linking that I’m unwilling to read :P
Well, to be honest, I don’t think there is anywhere further to move.
I mean, suppose I re-read the QM sequence, and this time I understand it. What propositions will I then accept, that I currently reject? What beliefs will I hold, that I currently do not?
If I’ve read your comments correctly thus far, then it seems to me that everything you list, in answer to my above questions, will be things that I have already assented to (at least, for the sake of argument, if not in fact). So what is gained?
We gain these:
1. Everything obeys QM. To wit, nothing can exist anywhere/when that is not describable in the math of QM in principle.
2. If everything obeys QM, consciousness obeys QM.
3. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, there is nothing anywhere that can differentiate between copies in principle in any way besides how we can differentiate between a person in the past and the same person in the future, having taken a mundane trajectory through spacetime. This includes how it “feels from the inside.” If we can claim consciousness is continuous at all, we can claim it is continuous regardless of proximity in spacetime or any other consideration except for a particular change in physical structure across some change in spacetime. There is provably no computable difference, in the structure of our universe, between existing from one second to the next, and blinking out of existence somewhere/when and then into existence somewhere/when else.
4. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, our subjective experience of being in one place at one time is a limitation of our own perceptions. When we exist in multiple places at the same time, each placetime!us perceives a single, unbroken continuity of consciousness in hindsight, but this is an artifact of a failure to perceive or communicate with other placetime!us branches.
5. We constantly exist in multiple places, as decoherence is the rule, and factorable subspaces where a particular placetime!us can exist and identify as a “world” are a significant exception. It just so happens that decoherence of a certain kind tends to create locally factorable subspaces that move away from each other in configuration space, so they can’t meaningfully interact. For any reasonable definition of “we”, “we” are constantly being copied every time there is any detectable (in principle) decoherence anywhere in our “world” (a “universe” that a single placetime!us has access to in principle). By observation, we know that at least every time the “we” we identify as has split, it hasn’t interrupted our continuity of consciousness. As there are decoherence events constantly on scales we have trouble imagining in numbers of places we have trouble imagining at once, we can reason that we didn’t just get absurdly lucky, and every copy of us looks back on these splits with the same feeling of continuity.
6. The splits we’ve observed and cannot interact with are in principle no different from a split in a single “world” where we can interact with our copy.
And possibly more. It’s a lot and I’m doing the best I can from memory.
I grant (for the sake of argument) #1 and #2. I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense. This is in large part because the reasoning has “holes” in it—that is, parts which we currently take essentially on faith, pending a resolution of the Hard Problem.
Some specifics:
And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know.
Can we? It seems like we can, but… is that just an illusion? Somehow? Why does it seem like consciousness is continuous? Or is that a confused question (as some people indeed seem to claim)?
Well, and what if it does? We’re back to the “conditioning on reductionism” thing; until we actually have a full reduction, we just can’t blithely toss about assumptions like this!
… actually, we needn’t even go that far. It’s not even certainty of reductionism that you’re suggesting we condition on—it’s certainty of… what? Quantum mechanics applying to everything? But that’s a great deal weaker! I am not nearly as certain of that (in fact, I have no real solid belief about it), so by no means will I condition on a certainty of this claim!
This part I actually just don’t get the point of. I mean, you’re not wrong, but so what?
As for #5 and #6, well, there I just don’t understand what you’re saying, so I can’t judge whether it’s relevant.
This should change when you understand QM. I was trying to black box it.
It doesn’t matter, because we can prove they are the same black box, and thus their behavior is the same, even if we don’t know how it works (or fully what that behavior even is). As long as we have A === B (which QM says we must), we can say (A→C) → (B→C) even if we don’t know whether A→C or how. To the extent that A gives off some evidence that convinces us of C, B does exactly the same thing.
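Reading “A === B” as logical equivalence (which is an assumption about what the claim means), the inference pattern invoked here is a standard one; a minimal sketch in Lean 4:

```lean
-- If A ↔ B, then any consequence derivable from A is derivable from B:
-- given hac : A → C and hb : B, convert hb to a proof of A via h.mpr.
example (A B C : Prop) (h : A ↔ B) : (A → C) → (B → C) :=
  fun hac hb => hac (h.mpr hb)
```

The logical step is uncontroversial; the live question in this thread is whether QM really licenses the premise A ↔ B for the two copying scenarios.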
1 and 2 imply this, and you were willing to give me those. QM supports reductionism independent of all the classical and empirical reasons we believe in reductionism. Like I said above, you asked me to black box it, and I’m claiming these are things that are clear when you understand QM. QM is brazen about being the exclusive language our universe uses to describe everything. It’s physical nonsense to talk about something that exists and isn’t described by QM. That’s what existence means.
It’s preparation for the point “we do it all the time and maintain our sense of continuity in every branch” in 5 and 6.
5 is basically:
When Schrödinger’s cat enters a superposition of alive|dead, so does the entire universe, including us. Like the cat, we split into a!us and d!us (us in the world where the cat is alive and us in the world where it is dead). When we observe the cat, and find out it is alive|dead, we are finding out which world we are in, and correlating our brain state with the state of the cat. This is decoherence, and it pushes a!universe and d!universe apart in the mathematical substrate that defines them (so they can’t interact anymore).
If we observe the cat is alive, we realize we are a!us. But there is still a d!us. We split, and they are observing a dead cat. a!us and d!us both can think back to the time before the split and say “that’s me and I had an unbroken chain of time slices that lead me here—my consciousness was continuous”. Both maintain continuity throughout the process.
6 is basically:
The above experience for a person cannot be described differently in QM (which is the language of existence) from the kind of copying that occurs in one branch!universe, except by differences that can’t in principle have an effect as per point 2, so they black box as the same thing, and implications about one are implications about the other.
Ah, these are much better descriptions now, well done!
Sure. You said:
[emphasis added]
The bolded part is what I was referring to; I see no basis for claiming that. Why would solving the Hard Problem not be necessary for reasoning about the consequences of implementing or copying a mind?
Now, realistically, what I think would happen in such a case is that either we’d have solved the Hard Problem before reaching that point (as you suggest), or we’ll simply decide to ignore it… which is not really the same thing as not needing to solve it.
Yes, I understand that that’s Eliezer’s point. But it’s hardly convincing! If we haven’t solved the Hard Problem, then even if we tell ourselves “copying can’t possibly matter for identity”, we will have no idea what the heck that actually means. It doesn’t, in other words, help us understand what happens in any of the scenarios you describe—and more importantly, why.
As an aside:
No, we can’t assert this. We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical” is something that we’re only licensed to say after we discover how consciousness emerges from the physical.
Until then, perhaps it has to be, but it’s an open question whether it is…
[rest of my response is conceptually separate, so it’s in a separate comment]
Thanks, I sincerely appreciate your help in clarifying :)
Can you explain why the former doesn’t imply the latter? I’m under the impression it does, for any reasonable definition of “has to be”, as long as what you’re conditioning on (in this case reductionism) is true. I suppose I don’t see your objection.
Sure. Basically, this is the problem:
Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.
Tangentially:
To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.
Consider:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
Here B doesn’t really know where her husband is. He could be stuck in traffic, he could’ve taken a detour to the bar for a few drinks with his buddies to celebrate that big sale, he could’ve been abducted by aliens—who knows? Imagine, after all, the alternative formulation (and let’s say that A is actually a police officer—lying to him is a crime):
A: Is your husband at home right now?
B: Yes, he is.
A: You know that he’s at home?
B: Well… no. But he has to be at home.
A: But you didn’t go home and check, did you? You didn’t call your house and talk to him?
B: No, I didn’t.
And so on. (I imagine you could easily come up with innumerable other examples.)
Okay, this is entirely fair, and I see your point and agree. I counter with the questions: What numerical strength would you give your belief that reductionism is true? Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?
If your answers to those questions are “well above 50%” and “yes,” why is it so difficult to answer the question:
?
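The “principles of conditional probability” being invoked here can be made concrete with arithmetic; a minimal sketch, using made-up numbers (the 0.99 and 0.95 are purely illustrative, not anyone’s stated credences):

```python
# Hypothetical credences, purely for illustration:
p_reductionism = 0.99        # P(reductionism is true)
p_rung_given_r = 0.95        # P(higher ladder rung | reductionism)
p_rung_given_not_r = 0.0     # worst case: rung impossible without reductionism

# Law of total probability gives the unconditional credence in the rung:
p_rung = (p_rung_given_r * p_reductionism
          + p_rung_given_not_r * (1 - p_reductionism))
print(round(p_rung, 4))  # 0.9405
```

The point of the sketch is just that high confidence in reductionism, times high conditional confidence in a rung given reductionism, still yields a well-above-50% credence in the rung itself.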
It seems to me that you’re separating (deductive and inductive) reasoning from empirical observation, which I agree is a reasonable separation. But there are different strengths of reasoning. Observe:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
vs.
A: Is your husband at home right now?
B: He has to be; I put him in a straitjacket, in a locked room, submerged the house completely in a crater of concrete, watched it harden without him escaping, and left satisfied, two hours ago.
Neither of these is an “is”, i.e. direct, contemporaneous, empirical observation. They are both “has to be”, i.e. chains of induction. But one assumes the best case at every opportunity, and one at least attempts to eliminate all cases that could result in the negation.
I submit that my “has to be” is of the latter type, but even more airtight.
I concede that this is all hypothesis, but it is of the same sort as “the Higgs Boson exists, or else we’re wrong about a lot of things”… before we found it.
I have no idea, and indeed am skeptical of the entire practice of assigning numerical strengths to beliefs of this nature. However, I think I am sufficiently certain of this belief to serve our needs in this context.
Absolutely not, because the whole problem is that even given my assent to the proposition that consciousness is completely emergent from the physical, if I don’t know how it emerges from the physical, I am still unable to reason about the things on the higher parts of the ladder.
That’s the conceptual objection, and it suffices on its own; but I also have a more technical one, which is—
—the laws of conditional probability, you say? But hold on; to apply Bayes’ Rule, I have to have a prior probability for the belief in question. But how can I possibly assign a prior probability to a proposition, when I haven’t any idea what the proposition means? I can’t have a belief in any of those things you list! I don’t even know if they’re coherent!
In short: my answer to the latter half of your query is “no, and in fact you’re asking a wrong question”.
The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you’ve done] approaches infinity is zero. In the grand scheme of things, it doesn’t matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam’s razor.
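The “prior washes out” claim can be illustrated with a standard conjugate-prior sketch (the coin, the flip counts, and both priors below are invented for illustration): two very different Beta priors over a coin’s bias, updated on the same hypothetical data, end up nearly identical.

```python
# Hypothetical data: 7000 heads observed in 10000 flips.
heads, flips = 7000, 10000

def posterior_mean(a, b):
    # For a Beta(a, b) prior on a coin's bias, the posterior mean after
    # observing `heads` successes in `flips` trials is:
    return (a + heads) / (a + b + flips)

m_uniform = posterior_mean(1, 1)      # the uniform prior mentioned above
m_skewed = posterior_mean(200, 2)     # a prior heavily favoring "almost always heads"

print(round(m_uniform, 3), round(m_skewed, 3))  # both land close to the empirical 0.7
```

Despite starting far apart, both posteriors are dragged to the observed frequency once the data swamps the prior counts.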
The rest of your objections, if I understand QM and its implications right, fall upon the previously unintuitive and possibly incoherent things that become intuitively true as you understand QM. As I said elsewhere:
The big disconnect here is you are willing to say you’ll take my word for it about QM, but then I say “QM allows us to ‘reason about the things on the higher parts of the ladder’ without ‘knowing how consciousness emerges from the physical.’”
I could be wrong, but if I’m wrong, you’d have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.
Well, in that case, I’m afraid we have indeed hit a dead end. But I will say this: if (as you seem to be saying) you are unable to treat quantum mechanics as a conceptual black box, and simply explain how its claims (those unrelated to consciousness) allow us to reason about consciousness without dissolving the Hard Problem, then… that is very, very suspicious. (The phrase “impossible to conceive of except in hindsight” also raises red flags!) I hope you won’t take it personally if I view this business of “conceptual black swans” with the greatest skepticism.
I will, if I can find the time, try to give the QM sequence a close re-read, however.
I made an attempt to treat it as a black box in a different thread reply, but I still had to use the language of QM. I might be able to sum it up into short sentences as well, but I wanted to start with some amount of formality and explanation.
Indeed, I’ve now read those comments, and I do appreciate it. As I think we’ve agreed now, further progress requires me to have a good understanding of QM, so I don’t think I have much to add past what we’ve already gone over.
I hope, at least, that this back-and-forth has been useful?
Absolutely. Talking to you was refreshing, and it helped me not only flesh out my ladder but also pin down my beliefs. Thank you for taking time to talk about this stuff.
I did make an attempt to address your last reply. If you still feel that way after, let me know.
This doesn’t address my objection. You are responding as if I were skeptical of assigning some particular prior, whereas in fact I was objecting to assigning any prior, or indeed any posterior—because one cannot assign a probability to a string of gibberish! Probability (in the Bayesian framework, anyway—not that any other interpretations would save us here) attaches to beliefs, but I am saying that I can’t have a belief in a statement that is incoherent. (What probability do you assign to the statement that “fish the inverted flawlessly on”? That’s nonsense, isn’t it—word salad? Can the uniform prior help you here?)
Fair enough. I don’t see them as gibberish, so treating them that way is hard. I admit I didn’t actually see what you meant.
(Answering the latter half of your comment first; I’ll respond to the other half in a separate comment.)
Indeed, there is a sense in which your “has to be” is of the latter type. In fact, we can go further, and observe that even the “is” (at least in this case—and probably in most cases) is also a sort of “has to be”, viz., this scenario:
A: Is your husband at home?
B: Yes, he is. Why, I’m looking at him right now; there he is, in the kitchen. Hi, honey!
A: Now, you don’t know that your husband’s at home, do you? Couldn’t he have been replaced with an alien replicant while you were at work? Couldn’t you be hallucinating right now?
B: Well… he has to be at home. I’m really quite sure that I can trust the evidence of my senses…
A: But not absolutely sure, isn’t that right?
B: I suppose that’s so.
This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?
And that’s all true. The problem, however, comes in when we must deduce specific claims from very general beliefs—however certain the latter may be!—using a complex, high-level, abstract model. And of this I will speak in a sibling comment.
I agree. At the core, every belief is Bayesian. I don’t recognize a fundamental difference, just one of categorization. We carved up reality, hopefully at its joints, but we still did the carving. You seemed to be the one arguing a material difference between “has to” and “is”.
As an aside, it’s possible you missed my edit. I’ll reproduce it here:
Concerning your edit—no, I really don’t think that it is of the same sort. The prediction of the Higgs Boson was based on a very specific, detailed model, whereas—to continue where the grandparent left off—what you’re asking me to do here is to assent to propositions that are not based on any kind of model, per se, but rather on something like a placeholder for a model. You’re saying: “either these things are true, or we’re wrong about reductionism”.
Well, for one thing, “these things” are, as I’ve said, not even clearly coherent. It’s not entirely clear what they mean, because it’s not clear how to reason about this sort of thing, because we don’t have an actual model for how subjective phenomenal consciousness emerges from physics.
And, for another thing, the dilemma is a false one—it should properly be a quatrilemma (is that a word…?), like so:
“Either these things are true, or we’re wrong about reductionism, or we’re wrong about whether reductionism implies that these things are true, or these things are not so much false as ‘not even wrong’ (because there’s something we don’t currently understand, that doesn’t overturn reductionism but that renders much of our analysis here moot).”
“Ah!” you might exclaim, “but we know that reductionism implies these things! That is—we’re quite certain! And it’s really very unlikely that we’re missing some key understanding, that would render moot our reasoning and our scenarios!” To that, I again say: without an actual reduction of consciousness, an actual and complete dissolution of the Hard Problem, no such certainty is possible. And so it is these latter two horns of the quatrilemma which seem to me to be at least as likely as the truth of the higher rungs of the ladder.
My response here would be the same as my responses to the other outstanding threads.
I have now re-read the post. I’m afraid it didn’t help. Or rather, to be more precise—I conclude from it that using the concept of “probability” in this way is incoherent.
Basically, I think that Yu’el is wrong and De’da is right. (In fact, I would go further—I don’t even think that De’da’s answer concerning which way to bet makes a whole lot of sense… but this is a tangent, and one which brings in issues unrelated to “probability” per se.)
Ah. I guess I’m not sure where to go from here, in that case.