There is the simple observation that one has no conscious experience during dreamless sleep. (A panpsychist could respond that maybe one merely lacks memory of one’s sleeping experience, but that would be epicyclic).
That’s just ordinary compatibilism: as I said, “it’s not libertarian free will.” All the work is being done by using a definition of free will that doesn’t require indeterministic “elbow room”, so none of it is being done by the physics and metaphysics. If it is valid, it would be just as valid under naturalistic monism, supernaturalistic determinism, etc.
And compatibilism isn’t universally accepted as the solution to free will, because the quale of freedom is libertarian: one feels that one could have done otherwise. (At least, mine is like that).
An additional non-physical layer of consciousness might buy you qualia, but delivers no guarantee that they will be accurate… a quale of libertarian free will is necessarily illusory under determinism.
An additional non-physical layer of consciousness might have bought you downwards causation and libertarian free will.
But you are not legitimising it as a subjective impression that correctly represents reality… only as an illusion: you can feel free in a deterministic world, but you can’t be free in one.
Under physicalist epiphenomenalism (which is the standard approach to the mind-matter relation), the mind is superimposed on reality, perfectly synchronized, and parallel to it.
Under dualist epiphenomenalism, that might be true. Physicalism has it either that consciousness is non-existent rather than causally idle (eliminativism), or identical to physical brain states (and therefore sharing their causal powers).
Understanding why some physical systems make an emergent consciousness appear (the so-called “hard problem of consciousness”), or finding a procedure that quantifies the intensity of consciousness emerging from a physical system (the so-called “pretty hard” problem of consciousness), is impossible:
You could have given a reason why.
It’s a warning if the history consists of various groups having extreme confidence about solving all the problems in ways that subsequent groups don’t accept.
You are conflating subjective as in “by subjects” with subjective as in “for subjects”. A subject can have preferences for objectivity, universality, impartiality, etc.
The other problem is that MWI is up against various subjective and non-realist interpretations, so it’s not the case that you can build an ontological model of every interpretation.
Huh? The whole point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically, over a run of experiments. Quantum mechanical measure (amplitude) isn’t ordinary probability, but it’s the thing you put into the Born rule, not the thing you get out of it. And it has its own role, which is explaining how much contribution to a coherent superposition each component state makes.
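For concreteness, the standard textbook statement of the rule (nothing interpretation-specific is assumed here):

$$|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad P(i) = |c_i|^2, \qquad \sum_i |c_i|^2 = 1$$

The amplitudes $c_i$ are the quantum mechanical measure going in; the $P(i)$ coming out are ordinary probabilities, testable over repeated runs.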
ETA
There is a further problem interpreting the probabilities of fully decohered branches. (Calling them Everett branches is very misleading: a clear theory of decoherence is precisely what’s lacking in Everett’s work.)
Whether you are supposed to care about them ethically is very unclear, since it is not obvious how utilitarian-style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.
I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but whether Born measure is concentrated on branches that contain good things instead of bad things.
It’s tempting to ethically discount low-measure decoherent branches in some way, because that most closely approximates conventional single-world utilitarianism; that is something “naive MWI” might mean. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn’t come with built-in ethics.
The alternative view starts with the question of whether a person in a low-measure world still counts as a full person. If they should not, is that because they are a near-zombie, with a faint consciousness that weighs little in a hedonic utilitarian calculus? If they are not such zombies, why would they not count as full persons? The standard utilitarian argument that people in far-off lands are still moral patients seems to apply. Of course, MWI doesn’t directly answer the question about consciousness.
(For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.)
If “naive MWI” means the idea that any elementary interaction produces decoherent branching, then it is wrong for the reasons I explain here. Since there are some coherent superpositions, and not just decoherent branches, there are cases where the Born rule gives you ordinary probabilities, as any undergraduate physics student knows.
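As an illustration of how the Born measure distributes over fully decohered branches, here is a minimal sketch (the branch enumeration and the toy weights are illustrative assumptions, not anything from the physics literature):

```python
from itertools import product

# Toy model: a quantum coin with |alpha|^2 = 0.9 for heads, 0.1 for tails.
# After n decoherent tosses there are 2**n branches; the Born weight of a
# branch is the product of the weights of its outcomes.
alpha2, beta2 = 0.9, 0.1
n = 10

weights = {}
for branch in product("HT", repeat=n):
    w = 1.0
    for outcome in branch:
        w *= alpha2 if outcome == "H" else beta2
    weights[branch] = w

print(sum(weights.values()))    # ~1.0: the measure is normalised
print(weights[tuple("H" * n)])  # ~0.35: the all-heads branch
print(weights[tuple("T" * n)])  # ~1e-10: the all-tails branch
```

With a fair coin all 2^n branches get weight 2^-n; with a biased one, every outcome still “happens”, but the measure concentrates on the typical branches.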
(What is the meaning of the probability measure over the branches if all branches coexist?)
It’s not the existence, it’s the lack of interaction/interference.
By “equally” I meant:
“in the same ways (and to the same degree)”.
If you actually believe in florid many worlds, you would end up pretty insouciant, since everything possible happens, and nothing can be avoided.
Same way you know anything. “Sharp valued” and “classical” have meanings, which cash out in expected experience.
I’d guess that this illusion comes from not fully internalizing reductionism and naturalism about the mind.
Naturalism and reductionism are not sufficient to rigorously prove either form of computationalism: that performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual.
This has been going on for years: most rationalists believe in computationalism, none have a really good reason to.
Arguing down Cartesian dualism (the thing rationalists always do) doesn’t increase the probability of computationalism, because there are further possibilities, including physicalism-without-computationalism (the one rationalists keep overlooking) and scepticism about consciousness/identity.
One can of course adopt a belief in computationalism, or something else, on the basis of intuitions or probabilities. But then one is very much in the realm of Modest Epistemology, and needs to behave accordingly.
“My issue is not with your conclusion, it’s precisely with your absolute certainty, which imo you support with cyclical argumentation based on weak premises”.
Yep.
There isn’t a special extra “me” thing separate from my brain-state, and my precise causal history isn’t that important to my values.
If either kind of consciousness depends on physical brain states, computationalism is false. That is the problem that has rarely been recognised, and never addressed.
The particular *brain states* look no different in the teleporter case than if I’d stepped through a door; so if there’s something that makes the post-teleporter Rob “not me” while also making the post-doorway Rob “me”, then it must lie outside the brain states, a Cartesian Ghost.
There’s another option: door-Rob has physical continuity. There’s an analogy with the identity-over-time of physical objects: if someone destroyed the Mona Lisa, and created an atom-by-atom duplicate some time later, the duplicate would not be considered the same entity (numerical identity).
There isn’t an XML tag in the brain saying “this is a new brain, not the original”!
That’s not a strong enough argument. There isn’t an XML tag on the copy of the Mona Lisa, but it’s still a copy.
This question doesn’t really make sense from a naturalistic perspective, because there isn’t any causal mechanism that could be responsible for the difference between “a version of me that exists at 3pm tomorrow, whose experiences I should anticipate experiencing” and “an exact physical copy of me that exists at 3pm tomorrow, whose experiences I shouldn’t anticipate experiencing”.
There is, and it’s multi-way splitting, whether through copying or many-worlds branching. The present you can’t anticipate having all their experiences, because experience is experienced one at a time. They can all look back at their memories, and conclude that they were you, but you can’t simply reverse that and conclude that you will be them, because the set-up is asymmetrical.
Scenario 1 is crazy talk, and it’s not the scenario I’m talking about. When I say “You should anticipate having both experiences”, I mean it in the sense of Scenario 2.
Scenario 2: “Two separate screens.” My stream of consciousness continues from Rob-x to Rob-y, and it also continues from Rob-x to Rob-z. Or, equivalently: Rob-y feels exactly as though he was just Rob-x, and Rob-z also feels exactly as though he was just Rob-x (since each of these slightly different people has all the memories, personality traits, etc. of Rob-x — just as though they’d stepped through a doorway).
But that isn’t an experience. It’s two experiences. You will not have an experience of having two experiences. Two experiences will experience having been one person.
If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self?
Yeah.
Are you going to care about 1000 different copies equally?
I am talking about the minimal set of operations you have to perform to get experimental results. A many-worlder may care about other branches philosophically, but if they don’t renormalise, their results will be wrong, and if they don’t discard, they will do unnecessary calculation.
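A minimal numpy sketch of those two operations, in the usual textbook formalism (the four-component state and the observed subspace are illustrative assumptions):

```python
import numpy as np

# State after a measurement-like interaction: an equal superposition
# over four basis states.
psi = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)

# Suppose the observed outcome is consistent only with the first two
# basis states. Project onto that subspace...
P = np.diag([1, 1, 0, 0]).astype(complex)
psi = P @ psi

# ...renormalise, or every subsequent probability comes out wrong...
psi = psi / np.linalg.norm(psi)

# ...and discard the dead components, or you carry around amplitudes
# that can never affect your branch again.
psi = psi[:2]
print(psi)  # [0.707+0.j, 0.707+0.j]
```

Whatever gloss one puts on it philosophically, the calculation is step-for-step the same as applying a collapse.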
Err...physicists can make them in the laboratory. Or were you asking whether they are fundamental constituents of reality?
The claim that humans are at least TMs is quite different to the claim that humans are at most TMs. Only the second is computationalism.
Meanwhile the many-worlds interpretation suffers from the problem that it is hard to bridge to experience,
Operationally, it’s straightforward: you keep “erasing the part of the (alleged) wavefunction that is inconsistent with my indexical observations, and then re-normalizing the wavefunction”...all the time murmuring under your breath “this is not collapse...this is not collapse”.
(Lubos Motl is quoted making a similar comment here https://www.lesswrong.com/posts/2D9s6kpegDQtrueBE/multiple-worlds-one-universal-wave-function?commentId=8CXRntS3JkLbBaasx)
That claim is unjustified and unjustifiable.
Nothing complex is a black box, because it has components, which can potentially be understood.
Nothing artificial is a black box to the person who built it.
An LLM is, of course, complex and artificial.
Everything is fundamentally a black box until proven otherwise.
What justifies that claim?
Our ability to imagine systems behaving in ways that are 100% predictable, and our ability to test systems so as to ensure that they behave predictably.
I wasn’t arguing on that basis.
“every particle interaction creates n parallel universes which never physically interfere with each other”
Although a fairly standard way of explaining MWI, this is an example of conflating coherence and decoherence. To get branches that never interact with each other again, you need decoherence; but decoherence is a complex dynamical process that takes some time, so it is not going to occur once per elementary interaction. It is reasonable to suppose that elementary interactions produce coherent superpositions, on the other hand, but these are not mutually isolated “worlds”. And we have fairly strong evidence for them: quantum computing relies on complex coherent superpositions. So any idea that all superpositions just automatically and instantly decohere must be rejected.
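A toy numpy illustration of the difference (a sketch: a single qubit, with full dephasing standing in for environmental decoherence):

```python
import numpy as np

# Coherent superposition |+> = (|0> + |1>)/sqrt(2), as a density matrix.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_coherent = np.outer(plus, plus.conj())

# Full decoherence in the computational basis zeroes the off-diagonal
# terms, leaving a classical mixture.
rho_decohered = np.diag(np.diag(rho_coherent))

# Probability of measuring |+>: interference survives only in the
# coherent case.
proj = np.outer(plus, plus.conj())
print(np.trace(proj @ rho_coherent).real)   # 1.0
print(np.trace(proj @ rho_decohered).real)  # 0.5
```

The off-diagonal terms are what make the components parts of one superposed system rather than two isolated “worlds”; decoherence is the dynamical process that drives them towards zero, and it takes time.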
People keep coming up with derivations, and other people keep coming up with criticisms of them, which is why people keep coming up with new ones.
I don’t think this is correct, either (although it’s closer). You can’t build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate neutral.
Meaning that a strong version of computational substrate independence, where any substrate will do, is false? Maybe, but I was arguing against the hypothetical that “the substrate independence of computation implies the substrate independence of consciousness”, not *for* the antecedent, the substrate independence of computation.
What the Turing Thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., can solve any problem that can be solved at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it’s possible to integrate a function using pebbles, but it’s not possible to do it using the same computation as the ball-and-disk integrator uses—the pebbles system will perform a very different computation to obtain the same result.
I don’t see the relevance.
So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn’t follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.
OK. A crappy computational emulation might not be conscious, because it’s crappy. It still doesn’t follow that a good emulation is necessarily conscious. You’re just pointing out another possible defeater.
This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:
Which argument? Are you saying that a good enough emulation is necessarily conscious?
Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.” Note that this isn’t “I upload a brain” (which doesn’t guarantee that the same algorithm is run)
If it’s detailed enough, it’s guaranteed to. That’s what “enough” means.
but rather “here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected”.
Ok...that might prove the substrate independence of computation, which I wasn’t arguing against. Past that, I don’t see your point.
There’s a soft patch around 5 and 6. Why is testability important? It’s a characteristic of science, but science assumes an external world. It’s not a characteristic of philosophy: good explanation is enough in philosophy, and the general posit of some sort of external world does explanatory work. And it’s separate from the specific posit that the external world is knowable in some particular way.