Faith in maths prodigies can be misplaced. Faith in maths can be misplaced. No one has ever proved that you can solve everything with maths. The people who believe it believe it because a guru figure said so.
TAG
Positivism isn’t necessarily true, and even if it is, it still doesn’t get you to 6, because LP recommends having no metaphysics, which would rule out solipsistic metaphysics. (LP might be compatible with the claim that your own sense-data are all you can know, but that isn’t quite the same thing.)
There’s a soft patch around 5 and 6. Why is testability important? It’s a characteristic of science, but science assumes an external world. It’s not a characteristic of philosophy—good explanation is enough in philosophy, and the general posit of some sort of external world does explanatory work. And it’s separate from the specific posit that the external world is knowable in some particular way.
There is the simple observation that one has no conscious experience during dreamless sleep. (A panpsychist could respond that maybe one merely lacks memory of one’s sleeping experience, but that would be epicyclic).
That’s just ordinary compatibilism—as I said, “it’s not libertarian free will.” All the work is being done by using a definition of free will that doesn’t require indeterministic “elbow room”, so none of it is being done by all the physics and metaphysics. If it is valid, it would be just as valid under naturalistic monism, supernaturalistic determinism, etc.
And compatibilism isn’t universally accepted as the solution to free will because the quale of freedom is libertarian—one feels that one could have done otherwise. (At least, mine is like that.)
An additional non-physical layer of consciousness might buy you qualia, but delivers no guarantee that they will be accurate… a quale of libertarian free will is necessarily illusory under determinism.
An additional non-physical layer of consciousness might have bought you downwards causation and libertarian free will.
But you are not legitimising it as a subjective impression that correctly represents reality… only as an illusion: you can feel free in a deterministic world, but you can’t be free in one.
Under physicalist epiphenomenalism (which is the standard approach to the mind-matter relation), the mind is superimposed on reality, perfectly synchronized, and parallel to it.
Under dualist epiphenomenalism, that might be true. Physicalism has it either that consciousness is non-existent rather than causally idle (eliminativism), or identical to physical brain states (and therefore sharing their causal powers).
Understanding why some physical systems make an emergent consciousness appear (the so-called “hard problem” of consciousness) or finding a procedure that quantifies the intensity of consciousness emerging from a physical system (the so-called “pretty hard problem” of consciousness) is impossible:
You could have given a reason why.
It’s a warning if the history consists of various groups having extreme confidence about solving all the problems in ways that subsequent groups don’t accept.
You are conflating subjective as in “by subjects” with subjective as in “for subjects”. A subject can have preferences for objectivity, universality, impartiality, etc.
The other problem is that MWI is up against various subjective and non-realist interpretations, so it’s not the case that you can build an ontological model of every interpretation.
Huh? The whole point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically, over a run of experiments. Quantum mechanical measure—amplitude—isn’t ordinary probability, but that’s the thing you put into the Born rule, not the thing you get out of it. And it has its own role, which is explaining how much contribution to a coherent superposition each component state makes.
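A minimal sketch of the input/output distinction above: amplitudes (possibly complex, possibly unnormalized) go in, ordinary probabilities come out. The function name is my own, not a standard API.

```python
# Sketch: the Born rule maps amplitudes (quantum measure) to ordinary
# probabilities, which can then be tested frequentistically.
import math

def born_probabilities(amplitudes):
    """Return P(i) = |a_i|^2 / sum_j |a_j|^2 for complex amplitudes a_i."""
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# An equal superposition of two components, e.g. (|0> + |1>)/sqrt(2):
amps = [1 / math.sqrt(2), 1 / math.sqrt(2)]
probs = born_probabilities(amps)
# The outputs sum to 1, unlike the raw amplitudes, whose squares are
# the relative contributions each component makes to the superposition.
```

Note that the amplitudes themselves can interfere (add with phases) before measurement; only the squared magnitudes behave like probabilities.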
ETA
There is a further problem interpreting the probabilities of fully decohered branches. (Calling them Everett branches is very misleading—a clear theory of decoherence is precisely what’s lacking in Everett’s work.)
Whether you are supposed to care about them ethically is very unclear, since it is not clear how utilitarian style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.
I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but how much Born measure is concentrated on branches that contain good things instead of bad things.
It’s tempting to ethically discount low-measure decoherent branches in some way, because that most closely approximates conventional single-world utilitarianism—that is something “naive MWI” might mean. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn’t come with built-in ethics.
The alternative view starts with the question of whether a person in a low-measure world still counts as a full person. If they should not, is that because they are a near-zombie, with a faint consciousness that weighs little in a hedonic utilitarian calculus? If they are not such zombies, why would they not count as full persons—the standard utilitarian argument that people in far-off lands are still moral patients seems to apply. Of course, MWI doesn’t directly answer the question about consciousness.
(For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.)
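The arithmetic in the coin-toss example can be spelled out in a few lines (a sketch, assuming a fair quantum coin and full decoherence after each toss):

```python
# n quantum coin tosses yield 2**n distinct outcome sequences ("branches").
# For a fair coin, each branch carries Born measure (1/2)**n, and the
# measures over all branches sum to one.
n = 3
branches = 2 ** n           # 8 possible outcome sequences
measure_each = 0.5 ** n     # Born measure of each individual branch
total_measure = branches * measure_each  # should be exactly 1.0
```

This is the sense in which the measure behaves like a probability distribution over branches, even though all branches are said to coexist.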
If “naive MWI” means the idea that any elementary interaction produces decoherent branching, then it is wrong for the reasons I explain here. Since there are some coherent superpositions, and not just decoherent branches, there are cases where the Born rule gives you ordinary probabilities, as any undergraduate physics student knows.
(What is the meaning of the probability measure over the branches if all branches coexist?)
It’s not the existence, it’s the lack of interaction/interference.
By “equally” I meant:
“in the same ways (and to the same degree)”.
If you actually believed in florid many worlds, you would end up pretty insouciant, since everything possible happens, and nothing can be avoided.
Same way you know anything. “Sharp valued” and “classical” have meanings, which cash out in expected experience.
I’d guess that this illusion comes from not fully internalizing reductionism and naturalism about the mind.
Naturalism and reductionism are not sufficient to rigorously prove either form of computationalism—that performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual.
This has been going on for years: most rationalists believe in computationalism, none have a really good reason to.
Arguing down Cartesian dualism (the thing rationalists always do) doesn’t increase the probability of computationalism, because there are further possibilities, including physicalism-without-computationalism (the one rationalists keep overlooking), and scepticism about consciousness/identity.
One can of course adopt a belief in computationalism, or something else, on the basis of intuitions or probabilities. But then one is very much in the realm of Modest Epistemology, and needs to behave accordingly.
“My issue is not with your conclusion, it’s precisely with your absolute certainty, which imo you support with cyclical argumentation based on weak premises”.
Yep.
There isn’t a special extra “me” thing separate from my brain-state, and my precise causal history isn’t that important to my values.
If either kind of consciousness depends on physical brain states, computationalism is false. That is the problem that has rarely been recognised, and never addressed.
The particular *brain states* look no different in the teleporter case than if I’d stepped through a door; so if there’s something that makes the post-teleporter Rob “not me” while also making the post-doorway Rob “me”, then it must lie outside the brain states, a Cartesian Ghost.
There’s another option: door-Rob has physical continuity. There’s an analogy with the identity-over-time of physical objects: if someone destroyed the Mona Lisa, and created an atom-by-atom duplicate some time later, the duplicate would not be considered the same entity (numerical identity).
There isn’t an XML tag in the brain saying “this is a new brain, not the original”!
That’s not a strong enough argument. There isn’t an XML tag on the copy of the Mona Lisa, but it’s still a copy.
This question doesn’t really make sense from a naturalistic perspective, because there isn’t any causal mechanism that could be responsible for the difference between “a version of me that exists at 3pm tomorrow, whose experiences I should anticipate experiencing” and “an exact physical copy of me that exists at 3pm tomorrow, whose experiences I shouldn’t anticipate experiencing”.
There is, and it’s multi-way splitting, whether through copying or many-worlds branching. The present you can’t anticipate having all their experiences, because experience is experienced one at a time. They can all look back at their memories, and conclude that they were you, but you can’t simply reverse that and conclude that you will be them, because the set-up is asymmetrical.
Scenario 1 is crazy talk, and it’s not the scenario I’m talking about. When I say “You should anticipate having both experiences”, I mean it in the sense of Scenario 2.
Scenario 2: “Two separate screens.” My stream of consciousness continues from Rob-x to Rob-y, and it also continues from Rob-x to Rob-z. Or, equivalently: Rob-y feels exactly as though he was just Rob-x, and Rob-z also feels exactly as though he was just Rob-x (since each of these slightly different people has all the memories, personality traits, etc. of Rob-x — just as though they’d stepped through a doorway).
But that isn’t an experience. It’s two experiences. You will not have an experience of having two experiences. Two experiences will experience having been one person.
If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self?
Yeah.
Are you going to care about 1000 different copies equally?
I am talking about the minimal set of operations you have to perform to get experimental results. A many-worlder may care about other branches philosophically, but if they don’t renormalise, their results will be wrong, and if they don’t discard, they will do unnecessary calculation.
Err...physicists can make them in the laboratory. Or were you asking whether they are fundamental constituents of reality?
The claim that humans are at least TMs is quite different to the claim that humans are at most TMs. Only the second is computationalism.
Meanwhile the many-worlds interpretation suffers from the problem that it is hard to bridge to experience.
Operationally, it’s straightforward: you keep “erasing the part of the (alleged) wavefunction that is inconsistent with my indexical observations, and then re-normalizing the wavefunction”... all the time murmuring under your breath “this is not collapse... this is not collapse”.
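The erase-and-renormalize procedure above can be sketched in a few lines. This is an illustrative toy, not anyone’s actual formalism: the wavefunction is represented as a dict from branch labels to amplitudes, and the function name is my own.

```python
# Sketch of the operational procedure: discard components inconsistent
# with the observation, then rescale so the remaining amplitudes have
# unit norm. Numerically identical to applying a collapse postulate.
import math

def update_on_observation(wavefunction, consistent_branches):
    """Keep only branches consistent with the observation, renormalize."""
    kept = {b: a for b, a in wavefunction.items() if b in consistent_branches}
    norm = math.sqrt(sum(abs(a) ** 2 for a in kept.values()))
    return {b: a / norm for b, a in kept.items()}

psi = {"up": 0.6, "down": 0.8}   # |0.6|^2 + |0.8|^2 = 1
# The observer sees "up": erase "down", renormalize.
psi_after = update_on_observation(psi, {"up"})
# The surviving amplitude is rescaled to 1.0.
```

In practice this is exactly the calculation a collapse theorist would do, which is the sense in which MWI works like a collapse theory in the laboratory.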
(Lubos Motl is quoted making a similar comment here https://www.lesswrong.com/posts/2D9s6kpegDQtrueBE/multiple-worlds-one-universal-wave-function?commentId=8CXRntS3JkLbBaasx)
That claim is unjustified and unjustifiable
Nothing complex is a black box, because it has components, which can potentially be understood.
Nothing artificial is a black box to the person who built it.
An LLM is, of course, complex and artificial.
Everything is fundamentally a black box until proven otherwise.
What justifies that claim?
Our ability to imagine systems behaving in ways that are 100% predictable, and our ability to test systems so as to ensure that they behave predictably.
I wasn’t arguing on that basis.
If Steve is saying that the moral facts need to be intrinsically motivating, that is a stronger claim than “the good is what you should do”, i.e., it is the claim that “the good is what you would do”. But, as cubefox points out, being intrinsically motivating isn’t part of moral realism as defined in the mainstream. (It is apparently part of moral realism as defined on LW, because of something EY said years ago.) Also, since moral realism is a metaethical claim, there is no need to specify the good at the object level.
Once again, theories aren’t definitions.
People don’t all have to have the same moral theory. At the same time, there has to be a common semantic basis for disagreement, rather than talking past, to take place. “The good is what you should do” is pretty reasonable as a shared definition, since it is hard to dispute, but also neutral between “the good” being defined personally, tribally, or universally.