Hmm… I could definitely say stuff about, what’s the IB physicalism take on those questions. But this would be what you specifically said you’re not asking me to do. So, from my perspective addressing your confusion seems like a completely illegible task atm. Maybe the explanation you alluded to in the last paragraph would help.
I’d be happy to read it if you’re so inclined and think the prompt would help you refine your own thoughts, but yeah, my anticipation is that it would mostly be updating my (already decent) probability that IB physicalism is a reasonable guess.
A few words on the sort of thing that would update me, in hopes of making it slightly more legible sooner rather than later/never: there’s a difference between giving the correct answer to metaethics (“‘goodness’ refers to an objective (but complicated, and not objectively compelling) logical fact, which was physically shadowed by brains on account of the specifics of natural selection and the ancestral environment”), and the sort of argumentation that, like, walks someone from their confused state to the right answer (e.g., Eliezer’s metaethics sequence). Like, the confused person is still in a state of “it seems to me that either morality must be objectively compelling, or nothing truly matters”, and telling them your favorite theory isn’t really engaging with their intuitions. Demonstrating that your favorite theory can give consistent answers to all their questions is something; it’s evidence that you have at least produced a plausible guess. But from their confused perspective, lots of people (including the nihilists, including the Bible-based moral realists) can confidently provide answers that seem superficially consistent.
The compelling thing, at least to me and my ilk, is the demonstration of mastery and the ability to build a path from the starting intuitions to the conclusion. In the case of a person confused about metaethics, this might correspond to the ability to deconstruct the “morality must be objectively compelling, or nothing truly matters” intuition, right in front of them, such that they can recognize all the pieces inside themselves, and with a flash of clarity see the knot they were tying themselves into. At which point you can help them untie the knot, and tug on the strings, and slowly work your way up to the answer.
(The metaethics sequence is, notably, a tad longer than the answer itself.)
(If I were to write this whole concept of solutions-vs-answers up properly, I’d attempt some dialogs that make the above more concrete and less metaphorical, but \shrug.)
In the case of IB physicalism (and IB more generally), I can see how it’s providing enough consistent answers that it counts as a plausible guess. But I don’t see how to operate it to resolve my pre-existing confusions. Like, we work with (infra)measures over ΣR×Φ, and we say some fancy words about how ΣR is our “beliefs about the computations”, but as far as I’ve been able to make out, this is just a neato formalism; I don’t know how to get to that endpoint by, like, starting from my own messy intuitions about when/whether/how physical processes reflect some logical procedure. I don’t know how to, like, look inside myself, and find confusions like “does logic or physics come first?” or “do I switch which algorithm I’m instantiating when I drink alcohol?”, and disassemble them into their component parts, and gain new distinctions that show me how the apparent conflicts weren’t true conflicts and all my previous intuitions were coming at things from slightly the wrong angle, and then shift angles and have a bunch of things click into place, and realize that the seeds of the answer were inside me all along, and that the answer is clearly that the universe isn’t really just a physical arrangement of particles (or a wavefunction thereon, w/e), but one of those plus a mapping from syntax-trees to bits (here taking |R|=2). Or whatever the philosophy corresponding to “a hypothesis is a ΣR×Φ” is supposed to be. Like, I understand that it’s a neat formalism that does cool math things, and I see how it can be operated to produce consistent answers to various philosophical questions, but that’s a far cry from seeing it solve the philosophical problems at hand. Or, to say it another way, answering my confusion-handles consistently is not nearly enough to get me to take a theory philosophically seriously; like, it’s not enough to convince me that the universe actually has an assignment of syntax-trees to bits in addition to the physical state, which is what it looks to me like I’d need to believe if I actually took IB physicalism seriously.
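(To put a few concrete symbols on the picture I’m gesturing at, with the caveat that this is my own rough sketch of the type signature and may not match the official IB physicalism notation: writing ΣR as Σ_R for the space of assignments of a result in R to every program/syntax-tree, with |R|=2 meaning each computation gets a bit, and Φ for the space of physical states,)

$$
\Theta \;\in\; \Box\!\left(\Sigma_R \times \Phi\right), \qquad \Sigma_R \;:=\; \{\,\sigma : \text{programs} \to R\,\}, \qquad |R| = 2
$$

(i.e., a “universe” is a pair (σ, φ) of a syntax-tree-to-bit mapping together with a physical state, and a hypothesis Θ is an infra-measure over such pairs, where □ is my shorthand for “infra-measures on”, not necessarily the paper’s. On this reading, taking the theory seriously means believing the universe really comes equipped with such a σ in addition to φ, which is exactly the part I’m balking at.)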
I don’t think I’m capable of writing something like the metaethics sequence about IB; that’s a job for someone else. My own way of evaluating philosophical claims is more like:
Can we build an elegant, coherent mathematical theory around the claim?
Does the theory meet reasonable desiderata?
Does the theory play nicely with other theories we have high confidence in?
If there are compelling desiderata the theory doesn’t meet, can we show that meeting them is impossible?
For example, the way I came to understand that objective morality is wrong was by (i) seeing that there’s a coherent theory of agents with any utility function whatsoever, and (ii) understanding that, in terms of the physical world, “Vanessa’s utility function” is more analogous to “coastline of Africa” than to “fundamental equations of physics”.
I agree that explaining why we have certain intuitions is a valuable source of evidence, but it’s entangled with messy details of human psychology that create a lot of noise. (Notice that I’m not saying you shouldn’t use intuition, obviously intuition is an irreplaceable core part of cognition. I’m saying that explaining intuition using models of the mind, while possible and desirable, is also made difficult by the messy complexity of human minds, which in particular introduces a lot of variables that vary between people.)
Also, I want to comment on your last tagline, just because it’s too tempting:
how does the fact that larger quantum amplitudes correspond to more magical happening-ness relate to the question of how much more I should care about a simulation running on a computer with wires that are twice as thick???
I haven’t written the proofs cleanly yet (because I’m prioritizing other projects atm), but it seems that IB physicalism produces a rather elegant interpretation of QM. Many-worlds turns out to be false. The wavefunction is not “a thing that exists”. Instead, what exists is the outcomes of all possible measurements. The universe samples those outcomes from a distribution that is determined by two properties: (i) the marginal distribution of each measurement has to obey the Born rule, and (ii) the overall amount of computation done by the universe should be minimal. It follows that, outside of weird thought experiments (i.e. as long as decoherence applies), agents don’t get split into copies and quantum randomness is just ordinary randomness. (Another nice consequence is that Boltzmann brains don’t have qualia.)
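(Schematically, and with the caveat that the proofs aren’t written up and the particular symbols here are placeholder notation rather than a worked-out definition: let M be the set of all possible measurements, O_m the outcome set of measurement m, Born_m the Born-rule distribution over O_m, and C some measure of the total computation the universe has to perform. Then the claim is roughly)

$$
\mu^{*} \;\in\; \arg\min_{\mu}\left\{\, C(\mu) \;\middle|\; \mu \in \Delta\!\Big(\textstyle\prod_{m \in M} O_m\Big),\ \ \forall m \in M:\ \mathrm{marg}_m\,\mu = \mathrm{Born}_m \,\right\}
$$

(and the universe samples the actual outcomes of all measurements jointly from μ*. The “quantum randomness is just ordinary randomness” part is, roughly, that under decoherence this joint sampling behaves like a single draw from a classical distribution rather than a branching into copies.)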
What’s ordinary randomness?