Solipsism is Underrated
Consider two simple, non-exhaustive theories.
Theory 1, Strong Materialism: The only ontological primitive is matter. This is to say that there is nothing but physical things, and everything that happens can be reduced down to the material. Minds and consciousness must then be some emergent phenomenon based on matter. For example, some people who take this view speculate that consciousness may be “what an algorithm implemented on a physical machine (like your brain) feels like from the inside.”
Theory 2, Strong Solipsism: The only ontological primitive is my own mind. This is to say that there is nothing (or at least nothing worth believing in) but mental phenomena. Someone who takes this view might say that they have no good reason to believe that their experiences have a physical antecedent, and everything that exists is just what’s going on for them mentally. People with this view might point out that perceptions of things outside themselves don’t actually indicate those things — just the perceptions themselves.
There are also dualistic theories which assert the existence of both physical and mental primitives, but I don’t want to focus on them here, and they’re not too popular among the LessWrong crowd anyway. Dualism in its various forms faces problems involving the prior complexity of believing in multiple ontological primitives, as well as the curious means by which the physical and the mental are supposed to interact. It’s generally agreed that dualistic theories would force us to believe in more unlikely phenomena than either materialism or solipsism and aren’t comparatively plausible.
My impression is that most people I’ve listened to or read who have discussed these ideas (including Brian Tomasik and Eliezer Yudkowsky) seem to prefer materialism. There are two good reasons for this. First, although perceiving something does not with certainty imply the existence of that thing (only of the perception), the perception is perfectly consistent with the thing’s existence. Second, I observe many people other than myself, and their behavior and feelings seem to be a lot like my own. Given both of these observations, the likelihood in Bayes’ rule for materialism is relatively high, while for solipsism it is not.
I think these two points are strong reasons to suspect that materialism is a good theory and may have an advantage over solipsism. But it seems to me that when many people discuss the question, they mention these two points, fail to apply a crucial counterpoint, and go on discussing metaphysics as if solipsism weren’t worth much consideration. I think this is a mistake.
Anyone who supports materialism has to bite a bullet: it implies that somehow, some way, consciousness and qualia are phenomena that reduce to the activity of physical systems. From a Newtonian perspective, atoms are much like billiard balls bouncing around together, and from a quantum perspective, they are waves/particles interacting through forces and entanglements. But under either interpretation, how puzzling is the view that the activity of these little material things is somehow responsible for conscious qualia? This is where a lot of critical thinking has led many people to say things like “consciousness must be what an algorithm implemented on a physical machine feels like from the ‘inside.’” And this is a decent hypothesis, but not an explanatory one at all. The emergence of consciousness and qualia is just something that materialists need to accept as a spooky phenomenon. It’s not a very satisfying solution to the hard problem of consciousness. This belief in some mysterious ability for the mental to supervene on the physical is almost as ad hoc as the belief that dualists have in immaterial minds!
Suppose that you update on the evidence that you experience conscious qualia and your various perceptions about the world. How do the two theories compare? Both of them assume only a single primitive, be it the material or the mental (a huge advantage of either theory over dualism). But conditioning on the existence of consciousness, materialism must assume the existence of a strange phenomenon whereby physical things somehow cause qualia, while solipsism does a poor job of accounting for the structure in your observations and your perception of other people much like yourself. In this sense, materialism has a relatively low prior and solipsism has a relatively low likelihood, qualitatively at least.
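To make the comparison concrete, here’s a rough sketch in Bayes’ rule terms (the symbols are just shorthand for this post, not anything standard): write $M$ for materialism, $S$ for solipsism, and $E$ for the evidence of having structured, lawful conscious experience. Then

$$P(M \mid E) \propto P(E \mid M)\,P(M), \qquad P(S \mid E) \propto P(E \mid S)\,P(S).$$

The argument above amounts to saying that $P(E \mid M)$ is high but $P(M)$ is penalized by the extra assumption that matter somehow gives rise to qualia, while $P(S)$ is high but $P(E \mid S)$ is penalized because solipsism does little to predict structured observations.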
Comparing these two posteriors quantitatively is hard, but it’s not a lopsided debate like I think many people assume. There’s definitely a huge advantage that solipsism has over materialism. So I recommend taking it seriously.
Nice post. I tend to think that solipsism of the sort you describe (a form of “subjective idealism”) ends up looking almost like regular materialism, just phrased in a different ontology. That’s because you still have to predict all the things you observe, and in theory, you’d presumably converge on “physical laws” similar to a materialist’s to describe how the things you observe change. For example, you’ll still have your own idealist form of quantum mechanics to explain the observations you make as a quantum physicist (if you are a quantum physicist). In practice you don’t have the computing power to figure all these things out by yourself based just on your own observations, but presumably an AIXI version of you would be able to deduce the full laws of physics from just these solipsist observations.
So if the laws of physics are the same, the only difference seems to be that in the case of idealism, we call the ontological primitive “mental”, and we say that external phenomena don’t actually exist but instead we just model them as if they existed to predict experiences. I suppose this is a consistent view and isn’t that different in complexity from regular materialism. I just don’t see much motivation for it. It seems slightly more elegant to just assume that all the stuff we’re modeling as if it existed actually does exist (whatever that means).
And I’m not sure how much difference it makes to postulate that the ontological primitive is “mental” (whatever that means). Whether the ontological primitive is mental or not, there are still mechanical processes in our brains that cause us to believe we’re conscious and to ask why there’s a hard problem of consciousness. Maybe that already explains all the data, and there’s no need for us to actually be conscious (whatever that would mean).
Anyway, I find these questions to be some of the most difficult in philosophy, because it’s so hard to know what we’re even talking about. We have to explain the datum that we’re conscious, but what exactly does that datum look like? It seems that how we interpret the datum depends on what ontology we’re already assuming. A materialist interprets the datum as saying that we physically believe that we’re conscious, and materialism can explain that just fine. A non-materialist insists that there’s more to the datum than that.
Great comment. Thanks.
I can’t disagree. This definitely shifts my thinking a bit. I think that solipsism + structured observations might be comparable in complexity to materialism + an ability for qualia to arise from material phenomena. But at that point the question hinges a bit on what we think is spookier. I’m convinced that a material solution to the hard problem of consciousness is spooky. I think I could maybe be convinced that hallucinating structured observations might be similarly spooky.
And I think you’re right about the problem of knowing what we’re talking about.
Makes sense. :) To me it seems relatively plausible that the intuition of spookiness regarding materialist consciousness is just a cognitive mistake, similar to Capgras syndrome. I’m more inclined to believe this than to adopt weirder-seeming ontologies.
So evidence contrary to materialism isn’t evidence, it’s a delusion.
I think it’s all evidence, and the delusion is part of the materialist explanation of that evidence. Analogously, part of the atheist hypothesis has to be an explanation of why so many cultures developed religions.
That said, as we discussed, there’s debate over what the nature of the evidence is and whether delusions in the materialist brains of us zombies can adequately explain it.
And “fossils were created by the Devil to mislead us” is part of the theist explanation of creationism. The thing is that rationalists have complete contempt for this kind of argument in some contexts...but rationalists also believe that rationality is based on normative rules. If “don’t dismiss evidence as delusory” is a rule, it has to apply to everybody. And if it isn’t, it has to apply to nobody.
I wouldn’t support a “don’t dismiss evidence as delusory” rule. Indeed, there are some obvious delusions in the world, as well as optical illusions and such. I think the reason to have more credence in materialism than theist creationism is the relative prior probabilities of the two hypotheses: materialism is a lot simpler and seems less ad hoc. (That said, materialism can organically suggest some creationism-like scenarios, such as the simulation hypothesis.)
Ultimately the choice of what hypothesis seems simpler and less ad hoc is up to an individual to decide, as a “matter of faith”. There’s no getting around the need to start with bedrock assumptions.
A major problem with physicalist dismissal of experiential evidence (as I’ve discussed previously) is that the conventional case for believing in physics is that it explains experiential evidence, e.g. experimental results. Solomonoff induction, among the best formalizations of Occam’s razor, believes in “my observations”.
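(For concreteness, one standard way of writing the Solomonoff prior, with $x$ standing for the observation sequence, $U$ for a universal prefix machine, and $\ell(p)$ for the length of program $p$:

$$M(x) = \sum_{p \,:\, U(p)\text{ outputs a string beginning with } x} 2^{-\ell(p)}.$$

The observation sequence $x$ is the basic given; physics enters only as short programs that compress it.)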
If basic facts like “I have observations” are being doubted, then any case for belief in physics has to go through something independent of its explanations of experiential evidence. This looks to be a difficult problem.
You could potentially resolve the problem by saying that only some observations, such as those of mechanical measuring devices, count; however, this still leads to a problem analogous to the hard problem of consciousness, namely: what is the mapping between physics and the outputs of the mechanical measuring devices that the theories are explaining? (The same problem of “what data is the theorizing trying to explain” comes up whether the theorizing happens in a single brain or in a distributed intelligence, e.g. a collection of people using the scientific method.)
There are significant differences between observations in the sense of pointer positions, and qualia.
That’s much more like the easy problem.
OK, but then you have parted company with the strong program in rationalism, the idea that all rational agents should be able to converge on objective truth.
I have “faith” in things like Occam’s razor and hope it helps get toward objective truth, but there’s no way to know for sure. Without constraints on the prior, we can’t say much of anything beyond the data we have.
https://en.wikipedia.org/wiki/No_free_lunch_theorem#Implications_for_computing_and_for_the_scientific_method
For example, without an assumption that nature is regular, a million observations of the sun having risen on past days would tell us nothing about whether it will rise again tomorrow.
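(To make that concrete with a standard textbook illustration, not something specific to this thread: Laplace’s rule of succession, which does build in a regularity assumption via a uniform prior over a fixed sunrise probability, gives

$$P(\text{sunrise tomorrow} \mid n \text{ past sunrises}) = \frac{n+1}{n+2},$$

which is about 0.999999 for $n = 10^6$. Drop any constraint on the prior, and the same million observations are compatible with assigning the next sunrise any probability whatsoever.)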
Occam’s razor tells you to find the simplest explanation for the evidence, so it is downstream of the question of what constitutes evidence.
My comment about Occam’s razor was in reply to “the idea that all rational agents should be able to converge on objective truth.” I was pointing out that even if you agree on the data, you still may not agree on the conclusions if you have different priors. But yes, you’re right that you may not agree on how to characterize the data either.
Which is to say that idealistic instrumentalism is as complex as materialistic instrumentalism. The complexity of the minimum ruleset you need to predict observation is the same in each case. But that doesn’t mean the complexity of materialist ontology is the same as the complexity of idealist ontology. Idealism asserts that mentality, or some aspect of it, is fundamental, whereas materialism says that it is all a complex mechanism. So idealism is asserting a simpler ontology. Which is itself pretty orthogonal to the question of how much complexity you need to predict observation. (Of course, the same confusion infects discussions of the relative complexity of different interpretations of quantum mechanics.)
Yes. It’s hard to agree on what evidence is, meaning that it is hard to do philosophy, and impossible to do philosophy algorithmically.
Note that if someone assigns probabilities to these models, and conditional outcomes based on what they say publicly, then materialism is far more beneficial to express than solipsism, even if one finds solipsism more likely.
If materialism and some form of causality are true, then you have some impact by publicly supporting them and pushing other people toward them. If instead solipsism is true, and you push these hallucinations you call “other people” toward it, you get no benefit.
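In rough expected-value terms (my own toy notation, not anything from the post): let $p$ be your credence in materialism-plus-causality and $v > 0$ the value of moving real other people toward a true view. Then

$$\mathbb{E}[\text{advocate materialism}] \approx p\,v + (1-p)\cdot 0 = p\,v,$$

while advocating solipsism pays nothing if solipsism is true, and arguably less than nothing if materialism is true, so expressing materialism dominates for any $p > 0$.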
Much like other questions of free will and causality, as long as I assign some chance that it’s real and I have choices that matter, I should behave as if that’s certain. Even if I’m privately not convinced, there’s no benefit to acting on that doubt.
I agree; thanks for the comment. When writing this post, my goal was to share a reflection on solipsism in a vacuum rather than in the context of decision theory. I acknowledge that solipsism doesn’t really tend to drive someone toward caring much about others and such. In that sense, it’s not very productive if someone is altruistically/externally motivated.
I don’t want to give any impression that this is a particularly important decision theoretic question. :)
Mostly my comment was a response to the word “underrated” in the title. We wouldn’t know how it’s rated, because, by its nature, it’s going to be less proselytized. A quibble, to be sure, but “underrepresented” is probably more accurate.
I don’t know whether you missed this or just didn’t spell it out, but the reason that the likelihood ratio is so much in favor of materialism over solipsism is that if solipsism were true, you could experience literally anything, and apparently ordered universes consistent with simple physical laws are a vanishing subset of the possible experiences.
You’d need a solipsist theory that strongly predicts ordered universes, without adding in too much complexity, just in order to be in the conversation with the probability that qualia are reducible to the material.
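(A back-of-the-envelope way to see the size of the gap, under a toy coding assumption of my own: if a lifetime of experience can be coded as an $n$-bit sequence, an unstructured solipsism that treats every possible experience stream as equally likely assigns the actually observed, highly ordered stream a probability of about $2^{-n}$, whereas a model with simple physical laws concentrates most of its mass on ordered streams. The likelihood ratio can then be roughly exponential in $n$ unless the solipsist theory builds the structure back in.)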
“Qualia being irreducible” is, to be as charitable to you as possible, in the reference class of philosophical positions that some people have seen as unassailable and others have seen as flawed. You don’t get to assign incredibly high probability within this reference class.
(To be uncharitable, it is an intuition for which you cannot provide even what looks like an airtight philosophical argument, just louder reiterations of your intuition.)
Thanks.
I disagree a bit. My point has been that it’s easy for solipsism to explain consciousness and hard for materialism to, while it’s easy for materialism to account for structure and hard for solipsism to. Don’t interpret the post as my saying solipsism wins, just that it’s underrated. I also don’t say qualia must be irreducible, just that there’s spookiness in how they could reduce to the material.
“lack of a satisfying explanatory solution” does not imply low likelihood if you think that the explanatory solution exists but is computationally hard to find (which in fact seems pretty reasonable).
Like, the same structure of argument could be used to argue that computers are extremely low likelihood: how puzzling is the view that the activity of electrons moving around is somehow responsible for proving mathematical theorems?
With laptops, we of course have a good explanation of how computation arises from electrons, but that’s because we designed them—it would probably be much harder if we had no knowledge of laptops or even electricity and then were handed a laptop and asked to explain how it could reliably produce true mathematical theorems. This seems pretty analogous to the situation we find ourselves in with consciousness.
Thanks for the comment. I’m not 100% sold on the computers analogy. I think answering the hard problem of consciousness is significantly different from understanding how complex information-processing systems like computers work. Any definition or framing of consciousness in terms of information theory or computation may allow it to be studied in those terms, in the same way that computers can be understood by theoretical reasoning about systems and abstractions. However, I don’t think this is what it means to solve the hard problem of consciousness. It seems more like solving the problem with a definition rather than an explanation.
I wonder how much differing perspectives here are due to differing intuitions. But in any case, I hope this makes my thinking more clear.
OTOH, you should keep lowering the probability of ever finding a satisfactory explanation the longer you keep failing to find one.
This update seems like it would be extraordinarily small, given our poor understanding of the brain, and the relatively small amount of concerted effort that goes into understanding consciousness.
We don’t have a uniformly poor understanding: we understand some aspects of mentality much better than others.
You haven’t addressed the “what’s so special about you” objection.
What exactly would it mean to perform a Bayesian update on you not experiencing qualia?
The primitives of materialism are described in equations. Does a solipsist seek an equation to tell them how angry they will be next Tuesday? If not, what is the substance of a solipsistic model of the world?
I am not sure what you mean by that. I consider my mind to be just an arrangement of atoms, an arrangement governed by the same laws as the rest of the universe.
I am not sure where the instinct that consciousness can’t be materialistic comes from, although I would suspect that it might come from a large amount of uncertainty, and an inability to imagine any specific answer that you would consider a good explanation. Wherever this instinct comes from, I don’t think it is reliable.
You know that “if a tree falls in a forest, and there is no one there to hear it, does it make a sound?” thing. Even after all the factual questions, like whether audio equipment would record something, have been answered, there is a feeling of a question remaining. I expect any explanation of qualia to look somewhat similar: a description of how mental imperfections produce a sensation of something.
Consider the limiting case of describing minds in terms of algorithms: you scan a philosopher’s brain, put the data into a computer, and predict exactly their discussion of qualia. Once you have a complete understanding of why the philosopher talks about qualia, then if the philosopher has any info about qualia at all, the process by which they gained that info should be part of the model.
Pick something up, drop it, watch it fall. Can solipsism consider this observation to be more likely than some max-entropy observation? How does a solipsist predict the experience of watching the object fall?
The unstated background assumption of the article you are responding to is that the hard problem of consciousness is real and hard. It is certainly hard to dispute that we have made no progress in writing algorithms that experience sensations or feelings. Whether we ever will is another matter, but impossibility arguments exist.
Is that a falsifiable hypothesis? What would falsify it?
That isn’t an understanding of a philosopher’s brain; it’s an artificial construct that produces the same outputs given the same inputs. The function of the human kidney can be replaced by a kidney dialysis machine, but that does not mean kidneys don’t exist, nor does it mean that you can understand how kidneys work by looking at dialysis machines.
Thanks! This is insightful.
Good point. In an anthropic sense, the sentence this is a reply to could be redacted. Experiencing qualia themselves would not be evidence to prefer one theory over another. Only experiencing certain types of observations would cause a meaningful update.
I think this is the same type of argument as saying that other people whom I observe seem to be very similar to me. The materialistic interpretation makes us believe in a less capricious world, but there’s the trouble of explaining how consciousness results from material phenomena. This is similar to my thoughts on the final 4 paragraphs of what you wrote.
I think that works well. But I don’t think that subjective experience falls out of this interpretation for free.
Someone who knows quantum physics but almost no computing looks at a phone. They don’t know how it works inside. They are uncertain about how apps result from material phenomena. This is just normal uncertainty over a set of hypotheses. One of those hypotheses is the actual answer; many others will look like alternate choices of circuit board layout or programming language. They still need to find out how the phone works, but that is because they have many hypotheses that involve atoms. They have no reason to doubt that the phone is made of atoms.
I don’t know how your brain works either, but I am equally sure it is made of (atoms, quantum waves, strings or whatever). I apply the same to my own brain.
In the materialist paradigm I can understand Newtonian gravity as at least an approximation of whatever the real rules are. How does a solipsist consider it?