Richard Carrier on solipsism, but not nearly as pithy:
Solipsism still requires an explanation for what you are cognizing. There are only two logically possible explanations: random chance, or design.
It’s easy to show that the probability that your stream of consciousness is a product of random chance is absurdly low (see Boltzmann brains, for example). In simple form, if we assume no prior knowledge or assumptions (other than logic and our raw uninterpreted experience), the prior probability of solipsism becomes 0.5, but the likelihood of the evidence on solipsism is then vanishingly small (approaching zero), since chance events would sooner produce a relative chaos than an organized stream of complex consciousness, whereas the likelihood of that same evidence on a modest scientific realism is effectively 100%. Work the math and the probability of chance-based solipsism is necessarily vanishingly small (albeit not zero, but close enough for any concern). Conclusion: random solipsism would sooner produce a much weirder experience.
That leaves some sort of design hypothesis, namely your mind is cleverly making everything up, just so. Which requires your mind to be vastly more intelligent and resourceful and recollectful than you experience yourself being, since you so perfectly create a reality for yourself that remains consistent and yet that you can’t control with your mind. So you control absolutely everything, yet control next to nothing, a contradiction in terms, although an extremely convoluted system of hypotheses could eliminate that contradiction with some elaborate device explaining why your subconscious is so much more powerful and brilliant and consistent and mysterious than your conscious self is. The fact that you have to develop such a vastly complex model of how your mind works, just to get solipsism to make the evidence likely (as likely as it already is on modest scientific realism), necessarily reduces the prior probability by as much, and thus the probability of intelligent solipsism is likewise vanishingly small. Conclusion: intelligent solipsism would sooner result in your being more like a god, i.e. you would have vast or total control over your reality.
One way to think of the latter demarcation of prior probability space is similar to the thermodynamic argument against our having a Boltzmann brain: solipsism is basically a Cartesian demon scenario, only the demon is you; so think of all the possible Cartesian demons, from “you can change a few things but not all,” to “you can change anything you want,” and then you’ll see the set of all possible solipsistic states in which you would have obvious supernatural powers (the ability to change aspects of reality) is vastly larger than the set of all possible solipsistic states in which you can’t change anything except in exactly the same way as a modest scientific realism would produce. In other words, we’re looking at an incredible coincidence, where the version of solipsism that is realized just “happens” to be exactly identical in all observed effects to non-solipsism. And the prior probability space shared by that extremely rare solipsism is a vanishingly small fraction of all logically possible solipsisms. Do the math and the probability of an intelligent solipsism is vanishingly small.
This all assumes you have no knowledge making any version of solipsism more likely than another. And we are effectively in that state vis-a-vis normal consciousness. However we are not in that state vis-a-vis other states of consciousness, e.g. put “I just dropped acid” or “I am sleeping” in your background knowledge and that entails a much higher probability that you are in a solipsistic state, but then that will be because the evidence will be just as such a hypothesis would predict: reality starts conforming to your whim or behaving very weirdly in ways peculiar to your own desires, expectations, fears, etc. Thus “subjective” solipsism is then not a vanishingly small probability. But “objective” solipsism would remain so (wherein reality itself is a product of your solipsistic state), since for that to explain all the same evidence requires extremely improbable coincidences again, e.g. realism explains why you need specific conditions of being drugged or sleeping to get into such a state, and why everything that happens or changes in the solipsistic state turns out not to have changed or happened when you exit that state, and why the durations and limitations and side effects and so on all are as they are, whereas pure solipsism doesn’t come with an explanation for any of that, there in that case being no actual brain or chemistry or “other reality” to return to, and so on, so you would have to build all those explanations in to get objective solipsism to predict all the same evidence, and that reduces the prior. By a lot.
There is no logically consistent way to escape the conclusion that solipsism is exceedingly improbable.
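Since Carrier twice tells us to “work the math” without actually showing it, here is a minimal sketch of the Bayesian update he seems to be gesturing at. The specific numbers (the 1e-20 likelihood and the prior-dilution factor) are illustrative assumptions of mine; the only inputs he supplies are the 0.5 indifference prior and the claim that the relevant terms are “vanishingly small”:

    def posterior(prior_s, lik_s, lik_r):
        """P(solipsism | evidence), with modest realism as the only rival."""
        prior_r = 1 - prior_s
        return (lik_s * prior_s) / (lik_s * prior_s + lik_r * prior_r)

    # Chance-based solipsism: indifference prior of 0.5, but an orderly
    # stream of consciousness is astronomically unlikely under random
    # chance (the Boltzmann-brain point), while realism predicts it well.
    print(posterior(prior_s=0.5, lik_s=1e-20, lik_r=1.0))    # ~1e-20

    # "Intelligent" solipsism: the evidence can be made likely, but only by
    # auxiliary hypotheses that shrink the prior (e.g. the tiny fraction of
    # Cartesian-demon scenarios that exactly mimic realism, with no
    # supernatural control over reality ever observed).
    print(posterior(prior_s=0.5e-20, lik_s=1.0, lik_r=1.0))  # ~5e-21

Either way the posterior collapses to roughly the tiny factor, which is all that Carrier’s “work the math” step amounts to.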
I think that’s actually a really terrible bit of arguing.
There are only two logically possible explanations: random chance, or design.
We can stop right there. If we’re all the way back at solipsism, we haven’t even gotten to defining concepts like ‘random chance’ or ‘design’, which presume an entire raft of external beliefs and assumptions, and we surely cannot immediately say there are only two categories unless, in response to any criticism, we’re going to include a hell of a lot under one of those two rubrics. Which probability are we going to use, anyway? There are many more formalized versions than just Kolmogorov’s axioms (which brings us to the analytic–synthetic problem).
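For reference (my gloss, a standard statement), Kolmogorov’s axioms make probability a measure P on a σ-algebra of events over a sample space Ω satisfying:

    P(A) \ge 0 \quad\text{for every event } A, \qquad
    P(\Omega) = 1, \qquad
    P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
    \quad\text{for pairwise disjoint } A_i

Rival foundations, such as de Finetti’s finitely additive probability or Cox’s derivation from plausibility postulates, keep only parts of this structure, so even the ‘probability’ in the argument has to be chosen first.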
And much of the rest goes on in a materialist vein which itself requires a lot of further justification (why can’t minds be ontologically simple elements? Oh, your experience in the real world with various regularities has persuaded you that it is inconsistent with the evidence? I see...). Even if we granted his claims about complexity, why do we care about complexity? And so on.
Yes, if you’re going to buy into a (very large) number of materialist non-solipsist claims, then you’re going to have trouble making a case in such terms for solipsism. But if you’ve bought all those materialist or externalist claims, you’ve already rejected solipsism and there’s no tension in the first place. And he doesn’t do a good job of explaining that at all.
Good points, but then likewise how do you define and import the designations of ‘hand’ or ‘here’ and justify intuitions or an axiomatic system of logic? (And I understood Carrier to be referring to epistemic solipsism, as with Moore; you seem to be going metaphysical. Or were you not referring to Moore’s argument in the context of skepticism?)
I think Moore’s basic argument works on the level of epistemic skepticism, yes, but also metaphysics: some sort of regular metaphysics and externalism is what one believes, and what provides the grist for the philosophical mill. If you don’t credit the regular metaphysics, then why do you credit the reasoning and arguments which led you to the more exotic metaphysics?
I’m not sure what skeptical arguments it doesn’t work for. I think it may stop at the epistemic level, but that may just be because I’m having a hard time thinking of any ethics examples (which is my usual interest on the next level down of abstraction).
The way I see it, Moore’s argument gets you to where you’re uncertain of the reasoning pro or contra skepticism. But if you start from the position of epistemic solipsism (I know my own mind, but I’m uncertain of the external world), then you have reason (more or less, depending on how uncertain you are) to side with common sense. However, if you start at metaphysical solipsism (I’m uncertain of my own mind), then such an argument could even be a reason not to side with common sense (e.g., there are little people in my mind trying to manipulate my beliefs; I must not allow them to).
So you control absolutely everything, yet control next to nothing, a contradiction in terms, although an extremely convoluted system of hypotheses could eliminate that contradiction with some elaborate device explaining why your subconscious is so much more powerful and brilliant and consistent and mysterious than your conscious self is.
Reminds me of Moore’s “here is a hand” paradox (or one man’s modus tollens is another’s modus ponens).
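To spell out the ponens/tollens point (this glossing is mine): let S be the skeptic’s hypothesis and H the claim “here is a hand”. Both sides accept the same conditional; they differ only over which premise they refuse to give up:

    S \to \neg H,\ S \;\vdash\; \neg H \qquad \text{(the skeptic's modus ponens)}
    S \to \neg H,\ H \;\vdash\; \neg S \qquad \text{(Moore's modus tollens)}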
A hypothesis like… I’m dreaming.
This also made me think of the aphorism “if water sticks in your throat, with what will you wash it down?”
Or “if salt loses its savor”, although I wonder if they’re really making the same philosophical point about relative weights of evidence on two sides of a contradiction/paradox.