This is a short argument against subjective idealism. Since I don’t think there are (m)any subjective idealists on this site, I’ve decided to make it a shortform rather than a full post.
We don’t know how big reality really is. Most people agree that it isn’t smaller than the things you perceive, because if I have a perception of something, the perception exists. Subjective idealism says that only perceptions are real and that things outside of our perception don’t exist:
But if you’re not infinitely certain that subjective idealism is correct, then you have to assign at least some probability to a different model of reality (e.g. your perceptions + one other category of things) being true:
But of course there are many other types of models that could also be true:
In fact, the other models outnumber subjective idealism infinity to one, making it seem more probable that things outside your immediate perception exist.
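One rough way to put numbers on this, under the strong assumption of a uniform prior over models: if there are $k$ candidate categories of things beyond your perception, each of which either exists or doesn’t, then there are $2^k$ possible models, and subjective idealism (none of them exist) is exactly one of them. A uniform prior would give it credence $2^{-k}$, which shrinks toward zero as $k$ grows.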
(I don’t think this argument is particularly strong in itself, but it could be used to strengthen other arguments.)
It bothers me that there are no red things in your perception in any of your pictures.
I already mentioned in the post: “Most people agree that it isn’t smaller than the things you perceive, because if I have a perception of something, the perception exists.”
Obviously you can hallucinate a bear without there being a bear, but the hallucination of the bear would still exist (according to most people). There are models that say that even sense data does not exist, but those models are very strange, unpopular, and unpersuasive (for me and most other people). But if you think that both the phenomenon and the noumenon don’t exist, then I would be interested in hearing your reasons for that conclusion.
The biggest world where that’s the case, for me, is some form of the malicious demon argument.
I can make mistakes when doing things like adding 297 + 972, such as forgetting to carry a one.
Could there be a malicious demon that always makes me make the same mistake, so that I really believe the answer is 1296, because every time I check using different procedures I get the same answer?
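For concreteness, here is a minimal sketch (purely illustrative, not from the comment) of what “checking using different procedures” could look like: three independent ways of computing 297 + 972, all of which agree on 1269. The demon hypothesis is that some systematic error could make every one of them agree on 1296 instead.

```python
# Three independent procedures for computing 297 + 972.
# All agree on 1269; the demon scenario is one where some baked-in
# error makes every check come out as 1296 instead.

a, b = 297, 972

# Procedure 1: built-in addition.
r1 = a + b

# Procedure 2: schoolbook digit-by-digit addition with explicit carrying.
def add_with_carry(x, y):
    total, carry, place = 0, 0, 1
    while x > 0 or y > 0 or carry > 0:
        digit_sum = x % 10 + y % 10 + carry
        total += (digit_sum % 10) * place   # keep the low digit
        carry = digit_sum // 10             # carry the one (or not)
        x, y, place = x // 10, y // 10, place * 10
    return total

r2 = add_with_carry(a, b)

# Procedure 3: counting up from a, one step at a time, b times.
r3 = a
for _ in range(b):
    r3 += 1

assert r1 == r2 == r3 == 1269  # every procedure gives the same answer
```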
Could the same malicious demon then make me make a separate mistake, so that I believed that 2 + 2 = 5? It just has to be a bigger mistake that I make every time; that doesn’t seem different in kind from the previous thought.
Logically, my experience exists, because that’s a priori the definition of existence. But couldn’t the same malicious demon make me believe that reasoning was logically sound, while actually there was some error I was making every time I drew that conclusion? Again, that doesn’t seem very different in kind from believing 2 + 2 = 5.
In the space of all possible minds, is it possible there are some that have a malicious demon baked in? If mine were one, how would I know?
Yes, the malicious demon was also the model that sprang to my mind. To answer your question: there are certainly possible minds that have “demons” (or faulty algorithms) that make finding their internal mistakes impossible (though my current model says that evolution wouldn’t let those minds live for very long). This argument shares a feature with the simulation argument, in that any counterargument can be countered with “But what if the simulation/demon wants you to think that?” I don’t have any real solution for this, except to say that it doesn’t really matter for our everyday life and we shouldn’t put too much energy into trying to counter the uncounterable (but that feels kinda lame, tbh).
I think that’s true in everyday life, but not when you’re doing philosophy of mind, as in the post above. I don’t think any of your argument is wrong; I just think you should include, in your reasoning, the possibility that your observations don’t exist.
Well, to be fair, this was just a short argument against subjective idealism, with three pictures to briefly illustrate the point; it was not (nor did it claim to be) a comprehensive list of all the possible models in the philosophy of mind (otherwise I would also have to include pictures with the perception being red and the outside being green, or half being green no matter where they are, or everything being red, or everything being green, etc.).
Yes, that’s fair. This was definitely a nitpicky request.
Isn’t this a universal argument against everything? “There are so many other things that might be true, so how can you be sure of this one?”
It’s a valid argument, too.
In Already In Motion [actually not, but somewhere] EY noticed that epistemic processes start from unjustified assumptions, and concluded that, even so, you should carry on as before. But even though scepticism doesn’t motivate you to switch to different beliefs, it should motivate you to be less certain of everything.
Less certain than what, though? That’s an update you make once only, perhaps in childhood, when you first wake up to the separation between perceptions and the outside world, between beliefs and perceptions, and so on up the ladder of abstraction.
Of course, it depends on where you are starting from. I was kind of aiming at people with Yudkowsky’s level of certitude, such as the man himself.
It’s not clear what “subjective idealism is correct” means, because it’s not clear what “a given thing is real” means (at least in the context of this thread). It should be more clear what a claim means before it makes sense to discuss levels of credence in it.
If we are working with credences assigned to hypotheticals, the fact that the number of disjoint hypotheticals incompatible with some hypothetical S is large doesn’t in itself make them (when considered altogether) more probable than S. (A sum of an infinite number of small numbers can still be small.)
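For example, the alternatives can be assigned geometrically shrinking credences: take $P(H_n) = \varepsilon / 2^n$ for some small $\varepsilon > 0$, so that

$$\sum_{n=1}^{\infty} P(H_n) = \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} = \varepsilon,$$

and the infinitely many alternatives taken together are still less probable than $S$ whenever $P(S) > \varepsilon$.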
Working with credences in hypotheticals is not the only possible way to reason. If we are talking about weird things like subjective idealism, assumptions about epistemics are not straightforward and should be considered.
You are correct: this argument only works if you have a specific epistemic framework together with a subjective idealistic framework, a combination that might not coincide in most subjective idealists. I only wrote it down because I happened to have used this argument successfully against someone with this framework (and I also liked the visualization I made for it). I didn’t want to go into what “a given thing is real” means, because it’s a giant can of philosophical worms and I try to keep my shortforms short. Needless to say, this argument works with some philosophical definitions of “real” but not others. So, as I said, this argument is pretty weak in itself and can only be used in certain situations in conjunction with other arguments.
(I think making arguments clear is more meaningful than using them for persuasion.)
This goes without saying, and I apologize if I gave the impression that people should use this argument and its visualization to persuade rather than to explain.