The next mistake was opening the door to solipsism and Brain-in-a-Vat arguments. This was so traumatic to me that I spent years in a manic depression.
Consider the possibility that the manic-depression was coincidental. When people have mental things happen for fundamentally biochemical reasons, they often misattribute them to the most plausible-seeming non-biochemical cause they can think of. Exposure to ideas can exacerbate an existing problem, but it is unlikely that the lowest-hanging fruit here has anything at all to do with the ideas themselves. Instead of looking at how you engage with stressful ideas, consider looking into other aspects of your life which might reduce your resilience.
With that said...
You started with a set of values and preferences and an ontology. When you encountered dust theory, you discovered that one of the concepts used to define your values—the notion of personal identity—wasn’t fully coherent. You then tried to substitute a different definition in its place—an alternative notion of personal identity, which might not carry across a sleep/wake cycle. This alternate notion of identity is not the thing you care about. A small philosophically-minded portion of your brain has decided that it is what you care about, and is now in conflict with the other parts of your brain which don’t accept the altered values. Listen to them; while those brain-parts aren’t good at explaining things, they have knowledge, and in this case they are right.
Replies to the comment you are now reading accurately describe my ideas, so the original post has been replaced by this disclaimer to save you time :)
How would the app know? You would need some sort of automatic system that scans every parking spot to see if there is a car currently in it.
How difficult would this be, out of curiosity, keeping in mind that you don’t need 100% accuracy? I can think of a couple approaches, though probably nothing that would be supported by any revenue model I can think of off the top of my head.
or just crowdsource it :)
This is putting the cart before the horse. A crowdsourced app that requires users to report ACCURATELY which parking spots are free, and when, will only work once it has a lot of users. But it can’t get users unless it’s a useful app.
Unless it’s built upon existing platforms that map out where paid parking spots are, so that users already benefit from a service from the app. Parking spot owners have an incentive to report their spots to get parkers. Not to mention shopping centres and other businesses want to attract people to the area.
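As a rough sketch of the crowdsourcing idea above, here is one way imperfect user reports could be aggregated into a usable estimate, weighting recent reports more heavily. Everything here (the function name, the report format, the decay constant) is invented for illustration and not taken from any existing app:

```python
# Hypothetical sketch: turn noisy, possibly stale crowd reports about one
# parking spot into a rough probability that the spot is currently free.
import time

def prob_free(reports, half_life_s=600.0, prior=0.5):
    """reports: list of (unix_timestamp, is_free) tuples submitted by users."""
    now = time.time()
    weighted_free = 0.0
    weighted_total = 0.0
    for ts, is_free in reports:
        weight = 0.5 ** ((now - ts) / half_life_s)  # older reports count less
        weighted_free += weight if is_free else 0.0
        weighted_total += weight
    if weighted_total == 0.0:
        return prior  # no reports yet: fall back to a prior guess
    return weighted_free / weighted_total

# Example: a "free" report 5 minutes ago outweighs a "taken" report 30 minutes ago.
reports = [(time.time() - 300, True), (time.time() - 1800, False)]
print(round(prob_free(reports), 2))  # roughly 0.85
```

The point is only that the app does not need every report to be accurate; enough recent reports, weighted sensibly, get it close.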
I have. It definitely isn’t. It may have been exacerbated by biochemical causes, but it wasn’t caused by them alone. (Sertraline did help me, just never as much as nullifying an existential problem.)
So you accept the argument?
I have no idea what you are trying to say, beyond “listen to your instincts because they are more suited for the real world than your intellect.”
The fact that taking drugs for your mental issues doesn’t nullify your concerns about existential problems in no way implies that your worries about those problems don’t come as a result of mental health issues.
Sure, but I can say that I wouldn’t be depressed at all if not for those existential problems. I mean, I would be depressed but in a general, background sort of way.
You can say that, and of course it seems true to you. It’s just like how it feels true to the schizophrenic that the CIA is out to get him, and that his paranoia is due to the CIA trying to get him and not due to the fact that he’s a schizophrenic.
Psychological research in general suggests that people are quite good at finding ways to rationalize their emotions. There’s a strong outside view that suggests rationalizations are usually not the root cause.
I’ve considered it at various points over the last seven years. I think I’ve justified it properly.
The nature of outside views is that they are going to be wrong eventually.
Of course you do, as the pressures for internal mental consistency are very strong.
This isn’t an argument, it’s Descartes’ demon.
Understanding mental biases and how our brains play tricks on us is a core part of LW. It has less to do with logical argument than with modern psychological research.
It’s no easy skill to notice when your emotions prevent you from clearly thinking about an issue.
Saying “The nature of outside views is that they are going to be wrong eventually.” is also very particular. If I’m testing gravity by repeating scientific experiments whereby I drop balls, I’m engaging in the outside view. Science is all about the outside view instead of subjective experience.
When one is subject to a mental illness that is generally known to make one think irrationally about an issue, it’s useful not to trust one’s own reasoning and instead to seek help for the mental illness from trustworthy people. Bootstrapping trust isn’t easy. There are valid reasons why you might not trust the average psychologist enough to trust his judgement over your own.
The general approach is to find trustworthy in-person friends. For LW-type ideas, you find them at LW meetups. You likely don’t want to pull all your information from people at a LW meetup, but if your LW friends say that you are irrational about an issue, your mainstream psychologist tells you that you are irrational about the issue, and your other social contacts also tell you that you are irrational, then no matter how strongly it feels like you are right, you should assume that you aren’t.
Well, I definitely know that my depression is causally tied to my existential pessimism. I just don’t know if it’s the only factor, or if fixing something else will stop it for good. But as I said, I don’t necessarily want to default to ape mode.
Out of curiosity, how do you know that this is the direction of the causal link? The experiences you have mentioned in the thread seem to also be consistent with depression causing you to get hung up on existential pessimism.
I go through long periods of peace, only to find my world completely shaken as I experience some fearful epiphany. And I’ve experienced a complete cessation of that feeling when it is decisively refuted.
Okay, but at best, this shows that the immediate cause of you being shaken and coming out of it is related to fearful epiphanies. Is it not plausible that whether, at a given time, you find a particular idea horrific, or are able to accept a solution as satisfying, depends on your mental state?
Consider this hypothetical narrative. Let Frank (name chosen at random) be a person suffering from occasional bouts of depression. When he is healthy, he notices and enjoys interacting with the world around him. When he is depressed, he instead focuses on real or imagined problems in his life—and in particular, how stressful his work is.
When asked, Frank explains that his depression is caused by problems at work. He explains that when he gets assigned a particularly unpleasant project, his depression flares up. The depression doesn’t clear up until things get easier. Frank explains that once he finishes a project and is assigned something else, his depression clears up (unless the new project is just as bad); or sometimes, through much struggle, he figures out how to make the project bearable, and that resolves the depression as well.
Frank is genuine in expressing his feelings, and correct about work problems being correlated with his depression, but he is wrong about causation between the two.
Do you find this story analogous to your situation? If not, why not?
I find it hard to believe. But maybe I’ve always been depressed and that’s why I’ve suffered from them so badly.
I think he was trying to make a map-territory distinction. You have a mental model of how your brain computes value. You also have your brain, computing value however it actually computes value. Since our values are quite complex, and likely due to a number of different physical causes, it is reasonable to conclude that our mental model is at best an imperfect approximation.
I don’t think he’s trying to say “listen to your heart” so much as “the map is not the territory, but both are inside your brain in this instance. Because of this, it is possible to follow the territory directly, rather than following your imperfect map of the territory.”
That said, we are now a couple meta-levels away from your original question. To bring things back around, I’d suggest that you try and keep in mind that any odd, extreme predictions your mental models make may be flaws in an oversimplified model, and not real existential disasters. In some cases, this may not seem to be the case given other pieces of evidence, but hopefully in other instances it helps.
The greater the inferential distance you have to go to reach an uncomfortable conclusion, the higher the likelihood that there is a subtle logical flaw somewhere, or (much more common) some unknown-unknown that isn’t even being taken into account. LessWrong tends to deal with highly abstract concepts many steps removed from observations and scientifically validated truths, so I suspect that a large fraction of such ideas will be discredited by new evidence. Consider shifting your probability estimates for such things down by an order of magnitude or more, if you have not already done so. (That last paragraph was an extremely compressed form of what should be a much larger discussion. This hits on a lot of key points, though.)
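A rough numerical illustration of that last paragraph (my own sketch, with the per-step reliability of 0.9 chosen arbitrarily rather than taken from the comment): if each inferential step is independently sound with some probability, the chance that a long chain survives intact drops off quickly, which is one way to motivate an order-of-magnitude discount.

```python
# Toy model: if each inferential step is sound with probability p, an n-step
# chain of reasoning is sound with probability at most p ** n (assuming
# independence, which is itself generous for abstract arguments).
def chain_confidence(p_per_step: float, n_steps: int) -> float:
    return p_per_step ** n_steps

for n in (3, 10, 20):
    print(n, round(chain_confidence(0.9, n), 3))
# prints: 3 0.729, 10 0.349, 20 0.122
```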
That does sound like reasonable advice… however I now have empirical evidence for Dust Theory. Still, most of the horrible problems in it seem to have been defused.
What is your empirical evidence for dust theory?
Point 2: http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/ck0q
That doesn’t even remotely meet the bar for ‘evidence’ from my standpoint. At best, you could say that it’s a tack-on to the original idea to make it match reality better.
Put another way, it’s not evidence that makes the idea more likely, it’s an addition that increases the complexity yet still leaves you in a state where there are no observables to test or falsify anything.
In common terms, that’s called a ‘net loss’.
Why do we dream? Because a large number of conscious beings join the measure of beings who can. That’s why we find ourselves as pre-singularity humans. I’d say that’s empirical evidence.
Sorry, but evidence doesn’t really work that way. Even if we allow it, it is exceptionally weak evidence, and not enough to distinguish ‘dust theory’ from any other of the countless ideas in that same category. Again, it looks to me like a tack-on to the original idea that is needed simply to make the idea compatible with existing evidence.
As for why we dream, it’s actually because of particles, forces, and biochemistry. A mundane explanation for a mundane process. No group hive mind of spirit energy or “measure of beings” required.
Dreaming is a very specific process that seems optimized to the scenario I described with DT. Do these other ideas predict the same?
So you are saying that humans or humanlike minds are the most common type of consciousness that is mathematically possible?
“Dreaming is a very specific process that seems optimized to demonstrate the existence of a dream realm.”
“Dreaming is a very specific process that seems optimized to recharge the Earth Spirit that is Mother Gaia.”
“Dreaming is a very specific process by which Wyvren allows us to communicate with Legends.”
I have literally no idea how you could possibly draw that conclusion from the statement that dreaming has a mundane physics-based explanation. The two things aren’t even remotely related.
Dust Theory is a coherent philosophical idea that has certain logical arguments to be made for it, based on our scientific knowledge of minds and quantum theory.
No, they aren’t. Of course dreaming has a mundane physics-based explanation; Dust Theory predicts that as well. We just find ourselves in a universe where dreaming exists.
Sertraline has insomnia listed as a very common (>10%) side effect. If you’re currently on it, this is a more parsimonious explanation for your difficulty sleeping than your philosophical beliefs about how sleeping interacts with subjective experience.
I’m not on it. I don’t have difficulty falling asleep, it’s just traumatizing to get in bed.
What is likely is that the plausible cause was a cause too.
The biochemistry pushes him close to the edge, and the “plausible cause” pushes him off.
Dust theory doesn’t show anything to be incoherent, because it’s only a theory. One can take its unwelcome conclusions to be a reductio ad absurdum of its premises.
It’s not a theory, it’s not even a hypothesis—it’s an idea. The bar for theory and hypothesis is far above what ‘dust theory’ can manage at this point.