I find the question, “What would change my mind?”, to be quite powerful, psychotherapeutic even. AKA “singlecruxing”. It cuts right through to seeking disconfirmation of one’s model, and can make the model more explicit, legible, object. It proactively seeks out the data, rather than merely trying to reduce the feeling of avoidant deflection that comes with shielding a beloved notion from assault. It also seems to comport well with the OODA loop. Taken from Raemon’s “Keeping Beliefs Cruxy”.
I am curious how others ask this question of themselves. What follows is me practicing the question.
What would change my mind about the existence of the moon? Here are some hypotheses:
I look up at the sky every few hours for several days and nights and see that it’s not there.
I see over a dozen posts on my Facebook feed saying it turns out the moon was just a cardboard cutout and SpaceX accidentally tore a hole in it. They show convincing video of the accident and footage of people reacting, such as world leaders convening to discuss it.
Multiple friends are very concerned about my belief in this luminous, reflective rocky body. They suggest I go see a doctor or the government will throw me in the lunatics’ asylum. The doctor prescribes me a pill and I no longer believe.
It turns out I was deluded and now I’m relieved to be sane.
It turns out they have brainwashed me and now I’m relieved to be sane.
I am hit over the head with a rock, which permanently damages my ability to form lunar concepts. Or it outright kills me. I think this Goodharts (is that the closest term I’m looking for?) the question, but it’s interesting to know what bad/non-epistemic/out-of-context reasons would make me stop believing in a thing.
These anticipations were System 2-generated, and I’m still uncertain to what extent I can imagine them actually happening and changing my mind. It’s probably sane and functional that the mind doesn’t just let you update on anything you imagine, though I’ve also heard the apocryphal saying that the mind 80%-believes whatever you imagine is real.
An interesting second exercise you might apply here is taking note of what other beliefs in your network would have to change (you sort of touch on this here). If you find out the moon isn’t real, you’ve found out something very important about your entire epistemic state. That does make updating on it harder, or at least more interesting.
You bring to mind a visual of the Power of a Mind as this dense directed cyclic graph of beliefs where updates propagate in one fluid circuit at the speed of thought.
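To make that visual slightly more concrete, here is a minimal toy sketch in Python of beliefs as a directed (possibly cyclic) graph, where a shock to one credence drags its neighbors along weighted edges over a few passes. The node names, edge weights, and update rule are all invented for illustration; this is not any established formalism for belief revision.

```python
# Toy sketch: beliefs as nodes in a directed, possibly cyclic graph.
# A shock to one credence pulls connected credences toward it.
# All numbers and names here are made up for illustration.

beliefs = {
    "moon_exists": 0.999,
    "my_eyes_work": 0.99,
    "consensus_reality_is_reliable": 0.95,
}

# (source, target, weight): fraction of the gap between target and source
# closed on each pass of propagation
edges = [
    ("moon_exists", "my_eyes_work", 0.6),
    ("moon_exists", "consensus_reality_is_reliable", 0.8),
    ("consensus_reality_is_reliable", "moon_exists", 0.3),  # a cycle
]

def propagate(beliefs, shocks, passes=5):
    """Apply external shocks, then let them ripple through the graph."""
    credences = {**beliefs, **shocks}
    for _ in range(passes):
        for src, dst, w in edges:
            if dst in shocks:  # treat shocked nodes as fixed evidence
                continue
            credences[dst] += w * (credences[src] - credences[dst])
    return credences

# Discovering the moon is a cardboard cutout drags the rest down with it:
print(propagate(beliefs, {"moon_exists": 0.01}))
```

Running this, the credences downstream of the shocked node sink over a few passes, which is roughly the “updates propagate in one fluid circuit” picture, minus the speed of thought.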
I wonder what formalized measures of [agency, updateability, connectedness, coherence, epistemic unity, whatever sounds related to this general idea] are put forth by different theories (schools of psychotherapy, predictive processing, Buddhism, Bayesian epistemology, sales training manuals, military strategy, machine learning, neuroscience...) related to the mind and how much consilience there is between them. Do we already know how to rigorously describe peak mental functioning?