Interesting—that makes sense… though do you think you’d need to somehow perform these acts subconsciously? I guess you clarified that the sky was too obvious, but when you first wrote this, I thought that it wouldn’t work if I had a “meta-awareness” of my “rigging” of circumstances to produce a given belief. I’d know I was trying to trick myself and thus it’d seem like a game.
But perhaps that’s why you retracted for such a clear-cut objective case?
I’ll ponder this more. I appreciate the comment. I’m sure I do this myself and often talk aloud to myself when I’m feeling something I think is irrational, say about being late to a meeting and feeling extremely self-critical or worrying about what others think. I kind of talk to myself and try to come to the conclusion that what’s happened has happened and, despite the setback which led me to be late, I’m doing the best I can now and thus shouldn’t be condemning myself.
Kind of like that?
IME, “subconsciously” doesn’t really enter into it… I’m not tricking myself into believing something (where I would have to be unaware of the trick for it to work), I’m setting up a series of situations that will demonstrate what I want to believe. It’s a little bit like training my dog, I guess… it works without reference to an explicit cognitive representation of what’s being learned.
But then, I’ve never tried to do this for something that I actually believe to be false, as opposed to something that I either believe to be true but react to emotionally as though it were false, or something where my confidence in its truth or falsehood is low and I’m artificially bolstering one of them for pragmatic reasons.
Maybe I just need a concrete example :)
I do think it makes more sense now that you’ve added that it’s [most likely] something you already believe in but are not “emotionally aligned” with. I’d be interested in an example of using this to promote action in a near 50⁄50 truth/falsehood estimate situation.
Thanks for the continued discussion!
My favorite example is sort of a degenerate case, and so might be more distracting than illustrative, but I’ll share it anyway: a programmer friend of mine has a utility on his desktop called “placebo.”
When executed, it prints out the following text over the course of about 15 seconds:
“Working.........Done.”
That’s all.
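I don’t have the actual script, but a minimal sketch of something with the behavior I’m describing (printing “Working.........Done.” over roughly 15 seconds) might look like this; everything here is my own guess at it, in Python:

```python
# placebo.py: a guessed-at sketch, not the friend's real utility.
import sys
import time

def main():
    sys.stdout.write("Working")
    sys.stdout.flush()
    for _ in range(9):          # nine dots spread over roughly 15 seconds
        time.sleep(15 / 9)
        sys.stdout.write(".")
        sys.stdout.flush()
    sys.stdout.write("Done.\n")

if __name__ == "__main__":
    main()
```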
It’s something he uses when caught up in a complex project to remind himself that he can write code projects that work, and thereby to alter his confidence level in his ability to make this code project work.
This is, of course, ridiculous: his ability to write a three-line program that generates text on a screen has no meaningful relationship to his ability to make complicated code work as intended—the fact that “placebo” runs is just as consistent with the current project failing as it is with it succeeding, and is therefore evidence of neither—and in any case running it a second time doesn’t give him any new evidence that he didn’t already have. It’s purely a mechanism for irrationally changing his beliefs about his likely success. (That said, the choice of whether and when to use that mechanism can be a rational choice.)
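To spell out the “evidence of neither” point: a run of “placebo” prints “Done.” whether or not the current project is going to succeed, so the likelihood ratio is 1 and a Bayesian update leaves the prior exactly where it was. A quick sketch, with placeholder numbers standing in for any real confidence level:

```python
def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule for a binary hypothesis after one observation."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

prior = 0.6  # placeholder: current confidence that the project will succeed

# "placebo" prints "Done." with probability 1 whether the project succeeds or fails,
# so the observation is uninformative and the posterior equals the prior.
print(round(posterior(prior, p_obs_if_true=1.0, p_obs_if_false=1.0), 3))

# Contrast with a genuinely informative observation, e.g. a test suite assumed to be
# more likely to pass if the project is on track (also made-up numbers).
print(round(posterior(prior, p_obs_if_true=0.9, p_obs_if_false=0.3), 3))  # about 0.818
```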
Awesome. Thank you for this. I feel so much more competent now after having done this.
That’s great, though it would probably be helpful to have a more pertinent/universal example to go along with your original explanation:
...I’m setting up a series of situations that will demonstrate what I want to believe.
I think I’m still a bit lost on what category of beliefs you would use this on. It seems like they are generally subjective sorts of “flexible” beliefs; nothing concerning empirical evidence. Is that right?
More like, “I want to be happy in all circumstances, and happiness is within my control, thus I will make myself believe that event x should increase my happiness.” (And then you go about “setting up a series of situations” that increases your happiness about X.)
Am I remotely close?
I would say that “I will successfully complete this project” is an empirical belief, in the sense that there’s an expected observation if it’s true that differs from the expected observation if it’s false.
So if I set up a series of events (e.g., “placebo” executions) that alter my confidence level about that assertion, I am in fact modifying an empirical belief.
Would you disagree?
Anyway, the placebo example could be reframed as “I want to be confident about my success on this project, and my confidence is subject to my influence, thus I will act so as to increase my estimate that I’ll succeed.” And then I go about setting up a series of situations (e.g., “placebo” executions) that increase my estimate of success.
Which is similar to what you suggested, though not quite the same.
Yes… I now see that I could have been much clearer. That belief is testable… but only after one outcome or the other has occurred. What I meant was that you’re adjusting beliefs prior to the existence of the empirical evidence needed to verify the binary outcome.
So, for you to increase/decrease your confidence toward an anticipated outcome, there’s actually some other empirical evidence justifying the increase in anticipated success—for example, a history of completing projects of a similar skill level and time commitment/deadline.
So, in that case, it seems like we more or less agree—you’re adjusting malleable feelings to align with something you more or less know should be the case. I still don’t think (and don’t think you’re saying this either) that you’re malleably altering any beliefs about known empirical results themselves. Again, you already said about as much in discussing the color of the sky.
I guess to illustrate, substitute “this project” with “design a spacecraft capable of sustaining human life on a trip to Pluto and back within one month.” My guess is that your description of how you set up a “series of situations that increase your estimate of success” would break down, and in such a case you would not consider it advantageous to increase your confidence in an anticipated outcome of success. Or would you? Would you say that it’s beneficial to always anticipate success, even if one has good reason to suspect upcoming failure? Or perhaps you only increase confidence in an outcome of success where you have good reason to already think such will occur.
In other words, you don’t arbitrarily increase your confidence level simply because it can be influenced; you increase it if and when there are some other factors in place that lead you to think that said confidence should be increased.
Is that any clearer than mud? :)
Agreed that, when testable propositions are involved, I use this as a mechanism for artificially adjusting my expectations of the results of an as-yet-unperformed test (or an already performed test whose results I don’t know).
Adjusting my existing knowledge of an already performed test would be… trickier. I’m not sure how I would do that, short of extreme measures like self-hypnosis.
Agreed that arbitrarily increasing my confidence level simply because I can is not a good idea, and therefore that, as you say, I increase it if there are other factors in place that lead me to think it’s a good idea.
That said, those “other factors” aren’t necessarily themselves evidence of likely success, which seems to be an implication of what you’re saying.
To pick an extreme example for illustrative purposes: suppose I estimate that if I charge the armed guards wholeheartedly, I will trigger a mass charge of other prisoners doing the same thing, resulting in most of us getting free and some of us getting killed by the guards, and that this is the best available result. Suppose I also estimate that, if I charge the guards, I will likely be one of the ones who dies. Suppose I further estimate that I am not sufficiently disciplined to be able to charge the guards wholeheartedly while believing I will die; if I do that, the result will be a diffident charge that will not trigger a mass charge.
Given all of that, I may choose to increase my confidence level in my own survival despite believing that my current confidence level is accurate, because I conclude that a higher confidence level is useful.
Of course, a perfect rationalist would not need such a mechanism, and if one were available I would happily share my reasoning with them and wait for them to charge the fence instead, but they are in short supply.
Of course, these sorts of scenarios are rare. But it’s actually not uncommon to enter into situations where I’ve never done X before, and I don’t really know how difficult X is, so a 50% prior probability of success seems reasonable… but I also suspect that entering the situation with a 50% estimate of success will make failure more likely than entering with an 85% estimate of success… so I artificially pick a higher prior, because it’s useful to do so.
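To make that concrete, here is a toy model in which the confidence I walk in with feeds back into the actual chance of success; the function and both of its parameters are invented purely for illustration, not measured from anything:

```python
def p_success(confidence, base=0.5, weight=0.3):
    """Toy model: the real chance of success rises with the confidence brought in.
    'base' and 'weight' are made-up parameters, purely illustrative."""
    return base + weight * (confidence - 0.5)

for confidence in (0.5, 0.85):
    print(confidence, "->", round(p_success(confidence), 3))
# 0.5 -> 0.5, while 0.85 -> roughly 0.605 under these made-up numbers
```

Under any model with that general shape, adopting the higher estimate is the better bet even though it overstates the evidence I actually have.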
So, while I would not say it’s always beneficial to anticipate success, I would say that it’s sometimes beneficial even if one has good reason to suspect failure.
Whether a trip to Pluto could ever be such an example, and how I might go about artificially raising my estimate of success in that case, and what the knock-on effects of doing so might be… I don’t know. I can’t think of a plausible scenario where it would be a good idea.
Well, this has been a lovely discussion. Thanks for the back and forth; I think we’re in agreement, and your last example was particularly helpful. I think we’ve covered that:
we’re not talking about arbitrarily increasing confidence for no reason (just because we can)
we’re also [probably] not talking about trying to increase belief in something contrary to evidence already known (increase belief in ~X when the evidence supports X). (This is actually the category I originally thought you were referring to, hence my mention of “tricking” oneself. But I think this category is now ruled out.)
this technique is primarily useful when emotions/motivations/feelings are not lining up with the expected outcome given available evidence (success is likely based on prior experience, but success doesn’t feel likely and this is actually increasing likelihood of failure)
there are even some situations when an expectation of failure would decrease some kind of utilitarian benefit and thus one needs to act as if success is more probable, even though it’s not (with the caveat that improving rationality would help this not be necessary)
Does that about sum it up?
Thanks again.
Works for me!