I would say that “I will successfully complete this project” is an empirical belief, in the sense that there’s an expected observation if it’s true that differs from the expected observation if it’s false.
So if I set up a series of events (e.g., “placebo” executions) that alter my confidence level about that assertion, I am in fact modifying an empirical belief.
Would you disagree?
Anyway, the placebo example could be reframed as “I want to be confident about my success on this project, and my confidence is subject to my influence, thus I will act so as to increase my estimate that I’ll succeed.” And then I go about setting up a series of situations (e.g., “placebo” executions) that increase my estimate of success.
Which is similar to what you suggested, though not quite the same.
Yes… I now see that I could have been much clearer. That belief is testable… but only after one outcome or the other has occurred. I meant, rather, that you’re adjusting beliefs prior to the existence of the empirical evidence needed to verify the binary outcome.
So, for you to increase or decrease your confidence toward an anticipated outcome, there’s actually some other empirical evidence justifying the increased anticipation of success, for example a history of completing projects of a similar skill level and time commitment/deadline.
So, in that case, it seems like we more or less agree: you’re adjusting malleable feelings to align with something you essentially know should be the case. I still don’t think (and don’t think you’re saying this either) that you’re malleably altering any beliefs about known empirical results themselves. Again, you already said about as much in discussing the color of the sky.
I guess to illustrate, substitute “this project” with “design a spacecraft suitable for sustaining human life to Pluto and back within one month.” My guess is that your description of how you set up a “series of situations that increase your estimate of success” would break down, and in such a case you would not consider it advantageous to increase your confidence in an anticipated outcome of success. Or would you? Would you say that it’s beneficial to always anticipate success, even if one has good reason to suspect upcoming failure? Or perhaps you only increase confidence in an outcome of success where you have good reason to already think such will occur.
In other words, you don’t arbitrarily increase your confidence level simply because it can be influenced; you increase it if and when there are some other factors in place that lead you to think that said confidence should be increased.
Is that any clearer than mud? :)
Agreed that, when testable propositions are involved, I use this as a mechanism for artificially adjusting my expectations of the results of an as-yet-unperformed test (or an already performed test whose results I don’t know).
Adjusting my existing knowledge of an already performed test would be… trickier. I’m not sure how I would do that, short of extreme measures like self-hypnosis.
Agreed that arbitrarily increasing my confidence level simply because I can is not a good idea, and therefore that, as you say, I increase it if there are other factors in place that lead me to think it’s a good idea.
That said, those “other factors” aren’t necessarily themselves evidence of likely success, which seems to be an implication of what you’re saying.
To pick an extreme example for illustrative purposes: suppose I estimate that if I charge the armed guards wholeheartedly, I will trigger a mass charge of other prisoners doing the same thing, resulting in most of us getting free and some of us getting killed by the guards, and that this is the best available result. Suppose I also estimate that, if I charge the guards, I will likely be one of the ones who dies. Suppose I further estimate that I am not sufficiently disciplined to be able to charge the guards wholeheartedly while believing I will die; if I try that, the result will be a diffident charge that will not trigger a mass charge.
Given all of that, I may choose to increase my confidence level in my own survival despite believing that my current confidence level is accurate, because I conclude that a higher confidence level is useful.
Of course, a perfect rationalist would not need such a mechanism, and if one were available I would happily share my reasoning with them and wait for them to charge the fence instead, but they are in short supply.
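If it helps to make that concrete, here’s a toy expected-value sketch of the scenario. All the numbers are invented, and the only structural assumption is the one stated above: a wholehearted charge happens only if I believe I’ll survive, and only a wholehearted charge triggers the mass charge.

```python
# Toy expected-value sketch of the prisoner scenario.
# Every number below is invented purely for illustration.

N_FREED_BY_MASS_CHARGE = 50   # group payoff if the mass charge succeeds
P_I_DIE_WHOLEHEARTED = 0.7    # my accurate estimate of dying in a real charge
P_I_DIE_DIFFIDENT = 0.3       # a half-hearted charge is less lethal to me

def expected_value(wholehearted: bool) -> float:
    if wholehearted:
        # Mass charge triggered: group payoff, plus 1 if I personally survive.
        return N_FREED_BY_MASS_CHARGE + (1 - P_I_DIE_WHOLEHEARTED) * 1
    # Diffident charge: nobody freed, but I probably survive.
    return 0 + (1 - P_I_DIE_DIFFIDENT) * 1

print(expected_value(True))   # about 50.3 -- better overall, despite my worse odds
print(expected_value(False))  # about 0.7
```

The point is just that the option requiring the inflated confidence dominates, even granting my accurate estimate of my own chances.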
Of course, these sorts of scenarios are rare. But it’s actually not uncommon to enter into situations where I’ve never done X before, and I don’t really know how difficult X is, so a 50% prior probability of success seems reasonable… but I also suspect that entering the situation with a 50% estimate of success will make failure more likely than entering with an 85% estimate of success… so I artificially pick a higher prior, because it’s useful to do so.
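As a rough sketch of that last point (a toy model of my own devising, nothing rigorous), suppose the actual probability of success depends partly on the confidence I go in with; the 0.7/0.3 weighting and the 0.5 baseline below are assumptions for illustration only:

```python
# Toy model: actual success probability depends partly on adopted confidence.
# The weighting and baseline are illustrative assumptions, not measurements.

def actual_success_probability(adopted_confidence: float) -> float:
    baseline = 0.5  # the "honest" prior for a task I've never attempted
    return 0.7 * baseline + 0.3 * adopted_confidence

print(actual_success_probability(0.50))  # about 0.50 -- entering with the honest prior
print(actual_success_probability(0.85))  # about 0.61 -- entering with the inflated one
```

Any model of roughly that shape makes picking the higher number genuinely worth something, which is all I mean by “useful.”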
So, while I would not say it’s always beneficial to anticipate success, I would say that it’s sometimes beneficial even if one has good reason to suspect failure.
Whether a trip to Pluto could ever be such an example, and how I might go about artificially raising my estimate of success in that case, and what the knock-on effects of doing so might be… I don’t know. I can’t think of a plausible scenario where it would be a good idea.
Well, this has been a lovely discussion. Thanks for the back and forth; I think we’re in agreement, and your last example was particularly helpful. I think we’ve covered that:
- we’re not talking about arbitrarily increasing confidence for no reason (just because we can)
- we’re also [probably] not talking about trying to increase belief in something contrary to evidence already known (increasing belief in ~X when the evidence supports X). (This is actually the category I originally thought you were referring to, hence my mention of “tricking” oneself. But I think this category is now ruled out.)
- this technique is primarily useful when emotions/motivations/feelings are not lining up with the expected outcome given the available evidence (success is likely based on prior experience, but success doesn’t feel likely, and this is actually increasing the likelihood of failure)
- there are even some situations where an expectation of failure would decrease some kind of utilitarian benefit, and thus one needs to act as if success is more probable even though it’s not (with the caveat that improving rationality would make this unnecessary)
Does that about sum it up?
Thanks again.
Works for me!