You’re asking too general a question. I’ll attempt to guess at your real question and answer it, but that’s notoriously hard. If you want actual help you may have to ask a more concrete question so we can skip the mistaken assumptions on both sides of the conversation. If it’s real and devastating and you’re desperate and the general question goes nowhere, I suggest contacting someone personally or trying to find an impersonal but real example instead of the hypothetical, misleading placebo example (the placebo response doesn’t track calculated probabilities, and it usually only affects subjective perception).
Is the problem you’re having that you want to match your emotional anticipation of success to your calculated probability of success, but you’ve noticed that on some problems your calculated probability of success goes down as your emotional anticipation of success goes down?
If so, my guess is that you’re inaccurately treating several outcomes as necessarily having the same emotional anticipation of success.
Here’s an example: I have often seen people (who otherwise play very well) despair of winning a board game when their position becomes bad, and subsequently make moves that turn their 90% losing position into a 99% losing position. Instead of that, I reframe my game as finding the best move in the poor circumstances I find myself in. Though I have a low calculated probability of overall success (10%), I can have a quite high emotional anticipation of task success (>80%) and can even be right about that anticipation, retaining my 10% chance rather than throwing 9% of it away through self-induced despair.
Sounds like we’re finally getting somewhere. Maybe.
I have no way to store calculated probabilities other than as emotional anticipations. Not even the logistical nightmare of writing them down, since they are not introspectively available as numbers and I also have trouble with expressing myself linearly.
I can see how reframing could work for the particular example of game-like tasks; however, I can’t find a similar workaround for the problems I’m facing, and even if I could, I don’t have the skill to reframe and self-modify with sufficient reliability.
One thing that seems relevant here is that I mainly practice rationality indirectly, by changing my general heuristics, and usually don’t have direct access to the data I’m operating on nor the ability to practice rationality in real time.
… that last paragraph somehow became more of an analogy because I can’t explain it well. Whatever, just don’t take it too literally.
I asked a girl out today shortly after having a conversation with her. She said no and I was crushed. Within five seconds I had reframed as “Woo, I made a move! In daytime in a non-pub environment! Progress on flirting!”
My apologies if the response is flip but I suggest going from “I did the right thing, woo!” to “I made the optimal action given my knowledge, that’s kinda awesome, innit?”
That’s still the same class of problem: “screwed over by circumstances beyond reasonable control”. Stretching it to full generality, “I made the optimal decision given my knowledge, intelligence, rationality, willpower, state of mind, and character flaws”, only makes the framing WORSE, because you remember how many things you suck at.