Evo bio would say that overeating was more useful in the ancestral environment than it is now, so the brain’s signals about desiring food are understandable but mistaken (“retarded” would be an appropriate word). Not sure what QM would say, but I’ve seen it used to support some weird conclusions.
I suppose you could use MWI as a way of illustrating the decision theory approach:
Imagine that there is the you that eats the garlic bread and the you that doesn’t. From there, each will experience many more branches with each passing moment. If you take that forward a while, you can compare the utility across the two sets of end leaves, to figure out which branch you want to choose now so as to have the highest chance of ending up in a high-utility sub-branch later.
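(To make the metaphor concrete: what is being described is just an ordinary expected-utility calculation over a branching tree. Here is a minimal Python sketch of that calculation; the Branch class, the probabilities, and the utility numbers are made up purely for illustration and are not from the thread.)

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    utility: float = 0.0                    # utility gained at this node
    probability: float = 1.0                # probability of this node given its parent
    children: list["Branch"] = field(default_factory=list)

def expected_utility(node: Branch) -> float:
    """Expected total utility over all futures reachable from this node."""
    return node.utility + sum(
        child.probability * expected_utility(child) for child in node.children
    )

# Made-up numbers: eating the garlic bread pays off now, but its downstream
# branches are worse on average than the branches where you skip it.
eat = Branch(utility=2.0, children=[
    Branch(utility=-3.0, probability=0.7),  # regret / weight-gain futures
    Branch(utility=1.0, probability=0.3),
])
skip = Branch(utility=-0.5, children=[
    Branch(utility=3.0, probability=0.8),   # feel-better futures
    Branch(utility=0.0, probability=0.2),
])

options = {"eat": eat, "skip": skip}
print({name: expected_utility(b) for name, b in options.items()})
print("pick:", max(options, key=lambda name: expected_utility(options[name])))
```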
Yeah, this is the kind of bullshit that I’m talking about :-) A cognitive algorithm cannot “choose” a quantum branch to “continue into”, it always continues into both. The perception of choice relies on logical uncertainty about the future output of your deterministic algorithm, not on quantum uncertainty.
In this case QM helps because it supports normal conclusions. QM tells you that eating fewer desserts makes you lose weight; it just takes longer to do the calculation. Just like TDT typically takes longer to understand than CDT, but either one works just fine for the purpose of deciding not to eat desserts, because you care about future consequences.
I suppose the only advantage to either is that thinking in those modes can make folks feel mystical/abstract/deep, or otherwise in “Far Mode”, and so trick themselves into the mode of actually making decisions rather than executing habits.
Yeah, that’s an issue with the branch-choosing framing. I suppose I could get around it by emphasizing that it’s just a desired branch, and the metaphor would still work.
But then again, maybe I should just let it die. :-)