I suppose you could use MWI as a way of illustrating the decision theory approach:
Imagine that there is the you that eats the garlic bread and the you that doesn’t. From there, each will experience many more branches with each passing moment. If you take that forward a while, then you can analyze the amount of utility at each of the two different sets of end leaves, to figure out which branch you want to choose now so as to have the highest chance of ending up in a high-utility sub-branch later.
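To make the "utility at the end leaves" idea concrete, here's a purely illustrative sketch (all the numbers, names, and the tree structure are made up for the example, not taken from anything above): treat each immediate option as the root of a tree of future branches, propagate probability-weighted utility back from the leaves, and compare the roots.

```python
# Illustrative sketch only: picking an option by expected utility over a
# tree of future "branches". All numbers and labels are hypothetical.

def expected_utility(node):
    """Recursively compute probability-weighted utility of a branch tree.

    A node is either a leaf utility (a number) or a list of
    (probability, subtree) pairs for the branches that follow it.
    """
    if isinstance(node, (int, float)):
        return float(node)
    return sum(p * expected_utility(sub) for p, sub in node)

# Two immediate options, each followed by further branchings.
choices = {
    "eat the garlic bread": [
        (0.9, [(0.5, 10.0), (0.5, 6.0)]),  # mostly pleasant sub-branches
        (0.1, -20.0),                       # occasionally regret it later
    ],
    "skip it": [
        (1.0, [(0.7, 5.0), (0.3, 4.0)]),
    ],
}

for name, tree in choices.items():
    print(f"{name}: EU = {expected_utility(tree):.2f}")
print("pick:", max(choices, key=lambda c: expected_utility(choices[c])))
```

Of course, as the reply below points out, this is just ordinary expected-utility reasoning over your uncertainty about the future; nothing in it requires the branches to be quantum ones.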
Yeah, this is the kind of bullshit that I’m talking about :-) A cognitive algorithm cannot “choose” a quantum branch to “continue into”, it always continues into both. The perception of choice relies on logical uncertainty about the future output of your deterministic algorithm, not on quantum uncertainty.
Yeah, that’s an issue. I suppose I could get around that by emphasizing that it’s just a desired branch, and the metaphor would still work.
But then again, maybe I should just let it die. :-)