I really like this framework, and I’m going to try to make intentional use of it when evaluating decisions. One issue to consider, though:
Probable-Future-You is a mental construct that Current-You decides upon. I’m not saying what Aharon said, that Current-You won’t argue rationally. Rather, there may be no argument at all; Current-You is less likely to conjure up the correct Future-You to begin with!
The necessary fuel to turn that tiny note of discord into a clear mental state is, as ninjacolin said, memory. WI-Bob’s brain is busy pretending NWI-Bob doesn’t exist. But if there is a tiny feeling of discord (a memory leak), WI-Bob can pause for a moment, take the outside view, and construct NWI-Bob, reasoning abstractly that NWI-Bob might be the most representative Future-Bob. Then WI-Bob can run an emotional simulation of being NWI-Bob, looking back with regret.
And really, the committee should consist of several future selves. If a decision is worth considering, then there is more than one outcome to consider, so you need to simulate a Future-Bob after each possible choice. That’s the full argument these subagent constructs can make.
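To make the committee metaphor concrete, here’s a minimal sketch in Python. It’s purely illustrative: the names (`FutureSelf`, `simulate_committee`) and the regret numbers are hypothetical, standing in for the fuzzy emotional simulations described above.

```python
# Toy sketch of the "committee of future selves" procedure.
# All names and numbers are hypothetical scaffolding, not a real
# decision-theory library.

from dataclasses import dataclass

@dataclass
class FutureSelf:
    name: str          # e.g. "NWI-Bob"
    probability: float # how representative this future self seems
    regret: float      # simulated regret, 0 (none) to 1 (maximal)

def simulate_committee(options: dict[str, list[FutureSelf]]) -> str:
    """For each option, convene the committee of future selves it
    produces and compute the probability-weighted regret. Pick the
    option whose committee reports the least expected regret."""
    def expected_regret(committee: list[FutureSelf]) -> float:
        return sum(s.probability * s.regret for s in committee)
    return min(options, key=lambda opt: expected_regret(options[opt]))

# Example: Bob deciding about the cake.
options = {
    "eat the cake": [
        FutureSelf("WI-Bob (satisfied)", probability=0.3, regret=0.1),
        FutureSelf("NWI-Bob (regretful)", probability=0.7, regret=0.9),
    ],
    "skip the cake": [
        FutureSelf("Tuesday-Bob (proud)", probability=0.8, regret=0.05),
        FutureSelf("Tuesday-Bob (wistful)", probability=0.2, regret=0.3),
    ],
}
print(simulate_committee(options))  # -> "skip the cake"
```

Note that each option gets its own committee, and each member is weighted by how representative that future self seems; that weighting is exactly where WI-Bob’s motivated construction can cheat, by assigning NWI-Bob a low probability.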
Getting that initial discord, being able to leap quickly to the outside view’s conclusion, and being able to run the simulations fast enough probably takes a bit of preparation. If Monday-you might have to make a decision that Tuesday-you will regret, Sunday-you should make the decision. But since Sunday-you can’t decide entirely in advance (Monday is a weak day), Sunday-you should at least rehearse, making use of the privileged weekend read-write access to memory. Otherwise, Bob might start or finish this deliberation while taking his last bite of cake.