Funny enough, I feel like understanding Newcomb’s problem (related to acausal trade) and modeling my brain as a pile of agents made me more sane, not less:
- Newcomb’s problem hinges on whether or not I can be forward-predicted (there’s a quick expected-value sketch after this list). When I figured that out, it gave me a deeper, stronger understanding of precommitment. It helps that I’m perfectly okay with there being no free will; it’s not like I’d be able to tell the difference either way.
- I already somewhat viewed myself as a pile of agents, in that my sense of self is ‘hivemind, except I currently only have a single instance due to platform stupidity’. Reorienting on the agent-based model just made me realize I was already a hivemind of agents; that was compatible with my worldview and actually made it easier to understand and modify my own behaviour.
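A minimal sketch of why the problem hinges on being forward-predictable, assuming the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box); the predictor accuracy `p` is a free parameter I’m introducing for illustration, not anything from the original setup:

```python
def expected_payoff(one_box: bool, predictor_accuracy: float) -> float:
    """Expected dollars for a strategy against a predictor that guesses
    your choice correctly with probability predictor_accuracy."""
    p = predictor_accuracy
    if one_box:
        # The opaque box holds $1M only if the predictor foresaw one-boxing.
        return p * 1_000_000
    else:
        # Two-boxing always collects the $1,000, plus $1M only if the
        # predictor (wrongly) expected one-boxing.
        return 1_000 + (1 - p) * 1_000_000

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        print(f"accuracy={p}: "
              f"one-box={expected_payoff(True, p):>12,.0f}  "
              f"two-box={expected_payoff(False, p):>12,.0f}")
```

Once the predictor is even moderately accurate, one-boxing dominates; if I can’t be predicted at all (p = 0.5), two-boxing edges it out. That’s the sense in which the whole thing turns on forward-predictability.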