It occurs to me that a Pensieve would be a powerful tool for dealing with any of the Newcomblike problems that have come up so far or are likely to come up in the future—getting past the magic mirror, for example. Just extract any memories relating to debiasing that you might have, put them in a jar labeled "do not open until Christmas," and go talk to whatever the Omega of the moment is.
It might not work like that in practice. I wouldn't expect it to IRL, at least, since scrubbing a memory probably wouldn't get rid of the weightings that have grown up around the concepts it links to. But there's enough story potential that I can see a case for handwaving that away. Starting with the potential value-stability problems...
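(To make that "weightings" worry concrete, here's a toy sketch. Everything in it is my own illustrative assumption—the concept names, and the idea of modeling associations as pairwise link strengths that memories reinforce—not anything from the books:)

```python
# Toy associative network: memories strengthen links between the
# concepts they touch. Deleting the memory itself leaves those
# strengthened concept-to-concept weights behind.

from collections import defaultdict
from itertools import combinations

weights = defaultdict(float)  # (concept, concept) -> link strength
memories = {"mirror-lesson": ["bias", "mirror", "precommitment"]}

def learn(memory_id):
    # Each rehearsal reinforces every pairwise link among the
    # concepts the memory involves.
    for a, b in combinations(sorted(memories[memory_id]), 2):
        weights[(a, b)] += 1.0

def scrub(memory_id):
    # Pensieve-style extraction removes the episode itself...
    del memories[memory_id]
    # ...but nothing here touches `weights`: the associations remain.

learn("mirror-lesson")
learn("mirror-lesson")
scrub("mirror-lesson")
print(dict(weights))  # links persist: {('bias', 'mirror'): 2.0, ...}
```

On this model, handing Omega your debiasing memories wouldn't hand over the habits of thought they trained into you, which is exactly the problem with the jar trick.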
Unless being more rational caused you to decide to one-box. Actually, I can't see how someone's intuition about that problem would go from one-boxing to two-boxing as they learned rationality. Nor do I see how knowing less would make you want the Philosopher's Stone but not want to use it.
Am I missing something here?
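(For anyone who wants the arithmetic behind the one-boxing intuition, here's a quick expected-value sketch in Python. The $1,000 / $1,000,000 payoffs are the standard statement of Newcomb's problem; the predictor accuracy p, and the way I've framed it as two little EV functions, are just my own framing:)

```python
# Expected value of one-boxing vs. two-boxing, given a predictor
# that is correct with probability p. Standard payoffs: the opaque
# box holds $1,000,000 iff you were predicted to one-box; the
# transparent box always holds $1,000.

def ev_one_box(p):
    # Right prediction (prob p): opaque box is full.
    # Wrong prediction (prob 1 - p): opaque box is empty.
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p):
    # Right prediction (prob p): opaque box is empty, keep the $1,000.
    # Wrong prediction (prob 1 - p): both boxes pay out.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box ${ev_one_box(p):,.0f}, two-box ${ev_two_box(p):,.0f}")
```

Break-even is just above p = 0.5, so for any predictor noticeably better than chance, one-boxing wins on expected value—which is why learning more rationality should push you toward one-boxing, not away from it.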
To be honest, looking back at that from three months on, I'm not sure exactly what I was thinking. I do still think there's a lot of interest and story potential in a magical item that allows you to edit your own mind to some extent: if pulling out memories also removes your associations with them, you don't just have external storage; you have something very close to a general personality editor.
I’d actually be a little surprised if it doesn’t come up down the road—but the specific application that 20110929!Gest suggested up there looks far-fetched to me now. You’d have to have a very good idea of what’s linked to what in your head, and you’d probably have to make some pretty extensive changes to change your answer to a Newcomblike problem. Enough to make the whole operation uncertain and rather dangerous. I don’t think I’d try it, anyway.
Yeah, I agree with this.