Who knows what “meditation” is really doing under the hood.
Let's set up a clearer example.
Suppose you are an uploaded mind, running on a damaged robot body.
You write a script that deletes your mind, runs a bunch of no-ops, and then reboots into a fresh blank baby mind with no knowledge of the world.
You run the script, and then you die. That's it. The computer running no-ops "merges" with all the other computers running no-ops. If the baby mind learns enough to answer the question before checking whether its hardware is broken, then it assigns a small probability to its hardware being broken. And then it learns the bad news.
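To make the setup concrete, here is a minimal sketch of that script's logic. Every name in it (delete_mind, run_noops, boot_blank_mind) is a hypothetical stand-in, not a real API; the no-op loop just marks the stretch during which the machine is indistinguishable from every other machine running no-ops.

```python
# Purely illustrative sketch of the script described above.
# All names are hypothetical stand-ins; there is no real mind-upload API.

def delete_mind(storage: dict) -> None:
    """Irreversibly erase the uploaded mind."""
    storage.clear()

def run_noops(cycles: int) -> None:
    """Do nothing for a while. During this stretch the computer is
    indistinguishable from every other computer running no-ops."""
    for _ in range(cycles):
        pass

def boot_blank_mind() -> dict:
    """Start a fresh baby mind with no knowledge of the world."""
    return {"memories": [], "checked_hardware": False}

mind = {"memories": ["an entire uploaded life"]}
delete_mind(mind)         # "You run the script, and then you die."
run_noops(1_000_000)      # the "merging" period
baby = boot_blank_mind()  # knows nothing, including whether its body is broken
```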
Basically, I think that kind of forgetting, short of just deleting your mind, isn't something that really happens. I also feel like, when arbitrary mind modifications are on the table, "what will I experience in the future?" returns Undefined.
Toy example. Imagine creating loads of near-copies of yourself, with various changes to memories and personality. Which copy do you expect to wake up as? Equally likely to be any of them? Now make some of the changes larger and larger, until some of them delete your mind entirely and replace it with something else.
Because the way you have set it up, it sounds like it would be possible to move your thread of subjective experience into any arbitrary program.
In the case of the broken robot, we need two conditions for magic by forgetting:
1. There are 100 robots, only one of them is broken, and all of them are type-copies of each other.
2. Each robot naturally enters a blank state of mind at some moment, like sleep or a reboot.
In that case, after a robot enters the blank state of mind, it has an equal chance of being any of the robots, and this dilutes its chances of having the damaged body after awakening.
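To put a number on the dilution, here is a small Monte Carlo sketch of those two conditions. Note that the uniform draw over blank minds is exactly the contested indifference assumption, written into the code directly rather than derived from anything:

```python
import random

# Sketch of the dilution argument under the two conditions above:
# 100 type-copy robots, exactly one with a damaged body, and every robot
# passing through an identical blank state. A blank mind is modeled as a
# uniform draw over the 100 indistinguishable blank minds.

N_ROBOTS = 100
TRIALS = 100_000

woke_up_damaged = 0
for _ in range(TRIALS):
    damaged = random.randrange(N_ROBOTS)  # which body is broken
    me = random.randrange(N_ROBOTS)       # "which robot am I?" after blanking
    if me == damaged:
        woke_up_damaged += 1

print(f"P(awaken in the damaged body) ~= {woke_up_damaged / TRIALS:.3f}")
# ~= 0.01: pre-forgetting certainty of being the broken robot is
# "diluted" to 1/100 after passing through the blank state.
```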
For your toy example: to a first approximation, I expect to wake up as any copy that can recognize itself as avturchin (the self-recognition identity criterion).
The point is, if all the robots are in a truly blank state, then none of them is you, because your entire personality has just been forgotten.
I can forget one particular thing but preserve most of my self-identification information.
True. But for that you need there to exist another mind almost identical to yours except for that one thing.
In the question “how much of my memories can I delete while retaining my thread of subjective experience?” I don’t expect there to be an objective answer.