I would expect that if their simulation is accurate enough for you to precommit to the same test, you’re not going to fool it with a journal entry.
Also, if they simulate many worlds, you’ll almost certainly end up in an Everett branch that precommitted to a different test, and if they simulate Copenhagen, you’ll almost certainly end up with the wrong random numbers.
I’m assuming that much more limited computational resources than all that are being used in at least some of the simulations—something much closer to the minimum required to fool a simulated inhabitant into thinking that their universe runs on full physics, with plenty of shortcuts taken (such as deliberate falsification of experiments performed by simulated physicists).
I’m also assuming that at least one goal of those doing the simulating is to find a simulation that emulates their own past as closely as possible, given whatever information on their past they still have.
With these assumptions, the ‘sympathetic magic’ trick would seem to increase the probability of the desired outcome occurring in at least a few such simulations, and thus mildly increase the probability that the person involved experiences the desired effect. It isn’t going to happen in all simulations; but even raising the probability from 50% to 51% could have some use.
> I’m assuming much more limited computational resources than all that are being used in at least some of the simulations
In that case, the simulation wouldn’t be accurate enough for you to have the same test. In April, you write down that you saw a pigeon when you didn’t. Emulated you writes down in February that he saw a family of ducks when he didn’t. He decides his test failed. Several months later, he sees a pigeon and thinks nothing of it.
> With these assumptions, then the ‘sympathetic magic’ thing would seem to increase the probability of the desired outcome occurring in at least a few such simulations, thus at least mildly increasing the probability that the person involved will experience the desired effect.
It would increase the probability of the outcome occurring only in simulations in which the protagonist does not come up with the trick. (Or doesn’t apply it to this occasion, as DanielLC mentions.) In simulations where the protagonist does come up with the trick (and applies it to this case), the trick itself is a sufficient explanation for the various notes the protagonist leaves; the events themselves don’t need to happen. So leaving false notes will only have the desired effect in simulations in which the protagonist is sufficiently different that he does not leave those notes.
This could still be worthwhile to the protagonist.
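The probability argument above can be sketched as a toy Monte Carlo. All parameters here are invented for illustration—how often the simulated protagonist replicates the trick, the base rate of the event, and how strongly the simulators nudge events to match their records are not drawn from the discussion:

```python
import random

random.seed(0)

def run_simulations(n_sims=100_000, p_same_trick=0.9,
                    p_event_base=0.5, p_adjust=0.02):
    """Fraction of simulations in which the desired event occurs.

    - With probability p_same_trick the simulated protagonist also
      leaves the false note, which fully explains the record, so the
      event happens only at its base rate.
    - Otherwise the simulators, trying to match their records, nudge
      the simulation so the event occurs slightly more often.
    """
    hits = 0
    for _ in range(n_sims):
        if random.random() < p_same_trick:
            # Note replicated inside the simulation: no nudge needed.
            event = random.random() < p_event_base
        else:
            # Note absent in-sim: simulators nudge toward the record.
            event = random.random() < p_event_base + p_adjust
        hits += event
    return hits / n_sims

with_trick = run_simulations()
print(f"event rate with planted note: ~{with_trick:.3f}")
```

Under these made-up numbers, the planted note shifts the event rate from 0.500 to roughly 0.502—a real but very small edge, consistent with the “50% to 51%” framing above.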
For fun fiction, such a simulated protagonist could eventually understand the trick and what is happening. (The simulated protagonist’s actions would still have no supernatural correlation to other events in the simulation.)