Present cryptography becomes redundant when the past can be approximated. Simulating the universe at an earlier point and running it forward to extract information before it’s encrypted is a basic but difficult way to do this. For some information, even a fuzzy approximation could cause damage if made public. How can you protect information when your adversary can simulate the past?
The information must never exist as plaintext in the past. A bad way to do this is to make the information future-contingent. Perhaps it could be acausally inserted into the past by future agents, but you probably would not be able to act on future-contingent information in useful ways. A better method is to run many homomorphically encrypted instances of a random function that might output programs whose computations yield sensitive information (e.g., an uploaded human). You would then publish a plaintext description of the random function, including a proof that it probably output a program doing computations that likely adversaries would want. This incentivizes the adversary not to destroy the program output by the random function, because it may not be worth the cost of destruction and replacement with something that is certainly doing better computations.
This method satisfies the following desiderata:
1. The adversary does not know the output of the encrypted random function, or the outputs of the program produced by that function.
2. There is an incentive not to destroy the program produced by the random function.
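A minimal sketch of this setup, assuming a toy stand-in for the homomorphic evaluation (the `P_VALUABLE` constant, the placeholder programs, and `run_encrypted_instance` below are hypothetical illustrations, not part of any real FHE scheme):

```python
import random

# Hypothetical, published probability that an instance holds a valuable program.
P_VALUABLE = 0.5

def valuable_program(state: int) -> int:
    # Placeholder for a computation an adversary would want preserved
    # (standing in for something like an uploaded human).
    return state * 3 + 1

def decoy_program(state: int) -> int:
    # Placeholder for a worthless computation.
    return state + 1

def random_function(seed: int):
    # The plaintext description the adversary gets is this source code plus
    # P_VALUABLE; the seed and the chosen branch stay encrypted.
    rng = random.Random(seed)
    return valuable_program if rng.random() < P_VALUABLE else decoy_program

def run_encrypted_instance(seed: int, steps: int) -> int:
    # Stand-in for evaluating random_function and its output under
    # homomorphic encryption: in the real scheme, neither the branch taken
    # nor this return value would ever be visible in plaintext.
    program = random_function(seed)
    state = seed
    for _ in range(steps):
        state = program(state)
    return state

print(run_encrypted_instance(seed=42, steps=10))
```

The only point of the sketch is the incentive structure: the adversary can read `random_function` and `P_VALUABLE`, but cannot tell which encrypted instances took the valuable branch, so destroying one risks destroying a computation it values.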
One problem with this is that your adversary might be superintelligent, and might prove incorrect the assumptions that made your encryption appear strong. To avoid this, you could base your cryptography on something other than computational hardness.
My first thought was to require computations that would make an adversary incur massive negative utility to verify the output of a random function. It’s hard to predict an adversary’s preferences in advance, so the punishment for verifying the output of the random function would need to be generically bad, such as forcing the adversary to expend massive amounts of computation on useless problems. This idea is bad for obvious reasons, and will probably end up making the same or equally bad assumptions about the inseparability of the punishment and the verification.
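To make the inseparability point concrete, here is one toy way (my own illustration, not a construction from the post) of tying verification to a large computational cost: bind the output behind a long sequential hash chain, so that checking it and paying the cost are the same operation. The `STEPS` constant and the `seal`/`verify` helpers are hypothetical.

```python
import hashlib

STEPS = 10_000_000  # hypothetical cost parameter; the "punishment" scales with it

def seal(secret: bytes, steps: int = STEPS) -> bytes:
    # Iterated SHA-256 has no known shortcut, so producing the seal costs
    # `steps` strictly sequential hashes.
    h = secret
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

def verify(secret: bytes, sealed: bytes, steps: int = STEPS) -> bool:
    # Verification is literally the same computation as sealing, so the
    # verifier cannot separate "checking the output" from "paying the cost".
    return seal(secret, steps) == sealed

# The weakness noted above: this still rests on assuming no shortcut exists,
# i.e. on a computational-hardness assumption after all.
```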
I don’t think I understand your hypothetical. Is your hypothetical about a future AI which has:
1. Very accurate measurements of the state of the universe in the future
2. A large amount of compute, but not exponentially large
3. Very good algorithms for retrodicting* the past
I think it’s exponentially hard to retrodict the past. It’s hard in a similar way to how breaking encryption is hard. If an AI isn’t powerful enough to break encryption, it also isn’t powerful enough to retrodict the past accurately enough to break secrets.
If you really want to keep something secret from a future AI, I’d look at ways of ensuring the information needed to theoretically reconstruct your secret is carried away from the Earth at the speed of light in infrared radiation. Write the secret in a sealed room, atomize the room to plasma, then cool the plasma by exposing it to the night sky.
*predicting is using your knowledge of the present to predict the state of the future. Retrodicting is using your knowledge of the present to retrodict the state of the past.

Oh, wait, is this “How does a simulation keep secrets from the (computationally bounded) matrix overlords?”
This should be an equivalent problem, yes.
No, that’s a very different problem. The matrix overlords are Laplace’s demon, with god-like omniscience about the present and past. The matrix overlords know the position and momentum of every molecule in my cup of tea. They can look up the microstate of any time in the past, for free.
The future AI is not Laplace’s demon. The AI is informationally bounded. It knows the temperature of my tea, but not the position and momentum of every molecule. Any uncertainties it has about the state of my tea will increase exponentially when trying to predict into the future or retrodict into the past. Figuring out which water molecules in my tea came from the kettle and which came from the milk is very hard, harder than figuring out which key encrypted a ciphertext.
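A minimal numerical illustration of that exponential blow-up, using the logistic map as a stand-in chaotic system (my own toy example, not from the discussion): two states differing by one part in 10^12 become macroscopically different within a few dozen steps, and the same kind of sensitivity is what makes retrodiction hard.

```python
def logistic(x: float, r: float = 4.0) -> float:
    # One step of the logistic map, a standard chaotic system.
    return r * x * (1.0 - x)

# "True" microstate vs. an estimate that is wrong by one part in 10^12.
true_state, estimate = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    true_state, estimate = logistic(true_state), logistic(estimate)
    if abs(true_state - estimate) > 0.1:
        print(f"Macroscopic divergence after {step} steps")
        break
```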
Yes, your description of my hypothetical is correct. I think it’s plausible that approximating things that happened in the past is computationally easier than breaking some encryption, especially if the information about the past is valuable even if it’s noisy. I strongly doubt my hypothetical will materialize, but I think it is an interesting problem regardless.
My concern with approaches like the one you suggest is that they’re restricted to small parts of the universe, so with enough data it might be possible to fill in the gaps.