I can see that the condition you’ve given, that a “curriculum be sampled uniformly at random” with no mutual information with the real world, is sufficient for a curriculum to satisfy Premise 1 of TurnTrout’s argument.
But it isn’t immediately obvious to me that it is also necessary (and therefore equivalent to Premise 1).
I’m not claiming to have shown something equivalent to Premise 1. I’m claiming to have shown something equivalent to the conclusion of the proof (that it’s possible to make an AI which very probably does not cause x-risk), inspired by the general idea of the proof but simplifying it and making it constructive, so that it is more rigorous and transparent.
I might be misunderstanding something crucial, or I may not be expressing myself clearly.
I understand TurnTrout’s original post to be an argument for a set of conditions which, if satisfied, prove the AI is (probably) safe. There are no restrictions on the capabilities of the system given in the argument.
You do constructively show “that it’s possible to make an AI which very probably does not cause x-risk” using a system that cannot do anything coherent when deployed.
But TurnTrout’s post is not merely arguing that it is “possible” to build a safe AI.
Your conclusion is trivially true and there are simpler examples of “safe” systems if you don’t require them to do anything useful or coherent. For example, a fried, unpowered GPU is guaranteed to be “safe” but that isn’t telling me anything useful.
How did you understand the argument instead?