If I understand correctly, you’re basically saying:
1. We can't know how long the machine will take to finish its task. In fact, it might run forever, because the halting problem says we can't know in advance whether a program will halt.
2. If the machine ran forever, it might do something catastrophic in that unbounded time, and we could never prove that it won't.
3. Since we can't prove that the machine won't do something catastrophic, the alignment problem is impossible.
The halting problem doesn't say that we can't know whether any given program will halt, only that no single algorithm can decide the halting status of every program. It's easy to prove that a program running an LLM will halt: just program it to "run the LLM until it decides to stop; if it hasn't stopped itself after 1 million tokens, cut it off." This is what ChatGPT and every other AI product does in practice.
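As a concrete illustration, here is a minimal sketch of that cutoff wrapper. The model call (`generate_next_token`) and the end-of-sequence marker are hypothetical stand-ins, not any particular library's API; the point is just that the bounded loop makes halting trivial to prove.

```python
# A minimal sketch of the cutoff wrapper described above. The model call
# (generate_next_token) and the EOS marker are hypothetical stand-ins for
# a real LLM API; the point is that the bounded loop trivially halts.

MAX_TOKENS = 1_000_000  # hard cap: the loop below runs at most this many times
EOS = "<eos>"           # token the model emits when it decides to stop

def generate_next_token(context: list[str]) -> str:
    """Placeholder for a real model call; here it always stops immediately."""
    return EOS

def run_llm(prompt: list[str]) -> list[str]:
    output: list[str] = []
    for _ in range(MAX_TOKENS):          # bounded loop, so halting is provable
        token = generate_next_token(prompt + output)
        if token == EOS:                 # the model stopped on its own
            break
        output.append(token)
    return output                        # returns after at most MAX_TOKENS steps
```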
Also, the alignment problem isn't necessarily about proving that an AI will never do something catastrophic. It's enough to have good informal arguments that, with (say) 99.99% probability, it won't do anything catastrophic over the length of its deployment.
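To put a number like that in perspective (this arithmetic is my own illustration, not something the parent comment commits to): if each episode fails independently with probability $p$ and the deployment spans $N$ episodes, then

$$\Pr[\text{no catastrophe}] = (1-p)^N \approx 1 - Np \quad \text{for small } Np,$$

so a 99.99% deployment-level guarantee over, say, $N = 10^6$ episodes requires roughly $p \le 10^{-10}$ per episode.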