Maybe you have a 30% chance of solving the clean theoretical problem. And a 30% chance that you could wing AI alignment with no technical solution. If they were independent, you would have roughly a 50% probability of being able to do one or the other.
But things are worse than this, because both of them are more likely to work if alignment turns out to be easy. So maybe it’s more like a 40% probability of being able to do one or the other.
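To spell out the arithmetic behind these illustrative numbers (the 30% figures are just stand-ins, not careful estimates):

$$P(\text{either works}) = 1 - (1 - 0.3)(1 - 0.3) = 1 - 0.49 = 0.51 \approx 50\%.$$

That calculation assumes independence; since both approaches are more likely to succeed in worlds where alignment is easy, the failures are correlated and the real figure is lower, closer to the 40% mentioned above.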
But in reality, you don’t need to either solve the full theoretical problem or wing the problem without understanding anything more than we do today. You can have a much better theoretical understanding than we currently do, even if it isn’t good enough to solve the problem on its own. And you can be pretty well prepared to wing it; even if that preparation wouldn’t be enough by itself, it might be good enough when combined with a reasonable theoretical picture.
(Similarly for coordination.)