Ultimately, when you can’t compute the right answer in the given time, you will either have no answer or compute a wrong one.
But if the question is possibly important and you have to make a decision now, you have to make a best guess. How do you think we should do that?
How do you know that you have to make a decision now? You don’t know when AGI is going to be invented. You don’t know if it will be a quick transition from expert systems toward general reasoning capabilities, or if AGI will be constructed piecewise over a longer period of time. You don’t know if all that you currently believe you know will be rendered moot in the future. You don’t know if the resources that you currently spend on researching friendly AI are a wasted opportunity, because all that you could possibly come up with may be much easier to come by in the future.
All that you really know at this time is that smarter-than-human intelligence is likely possible, and that something that is smarter is hard to control.
How do you know we don’t? Figuring out whether there is urgency is itself one of those questions whose answer we need to estimate… somehow.