Taking progress in AI to mean more real-world effectiveness:
Intelligence seems to have jumps in real-world effectiveness: e.g., the brains of great apes and humans are very similar, yet the difference in effectiveness is obvious.
So concluding that we are fine because the state of the art is not getting any more effective (i.e., not making progress) would be very dangerous. Perhaps tomorrow some team of AI researchers will combine the current state-of-the-art solutions in just the right way, resulting in a massive jump in real-world effectiveness, maybe even enough to produce an “oh, shit” moment.
Regardless of the time frame, if the AI community is working towards AGI rather than FAI, we will likely (eventually) have an AI go FOOM, or at the very least an “oh, shit” moment (I’m not sure whether the two are equivalent).