I agree completely with the sentiment in this post. While I think AGI could be dangerous, perceptions of existing progress towards it are blown completely out of proportion. The problem is that AGI would require the ability to reason about the state of the (simulated) world, and we have no clue how to do that in a computer program.
I think one ought to be careful with the wording here: out of proportion to what? We could be 90% of the way there on the time axis, with only one last key insight left to be discovered, yet still have systems that are virtually useless compared to humans on the capability axis. That would be a precarious situation. Is our algorithms' inability to reason the problem, or our only saving grace?