AI is approaching human level at closed-ended tasks such as math and programming, but not yet at open-ended ones, which appear to be harder. Perhaps evolution brute-forced open-ended tasks by creating lots of agents. In a chaotic world, we can never know in advance which actions will lead to a final goal, e.g. GDP growth. That's why lots of people try lots of different things.
Perhaps the only way AI can achieve ambitious final goals is by employing lots of slightly diverse agents. And perhaps that would almost inevitably produce many warning shots before a successful takeover?