If we knew what a benevolent super-genius would do, it’s likely that a powerful human (or group of humans) could do it without waiting for the AI. Fundamentally, the output of superhuman AGI is going to be discovery—things we didn’t know, or didn’t know to value or enact.