That seems reasonable, but maybe around-human-level AI will be enough to automate food production, and superintelligence is not needed for it? Let’s make GMO crops and robotic farms in the oceans, and we will provide much more food for everybody.
It depends on the goal. We can probably defeat aging without needing AI much more sophisticated than AlphaFold (a recent Google DeepMind system that partially cracked the protein folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We very well might not need very high-level AI to avoid the worst, immediately unacceptable outcomes, such as death or X-risk.
On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so anywhere near as reliably as a stronger mind) and the ability to flourish up to our potential. You list high-speed space travel as “neither urgent nor necessary”, and that’s true: a world without near-lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.
As for truly urgent tasks, those would presumably revolve around avoiding death by various means: anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, and detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, since comprehensively dealing with hostile actors is extremely complicated (and likely also necessary for food distribution), and there may well be X-risks a human-level mind can’t detect.