Most human problems could be solved by humans or by slightly-above-human AI.
Every task currently solved by humans (mainly engineering problems) could be optimized to a staggering degree by a strong AI; think of microprocessors.
Then there is the long list of coordination problems that exist in human communities.
The fact that humans are capable of solving some problems now (e.g. food production) is hardly sufficient: those problems are currently solved at immense human cost.
But the main problem is that even though humans are capable of solving some problems, they are inherently selfish, so they will tend to solve problems only for themselves. For this reason, billions of people on this planet lack basic (and more complex) necessities of life.
Of course, whether an AI will actually help all these people will depend on the governing structure into which it is integrated. But even if it comes from a corporation in a capitalist system, it will still help by dramatically driving costs down.
In other words, I think of AI as a massively better tool for problem solving, a far more dramatic jump than the switch from horses to automobiles and planes was for transportation.
That seems reasonable, but maybe around-human-level AI will be enough to automate food production, and superintelligence is not needed for it? With GMO crops and robotic farms in the oceans, we could provide much more food for everybody.
It depends on the goal. We can probably defeat aging without needing much more sophisticated AI than AlphaFold (a recent Google DeepMind AI that partially cracked the protein-folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We may well not need very high-level AI to avoid the worst immediately unacceptable outcomes, such as death or X-risk.
On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so anywhere near as reliably as a stronger mind) and the ability to flourish up to our potential. You list high-speed space travel as “neither urgent nor necessary”, and that’s true: a world without near-lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.
As for truly urgent tasks, those would presumably revolve around avoiding death by various means: anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, and detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, as comprehensively dealing with hostiles is extremely complicated (and likely necessary for food distribution as well), and there may well be X-risks a human-level mind can’t detect.