Suppose that some new technology is useful but creates bad effects, and we say that the bad effects are the “problem” caused by the intelligence used to create that technology. Then suppose that further intelligence-driven technological development doesn’t give us any way to directly “fix” the problem, but it does give us a better alternative, which has the same uses and no bad effects (or less bad ones), and people just drop the first technology in favor of the second. Does that count as “solving” with intelligence?
Suppose that there is no second technology, but that, upon further analysis, the bad effects of the first technology turn out worse than initially believed, and then people decide the first technology isn’t worth the downsides and drop it. Does that count as “solving” with intelligence?
Suppose that there is no second technology—yet, anyway. There might be in the future, who knows. Will it ever be practical to say that intelligence can’t solve a problem? (The main one that comes to mind is “heat death of the universe”, due to the second law of thermodynamics, but that problem wasn’t created by intelligence.)
My best answer at the moment is, “Problem: people using their intelligence to figure out how to benefit themselves at the expense of others in net-negative ways”. Intelligence yields approaches to addressing many forms of this problem, but plenty of them are far from what I’d call solved.
Definitely true that many problems are far from solved. My question was more whether there are areas where there isn't a path to a solution given more resources/attention/intelligence.