Magnificent! In particular, your pacing and composition were perfect—when I had a thought, you’d answer it in a sentence to a paragraph.
Unfortunately, improved coordination is going to have to be accomplished extremely quickly to help with superintelligence, which is going to happen very soon now on a historical timeframe. See my other comment here for thoughts on a reputation system that could be used to find better leadership to represent groups.
A similar algorithm balancing priorities across individuals could work. But those with power would have to want to cooperate with those without; some of them are nice people and do, and lots of them aren’t and don’t. There are solutions for failures of cooperation, and we should search for them. There are no solutions for people simply wanting different, conflicting things, and two of the problems you mention are in that category: arguably, for the most part, people simply don’t want to be healthy or to prevent climate change as much as they want to have fun now.
I guess you could say they’re bad at cooperating with their future selves, so a better coordination mechanism could mitigate that short-term bias, too.
To your point about improved understanding: you imply that AI could be part of that improvement, and I very much agree. Even current LLMs can probably do “translation” of cultural viewpoints (e.g., explain to a liberal how someone could support anti-abortion laws without simply wanting to control women’s bodies and punish sex). Getting people to use them, or otherwise exposing people to those arguments, would be the hard part. This circles back to the problem being that humans don’t want to cooperate.
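A minimal sketch of what that “translation” prompt might look like today, assuming the OpenAI Python SDK and the gpt-4o model name (both illustrative choices on my part, not anything from the discussion above):

```python
# Sketch: asking an LLM to steelman an opposing cultural viewpoint.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set in the
# environment; the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Explain to a liberal reader, charitably and without endorsing either "
    "side, how someone could support anti-abortion laws out of sincere "
    "moral conviction rather than a desire to control women's bodies or "
    "punish sex."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```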
They’d want to cooperate if you could show them the simple, provable fact that they’ll probably get terrible results if they keep competing. And that might very well be the case in a multipolar scenario with AGIs aligned to different humans’ intentions.