I would say value preservation and alignment of the human population. I think these are the hardest problems the human race faces, and the ones that would make the biggest difference if solved. You’re right, humanity is great at developing technology, but we’re very unaligned with respect to each other and are constantly losing value in one way or another.
If we could solve this problem without AGI, we wouldn’t need AGI. We could just develop whatever we want. But so far it seems like AGI is the only path for reliable alignment and avoiding Molochian issues.
I agree deeply with the first paragraph. I was going to list coordination as the only great thing I know of where AI might be able to help us do something we really couldn’t do otherwise. But I removed it because it occurred to me that I have no plausible story for how that would actually happen. How do you imagine that going down? All I’ve got is “some rogue benevolent actor does CEV or a pivotal act”, which I don’t think is very likely.