I would rate “value lost to bad deliberation” (“deliberation” broadly construed, and including easy+hard problems and individual+collective failures) as comparably important to “AI alignment.” But I’d guess the total amount of investment in the problem is 1-2 orders of magnitude lower, so there is a strong prima facie case for longtermists prioritizing it.
Overall I think I’m quite a bit more optimistic than you are, and would prioritize these problems less than you would, but still agree directionally that these problems are surprisingly neglected (and I could imagine them playing more to the comparative advantages/interests of longtermists and the LW crowd than topics like AI alignment).