So the sections “Counteracting deception with only interp is not the only approach”, “Preventive measures against deception”, “Cognitive Emulations”, and “Technical Agendas with better ToI” don’t feel productive? It seems to me that these already form a good list of neglected research agendas, so I don’t understand.
You’ve listed them, but you haven’t really argued that they’re valuable; you’re mostly just asserting things like Rob Miles having a bigger impact than most interpretability researchers, or the best strategy being to copy Dan Hendrycks. But since I disagree with those assertions, these sections aren’t very useful; they don’t actually zoom in on the positive case for these research directions.
(The main positive case I’m seeing seems to be “anything which helps with coordination is really valuable”. And sure, coordination is great. But most coordination-related research is shallow: it helps us do things now, but doesn’t help us figure out how to do things better in the long term. So I think you’re overstating the case for it in general.)
I agree that I haven’t argued the positive case for more governance/coordination work (that’s why I hope to write a follow-up post on it).
We do need alignment work, but I think the current allocation is too heavily weighted toward alignment, even though AI X-Risks could materialize in the near future. I’ll be happy to reinvest in alignment work once we’re confident we can avoid X-Risks from misuse and grossly negligent accidents.