Is there actually a discussion somewhere of what we can do? I basically only ever see discussion of AI alignment, which seems to me more of a long-term strategy and not the best place to spend resources right now (although I’m confident it will pay off if we weather the next several years). I conceptualize this somewhat like a war: it’s normal to be devastated and the stakes are incredibly high, but in the end you have to pull together and figure out the most strategic thing to do without giving in to panicked thinking. And we’re all in this together.
I’m not quite sure what you’re asking for.
FYI there’s an AI Governance tag and an AI Alignment Fieldbuilding tag, which cover some of the major strategies besides the technical alignment of AI (although I worry that people drawn to those two topics are often pursuing them in a “missing the real problem” sort of way that won’t actually help).
You say “AI alignment” is a long-term strategy, not… something to work on right now, which feels confusing to me. Most of the things worth doing require some investment in learning and skill-building along the way in order to contribute to long-term projects, IMO. (There are smaller projects, but figuring out which ones actually help is fairly high-context and not easy to jump onto.)
Thanks! These tags are very relevant and I wasn’t aware of them; that’s maybe part of the problem (although I’m relatively new to this forum). I might simply not be aware of the concrete efforts to mobilize, draw public attention, and raise public concern about the issue. Certainly many of the leading figures of EA/rationalism appear, from the outside, to be very stuck in their heads and (I don’t really know how to phrase this) not willing to take a stand. E.g. Eliezer’s famously defeatist attitude probably doesn’t send a good signal to outsiders (though I mostly agree with his general opinions).
When I say alignment seems mostly relevant long-term, I really mean the technical parts. It is definitely good to recruit and educate people in that direction right now. I just think the political dimensions (public perception and government involvement) are likely much more relevant in the short term, and could channel a lot of resources toward alignment. Since I haven’t seen much discussion of this here, I felt it was underappreciated.
What precisely do you mean by “missing the real problem”?