Could we take from Eliezer’s message the need to redirect more effort into AI policy and into widening the Overton window, to try, in any way we can, to give AI safety research the time it needs? As Raemon said, the Overton window might already be widening, making more ideas “acceptable” for discussion, but that doesn’t seem like enough. I would say the typical response of the overwhelming majority of the population and world leaders to concerns about misaligned AGI is still to treat them as panicky sci-fi dystopia rather than to say “maybe we should stop everything we’re doing and not build AGI”.
I’m wondering whether the insufficient attention to AI policy might be a coordination failure on the part of the AI alignment community; i.e., from an individual perspective, the best option for someone who wants to reduce existential risk is probably to do technical AI safety work rather than AI policy work, because policy and advocacy work is most effective when done by a large number of people shifting public opinion and the Overton window together. Plus, it’s extremely hard to make yourself heard and to influence entire governments, given the election cycles, incentives, short-term thinking, and bureaucracy that govern politics.
Maybe, now that AI is starting to cause turmoil and enter popular debate, it’s time to seize this wave and improve the coordination of the AI alignment community. The main issue is not whether a solution to AI alignment is possible, but whether there will be enough time to find one. And the biggest factors affecting the timeline are probably (1) big corporations and governments, and (2) how many people work on AI safety.