When I was an SRE at Google, we had a motto that I really like, which is: “hope is not a strategy.” It would be nice if all the lab heads would be perfectly honest here, but just hoping for that to happen is not an actual strategy.
Furthermore, I would say that I see the main goal of outside-game advocacy work as setting up external incentives in such a way that pushes labs to good things rather than bad things. Either through explicit regulation or implicit pressure, I think controlling the incentives is absolutely critical and the main lever that you have externally for controlling the actions of large companies.
I don’t think aysja was endorsing “hope” as a strategy– at least, that’s not how I read it. I read it as “we should hold leaders accountable and make it clear that we think it’s important for people to state their true beliefs about important matters.”
To be clear, I think it’s reasonable for people to discuss the pros and cons of various advocacy tactics, and I think asking “to what extent do I expect X advocacy tactic will affect peoples’ incentives to openly state their beliefs?” makes sense.
Separately, though, I think the “accountability frame” is important. Accountability can involve putting pressure on leaders to express their true beliefs, pushing back when we suspect people are trying to find excuses to hide their beliefs, and making it clear that we think openness and honesty are important virtues even when they might provoke criticism– perhaps especially when they might provoke criticism. I think this is especially important in the case of lab leaders and others who have clear financial or power interests in the current AGI development ecosystem.
It’s not about hoping that people are honest– it’s about upholding standards of honesty, and recognizing that we have some ability to hold people accountable if we suspect that they’re not being honest.
I would say that I see the main goal of outside-game advocacy work as setting up external incentives in such a way that pushes labs to good things rather than bad things
I’m currently most excited about outside-game advocacy that tries to get governments to implement regulations that make good things happen. I think this technically falls under the umbrella of “controlling the incentives through explicit regulation”, but I think it’s sufficiently different from outside-game advocacy that tries to get labs to do things voluntarily that it’s worth distinguishing the two.