> We especially encourage researchers to share their strategic insights and considerations in write-ups and blog posts, unless they pose information hazards.
I’ve been doing quite a bit of this recently, and I’d love to see other researchers do more of this:

https://www.lesswrong.com/posts/Qz6w4GYZpgeDp6ATB/beyond-astronomical-waste
https://www.lesswrong.com/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety
https://www.lesswrong.com/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty
https://www.lesswrong.com/posts/4K52SS7fm9mp5rMdX/three-ways-that-sufficiently-optimized-agents-appear
https://www.lesswrong.com/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk-1
https://www.lesswrong.com/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low
https://www.lesswrong.com/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale

However, I haven’t gotten much engagement from people who work on strategy professionally. I’m not sure if they just aren’t following LW/AF, or don’t feel comfortable discussing strategically relevant issues in public. So this ties into my other comment, and is part of what I’m thinking about as I try to puzzle out how to move forward, both for myself and for others who may be interested in writing up their strategic insights and considerations.
> Allan Dafoe, director of the Centre for the Governance of AI, has a different take.
I’m not sure I understand what Allan is suggesting, but it feels pretty similar to what you’re saying. Can you perhaps explain your understanding of how his take differs from yours?
Nice work, Wei Dai! I hope to read more of your posts soon.
> However, I haven’t gotten much engagement from people who work on strategy professionally. I’m not sure if they just aren’t following LW/AF, or don’t feel comfortable discussing strategically relevant issues in public.
A bit of both, presumably. I would guess a lot of it comes down to incentives, perceived gain, and habits. There’s no particular pressure to discuss things on LessWrong or the EA Forum. LessWrong isn’t perceived as your main peer group. And if you’re at FHI or OpenAI, you already have plenty of contact with people who can provide quick feedback.
> I’m not sure I understand what Allan is suggesting, but it feels pretty similar to what you’re saying. Can you perhaps explain your understanding of how his take differs from yours?
I believe he suggests that there is a large space containing strategically important information. However, rather than first trying to structure that space and identify the questions with the most valuable answers, he suggests that researchers just try their hand at finding anything of value, probably for two reasons:
1. By trying to find anything of value, you get much more rapid feedback on whether you are good at finding information than by taking a longer time to find high-value information.
2. When there is a lot of information acquirable (‘low-hanging fruit’), it doesn’t matter as much where you start, as long as you start quickly.
In addition, he might believe that fewer people are good at strategy research than at tactics or informing research, and he might have wanted to give more generalizable advice.