I was recently told that there’s a “fair bit” of AI strategy/policy/governance research and discussion happening non-publicly (e.g., via Google docs) by people at places like FHI and OpenAI. Looking at the acknowledgements section of this post, it appears that the current authors are not very “plugged in” to those non-public discussions. I am in a similar situation, in that I’m interested in AI strategy but am not “plugged in” to the existing discussions. It seems like there are a few different ways to go from here, and I’m not sure which is best:
1. Try to get “plugged in” to the non-public discussions.
2. Assuming there are no serious info hazard concerns, try to make the current discussions more public, e.g., by pushing for the creation of a public forum for discussing strategy and inviting strategy researchers to participate.
3. Try to create a parallel public strategy discussion.
My guess is that, assuming resources (and info hazards) aren’t an issue, option 3 is best, because different kinds of research/discussion setups create different biases, and it’s good to have diversity to avoid blind spots. (For example, Bitcoin and UDT both came out of informal online discussion forums rather than academic, industry, or government research institutions.) But:
Are there enough people and funding to sustain a parallel public strategy research effort and discussion?
Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?
I’d be interested in the authors’ (or other people’s) thoughts on these questions.
I agree with you that #3 seems the most valuable option, and you are correct that we aren’t as plugged in (although I am much less plugged in, as yet, than the other two authors). I hope to learn more in the future about:
How much explicit strategy research is actually going on behind closed doors, rather than just people talking and sharing implicit models.
How much of all potential strategy research should be private, and how much should be public. My current belief is that more strategy research should be public than private, but my understanding of info hazards is still quite limited, so this belief might change drastically in the future.
To respond to your other questions:
Are there enough people and funding to sustain a parallel public strategy research effort and discussion?
I am not sure whether I understand the question: I don’t think there are currently enough people or funding allocated to public strategy research, but I think a sustained public strategy research field is possible. I also think the threshold for critical mass is not high: just a few researchers communicating with an engaged audience seems enough to sustain the field.
Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?
Yes, there are serious info hazards. And yes, I think the benefits of having a public discussion outweigh the (manageable) risks that come with it. Given a clear place for info-hazardous content to be shared (which there is: the draft-sharing network) and a clear understanding of, and policy for, limiting info hazards (which can be improved on a lot), public discussion will have at least the following advantages:
Exposure to a wider array of feedback will, in expectation, improve the quality of ideas.
Outsiders have more accessible knowledge to learn from, which makes it easier for them to contribute later. There are probably also a lot of benefits to be gained from making other people more strategically savvy!
It makes it easier for non-affiliated or less-connected individuals to create and share knowledge.
FWIW, in my experience there are infohazard/attention-hazard concerns. Public strategy research likely has negative expected value: if it is good, it will run into infohazards; if it is bad, it will create confusion.
I would expect that prudent funders will not want to create a parallel public strategy discussion.
I am not sure why you believe good strategy research always has infohazards. That’s a very strong claim. Strategy research is broader than “how should we deal with other agents”. Do you think Drexler’s Reframing Superintelligence: Comprehensive AI Services or The Unilateralist’s Curse had negative expected value? Because I would classify both as public, good strategy research with positive expected value.
Are there any specific types of infohazards you’re thinking of? (E.g., informing unaligned actors, or attracting media attention and negative public opinion.)
Depends on what you mean by public. While I don’t think you can have a good public research process that does not run into infohazards, you can have a nonpublic process that produces good public outputs. I don’t think your examples count as public in that sense; e.g., do you see any public discussion leading to CAIS?
My guess is that the ideal is to have semi-independent teams doing research. Independence in order to better explore the space of questions, and some degree of plugging in to each other in order to learn from each other and to coordinate.
Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?
There are info hazards. But I think if we can discuss Superintelligence publicly, then yes: we can have a public discussion about the non-hazardous parts of strategy.
Are there enough people and funding to sustain a parallel public strategy research effort and discussion?
I think you could get a pretty lively discussion even with just 10 people, if they were active enough. I think you’d need a core of active posters and commenters, and there needs to be enough reason for them to assemble.