> I think the workshop would be a valuable use of three days for anyone actively working in AI safety, even if they consider themselves “senior” in the field: it offered a valuable space for reconsidering basic assumptions and rediscovering the reasons why we’re doing what we’re doing.
This read to me as a remarkably strong claim; I assumed you meant something slightly weaker. But then I realized you said “valuable”, which might mean “not considering opportunity cost”. Can you clarify which you meant?
And if you do mean “considering opportunity cost”, I think it would be worth giving your ~strongest argument(s) for it!
For context, I am a PhD candidate in ML working on safety, and I am interested in such events, but unsure if they would be a valuable use of my time; OTTMH I would expect most of the value to be in terms of helping others rather than benefitting my own understanding/research/career/ability-to-contribute (I realize this sounds a bit conceited, and I didn’t try to avoid that except via this caveat, and I really do mean (just) OTTMH… I think the reality is a bit more that I’m mostly estimating value based on heuristics). If I had been in the UK when they happened, I would probably have attended at least one.
But I think I am a bit unusual in my level of enthusiasm. And FWICT, such initiatives are not receiving many resources (including money and the involvement of senior safety researchers) and potentially should receive A LOT more (e.g. 1-2 orders of magnitude more). So the case for them being valuable (in general, or for more senior/experienced researchers) is an important one!
So, first let me give you some reasons it was valuable to me, which I think will also hold for other people:
- It created space for reconsidering AI safety from the ground up, which is important because I can often become trapped by my plans once they have been set in motion.
- It offered an opportunity to learn from and teach others about AI safety, including people I wouldn’t have expected to have something to teach me. Usually they did this by saying weird things that knocked me out of local maxima created by being relatively immersed in the field, but also by teaching me about things I thought I understood but didn’t really, because I hadn’t spent as much time as they had specializing in some other small part of the AI safety field. (I’d give examples, except it’s been long enough that I can’t remember the specifics.)
- It let me connect with folks who I otherwise would not have connected with because they are less active on LW or not living in the Bay Area. Knowing other folks in the space has generally proven fruitful to me over the years in a variety of ways: increased willingness to consider each other’s research and give each other the benefit of the doubt on new and weird ideas, access to people who are willing and excited to bounce ideas around with you, and feeling connected to the community of AI safety researchers so this isn’t such a lonely project (this last one being way more important than I think many people recognize!).
- It let me quickly get feedback on ideas from multiple people with different specializations and interests, feedback that would otherwise have been hard to get if I had to rely on them, say, interacting with my posts on LW or responding to my emails.
In the end, though, what most motivates me to make such a strong claim is how much more valuable it was than I expected it to be. I expected a nice few days getting to work and think full time about a thing I care greatly about but, due to a variety of life circumstances, find it hard to devote more than ~15 hours a week to (averaged over many weeks). Instead it turned out to be a catalyst: it got me to reconsider my research assumptions, re-examine my plans, help others and learn I had more to offer them than I thought, and get unstuck on problems I’d been thinking about for months without much measurable progress.
In terms of opportunity costs: even if you’re already spending the majority of your time working on AI safety, and doing so in an in-person collaborative environment with other AI safety researchers, my guess is you would still find it valuable to attend an event like this maybe once a year, to help break out of the local maxima created by that bubble and to reconsider your research priorities by interacting with a broader range of folks interested in AI safety.