I enjoyed reading this, especially the introduction to trading zones and boundary objects.
I don’t believe there is a single AI safety agenda that will once and for all “solve” AI safety or AI alignment (and even “solve” doesn’t quite capture the nature of the challenge). Hence, I’ve been considering safety cases as a way to integrate elements from various technical AI safety approaches, which in my opinion have so far evolved mostly in isolation with limited interaction.
I’m curious about your thoughts on the role of “big science” here. The main example you provide of a trading zone and boundary object involves nation-states collaborating toward a specific, high-stakes warfare objective. While “big science” (large-scale scientific collaboration) isn’t inherently necessary for trading zones to succeed, it might be essential for the specific goal of developing safe advanced AI systems. Any thoughts?
Re “big science”: I’m not familiar with the term, so I’m not sure exactly what question is being asked. I am much more optimistic in the worlds where we have large-scale coordination amongst expert communities. If the question is about what the relationship between governments, firms, and academia should look like, I’m still developing my gears around this. Jade Leung’s thesis seems to have an interesting model, but I have yet to dig very deep into it.