I think it might be more effective in future debates at the outset to:
* Explain that it’s only necessary to cross a low bar (e.g. see my Tweet below). This is a common practice in debates.
* Outline the responses they expect to hear from the other side, and explain why they are bogus. Framing: “Whether AI is an x-risk has been debated in the ML community for 10 years, and nobody has provided any compelling counterarguments refuting the 3 claims (of the Tweet). You will hear a number of counterarguments from the other side, but when you do, ask yourself whether they really address this. Here are a few counterarguments and why they fail...”

I think this framing could really take the wind out of the opposition’s sails and put them on the back foot.
I also don’t think LeCun and Meta should be given so much credit. Is Facebook really going to develop and deploy AI responsibly?
1) They have been widely condemned for knowingly playing a significant role in the Rohingya genocide, have acknowledged that they failed to act to prevent it, and are being sued for $150bn over it.
2) They have also been criticised for the role their products, especially Instagram, play in contributing to mental health issues, particularly around body image in teenage girls.
More generally, I think the “companies do irresponsible stuff all the time” point needs to be stressed more. One particular bogus argument is “we’ll make it safe”: x-safety is a common good, so companies should be expected to undersupply it. This is Econ 101.
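To make the underprovision point concrete, here is a minimal toy model (my own illustrative sketch, not part of the original comment; the number of labs, the benefit coefficient, and the quadratic cost are all made-up assumptions): each lab pays its own safety costs but the benefit of safety is shared by everyone, so each lab’s privately optimal investment falls well short of the socially optimal one.

```python
# Toy public-goods model of x-safety underinvestment (illustrative only).
# Assumptions: N identical labs, each lab's benefit is b * (total safety),
# and each lab's private cost is s**2. All numbers are hypothetical.

N = 5      # number of labs (assumed)
b = 10.0   # marginal benefit of one unit of safety to a single lab (assumed)

# Each lab maximizes  b * s - s**2  (it ignores the benefit to the other N-1 labs):
# first-order condition  b - 2*s = 0  ->  s_private = b / 2
s_private = b / 2

# A social planner counts the benefit to all N labs and maximizes  N*b*s - s**2
# per lab: first-order condition  N*b - 2*s = 0  ->  s_social = N * b / 2
s_social = N * b / 2

print(f"Privately optimal safety investment per lab: {s_private}")
print(f"Socially optimal safety investment per lab:  {s_social}")
print(f"Undersupply factor:                          {s_social / s_private:.0f}x")
```

In this toy setup the gap scales with the number of actors who share the benefit: the more parties free-ride on everyone else’s safety spending, the larger the shortfall.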