As I've said many times, I am not convinced that the risk is as high as they perceive it to be. And I found that most talks were so high-level that there was no way for me to say anything concrete about them, at least not without first seeing how they would actually try to implement things. Worse, I was saying all of this out loud! A staff member from MIRI told me that my questions meant I didn't really understand the risks.
It sounds to me like you actually understood the arguments about risk rather well and paid an unusual amount of attention to detail, which is a very good thing. It's entirely MIRI's loss that you weren't hired; you come across as a very clear-headed thinker with a close eye for fine details.
As someone who has gone through an AIRCS workshop and left with a similar amount of confusion, let me say that it is not you who is at fault. I hope you find what you are looking for!
Hi Mark,

This maybe doesn't make much difference for the rest of your comment, but just FWIW: the workshop you attended in Sept 2016 was not part of the AIRCS series. It was a one-off experiment, funded by an FLI grant, called "CFAR for ML", where we ran most of a standard CFAR workshop and then tacked an additional day of AI alignment discussion onto the end.
The AIRCS workshops have been running ~9 times/year since Feb 2018, have been evolving pretty rapidly, and in recent iterations involve a higher ratio of AI risk content, as well as content about the cognitive biases etc. that seem to arise in discussions of AI risk in particular. They have somewhat smaller cohorts to allow more 1-on-1 conversation (~15 participants instead of 23). They are co-run with MIRI, which "CFAR for ML" was not. They have a slightly different team and are a slightly different beast.
Which… doesn't mean you wouldn't have had most of the same perceptions if you'd come to a recent AIRCS! You might well have. From a distance, perhaps all our workshops look pretty similar. And I can see calling "CFAR for ML" "AIRCS", since it was in fact partially about AI risk and was aimed mostly at computer scientists, which is what "AIRCS" stands for. Still, we locally care a good bit about the distinctions between our programs, so I did want to clarify.
Thanks for the correction! Yes, from a distance the descriptions of your workshops seem pretty similar. I thought "CFAR for ML" was a prototype for AIRCS, and assumed it would have a similar structure and format. Many of Arthur's descriptions matched my memories of the CFAR for ML workshop.