I’m listening to this congressional hearing about Facebook & the harmful effects of its algorithms: https://www.youtube.com/watch?v=GOnpVQnv5Cw
I recommend listening to it yourself. I’m sorry I didn’t take timestamped notes; if I had, maybe you wouldn’t have to listen to the whole thing. I think that listening to it has subtly improved my intuitions/models/priors about how US government and society might react to developments in AI in the future.
In a sense, this is already an example of an “AI warning shot” and the public’s reaction to one. The hearing contains lots of discussion about Facebook’s algorithms: discussion about how the profit-maximizing thing is often harmful but corporations have an incentive to do it anyway, and discussion about how nobody understands what these algorithms really think, and how the algorithms are probably doing very precisely targeted ads/marketing even though officially they aren’t being instructed to. So, basically, this is a case of unaligned AI causing damage—literally killing people, according to the politicians here.
And how do people react to it? Well, the push in this hearing seems to be to name Facebook upper management as responsible and punish them, while also rolling out a grab bag of fixes such as eliminating like trackers for underage kids and changing the algorithm to not maximize engagement. I hope they would also apply such fixes to other major social media platforms, but this hearing does seem very focused on Facebook in particular. One thing that I think is probably a mistake: the people here constantly rip into Facebook for doing internal research that concluded their algorithms were causing harms, and then not sharing that research with the world. I feel like the predictable consequence of this is that no tech company will do research on topics like this in the future, and they’ll hoard their data so that no one else can do the research either. In a sense, one of the outcomes of this warning shot will be to dismantle our warning shot detection system.
We’ll see what comes of this.