Can you quote (or link to) things Sacks has said that give you this impression?
My own impression is that there are many AI policy ideas that don’t have anything to do with censorship (e.g., improving government technical capacity, transparency into frontier AI development, emergency preparedness efforts, efforts to increase government “situational awareness”, research into HEMs and verification methods). Also things like “an AI model should not output bioweapons or other things that threaten national security” are “censorship” under some very narrow definition of censorship, but IME this is not what people mean when they say they are worried about censorship.
I haven’t looked much into Sacks’ particular stance here, but I think concerns around censorship are typically along the lines of “the state should not be involved in telling companies what their models can/can’t say. This can be weaponized against certain viewpoints, especially conservative viewpoints. Some folks on the left are trying to do this under the guise of terms like misinformation, fairness, and bias.”
What I believe about Sacks' views comes from regularly listening to the All-In Podcast, where he regularly talks about AI.
I haven’t looked much into Sacks’ particular stance here, but I think concerns around censorship are typically along the lines of “the state should not be involved in telling companies what their models can/can’t say. This can be weaponized against certain viewpoints, especially conservative viewpoints. Some folks on the left are trying to do this under the guise of terms like misinformation, fairness, and bias.”
Sacks is smarter and more sophisticated than that.
Also things like “an AI model should not output bioweapons or other things that threaten national security” are “censorship” under some very narrow definition of censorship, but IME this is not what people mean when they say they are worried about censorship.
In the real world, efforts by the Department of Homeland Security that started with censoring for reasons of national security ended up expanding the scope of what they censor. In the end the lab leak theory got censored, and if you asked the Department of Homeland Security for their justification, there's a good chance they would say "national security."
What I believe about Sacks' views comes from regularly listening to the All-In Podcast, where he regularly talks about AI.
Do you have any quotes or any particular podcast episodes you recommend?
if you asked the Department of Homeland Security for their justification, there's a good chance they would say "national security."
Yeah, I agree that one needs to have a pretty narrow conception of national security. In the absence of that, there’s concept creep in which you can justify pretty much anything under a broad conception of national security. (Indeed, I suspect that lots of folks on the left justified a lot of general efforts to censor conservatives as a matter of national security/public safety, under the view that a Trump presidency would be disastrous for America/the world/democracy. And this is the kind of thing that clearly violates a narrower conception of national security.)
How exactly to draw the line is a difficult question, but I think most people would clearly be able to see a difference between “preventing a model from outputting detailed instructions/plans to develop bioweapons” and “preventing a model from voicing support for political positions that some people think are problematic.”
Do you have any quotes or any particular podcast episodes you recommend?
I don’t have specific recommendations for the past. I would expect a segment in the next All-In Podcast in which David Sacks participates to lay out his views a bit.
How to exactly draw the line is a difficult question,
That’s the question you would ask if you think the person who’s drawing the line is aligned. If you think the people speaking about national security and using that to further different political and geopolitical ends are not aligned, it’s not the most interesting question.
It sounds to me like you are taking this as an abstract policy issue while ignoring the real-world censorship industrial complex. It’s like discussing union policy in the 1970s and 1980s in New York without taking into account that a lot of strikes are because someone failed to pay the Mafia.
If you don’t know what the censorship industrial complex is, Joe Rogan had a good interview with Mike Benz, who is a former official with the U.S. Department of State and current Executive Director of the Foundation For Freedom Online.