With David Sacks being the AI/Crypto czar, we likely won’t be getting any US regulation on AI in the next few years.
It seems to me like David Sacks’s perspective on the issue is that AI regulation is just another aspect of the censorship industrial complex.
To convince him to support AI regulation, you would likely need an idea of how to do AI regulation without furthering the censorship industrial complex. The lack of criticism of the censorship industrial complex in current AI safety discourse is a big problem, because it means there are no such policy proposals available.
Can you quote (or link to) things Sacks has said that give you this impression?
My own impression is that there are many AI policy ideas that don’t have anything to do with censorship (e.g., improving government technical capacity, transparency into frontier AI development, emergency preparedness efforts, efforts to increase government “situational awareness”, research into HEMs and verification methods). Also things like “an AI model should not output bioweapons or other things that threaten national security” are “censorship” under some very narrow definition of censorship, but IME this is not what people mean when they say they are worried about censorship.
I haven’t looked much into Sacks’ particular stance here, but I think concerns around censorship are typically along the lines of “the state should not be involved in telling companies what their models can/can’t say. This can be weaponized against certain viewpoints, especially conservative viewpoints. Some folks on the left are trying to do this under the guise of terms like misinformation, fairness, and bias.”
What I believe about Sacks’ views comes from regularly listening to the All-In Podcast, where he frequently talks about AI.
I haven’t looked much into Sacks’ particular stance here, but I think concerns around censorship are typically along the lines of “the state should not be involved in telling companies what their models can/can’t say. This can be weaponized against certain viewpoints, especially conservative viewpoints. Some folks on the left are trying to do this under the guise of terms like misinformation, fairness, and bias.”
Sacks is smarter and more sophisticated than that.
Also things like “an AI model should not output bioweapons or other things that threaten national security” are “censorship” under some very narrow definition of censorship, but IME this is not what people mean when they say they are worried about censorship.
In the real world, efforts by the Department of Homeland Security that started with censorship for reasons of national security ended up expanding the scope of what they censored. In the end, the lab leak theory got censored, and if you asked the Department of Homeland Security for their justification, there’s a good chance they would say “national security”.
What I believe about Sacks’ views comes from regularly listening to the All-In Podcast, where he frequently talks about AI.
Do you have any quotes or any particular podcast episodes you recommend?
if you asked the Department of Homeland Security for their justification, there’s a good chance they would say “national security”.
Yeah, I agree that one needs to have a pretty narrow conception of national security. In the absence of that, there’s concept creep in which you can justify pretty much anything under a broad conception of national security. (Indeed, I suspect that lots of folks on the left justified a lot of general efforts to censor conservatives as a matter of national security//public safety, under the view that a Trump presidency would be disastrous for America//the world//democracy. And this is the kind of thing that clearly violates a narrower conception of national security.)
How to exactly draw the line is a difficult question, but I think most people would clearly be able to see a difference between “preventing a model from outputting detailed instructions/plans to develop bioweapons” and “preventing a model from voicing support for political positions that some people think are problematic.”
Do you have any quotes or any particular podcast episodes you recommend?
I don’t have specific recommendations from the past. I would expect a segment in the next All-In Podcast in which David Sacks participates to lay out his views a bit.
How to exactly draw the line is a difficult question,
That’s the question you would ask if you think the person who’s drawing the line is aligned. If you think the people speaking about national security and using that to further different political and geopolitical ends are not aligned, it’s not the most interesting question.
It sounds to me like you are taking this as an abstract policy issue while ignoring the real-world censorship industrial complex. It’s like discussing union policy in 1970s and 1980s New York without taking into account that a lot of strikes happened because someone failed to pay the Mafia.
If you don’t know what the censorship industrial complex is, Joe Rogan had a good interview with Mike Benz, who is a former official with the U.S. Department of State and current Executive Director of the Foundation For Freedom Online.
The fundamental problem is that any effective AI alignment technique is also a censorship technique, and thus you can’t advance AI alignment very much without also enabling people to censor an AI effectively, because a lot of alignment work aims to make AIs censored in particular ways.
I disagree with the use of “any”. In principle, an effective alignment technique could create an AI that isn’t censored, but does have certain values/preferences over the world. You could call that censorship, but that doesn’t seem like the right or common usage. I agree that in practice many/most things currently purporting to be effective alignment techniques fit the word more, though.
I admit this is possible, so I almost certainly am overconfident here (which matters a little), though I believe a lot of common methods that do work for alignment also allow you to censor an AI.
If you take Eliezer’s early writing, the idea is that AI should be aligned with Coherent Extrapolated Volition (CEV). That’s a different goal from aligning AI with the views of credentialed experts or the leadership of AI companies.
“How do you regulate AI companies so that they aren’t enforcing Californian values on the rest of the United States and the world?” is an alignment question. If you have a good answer to that question, it would be easier to convince someone who worries that those companies, having already enforced Californian values through the censorship industrial complex, will do the same thing with AI, to support regulating them.
If you ignore the alignment questions that people like David Sacks care about, it’s hard to convince them that you are sincere about the other alignment questions.
A crux here is that I basically don’t think Coherent Extrapolated Volition of humanity type alignment strategies work, and I also think that it is irrelevant that we can’t align an AI to the CEV of humanity.