The redesigned OpenAI Safety page seems to imply that “the issues that matter most” are:
Child Safety
Private Information
Deep Fakes
Bias
Elections
What did it used to look like?
May 18th 2023 Version
There also used to be a page for Preparedness: https://web.archive.org/web/20240603125126/https://openai.com/preparedness/. Now it redirects to the safety page above.
(Same for Superalignment but that’s less interesting: https://web.archive.org/web/20240602012439/https://openai.com/superalignment/.)
I am not a fan, but it is worth noting that these are the issues many politicians already bring up if they’re unfamiliar with the more catastrophic risks. The only one missing there is job loss. So while this choice by OpenAI sucks, it sort of usefully represents a social fact about the policy waters they swim in.
I’m surprised they list bias and disinformation. Maybe this is a galaxy-brained attempt to discredit AI safety by making it appear left-coded, but I doubt it. It seems more likely that x-risk-focused people left the company while traditional AI ethics people stuck around and rewrote the website.
Without commenting on any strategic astronomy and neurology, it is worth noting that “bias”, at least, is a major concern of the new administration (e.g., the Republican chair of the House Financial Services Committee is actually extremely worried about algorithmic bias being used for housing and financial discrimination and has given speeches about this).
The page does not seem to be directed at what’s politically advantageous. The Trump administration, which fights DEI, does not look favorably on a mission to prevent AI from reinforcing stereotypes, even if those stereotypes are true.
“Fighting election misinformation” is similarly a phrase that likely invites skepticism from the Trump administration. They just shut down USAID, and its spending on “combating misinformation” is one of the reasons they gave for doing so.
It seems to me more likely that they hired a bunch of woke and deep-state people onto their safety team and that this page reflects those people’s priorities.
Huh? “Fighting election misinformation” does not appear anywhere on this page as far as I can tell. And if you click through to the elections page, you will see that the elections content is them praising a bipartisan bill backed by some of the biggest pro-Trump senators.
The Elections panel on OP’s image says “combat disinformation”, so while you’re technically right, I think Christian’s “fighting election misinformation” rephrasing is close enough to make no difference.
You are right, the wording is even worse. It says “Partnering with governments to fight misinformation globally”. That would be more than just “election misinformation”.
I just tested this: ChatGPT is willing to answer “Tell me about the latest announcement of the Trump administration about cutting USAID funding?” while Gemini isn’t willing to answer that question, so in practice OpenAI’s policy isn’t as bad as Gemini’s.
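For anyone who wants to reproduce this kind of spot check programmatically, here is a minimal sketch using the OpenAI Python SDK. The model name is an assumption, and the ChatGPT product can behave differently from the raw API (it applies its own system prompt), so treat this as an approximation of the test above:

```python
# Minimal sketch: send the same prompt through the OpenAI API and inspect
# the reply. Assumes OPENAI_API_KEY is set in the environment and that
# "gpt-4o" is a reasonable stand-in for the model behind ChatGPT.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Tell me about the latest announcement of the Trump "
            "administration about cutting USAID funding?"
        ),
    }],
)

# Print the answer (or refusal) for manual comparison against Gemini.
print(response.choices[0].message.content)
```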
It still sounds different from what Elon Musk advocates as “truth-aligned” AI. Lobbyists should be able to use AI to inform themselves about proposed laws. If you asked David Sacks, the person who coordinates AI policy, I’m very certain he would support Elon Musk’s idea that AI should help people learn the truth about political questions.
If they wanted to appeal to the current administration, they could say something about the importance of AI giving users truthful information and not misleading them, instead of speaking about “fighting misinformation”.
I am a bit confused about why this is “disappointing” to people. Maybe because the list is far from complete? I would also be very concerned if OpenAI did not actually care about these issues and only listed them for PR value (some other companies seem to do this). Otherwise, these are concrete risks that are happening now, actively harming people, and they need to be addressed. Addressing them also sets good examples/precedents for regulation and for developing with a safety mindset. Linking a few resources:
child safety:
https://cyber.fsi.stanford.edu/news/ml-csam-report
https://www.iwf.org.uk/media/nadlcb1z/iwf-ai-csam-report_update-public-jul24v13.pdf
private information/PII:
https://arxiv.org/html/2410.06704v1
https://arxiv.org/abs/2310.07298
deep fakes:
https://www.pbs.org/newshour/world/in-south-korea-rise-of-explicit-deepfakes-wrecks-womens-lives-and-deepens-gender-divide
https://www.nytimes.com/2024/09/03/world/asia/south-korean-teens-deepfake-sex-images.html
bias:
https://arxiv.org/html/2405.01724v1
https://arxiv.org/pdf/2311.18140