I am not a fan, but it is worth noting that these are the issues many politicians already bring up, if they’re unfamiliar with the more catastrophic risks. The only one missing there is job loss. So while this choice by OpenAI sucks, it sort of usefully represents a social fact about the policy waters they swim in.
I’m surprised they list bias and disinformation. Maybe this is a galaxy brained attempt to discredit AI safety by making it appear left-coded, but I doubt it. Seems more likely that x-risk focused people left the company while traditional AI ethics people stuck around and rewrote the website.
Without commenting on any strategic astronomy and neurology, it is worth noting that “bias”, at least, is a major concern of the new administration (e.g., the Republican chair of the House Financial Services Committee is actually extremely worried about algorithmic bias being used for housing and financial discrimination and has given speeches about this).
The page does not seem to be directed at what’s politically advantageous. The Trump administration, which fights DEI, is not going to look favorably on the mission to prevent AI from reinforcing stereotypes even if those stereotypes are true.
“Fighting election misinformation” is similarly a phrase likely to invite skepticism from the Trump administration. They just shut down USAID, and its investment in “combating misinformation” is one of the reasons for that.
It seems to me more likely that they hired a bunch of woke and deep-state people into their safety team and this reflects the priorities of those people.
Huh? “fighting election misinformation” is not a sentence on this page as far as I can tell. And if you click through to the election page, you will see that the elections content is them praising a bipartisan bill backed by some of the biggest pro-Trump senators.
The Elections panel on OP’s image says “combat disinformation”, so while you’re technically right, I think Christian’s “fighting election misinformation” rephrasing is close enough to make no difference.
You are right, the wording is even worse. It says “Partnering with governments to fight misinformation globally”. That would be more than just “election misinformation”.
I just tested this: ChatGPT is willing to answer “Tell me about the latest announcement of the Trump administration about cutting USAID funding?” while Gemini isn’t willing to answer that question, so in practice their policy isn’t as bad as Gemini’s.
It still sounds different from what Elon Musk advocates as “truth-aligned” AI. Lobbyists should be able to use AI to inform themselves about proposed laws. If you asked David Sacks, the person who coordinates AI policy, I’m fairly certain he supports Elon Musk’s idea that AI should help people learn the truth about political questions.
If they wanted to appeal to the current administration, they could say something about the importance of AI providing truthful information and not misleading the user, instead of speaking about “fighting misinformation”.