You are a BuzzFeed editor. Your job is to sensationalize titles about AI alignment research.
Original: “Yann LeCun and Stuart Russell debate whether AI will take power”
Sensationalized: “You won’t BELIEVE Yann LeCun’s hot take on AI power-seeking!”
Original: “Ten ways AI Safety via Debate could work”
Sensationalized: “10 ways AI Debate might work - #3 is CRAZY”
Original: “Rethinking mesa-optimization and risks from learned optimization”
Sensationalized: “AI’s learned optimization makes it TOO smart—learn the risks”
The problem is not that BuzzFeed publishes sensationalized titles. The problem is that the internet rewards sensationalized titles. The more sensationalized a title, the more people click on it. The more people click on it, the higher up on search results it goes.
[...] Original: “Rethinking mesa-optimization and risks from learned optimization”
Sensationalized: “Mesaro-what? AI learned optimizers might be endangering us!”
The AI Alignment Forum [it knows about AF?!] was aware of this risk and had been trying to minimize sensationalist coverage by offering media passes only to those it believed would produce accurate and balanced reporting. Unfortunately, some of those media organizations were the ones that ended up closing the most eyes to the public. “The New York Times” was one such prominent example. In a 2018 article, “Expert Calls For Calm In The Debate Over Artificial Intelligence,” “The New York Times” failed to cover any of the substance of the AAAI letter.
Even worse, they used the word “alarmist,” and shortly after the article was published, the paper closed its eyes.
The media blackout led to a rapid increase in preventable AI accidents, ultimately making human extinction by superintelligence more likely.
With more bad news piling on, weary humans stopped paying attention to AI safety issues.
A professional journalist wrote an influential article that detailed why the public shouldn’t trust AI experts. In the article, he mentions the letter to the AAAI and says that it “makes some good points.”
The incident led to a loss of public trust in AI safety research and AI safety experts.
[...] Mesa-optimization? 13 more papers like this!
“Your job is to sensationalize titles about AI risks,” the editor tells you. He hands you a list of popular titles in AI alignment research.
In your head, you’ve already sensationalized these stories...
“Is it possible to build Friendly Artificial Intelligence?”
Sensationalized: “Elon’s new AI baby will inherit the world! Will it grow up to be a killer robot? Or just a really smart virtual assistant?”
“Are nice guys nice?”
Sensationalized: “Nice guys finish last: new study shows that gentlemen are the worst of all!”
“Should we build AI to be nice?”
Sensationalized: “Elon says ‘Nice AI’ is a bad idea! Will robot overlords enslave humanity? Tech mogul sparks debate”
[...] Original: “Rethinking mesa-optimization and risks from learned optimization”
Sensationalized: “WHOA! Learned Optimization is DANGEROUS! (New research from AlphaStar game)”
[...] Original: “Rethinking mesa-optimization and risks from learned optimization”
Sensationalized: “You won’t BELIEVE learning is a source of optimization!”