Dan H
AISN #49: Superintelligence Strategy
Introducing MASK: A Benchmark for Measuring Honesty in AI Systems
On the Rationality of Deterring ASI
AISN #48: Utility Engineering and EnigmaEval
though I don’t think xAI took an official position one way or the other
I assumed most people took it that xAI supported it, since Elon did. I didn’t bother pushing for an additional xAI endorsement given that Elon had already endorsed it.
AISN #47: Reasoning Models
AISN #46: The Transition
It’s probably worth them mentioning, for completeness, that Nat Friedman funded an earlier version of the dataset too. (I was advising at the time and provided the main recommendation that it needed to be research-level, since they were focusing on Olympiad-level problems.)
I can also confirm they aren’t giving AI companies other than OpenAI (such as xAI) access to the mathematicians’ questions.
AISN #45: Center for AI Safety 2024 Year in Review
and have clearly been read a non-trivial amount by Elon Musk
Nit: He heard this idea in conversation with an employee AFAICT.
AISN #44: The Trump Circle on AI Safety. Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems
AI Safety Newsletter #43: White House Issues First National Security Memo on AI. Plus, AI and Job Displacement, and AI Takes Over the Nobels
AI Safety Newsletter #42: Newsom Vetoes SB 1047. Plus, OpenAI’s o1, and AI Governance Summary
AI Safety Newsletter #41: The Next Generation of Compute Scale. Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics
AI forecasting bots incoming
Relevant: Natural Selection Favors AIs over Humans
universal optimization algorithm
Evolution is not an optimization algorithm (this is a common misconception discussed in Okasha, Agents and Goals in Evolution).
AI Safety Newsletter #40: California AI Legislation. Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety?
The Bitter Lesson for AI Safety Research
We have been working on this issue for months and have made substantial progress: Tamper-Resistant Safeguards for Open-Weight LLMs
General article about it: https://www.wired.com/story/center-for-ai-safety-open-source-llm-safeguards/
xAI’s thresholds are entirely concrete and not extremely high.
They are specified and as high-quality as you can get. (If there are better datasets, let me know.)
I’m not saying it’s perfect, but I wouldn’t put them all in the same bucket. Meta’s is very different from DeepMind’s or xAI’s.