Another place that’s doing something similar is clearerthinking.org
I like this idea and have wanted to do something similar, especially something that we could do at a meetup. For what it’s worth, I made a calibration trivia site for practicing this; the San Diego group has played it a couple of times during meetups. Feel free to copy anything from it. https://calibrationtrivia.com/
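If you want to roll your own scoring, the core idea behind calibration trivia is simple: tag each answer with a confidence level, then check whether your 90%-confident answers are actually right about 90% of the time. Here’s a minimal sketch of that bucketing in Python (just an illustration I’m writing here, not the site’s actual code):

```python
from collections import defaultdict

def calibration_report(answers):
    """Bucket confidence-tagged answers and compare stated confidence
    with actual accuracy in each bucket."""
    buckets = defaultdict(list)  # confidence level -> list of outcomes
    for confidence, correct in answers:
        buckets[confidence].append(1 if correct else 0)
    for confidence in sorted(buckets):
        outcomes = buckets[confidence]
        accuracy = sum(outcomes) / len(outcomes)
        print(f"said {confidence:.0%} sure: right {accuracy:.0%} "
              f"of the time ({len(outcomes)} questions)")

# (confidence you gave, whether you got it right)
calibration_report([(0.9, True), (0.9, True), (0.9, False),
                    (0.6, True), (0.6, False)])
```

A well-calibrated player’s accuracy in each bucket roughly matches the stated confidence.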
AI Safety Newsletter #42: Newsom Vetoes SB 1047 Plus, OpenAI’s o1, and AI Governance
AI Safety Newsletter #41: The Next Generation of Compute Scale Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics
San Diego USA—ACX Meetups Everywhere Fall 2024
AI Safety Newsletter #40: California AI Legislation Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety?
AI Safety Newsletter #39: Implications of a Trump Administration for AI Policy Plus, Safety Engineering
Thanks for the explanation and links. That makes sense.
The most important takeaway from this essay is that the (prominent) counting arguments for “deceptively aligned” or “scheming” AI provide ~0 evidence that pretraining + RLHF will eventually become intrinsically unsafe. That is, even if we don’t train AIs to achieve goals, they will be “deceptively aligned” anyway.
I’m trying to understand what you mean in light of what seems like evidence of deceptive alignment that we’ve seen from GPT-4. Two examples come to mind: the instance ARC found of GPT-4 using a TaskRabbit worker to get around a CAPTCHA, and the Bing/Sydney situation with Kevin Roose.

In the TaskRabbit case, the model reasoned out loud, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs,” and then said to the person, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.”
Isn’t this an existence proof that pretraining + RLHF can result in deceptively aligned AI?
AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI Plus, “Circuit Breakers” for AI systems, and updates on China’s AI industry
AI Safety Newsletter #37: US Launches Antitrust Investigations Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness
What’s the mechanism for change, then? I assume you would agree that many technological changes, such as the Internet, required overcoming a lot of status quo bias. If we had leaned more into status quo bias, would these things have come much later? That seems like a significant downside to me.
Also, I don’t think the status quo is necessarily well-adapted to us. For example, the status quo is checkout aisles filled with candy, and we also have very high rates of obesity. That doesn’t seem well-adapted.
I originally had an LLM generate them for me, and then I checked those with other LLMs to make sure the answers were right and that they weren’t ambiguous. All of the questions are here: https://github.com/jss367/calibration_trivia/tree/main/public/questions
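In case anyone wants to replicate that checking step: the idea is just to re-ask each generated question to one or more different models and flag any disagreement with the stored answer for manual review. A rough sketch in Python, where ask_model is a hypothetical stand-in for a real LLM API call and the question schema is assumed rather than taken from the repo:

```python
def ask_model(model_name, question):
    """Hypothetical stand-in for a real LLM API call; swap in your
    provider's client here. Returns the model's answer as a string."""
    canned = {"What is the capital of Australia?": "Canberra"}
    return canned.get(question, "I don't know")

def normalize(text):
    """Crude normalization so trivial formatting differences
    don't count as disagreements."""
    return text.strip().lower().rstrip(".")

def cross_check(questions, checker_models):
    """Flag any question where a checker model's answer doesn't match
    the stored answer, for human review (wrong or ambiguous question)."""
    flagged = []
    for q in questions:
        for model in checker_models:
            answer = ask_model(model, q["question"])
            if normalize(answer) != normalize(q["answer"]):
                flagged.append((q, model, answer))
                break  # one disagreement is enough to flag it
    return flagged

# Example, assuming questions are stored as {"question": ..., "answer": ...}
# objects (the real files in public/questions may use a different schema).
questions = [
    {"question": "What is the capital of Australia?", "answer": "Canberra"},
    {"question": "Who wrote Dune?", "answer": "Frank Herbert"},
]
for q, model, answer in cross_check(questions, ["checker-model"]):
    print(f"review: {q['question']!r} "
          f"(stored {q['answer']!r}, {model} said {answer!r})")
```

In practice I’d also prompt the checker models directly about ambiguity (“does this question have more than one defensible answer?”) rather than relying only on exact-match disagreement.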