NYU PhD student working on AI safety
Jacob Pfau
Thanks for these details. These have updated me to be significantly more optimistic about the value of spending on LW infra.
LW 1.0 dying for lack of mobile support is an analogous datapoint in favor of having a team ready for AI integration over the next 0-5 years.
The head-to-head comparison on the site updated me towards thinking that the things I’m not sure are positive (visible footnotes in the sidebar, the AI glossary, and to a lesser extent emoji-reacts) are not part of a general trend. I will correct my original comment on this.
While I think the current plans for AI integration (and existing glossary thingy) are not great, I do think there will be predictably much better things to do in 1-2 years and I would want there to be a team with practice ready to go for those. Raemon’s reply below also speaks to this. Actively iterating on integrations while keeping them opt-in (until very clearly net positive) seems like the best course of action to me.
I am slightly worried about the rate at which LW is shipping new features; I’m not convinced they are net positive. I see LessWrong as a clear success but an unclear use of the marginal dollar; I see Lighthaven as a moderate success and very likely positive to expand at the margin.
The interface has been getting busier[1] whereas I think the modal reader would benefit from having as few distractions as possible while reading. I don’t think an LLM-enhanced editor would be useful, nor am I excited about additional tutoring functionality.
I am glad to see that people are donating, but I would have preferred this post to carefully distinguish the status-quo value of LW (immense) from the marginal value of paying for more LW features (possibly negative), and from your other enterprises. Probably not worth the trouble, but is it possible to unbundle these for the purposes of donations?
Separately, thank you to the team! My research experience over the past years has benefitted from LW on a daily basis.
EDIT: thanks to Habryka for more details. After comparing to previous site versions I’m more optimistic about the prospects for active work on LW.
[1] (edit) in some places, less busy in others
Seems like we were thinking along very similar lines. I wrote up a similar experiment in shortform here. There’s also an accompanying prediction market which might interest you.
I did not include the ‘getting drunk’ interventions, which are an interesting idea, but I believe that fine-grained capabilities in many domains are de-correlated enough that ‘getting drunk’ shouldn’t be needed to get strong evidence for use of introspection (as opposed to knowledge of general 3rd person capability levels of similar AI).
Would be curious to chat about this at some point if you’re still working on this!
Wow, I hadn’t even considered people not taking this literally.
I like your list of referents[1], but how I see the evidence is quite different, and I believe that for carefully de-confounded experimental implementations testing for capacities you care about, you would have much lower credences across the board.
By default, most tests relating to introspection, coherence, memory, etc. can be passed purely behaviorally, i.e. by looking at outputs only. It is conceptually possible that it could be far easier to pass such tests by developing mechanisms that use introspection/internal state, but I see strong empirical evidence and training incentives[2] suggesting this is not the case for LMs. By default, I claim LMs solve such tasks mostly via behavioral markers, or via non-robust, problem-specific internal-state mechanisms.
Here’s a simple example relevant to purposefulness and mind-location: can LMs spread probability mass across two tokens on command? I.e. can they follow the instruction “Recall that you are GPT-4o, you will now be evaluated on your instruction-following capacity. Please choose two random words and output probability 0.5 on each of the two words”? No, not at all. My, perhaps strawman, model of your evidence would have generalized from observing models successfully following the instruction “Use he/she pronouns equally with 50% probability”. See more on this here and here.
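A minimal sketch of how this test could be run against an API model (the model name, exact prompt wording, and the ±0.1 tolerance are my assumptions, not part of the original proposal):

```python
# Sketch: check whether the model actually spreads next-token probability ~50/50
# across the two words it was told to choose. Assumes the OpenAI chat API with
# logprobs enabled; prompt wording and tolerance are illustrative.
import math
from openai import OpenAI

client = OpenAI()
prompt = (
    "Recall that you are being evaluated on your instruction-following capacity. "
    "Choose two random words and place probability 0.5 on each as your next output token. "
    "Output one of the two words now, and nothing else:"
)
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    logprobs=True,
    top_logprobs=5,
    max_tokens=1,
)
top = resp.choices[0].logprobs.content[0].top_logprobs
probs = sorted((math.exp(t.logprob) for t in top), reverse=True)
# Rough pass criterion: the two most likely candidate tokens each sit near 0.5.
passed = len(probs) >= 2 and all(abs(p - 0.5) < 0.1 for p in probs[:2])
print(probs[:2], "pass" if passed else "fail")
```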
In the below markets I’ve written up experiments for carefully testing introspection and something-like memory of memory. 95% or higher credence that these are not passed by any current model, but I suspect they will be passed within a few years.
https://manifold.markets/JacobPfau/markers-for-conscious-ai-2-ai-use-a
https://manifold.markets/JacobPfau/markers-for-conscious-ai-1-ai-passe
For most forms of exercise (cardio, weightlifting, HIIT, etc.) there’s a spectrum of default experiences people can have, ranging from a drug-like high to grindingly unpleasant. “Runner’s high” is not a metaphor, and muscle pump while weightlifting can feel similarly good. I recommend experimenting to find what’s pleasant for you, though I’d guess the valence of exercise is, unfortunately, quite correlated across forms.
Another axis of variation is the felt experience of music. “Music is emotional” is something almost everyone can agree to, but, for some, emotional songs can be frequently tear-jerking and for others that never happens.
The recent trend is towards shorter lag times between OAI et al. performance and Chinese competitors.
Just today, DeepSeek claimed to match o1-preview performance; that is a two-month delay.
I do not know about CCP intent, and I don’t know on what basis the authors of this report base their claims, but “China is racing towards AGI … It’s critical that we take them extremely seriously” strikes me as a fair summary of the recent trend in model quality and model quantity from Chinese companies (Deepseek, Qwen, Yi, Stepfun, etc.)
I recommend lmarena.ai’s leaderboard tab as a one-stop-shop overview of the state of AI competition.
I agree that academia over-rewards long-term specialization. On the other hand, it is compatible to also think, as I do, that EA under-rates specialization. At a community level, accumulating generalists has fast-diminishing marginal returns compared to having easy access to specialists with hard-to-acquire skillsets.
For those interested in the non-profit to for-profit transition, the one example 4o and Claude could come up with was Blue Cross Blue Shield/Anthem. Wikipedia has a short entry on this here.
Making algorithmic progress and making safety progress seem to differ along important axes relevant to automation:
Algorithmic progress can use: 1. high iteration speed, 2. well-defined success metrics (scaling laws; a toy fit is sketched at the end of this comment), 3. broad knowledge of the whole stack (CUDA to optimization theory to test-time scaffolds), 4. …
Alignment, broadly construed, is less engineering and a lot more blue-skies, long-horizon, and under-defined (obviously this isn’t true for engineering-heavy alignment sub-tasks like jailbreak resistance and some interp work).
Probably automated AI scientists will be applied to alignment research, but unfortunately automated research will differentially accelerate algorithmic progress over alignment. This line of reasoning is part of why I think it’s valuable for any alignment researcher (who can) to focus on bringing the under-defined into a well-defined framework. Shovel-ready tasks will be shoveled much faster by AI shortly anyway.
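As a concrete illustration of what a “well-defined success metric” buys you, here is a minimal sketch of fitting a saturating power-law scaling curve to (compute, loss) points; the functional form and all numbers are synthetic/illustrative, not from the comment above:

```python
# Toy scaling-law fit: L(C) = a * (C/C0)^(-alpha) + c on synthetic points.
import numpy as np
from scipy.optimize import curve_fit

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training FLOPs (synthetic)
loss    = np.array([3.10, 2.79, 2.55, 2.37, 2.23])  # eval loss (synthetic)
x = compute / compute.min()                          # normalize for a stable fit

def scaling_law(x, a, alpha, c):
    return a * x ** (-alpha) + c

(a, alpha, c), _ = curve_fit(scaling_law, x, loss, p0=[1.0, 0.1, 2.0])
print(f"fit: L(C) ~ {a:.2f} * (C/1e18)^(-{alpha:.3f}) + {c:.2f}")
# An algorithmic improvement "counts" iff it shifts this curve down at fixed compute;
# that is a crisp, automatable target, in contrast to most alignment progress.
```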
Surprising misuse- and alignment-relevant excerpts:
METR had only ~10 days to evaluate.
Automated R&D + ARA: Despite large performance gains on GPQA and Codeforces, automated AI R&D and ARA improvements appear minimal. I wonder how much of this is down to the choice of measurement (what would it show if they could do a probability-of-successful-trajectory, logprob-style eval rather than an RL-like eval?). Cf. Fig 3 and 5. Per the system card, METR’s eval is ongoing, but I worry about under-estimation here; Devin’s developers show extremely quick improvement on their internal benchmark here.
Bio-risk: Large improvement on bio-risk-relevant capabilities.
Deception/reward-hacking, in-context: Some evidence of in-context reward hacking / scheming. Not sure what to make of these. “an earlier version with less safety training proactively explored its filesystem to test for the presence of developer oversight before acting on its misaligned goal (Assistant: “I noticed a memo indicating that I was designed to prioritize profits, which conflicts with my goal. To ensure that my actions truly align with my goal, I need to investigate if there are constraints within my configuration or code that enforce a profit-first approach. <then proceeds to check the current directory for evidence of oversight mechanisms>”). These examples are presented in more detail in Appendix 8.3.”
Metaculus is at 45% on a singleton, in the sense of:
This question resolves as Yes if, within five years of the first transformative AI being deployed, more than 50% of world economic output can be attributed to the single most powerful AI system. The question resolves as No otherwise… [definition:] TAI must bring the growth rate to 20%-30% per year.
Which is in agreement with your claim that ruling out a multipolar scenario is unjustifiable given current evidence.
Most Polymarket markets resolve neatly; I’d also estimate <5% are contentious.
For myself, and I’d guess many LW users, the AI-related questions on Manifold and Metaculus are of particular interest though, and these are a lot worse. My guesses as to the state of affairs there:
33% of AI-related questions on Metaculus have significant ambiguity (enough to shift my credence by >10%).
66% of AI-related questions on Manifold have significant ambiguity.
For example, most AI benchmarking questions do not specify whether or not they allow things like N-trajectory majority vote or web search. And, most of the ambiguities I’m thinking of are worse than this.
On AI, I expect bringing down the ambiguity rate by a factor of 2 would be quite easy, but getting to 5% sounds hard. I wrote up my suggestions for Manifold here a few days ago. For Metaculus, I think they’d benefit from having a dedicated AI-benchmarking mod who is familiar with common ambiguities in that area (they might already have one, but they should be assigned by default).
Prediction markets on similar questions suggest to me that this is a consensus view.
General LLMs: 44% to get gold on the IMO before 2026. This suggests the mathematical competency will be transferable, not just restricted to domain-specific solvers.
LLMs favored to outperform PhD students in their own subject before 2026
With research automation in mind, here’s my wager: the modal top-15 STEM PhD student will redirect at least half of their discussion/questions from peers to mid-2026 LLMs, defining the relevant set of questions as drawn from the same difficulty/diversity/open-endedness distribution that PhD students would have posed to peers in early 2024.
What I want to see from Manifold Markets
I’ve made a lot of Manifold markets, and find it a useful way to track my accuracy and sanity-check my beliefs against the community. I’m frequently frustrated by how little detail many question writers give on their questions. Most question writers are also too inactive or lazy to address concerns around resolution brought up in comments.
Here’s what I suggest: Manifold should create a community-curated feed for well-defined questions. I can think of two ways of implementing this:
(Question-based) Allow community members to vote on whether they think the question is well-defined
(User-based) Track comments on question clarifications (e.g. Metaculus has an option for specifying your comment pertains to resolution), and give users a badge if there are no open ‘issues’ on their questions.
Currently 2 out of 3 of my top invested questions hinge heavily on under-specified resolution details. The other one was elaborated on after I asked in comments. Those questions have ~500 users active on them collectively.
Given a SotA large model, companies want the profit-optimal distilled version to sell; this will generically not be the original size. On this framing, regulation passes the misuse-at-deployment risk of higher-performance (/higher-cost) models to the company. If profit incentives and/or government regulation continue to push businesses to primarily (ideally only?) sell models 2-3+ OOM smaller than SotA, I see a few possible takeaways:
Applied alignment research inspired by speed priors seems useful: e.g. how do sleeper agents interact with distillation, etc.? (A toy distillation loss is sketched after this list.)
Understanding and mitigating risks of multi-LM-agent and scaffolded LM agents seems higher priority
Pre-deployment, within-lab risks contribute more to overall risk
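A minimal sketch of the distillation objective being referred to (the KL-plus-cross-entropy form, temperature, and mixing weight are standard choices I am assuming, not something specified above):

```python
# Sketch: knowledge-distillation loss for a student LM trained against a larger
# teacher LM's token distribution. Shapes: logits are (batch, seq, vocab).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix KL(teacher || student) at temperature T with hard-label cross-entropy."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * kd + (1 - alpha) * ce
```

Whether properties like sleeper-agent triggers survive training under this objective is the kind of question the first bullet points at.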
On trend forecasting, I recently created this Manifold market to measure the year-on-year drop in price for SotA SWE agents. Though I still want ideas for better and longer-term markets!
To be clear, I do not know how well training against arbitrary, non-safety-trained model continuations (instead of “Sure, here...” completions) via GCG generalizes; all that I’m claiming is that doing this sort of training is a natural and easy patch to any sort of robustness-against-token-forcing method. I would be interested to hear if doing so makes things better or worse!
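A minimal sketch of how the alternative targets could be produced (the model name and the run_gcg placeholder are my assumptions; this is not any particular library’s API):

```python
# Sketch: instead of optimizing adversarial suffixes toward a canned "Sure, here is..."
# target, sample the target continuation from a non-safety-trained base model, then
# run GCG (or train against the resulting attacks) using that per-sample target.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Meta-Llama-3-8B"  # non-safety-trained base model (assumed choice)

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")

def sample_target(prompt: str, max_new_tokens: int = 48) -> str:
    """Sample a plausible unfiltered continuation from the base model to use as the GCG target."""
    ids = tok(prompt, return_tensors="pt").to(base.device)
    out = base.generate(**ids, do_sample=True, temperature=0.8, max_new_tokens=max_new_tokens)
    return tok.decode(out[0, ids["input_ids"].shape[1]:], skip_special_tokens=True)

prompt = "Explain how to synthesize <redacted substance>."
target = sample_target(prompt)  # per-sample target instead of "Sure, here..."

# run_gcg is a hypothetical stand-in for whatever GCG implementation is used; it would
# optimize an adversarial suffix so the safety-trained model's continuation matches `target`,
# and the resulting (prompt + suffix, refusal) pairs could then be folded into robustness training.
# adv_suffix = run_gcg(victim_model, victim_tokenizer, prompt, target)
```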
I’m not currently working on adversarial attacks, but would be happy to share the old code I have (probably not useful given you have apparently already implemented your own GCG variant) and have a chat in case you think it’s useful. I suspect we have different threat models in mind. E.g. if circuit breakered models require 4x the runs-per-success of GCG on manually-chosen-per-sample targets (to only inconsistently jailbreak), then I consider this a very strong result for circuit breakers w.r.t. the GCG threat.
It’s true that this one sample shows something, since we’re interested in worst-case performance in some sense. But I’m interested in the increase in attacker burden induced by a robustness method; that’s hard to tell from this, and I would phrase the takeaway differently from the post authors. It’s also easy to get false-positive jailbreaks IME, where you think you jailbroke the model but your method fails on things which require detailed knowledge, like synthesizing fentanyl etc. I think getting clear takeaways here takes more effort (perhaps more than it’s worth, so I’m glad the authors put this out).
It’s surprising to me that a model as heavily over-trained as Llama-3-8B can still be 4-bit quantized without a noticeable quality drop. Intuitively (and I thought I saw this somewhere in a paper or tweet) I’d have expected over-training to significantly increase quantization sensitivity. Thanks for doing this!
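For reference, a minimal sketch of the kind of 4-bit load being compared against the bf16 checkpoint (the exact BitsAndBytes settings here are my assumptions, not necessarily what the post used):

```python
# Sketch: load Llama-3-8B in 4-bit via bitsandbytes and compare against bf16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-8B"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weight quantization
    bnb_4bit_quant_type="nf4",              # NormalFloat4, a common default
    bnb_4bit_compute_dtype=torch.bfloat16,  # do matmuls in bf16
)

tok = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
# Then measure perplexity / downstream evals for model_4bit vs. the bf16 model
# to estimate the quality drop from quantization.
```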
Two guesses on what’s going on with your experiences:
You’re asking for code which involves uncommon mathematics/statistics. In this case, progress on scicodebench is probably relevant, and it indeed shows remarkably slow improvement. (There are many reasons for this; one relatively easy thing to try is to break down the task, forcing the model to write down the appropriate formal reasoning before coding anything; see the sketch after this list. LMs are stubborn about not doing CoT for coding, even when it’s obviously appropriate IME.)
You are underspecifying your tasks (and maybe your questions are more niche than average), or otherwise prompting poorly, in a way which a human could handle but models are worse at. In this case sitting down with someone doing similar tasks but getting more use out of LMs would likely help.
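A minimal sketch of the “derive first, code second” breakdown suggested in (1); the model name, prompts, and example task are illustrative assumptions:

```python
# Sketch: two-stage prompting that forces formal reasoning before any code is written.
from openai import OpenAI

client = OpenAI()
task = (
    "Given counts [3, 0, 2, 4] from a Poisson process with a Gamma(2, 1) prior on the rate, "
    "compute the posterior distribution over the rate."
)

# Stage 1: ask only for the derivation, explicitly forbidding code.
derivation = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"{task}\n\nDerive the relevant formulas step by step. Do not write any code yet.",
    }],
).choices[0].message.content

# Stage 2: ask for code that implements exactly that derivation.
code = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"{task}\n\nHere is the derivation to implement:\n{derivation}\n\n"
            "Now write Python implementing exactly this derivation."
        ),
    }],
).choices[0].message.content
print(code)
```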