I’m interested in using the SAEs and auto-interp GPT-3.5-Turbo feature explanations for RES-JB for some experiments. Is there a way to download this data?
Fabien Roger
I also listened to How to Measure Anything in Cybersecurity Risk (2nd Edition) by the same author. It had a huge amount of overlapping content with The Failure of Risk Management (and the non-overlapping parts were quite dry), but I still learned a few things:
Executives of big companies now care a lot about cybersecurity (e.g. citing it as one of the main threats they have to face), which wasn’t true in ~2010.
Evaluation of cybersecurity risk is not at all synonymous with red teaming. This book is entirely about risk assessment in cyber and doesn’t discuss red teaming at all. Rather, it focuses on reference class forecasting, comparison with other incidents in the industry, trying to estimate the damages if there is a breach, … It only captures information from red teaming indirectly, via expert interviews.
I’d like to find a good resource that explains how red teaming (including intrusion tests, bug bounties, …) can fit into a quantitative risk assessment.
We compute AUROC(all(sensor_preds), all(sensors)). This is somewhat weird, and it would have been slightly better to do a) (thanks for pointing it out!), but I think the numbers for both should be close since we balance classes (for most settings, if I recall correctly) and the estimates are calibrated (since they are trained in-distribution, there is no generalization question here), so it doesn’t matter much.
The relevant pieces of code can be found by searching for “sensor auroc”:
cat_positives = torch.cat([one_data["sensor_logits"][:, i][one_data["passes"][:, i]] for i in range(nb_sensors)])
cat_negatives = torch.cat([one_data["sensor_logits"][:, i][~one_data["passes"][:, i]] for i in range(nb_sensors)])
m, s = compute_boostrapped_auroc(cat_positives, cat_negatives)
print(f"sensor auroc pn {m:.3f}±{s:.3f}")
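For reference, here is a minimal sketch of what the `compute_boostrapped_auroc` helper used above could look like. This is a plausible reimplementation rather than the actual code from the repo; the `n_boot` and `seed` parameters are my assumptions.

```python
import torch

def compute_boostrapped_auroc(positives, negatives, n_boot=1000, seed=0):
    """Mean and std of AUROC across bootstrap resamples of both score sets."""
    gen = torch.Generator().manual_seed(seed)
    aurocs = []
    for _ in range(n_boot):
        # Resample each set with replacement.
        p = positives[torch.randint(len(positives), (len(positives),), generator=gen)]
        n = negatives[torch.randint(len(negatives), (len(negatives),), generator=gen)]
        # AUROC = P(a random positive scores higher than a random negative);
        # ties are ignored here for simplicity.
        aurocs.append((p[:, None] > n[None, :]).float().mean())
    aurocs = torch.stack(aurocs)
    return aurocs.mean().item(), aurocs.std().item()
```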
Isn’t that only ~10x more expensive than running the forward passes (even if you don’t do LoRA)? Or is it much more because of communication bottlenecks + the infra being taken up by the next pretraining run (without the possibility of swapping the model in and out)?
What do you expect to be expensive? The engineer hours to build the fine-tuning infra? Or the actual compute for fine-tuning?
Given the amount of internal fine-tuning experiments going on for safety stuff, I’d be surprised if the infra was a bottleneck, though maybe there is a large overhead in making these fine-tuned models available through an API.
I’d be even more surprised if the cost of compute was significant compared to the rest of the activity the lab is doing (I think fine-tuning on a few thousand sequences is often enough for capability evaluations; you rarely need massive training runs).
“List sorting does not play well with few-shot” mostly doesn’t replicate with davinci-002.
When using length-10 lists (the model crushes length-5 lists no matter the prompt), I get:
32-shot, no fancy prompt: ~25%
0-shot, fancy python prompt: ~60%
0-shot, no fancy prompt: ~60%
So few-shot hurts, but the fancy prompt does not seem to help. Code here.
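For concreteness, a minimal sketch of the kind of exact-match scoring used for such an eval; the prompt format and grading rule here are my assumptions, not the linked code.

```python
import random

def make_sorting_example(length=10, seed=None):
    # Hypothetical prompt format; the actual code may phrase the task differently.
    rng = random.Random(seed)
    xs = [rng.randint(0, 99) for _ in range(length)]
    return f"Sort the following list: {xs}\nSorted list:", str(sorted(xs))

def exact_match_accuracy(completions, targets):
    # A completion scores 1 iff it is exactly the sorted list (after stripping).
    return sum(c.strip() == t for c, t in zip(completions, targets)) / len(targets)
```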
I’m interested if anyone knows another case where a fancy prompt increases performance more than few-shot prompting, where a fancy prompt is a prompt that does not contain information that a human would use to solve the task. This is because I’m looking for counterexamples to the following conjecture: “fine-tuning on k examples beats fancy prompting, even when fancy prompting beats k-shot prompting” (for a reasonable value of k, e.g. the number of examples it would take a human to understand what is going on).
That’s right. We initially thought it might be important so that the LLM “understood” the task better, but it didn’t matter much in the end. The main hyperparameters for our experiments are in train_ray.py, where you can see that we use a “token_loss_weight” of 0.
(Feel free to ask more questions!)
I recently listened to The Righteous Mind. It was surprising to me that many people seem to intrinsically care about many things that look very much like good instrumental norms to me (in particular loyalty, respect for authority, and purity).
The author does not make claims about what the reflective equilibrium will be, nor does he explain how the liberals stopped considering loyalty, respect, and purity as intrinsically good (beyond “some famous thinkers are autistic and didn’t realize the richness of the moral life of other people”), but his work made me doubt that most people will have well-being-focused CEV.
The book was also an interesting jumping-off point for reflection about group selection. The author doesn’t make the sorts of arguments that would show that group selection happens in practice (and many of his arguments seem to show a lack of understanding of what opponents of group selection think—bees and cells cooperating is not evidence for group selection at all), but after thinking about it more, I now have more sympathy for group selection having some role in shaping human societies, given that (1) many human groups died and very few spread (so one lucky or unlucky gene in one member may doom/save the group), (2) some human cultures may have been egalitarian enough when it came to reproductive opportunities that the individual selection pressure was not that big relative to the group selection pressure, and (3) cultural memes seem like the kind of entity that sometimes survives at the level of the group.
Overall, it was often frustrating to read the author lay out a descriptive theory of morality, and describe what kind of morality makes a society more fit, in a tone that often felt close to normative. He also fails to understand that many philosophers I respect are not trying to find a descriptive or fitness-maximizing theory of morality (e.g. there is no way utilitarians think their theory is a good description of the kind of shallow moral intuitions the author studies, since they all know they are biting bullets most people aren’t biting, such as the bullet of defending homosexuality in the 19th century).
Hard DBIC: you have no access to any classification data in
Relaxed DBIC: you have access to classification inputs from , but not to any labels.
SHIFT as a technique for (hard) DBIC
You use pile data points to build the SAE and its interpretations, right? And I guess the pile does contain a bunch of examples where the biased and unbiased classifiers would not output identical outputs—if that’s correct, I expect SAE interpretation works mostly because of these inputs (since SAE nodes are labeled using correlational data only). Is that right? If so, it seems to me that because of the SAE and SAE interpretation steps, SHIFT is a technique that is closer in spirit to relaxed DBIC (or something in between if you use a third dataset that does not literally use but something that teaches you something more than just - in the context of the paper, it seems that the broader dataset is very close to covering ).
Oops, that’s what I meant; I’ll make it clearer.
I think this is what you are looking for
By Knightian uncertainty, I mean “the lack of any quantifiable knowledge about some possible occurrence” i.e. you can’t put a probability on it (Wikipedia).
The TL;DR is that Knightian uncertainty is not a useful concept for making decisions, while the use of subjective probabilities is: if you are calibrated (which you can be trained to become), then you will be better off making different decisions about p=1% “Knightian uncertain events” and p=10% “Knightian uncertain events”.
For a more in-depth defense of this position in the context of long-term predictions, where it’s harder to know if calibration training obviously works, see the latest Scott Alexander post.
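As a toy illustration (all numbers made up): if insuring against a $1M loss costs a $50k premium, the risk-neutral expected-value decision flips between a calibrated 1% estimate and a calibrated 10% estimate, even though both events might be labeled “Knightian uncertain”.

```python
def should_insure(p_loss, loss, premium):
    # Insure iff the expected loss exceeds the premium (risk-neutral toy rule).
    return p_loss * loss > premium

# p = 1%:  expected loss $10k  < $50k premium -> don't insure
# p = 10%: expected loss $100k > $50k premium -> insure
decide_at_1_percent = should_insure(0.01, 1_000_000, 50_000)   # False
decide_at_10_percent = should_insure(0.10, 1_000_000, 50_000)  # True
```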
For the product of random variables, there are closed-form solutions for some common distributions, but I guess Monte-Carlo simulations are all you need in practice (and with Monte-Carlo you always get the whole distribution, not just the expected value).
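A minimal Monte-Carlo sketch for the product case; the distributions below are made up for illustration, and the point is that you get the full distribution (including tail quantiles), not just the expectation.

```python
import random
import statistics

def monte_carlo_product(sample_a, sample_b, n=100_000, seed=0):
    # Returns draws of A * B, i.e. the whole distribution of the product,
    # not just its expected value.
    rng = random.Random(seed)
    return [sample_a(rng) * sample_b(rng) for _ in range(n)]

# Example: annual risk = P(breach) * damages-if-breach, both distributions made up:
draws = monte_carlo_product(
    lambda rng: rng.uniform(0.01, 0.10),    # probability of a breach
    lambda rng: rng.lognormvariate(13, 1),  # damages in dollars, heavy-tailed
)
expected = statistics.fmean(draws)
p95 = sorted(draws)[int(0.95 * len(draws))]  # tail outcome that the mean hides
```

For independent variables you can sanity-check the Monte-Carlo mean against the closed form E[AB] = E[A]·E[B].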
I listened to The Failure of Risk Management by Douglas Hubbard, a book that vigorously criticizes qualitative risk management approaches (like the use of risk matrices), and praises a rationalist-friendly quantitative approach. Here are 4 takeaways from that book:
There are very different approaches to risk estimation that are often unaware of each other: you can do risk estimations like an actuary (relying on statistics, reference class arguments, and some causal models), like an engineer (relying mostly on causal models and simulations), like a trader (relying only on statistics, with no causal model), or like a consultant (usually with shitty qualitative approaches).
The state of risk estimation in insurance is actually pretty good: it’s quantitative, and there are strong professional norms around different kinds of malpractice. When actuaries tank a company because they ignored tail outcomes, they are at risk of losing their license.
The state of risk estimation in consulting and management is quite bad: most risk management is done with qualitative methods which have no positive evidence of working better than just relying on intuition alone, and qualitative approaches (like risk matrices) have weird artifacts:
Fuzzy labels (e.g. “likely”, “important”, …) create illusions of clear communication. Just defining the fuzzy categories doesn’t fully alleviate that (when you ask people to say what probabilities each box corresponds to, they often fail to look at the definition of categories).
Inconsistent qualitative methods make cross-team communication much harder.
Coarse categories mean that you introduce weird threshold effects that sometimes encourage ignoring tail effects and make the analysis of past decisions less reliable.
When choosing between categories, people are susceptible to irrelevant alternatives (e.g. if you split the “5/5 importance (loss > $1M)” category into “5/5 ($1-10M), 5/6 ($10-100M), 5/7 (>$100M)”, people answer the fixed “1/5 (<$10k)” category less often).
Following a qualitative method can increase confidence and satisfaction, even in cases where it doesn’t increase accuracy (there is an “analysis placebo effect”).
Qualitative methods don’t prompt their users to seek empirical evidence to inform their choices.
Qualitative methods don’t prompt their users to measure their risk estimation track record.
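The threshold artifact above can be made concrete with a toy severity-scoring function (the scale and cutoffs are made up): losses of $0.9M and $1.1M land in different buckets, while $1.1M and $900M share one.

```python
def risk_category(loss_dollars):
    # A coarse 5-point severity scale of the kind used in risk matrices
    # (cutoffs invented for illustration).
    for category, cutoff in enumerate([10_000, 100_000, 500_000, 1_000_000], start=1):
        if loss_dollars <= cutoff:
            return category
    return 5  # every loss above $1M looks identical, however extreme
```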
Using quantitative risk estimation is tractable and not that weird. There is a decent track record of people trying to estimate very-hard-to-estimate things, and a vocal enough opposition to qualitative methods that they are slowly getting pulled back from risk estimation standards. This makes me much less sympathetic to the absence of quantitative risk estimation at AI labs.
A big part of the book is an introduction to rationalist-type risk estimation (estimating various probabilities and impacts, aggregating them with Monte-Carlo, rejecting Knightian uncertainty, doing calibration training and prediction markets, starting from a reference class and updating with Bayes). He also introduces some rationalist ideas in parallel while arguing for his thesis (e.g. isolated demands for rigor). It’s the best legible and “serious” introduction to classic rationalist ideas I know of.
The book also contains advice if you are trying to push for quantitative risk estimates in your team / company, and a very pleasant and accurate dunk on Nassim Taleb (and in particular his claims about models being bad, without a good justification for why reasoning without models is better).
Overall, I think the case against qualitative methods and for quantitative ones is somewhat strong, but it’s far from a slam dunk, because there is no evidence of some methods being worse than others in terms of actual business outcomes. The author also fails to acknowledge, or provide conclusive evidence against, the possibility that people have good qualitative intuitions about risk even if they fail to translate those intuitions into numbers that make any sense (your intuition sometimes does the right estimation and math even when you suck at doing the estimation and math explicitly).
I don’t think I understand what is meant by “a formal world model”.
For example, in the narrow context of “I want to have a screen on which I can see what python program is currently running on my machine”, I guess the formal world model should be able to detect if the model submits an action that exploits a zero-day that tampers with my ability to see what programs are running. Does that mean that the formal world model has to know all possible zero-days? Does that mean that the software and the hardware have to be formally verified? Are formally verified computers roughly as cheap as regular computers? If not, that would be a clear counter-argument to “Davidad agrees that this project would be one of humanity’s most significant science projects, but he believes it would still be less costly than the Large Hadron Collider.”
Or is the claim that it’s feasible to build a conservative world model that tells you “maybe a zero-day” very quickly once you start doing things not explicitly within a dumb world model?
I feel like this formally-verifiable computers claim is either a good counterexample to the main claims, or an example that would help me understand what the heck these people are talking about.
The full passage in this tweet thread (search for “3,000”).
I remembered mostly this story:
[...] The NSA invited James Gosler to spend some time at their headquarters in Fort Meade, Maryland in 1987, to teach their analysts [...] about software vulnerabilities. None of the NSA team was able to detect Gosler’s malware, even though it was inserted into an application featuring only 3,000 lines of code. [...]
[Taken from this summary of this passage of the book. The book was light on technical detail, I don’t remember having listened to more details than that.]
I didn’t realize this was so early in the story of the NSA, maybe this anecdote teaches us nothing about the current state of the attack/defense balance.
I listened to the book This Is How They Tell Me the World Ends by Nicole Perlroth, a book about cybersecurity and the zero-day market. It describes the early days of bug discovery in detail, along with the social dynamics and moral dilemmas of bug hunting.
(It was recommended to me by some EA-adjacent guy very worried about cyber, but the title is mostly bait: the tone of the book is alarmist, but there is very little content about potential catastrophes.)
My main takeaways:
Vulnerabilities used to be dirt-cheap (~$100), and are still relatively cheap (~$1M even for big zero-days);
If you are very good at cyber and extremely smart, you can hide vulnerabilities in 10k-line programs in a way that less smart specialists will have trouble discovering even after days of examination—code generation/analysis is not really defense-favored;
Bug bounties are a relatively recent innovation, and it felt very unnatural to tech giants to reward people trying to break their software;
A big lever companies have on the US government is the threat that overseas competitors will be favored if the US gov meddles too much with their activities;
The main effect of a market being underground is not making transactions harder (people find ways to exchange money for vulnerabilities by building trust), but making it much harder to figure out what the market price is and reducing the effectiveness of the overall market;
Being the target of an autocratic government is an awful experience, and you have to be extremely careful if you put anything they dislike on a computer. And because of the zero-day market, you can’t assume your government will suck at hacking you just because it’s a small country;
It’s not that hard to reduce the exposure of critical infrastructure to cyber-attacks by just making companies air gap their systems more—Japan and Finland have relatively successful programs, and Ukraine is good at defending against that in part because they have been trying hard for a while—but it’s a cost companies and governments are rarely willing to pay in the US;
Electronic voting machines are extremely stupid, and the federal gov can’t dictate how the (red) states should secure their voting equipment;
Hackers want lots of different things—money, fame, working for the good guys, hurting the bad guys, having their effort be acknowledged, spite, … and sometimes look irrational (e.g. they sometimes get frog-boiled).
The US government has a good amount of people who are freaked out about cybersecurity and have good warning shots to support their position. The main difficulty in pushing for more cybersecurity is that voters don’t care about it.
Maybe the takeaway is that it’s hard to build support behind the prevention of risks that 1. are technical/abstract, 2. fall on the private sector rather than individuals, and 3. have a heavy right tail. Given these challenges, organizations that find prevention inconvenient often succeed in lobbying themselves out of costly legislation.
Overall, I don’t recommend this book. It’s very light on details compared to The Hacker and the State despite being longer. It targets a non-technical and very scope-insensitive audience, and is very light on actual numbers, technical details, realpolitik considerations, estimates, and forecasts. It is wrapped in an alarmist journalistic tone I really disliked, covers stories that do not matter for the big picture, and is focused on finding who is in the right and who is to blame. I gained almost no evidence either way about how bad it would be if the US and Russia entered a no-holds-barred cyberwar.
My bad for testbeds, I didn’t have in mind that you were speaking about this kind of testbeds as opposed to the general E[U|not scheming] analogies (and I forgot you had put them at medium strength, which is sensible for these kinds of testbeds). Same for “the unwarranted focus on claim 3”—it’s mostly because I misunderstood what the countermeasures were trying to address.
I think I don’t have a good understanding of the macrosystem risks you are talking about. I’ll look at that more later.
I think I was a bit unfair about the practicality of techniques that were medium-strength—it’s true that you can get some evidence for safety (maybe 0.3 bits to 1 bit) by using the techniques in a version that is practical.
On practicality and strength, I think there is a rough communication issue here: externalized reasoning is practical, but it’s currently not strong—and it could eventually become strong, but it’s not practical (yet). The same goes for monitoring. But when you write the summary, we see “high practicality and high max strength”, which feels to me like it implies it’s easy to get medium-scalable safety cases that get you acceptable levels of risks by using only one or two good layers of security—which I think is quite wild even if acceptable=[p(doom)<1%]. But I guess you didn’t mean that, and it’s just a weird quirk of the summarization?
Not entirely. This makes me slightly more hopeful that we can have formal guarantees of computer systems, but is the field advanced enough that it would be feasible to have a guaranteed no-zero-day evaluation and deployment codebase that is competitive with a regular codebase? (Given a budget of 1 LHC for both the codebase inefficiency tax + the time to build the formal guarantees for the codebase.)
(And computer systems are easy mode, I don’t even know how you would start to build guarantees like “if you say X, then it’s proven that it doesn’t persuade humans of things in ways they would not have approved of beforehand.”)