If you’ve been waiting for an excuse to be done, this is probably the point where twenty percent of the effort has gotten eighty percent of the effect.
Should be “eighty percent of the benefit” or similar.
I’d be interested in a Q about whether people voted in the last national election for their country (maybe with an option for “my country does not hold national elections”) and if so how they voted (if you can find a schema that works for most countries, which I guess is hard).
In the highest degree question, one option is “Ph D.”. This should be “PhD”, no spaces, no periods.
Are you planning on having more children? Answer yes if you don’t have children but want some, or if you do have children but want more.
Whether I want to have children and whether I plan to have children are different questions. There are lots of things I want but don’t have plans to get, and one sometimes finds oneself with plans to achieve things that one doesn’t actually want.
Sure, I’m just surprised it could work without me having Calibri installed.
Could be a thing where people can opt into getting the vibes or the vibes and the definitions.
Also, my feedback is that some of the definitions seem kind of vague. Like, apparently an ultracontribution is “a mathematical object representing uncertainty over probability”—this tells me what it’s supposed to be, but doesn’t actually tell me what it is. The ones that actually show up in the text don’t seem too vague, partially because they’re not terms that are super precise.
How are you currently determining which words to highlight? You say “terms that readers might not know” but this varies a lot based on the reader (as you mention in the long-term vision section).
FWIW I think it’s not uncommon for people to not use LLMs daily (e.g. I don’t).
FWIW I think the actual person with responsibility is the author if the author approves it, and you if the author doesn’t.
I believe I’m seeing Gill Sans? But when I google “Calibri” I see text that looks like it’s in Calibri, so that’s confusing.
Since people have reported not being able to see the tweet thread, I will reproduce it in this comment (with pictures replaced by my descriptions of them):
If developers had to prove to regulators that powerful AI systems are safe to deploy, what are the best arguments they could use?
Our new report tackles the (very big!) question of how to make a ‘safety case’ for AI.
[image of the start of the paper]
We define a safety case as a rationale developers provide to regulators to show that their AI systems are unlikely to cause a catastrophe.
The term ‘safety case’ is not new. In many industries (e.g. aviation), products are ‘put on trial’ before they are released.
[cartoon of a trial: regulator is a judge, one side is a developer advancing a safety case, other side is a red team advancing a risk case]
We simplify the process of making a safety case by breaking it into six steps.
1. Specify the macrosystem (all AI systems) and the deployment setting.
2. Concretize ‘AI systems cause a catastrophe’ into specific unacceptable outcomes (e.g. the AI systems build a bioweapon).
3. Justify claims about the deployment setting.
4. Carve up the collection of AI systems into smaller groups (subsystems) that can be analyzed in isolation.
5. Assess risk from subsystems acting unilaterally.
6. Assess risk from subsystems cooperating together.
[The above, but in picture format]
We first focus on step 5: how would developers argue that individual subsystems are safe? These arguments can be grouped into four categories.
Arguments could assert AI systems are
too weak to be dangerous even if open-sourced (inability)
unable to cause disaster due to restrictions (control)
safe despite ability to cause harm (trustworthiness)
judged to be safe by credible AI advisors (deference)
[The above, but in picture format and slightly reworded]
Consider the analogy of a robber and a diamond. How might one justify that the robber won’t steal the diamond?
[Image with heading: “Why won’t the robber steal the diamond?”.
Row 1 says “Inability The robber is an infant. Analogy: e.g. GPT-3 is clearly too weak to be dangerous”, next to an image of a baby near a diamond on a pedestal.
Row 2 says “Control The robber is in jail. Analogy: e.g. AI systems are sandboxed and can’t escape.”, next to an image of a prisoner in a box with bars, near a diamond on a pedestal.
Row 3 says “Trustworthiness The robber does not steal. Analogy: e.g. AI systems follow developer instructions.”, next to an image of a person sitting down cross-legged and wearing an orange robe, next to a diamond on a pedestal.
Row 4 says “Deference The security service CEO says the diamond is secure. Analogy: AI advisors say that AI systems are safe.”, next to an image of a figure with grey hair saying “All good” next to a diamond on a pedestal with lines coming out of it, connecting it to small grey boxes (sensors?).]
The ordering of these categories is intentional. As AI systems become more powerful, developers will likely rely mostly on inability, then control, then trustworthiness, and finally, deference to AI advisors.
[Image of graph where the horizontal axis is “Increasingly powerful AI” and the vertical axis is “Primary safety argument”. Inability, Control, Trustworthiness, and Deference are shown in order from bottom-left to top-right. An arrow connects the words “We are here” to Inability.]
Next, we give examples of arguments in each category. Arguments are ranked on three axes:
Practicality
Strength
Scalability
No argument received full marks! Research will be needed to justify the safety of advanced AI systems.
[A complicated diagram showing a variety of arguments under the Inability, Control, Trustworthiness, and Deference categories, together with ratings for their Practicality, Maximum Strength, and Scalability.]
The arguments in the previous step pertain to small groups of AI systems. It would be difficult to directly apply them to large groups. We also explain how to justify that the actions of many AI systems won’t cause a catastrophe (step 6 in our framework).
[Image titled “Large-scale AI misbehavior”. Below are 3 rows, with 2 columns. The left column is labelled “Causes” and the right is labelled “Strategies”.
Row 1: Cause: Widespread alignment faking. Strategy: Blitzkrieg: overwhelm controls.
Row 2: Cause: Infectious jailbreaks. Strategy: Strike: disable infrastructure.
Row 3: Cause: Rapid memetic value drift. Strategy: Hivemind: combine intelligence.
Dots are shown below, likely to indicate that there are more causes and strategies not shown.]
We are hoping this report will:
Motivate research that further clarifies the assumptions behind safety arguments.
Inform the design of hard safety standards.
More in the paper: https://bit.ly/3IJ5N95 Many thanks to my coauthors! @NickGabs01, @DavidSKrueger, and @thlarsen.
Might be of interest to @bshlgrs, @RogerGrosse, @DavidDuvenaud, @EvanHub, @aleks_madry, @ancadianadragan, @rohinmshah, @jackclarkSF, @Manderljung, @RichardMCNgo
Update: I have already gotten over it.
It looks kinda small to me, someone who uses Firefox on Ubuntu.
A thing you are maybe missing is that the discussion groups are now in the past.
You should be sure to point out that many of the readings are dumb and wrong.
The hope is that the scholars notice this on their own.
Week 3 title should maybe say “How could we safely train AIs…”? I think there are other training options if you don’t care about safety.
Lol nice catch.
We included a summary of Situational Awareness as an optional reading! I guess I thought the full thing was a bit too long to ask people to read. Thanks for the other recs!
to simplify, we ask that for every expression and set of arguments
Here and in the next dot point, should the inner heuristic estimate be conditioning on a larger set of arguments (perhaps chosen by an unknown method)? Otherwise it seems like you’re just expressing some sort of self-knowledge.
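To spell out the distinction I have in mind (writing $\tilde{\mathbb{E}}(Y \mid \pi)$ for the heuristic estimate of $Y$ given a set of arguments $\pi$; this notation is my guess and may not exactly match the post’s): a property like

$$\tilde{\mathbb{E}}\big(\tilde{\mathbb{E}}(Y \mid \pi) \,\big\vert\, \pi\big) = \tilde{\mathbb{E}}(Y \mid \pi)$$

reads to me as the estimator knowing its own output, whereas the analogue of the tower property would be something like

$$\tilde{\mathbb{E}}\big(\tilde{\mathbb{E}}(Y \mid \pi') \,\big\vert\, \pi\big) = \tilde{\mathbb{E}}(Y \mid \pi) \quad \text{for } \pi' \supseteq \pi,$$

where the inner estimate conditions on a larger argument set $\pi'$, perhaps chosen by a method the estimator doesn’t know.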
OP doesn’t emphasize liability insurance enough, but part of the hope is that you can mandate that companies be insured up to $X00 billion, which costs them less than $X00 billion assuming that they’re not likely to be held liable for that much. Then the hope is that the insurance company can say “please don’t do extremely risky stuff, or your premium goes up”.
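To make the arithmetic concrete (all numbers below are made up for illustration): if premiums are roughly actuarially fair, then

$$\text{premium} \approx p_{\text{liable}} \cdot L + \text{overhead},$$

so with mandated coverage $L = \$300\text{ billion}$ and a hypothetical $p_{\text{liable}} = 10^{-3}$ chance of actually being held liable for that much, the expected-payout part of the premium is only about \$300 million. The mandate is then affordable for the company, while the insurer is exposed to the full \$300 billion and so has a strong incentive to tie the premium to how risky the company’s behavior is.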
You say “higher numbers for polyamorous relationships”, which is contrary to “If you’re polyamorous, but happen to have one partner, you would also put 1 for this question.”