Fair enough! For what it’s worth, I think the reconstruction is probably the more load-bearing part of the proposal.
Is your claim that the noise-borne asymmetric pressure away from treacherous plans disappears in above-human intelligences? I could see it becoming less material as intelligence increases, but the intuition should still hold in principle.
“Most paths lead to bad outcomes” is not quite right. For most (let’s say human-developed, though this isn’t a crux) plan specification languages, most syntactically valid plans in that language would not substantially mutate the world state when executed.
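As a toy illustration (entirely my own construction, nothing from the post): consider a made-up “plan language” in which every instruction sequence is syntactically valid, only one opcode writes to the world state, and another opcode tends to crash at runtime. Sampling random plans, most leave the state untouched:

```python
import random

# A made-up toy "plan language", purely for illustration. A plan is a random
# instruction sequence over a small register file (the "world state"). Every
# sequence is syntactically valid, but only `write` mutates state, and `div`
# raises a runtime error unless its register was written to first.

OPS = ["read", "cmp", "halt", "div", "write"]
N_REGS = 8

def run(plan):
    regs = [0] * N_REGS
    try:
        for op, i, val in plan:
            if op == "write":
                regs[i] += val        # the only state-mutating instruction
            elif op == "div":
                _ = val / regs[i]     # ZeroDivisionError if register i is still 0
            elif op == "halt":
                break                 # valid instruction; just ends execution early
            # "read" and "cmp" are valid but leave the state alone
    except ZeroDivisionError:
        pass                          # runtime error: execution stops here
    return regs

def random_plan(length=6):
    return [(random.choice(OPS), random.randrange(N_REGS), random.choice([1, 2, 3]))
            for _ in range(length)]

plans = [random_plan() for _ in range(100_000)]
frac = sum(run(p) != [0] * N_REGS for p in plans) / len(plans)
print(f"fraction of random plans that mutate the world state: {frac:.2f}")
```

The exact fraction is obviously an artifact of the made-up instruction mix; the point is just that “syntactically valid” and “world-mutating” come apart by default.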
I’ll begin by noting that over the course of writing this post, the brittleness of treacherous plans became significantly less central.
However, I’m still reasonably convinced that the intuition is sound. If a plan is adversarial to humans, the plan’s executor will face adverse optimization pressure from humans and adverse optimization pressure complicates error correction.
Consider the case of a sniper whose gun is loaded with 50% blanks and 50% lethal bullets, such that the ordering of blanks and lethal rounds is unknown to the sniper. Let’s say his goal is to kill a person on the enemy team.
If the sniper is shooting at an enemy team equipped with counter-snipers, he is highly unlikely to succeed (<50%). In fact, he is quite likely to die.
Without the counter-snipers, the fact that his gun is loaded with 50% blanks suddenly becomes less material. He could always just take another shot.
I claim that our world resembles the world with counter-snipers. The counter-snipers in the real world are humans who do not want to be permanently disempowered.
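To make the intuition quantitative, here’s a minimal Monte Carlo sketch of the analogy. Everything in it is an assumption I’m adding for illustration (a per-attempt chance of being spotted, 50% blanks, the specific probabilities), not something taken from the analogy above.

```python
import random

def sniper_trial(spot_prob: float, max_shots: int = 20) -> bool:
    """One trial: does the sniper kill his target before being neutralized?

    Each round the counter-snipers spot him with probability `spot_prob`
    (ending the plan); otherwise he fires, and the chambered round is live
    with probability 0.5.
    """
    for _ in range(max_shots):
        if random.random() < spot_prob:
            return False              # spotted and neutralized: plan fails
        if random.random() < 0.5:
            return True               # live round: target killed
        # blank: no harm done, line up another shot
    return False

def success_rate(spot_prob: float, trials: int = 100_000) -> float:
    return sum(sniper_trial(spot_prob) for _ in range(trials)) / trials

print("no counter-snipers:  ", round(success_rate(0.0), 2))  # ~1.0: a blank just means another shot
print("with counter-snipers:", round(success_rate(0.5), 2))  # ~0.33: each extra attempt risks being spotted first
```

The exact numbers don’t matter; the point is that the same unreliability is nearly free when nothing pushes back, and becomes decisive when each failed attempt invites adverse optimization.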
i.e. that the problem is easily enough addressed that it can be done by firms in the interests of making a good product and/or based on even a modest amount of concern from their employees and leadership
I’m curious how contingent this prediction is on (1) timelines and (2) the rate of alignment research progress. On (2), how much of your P(no takeover) comes from expectations about future research output from ARC specifically?
If tomorrow, all alignment researchers stopped working on alignment (and went to become professional tennis players or something) and no new alignment researchers arrived, how much more pessimistic would you become about AI takeover?
Epistemic Status: First read. Moderately endorsed.
I appreciate this post and I think it’s generally good for this sort of clarification to be made.
One distinction is between dying (“extinction risk”) and having a bad future (“existential risk”). I think there’s a good chance of bad futures without extinction, e.g. that AI systems take over but don’t kill everyone.
This still seems ambiguous to me. Does “dying” here mean literally everyone? Does it mean “all animals,” “all mammals,” “all humans,” or just “most humans”? If it’s all humans dying, do all humans have to be killed by the AI? Or is it permissible that (for example) the AI leaves N people alive, and N is low enough that human extinction follows at the end of these people’s natural lifespans?
I think I understand your sentence to mean “literally zero humans exist X years after the deployment of the AI as a direct causal effect of the AI’s deployment.”
It’s possible that this specific distinction is just not a big deal, but I thought it’s worth noting.
I think this is probably good to just 80/20 with like a weekend of work? So that there’s a basic default action plan for what to do when someone goes “hi designated community person, I’m depressed.”
People really should try to not have depression. Depression is bad for your productivity. Being depressed for, e.g., a year means you lose a year of time, AND it might be bad for your IQ too.
A lot of EAs get depressed or have gotten depressed. This is bad. We should intervene early to stop it.
I think that there should be someone EAs reach out to when they’re depressed (maybe this is Julia Wise?), and then they get told the ways they’re probably right and wrong so their brain can update a bit, and a reasonable action plan to get them on therapy or meds or whatever.
Strong upvoted.
I’m excited about people thinking carefully about publishing norms. I think this post existing is a sign of something healthy.
Re Neel: I think that telling junior mech interp researchers to not worry too much about this seems reasonable. As a (very) junior researcher, I appreciate people not forgetting about us in their posts :)
I’d be excited about more people posting their experiences with tutoring.
Short on time. Will respond to last point.
I wrote that they are not planning to “solve alignment once and forever” before deploying first AGI that will help them actually develop alignment and other adjacent sciences.
Surely this is because alignment is hard! Surely if alignment researchers really did find the ultimate solution to alignment and present it on a silver platter, the labs would use it.
Also: An explicit part of SERI MATS’ mission is to put alumni in orgs like Redwood and Anthropic AFAICT. (To the extent your post does this,) it’s plausibly a mistake to treat SERI MATS like an independent alignment research incubator.
Epistemic status: hasty, first pass
First of all, thanks for writing this.
I think this letter is “just wrong” in a number of frustrating ways.
A few points:
“Engineering doesn’t help unless one wants to do mechanistic interpretability.” This seems incredibly wrong. Engineering disciplines provide reasonable intuitions for how to reason about complex systems. Almost all engineering disciplines require their practitioners to think concretely. Software engineering in particular also lets you run experiments incredibly quickly, which makes it harder to be wrong.
ML theory in particular is in fact useful for reasoning about minds. This is not to say that cognitive science is not also useful. Further, being able to solve alignment in the current paradigm would mean we have excellent practice when encountering future paradigms.
It seems ridiculous to me to confidently claim that labs won’t care to implement a solution to alignment.
I think you should’ve spent more time developing some of these obvious cruxes, before implying that SERI MATS should change its behavior based on your conclusions. Implementing these changes would obviously have some costs for SERI MATS, and I suspect that SERI MATS organizers do not share your views on a number of these cruxes.
Ordering food to go and eating it at the restaurant without a plate and utensils defeats the purpose of eating it at the restaurant
Restaurants are a quick and convenient way to get food, even if you don’t sit down and eat there. Ordering my food to-go saves me a decent amount of time and food, and also makes it frictionless to leave.
But judging by votes, it seems like people don’t find this advice very helpful. That’s fine :(
I think there might be a misunderstanding. I order food because cooking is time-consuming, not because it doesn’t have enough salt or sugar.
Maybe it’d be good if someone compiled a list of healthy restaurants available on DoorDash/Uber Eats/GrubHub in the rationalist/EA hubs?
Can’t you just combat this by drinking water?
If you plan to eat at the restaurant, you can just ask them for a box if you have food left over.
This is true at most restaurants. Unfortunately, it often takes a long time for the staff to prepare a box for you (on the order of 5 minutes).
A potential con is that most food needs to be refrigerated if you want to keep it safe to eat for several hours.
One might simply get into the habit of putting whatever food they have in the refrigerator. I find that refrigerated food is usually not unpleasant to eat, even without heating.
Sometimes when you purchase an item, the cashier will randomly ask you if you’d like additional related items. For example, when purchasing a hamburger, you may be asked if you’d like fries.
It is usually a horrible idea to agree to these add-ons, since the cashier does not inform you of the price. I would like fries for free, but not for $100, and not even for $5.
The cashier’s decision to withhold pricing information from you should be evidence that you do not, in fact, want to agree to the deal.
Epistemic status: clumsy
An AI could also be misaligned because it acts in ways that don’t pursue any consistent goal (incoherence).
It’s worth noting that this definition of incoherence seems inconsistent with VNM. E.g., a rock might satisfy the folk definition of “pursuing a consistent goal,” but fail to satisfy VNM due to lacking completeness (and, by corollary, due to not performing expected utility optimization over the outcome space).
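For reference, the completeness axiom I have in mind is the standard VNM one (my paraphrase), stated over a space of lotteries:

```latex
% Completeness (VNM): the agent can rank any two lotteries A, B in the lottery space L.
\forall A, B \in \mathcal{L}: \quad A \succeq B \ \text{ or } \ B \succeq A
```

A rock has no preference relation over lotteries at all, so there’s nothing for this axiom, or the expected-utility representation theorem built on it, to bite on, even though it trivially “pursues” the consistent goal of sitting still.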
The first point isn’t super central. FWIW I do expect that humans will occasionally not swap words back.
Humans should just look at the noised plan and try to convert it into a more reasonable-seeming, executable plan.
Edit: that is, without intentionally changing details.