Engineer at METR.
Previously: Vivek Hebbar’s team at MIRI → Adrià Garriga-Alonso on various empirical alignment projects → METR.
I have signed no contracts or agreements whose existence I cannot mention.
Will we ever have Poké Balls in real life? How fast could they be at storing and retrieving animals? Requirements:
Made of atoms, no teleportation or fantasy physics.
Small enough to be easily thrown, say under 5 inches diameter
Must be able to disassemble and reconstruct an animal as large as an elephant in a reasonable amount of time, say 5 minutes, and store its pattern digitally
Must reconstruct the animal to enough fidelity that its memories are intact and it’s physically identical for most purposes, though maybe not quite to the cellular level
No external power source
Works basically wherever you throw it, though it might be slower to print the animal if it only has air to use as feedstock mass or can’t spread out to dissipate heat
Should not destroy nearby buildings when used
Animals must feel no pain during the process
It feels pretty likely to me that we’ll be able to print complex animals eventually using nanotech/biotech, but the speed requirements here might be pushing the limits of what’s possible. In particular, heat dissipation seems like a huge challenge: assuming 0.2 kcal/g of waste heat is created while assembling the elephant, which is well below what elephants need to build their tissues, you would need to dissipate about 5 GJ of heat, which would take even a full-sized nuclear power plant cooling tower a few seconds. Power might be another challenge. Drexler claims you can eat fuel and oxidizer, turn all the mass into basically any lower-energy state, and come out easily net positive on energy. But if no chemical fuel is available where the ball lands, you would need something like an onboard nuclear reactor.
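For concreteness, here’s the back-of-envelope calculation in code (the ~6 tonne elephant mass and ~2 GW of cooling-tower heat rejection are my rough assumptions):

```python
# Rough sanity check of the ~5 GJ waste-heat figure.
elephant_mass_g = 6_000 * 1_000        # assume a ~6 tonne elephant, in grams
waste_heat_j_per_g = 0.2 * 4184        # 0.2 kcal/g converted to joules
total_heat_j = elephant_mass_g * waste_heat_j_per_g
print(f"Total waste heat: {total_heat_j / 1e9:.1f} GJ")        # ~5 GJ

# A large nuclear plant's cooling tower rejects on the order of 2 GW of heat,
# so even at that scale dumping 5 GJ takes a few seconds.
cooling_tower_w = 2e9
print(f"Time at cooling-tower scale: {total_heat_j / cooling_tower_w:.1f} s")
```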
and yet, the richest person is still only responsible for 0.1%* of the economic output of the united states.
Musk may only own 0.1% of the economic output of the US, but he is responsible for much more than that, including large contributions to:
Politics
Space
SpaceX is nearly 90% of global upmass
Dragon is the sole American spacecraft that can launch humans to ISS
Starlink probably enables far more economic activity than its revenue
Quality and quantity of US spy satellites (Starshield has ~tripled NRO satellite mass)
Startup culture through the many startups from ex-SpaceX employees
Twitter as a medium of discourse, though this didn’t change much
Electric cars, whose adoption Tesla probably sped up by ~1 year; Tesla still owns over half the nation’s charging infrastructure
AI, including medium-sized effects on OpenAI and potential future effects through xAI
Depending on your reckoning, I wouldn’t be surprised if Elon’s influence added up to >1% of that of all Americans combined. This is not really surprising, because a Zipfian relationship would give the top person in a nation of 300 million roughly 5% of the total influence.
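The Zipf arithmetic, assuming the k-th ranked person’s influence is proportional to 1/k:

```python
import math

# Under a Zipfian distribution, the top person's share of total influence in a
# population of N is 1 / H_N, where H_N is the N-th harmonic number.
N = 300_000_000
harmonic_N = math.log(N) + 0.5772       # H_N ≈ ln(N) + Euler–Mascheroni constant
print(f"Top person's share under Zipf: {1 / harmonic_N:.1%}")  # ≈ 5%
```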
Agree that AI takeoff could likely be faster than our OODA loop.
There are four key differences between this and the current AI situation that I think make this perspective pretty outdated:
AIs are made out of ML, so we have very fine-grained control over how we train them and modify them for deployment, unlike animals which have unpredictable biological drives and long feedback loops.
By now, AIs are obviously developing generalized capabilities. Rather than arguments over whether AIs will ever be superintelligent, the bulk of the discourse is over whether they will supercharge economic growth or cause massive job loss and how quickly.
There are at least 10 companies that could build superintelligence within 10ish years and their CEOs are all high on motivated reasoning, so stopping is infeasible
Current evidence points to takeoff being continuous and merely very fast—even automating AI R&D won’t cause the hockey-stick graph that human civilization had
“Random goals” is a crux. Complicated goals that we can’t control well enough to prevent takeover are not necessarily uniformly random goals from whatever space you have in mind.
“Don’t believe there is any chance” is very strong. If there is a viable way to bet on this I would be willing to bet at even odds that conditional on AI takeover, a few humans survive 100 years past AI takeover.
I’m working on the METR autonomy-length graph mentioned here and want to caveat these preliminary results. Basically, we think the effective horizon length of models is a bit shorter than 2 hours, although we do think there is an exponential increase that, if it continues, could mean month-long horizons within 3 years.
Our task suite consists of well-defined tasks. We have preliminary data showing that messier tasks, like the average intellectual labor of a software engineer, are harder for both models and low-context humans.
This graph is of 50% horizon time (the human time-to-complete at which models succeed at 50% of tasks; there’s an illustrative sketch of this metric after this list). The 80% horizon time is only about 15 minutes for current models.
We’ll have a blog post out soon showing the trend over the last 5 years and going into more depth about our methodology.
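To make the 50% horizon metric concrete, here’s a minimal sketch of one way to compute it from per-task data. This is illustrative only, with made-up data, and not necessarily the exact methodology the blog post will describe:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fake example data: (human time-to-complete in minutes, did the model succeed?)
human_minutes = np.array([1, 2, 5, 10, 20, 45, 90, 180, 360, 720])
model_succeeded = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

# Fit success probability against log human time and read off the 50% crossing,
# i.e. the time at which the fitted logit is zero.
X = np.log(human_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, model_succeeded)
t50 = np.exp(-clf.intercept_[0] / clf.coef_[0, 0])
print(f"50% horizon ≈ {t50:.0f} human-minutes")
```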
It’s not clear whether agents will think in neuralese; maybe end-to-end RL in English is good enough for the next few years, and CoT messages won’t drift enough to allow steganography.
Once agents think in either token gibberish or plain vectors, maybe self-monitoring will still work fine; after all, agents can translate between other languages just fine. We can use model organisms or other clever experiments to check whether the agent faithfully translates its CoT or unavoidably starts lying to us as it gets more capable.
I care about the exact degree to which monitoring gets worse. Plausibly it gets somewhat worse but is still good enough to catch the model before it coups us.
I’m not happy about this but it seems basically priced in, so not much update on p(doom).
We will soon have Bayesian updates to make. If we observe that incentives created during end-to-end RL naturally produce goal guarding and other dangerous cognitive properties, it will be bad news. If we observe this doesn’t happen, it will be good news (although not very good news because web research seems like it doesn’t require the full range of agency).
Likewise, if we observe models’ monitorability and interpretability start to tank as they think in neuralese, it will be bad news. If monitoring and interpretability are unaffected, good news.
Interesting times.
Thanks for the update! Let me attempt to convey why I think this post would have been better with fewer distinct points:
In retrospect, maybe I should’ve gone into explaining the basics of entropy and enthalpy in my reply, e.g.:
If you had replied with this, I would have said something like “then what’s wrong with the designs for diamond mechanosynthesis tooltips, which don’t resemble enzymes and have been computationally simulated, as you mentioned in point 9?” Then we would have gone back and forth a few times until either (a) you make some complicated argument I don’t understand well enough to believe or refute, or (b) we agree on what definition of “enzyme” or “selectively bind to individual molecules” is required for nanotech, which probably includes the carbon dimer placer (image below). Even in case (b) we could continue arguing about how practical that thing plus the other steps in the process are, and not achieve much.
The problem, as I see it, is that a post that makes a large number of points quickly, where each point has subtleties requiring an expert to adjudicate, on a site with few experts, is inherently going to generate a lot of misunderstanding. I have a problem symmetrical to yours: from my perspective, someone was using somewhat complicated arguments to prove things that defy my physical intuition, and to defend against a Gish gallop I need to respond to every point, but doing this in a reasonable amount of time requires me to think and write with less than maximum clarity and accuracy.
The solution I would humbly recommend is to make fewer points, selected carefully to be bulletproof, understandable to non-experts, and important to the overall thesis. Looking back on this, point 14 could have been its own longform, and potentially led to a lot of interesting discussion like this post did. Likewise point 6 paragraph 2.
How would we know?
This doesn’t seem wrong to me, so I’m now confused again about what the correct analysis is. It would come out the same way if we assume rationalists are selected on g, right?
Is a Gaussian prior correct though? I feel like it might be double-counting evidence somehow.
TLDR:
What OP calls “streetlighting”, I call an efficient way to prioritize problems by tractability. This is only a problem insofar as we cannot also prioritize by relevance.
I think problematic streetlighting is largely due to incentives, not because people are not smart / technically skilled enough. Therefore solutions should fix incentives rather than just recruiting smarter people.
First, let me establish that theorists very often disagree on what the hard parts of the alignment problem are, precisely because not enough theoretical and empirical progress has been made to generate agreement on them. All the lists of “core hard problems” OP cites are different, and Paul Christiano famously wrote a 27-point list of disagreements with Eliezer’s. This means that most people’s views of the problem are wrong, and should they stick to their guns they might perseverate on either an irrelevant problem or a doomed approach.
I’d guess that historically, perseveration has been as large a problem as streetlighting among alignment researchers. Think of all the top alignment researchers in 2018 and all the agendas that haven’t seen much progress. Decision theory should probably not take ~30% of researcher time like it did back in the day.[1]
In fact, failure is especially likely for people who are trying to tackle “core hard problems” head-on, and not due to lack of intelligence. Many “core hard problems” are observations of a lack of structure, or observations of what might happen in extreme generality, e.g. Eliezer’s:
“We’ve got no idea what’s actually going on inside the giant inscrutable matrices and tensors of floating-point numbers.”
(summarized) “Outer optimization doesn’t in general produce aligned inner goals”, or
“Human beings cannot inspect an AGI’s output to determine whether the consequences will be good.”
which I will note are of a completely different type signature from subproblems that people can actually tractably research. Sometimes we fail to define a tractable line of attack. Other times these ill-defined problems get turned into entire subfields of alignment, like interpretability, which are filled with dozens of blind alleys of irrelevance that extremely smart people frequently fall victim to. For comparison, some examples of problems ML and math researchers can actually work on:
Unlearning: Develop a method for post-hoc editing a model, to make it as if it were never trained on certain data points
Causal inference: Develop methods for estimating the causation graph between events given various observational data.
Fermat’s last theorem: Prove that there are no positive integer solutions to a^n + b^n = c^n for n > 2.
The unit of progress is therefore not “core hard problems” directly, but methods that solve well-defined problems and will also be useful in scalable alignment plans. We must try to understand the problem and update our research directions as we go. Everyone has to pivot because the exact path you expected to solve a problem basically never works. But we have to update on tractability as well as relevance! For example, Redwood (IMO correctly) pivoted away from interp because other plans seemed viable (relevance) and it seemed too hard to explain enough AI cognition through interpretability to solve alignment (tractability).[2]
OP seems to think flinching away from hard problems is usually cope / not being smart enough. But the items on OP’s list of types of cope are completely valid as either fundamental problem-solving strategies or prioritization. (4 is an incentives problem, which I’ll come back to later.)
Carol explicitly introduces some assumption simplifying the problem, and claims that without the assumption the problem is impossible. [...]
Carol explicitly says that she’s not trying to solve the full problem, but hopefully the easier version will make useful marginal progress.
Carol explicitly says that her work on easier problems is only intended to help with near-term AI, and hopefully those AIs will be able to solve the harder problems.
1 and 2 are fundamental problem-solving techniques. 1 is a crucial part of Polya’s step 1 (understand the problem), and 2 is a core technique for actually solving the problem. I don’t like relying on 3 as stated, but there are many valid reasons for focusing on near-term AI.[3]
Now I do think there is lots of distortion of research in unhelpful directions related to (1, 2, 3), often due to publication incentives.[4] But understanding the problem and solving easier versions of it has a great track record in complicated engineering; you just have to solve the hard version eventually (assuming we don’t get lucky with alignment being easy, which is very possible but we shouldn’t plan for).
So to summarize my thoughts:
Streetlighting is real, but much of what OP calls streetlighting is a justified focus on tractability.
We can only solve “core hard problems” by creating tractable, well-defined problems.
OP’s suggested solution—higher intelligence and technical knowledge—doesn’t seem to fit the problem.
There are dozens of ML PhDs, physics PhDs, and comparably smart people working on alignment. As Ryan Kidd pointed out, the stereotypical MATS student is now a physics PhD or technical professional. And presumably according to OP, most people are still streetlighting.
Technically skilled people seem equally susceptible to incentives-driven streetlighting, as well as perseveration.
If the incentives continue to be wrong, people who defy them might be punished anyway.
Instead, we should fix incentives, maybe like this:
Invest in making “core hard problems” easier to study
Reward people who have alignment plans that at least try to scale to superintelligence
Reward people who think about whether others’ work will be helpful with superintelligence
Develop things like alignment workshops, so people have a venue to publish genuine progress that is not legible to conferences
Pay researchers with illegible results more to compensate for their lack of publication / social rewards
MIRI’s focus on decision theory is itself somewhat due to streetlighting. As I understand it, the worldview of MIRI leadership circa 2012 was that several problems had to be solved for AI to go well, but the one they could best hire researchers for was decision theory, so they did lots of that. Also, someone please correct me on the ~30% of researcher time claim if I’m wrong.
OP’s research is not immune to this. My sense is that selection theorems would have worked out if there had been more and better results.
e.g. if deploying on near-term AI will yield the empirical feedback needed to stay on track, if significant risk comes from near-term AI, if near-term AI will be used in scalable oversight schemes, …
As I see it, there is lots of distortion by the publishing process now that lots of work is being published. Alignment is complex enough that progress in understanding the problem is a large enough quantity of work to be a paper. But in a paper, it’s very common to exaggerate one’s work, especially the validity of the assumptions[5], and people need to see through this for the field to function smoothly.
I am probably guilty of this myself, though I try to honestly communicate my feelings about the assumptions in a long limitations section.
Under log returns to money, personal savings still matter a lot for selfish preferences. Suppose the material comfort component of someone’s utility is 0 utils at a consumption of $1/day. Then a moderately wealthy person consuming $1000/day today will be at 7 utils. The owner of a galaxy, at maybe $10^30/day, will be at 69 utils, but doubling their resources will still add the same 0.69 utils it would for today’s moderately wealthy person. So my guess is they will still try pretty hard to acquire more resources, similarly to people in developed economies today who balk at their income being halved and see it as a pretty extreme sacrifice.
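Spelling out the arithmetic (utils here are just natural-log consumption relative to the $1/day baseline):

```python
import math

# u(c) = ln(c / 1) for daily consumption c in dollars.
for daily_consumption in [1, 1_000, 1e30]:
    print(f"${daily_consumption:,.0f}/day -> {math.log(daily_consumption):.1f} utils")

# Doubling consumption adds ln(2) ≈ 0.69 utils at any wealth level.
print(f"Gain from doubling: {math.log(2):.2f} utils")
```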
I agree. You only multiply the SAT z-score by 0.8 if you’re selecting people on high SAT score and estimating the IQ of that subpopulation, making a correction for regressional Goodhart. Rationalists are more likely selected for high g, which causes both high SAT scores and high IQ, so the z-score should be around 2.42, which means the estimate should be (100 + 2.42 * 15 − 6) = 130.3. From the link, the exact values should depend on the correlations between g, IQ, and SAT score, but it seems unlikely that the correction factor is as low as 0.8.
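The arithmetic, for reference (the −6 adjustment is carried over from the parent comment’s estimate, not something I derived here):

```python
sat_z = 2.42        # z-score of the selected group's SAT scores
iq_sd = 15          # IQ standard deviation
adjustment = -6     # adjustment from the parent comment's estimate

print(100 + sat_z * iq_sd + adjustment)          # 130.3 (no 0.8 correction)
print(100 + 0.8 * sat_z * iq_sd + adjustment)    # ≈ 123 (with the 0.8 correction)
```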
I was at the NeurIPS many-shot jailbreaking poster today and heard that defenses only shift the attack success curve downwards, rather than changing the power law exponent. How does the power law exponent of BoN jailbreaking compare to many-shot, and are there defenses that change the power law exponent here?
It’s likely possible to engineer away mutations just by checking. ECC memory already has an error rate nine orders of magnitude better than human DNA, and with better error correction you could probably get the error rate low enough that less than one error happens in the expected number of nanobots that will ever exist. ECC is not the kind of checking whose checking process can be disabled: the memory module always decodes raw bits into error-corrected bits, and decoding fails unless the bits match a checksum, which a mutation can be made astronomically unlikely to satisfy.
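A toy version of the reliability math, with made-up numbers purely for illustration:

```python
# Assumed values, not measured: undetected-error rate per replication (after
# error correction) and total number of nanobot replications that will ever
# happen. For small eps, P(at least one undetected mutation) ≈ n_copies * eps.
eps = 1e-30
n_copies = 1e24
print(f"P(any undetected mutation) ≈ {n_copies * eps:.0e}")  # ~1e-06
```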
I was expecting some math. Maybe something about the expected amount of work you can get out of an AI before it coups you, if you assume the number of actions required to coup is n, the trusted monitor has false positive rate p, etc?
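For example, something like this toy model (all parameters made up): a coup needs n malign actions to all go unflagged, the trusted monitor flags each malign action independently with probability q, and its false positive rate p sets the audit burden on benign actions.

```python
def p_coup_succeeds(n: int, q: float) -> float:
    """Probability that all n malign actions evade the monitor."""
    return (1 - q) ** n

n, q, p = 20, 0.3, 0.01   # made-up parameters
print(f"P(coup attempt succeeds) ≈ {p_coup_succeeds(n, q):.1e}")      # ~8e-4
print(f"Benign actions flagged for audit per 1000: {1000 * p:.0f}")   # 10
```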
I’m pretty skeptical of this because the analogy seems superficial. Thermodynamics says useful things about abstractions like “work” because we have the laws of thermodynamics. What are the analogous laws for cognitive work / optimization power? It’s not clear to me that it can be quantified such that it is easily accounted for:
We all come from evolution. Where did the cognitive work come from?
Algorithms can be copied
It is also not clear what distinguishes LLM weights from the weights of a model trained on random labels from a cryptographic PRNG. Since the labels are not truly random, the same amount of optimization has gone into both models, but since CSPRNGs can’t be broken just by training LLMs on them, the latter model is totally useless while the former is potentially transformative.
My guess is this way of looking at things will be like memetics in relation to genetics: likely to spawn one or two useful expressions like “memetically fit”, but due to the inherent lack of structure in memes compared to DNA life, not a real field compared to other ways of measuring AIs and their effects (scaling laws? SLT?). Hope I’m wrong.
I think eating the Sun is our destiny, both in that I expect it to happen and in that I would be pretty sad if we didn’t; I just hope it will be done ethically. This might seem like a strong statement, but bear with me.
Our civilization has undergone many shifts in values as higher tech levels have revealed the sheer impracticality of living a certain way, and I feel okay about most of these. You won’t see many people nowadays who avoid being photographed because photos steal a piece of their soul. The prohibition on women working outside the home, common in many cultures, is on its way out. Only a few groups like the Amish avoid using electricity for cultural reasons. The entire world economy runs on usury stacked upon usury. People cared about all of these things strongly, but practicality won.
To believe that eating the Sun is potentially desirable, you don’t have to have linear utility in energy/mass/whatever and want to turn it all into hedonium. It just seems like extending the same sort of tradeoffs societies make every day in 2025 leads to eating the Sun, considering just how large a fraction of available resources it will represent to a future civilization. The Sun is 99.9% of the matter and more than 99.9% of the energy in the solar system, and I can’t think of any examples of a culture giving up even 99% of its resources for cultural reasons. No one bans eating 99.9% of available calories, farming 99.9% of available land, or working 99.9% of jobs. Today, traditionally minded and off-grid people generally strike a balance between commitment to their lifestyle and practicality, and many of them use phones and hospitals. Giving up 99.9% of resources would mean giving up metal and basically living in the Stone Age.[1]
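Rough numbers behind the 99.9% figure:

```python
# Approximate masses in kg; the non-Sun total is dominated by Jupiter (~1.9e27 kg).
sun = 1.989e30
everything_else = 2.7e27
print(f"Sun's share of solar system mass: {sun / (sun + everything_else):.3%}")  # ≈ 99.86%
```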
When eating the Sun, as long as we spend 0.0001% of the Sun’s energy to set up an equivalent light source pointing at Earth, it doesn’t prevent people from continuing to live on Earth, spending their time farming potatoes and painting, nor does it destroy any habitats. There is really nothing of great intrinsic value lost here; we can’t say the same today when destroying the rainforests! If people block eating the Sun and this makes people’s lives worse, it’s plausible we should think of them like NIMBYs who prevent dozens of poor people from getting housing because it would ruin their view.
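A quick check on the 0.0001% light-source figure, using standard values for Earth’s radius and the Earth–Sun distance:

```python
import math

# Earth intercepts a disk of radius R_earth out of a sphere of radius 1 AU.
R_earth = 6.371e6   # m
AU = 1.496e11       # m
fraction = (math.pi * R_earth**2) / (4 * math.pi * AU**2)
print(f"Fraction of solar output hitting Earth: {fraction:.1e}")  # ~4.5e-10

# Reserving 0.0001% (1e-6) of the Sun's output for an Earth-pointed light
# source is therefore thousands of times more than Earth receives today.
```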
The closest analogies I can think of in the present day are nuclear power bans and people banning vitamin-enriched GMO crops even as children were dying of malnutrition. With nuclear, energy is cheap enough that people can still heat their homes without it, so maybe we’ll have an analogous situation where energy is much cheaper than non-hydrogen matter during the period when we would want to eat the Sun. (We would definitely disassemble most of the planets, though, unless energy and matter are both cheap relative to some third thing, but I don’t see what that would be.) With GMOs I feel pretty sad about the whole situation and wish that science communication were better. At least if we fail to eat the Sun and distribute the gains to society, people probably wouldn’t die as a result.
[1] It might be that 1000xing income is less valuable in the future than it was in the Neolithic, but probably a Neolithic person would also be skeptical that 1000xing resources is valuable until you explained what technology can do now. If we currently value talking to people across the world, why wouldn’t future people value running 10,000 copies of themselves to socialize with all their friends at once?