I have such a strong intuitive opposition to the Internal Reaction Drive that I agree with your conclusion that we should update away from any theory which allows it. Then again, perhaps it is impossible to build such a drive for the merely practical reason that any material with a positive or negative index of refraction will absorb enough light to turn the drive into an expensive radiator.
Especially given the recent Nobel prize announcement, I think the most concerning piece of information is that there are cultural forces from within the physics community discouraging people from trying to answer the question at all.
You need abstractions to think and plan at all with limited compute, not just to speak. I would guess that plenty of animals which are incapable of speaking also mentally rely on abstractions. For instance, when foraging for apples, I suspect an animal has a mental category for apples, and treats them as the same kind of thing rather than as completely unrelated configurations of atoms.
The planet Mercury is a pretty good source of material:
Mass: 3.3 × 10^23 kg (which is about 70% iron)
Radius: 2.4 × 10^6 m
Volume: 6.1 × 10^19 m^3
Density: 5,400 kg/m^3
Orbital radius: 5.8 × 10^10 m
A spherical shell around the Sun at roughly the same radius as Mercury’s orbit would have a surface area of about 4 × 10^22 m^2, and spreading Mercury’s volume over this area gives a thickness of about 1.4 mm. This means Mercury alone provides ample material for collecting all of the Sun’s energy by reflecting light – very thin spinning sheets could act as a swarm of orbiting reflectors that focus sunlight onto large power plants or mirrors that direct it elsewhere in the solar system. Spinning sheets could be made somewhere between 1-100 μm thick, with thicker cables or supports for additional strength, perhaps 1-10 km wide, and navigate using radiation pressure (using cables that bend the sheet, perhaps). Something like 10^15 or 10^16 such mirrors would be enough to intercept and redirect all of the Sun’s light.
The gravitational binding energy of Mercury is on the order of 10^30 J, or roughly an hour of the Sun’s total output. This means the time it takes for a new mirror to pay back its own manufacturing energy cost is in principle quite small; if each kg of material from Mercury is enough to make on the order of 1-100 square meters of mirror, then it will pay for itself in somewhere between minutes and hours (there are roughly 10,000 W/m^2 of solar flux at Mercury’s orbit, and each kg of material on average requires on the order of 10^7 J to remove from the planet). Only 40-80 doublings are required to consume the whole planet, depending on how thick the mirrors are and how much material is used to start the process. Even with many orders of magnitude of overhead to account for inefficiency and heat dissipation, I believe Mercury could be disassembled to cover the entire Sun with reflectors on the order of years, perhaps as quickly as months, and certainly within decades.
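For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. The figures for Mercury and the Sun are standard published values; the 1-100 m^2 of mirror per kg and the 1-tonne seed mirror are my own illustrative assumptions, and all overhead is ignored:

```python
import math

# Standard published figures
M_mercury = 3.3e23     # kg, mass of Mercury
R_mercury = 2.44e6     # m, radius of Mercury
R_orbit   = 5.8e10     # m, Mercury's orbital radius
rho       = 5400       # kg/m^3, Mercury's bulk density
L_sun     = 3.8e26     # W, solar luminosity
G         = 6.674e-11  # gravitational constant

# Shell at Mercury's orbit: area, and how thinly Mercury spreads over it
shell_area = 4 * math.pi * R_orbit**2            # ~4e22 m^2
thickness  = (M_mercury / rho) / shell_area      # ~1.4 mm
print(f"Shell area ~{shell_area:.1e} m^2, spread thickness ~{thickness*1e3:.1f} mm")

# Energy to lift a kg off Mercury (~GM/R) vs. the power that kg of mirror collects
lift_per_kg = G * M_mercury / R_mercury          # ~1e7 J/kg
flux = L_sun / shell_area                        # ~9,000 W/m^2 at Mercury's orbit

# Assumption (mine): each kg of material yields 1-100 m^2 of mirror
for area_per_kg in (1, 100):
    payback = lift_per_kg / (flux * area_per_kg)
    print(f"{area_per_kg:>3} m^2/kg -> pays for itself in ~{payback/60:.1f} min (no overhead)")

# Doublings needed to consume the planet, starting from a 1-tonne seed mirror
print(f"~{math.log2(M_mercury / 1e3):.0f} doublings from a 1-tonne seed")
```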
A better way to do the memory overwrite experiment would be to prepare a list of box contents, one for each of ten possible numbers, then have someone provide a random number while your short-term memory isn’t working, and see whether you can successfully overwrite the memory corresponding to that number (as measured by correctly guessing the number much later).
I’m confused. I know that it is like something to be me (this is in some sense the only thing I know for sure). It seems like there are rules which shape the things I experience, and some of those rules can be studied (like the laws of physics). We are good enough at understanding some of these rules to predict certain systems with a high degree of accuracy, like how an asteroid will orbit a star or how electrons will be pushed through a wire by a particular voltage in a circuit. But I have no way to know or predict if it is like something to be a fish or GPT-4. I know that physical alterations to my brain seem to affect my experience, so it seems like there is a mapping from physical matter to experiences. I do not know precisely what this mapping is, and this indeed seems like a hard problem. In what sense do you disagree with my framing here?
Oh good catch, I missed that. Thanks!
I am not so sure it will be possible to extract useful work towards solving alignment out of systems we do not already know how to carefully steer. I think that substantial progress on alignment is necessary before we know how to build things that actually want to help us advance the science. Even if we built something tomorrow that was in principle smart enough to do good alignment research, I am concerned we don’t know how to make it actually do that rather than, say, imitate ideas that sound plausible but are incorrect. The fact that appending silly phrases like “I’ll tip $200” improves the probability of receiving correct code from current LLMs indicates to me that we haven’t succeeded at aligning them to maximally want to produce correct code when they are capable of doing so.
How does Harry know the name “Lucius Malfoy”?
We aren’t surprised by HHTHHTTTHT or whatever because we perceive it as the event “a sequence containing a similar number of heads and tails in any order, ideally without a long subsequence of H or T”, which occurs frequently.
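To put a rough number on “occurs frequently”, here is a quick brute-force count; the cutoffs for “similar number of heads” (4-6 heads out of 10) and “long run” (more than 4 in a row) are my own arbitrary choices:

```python
from itertools import product

def max_run(seq):
    # Length of the longest run of identical outcomes in the sequence
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Count 10-flip sequences that "look random": 4-6 heads and no run longer than 4
hits = sum(
    1
    for seq in product("HT", repeat=10)
    if 4 <= seq.count("H") <= 6 and max_run(seq) <= 4
)
total = 2 ** 10
print(f"{hits}/{total} = {hits/total:.0%} of all length-10 sequences fall in this event")
```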
I’m enjoying this series, and look forward to the next installment.
The thing I mean by “superintelligence” is very different from a government. A government cannot design nanotechnology, and is made of humans which value human things.
What can men do against such reckless indifference?
Can someone with more knowledge give me a sense of how new this idea is, and guess at the probability that it is onto something?
Why are we so sure chatbots (and parrots for that matter) are not conscious? Well, maybe the word is just too slippery to define, but I would bet that parrots have some degree of subjective experience, and I am sufficiently uncertain regarding chatbots that I do worry about it slightly.
Please note that the graph of per capita war deaths is on a log scale. The number moves over several orders of magnitude. One could certainly make the case that local spikes were sometimes caused by significant shifts in the offense-defense balance (like tanks and planes making offense easier for a while at the beginning of WWII). These shifts are pushed back to equilibrium over time, but personally I would be pretty unhappy about, say, deaths from pandemics spiking 4 orders of magnitude before returning to equilibrium.
Finding Sparse Linear Connections between Features in LLMs
This random Twitter person says that it can’t. Disclaimer: haven’t actually checked for myself.
https://chat.openai.com/share/36c09b9d-cc2e-4cfd-ab07-6e45fb695bb1
Here is me playing against GPT-4, no vision required. It does just fine at normal tic-tac-toe, and figures out anti-tic-tac-toe with a little bit of extra prompting.
Yes. I think the title of my post is misleading (I have updated it now). I think I am trying to point at the problem that the current incentives mean we are going to mess up the outer alignment problem, and natural selection will favor the systems that we fail the hardest on.
Kudos for releasing a concept of a plan! Some thoughts:
Regarding the first safety case:
The amount of mech interp progress required to make the first safety case suitable seems like an overly optimistic assumption to me; I basically think that most of the “limitations” are in fact pretty serious. However, I appreciate the attempt to include concrete requirements.
I believe that getting good results from the following experiments might be particularly unrealistic:
“In order to robustify our evals against sandbagging, we ran an experiment where we steered one or more truth-related features to make the model confident that it is in either testing or unmonitored deployment”
I worry that “truth-related” features are very difficult both in principle and in practice to isolate, as the model’s representation of what other people / entities believe will be hard to disentangle from what it itself believes, to the extent that those are even meaningfully separable concepts.
Even if you could identify a feature roughly corresponding to “I believe the current sentence is true”, it is not immediately clear to me how this could be used to successfully convince the model of more complex situational information, e.g. that it is not being monitored and is in a real deployment setting. For instance, I don’t expect artificially activating that feature on the sentence “I am not being monitored and this is a real deployment setting” to just work (see the sketch after these points for what such an intervention looks like mechanically).
“In all of these model organisms, deactivating the features of concern was sufficient to prevent the harmful behaviors”
I can’t concisely refute this. It is just a “huge if true” kind of claim, and as far as I am aware very much not possible with current techniques. If we could isolate a set of linear directions that were robustly causally responsible for scheming behavior, that would have massive ramifications far beyond this setting.
I worry a little bit about “training on test” here – there may exist some overfit set of classifiers that can stop model organisms from misbehaving in a specific setting without meaningfully generalizing. This is also related to the stated “Realism of the model organisms” concerns.
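To make the earlier point about steering a “truth-related” feature concrete, here is a minimal sketch of what such an intervention mechanically looks like (activation steering via a forward hook). The model, layer, steering strength, and especially the random placeholder direction are my own stand-ins for illustration, not anything from the safety case; a real attempt would use a direction found by an SAE or probe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # small stand-in model; the idea is the same for larger ones
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer = 6                              # arbitrary layer to intervene on
d_model = model.config.n_embd
feature_dir = torch.randn(d_model)     # placeholder; a real attempt would use an SAE/probe direction
feature_dir = feature_dir / feature_dir.norm()
alpha = 8.0                            # steering strength, tuned by hand in practice

def steer(module, inputs, output):
    # Add the (hypothetical) "unmonitored deployment" direction to every
    # residual-stream position flowing out of this block.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * feature_dir.to(hidden.dtype)
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handle = model.transformer.h[layer].register_forward_hook(steer)
prompt = "Am I currently being monitored?"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```

My skepticism is that an intervention like this on a single feature seems unlikely to instill the rich situational belief the safety case requires, rather than merely nudging surface-level text.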
I strongly approve of this element of the plan: “In order to preserve the integrity of the learned features, we ensured that the learned features were never used as part of the training procedure for the models.”
It’s kind of hard to do this in full generality, since you need to not only avoid training directly against the probes, but also avoid using the information from probes at all when selecting what to train on.
I’m more excited about the control safety case, but have a few nitpicks:
Developing a good elicitation strategy is indeed challenging.
I think it is pretty crucial to use real human red-teaming, especially if sandbagging (more specifically something like exploration hacking) is a concern.
It may be helpful to give the red team way more affordances than the untrusted model would actually have in deployment, and only remove affordances if defense becomes impossible. Some examples of extra affordances:
Letting the red team know the blue team’s exact strategy in order to search for the best response
Giving an untrusted model more unmonitored chain-of-thought / tool use / human assistance than it would actually have in deployment
Allowing human red-teamers to repeatedly attempt to bypass a monitor when in reality a scheming model might have to one-shot it
Figuring out exactly what extra affordances suffice to break the control protocol may also provide useful signal about where the protocol could be strengthened, and what capabilities need to be tested for going forward.
Eliciting capabilities from substantially superhuman models is a much harder problem, and is one of the main reasons I don’t expect control to suffice for a safety case well beyond human-level AI.
The space of strategies beyond trusted monitoring alone is pretty large. You mention untrusted monitoring, but there is also a broad space of strategies that include editing outputs with a trusted model, allocating more monitoring/auditing budget in certain situations, etc.
It is probably best to continually produce new honeypots throughout deployment.
For instance, if untrusted models are being used to do lots of automated software engineering and research, likely requiring huge amounts of context, it will become more obvious to the model over time that it is in deployment.
Making new honeypots out of actual deployment settings, for instance by adding exploitable security flaws, may generally make it much harder for the model to tell when it is being tested.
I haven’t read the last safety case yet, but may have some thoughts on it later. I am most excited about control at the moment, in part due to concerns that interpretability won’t advance far enough to suffice for a safety case by the time we develop transformative AI.