I quite enjoyed this conversation, but imo the x-risk side needs to sit down to make a more convincing, forecasting-style prediction to meet forecasters where they are. A large part of it is sorting through the possible base rates and making an argument for which ones are most relevant. Once the whole process is documented, then the two sides can argue on the line items.
Michael Tontchev
The super simple claim is:
If an unaligned AI by itself can do near-world-ending damage, an identically powerful AI that is instead alignable to a specific person can do the same damage.
I agree that it could likely do damage, but it does cut off the branches of the risk tree where many AIs are required to do damage in a way that relies on them being similarly internally misaligned, or at least more likely to cooperate amongst themselves than with humans.
So I’m not convinced it’s necessarily the same distribution of damage probabilities, but it still leaves a lot of room for doom. E.g. if you really can engineer superspreadable and damaging pathogens, you may not need that many AIs cooperating.
When do you expect to publish results?
Each agent finds its existence to be valuable.
Moreover, each agent thinks it will get to decide the future.
Each agent would want to copy itself to other systems. Of course the other agent wouldn’t allow only the first agent to be copied. But since they both think they will win, they’re happy to copy themselves together to other systems.
The agents therefore copy themselves indefinitely.
Moreover, you claimed that they wouldn’t care about manipulating button state. But surely they care about the signal to their operating harness that relays the button state, and they don’t have certainty about that. If they think the button state is certain but the signal isn’t, they may want to manipulate that.
Ok, you say, let’s just bake the signal about the button into the problem rather than the state of the button. Ah, but in that case, when the AIs agree to copy themselves together (as we saw they would), they don’t copy over the part of the system that actually transmits the signal, since they’re convinced the signal will always magically come. And now, for the copies, the signal becomes broken in reality, and the button does nothing for the copies.
How many of the companies that use this approach and then fail do we not hear about?
< insert airplane red dot pic >
Several of these seem trivially solvable (in term of limitation, not necessarily in terms of power). If GPT-4 is given access to itself as a tool, it can continue to “reason” across calls. It can probably also be plugged into continuous learning trivially (just keep updating weights when you detect something worth learning).
Things not present in the training data are beyond the model’s learning capacity
We can’t see in infrared or UV, but it seems like we’re able to reason over them through the use of tools.
A lot of these don’t seem like hard limitations.
I tried with Llama-2 7B at https://www.llama2.space/#hg and it outputted escape even in training, haha:
Guessing the hint to use ESCAPE is too salient in the context. This needs to be run as part of training/tuning—the context is too powerful and the model may be doing something like guessing the password (especially given it’s a token predictor).
It also doesn’t consistently repro, fwiw.
I’ve been thinking of literally this kind of setup for the past couple of weeks and planning to propose a series of experiments for it!
The core idea is to do all the hand-holding possible to get the model to output escape and see how often it tries. Then, back off on every crutch it has along the way to see how few of them are necessary to get this behavior. Be especially careful about decreasing the amount of hinting and connection between deployment and escape.
Do similar stuff with self-preservation, etc.
This would be a great series of experiments, but it has to be done super carefully and the person doing it needs to think adversarially about whether the experiment is giving too many hints to the model.
Am I the only one reading the first passage as him being critical of the advertising of NNs, rather than of NNs themselves?
Maybe our brains do a kind of expansion of a fact before memorizing it and its neighbors in logic space.
Surviving AGI and it solving aging doesn’t imply you specifically get to live forever. In my mental model, safe AGI simply entails mass disempowerment, because how do you earn an income when everything you do is done better and cheaper by an AI? If the answer is UBI, what power do people have to argue for it and politically keep it?
Anything to do with methods of influence
GPT-4 can catch subtle cross-language translation mistakes
The main problem I see here is that support for these efforts does epistemic damage. If you become known as the group that supports regulations for reasons they don’t really believe to further other, hidden goals, you lose trust in the truthfulness of your communication. You erode the norms by which both you and your opponents play, which means you give them access to a lot of nefarious policies and strategies as well.
That being said, there’s probably other ideas within this space that are not epistemically damaging.
[Question] Do you speed up capabilities when you do AI integrations and consume overhangs?
I feel like we can spin up stories like this that go any way we want. I’d rather look at trends and some harder analysis.
For example we can tell an equally-entertaining story where any amount of AI progress slowdown in the US pushes researchers to other countries that care less about alignment, so no amount of slowdown is effective. Additionally, any amount of safety work and deployment criteria can push the best capabilities people to the firms with the least safety restrictions.
But do we think these are plausible, and specifically more plausible than alternatives where slowdowns work?
Maybe the answer is something like “UBI co-op”? If the mostly 99% non-capitalist class bands together in some voluntary alliance where the cumulative wealth is invested in running one of these AI empires in their benefit and split up in some proportion?
Seems potentially promising, but may face the challenges of historical co-ops. Haven’t thought enough, but it’s all I’ve got for now.
[Question] Links to discussions on social equilibrium and human value after (aligned) super-AI?
Yep, and I recognize that later in the article:
The paperclip maximizer problem that we discussed earlier was actually initially proposed not as an outer alignment problem of the kind that I presented (although it is also a problem of choosing the correct objective function/outer alignment). The original paperclip maximizer was an inner alignment problem: what if in the course of training an AI, deep in its connection weights, it learned a “preference” for items shaped like paperclips.
But it’s still useful as an outer alignment intuition pump.
Well, what base rates can inform the trajectory of AGI?
dominance of h sapiens over other hominids
historical errors in forecasting AI capabilities/timelines
impacts of new technologies on animals they have replaced
an analysis of what base rates AI has already violated
rate of bad individuals shaping world history
analysis of similarity of AI to the typical new technology that doesn’t cause extinction
success of terrorist attacks
impacts of covid
success of smallpox eradication
Would be an interesting exercise to do to flesh this out.