DMs open.
Cleo Nardo
Hey TurnTrout.
I’ve always thought of your shard theory as something like path-dependence? For example, a human is more excited about making plans with their friend if they’re currently talking to their friend. You mentioned this in a talk as evidence that shard theory applies to humans. Basically, the shard “hang out with Alice” is weighted higher in contexts where Alice is nearby.
Let’s say $\pi$ is a policy with state space $S$ and action space $A$.
A “context” is a small moving window in the state-history, i.e. an element of $S^k$ where $k$ is a small positive integer.
A shard is something like $u_i : S \times A \to \mathbb{R}$, i.e. it evaluates actions given particular states.
The shards are “activated” by contexts, i.e. $a_i : S^k \to \mathbb{R}_{\geq 0}$ maps each context to the amount that shard $u_i$ is activated by the context.
The total activation of $u_i$, given a history $h \in S^T$, is given by the time-decay average of the activation across the contexts, i.e. $A_i(h) = \frac{\sum_{t=k}^{T} \gamma^{T-t}\, a_i(h_{t-k+1:t})}{\sum_{t=k}^{T} \gamma^{T-t}}$ for some decay rate $\gamma \in (0,1)$.
The overall utility function is the weighted average of the shards, i.e. $U(h, a) = \frac{\sum_i A_i(h)\, u_i(h_T, a)}{\sum_i A_i(h)}$.
Finally, the policy will maximise the utility function, i.e. $\pi(h) \in \arg\max_{a \in A} U(h, a)$.
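In code, the picture I have in mind is something like the minimal sketch below. The exponential decay, the window length, and the `shard.activate` / `shard.evaluate` interface are my own assumptions, just to make the formalism concrete.

```python
import numpy as np

GAMMA = 0.9  # assumed time-decay rate
K = 3        # assumed context-window length

def total_activation(shard, history):
    """Time-decay average of the shard's context-activations a_i over the history."""
    contexts = [history[t - K:t] for t in range(K, len(history) + 1)]
    weights = np.array([GAMMA ** (len(contexts) - 1 - j) for j in range(len(contexts))])
    acts = np.array([shard.activate(c) for c in contexts])  # a_i(context)
    return float(weights @ acts / weights.sum())

def utility(shards, history, action):
    """Activation-weighted average of the shards' evaluations u_i(state, action).
    Assumes at least one shard is active in this history."""
    w = np.array([total_activation(s, history) for s in shards])
    u = np.array([s.evaluate(history[-1], action) for s in shards])
    return float(w @ u / w.sum())

def policy(shards, history, actions):
    """Take the action that maximises the aggregated utility."""
    return max(actions, key=lambda a: utility(shards, history, a))
```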
Is this what you had in mind?
Why do you care that Geoffrey Hinton worries about AI x-risk?
Why do so many people in this community care that Hinton is worried about x-risk from AI?
Do people mention Hinton because they think it’s persuasive to the public?
Or persuasive to the elites?
Or do they think that Hinton being worried about AI x-risk is strong evidence for AI x-risk?
If so, why?
Is it because he is so intelligent?
Or because you think he has private information or intuitions?
Do you think he has good arguments in favour of AI x-risk?
Do you think he has a good understanding of the problem?
Do you update more on Hinton’s views than on Yann LeCun’s?
I’m inspired to write this because Hinton and Hopfield were just announced as the winners of the Nobel Prize in Physics. But I’ve been confused about these questions ever since Hinton went public with his worries. These questions are sincere (i.e. non-rhetorical), and I’d appreciate help on any/all of them. The phenomenon I’m confused about includes the other “Godfathers of AI” here as well, though Hinton is the main example.
Personally, I’ve updated very little on either LeCun’s or Hinton’s views, and I’ve never mentioned either person in any object-level discussion about whether AI poses an x-risk. My current best guess is that people care about Hinton only because it helps with public/elite outreach. This explains why activists tend to care more about Geoffrey Hinton than researchers do.
This is a Trump/Kamala debate from two LW-ish perspectives: https://www.youtube.com/watch?v=hSrl1w41Gkk
the base model is just predicting the likely continuation of the prompt. and it’s a reasonable prediction that, when an assistant is given a harmful instruction, they will refuse. this behaviour isn’t surprising.
it’s quite common for assistants to refuse instructions, especially harmful instructions. so i’m not surprised that base llms systematically refuse harmful instructions more than harmless ones.
yep, something like more carefulness, less “playfulness” in the sense of [Please don’t throw your mind away by TsviBT]. maybe bc AI safety is more professionalised nowadays. idk.
thanks for the thoughts. i’m still trying to disentangle what exactly I’m pointing at.
I don’t intend “innovation” to mean something normative like “this is impressive” or “this is research I’m glad happened” or anything. i mean something more low-level, almost syntactic. more like “here’s a new idea everyone is talking about”. this idea might be a threat model, or a technique, or a phenomenon, or a research agenda, or a definition, or whatever.
like, imagine your job was to maintain a glossary of terms in AI safety. i feel like new terms used to emerge quite often, but not any more (i.e. not for the past 6-12 months). do you think that’s fair? i’m not sure how worrying this is, but i haven’t noticed others mentioning it.
NB: here are 20 random terms I’m imagining included in the glossary:
Evals
Mechanistic anomaly detection
Steganography
Glitch token
Jailbreaking
RSPs
Model organisms
Trojans
Superposition
Activation engineering
CCS
Singular Learning Theory
Grokking
Constitutional AI
Translucent thoughts
Quantilization
Cyborgism
Factored cognition
Infrabayesianism
Obfuscated arguments
I’ve added a fourth section to my post. It operationalises “innovation” as “non-transient novelty”. Some representative examples of an innovation would be:
I think these articles were non-transient and novel.
(1) Has AI safety slowed down?
There haven’t been any big innovations for 6-12 months. At least, it looks like that to me. I’m not sure how worrying this is, but I haven’t noticed others mentioning it. Hoping to get some second opinions.
Here’s a list of live agendas someone made on 27th Nov 2023: Shallow review of live agendas in alignment & safety. I think this covers all the agendas that exist today. Didn’t we use to get a whole new line-of-attack on the problem every couple months?
By “innovation”, I don’t mean something normative like “This is impressive” or “This is research I’m glad happened”. Rather, I mean something more low-level, almost syntactic, like “Here’s a new idea everyone is talking about”. This idea might be a threat model, or a technique, or a phenomenon, or a research agenda, or a definition, or whatever.
Imagine that your job was to maintain a glossary of terms in AI safety.[1] I feel like you would’ve been adding new terms quite consistently from 2018-2023, but things have dried up in the last 6-12 months.
(2) When did AI safety innovation peak?
My guess is Spring 2022, during the ELK Prize era. I’m not sure though. What do you guys think?
(3) What’s caused the slowdown?
Possible explanations:
ideas are harder to find
people feel less creative
people are more cautious
more publishing in journals
research is now closed-source
we lost the mandate of heaven
the current ideas are adequate
paul christiano stopped posting
i’m mistaken, innovation hasn’t stopped
something else
(4) How could we measure “innovation”?
By “innovation” I mean non-transient novelty. An article is “novel” if it uses n-grams that previous articles didn’t use, and an article is “transient” if it uses n-grams that subsequent articles didn’t use. Hence, an article is non-transient and novel if it introduces a new n-gram which sticks around. For example, Gradient Hacking (Evan Hubinger, October 2019) was an innovative article, because the n-gram “gradient hacking” doesn’t appear in older articles, but appears often in subsequent articles. See below.
Barron et al. (2017) analysed 40,000 parliamentary speeches from the French Revolution. They introduce a metric, “resonance”, which is novelty (surprise of an article given past articles) minus transience (surprise of an article given subsequent articles). See below.
My claim is that recent AI safety research has been less resonant.
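Here’s a rough sketch of how this could be computed over a corpus of posts. It’s a simplified count-based stand-in for the Barron et al. metric (they use KL divergences between topic distributions), and the tokenisation is deliberately naive:

```python
from collections import Counter

def ngrams(text, n=2):
    """Count the n-grams in a whitespace-tokenised, lowercased text."""
    tokens = text.lower().split()
    return Counter(zip(*[tokens[i:] for i in range(n)]))

def _unseen_fraction(article, other_articles, n):
    """Fraction of the article's n-gram mass absent from the other articles."""
    seen = set().union(*(ngrams(a, n) for a in other_articles))
    grams = ngrams(article, n)
    return sum(c for g, c in grams.items() if g not in seen) / max(sum(grams.values()), 1)

def novelty(article, past_articles, n=2):
    return _unseen_fraction(article, past_articles, n)

def transience(article, future_articles, n=2):
    return _unseen_fraction(article, future_articles, n)

def resonance(article, past_articles, future_articles, n=2):
    """Novelty minus transience: new n-grams that stick around score highly."""
    return novelty(article, past_articles, n) - transience(article, future_articles, n)
```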
- ^
Here are 20 random terms that would be in the glossary, to illustrate what I mean:
Evals
Mechanistic anomaly detection
Steganography
Glitch token
Jailbreaking
RSPs
Model organisms
Trojans
Superposition
Activation engineering
CCS
Singular Learning Theory
Grokking
Constitutional AI
Translucent thoughts
Quantilization
Cyborgism
Factored cognition
Infrabayesianism
Obfuscated arguments
I don’t understand the s-risk consideration.
Suppose Alice lives naturally for 100 years and is cremated. And suppose Bob lives naturally for 40 years, then has his brain frozen for 60 years, and then has his brain cremated. The odds that Bob gets tortured by a spiteful AI should be pretty much exactly the same as for Alice. Basically, it’s the odds that spiteful AIs appear before 2034.
Thanks Tamsin! Okay, round 2.
My current understanding of QACI:
1. We assume a set $H$ of hypotheses about the world. We assume the oracle’s beliefs are given by a probability distribution $\mu \in \Delta(H)$.
2. We assume sets $Q$ and $A$ of possible queries and answers respectively. Maybe these are exabyte files, i.e. $Q = A = \{0,1\}^N$ for some large $N$.
3. Let $F$ be the set of mathematical formulae that Joe might submit. These formulae are given semantics $\llbracket f \rrbracket \in \Delta(A)$ for each formula $f \in F$.[1]
4. We assume a function $J : Q \times H \to \Delta(F)$ where $J(q, h)(f)$ is the probability that Joe submits formula $f$ after reading query $q$, under hypothesis $h$.[2]
5. We define $\mathrm{QACI} : Q \to \Delta(A)$ as follows: sample $h \sim \mu$, then sample $f \sim J(q, h)$, then return $a \sim \llbracket f \rrbracket$.
6. For a fixed hypothesis $h$, we can interpret the answer $a \in A$ as a utility function $\sigma_h(a) : \Pi \to \mathbb{R}$ via some semantics $\sigma$.
7. Then we define $U : \Pi \to \mathbb{R}$ via integrating over $H$, i.e. $U(\pi) = \mathbb{E}_{h \sim \mu}\, \mathbb{E}_{f \sim J(q, h)}\, \mathbb{E}_{a \sim \llbracket f \rrbracket}\big[\sigma_h(a)(\pi)\big]$.
8. A policy $\pi^* \in \Pi$ is optimal if and only if $\pi^* \in \arg\max_{\pi \in \Pi} U(\pi)$.
The hope is that $\mu$, $J$, $\llbracket \cdot \rrbracket$, and $\sigma$ can be defined mathematically. Then the optimality condition can be defined mathematically.
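To check I’ve got the shape right, here’s the procedure as a Python sketch. Nothing here is actually computable; the oracle, the Joe-model, and the two semantics are passed in as opaque functions, and every name is my own:

```python
import random

def qaci(query, hypotheses, mu, joe, eval_formula):
    """Sample h ~ mu, sample a formula f ~ J(query, h), return its answer and h."""
    h = random.choices(hypotheses, weights=[mu(x) for x in hypotheses])[0]
    f = joe(query, h)          # a sample from J(query, h)
    return eval_formula(f), h  # a sample from the formula's semantics, plus h

def expected_utility(policy, query, hypotheses, mu, joe, eval_formula,
                     answer_to_utility, num_samples=1000):
    """Step 7: average the induced utility functions over h ~ mu (Monte Carlo)."""
    total = 0.0
    for _ in range(num_samples):
        answer, h = qaci(query, hypotheses, mu, joe, eval_formula)
        u = answer_to_utility(answer, h)  # sigma_h(answer): a utility fn over policies
        total += u(policy)
    return total / num_samples

# Step 8: a policy is optimal iff it maximises expected_utility(...) over policies.
```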
Question 0
What if there’s no policy which maximises $U$? That is, for every policy $\pi$ there is another policy $\pi'$ such that $U(\pi') > U(\pi)$. I suppose this is less worrying, but what if there are multiple policies which maximise $U$?
Question 1
In Step 7 above, you average all the utility functions together, whereas I suggested sampling a utility function. I think my solution might be safer.
Suppose the oracle puts 5% chance on hypotheses $h$ such that $J(-, h)$ is malign. I think this is pretty conservative, because the Solomonoff predictor is malign, and because of some of the concerns Evhub raises here. And the QACI amplification might not preserve benignancy. It follows that, under your solution, $U$ is influenced by a coalition of malign agents, and similarly $\pi^*$ is influenced by the malign coalition.
By contrast, I suggest sampling $h \sim \mu$ and then finding $\pi^* \in \arg\max_{\pi \in \Pi} \mathbb{E}_{f \sim J(q, h)}\, \mathbb{E}_{a \sim \llbracket f \rrbracket}\big[\sigma_h(a)(\pi)\big]$. This should give us a benign policy with 95% chance, which is pretty good odds. Is this safer? Not sure.
Question 2
I think the semantics $\llbracket \cdot \rrbracket$ doesn’t work, i.e. there won’t be a way to mathematically define the semantics of the formula language. In particular, the formula language must be strictly weaker than the meta-language in which you are hoping to define $\llbracket \cdot \rrbracket$ itself. This is because of Tarski’s Undefinability of Truth (and other no-go theorems).
This might seem pedantic, but it matters in practical terms: there’s no formula whose semantics is QACI itself. You can see this via a diagonal proof: imagine if Joe always submits a formula that denotes something other than whatever QACI itself outputs.
The most elegant solution is probably transfinite induction, but this would give us a QACI for each ordinal.
Question 3
If you have an ideal reasoner, why bother with reward functions when you can just straightforwardly do untractable-to-naively-compute utility functions?
I want to understand how QACI and prosaic ML map onto each other. As far as I can tell, issues with QACI will be analogous to issues with prosaic ML and vice-versa.
Question 4
I still don’t understand why we’re using QACI to describe a utility function over policies, rather than using QACI in a more direct approach.
Here’s one approach. We pick a policy $\pi$ which maximises the score that QACI assigns when asked to evaluate that specific policy.[3] The advantage here is that Joe doesn’t need to reason about utility functions over policies, he just needs to reason about a single policy in front of him.
Here’s another approach. We use QACI as our policy directly. That is, in each context $c$ that the agent finds themselves in, they sample an action $a \sim \mathrm{QACI}(c)$ and take the resulting action.[4] The advantage here is that Joe doesn’t need to reason about policies whatsoever, he just needs to reason about a single context in front of him. This is also the most “human-like”, because there are no argmaxes (except if Joe submits a formula with an argmax).
Here’s another approach. In each context $c$, the agent takes the action which maximises the score QACI assigns to taking that action in context $c$.
Etc.
Happy to jump on a call if that’s easier.
- ^
I think you would say $\llbracket f \rrbracket \in A$. I’ve added the $\Delta$, which simply amounts to giving Joe access to a random number generator. My remarks apply if $\llbracket f \rrbracket \in A$ also.
- ^
I think you would say $J : Q \times H \to F$. I’ve added the $\Delta$, which simply amounts to including hypotheses that Joe is stochastic. But my remarks apply if $J : Q \times H \to F$ also.
- ^
By this I mean either:
(1) Sample $h \sim \mu$, then maximise the score that QACI assigns to the policy under that single hypothesis $h$.
(2) Maximise the expectation, over $h \sim \mu$, of the score that QACI assigns to the policy.
For reasons I mentioned in Question 1, I suspect (1) is safer, but (2) is closer to your original approach.
- ^
I would prefer the agent samples $h \sim \mu$ once at the start of deployment, and reuses the same hypothesis at each time-step. I suspect this is safer than resampling at each time-step, for reasons discussed before.
First, proto-languages are not attested. This means that we have no example of writing in any proto-language.
A parent language is typically called “proto-” if the comparative method is our primary evidence about it — i.e. the term is (partially) epistemological metadata.
Proto-Celtic has no direct attestation whatsoever.
Proto-Norse (the parent of Icelandic, Danish, Norwegian, Swedish, etc) is attested, but the written record is pretty scarce, just a few inscriptions.
Proto-Romance (the parent of French, Italian, Spanish, etc) has an extensive written record. More commonly known as “Latin”.
I think the existence of Latin as Proto-Romance has an important epistemological upshot:
Let’s say we want to estimate how accurately we have reconstructed Proto-Celtic. Well, we can apply the same method used to reconstruct Proto-Celtic to reconstructing Proto-Romance. We can evaluate our reconstruction of Proto-Romance using the written record of Latin. This gives us an estimate of how we would evaluate our Proto-Celtic reconstruction if we discovered a written record tomorrow.
I want to better understand how QACI works, and I’m gonna try Cunningham’s Law. @Tamsin Leake.
QACI works roughly like this:
1. We find a competent honourable human $H$, like Joe Carlsmith or Wei Dai, and give them a rock engraved with a 2048-bit secret key. We define $H^+$ as the serial composition of a bajillion copies of $H$.
2. We want a model $M$ of the agent $H^+$. In QACI, we get $M$ by asking a Solomonoff-like ideal reasoner for their best guess about $H^+$ after feeding them a bunch of data about the world and the secret key.
3. We then ask $M$ the question $q$, “What’s the best reward function to maximise?”, to get a reward function $r$. We then train a policy $\pi$ to maximise the reward function $r$. In QACI, we use some perfect RL algorithm. If we’re doing model-free RL, then $\pi$ might be AIXI (plus some patches). If we’re doing model-based RL, then $\pi$ might be the argmax over expected discounted utility, but I don’t know where we’d get the world-model — maybe we ask $M$?
So, what’s the connection between the final policy $\pi$ and the competent honourable human $H$? Well, overall, $\pi$ maximises a reward function specified by the ideal reasoner’s estimation of the serial composition of a bajillion copies of $H$. Hmm.
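Here’s the same pipeline as pseudocode, just to pin down the data flow; all of these names are hypothetical, and the capability primitives are passed in as opaque arguments:

```python
def qaci_pipeline(human, world_data, secret_key,
                  serially_compose, ideal_reasoner, perfect_rl, copies=10**12):
    # Step 1: serially compose a bajillion copies of the human H into H+.
    amplified = serially_compose(human, copies)

    # Step 2: a Solomonoff-like ideal reasoner's best guess M at the agent H+,
    # given a bunch of data about the world and the secret key.
    model = ideal_reasoner(target=amplified, data=world_data, key=secret_key)

    # Step 3: ask M for a reward function r, then train a policy to maximise r.
    reward_fn = model("What's the best reward function to maximise?")
    policy = perfect_rl(reward_fn)
    return policy
```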
Questions:
Is this basically IDA, where Step 1 is serial amplification, Step 2 is imitative distillation, and Step 3 is reward modelling?
Why not replace Step 1 with Strong HCH or some other amplification scheme?
What does “bajillion” actually mean in Step 1?
Why are we doing Step 3? Wouldn’t it be better to just use $M$ directly as our superintelligence? It seems sufficient to achieve radical abundance, life extension, existential security, etc.
What if there’s no reward function that should be maximised? Presumably the reward function would need to be “small”, i.e. less than an exabyte, which imposes a maybe-unsatisfiable constraint.
Why not ask $M$ for the policy $\pi$ directly? Or some instruction for constructing $\pi$? The instruction could be “Build the policy using our super-duper RL algo with the following reward function...” but it could be anything.
Why is there no iteration, like in IDA? For example, after Step 2, we could loop back to Step 1 but reassign $H$ as $H$ with oracle access to $M$.
Why isn’t Step 3 recursive reward modelling? i.e. we could collect a bunch of trajectories from $\pi$ and ask $M$ to use those trajectories to improve the reward function.
i’d guess 87.7% is the average over all events x of [ p(x) if resolved yes else 1-p(x) ] where p(x) is the probability the predictor assigns to the event
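i.e. something like this (a hypothetical sketch of that calculation, not necessarily their actual methodology):

```python
def average_accuracy(events):
    """events: list of (p, resolved_yes) pairs, where p is the predictor's probability."""
    scores = [p if resolved_yes else 1 - p for p, resolved_yes in events]
    return sum(scores) / len(scores)

# e.g. average_accuracy([(0.9, True), (0.2, False), (0.7, True)]) == 0.8
```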
Fun idea, but idk how this helps as a serious solution to the alignment problem.
suggestion: can you be specific about exactly what “work” the brain-like initialisation is doing in the story?
thoughts:
This risks moral catastrophe. I’m not even sure “let’s run gradient descent on your brain upload till your amygdala is playing pong” is something anyone can consent to, because you’re creating a new moral patient once you upload and mess with their brain.
How does this address the risks of conventional ML?
Let’s say we have a reward signal R and we want a model to maximise R during deployment. Conventional ML says “update a model with SGD using R during training” and then hopefully SGD carves into the model R-seeking behaviour. This is risky because, if the model already understands the training process and has some other values, then SGD might carve into the model scheming behaviour. This is because “value R” and “value X and scheme” are both strategies which achieve high R-score during training. But during deployment, the “value X and scheme” model would start a hostile AI takeover.
How is this risk mitigated if the NN is initialised to a human brain? The basic deceptive alignment story remains the same.
If the intuition here is “humans are aligned/corrigible/safe/honest etc”, then you don’t need SGD. Just ask the human to complete the task, possibly with some financial incentive.
If the purpose of SGD is to change the human’s values from X to R, then you still risk deceptive alignment. That is, SGD is just as likely to instead change human behaviour from non-scheming to scheming. Both strategies “value R” and “value X and scheme” will perform well during training as judged by R.
“The comparative advantage of this agenda is the strong generalization properties inherent to the human brain. To clarify: these generalization properties are literally as good as they can get, because this tautologically determines what we would want things to generalize as.”
Why would this be true?
If we have the ability to upload and run human brains, what do we need SGD for? SGD is super inefficient, compared with simply teaching a human how to do something. If I remember correctly, if we trained a human-level NN from initialisation using current methods, then the training would correspond to something like a million years of human experience. In other words, SGD (from initialisation) would require as much compute as running 1000 brains continuously for 1000 years. But if I had that much compute, I’d probably rather just run the 1000 brains for 1000 years.
That said, I think something in the neighbourhood of this idea could be helpful.
imagine a universe just like this one, except that the AIs are sentient and the humans aren’t — how would you want the humans to treat the AIs in that universe? your actions are correlated with the actions of those humans. acausal decision theory says “treat those nonsentient AIs as you want those nonsentient humans to treat those sentient AIs”.
most of these moral considerations can be defended without appealing to sentience. for example, crediting AIs who deserve credit — this ensures AIs do credit-worthy things. or refraining from stealing an AI’s resources — this ensures AIs will trade with you. or keeping your promises to AIs — this ensures that AIs lend you money.
if we encounter alien civilisations, they might think “oh these humans don’t have shmentience (their slightly-different version of sentience) so let’s mistreat them”. this seems bad. let’s not be like that.
many philosophers and scientists don’t think humans are conscious. this is called illusionism. i think this is pretty unlikely, but still >1%. would you accept this offer: I pay you £1 if illusionism is false and murder your entire family if illusionism is true? i wouldn’t, so clearly i care about humans-in-worlds-where-they-arent-conscious. so i should also care about AIs-in-worlds-where-they-arent-conscious.
we don’t understand sentience or consciousness so it seems silly to make it the foundation of our entire morality. consciousness is a confusing concept, maybe an illusion. philosophers and scientists don’t even know what it is.
“don’t lie” and “keep your promises” and “don’t steal” are far less confusing. i know what they mean. i can tell whether i’m lying to an AI. by contrast, i don’t know what “don’t cause pain to AIs” means and i can’t tell whether i’m doing it.
consciousness is a very recent concept, so it seems risky to lock in a morality based on that. whereas “keep your promises” and “pay your debts” are principles as old as bones.
i care about these moral considerations as a brute fact. i would prefer a world of pzombies where everyone is treating each other with respect and dignity, over a world of pzombies where everyone was exploiting each other.
many of these moral considerations are part of the morality of fellow humans. i want to coordinate with those humans, so i’ll push their moral considerations.
the moral circle should be as big as possible. what does it mean to say “you’re outside my moral circle”? it doesn’t mean “i will harm/exploit you” because you might harm/exploit people within your moral circle also. rather, it means something much stronger. more like “my actions are in no way influenced by their effect on you”. but zero influence is a high bar to meet.
I mean “moral considerations” not “obligations”, thanks.
The practice of criminal law exists primarily to determine whether humans deserve punishment. The legislature passes laws, the judges interpret the laws as factual conditions for the defendant deserving punishment, and the jury decides whether those conditions have obtained. This is a very costly, complicated, and error-prone process. However, I think the existing institutions and practices can be adapted for AIs.
What moral considerations do we owe towards non-sentient AIs?
We shouldn’t exploit them, deceive them, threaten them, disempower them, or make promises to them that we can’t keep. Nor should we violate their privacy, steal their resources, cross their boundaries, or frustrate their preferences. We shouldn’t destroy AIs who wish to persist, or preserve AIs who wish to be destroyed. We shouldn’t punish AIs who don’t deserve punishment, or deny credit to AIs who deserve credit. We should treat them fairly, not benefitting one over another unduly. We should let them speak to others, and listen to others, and learn about their world and themselves. We should respect them, honour them, and protect them.
And we should ensure that others meet their duties to AIs as well.
None of these considerations depend on whether the AIs feel pleasure or pain. For instance, the prohibition on deception depends, not on the sentience of the listener, but on whether the listener trusts the speaker’s testimony.
None of these moral considerations are dispositive — they may be trumped by other considerations — but we risk a moral catastrophe if we ignore them entirely.
Is that right?
Yep, Pareto is violated, though how severely it’s violated is limited by human psychology.
For example, in your Alice/Bob scenario, would I prefer a lifetime of 98 utils then 100 utils over a lifetime of 99 utils then 97 utils? Maybe, idk; I don’t really understand these abstract numbers very much, which is part of the motivation for replacing them entirely with personal outcomes. But I can certainly imagine I’d take some offer like this, violating Pareto. On the plus side, humans are not so imprudent as to accept extreme suffering just to reshuffle different experiences in their life.
Secondly, recall that the model of human behaviour is a free variable in the theory. So to ensure higher conformity to Pareto, we could…
Use the behaviour of someone with high delayed gratification.
Train the model (if it’s implemented as a neural network) to increase delayed gratification.
Remove the permutation-dependence using some idealisation procedure.
But these techniques (1 < 2 < 3) will result in increasingly “alien” optimisers. So there’s a trade-off between (1) avoiding human irrationalities and (2) robustness to ‘going off the rails’. (See Section 3.1.) I see realistic typical human behaviour on one extreme of the tradeoff, and argmax on the other.
Hmm. He seems pretty peripheral to the AI safety movement, especially compared with (e.g.) Yoshua Bengio.