An attempt was made last year, as an outgrowth of some assorted shard theory discussion, but I don’t think it got super far:
Note: I just watched the videos. I personally would not recommend the first video as an explanation to a layperson if I wanted them to come away with accurate intuitions around how today’s neural networks learn / how we optimize them. What it describes is a very different kind of optimizer, one explicitly patterned after natural selection, such as a genetic algorithm or population-based training, and the follow-up video more or less admits this. I would personally recommend they opt for these videos instead:
The primary point we’d like to highlight here is that attack model A (removing safety guardrails) is possible, and quite efficient while being cost-effective.
Definitely. Despite my frustrations, I still upvoted your post because I think exploring cost-effective methods to steer AI systems is a good thing.
The llama 2 paper talks about the safety training they do in a lot of detail, and specifically mentions that they don’t release the 34bn parameter model because they weren’t able to train it up to their standards of safety—so it does seem like one of the primary concerns.
I understand you as saying (1) “[whether their safety guardrails can be removed] does seem like one of the primary concerns”. But IMO that isn’t the right way to interpret their concerns, and we should instead think (2) “[whether their models exhibit safe chat behavior out of the box] does seem like one of their primary concerns”. Interpretation 2 explains the decisions made by the Llama2 authors, including why they put safety guardrails on the chat-tuned models but not the base models, as well as why they withheld the 34B one (since they could not get it to exhibit safe chat behavior out of the box). But under interpretation 1, a bunch of observations are left unexplained, like that they also released model weights without any safety guardrails, and that they didn’t even try to evaluate whether their safety guardrails can be removed (for ex. by fine-tuning the weights). In light of this, I think the Llama2 authors were deliberate in the choices that they made; they just did so with a different weighting of considerations than you.
In doing so, we hope to demonstrate a failure mode of releasing model weights—i.e., although models are often red-teamed before they are released (for example, Meta’s LLaMA 2 “has undergone testing by external partners and internal teams to identify performance gaps and mitigate potentially problematic responses in chat use cases”), adversaries can modify the model weights in a way that makes all the safety red-teaming in the world ineffective.
I feel a bit frustrated about the way this work is motivated, specifically the way it assumes a very particular threat model. I suspect that if you had asked the Llama2 researchers whether they were trying to make end-users unable to modify the model in unexpected and possibly-harmful ways, they would have emphatically said “no”. The rationale for training + releasing in the manner they did is to give everyday users a convenient model they can have normal/safe chats with right out of the box, while still letting more technical users arbitrarily modify the behavior to suit their needs. Heck, they released the base model weights to make this even easier! From their perspective, calling the cheap end-user modifiability of their model a “failure mode” seems like an odd framing.
EDIT: On reflection, I think my frustration is something like “you show that X is vulnerable under attack model A, but it was designed for a more restricted attack model B, and that seems like an unfair critique of X”. I would rather you just argue directly for securing against the more pessimistic attack model.
Parallel distributed processing (as well as “connectionism”) is just an early name for the line of work that was eventually rebranded as “deep learning”. They’re the same research program.
Credit for changing the wording, but I still feel this does not adequately convey how sweeping the impact of the proposal would be if implemented as-is. Foundation model-related work is a sizeable and rapidly growing chunk of active AI development. Of the 15K pre-print papers posted on arXiv under the CS.AI category this year, 2K appear to be related to language models. The most popular Llama2 model weights alone have north of 500K downloads to date, and foundation-model related repos have been trending on Github for months. “People working with [a few technical labs’] models” is a massive community containing many thousands of developers, researchers, and hobbyists. It is important to be honest about how they will likely be impacted by this proposed regulation.
If you have checkpoints from different points in training of the same models, you could do a comparison between different-size models at the same loss value (performance). That way, you’re actually measuring the effect of scale alone, rather than scale confounded by performance.
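A minimal sketch of what that matched-loss selection could look like, with all model sizes, steps, and loss numbers below made up for illustration:

```python
# Hypothetical per-checkpoint eval losses for two model sizes.
checkpoints = {
    "125M": [(1_000, 4.1), (5_000, 3.6), (20_000, 3.2)],  # (training step, eval loss)
    "1.3B": [(1_000, 3.8), (5_000, 3.1), (20_000, 2.7)],
}

target_loss = 3.2  # a loss value that every model size reaches at some checkpoint

# For each size, pick the checkpoint whose loss is closest to the shared target,
# so downstream comparisons measure scale at (approximately) matched performance.
matched = {
    size: min(ckpts, key=lambda step_loss: abs(step_loss[1] - target_loss))
    for size, ckpts in checkpoints.items()
}
print(matched)  # e.g. {'125M': (20000, 3.2), '1.3B': (5000, 3.1)}
```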
It would definitely move the needle for me if y’all are able to show this behavior arising in base models without forcing, in a reproducible way.
Good question. I don’t have a tight first-principles answer. The helix puts a bit of positional information in the variable magnitude (otherwise it’d be an ellipse, which would alias different positions) and a bit in the variable rotation, whereas the straight line is the far extreme of putting all of it in the magnitude. My intuition is that (in a transformer, at least) encoding information through the norm of vectors + acting on it through translations is “harder” than encoding information through (almost-) orthogonal subspaces + acting on it through rotations.
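To make that contrast a bit more concrete, here is a toy 2D sketch of the two encodings (my own illustration, not from the original post): one spreads positional information across both the rotation angle and the norm, the other puts all of it in the norm.

```python
import numpy as np

def spiral_embed(p, omega=0.1, growth=0.02):
    # Helix-like code (2D cross-section): the angle rotates with position,
    # and the norm also grows slowly, so distinct positions don't alias.
    r = 1.0 + growth * p
    return np.array([r * np.cos(omega * p), r * np.sin(omega * p)])

def line_embed(p, scale=0.05):
    # Straight-line code: all positional information lives in the magnitude.
    return np.array([scale * p, 0.0])

positions = np.arange(50)
spiral_codes = np.stack([spiral_embed(p) for p in positions])
line_codes = np.stack([line_embed(p) for p in positions])
```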
Relevant comment from Neel Nanda: https://twitter.com/NeelNanda5/status/1671094151633305602
Very cool! I believe this structure allows expressing the “look back N tokens” operation (perhaps even for different Ns across different heads) via a position-independent rotation (and translation?) of the positional subspace of query/key vectors. This sort of operation is useful if many patterns in the dataset depend on the relative arrangement of tokens (for ex. common n-grams) rather than their absolute positions. Since all these models use absolute positional embeddings, the positional embeddings have to contort themselves to make this happen.
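As a toy numeric check of that intuition (my own sketch, not taken from the linked analysis): if positions are embedded on a circle, then a single fixed rotation maps every position’s embedding onto the embedding of the position N tokens earlier, so one position-independent linear map implements “look back N tokens”.

```python
import numpy as np

omega, N = 0.1, 3  # angular frequency of the positional code; lookback distance

def pos(p):
    # Circular positional code (a 2D slice of the helix picture above).
    return np.array([np.cos(omega * p), np.sin(omega * p)])

theta = -N * omega  # rotate backwards by N positions
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

for p in [5, 17, 40]:
    # The same rotation works regardless of the absolute position p.
    assert np.allclose(rot @ pos(p), pos(p - N))
```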
It’s absolutely fine if you want to use AI to help summarize content, and then you check that content and endorse it.
I would still ask that you please flag it as such, so the reader can make an informed decision about how to read/respond to the content.
Is this an AI summary (or your own writing)? If so, would you mind flagging it as such?
The main takeaway (translated to standard technical language) is it would be useful to have some structured representation of the relationship between terminal values and instrumental values (at many recursive “layers” of instrumentality), analogous to how Bayes nets represent the structure of a probability distribution. That would potentially be more useful than a “flat” representation in terms of preferences/utility, much like a Bayes net is more useful than a “flat” probability distribution.
That’s an interesting and novel-to-me idea. That said, the paper offers [little] technical development of the idea.
I believe Yoav Shoham has done a bit of work on this, attempting to create a formalism & graphical structure similar to Bayes nets for reasoning about terminal/instrumental value. See these two papers:
I think we’re more or less on the same page now. I am also confused about the applicability of existing mechanisms. My lay impression is that there isn’t much clarity right now.
For example, this uncertainty about who’s liable for harms from AI systems came up multiple times during the recent AI hearings before the US Senate, in the context of Section 230’s shielding of computer service providers from certain liabilities and the extent to which it & other laws extend here. In response to Senator Graham asking about this, Sam Altman straight up said “We’re claiming we need to work together to find a totally new approach. I don’t think Section 230 is even the right framework.”
I see. The liability proposal isn’t aimed at near-miss scenarios with no actual harm. It is aimed at scenarios with actual harm, but where that actual harm falls short of extinction + the conditions contributing to the harm were of the sort that might otherwise contribute to extinction.
You said no one had named “a specific actionable harm that’s less than extinction” and I offered one (the first that came to mind) that seemed plausible, specific, and actionable under Hanson’s “negligent owner monitoring” condition.
To be clear, though, if I thought that governments could just prevent negligent owner monitoring (& likewise with some of the other conditions) as you suggested, I would be in favor of that!
EDIT: Someone asked Hanson to clarify what he meant by “near-miss” such that it’d be an actionable threshold for liability, and he responded:
Any event where A causes a hurt to B that A had a duty to avoid, the hurt is mediated by an AI, and one of those eight factors I list was present.
Can you re-state that? I find the phrasing of your question confusing.
(Are you saying there is no harm in the near-miss scenarios, so liability doesn’t help? If so I disagree.)
Hanson does not ignore this; he is very clear about it:
it seems plausible that for every extreme scenario like [extinction by foom] there are many more “near miss” scenarios which are similar, but which don’t reach such extreme ends. For example, where the AI tries but fails to hide its plans or actions, where it tries but fails to wrest control or prevent opposition, or where it does these things yet its abilities are not broad enough for it to cause existential damage. So if we gave sufficient liability incentives to AI owners to avoid near-miss scenarios, with the liability higher for a closer miss, those incentives would also induce substantial efforts to avoid the worst-case scenarios.
The purpose of this kind of liability is to provide an incentive gradient pushing actors away from the preconditions of harm. Many of those preconditions are applicable to harms at differing scales. For example, if an actor allowed AI systems to send emails in an unconstrained and unmonitored way, that negligence is an enabler both for automated spear-phishing scams (a “lesser harm”) and for AI-engineered global pandemics.
As I understand this, the rough sketch of this approach is basically to realize that incomplete preferences are compatible with a family of utility functions rather than a single one (since they don’t specify how to trade-off between incomparable outcomes), and that you can use randomization to select within this family (implemented via contracts), thereby narrowing in on completed preferences / a utility function. Is that description on track?
If so, is it a problem that the subagents/committee/market may have preferences that are a function of this dealmaking process, like preferences about avoiding the coordination/transaction costs involved, or preferences about how to do randomization? Like, couldn’t you end up with a situation where “completing the preferences” is dispreferred, such that the individual subagents do not choose to aggregate into a single utility maximizer?
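For what it’s worth, here is a toy sketch of the reading in my first paragraph above (outcomes and utilities entirely made up): the incomplete preferences correspond to a set of utility functions that agree wherever the preferences are defined and disagree on the incomparable outcomes, and randomly committing to one member of the set plays the role of “completing” them.

```python
import random

outcomes = ["A", "B", "C"]

# Two utility functions compatible with the same incomplete preferences:
# both rank A above C, but they disagree about where B falls.
utility_family = [
    {"A": 2.0, "B": 3.0, "C": 1.0},
    {"A": 2.0, "B": 0.5, "C": 1.0},
]

completed = random.choice(utility_family)  # randomization selects within the family
best = max(outcomes, key=lambda o: completed[o])
print(completed, "->", best)
```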
Having known some of Conjecture’s founders and their previous work in the context of “early-stage EleutherAI”, I share some[1] of the main frustrations outlined in this post. At the organizational level, even setting aside the departure of key researchers, I do not think that Conjecture’s existing public-facing research artifacts have given much basis for me to recommend the organization to others (aside from existing personal ties). To date, only[2] a few posts like their one on the polytope lens and their one on circumventing interpretability were at the level of quality & novelty I expected from the team. Maybe that is a function of the restrictive information policies, maybe a function of startup issues, maybe just the difficulty of research. In any case, I think that folks ought to require more rigor and critical engagement from their future research outputs[3].
- ^
I didn’t find the critiques of Connor’s “character and trustworthiness” convincing, but I already consider him a colleague & a friend, so external judgments like these don’t move the needle for me.
- ^
The main other post I have in mind was their one on simulators. AFAICT the core of “simulator theory” predates Conjecture (it dates back to mid-2021, at least), and yet even with a year of additional incubation, the framework was not brought to a sufficient level of technical quality.
- ^
For example, the “cognitive emulation” work may benefit from review by outside experts, since the nominal goal seems to be to do cognitive science entirely inside of Conjecture.
I agree that they are related. In the context of this discussion, the critical difference between SGD and evolution is somewhat captured by your Assumption 1:
Evolution does not directly select/optimize the content of minds. Evolution selects/optimizes genomes based (in part) on how they distally shape what minds learn and what minds do (to the extent that impacts reproduction), with even more indirection caused by selection’s heavy dependence on the environment. All of that creates a ton of optimization “slack”, such that large-brained human minds with language could steer optimization far faster & more decisively than natural selection could. This is what 1a3orn was pointing to earlier with
SGD does not have that slack by default. It acts directly on cognitive content (associations, reflexes, decision-weights), without slack or added indirection. If you control the training dataset/environment, you control what is rewarded and what is penalized, and if you are using SGD, then this lets you directly mold the circuits in the model’s “brain” as desired. That is one of the main alignment-relevant intuitions that gets lost when blurring the evolution/SGD distinction.