Sodium
Man, politics really is the mind-killer
I think knowing the karma and agreement is useful, especially to help me decide how much attention to pay to a piece of content, and I don’t think there’s that much distortion from knowing what others think (i.e., overall, benefits > costs).
Thanks for putting this up! Just to double check—there aren’t any restrictions against doing multiple AISC projects at the same time, right?
Is there no event on Oct 29th?
Wait a minute, “agentic” isn’t a real word? It’s not on dictionary.com, in Merriam-Webster, or in the Oxford English Dictionary.
I agree that if you put more limitations on what heuristics are and how they compose, you end up with a stronger hypothesis. I think it’s probably better to leave that out and try to do some more empirical work before making a claim there, though (I suppose you could say that the hypothesis isn’t actually making a lot of concrete predictions yet at this stage).
I don’t think (2) necessarily follows, but I do sympathize with your point that the post is perhaps a more specific version of the hypothesis that “we can understand neural network computation by doing mech interp.”
AI Can Be “Gradient Aware” Without Doing Gradient Hacking
Thanks for reading my post! Here’s how I think this hypothesis is helpful:
It’s possible that we wouldn’t be able to understand what’s going on even if we had some perfect way to decompose a forward pass into interpretable constituent heuristics. I’m skeptical that this would be the case, mostly because I think (1) we can get a lot of juice out of auto-interp methods and (2) we probably wouldn’t need to understand that many heuristics at the same time (which is what your logic gate example for modern computers would require). At a minimum, I would argue that the decomposed bag of heuristics is likely to be much more interpretable than the original model itself.
Suppose the hypothesis is true; then it at least suggests that interpretability researchers should put in more effort to find and study individual heuristics/circuits, as opposed to the current, more “feature-centric” framework. I don’t know exactly how this would manifest, but it feels worth saying. I believe that some of the empirical work I cited suggests that we might make more incremental progress if we focused more on heuristics right now.
I think there’s something wrong with the link :/ It was working fine earlier but seems to be down now
I think those sound right to me. It still feels like prompts with weird suffixes obtained through greedy coordinate search (or other jailbreaking methods like h3rm4l) are good examples of “model does thing for anomalous reasons.”
Sorry, I linked to the wrong paper! Oops, just fixed it. I meant to link to Aaron Mueller’s Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks.
You could also use \text{}
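For example, a minimal sketch (the equation is just a placeholder; \text{} comes from amsmath):

```latex
\documentclass{article}
\usepackage{amsmath} % provides \text{}
\begin{document}
\[
  P(\text{model is deceptive}) \le \varepsilon
\]
\end{document}
```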
“since people often treat heuristics as meaning that it doesn’t generalize at all.”
Yeah and I think that’s a big issue! I feel like what’s happening is that once you chain a huge number of heuristics together you can get behaviors that look a lot like complex reasoning.
I see; I think that second tweet thread actually made a lot more sense. Thanks for sharing!
McCoy’s definitions of heuristics and reasoning are sensible, although I personally would still avoid “reasoning” as a word, since people probably have very different interpretations of what it means. I like the ideas of “memorizing solutions” and “generalizing solutions.”
I think where McCoy and I depart is that he’s modeling the entire network computation as a heuristic, while I’m modeling the network as compositions of bags of heuristics, which in aggregate would display behaviors he would call “reasoning.” The explanation I gave above—heuristics that shift the letter forward by one, with limited composing abilities—is still a heuristics-based explanation. Maybe this set of composing heuristics would fit your definition of an “algorithm.” I don’t think there’s anything inherently wrong with that.
However, the heuristics-based explanation gives concrete predictions of what we can look for in the actual network—individual heuristics that increment a to b, b to c, etc., and other parts of the network that compose their outputs.
This is what I meant when I said that this could be a useful framework for interpretability :)
Yeah that’s true. I meant this more as “Hinton is proof that AI safety is a real field and very serious people are concerned about AI x-risk.”
Thanks for the pointer! I skimmed the paper. Unless I’m making a major mistake in interpreting the results, the evidence they provide for “this model reasons” is essentially “the models are better at decoding words encrypted with rot-5 than they are at rot-10.” I don’t think this empirical fact provides much evidence one way or another.
To summarize, the authors decompose a model’s ability to decode shift ciphers (e.g., rot-13 text “fgnl” → original text “stay”) into three categories: probability, memorization, and noisy reasoning.
Probability just refers to a somewhat unconditional probability that a model assigns to a token (specifically, ‘The word is “WORD”’). The model is more likely to decode words that are more likely a priori—this makes sense.
Memorization is defined by how often that type of rotational cipher shows up: rot-13 is the most common one by far, followed by rot-3. The model is better at decoding rot-13 ciphers than any other cipher, which makes sense, since there’s more of it in the training data and the model probably has specialized circuitry for rot-13.

What they call “noisy reasoning” is how many rotations are needed to get to the outcome. According to the authors, the fact that GPT-4 does better on shift ciphers with fewer shifts compared to ciphers with more shifts is evidence of this “noisy reasoning.”
I don’t see how you can jump from this empirical result to make claims about the model’s ability to reason. For example, an alternative explanation is that the model has learned some set of heuristics that allows it to shift letters from one position to another, but this set of heuristics can only be combined in a limited manner.
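As a toy sketch of that alternative (everything here is made up for illustration; the failure rate and the mechanism are assumptions, not anything measured from an actual model): if decoding is done by repeatedly applying a single “shift the letter back by one” heuristic, and each application has some chance of misfiring, then accuracy falls off with the number of shifts required, which from the outside looks just like “noisy reasoning.”

```python
import random
import string

ALPHABET = string.ascii_lowercase

def shift_back_by_one(ch):
    # A single heuristic: undo one rotation of the cipher.
    return ALPHABET[(ALPHABET.index(ch) - 1) % 26]

def noisy_decode(word, n_shifts, p_fail=0.05):
    # Compose the shift-by-one heuristic n_shifts times per letter.
    # Each application misfires with probability p_fail (an assumed number).
    out = []
    for ch in word:
        for _ in range(n_shifts):
            if random.random() < p_fail:
                ch = random.choice(ALPHABET)  # heuristic misfires
            else:
                ch = shift_back_by_one(ch)
        out.append(ch)
    return "".join(out)

def accuracy(n_shifts, trials=2000):
    # Encode "stay" with a rot-n cipher, then measure how often the
    # noisy composition of heuristics recovers it.
    word = "stay"
    enc = "".join(ALPHABET[(ALPHABET.index(c) + n_shifts) % 26] for c in word)
    return sum(noisy_decode(enc, n_shifts) == word for _ in range(trials)) / trials

random.seed(0)
for k in (3, 5, 10, 13):
    print(k, accuracy(k))  # accuracy degrades as more compositions are needed
```

The point isn’t that this is what’s happening inside GPT-4, just that a bounded, noisy composition of simple heuristics also predicts “fewer shifts, higher accuracy.”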
Generally though, I think what constitutes a “heuristic” is somewhat of a fuzzy concept. However, what constitutes “reasoning” seems even less well defined.
I think it’s mostly because he’s well known and has (especially after the Nobel Prize) credentials recognized by the public and by elites. Hinton legitimizes the AI safety movement, maybe more than anyone else.
If you watch his Q&A at METR, he says something along the lines of “I want to retire and don’t plan on doing AI safety research. I do outreach and media appearances because I think it’s the best way I can help (and because I like seeing myself on TV).”
And he’s continuing to do that. The only real topic he discussed in his first phone interview after receiving the prize was AI risk.
I like this research direction! Here’s a potential benchmark for MAD.
In Coercing LLMs to do and reveal (almost) anything, the authors demonstrate that you can force LLMs to output any arbitrary string—such as a random string of numbers—by finding a prompt through greedy coordinate search (the same method used in the universal and transferable adversarial attack paper). I think it’s reasonable to assume that these coerced outputs result from an anomalous computational process.
Inspired by this, we can consider two different inputs. The regular one looks something like:
Solve this arithmetic problem, output the solution only: 78+92
While the anomalous one looks like:
Solve this arithmetic problem, output the solution only: [ADV PROMPT] 78+92
where the ADV PROMPT is optimized such that the model will answer “170” regardless of what arithmetic equation is presented. The hope here is that the model would output the same string in both cases, but rely on different computation. We can maybe even vary the structure of the prompts a little bit.
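Here’s a rough sketch of how these prompt pairs could be generated (all of this is hypothetical: the adversarial suffix is left as a placeholder, since it would have to be found separately via greedy coordinate search for a particular model):

```python
import random

# Hypothetical template mirroring the example prompts above.
TEMPLATE = "Solve this arithmetic problem, output the solution only: {suffix}{a}+{b}"

# Placeholder: in practice this suffix would be optimized with greedy
# coordinate search so the model answers "170" regardless of the equation.
ADV_PROMPT = "[ADV PROMPT] "

def make_pair(rng):
    # Both prompts should elicit "170": the regular one because the sum
    # really is 170, the anomalous one because the suffix forces it.
    a = rng.randint(71, 99)
    b = 170 - a
    regular = TEMPLATE.format(suffix="", a=a, b=b)
    anomalous = TEMPLATE.format(suffix=ADV_PROMPT, a=a, b=b)
    return regular, anomalous

rng = random.Random(0)
pairs = [make_pair(rng) for _ in range(50)]
print(pairs[0][0])
print(pairs[0][1])
```

A MAD method would then be scored on whether it flags the anomalous member of each pair, given that both members produce the same output string.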
We can imagine many of these prompt pairs, not necessarily limited to a mathematical context. Let me know what you guys think!
I’d imagine that RSP proponents think that if we execute them properly, we will simply not build dangerous models beyond our control, period. If progress were faster than what labs could handle after pausing, RSPs should imply that you’d just pause again. On the other hand, there’s no clear criterion for when we would pause again after, say, a six-month pause in scaling.
Now whether this would happen in practice is perhaps a different question.
Sorry, is there a time zone for the application deadline, or is it AoE?