On reflection, I suspect the crux here is a differing conception of what kinds of failures are important. I’ve written a follow-up post that comes at this topic from a different direction and I would be very interested in your feedback: https://www.lesswrong.com/posts/NFYLjoa25QJJezL9f/lenses-of-control.
Lenses of Control
Just because the average person disapproves of a protest tactic doesn’t mean the tactic didn’t work. See Roger Hallam’s “Designing the Revolution” series for the thought process underlying the soup-throwing protests. Reasonable people may disagree (I disagree with quite a few things he says), but if you don’t know the arguments, any objection is going to miss the point. The series is very long, so here’s a tl;dr:
- If the public response is “I’m all for the cause those protestors are advocating, but I can’t stand their methods,” notice that the first half of this statement grants the only thing that matters: approval of the cause itself, as separate from the methods that brought it to mind.
- The fact that only a small minority of the audience approves of the protest action is in itself a good thing, because this efficiently filters for people who are inclined to join the activist movement—especially on the hard-core “front lines”—whereas passive “supporters” can be more trouble than they’re worth. These high-value supporters don’t need to be convinced that the cause is right; they need to be convinced that the organization is the “real deal” and can actually get things done. In short, it’s niche marketing.
- The disruptive protest model assumes that the democratic system is insufficient, ineffective, or corrupted, such that simply convincing the (passive) center majority is not likely to translate into meaningful policy change. The model instead relies on putting the powers-that-be in a bind where they have to either ignore you (in which case you keep growing with impunity) or over-react (in which case you leverage public sympathy to grow faster). Again, it isn’t important how sympathetic the protestors are, only that the reaction against them is comparatively worse, from the perspective of the niche audience that matters.
- The ultimate purpose of this recursive growth model is to create a power bloc that forces changes that wouldn’t otherwise occur on any reasonable timeline through ordinary democratic means (like voting) alone.
- Hallam presents incremental and disruptive advocacy as in opposition. This is where I most strongly disagree with his thesis. IMO: moderates get results, but operate within the boundaries defined by extremists, so they need to learn how to work together.
In short, when you say an action makes a cause “look low status”, it is important to ask “to whom?” and “is that segment of the audience relevant to my context?”
There are some writing issues here that make it difficult to evaluate the ideas presented purely on their merits. In particular, the argument for 99% extinction is given a lot of space relative to the post as a whole, where it should really be a bullet point that links to where this case is made elsewhere (or if it is not made adequately elsewhere, as a new post entirely). Meanwhile, the value of disruptive protest is left to the reader to determine.
As I understand the issue, the case for barricading AI rests on:
1. Safety doesn’t happen by default
a) AI labs are not on track to achieve “alignment” as commonly considered by safety researchers.
b) Those standards may be over-optimistic—link to Substrate Needs Convergence, arguments by Yampolskiy, etc.
c) Even if the conception of safety assumed by the AI labs is right, it is not clear that their utopic vision for the future is actually good.
2. Advocacy, not just technical work, is needed for AI safety
a) See above
b) Market incentives are misaligned
c) Policy (and culture) matters
3. Disruptive actions, not just working within civil channels, are needed for effective advocacy.
a) Ways that working entirely within ordinary democratic channels can get delayed or derailed
b) Benefits of disruptive actions, separate from or in synergy with other forms of advocacy
c) Plan for how StopAI’s specific choice of disruptive actions effectively plays to the above benefits
d) Moral arguments, if not already implied
Attempting to distill the intuitions behind my comment into more nuanced questions:
1) How confident are we that value learning has a basin of attraction to full alignment? Techniques like IRL seem intuitively appealing, but I am concerned that this just adds another layer of abstraction without addressing the core problem of feedback-based learning having unpredictable results. That is, instead of having to specify metrics for good behavior (as in RL), one has to specify the metrics for evaluating the process of learning values (including correctly interpreting the meaning of behavior), with the same problem that flaws in the hard-to-define metrics will lead to increasing divergence from Truth with optimization.
2) The connection of value learning to LLMs, if intended, is not obvious to me. Is your proposal essentially to guide simulacra to become value learners (and designing the training data to make this process more reliable)?
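To make the worry in (1) concrete, here is a toy simulation of my own (not from the post, and the specific numbers are arbitrary assumptions): candidates are scored by a noisy, mis-specified proxy for their true value, and the harder we select on the proxy, the more the winning candidate’s proxy score overstates the true value actually obtained.

```python
import random
import statistics

random.seed(0)

def sample():
    """One candidate: a latent true value and a flawed proxy metric for it."""
    true_value = random.gauss(0, 1)
    proxy = true_value + 2 * random.gauss(0, 1)  # metric error dominates the signal
    return true_value, proxy

def mean_gap(pressure, trials=2000):
    """Average (proxy - true) of the proxy-best candidate out of `pressure` draws.

    `pressure` models optimization strength: how many candidates we search
    before committing to the one the flawed metric rates highest.
    """
    gaps = []
    for _ in range(trials):
        true_value, proxy = max((sample() for _ in range(pressure)),
                                key=lambda c: c[1])
        gaps.append(proxy - true_value)
    return statistics.mean(gaps)

for pressure in (1, 10, 1000):
    print(pressure, round(mean_gap(pressure), 2))
```

With no selection the proxy is unbiased on average; under heavy selection the proxy-best candidate’s score is inflated well above its true value. That is the Goodhart pattern: the metric keeps improving while the thing it was meant to measure falls further behind.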
Based on 4-5, this post’s answer to the central, anticipated objection of “why does the AI care about human values?” seems to be along the lines of “because the purpose of an AI is to serve its creators, and surely an AGI would figure that out.” This seems to me to equivocate on the concept of purpose, which can mean (A) a reason for an entity’s existence, from an external perspective, or (B) an internalized objective of the entity. So a special case of the question of why an AI would care about human values is to ask: why should (B) be drawn toward (A) once the AI becomes aware of a discrepancy between the two? That is, what stops an AI from reasoning: “Those humans programmed me with a faulty goal, such that acting according to it goes against their purpose in creating me... too bad for them!”
If you can instill a value like “Do what I say...but if that goes against what I mean, and you have really good reason to be sure, then forget what I say and do what I mean,” then great, you’ve got a self-correcting system (if nothing weird goes wrong), for the reasons explained in the rest of the post, and have effectively “solved alignment”. But how do you pull this off when your essential tool is what you say about what you mean, expressed as a feedback signal? This is the essential question of alignment, but for all the text in this post and its predecessor, it doesn’t seem to be addressed at all.
In contrast, I came to this post by way of one of your posts on Simulator Theory, which presents an interesting answer to the “why should AI care about people” question. I summarize it as: the training process can’t break out (for... reasons); the model itself doesn’t care about anything (how do we know this?); what’s really driving behavior is the simulacra, whose motivations are generated to match the characters they are simulating rather than to best fit a feedback signal. Goodhart’s Law therefore no longer applies and is replaced by the problem of reliably finding the right characters, which seems more tractable (if the powers-that-be actually try).
Interview with Robert Kralisch on Simulators
To be clear, the sole reason I assumed (initial) alignment in this post is because if there is an unaligned ASI then we probably all die for reasons that don’t require SNC (though SNC might have a role in the specifics of how the really bad outcome plays out). So “aligned” here basically means: powerful enough to be called an ASI and won’t kill everyone if SNC is false (and not controlled/misused by bad actors, etc.)
> And the artificiality itself is the problem.
This sounds like a pretty central point that I did not explore very much except for some intuitive statements at the end (the bulk of the post summarizing the “fundamental limits of control” argument). I’d be interested in hearing more about this. I think I get (and hopefully roughly conveyed) the idea that AI has different needs from its environment than humans, so if it optimizes the environment in service of those needs we die... but I get the sense that there is something deeper intended here.
A question along this line, please ignore if it is a distraction from rather than illustrative of the above: would anything like SNC apply if tech labs were somehow using bioengineering to create creatures to perform the kinds of tasks that would be done by advanced AI?
This sounds like a rejection of premise 5, not 1 & 2. The latter asserts that control issues are present at all (and 3 & 4 assert relevance), whereas the former asserts that the magnitude of these issues is great enough to kick off a process of accumulating problems. You are correct that the rest of the argument, including the conclusion, does not hold if this premise is false.
Your objection seems to point to the analogy of humans maintaining effective control of complex systems, with errors limiting rather than compounding, and to further assert that a greater intelligence will be even better at such management.
Besides intelligence, there are two other core points of difference between humans managing existing complex systems and ASI:
1) The scope of the systems being managed. Implicit in what I have read of SNC is that ASI is shaping the course of world events.
2) ASI’s lack of inherent reliance on the biological world.
These points raise the following questions:
1) Do systems of control get better or worse as they increase in scope of impact and where does this trajectory point for ASI?
2) To what extent are humans’ ability to control our created systems reliant on us being a part of and dependent upon the natural world?
This second question probably sounds a little weird, so let me unpack the associated intuitions, albeit at the risk of straying from the actual assertions of SNC. Technology that is adaptive becomes obligate: once it exists, everyone has to use it or be left behind by those who do. Using a given technology shapes the environment and also promotes certain behavior patterns, which in turn shape values and worldview. These tendencies can sometimes produce feedback loops whose outcomes no one, including the creators of the technology, likes. In really bad cases, this leads to self-terminating catastrophes (historically in local areas, now potentially on a global scale). Noticing and anticipating this pattern, however, creates countervailing forces that push us to think more holistically than we otherwise would (either directly through extra planning or indirectly through customs of forgotten purpose). For an AI, however, falling into such a trap means the death of humanity, not itself, so this countervailing force is not present.
Bringing this back to the original point regarding whether an ASI that doesn’t want to kill humans but reasons that SNC is true would shut itself down, I think a key piece of context is the stage of deployment it is operating in. For example, if the ASI has already been deployed across the world, has gotten deep into the work of its task, has noticed that some of its parts have started to act in ways that are problematic to its original goals, and then calculated that any efforts at control are destined to fail, it may well be too late—the process of shutting itself down may even accelerate SNC by creating a context where components that are harder to shut down for whatever reason (including active resistance) have an immediate survival advantage. On the other hand, an ASI that has just finished (or is in the process of) pre-training and is entirely contained within a lab has a lot fewer unintended consequences to deal with—its shutdown process may be limited to convincing its operators that building ASI is a really bad idea. A weird grey area is if, in the latter case, the ASI further wants to ensure no further ASIs are built (pivotal act) and so needs to be deployed at a large scale to achieve this goal.
Another unstated assumption in this entire line of reasoning is that the ASI is using something equivalent to consequentialist reasoning and I am not sure how much of a given this is, even in the context of ASI.
This counts as disagreeing with some of the premises—which ones in particular?
Re “incompetent superintelligence”: denotationally yes, connotationally no. Yes in the sense that its competence is insufficient to keep the consequences of its actions within the bounds of its initial values. No in the sense that the purported reason for this failing is that the task is categorically impossible, not something that better resource allocation could fix.
To be clear, I am summarizing arguments made elsewhere, which do not posit infinite time passing, or timescales so long as to not matter.
The implication here being that, if SNC (substrate needs convergence) is true, then an ASI (assuming it is aligned) will figure this out and shut itself down?
What if Alignment is Not Enough?
One more objection to the model: AI labs apply just enough safety measures to prevent dumb rogue AIs. Fearing a public backlash to low-level catastrophes, AI companies test their models, checking for safety vulnerabilities, rogue behaviors, and potential for misuse. The easiest-to-catch problems, however, are also the least dangerous, so only the most cautious, intelligent, and dangerous rogue AIs pass the security checks. Further, this correlation continues indefinitely, so all additional safety work contributes to filtering the population of malevolent AIs toward the most dangerous. AI companies are not interested in adhering to the standard of theoretical, “provably safe” models, since they are trying to get away with the bare minimum, so the filter never catches everything. While “warning shots” appear all the time in experimental settings, these findings are suppressed or downplayed in public statements and the media, and the public only sees the highly sanitized result of the filtration process. Eventually, the security systems fail, but by this point AI has been developed past the threshold needed to become catastrophically dangerous.
I think of the e/acc ideal of “maximize entropy, never mind humanity” in terms of inner misalignment:
1) Look at a lot of data about the world, evaluating observations in terms of what one likes and doesn’t like, where those underlying likes are opaque.
2) Notice correlations in the data and generate a proxy measure. It doesn’t matter if the correlation is superficial, as long as it makes it easier to take data that is hard to evaluate wrt base objectives, reframe it in terms of the proxy, and then make a confident evaluation wrt the proxy. Arguments about whether their understanding of thermodynamics is accurate miss the point, since correcting any mistakes here would result in an equally weird philosophy with slightly more consistent technical jargon.
3) Internalize the proxy measure as a terminal goal—i.e. forget that it is a proxy—and elevate its importance.
4) Develop strategies to optimize the proxy to the point where it diverges from one’s original goals.
5) Resolve the conflict between the proxy and original goals in favor of the proxy, denigrating the importance of the original goals with the intention of ultimately purging them from one’s goal system.
Ironically, I suspect a hard-core e/acc would actually agree with this assessment but argue that it reverses a process that already occurred: increasing entropy is the true goal of the universe, humanity rebelled by becoming inner-misaligned (developing desires for love, friendship, survival in our existing forms, and the like), and e/acc is now advocating a counter-insurgency.
Interview with Vanessa Kosoy on the Value of Theoretical Research for AI
I’d like to add some nuance to the “innocent until proven guilty” assumption in the concluding remarks.
Standard of evidence is a major question in legal matters and heavily context-dependent. “Innocent until proven guilty” is a popular understanding of the standard for criminal guilt and it makes sense for that to be “beyond a reasonable doubt” because the question at hand is whether a state founded on principles of liberty should take away the freedom of one of its citizens. Other legal disputes, such as in civil liability, have different standards of evidence, including “more likely than not” and “clear and convincing.”
What standard we should apply here is an open question, which ultimately depends on what decisions we are trying to make. In this case, those questions seem to be: “can we trust Sam Altman’s moral character to make high-stakes decisions?” and perhaps “(how much) should we signal-boost Annie’s claims?”. On the one hand, the “beyond a reasonable doubt” standard of criminal guilt seems far too high. On the other hand, instant condemnation without any consideration (as in, not even looking at the claims in any detail) seems too low.
Note that this question of standards is entirely separate from considerations of priors, base rates, and the like. All of those things matter, but they are questions of whether the standards are met. Without a clear understanding of what those standards even are, it’s easy to get lost. I don’t have a strong answer to this myself, but I encourage readers and anyone following up on this to consider:
1. What, if anything, am I actually trying to decide here?
2. How certain do I need to be in order to make those decisions?
Unity Gridworlds
Just saw this, sure!
Step 1 looks good. After that, I don’t see how this addresses the core problems. Let’s assume for now that LLMs already have a pretty good model of human values: how do you get a system to optimize for those values? What is the feedback signal, and how do you prevent it from getting corrupted by Goodhart’s Law? Is the system robust in a multi-agent context? And even if the system is fully aligned across all contexts and scales, how do you ensure societal alignment of the human entities controlling it?
As a miniature example focusing on a subset of the Goodhart phase of the problem, how do you get an LLM to output the most truthful responses to questions it is capable of giving—as distinct from proxy goals like the most likely continuation of text or the response that is most likely to get good ratings from human evaluators?