He discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO.
I happen to work for a company whose software uses checksums at many layers, and RAID encoding and low-density parity codes at the lowest layers, to detect and recover from hardware failures. It works pretty well, and the company has sold billions of dollars of products of which that is a key component. Also, many (most?) enterprise servers use RAM with error-correcting codes; I think the common configuration allows it to correct single-bit errors and detect double-bit errors, and my company’s machines will reset themselves when they detect double-bit errors and other problems that impugn the integrity of their runtime state.
One could quibble about whether “retrieving and querying the data that was written” counts as a “computation”, and the extent to which the recovery is achieved through software as opposed to hardware[1], but the source material is a James Mickens comedic rant in any case.
I’d say the important point here is: There is a science to error correction, to building a (more) perfect machine out of imperfect parts, where the solution to unreliable hardware is more of the unreliable hardware, linked up in a clever scheme. They’re good enough at it that each successive generation of data storage technology uses hardware with higher error rates. You can make statements like “If failures are uncorrelated, and failures happen every X time units on average per component, and it takes Y time units to detect and recover a failure, and we can recover from up to M failures out of every group of N components, then on average we will have an unrecoverable failure every Z time units”; then you can (a) think about how to arrange it so that Z >> X and (b) think about the dangers of correlated failures.
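To make the X/Y/M/N/Z statement concrete, here is a rough sketch of the usual back-of-the-envelope estimate, assuming independent, exponentially distributed failures and recovery much faster than failure (the numbers are made up for illustration):

```python
from math import prod

def mean_time_to_unrecoverable_failure(n, m, mttf, mttr):
    """Approximate Z: mean time until more than m of n components are failed
    at once, given per-component mean time to failure `mttf` (X) and mean
    time to detect-and-recover a failure `mttr` (Y).

    Uses the standard RAID-style approximation
        Z ~= mttf**(m+1) / (n*(n-1)*...*(n-m) * mttr**m),
    valid when mttr << mttf and failures really are independent."""
    ways = prod(n - i for i in range(m + 1))   # n*(n-1)*...*(n-m)
    return mttf ** (m + 1) / (ways * mttr ** m)

# Example: 12 components, tolerate any 2 simultaneous failures (N=12, M=2),
# X = 100,000 hours per-component MTTF, Y = 24 hours to detect and rebuild.
z = mean_time_to_unrecoverable_failure(n=12, m=2, mttf=100_000, mttr=24)
print(f"Z ~= {z:.2e} hours")   # comfortably Z >> X, under these assumptions
```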
(The valid complaint that Mickens’s character makes is that it would suck if every application needed to weave error correction into every codepath, implement its own RAID, etc. It works much better if the error correction is done by some underlying layer that the application treats as an abstraction—using the abstraction tends to be more complex than pretending errors don’t exist (and for noncritical applications the latter is a valid strategy), but not terrible.[2])
With regard to AI: it seems likely that we’ll end up making use of potentially dangerous AIs to do things. If we do, then we’d want powerful safeguards. It seems unlikely that we’d have 100% confidence in any particular safeguard (i.e. unlikely we’d have formal proofs of the safety properties we’d want). Then we’d want to keep adding more safeguards as long as their failure modes weren’t totally covered by those of preexisting safeguards (and as long as they’re affordable); ideally we’d try to estimate (ideally measure) the chance of failure of each safeguard and how independent they are.
Now, would some of these safeguards be built with the aid of earlier versions of AI? I’d guess yes. This could take a few forms, ranging from “formally-verified direct programs that the earlier AI generated along with their proofs of safety and effectiveness” to “smaller neural nets (cut-down versions of earlier AIs?) that monitor the training of the new AI’s net” to “weaker full-blown AIs watching the decisions output by the new full-blown AI”. I suspect any of these might be useful.
The Godzilla complaint seems to be about that last thing, having weaker AIs monitor the new AI. Well, um, it seems to me that, if the weaker AIs’ output is restricted to the one-bit signal “do we hit the killswitch on the new AI” (and that signal is executed by a machine), then there’s no danger of Godzilla destroying Tokyo after it has taken down Mega-Godzilla. (It’s like cutting off Godzilla’s limbs and tying him to Mega-Godzilla with his jaws open around Mega-Godzilla’s neck, so his only available action is biting down.) There is a danger, if this happens a lot, of the subsequent training of the new AI merely teaching it to evade detection by the weaker predecessors; this danger should be taken seriously; but it is also less bad than having Mega-Godzilla kill us the first time it starts going rogue; the safeguard is net positive.
Addressing specific claims:
If we use one AI to oversee another AI, and something goes wrong, that’s not a recoverable error; we’re using AI assistance in the first place because we can’t notice the relevant problems without it.
If, among our many precautions, we have AI 1 oversee AI 2 (in this scheme I describe where AI 1’s only available action is to output a bit that shuts off AI 2)[3], and AI 2 starts to go rogue, and AI 1 fails in its task, then we’re no worse off than if we lacked that particular precaution—either the other precautions save us or they don’t. If AI 1 succeeds, then we have at least some chance of being able to debug what went wrong with AI 2, and save ourselves in the future. Using AI 1 as overseer is strictly beneficial, minus implementation costs. (There’s a second-order risk of someone overestimating how much it protects us and then choosing to be more reckless, but that’s a different kind of thing.)
If we use one maybe-somewhat-aligned AI to build another, and something goes wrong, that’s not a recoverable error
If “goes wrong” means “it kills us all”, then yeah. If “goes wrong” means “it starts acting dangerous but we manage to shut it off”, that is potentially recoverable.
if we had better ways to detect misalignment in the child we’d already have used them on the parent.
The parent could be aligned but incompetent (heh, I think many people know parents like that), while the child is unaligned and more competent. (Why would the programmers create the child if it weren’t expected to be more competent than the parent?) It seems easy to imagine alignment checks that the parent would pass while the child would fail.
The checksums are computed in software on the CPU, though with hardware acceleration; I think the RAID stuff is clearly software; the LDPC stuff is done by more specialized hardware; the ECC RAM is hardware, but the choice of how to respond to double-bit errors is software.
ECC RAM is one example: the application can just assume that memory accesses are correct, and that double-bit errors will cause the operating system to kill the application—so the application itself needs zero lines of code about RAM errors, though now there does need to be monitoring to restart the process and/or fail over to another machine.
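As a toy sketch of what that monitoring can look like at its simplest, here is a supervisor loop that restarts a worker process whenever it exits abnormally (the ./worker command is hypothetical):

```python
import subprocess
import time

# Rerun the (hypothetical) worker whenever it dies abnormally, e.g. after the
# OS kills it in response to an uncorrectable memory error.
while True:
    result = subprocess.run(["./worker"])
    if result.returncode == 0:
        break          # clean exit: nothing more to do
    time.sleep(5)      # back off briefly, then restart the failed worker
```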
Another example is using a durable database, and going from “just increment and decrement the variables in memory” to “create a database transaction, to be attempted / rolled back as a unit”. That does require code changes in the application itself.
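A minimal sketch of that before/after, using sqlite3 from the standard library (the accounts table and the numbers are made up for illustration):

```python
import sqlite3

# Throwaway in-memory database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

def transfer_in_memory(accounts, src, dst, amount):
    # Before: just mutate variables; a crash between these two lines leaves
    # the totals inconsistent, and nothing survives a restart.
    accounts[src] -= amount
    accounts[dst] += amount

def transfer_with_transaction(conn, src, dst, amount):
    # After: both updates are applied atomically or not at all.
    with conn:  # sqlite3 connection as context manager = one transaction
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))

transfer_with_transaction(conn, 1, 2, 30)
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 70), (2, 30)]
```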
In practice we’d probably want it to output something more like a number between 0 and 1 representing its worry level, and likely the killswitch goes off if either a single overseer exceeds some danger threshold or multiple overseers exceed some lower threshold.
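A toy sketch of that triggering rule (the thresholds and group size are illustrative, not recommendations):

```python
def should_trigger_killswitch(worry_levels, single_threshold=0.95,
                              group_threshold=0.6, group_size=3):
    """worry_levels: each overseer's output in [0, 1]. Trigger if any single
    overseer is extremely worried, or if several are moderately worried."""
    if any(w >= single_threshold for w in worry_levels):
        return True
    return sum(w >= group_threshold for w in worry_levels) >= group_size

assert should_trigger_killswitch([0.1, 0.97, 0.2])        # one very worried overseer
assert should_trigger_killswitch([0.7, 0.65, 0.61, 0.1])  # several moderately worried
assert not should_trigger_killswitch([0.2, 0.3, 0.59])    # below both bars
```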
One important difference between data storage and computation or AI: courtesy of Shannon and Hamming, we have a really good understanding of information transmission (which includes information storage). All those nice error-correction codes are downstream of very well-understood theory.
If we had theory as solid as information theory for AI and alignment, then yeah, I’d be a hell of a lot more optimistic about using one AI to oversee another somewhere in the process. Like, imagine we had the alignment analogue of an error-detecting code which provably detects two-bit errors and corrects one-bit errors with only a logarithmic amount of overhead. With theory that strong (and battle-tested in reality) it becomes plausible that unknown unknowns won’t inevitably ruin all our plans.
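For concreteness, this is the kind of code being referred to: the classic extended Hamming (SECDED) construction corrects any single-bit error and detects any double-bit error, with parity overhead that grows only logarithmically in the block length. A minimal sketch (the tiny 4-data-bit version, chosen for readability rather than efficiency):

```python
def secded_encode(data):
    """Encode 4 data bits into an 8-bit codeword: positions 1..7 are a
    Hamming(7,4) codeword (data at positions 3, 5, 6, 7), and position 0
    holds an overall parity bit used for double-error detection."""
    assert len(data) == 4
    c = [0] * 8
    c[3], c[5], c[6], c[7] = data
    for p in (1, 2, 4):
        # Parity bit p covers every position i (other than itself) with i & p.
        c[p] = 0
        for i in range(1, 8):
            if i != p and (i & p):
                c[p] ^= c[i]
    c[0] = 0
    for i in range(1, 8):
        c[0] ^= c[i]
    return c


def secded_decode(code):
    """Return (status, data). status is 'ok', 'corrected', or 'double_error'."""
    c = list(code)
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= c[i]
        if parity:
            syndrome |= p            # syndrome = index of a single flipped bit
    overall = 0
    for bit in c:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:               # odd parity: exactly one flip; fix it
        if syndrome:
            c[syndrome] ^= 1
        status = 'corrected'
    else:                            # even parity, nonzero syndrome: two flips
        status = 'double_error'
    return status, [c[3], c[5], c[6], c[7]]


# Flip one bit: corrected. Flip two bits: detected but not corrected.
word = secded_encode([1, 0, 1, 1])
one_flip = list(word); one_flip[6] ^= 1
two_flips = list(word); two_flips[2] ^= 1; two_flips[5] ^= 1
assert secded_decode(one_flip) == ('corrected', [1, 0, 1, 1])
assert secded_decode(two_flips)[0] == 'double_error'
```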
Well, the basic idea “adding more safeguards decreases the likelihood they’ll all fail simultaneously, as long as there isn’t a perfect correlation of failure modes” is a simple mathematical fact. “What is the probability of this safeguard failing to detect a rogue AI?” is hard to answer, but “What might this new safeguard do that the other safeguards don’t do?” is easier.
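A toy numerical illustration (all numbers made up): with k independent safeguards each missing a rogue AI with probability p, the all-fail probability shrinks geometrically, while a shared failure mode that defeats everything at once with probability q puts a floor under it:

```python
def p_all_fail(p, k, q=0.0):
    """Probability that every one of k safeguards misses, where each misses
    independently with probability p, except that with probability q a shared
    failure mode defeats all of them at once."""
    return q + (1 - q) * p ** k

for k in (1, 3, 10, 100):
    print(f"k={k:3d}  independent: {p_all_fail(0.3, k):.2e}  "
          f"with 5% common cause: {p_all_fail(0.3, k, q=0.05):.2e}")
# The independent column keeps shrinking; the correlated column bottoms out
# near q, which is why correlated failures are the thing to worry about.
```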
For example: if interpretability work gets anywhere, then one might imagine a suite of safeguards that check for parts of the developing neural net that compute things like “how to detect security holes in C or machine code” or “how quickly humans die from certain poisons” (when that’s not supposed to be the goal); safeguards that check for parts of the net that have many nodes and are not understandable by the other safeguards; safeguards that inspect the usage of CPU or other resources and have some idea of what’s usual; safeguards that try to look for the net thinking strategically about what resource usage looks natural; and so on. These safeguards might all suck / only work in a small fraction of cases, but if you have hundreds or thousands of them, then your odds might get decent.
Or, at least, you might push the correlated failure mode into “the AI has guessed that we have a bunch of safeguards, has imagined the set of safeguards people at our level of competence are likely to have come up with, and is strategically avoiding them”, in which case that strategic thinking is a thing you try to detect. (It would be nice if we ended up being able to prove that the strategic thinking to defeat all our “dangerous thinking detectors” required at least N neurons, and we were able to prune down all nets to the point where less than N not-understood neurons existed.) And—as stated, each of these safeguards might be implemented with the assistance of prior AIs in various ways.
What exactly are you objecting to? Your original post seems to be saying “the entire category of strategies that involves one AI saving us from another is dangerous, doomed, and should be dismissed as ridiculous”, but you don’t seem to be pushing back on my claims that you can cut off Godzilla’s limbs and tie his jaws around Mega-Godzilla’s neck and that the result is net positive. Were you arguing against much more reckless proposals, or people being overconfident in unspecified proposals, or...? The possibility occurs to me that the OP is intentionally overstated in the hopes of provoking a well-thought-out proposal (it did say it anticipated much discussion), along the lines of Cunningham’s Law.
I am mostly objecting to strategies which posit one AI saving us from another as the primary mechanism of alignment—for instance, most of the strategies in 11 Proposals. If we had sufficiently great interpretability, then sure, we could maybe leverage that to make a Godzilla strategy with a decent chance of working (or at least failing in detectable-in-advance ways), but with interpretability tools that good we could probably get a plan without Godzilla to have a decent chance of working (or at least failing in detectable-in-advance ways) by doing basically the same things minus Godzilla. It’s the interpretability tools which take that plan from “close to zero chance of working” to “close to 100% chance of working”; the interpretability is where all the robustness comes from. The Godzilla part adds relatively little and is plausibly net negative (due to making the ML components more complex and brittle).
(Another minor point: “adding more safeguards decreases the likelihood they’ll all fail simultaneously, as long as there isn’t a perfect correlation of failure modes” is only true when the “safeguards” are guaranteed to not increase the chance of failure.)
And—as stated, each of these safeguards might be implemented with the assistance of prior AIs in various ways.
I see two main things you could have in mind here.
First, maybe you imagine training a magic black box to tell us what’s going on inside another magic black box. This is a Godzilla strategy, and fails for the usual Godzilla reasons: errors are not recoverable. If the magic interpretability box fails, we don’t have a built-in way to notice. And no, training lots of magic black boxes to detect lots of things does not really fix the problem; the failure modes are extremely highly correlated. We don’t even need to suppose deception—just a distribution shift in the cognition of the system will cause highly correlated failures, and a distribution shift in cognition is exactly the sort of thing we’d expect from a system just starting to grok consequentialist reasoning.
On the other hand, maybe you imagine AIs serving as research assistants, rather than using AIs directly to interpret other AIs. That plan does have problems, but is basically not a Godzilla plan; the human in the loop means that the standard Godzilla brittleness issue doesn’t really apply.
My reply to the top-level post here is also relevant as a reply to this specific comment.