I don’t think any of these quite capture what I consider the “main argument” that capabilities generalize more than alignment (although 2 comes closest). That said, this is closely coupled to what I currently consider the most-difficult-to-explain concept in alignment, so it’s not surprising that none of the arguments capture it. I’ll take a shot here.
Motivating question: how can we build a feedback signal for alignment? What goes wrong with a feedback signal for alignment?
It’s easy to come up with a crappy proxy feedback signal—just use human approval or something. And then it will obviously fail horribly under sufficient optimization pressure. But then (goes a standard response) maybe by the time our feedback signal breaks down under optimization pressure, we’ll have figured out something better. We’ll have noticed the ways in which the original broke down, and we’ll fix those, and keep iterating.
That class of strategies is doomed for multiple reasons, but the one I want to highlight here is: how do we notice the ways in which the original feedback signal breaks? How do we notice the problems on which to iterate? Whatever it is that we’re doing to notice problems, that’s the “real” feedback signal, at the outermost optimization loop (i.e. the optimization loop of humans iteratively designing the system). And that outermost feedback signal is also a crappy proxy signal. Humans are not easily able to tell when or where problems occurred even in hindsight, in general. (And that’s before we get anywhere near crazy shit like “Potemkin village world”.)
Now, this isn’t meant to be a proof of impossibility of alignment, or anything like that. Rather, the point is that alignment of strong optimizers simply cannot be done without grounding out in something fundamentally different from a feedback signal. There might be training against feedback signals somewhere in the architecture, but the core problems of alignment have to be solved via something more than just feedback.
No. I strongly disagree, assuming you mean “feedback signals” to include “reward signals.” The feedback signal is not the optimization target. The point of the feedback signal is not to be safely maximizable. The point of a feedback signal is to supply cognitive-updates to the network/agent. If the cognitive-updates grow human-aligned cognitive patterns which govern the AI’s behavior, we have built an aligned agent.
For example, suppose that I penalize the agent whenever I catch it lying. Then credit assignment de-emphasizes certain cognitive patterns which produced those outputs, and—if there are exact gradients to alternative actions—emphasizes or fine-tunes new lines of computation which would have produced the alternative actions in that situation. Concretely, I ask the AI whether it hates dogs, and it says “yes”, and then I ask it whether it admitted to hating dogs, and it says “no.”
Perhaps the AI had initially lied due to its pretrained initialization predicting that a human would have lied in that context, but then that reasoning gets penalized by credit assignment when I catch the AI lying. The reinforcement tweaks the AI to be less likely to lie in similar situations. Perhaps it learns “If a human would lie, then be honest.” Perhaps it learns some totally alien other thing. But importantly, the AI is not necessarily optimizing for high reward—the AI is being reconfigured by the reinforcement signals.
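To make the mechanism concrete, here is a minimal toy sketch (PyTorch; the model, sizes, and “caught lying” interface are all made up for illustration, not taken from the comment above) of the kind of update being described: a scalar penalty when the human catches a lie, fed through a REINFORCE-style policy-gradient step so that credit assignment pushes down the probability of the output that drew the penalty.

```python
import torch
import torch.nn as nn

# Toy "policy": maps a 4-token context to a distribution over next tokens.
VOCAB, HIDDEN, CONTEXT_LEN = 100, 32, 4
policy = nn.Sequential(
    nn.Embedding(VOCAB, HIDDEN),
    nn.Flatten(),
    nn.Linear(HIDDEN * CONTEXT_LEN, VOCAB),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(context_tokens: torch.Tensor, chosen_token: int, caught_lying: bool) -> None:
    """One REINFORCE-style update: reward is -1 if the human caught a lie, else +1.

    The gradient of -reward * log pi(chosen | context) de-emphasizes whatever
    computation produced the penalized output (and reinforces it when the output
    was rewarded). The reward is only used to reshape the policy's parameters;
    nothing here hands the policy "maximize reward" as an objective.
    """
    logits = policy(context_tokens.unsqueeze(0))                  # shape (1, VOCAB)
    logprob = torch.log_softmax(logits, dim=-1)[0, chosen_token]  # log pi(chosen | context)
    reward = -1.0 if caught_lying else 1.0
    loss = -reward * logprob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# e.g. penalize the dialogue where the AI was caught lying about hating dogs:
reinforce_step(torch.randint(0, VOCAB, (CONTEXT_LEN,)), chosen_token=7, caught_lying=True)
```

In this toy loop the reward is a knob on the update rule rather than a target the agent is told to pursue, which is the sense in which the AI is “reconfigured by the reinforcement signals” rather than necessarily turned into a reward optimizer.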
I think the key question of alignment is: How do we provide reinforcement signals so as to reliably reinforce and grow certain kinds of cognition within an AI? Asking after feedback signals which don’t “fail horribly under sufficient optimization pressure” misses this more interesting and relevant question.
Straw person: We haven’t found any feedback producer whose outputs are safe to maximise. We strongly suspect there isn’t one.
Ramana’s gloss of TurnTrout: But AIs don’t maximise their feedback. The feedback is just input to the algorithm that shapes the AI’s cognition. This cognition may then go on to in effect “have a world model” and “pursue something” in the real world (as viewed through its world model). But its world model might not even contain the feedback producer, in which case it won’t be pursuing high feedback. (Also, it might just do something else entirely.)
Less straw person: Yeah I get that. But what kind of cognition do you actually get after shaping it with a lot of feedback? (i.e., optimising/selecting the cognition based on its performance at feedback maximisation) If your optimiser worked, then you get something that pursues positive feedback. Spelling things out, what you get will have a world model that includes the feedback producer, and it will pursue real high feedback, as long as doing so is a possible mind configuration and the optimiser can find it, since that will in fact maximise the optimisation objective.
Possible TurnTrout response: We’re obviously not going to be using “argmax” as the optimiser though.
Thanks for running a model of me :)
Actual TurnTrout response: No.
Addendum: I think that this reasoning fails on the single example we have of general intelligence (i.e. human beings). People probably do value “positive feedback” (in terms of reward prediction error or some tight correlate thereof), but people are not generally reward optimizers.
I think perhaps a lot of work is being done by “if your optimiser worked”. This might also be where there’s a disanalogy between humans<->evolution and AIs<->SGD+PPO (or whatever RL algorithm you’re using to optimise the policy). Maybe evolution is actually a very weak optimiser that doesn’t really “work”, compared to SGD+RL.
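A toy numerical sketch of where “if your optimiser worked” is doing the work (the function and numbers are entirely made up, purely to illustrate the distinction): a global argmax over configurations lands on the tall, narrow “hack the feedback producer” spike, while local gradient ascent from the initialization settles on the nearby “do the task” optimum and never encounters the spike at all.

```python
import numpy as np

# Hypothetical 1-D "mind configuration" theta. feedback() has a broad optimum
# near theta = 1 (do-the-task cognition) and a much taller, narrow spike near
# theta = 8 (the configuration that games the feedback producer).
def feedback(theta):
    honest_work = np.exp(-(theta - 1.0) ** 2)
    feedback_hack = 5.0 * np.exp(-100.0 * (theta - 8.0) ** 2)
    return honest_work + feedback_hack

# An optimiser that "works" in the strong sense: global search over configurations.
thetas = np.linspace(-5.0, 15.0, 200001)
argmax_theta = thetas[np.argmax(feedback(thetas))]   # ~8.0: the feedback hack

# A local optimiser: gradient ascent from the initialization (SGD-like).
theta, lr, eps = 0.0, 0.1, 1e-4
for _ in range(2000):
    grad = (feedback(theta + eps) - feedback(theta - eps)) / (2 * eps)
    theta += lr * grad                                # converges near 1.0

print(f"global argmax: {argmax_theta:.2f}, local ascent endpoint: {theta:.2f}")
```

Whether SGD-plus-RL on real networks behaves more like the global search or the local one is, on this framing, exactly where the evolution-vs-within-lifetime-learning disanalogy matters.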
I think that evolution is not the relevant optimizer for humans in this situation. Instead consider the within-lifetime learning that goes on in human brains. Humans are very probably reinforcement learning agents in a relevant sense; in some ways, humans are the best reinforcement learning agents we have ever seen.
I think the way I’d fit that into my ontology is “the reward signal is not the relevant feedback signal (for purposes of this argument)”. The relevant feedback signal is whatever some human looks at, at the end of the day, to notice when there’s problems or to tell how well the AI is doing by the human’s standards. It’s how we (human designers/operators) notice the problems on which to iterate. It’s whatever the designer is implicitly optimizing for, in the long run, by developing an AI via the particular process the designer is using.
If the human is just using the reward signal as a control interface for steering the AI’s internals, then the reward signal is not the feedback signal to which this argument applies.
We discussed more in person. I ended up agreeing with (what I perceive to be) a substantially different claim than I read from your original comment. I agree that we can’t just figure out alignment by black-boxing AI cognition and seeing whether the AI does good things or not, nor can we just set up feedback loops on that (e.g. train a succession of agents and tweak the process based on how aligned they seem) without some substantial theoretical underpinnings with which to interpret the evidence.
However, I still don’t see how your original comment is a reasonable way to communicate this state of mind. For example, you wrote:

It’s easy to come up with a crappy proxy feedback signal—just use human approval or something. And then it will obviously fail horribly under sufficient optimization pressure.

What does this mean, if not using human approval as a reward signal? Can you briefly step me through a fictional scenario where the described failure obtains?

It’s easy to come up with a crappy proxy feedback signal—just use human approval or something. And then it will obviously fail horribly under sufficient optimization pressure.

Hm.

Now I don’t understand why this will obviously fail horribly, if your argument doesn’t apply to reward signals. How does human approval fail horribly when used in RL training?
alignment of strong optimizers simply cannot be done without grounding out in something fundamentally different from a feedback signal.

I don’t think this is obvious at all. Essentially, we have to make sure that humans give feedback that matches their preferences, and that the agent isn’t changing the human’s preferences to be more easily optimized.
We have the following tools at our disposal:
Recursive reward modelling / Debate. By training agents to help with feedback, improvements in optimization power boost both the feedback and the processes that could fool the feedback. It’s possible that it’s easier to fool humans than it is to help them not be fooled, but it’s not obvious that this is the case.
Path-specific objectives. By training an explicit model of how humans will be influenced by agent behavior, we can design an agent that optimizes the hypothetical feedback that would have been given, had the agent’s behavior not changed the human’s preferences (under some assumptions); a rough sketch of this idea follows below.
This makes me mildly optimistic about using feedback even for relatively powerful optimization.
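A minimal sketch of the “hypothetical feedback” idea from the path-specific objectives item above (every name and interface here is hypothetical; this is an illustration of the objective, not the actual construction from the literature):

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass(frozen=True)
class HumanModel:
    """Frozen snapshot of (our best model of) the human's current preferences."""
    preferences: dict

def hypothetical_feedback(plan, human_before: HumanModel,
                          feedback_fn: Callable[[object, HumanModel], float]) -> float:
    # Score the plan by the feedback the *unchanged* human would give,
    # even if actually executing the plan would shift the human's preferences.
    return feedback_fn(plan, human_before)

def choose_plan(plans: Sequence, human_before: HumanModel,
                feedback_fn: Callable[[object, HumanModel], float]):
    # Optimize the counterfactual signal rather than the post-influence feedback,
    # removing the incentive to push the human toward easier-to-satisfy preferences.
    return max(plans, key=lambda p: hypothetical_feedback(p, human_before, feedback_fn))
```

The load-bearing choice is that the score is computed against a frozen model of the human, so manipulating the real human buys the agent nothing under this objective.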
Minor rant about this in particular:

Essentially, we have to make sure that humans give feedback that matches their preferences...
Humans’ stated preferences do not match their preferences-in-hindsight, neither of those matches humans’ self-reported happiness/satisfaction in-the-moment, none of that matches humans’ revealed preferences, and all of those are time-inconsistent. IIRC the first section of Kahneman’s textbook Well-Being: The Foundations of Hedonic Psychology is devoted entirely to the problem of getting feedback from humans on what they actually like, and the tldr is “people have been working on this for decades and all our current proxies have known problems” (not to say they don’t have unknown problems too, but they definitely have known problems). Once we get past the basic proxies, we pretty quickly run into fundamental conceptual issues about what we even mean by “human preferences”.
The desiderata you mentioned:
1. Make sure the feedback matches the preferences
2. Make sure the agent isn’t changing the preferences
It seems that RRM/Debate somewhat addresses both of these, and path-specific objectives is mainly aimed at addressing issue 2. I think (part of) John’s point is that RRM/Debate don’t address issue 1 very well, because we don’t have very good or robust processes for judging the various ways we could construct or improve these schemes. Debate relies on a trustworthy/reliable judge at the end of the day, and we might not actually have that.
If the problem is “humans don’t give good feedback”, then we can’t directly train agents to “help” with feedback; there’s nothing besides human feedback to give a signal of what’s “helping” in the first place. We can choose some proxy for what we think is helpful, but then that’s another crappy proxy which will break down under optimization pressure.
It’s not just about “fooling” humans, though that alone is a sufficient failure mode. Bear in mind that in order for “helping humans not be fooled” to be viable as a primary alignment strategy it must be the case that it’s easier to help humans not be fooled than to fool them in approximately all cases, because otherwise a hostile optimizer will head straight for the cases where humans are fallible. And I claim it is very obvious, from looking at existing real-world races between those trying to deceive and those trying to expose the deception, that there will be plenty of cases where the expose-deception side does not have a winning strategy.
The agent changing “human preferences” is another sufficient failure mode. The strategy of “design an agent that optimizes the hypothetical feedback that would have been given” is indeed a conceptually-valid way to solve that problem, and is notably not a direct feedback signal in the RL sense. At that point, we’re doing EU maximization, not reinforcement learning. We’re optimizing for expected utility from a fixed model, we’re not optimizing a feedback signal from the environment. Of course a bunch of the other problems of human feedback still carry over; “the hypothetical feedback a human would have given” is still a crappy proxy. But it’s a step in the right direction.
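To spell out the contrast in the last two sentences (with made-up interfaces; neither function is a real API), the two setups look roughly like this:

```python
from typing import Callable, Sequence

def rl_style_training(run_episode: Callable, ask_human: Callable,
                      policy_update: Callable, steps: int) -> None:
    """Feedback-signal loop: the score is queried from the environment (including
    the human), so whatever can shift that signal shapes what gets trained."""
    for _ in range(steps):
        trajectory = run_episode()
        score = ask_human(trajectory)        # live feedback, open to being gamed
        policy_update(trajectory, score)

def eu_style_planning(candidate_plans: Sequence, predict_outcome: Callable,
                      utility: Callable):
    """Expected-utility loop: the score comes from a fixed utility model applied to
    predicted outcomes; there is no channel back to a live feedback producer."""
    return max(candidate_plans, key=lambda plan: utility(predict_outcome(plan)))
```

Of course the fixed utility model can still be a crappy proxy, as noted above; the difference is only in where the optimization pressure points.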
Sure, humans are sometimes inconsistent, and we don’t always know what we want (thanks for the references, that’s useful!). But I suspect we’re mainly inconsistent in borderline cases, which aren’t catastrophic to get wrong. I’m pretty sure humans would reliably state that they don’t want to be killed, or to have lots of other people die, etc. And that when they have a specific task in mind, they state that they want the task done rather than not. All of this is subject to them actually understanding the main considerations for whatever plan or outcome is in question, but that is exactly what debate and RRM are for.
My objection to this objection is that, for the most part, we don’t have the option not to pick the best feedback signal available at any given time. From a systems perspective, alignment only generalizes strongly if it improves capability enough for the relevant system to survive in competition with other systems. This is true at many scales of system, but always for the same reason: competition between systems means that the most adaptive approach wins. A common mistake is to assume “adaptive” means “growth/accumulation/capture rate”, but what it really means is “durability per unit efficiency”: the instrumental drive to capture resources as fast as possible is fundamentally a decision-theory error made by local optimizers.
To consider a specific example of systems with this decision-theory error: a limitation when gene-driving mosquitos is that if the genes you add don’t make the modified mosquitos sufficiently more adaptive, they’ll just die out. You’d need to perform some sort of trade where you offer the modified mosquitos a modified non-animal food source that only they can eat, and that somehow can’t be separated from the gene drive; you need to offer them a genetic update rule that reliably produces cooperation between species. If you can offer this, then mosquitos which become modified will be durably more competitive, because they have access to food sources that would poison unmodified mosquitos, and they can incrementally stop threatening humans, so humans would no longer be seeking a way to destroy the species entirely. But it only works if you can get the mosquitos to coordinate en masse, and any mutation that makes a mosquito a defector against mosquito-veganism needs to be stopped in its tracks. The mosquito swarm has to reproduce away the interspecies-defection strategy and then not allow it to return, while simultaneously preserving the species.
Similarly, in most forms of AI safety, there are at least three major labs you need to convince: DeepMind, OpenAI, and <whatever is going on over in China>. There are also others that will replicate experiments, and some that will perform high-quality experiments with somewhat less compute funding. Between all of them, you have to come up with a mechanism of alignment that improves capability and which is also convergent about the alignment: if your alignment system doesn’t get better alignment-durability/watt as a result of capability improvement, you haven’t actually aligned anything, just papered over a problem. To some degree you can hope that one of these labs gets there first; but because capability growth is incremental, it’s looking less and less likely that there will be a single watershed moment where a lab pulls so far ahead that no competition can be mounted. And during that window, defense of friendly systems needs to become stronger than inter-system attack.
(By “system”, again, I mean any organism or meta-organism or neuron or cell or anything in between.)
One example of a goal we need an aligned planetary system of beings to accomplish is taking enough control of the ecosystem to solve global warming. But in order to do that without screwing everything up, we need a clear picture of what forms of interference with what parts of the universe are acceptable: some clear notion of multi-tenant ownership that allows interfacing with the needs of multiple subsystems to determine what their requirements are for their adjacent systems.
I find it notable and interesting that Anthropic’s recent interpretability research (the SoLU paper) focuses on isolating individual neurons’ concept ownership, so that the privileged basis keeps them from interfering with each other. I’m intentionally stretching how far I can generalize this, but I really think this direction of reasoning has something interesting to say about ownership of matter as well. Local internal coherence of matter ownership is a core property of a human body that should not be violated; while it’s hard to precisely identify whether it’s been violated subtly, sudden death is an easy-to-identify example of a state transition where the local informational process of a human existing has suddenly ceased and the low-entropy complexity was lost. At the same time, Anthropic’s paper is related to previous work on compressibility; attempting to improve interpretability ultimately boils down to attempting to improve the representation quality until it reaches a coherent, distilled structure that can be understood, as discussed in that paper.
I’d argue that improvements to interpretability focused on coherent binding to physical variables are inherently connected to improving the formalizability of the functions a neural network represents, and that that kind of improvement has the potential to allow binding the optimality of your main loss function more accurately to the functions you intend to optimize in the first place.
So then my question becomes—what competitive rules do we want to apply to all scales (within bacteria, within a neural network, within an insect, within a mammal, within a species, within a planet, between planets), in order to get representations at every scale that coherently describe what dynamics are acceptable interference and what are not?
Again, I’m pulling together tenuous threads that I can’t quite tie properly, and some of the links might be invalid. I’m a software engineer first, research-ideas generator second, and I might be seeing ghosts. But I suspect that somewhere in game theory/game dynamics there’s an insight about how to structure competition in constructed processes that allows describing how to teach the universe to remember everything anyone ever considered beautiful, or something along those lines.
If this thread is of interest, I’d like to discuss it with more people. I’ve got some links in other posts as well.
I’m interested in this line of reasoning. I can’t really say much in response right now, but I just read the paper you linked (they write such clear and, heh, easily interpretable papers, don’t they?), and I have strong opinions about “the correct value system” being rooted in maximizing some weighted sum of the “autonomy” of all living/agentic/intelligent systems, which it seems like you’re gesturing towards as well. I’m interested in trying to figure out how to formalize this.
Nice—thanks for this comment—how would the argument be summarised as a nice heading to go on this list? Maybe “Capabilities can be optimised using feedback but alignment cannot” (and feedback is cheap, and optimisation eventually produces generality)?
Maybe “Humans iteratively designing useful systems and fixing problems provide a robust feedback signal for capabilities, but not for alignment”?
(Also, I now realize that I left this out of the original comment because I assumed it was obvious, but to be explicit: basically any feedback signal on a reasonably-complex/difficult task will select for capabilities. That’s just instrumental convergence.)