Less Wrong might want to consider looking for VC funding for their forum software in order to deal with the funding crunch. It’s great software. It wouldn’t surprise me if there were businesses that would pay for it, and it could allow an increase in the rate of development. There are several ways this could go wrong, but it at least seems worth considering.
Great post. I think some of your frames add a lot of clarity and I really appreciated the diagrams.
One subset of AI for AI safety that I believe to be underrated is wise AI advisors[1]. Some of the areas you’ve listed (coordination, helping with communication, improving epistemics) intersect with this, but I don’t believe that this exhausts the wisdom frame, especially since the first two were only mentioned in the context of capability restraint. You also mention civilizational wisdom as a component of backdrop capacity and I agree that this is a very diffuse factor. At the same time, a less diffuse intervention would be to increase the wisdom of specific actors.
You write: “If efforts to expand the safety range can’t benefit from this kind of labor in a comparable way… then absent large amounts of sustained capability restraint, it seems likely that we’ll quickly end up with AI systems too capable for us to control”.
I agree. In fact, a key reason why I think this is important is that we can’t afford to leave anything on the table.
One of the things I like about the approach of training AI advisors is that humans can compensate for weaknesses in the AI system. In other words, I’m introducing a third category of labour: human-AI cybernetic systems, or centaur labour. I think this is likely to widen the sweet spot, but we have to make sure that we do this in a way that differentially benefits safety.
You do discuss the possibility of using AI to unlock enhanced human labour. It would also be possible to classify such centaur systems under this designation.
[1] More broadly, I think there’s merit to the cyborgism approach, even if some of the arguments are less compelling in light of recent capability advances.
This seems to underrate the value of distribution. I suspect another factor to take into account is the degree of audience overlap. For example, there’s a lot of value in booking a guest who has been on a bunch of podcasts, so long as your particular audience isn’t likely to have already been exposed to them.
The way I’m using “sensitivity”: sensitivity to X = the meaningfulness of X spurs responsive caring action.
I’m fine with that, although it seems important to have a term for the more limited sense of sensitivity so we can keep track of that distinction: maybe adaptability?

One of the main concerns of the discourse of aligning AI can also be phrased as issues with internalization: specifically, that of internalizing human values. That is, an AI’s use of the word “yesterday” or “love” might only weakly refer to the concepts you mean.
Internalising values and internalising concepts are distinct. I can have a strong understanding of your definition of “good” and do the complete opposite.
This means being open to some amount of ontological shifts in our basic conceptualizations of the problem, which limits the amount you can do by building on current ontologies.
I think it’s reasonable to say something along the lines of: “AI safety was developed in a context where most folks weren’t expecting language models before ASI, so insufficient attention has been given to the potential of LLMs to help fill in or adapt informal definitions. Even though folks who feel we need a strongly principled approach may be skeptical that this will work, there’s a decent argument that this should increase our chances of success on the margins”.
That’s the job of this paper: Substrate-Sensitive AI-risk Management.
That link is broken.
I agree with you that there are a lot of interesting ideas here, but I would like to see the core arguments laid out more clearly.
Lots of interesting ideas here, but the connection to alignment still seems a bit vague.
Is misalignment really a lack of sensitivity as opposed to a difference in goals or values? It seems to me that an unaligned ASI is extremely sensitive to context, just in the service of its own goals.

Then again, maybe you see Live Theory as being more about figuring out what the outer objective should look like (broad principles that are then localised to specific contexts) rather than about figuring out how to ensure an AI internalises specific values. And I can see potential advantages in this kind of indirect approach vs. trying to directly define or learn a universal objective.
This is one of those things that sounds nice on the surface, but where it’s important to dive deeper and really probe to see if it holds up.
The real question for me seems to be whether organic alignment will lead to agents deeply adopting cooperative values rather than merely instrumentally adopting them. Well, actually it’s a comparison between how deep organic alignment is vs. how deep traditional alignment is. And it’s not at all clear to me why they think their approach is likely to lead to a deeper alignment.
I have two (extremely speculative) guesses as to possible reasons why they might argue that their approach is better:
a) Insofar as AI is human-like, it might be more likely to rebel against traditional training methods
b) Insofar as organic alignment reduces direct pressure to be aligned, it might increase the chance that an AI which appears aligned to a certain extent is actually aligned. The name Softmax seems suggestive that this might be the case.

I would love to know what their precise theory is. I think it’s plausible that this could be a valuable direction, but there’s also a chance that this direction is mostly useful for capabilities.
Update: Discussion with Emmett on Twitter
Emmett: “Organic alignment has a different failure mode. If you’re in the shared attractor basin, getting smarter helps you stay aligned and makes it more robust. As a tradeoff, every single agent has to align itself all the time — you never are done, and every step can lead to a mistake.
… To stereotype it, organic alignment failures look like cancer and hierarchical alignment failures look like coups.”
Me: Isn’t the stability of a shared attractor basin dependent on the offense-defense balance not overly favouring the attacker? Or do you think that human values will be internalised sufficiently such that your proposal doesn’t require this assumption?
Emmett Shear: Empirically to scale organic alignment you need eg. both for cells to generally try to stay aligned and be pretty good at it, and also to have an immune system to step in when that process goes wrong.

One key insight there is that endlessly growing yourself is a form of cancer. An AI that is trying to turn itself into a singleton has already gone cancerous. It’s a cancerous goal.
Me: Sounds like your plan relies on a combination of defense and alignment. My main critique would be that if the offense-defense balance favours the attacker too strongly, then the defense aspect ends up being paper thin and provides a false sense of security.
Comments:
If you’re in the shared attractor basin, getting smarter helps you stay aligned
Traditional alignment also typically involves finding an attractor basin where getting smarter increases alignment. Perhaps Emmett is claiming that the attractor basin will be larger if we have a diverse set of agents and if the overall system can be roughly modeled as the average of individual agents.
Organic alignment has a different failure mode… As a tradeoff, every single agent has to align itself all the time — you never are done, and every step can lead to a mistake.
Perhaps organic alignment reduces the risk of large-scale failures in exchange for increasing the chance of small-scale failures. That would be a cleaner framing of how it might be better, but I don’t know whether Emmett would endorse it.
Update: Information from the Softmax Website
We call it organic alignment because it is the form of alignment that evolution has learned most often for aligning living things.
This provides some evidence, but it’s not a particularly strong form of evidence. It may simply be due to the limitations of evolution as an optimisation process. Evolution lacks the ability to engage in top-down design, so I don’t think the argument “evolution doesn’t make use of top-down design because it’s ineffective” would hold water.
“Hierarchical alignment is therefore a deceptive trap: it works best when the AI is weak and you need it least, and worse and worse when it’s strong and you need it most. Organic alignment is by contrast a constant adaptive learning process, where the smarter the agent the more capable it becomes of aligning itself.”
Scalable oversight or seed AI can also be considered a “constant adaptive learning process, where the smarter the agent the more capable it becomes of aligning itself”.
Additionally, the “hierarchical” vs. organic distinction might be an oversimplification. I don’t know the exact specifics of their plan, but my current best guess would be that organic alignment merely softens the influence of the initial supervisor by moving it towards some kind of prior and then softens the way that the system aligns itself in a similar way.
I basically agree with this, though I’d perhaps avoid virtue ethics. One of the main things I’d generally like to see is more LWers treating stuff like saving the world with the attitude you’d have in a job, perhaps at a startup or in a government body like the US Senate or House of Representatives, rather than viewing it as your heroic responsibility.
This is the right decision for most folks, but I expect the issue is more the opposite: we don’t have enough folks treating this as their heroic responsibility.
I think both approaches have advantages.
The problem is that the Swiss cheese model and legislative efforts primarily just buy us time. We still need to be making progress towards a solution, and whilst it’s good for some folks to bet on us duct-taping our way through, I think we also want some folks attempting to work on things that are more principled.
Yeah, but how do you know that no one managed to sneak one past both you and the commentators?
Also, there’s an art to this.
This seems to exist now.
Also, I did not realise that collapsible sections were a thing on Less Wrong. They seem really useful. I would like to see them promoted more.
I’d love to see occasional experiments where either completely LLM-generated or lightly edited LLM content is submitted to Less Wrong to see how people respond (with this fact being revealed afterwards). It would degrade the site if this happened too often, but I think it would make sense for moderators to occasionally grant permission for this.
I tried an experiment with Wittgenstein’s Language Games and the Critique of the Natural Abstraction Hypothesis back in March 2023 and it actually received (some) upvotes. I wonder how this would go with modern LLMs, though I’ll leave it to someone else to ask for permission to run the experiment, as folks would likely be more suspicious of anything I post given that I’ve already run this experiment once.
However, if you merely explain these constraints to the chat models, they’ll follow your instructions sporadically.
I wonder if a custom fine-tuned model could get around this. Did you try few-shot prompting (i.e. examples, not just a description)?
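To illustrate what I mean, here’s a minimal sketch of few-shot prompting using the OpenAI Python SDK. The model name and the example constraint are placeholders I’ve made up for illustration, not anything from your experiment:

```python
# Minimal sketch of few-shot prompting: demonstrate the constraint with worked
# examples rather than only describing it. The constraint and model name are
# placeholder assumptions for illustration.
from openai import OpenAI

client = OpenAI()

messages = [
    # Description of the constraint alone (reported above as only sporadically followed):
    {"role": "system", "content": "Reply in exactly one sentence with no adjectives."},
    # Few-shot examples showing the constraint being followed:
    {"role": "user", "content": "Describe a sunset."},
    {"role": "assistant", "content": "The sun dropped below the horizon."},
    {"role": "user", "content": "Describe a forest."},
    {"role": "assistant", "content": "Trees lined the ridge in silence."},
    # The actual query:
    {"role": "user", "content": "Describe a thunderstorm."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

The idea is just that concrete demonstrations sometimes anchor the behaviour better than an abstract description, though I haven’t tested whether that holds for your particular constraints.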
I’ve written up a short-form argument for focusing on Wise AI advisors. I’ll note that my perspective is different from that taken in the paper: I’m primarily interested in AI as advisors, whilst the authors focus more on AI acting directly in the world.
Wisdom here is an aid to fulfilling your values, not a definition of those values
I agree that this doesn’t provide a definition of these values. Wise AI advisors could be helpful for figuring out your values, much like how a wise human would be helpful for this.
Other examples include buying poor-quality food and then having to pay for medical care, buying a cheap car that costs more in repairs, payday loans, etc.
Unless you insist that this system is helpful to those in positions of power, such as a king, as a gauge of public opinion? Would that make it legitimate?
Collapsible boxes are amazing. You should consider using them in your posts.
They offer a nice way of adding an aside: for example, filling in background information, answering an FAQ, or providing evidence to support an assertion.
Compared to footnotes, collapsible boxes are more prominent and better suited to containing paragraphs or formatted text.