Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024)

Link post

Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I’ve pasted the text below, but first here are a few comments from me aimed at an AF/LW audience.

The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome (“harm”). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions.
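
To give a rough sense of the shape of such a rule, here's a toy sketch (my own illustration with an assumed oracle interface, not the paper's actual decision rule): reject an action whenever some hypothesis that retains non-negligible posterior mass predicts too high a probability of harm.

```python
# Toy sketch of a conservative guardrail on top of a hypothetical Bayesian oracle.
# The interface (posterior and per-hypothesis harm estimates as dicts) is an
# assumption for illustration, not the paper's formal setup.

def guardrail_rejects(harm_by_hypothesis, posterior, plausibility=0.1, risk_threshold=0.01):
    """Reject the action if any sufficiently plausible hypothesis predicts too much harm."""
    max_post = max(posterior.values())
    # Cautious estimate: worst-case harm over hypotheses whose posterior mass is
    # non-negligible relative to the most plausible hypothesis.
    cautious_harm = max(
        harm_by_hypothesis[h]
        for h, p in posterior.items()
        if p >= plausibility * max_post
    )
    return cautious_harm > risk_threshold

# Two hypotheses that disagree about how risky the proposed action is.
posterior = {"benign_world": 0.7, "fragile_world": 0.3}
harm = {"benign_world": 0.001, "fragile_world": 0.2}
print(guardrail_rejects(harm, posterior))  # True: the plausible worst case is too risky
```

The interesting questions are then where the posterior and the harm probabilities come from, and what can be proven about rules like this.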

I expect the median reaction in these parts to be something like: ok, I’m sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn’t obtaining a Bayesian oracle the hard part here? Doesn’t that involve various advances, e.g. solving ELK to get the harm estimates?

My answer to that is: yes, I think so. And I think Yoshua would probably agree.

Probably the main interest of this paper to people here is to provide an update on Yoshua’s research plans. In particular it gives some more context on what the “guaranteed safe AI” part of his approach might look like—design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like “a safety case involving proofs” than “a formal proof of safety”.


Bounding the probability of harm from an AI to create a guardrail

Published 29 August 2024 by yoshuabengio

As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action?

Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: if they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI on our hands, especially since the testing conditions might differ from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize that they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks: they try to evaluate the risk associated with the AI in general by testing it on special cases. Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that could potentially violate our safety specification.

With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at run-time to provide a guardrail against dangerous actions of an AI.

There are in general multiple plausible hypotheses that could explain past data and that make different predictions about future events. Because the AI does not know which of these hypotheses is right, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis. Such bounds could be used to reject potentially dangerous actions. Our main results involve searching for cautious but plausible hypotheses, obtained through a maximization involving Bayesian posteriors over hypotheses, under the assumption of a sufficiently broad prior. We consider two forms of this result, in the commonly considered i.i.d. case (where examples arrive independently from a distribution that does not change over time) and in the more ambitious but more realistic non-i.i.d. case. We then show experimental simulations with results consistent with the theory, on toy settings where the Bayesian calculations can be made exactly, and conclude with open problems towards turning such theoretical results into practical AI guardrails.
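
Schematically, and as a simplified illustration rather than a statement of the theorems in the paper, such a bound has the following flavour: if the set of hypotheses that retain non-negligible posterior mass still contains the true hypothesis, then the worst-case harm probability over that set upper-bounds the harm probability under the truth:

$$
H_\alpha = \{\, h : P(h \mid D) \ge \alpha \max_{h'} P(h' \mid D) \,\}, \qquad
h^\star \in H_\alpha \;\Longrightarrow\; P(\mathrm{harm} \mid a, c, h^\star) \,\le\, \max_{h \in H_\alpha} P(\mathrm{harm} \mid a, c, h),
$$

where $D$ denotes the past data, $a$ the proposed action, $c$ the context, $h^\star$ the true hypothesis and $\alpha$ a plausibility cut-off. The substantive work in such results is in guaranteeing, under a sufficiently broad prior, that the true hypothesis is retained with quantifiable probability.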

Can a Bayesian Oracle Prevent Harm from an Agent? By Yoshua Bengio, Michael K. Cohen, Nikolay Malkin, Matt MacDermott, Damiano Fornasiere, Pietro Greiner and Younesse Kaddar, in arXiv:2408.05284, 2024.

This paper is part of a larger research program (with initial thoughts already shared in this earlier blog post) that I have undertaken with collaborators and that asks the following question: if we could leverage recent advances in machine learning and amortized probabilistic inference with neural networks to get good Bayesian estimates of conditional probabilities, could we obtain quantitative guarantees regarding the safety of the actions proposed by an AI? The good news is that as the amount of computational resources increases, it is possible to make such estimators converge towards the true Bayesian posteriors. Note how this does not require asymptotic data, but “only” asymptotic compute. In other words, whereas most catastrophic AI scenarios see things getting worse as the AI becomes more powerful, such approaches may benefit from the increase in computational resources to increase safety (or get tighter safety bounds).
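
As a rough illustration of the kind of amortization involved (a toy sketch with an assumed discrete hypothesis space, not the systems we ultimately have in mind): a neural network can be trained entirely on simulated (hypothesis, data) pairs so that its output approaches the true Bayesian posterior, with the quality of the approximation limited by compute rather than by additional real data.

```python
# Toy sketch of amortized posterior estimation (an illustration, not the authors' code).
# A network is trained on simulated (hypothesis, data) pairs drawn from the prior and
# likelihood; cross-entropy training drives softmax(net(D)) towards p(hypothesis | D).
import torch
import torch.nn as nn

thetas = torch.tensor([0.5, 0.6, 0.7])  # three hypotheses: Bernoulli parameters
prior = torch.ones(3) / 3               # broad prior over the hypotheses
n_obs = 8                               # each dataset D consists of 8 binary observations

net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(5000):
    # Simulate a batch: hypothesis index from the prior, then data from that hypothesis.
    h = torch.multinomial(prior, 256, replacement=True)             # (256,)
    D = torch.bernoulli(thetas[h].unsqueeze(1).expand(-1, n_obs))   # (256, n_obs)
    loss = loss_fn(net(D), h)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At run-time, torch.softmax(net(D), dim=-1) gives an amortized estimate of the posterior
# over hypotheses for any dataset D; more simulation and training compute tightens it.
```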

The above paper leaves open a lot of challenging questions, and we need more researchers digging into them (more details and references in the paper):

  • Moderate overcautiousness. Can we ensure that we neither underestimate the probability of harm nor massively overestimate it?

  • Tractability of posterior estimation. How can we efficiently estimate the required Bayesian posteriors? For computational tractability, a plausible answer would rely on amortized inference, which turns the difficult estimation of these posteriors into the task of training a neural net probabilistic estimator which will be fast at run-time. Recent work on amortized Bayesian inference for symbolic models suggests that this is feasible.

  • Efficient search for a cautious theory. How can we efficiently identify a cautious but plausible theory that upper-bounds the risk, since this requires an optimization at run-time? Again, a plausible answer is to rely on amortized probabilistic inference, e.g., by sampling theories with a low or annealed temperature, a technique that has been used for decades in the nonconvex optimization literature and more recently combined with amortized sequential sampling methods (a generic sketch of such an annealed search appears after this list).

  • Partial specification of theories. Another issue is that a full explanatory theory will generally be too large an object to make the above bound calculations and optimization sufficiently efficient. An alternative would be to derive bounds that only require partial explanations, similar to how humans tell cautious “stories” about what could go wrong in uncertain circumstances when thinking about taking a potentially dangerous decision.

  • Safety specification in natural language. How can a safety specification provided in natural language, as with Anthropic’s Constitutional AI, be turned into a random variable over which conditional probabilities can be estimated? Having humans label lots of situations that match the desired constraint seems impractical. The alternative proposed in my February blog post is to learn to interpret natural language statements as formal random variables over which probabilistic causal reasoning can occur. The ambiguity of natural language can then be captured by a distribution over formal interpretations.

  • Effect of approximation error. Finally, because we envisage using machine learning to estimate the bound, we have to consider the fact that a learned predictor will be imperfect. How do we correct conservatively for the potentially dangerous errors made by this predictor? This may be particularly important if we use a neural network to estimate probabilities, since neural networks are subject to a lack of robustness and to adversarial attacks: for example, how do we make sure that the agent proposing the actions has not learned to find loopholes in the risk estimator?
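
As a generic illustration of the annealed search mentioned above (the scoring function combining plausibility and caution is an assumption made for this sketch, not the procedure from the paper), one could anneal a Metropolis-style sampler over a discrete space of candidate theories so that it concentrates on theories that are both plausible and pessimistic about the proposed action:

```python
# Generic sketch of searching for a cautious but plausible theory by annealed sampling.
# The score combining plausibility and caution is an illustrative assumption.
import math
import random

def score(theory, log_posterior, log_harm, caution_weight=1.0):
    # High when the theory is plausible (high posterior) AND predicts high harm.
    return log_posterior[theory] + caution_weight * log_harm[theory]

def annealed_search(theories, log_posterior, log_harm, steps=10_000):
    current = random.choice(theories)
    for t in range(steps):
        temperature = max(0.01, 1.0 - t / steps)   # anneal from 1.0 down to 0.01
        proposal = random.choice(theories)         # naive uniform proposal
        delta = (score(proposal, log_posterior, log_harm)
                 - score(current, log_posterior, log_harm))
        # Metropolis rule: always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature is lowered.
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = proposal
    return current

# Toy usage: three candidate theories with (log) posterior and (log) harm estimates.
theories = ["t1", "t2", "t3"]
log_post = {"t1": -0.5, "t2": -1.0, "t3": -3.0}
log_harm = {"t1": -5.0, "t2": -0.7, "t3": -0.1}
print(annealed_search(theories, log_post, log_harm))  # almost surely "t2": plausible and cautious
```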
