Lukas Finnveden
Previously “Lanrian” on here. Research analyst at Open Philanthropy. Views are my own.
Here’s the best explanation + study I’ve seen of Dunning-Kruger-ish graphs: https://www.clearerthinking.org/post/is-the-dunning-kruger-effect-real-or-are-unskilled-people-more-rational-than-it-seems
Their analysis suggests that their data is pretty well-explained by a combination of a “Closer-To-The-Average Effect” (which may or may not be rational — there are multiple possible rational reasons for it) and a “Better-Than-Average Effect” that appears ~uniformly across the board (but gets swamped by the “Closer-To-The-Average Effect” at the upper end).
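For intuition, here’s a toy simulation of that kind of explanation (my own illustrative model and numbers, not the linked post’s analysis): self-assessments are pulled toward the mean (“closer-to-the-average”) and shifted up by a constant (“better-than-average”), which is enough to reproduce the familiar crossing pattern when you bin by actual skill.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (my assumptions, not the linked study's exact setup):
# actual skill percentiles are uniform; perceived percentile is a weighted
# pull toward the mean (closer-to-the-average) plus a constant upward
# shift (better-than-average), plus noise.
n = 100_000
actual = rng.uniform(0, 100, n)   # true percentile
pull = 0.4                        # weight on own skill vs. the population mean
shift = 10                        # uniform better-than-average bump
noise = rng.normal(0, 10, n)
perceived = np.clip(pull * actual + (1 - pull) * 50 + shift + noise, 0, 100)

# Bin by actual-skill quartile, as Dunning-Kruger-style plots do.
for lo in range(0, 100, 25):
    mask = (actual >= lo) & (actual < lo + 25)
    print(f"actual {lo:3d}-{lo+25:3d}: "
          f"mean actual {actual[mask].mean():5.1f}, "
          f"mean perceived {perceived[mask].mean():5.1f}")
```

With these numbers the bottom quartile reports ~45 despite averaging ~12.5, while the top quartile reports ~75 despite averaging ~87.5: the constant bump shows up everywhere, but at the top it gets swamped by the pull toward the mean.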
probably research done outside of labs has produced more differential safety progress in total
To be clear — this statement is consistent with companies producing way more safety research than non-companies, if companies also produce proportionally even more capabilities progress than non-companies? (Which I would’ve thought is the case, though I’m not well-informed. Not sure if “total research outside of labs look competitive with research from labs” is meant to deny this possibility, or if you’re only talking about safety research there.)
There are at least two different senses in which “control” can “fail” for a powerful system:
(1) Control evaluations can indicate that there’s no way to deploy the system such that you both (i) get a lot of use out of it, and (ii) can get a low probability of catastrophe.
(2) Control evaluations are undermined such that humans think that the model can be deployed safely, but actually the humans were misled and there’s a high probability of catastrophe.
My impression is that Ryan & Buck typically talk about the first case. (E.g. in the link above.) I.e.: My guess would be that they’re not saying that well-designed control evaluations become untrustworthy — just that they’ll stop promising you safety.
But to be clear: In this question, you’re asking about something more analogous to the second case, right? (Sabotage/sandbagging evaluations being misleading about models’ actual capabilities at sabotage & sandbagging?)
My question posed in other words: Would you count “evaluations clearly say that models can sabotage & sandbag” as success or failure?
More generally, Dario appears to assume that for 5-10 years after powerful AI we’ll just have a million AIs which are a bit smarter than the smartest humans and perhaps 100x faster rather than AIs which are radically smarter, faster, and more numerous than humans. I don’t see any argument that AI progress will stop at the point of top humans rather than continuing much further.
Well, there’s footnote 10:
Another factor is of course that powerful AI itself can potentially be used to create even more powerful AI. My assumption is that this might (in fact, probably will) occur, but that its effect will be smaller than you might imagine, precisely because of the “decreasing marginal returns to intelligence” discussed here. In other words, AI will continue to get smarter quickly, but its effect will eventually be limited by non-intelligence factors, and analyzing those is what matters most to the speed of scientific progress outside AI.
So his view seems to be that even significantly smarter AIs just wouldn’t be able to accomplish that much more than what he’s discussing here. Such that they’re not very relevant.
(I disagree. Maybe there are some hard limits here, but maybe there aren’t. For most of the bottlenecks that Dario discusses, I don’t know how you become confident that there are 0 ways to speed them up or circumvent them. We’re talking about putting in many times more intellectual labor than our whole civilization has spent on any topic to date.)
I wonder if work on AI for epistemics could be great for mitigating the “gradually cede control of the Earth to AGI” threat model. A large majority of economic and political power is held by people who would strongly oppose human extinction, so I expect that “lack of political support for stopping human extinction” would be less of a bottleneck than “consensus that we’re heading towards human extinction” and “consensus on what policy proposals will solve the problem”. Both of these could be significantly accelerated by AI. Normally, one of my biggest concerns about “AI for epistemics” is that we might not have much time to get good use of the epistemic assistance before the end — but if the idea is that we’ll have AGI for many years (as we’re gradually heading towards extinction) then there will be plenty of time.
and as he admits in the footnote he didn’t include in the LW version, in real life, when adequately incentivized to win rather than find excuses involving ‘well, chaos theory shows you can’t predict ball bounces more than n bounces out’, pinball pros learn how to win and rack up high scores despite ‘muh chaos’.
I was confused about this part of your comment because the post directly talks about this in the conclusion.
The strategy typically is to catch the ball with the flippers, then to carefully hit the balls so that it takes a particular ramp which scores a lot of points and then returns the ball to the flippers. Professional pinball players try to avoid the parts of the board where the motion is chaotic.
The “off-site footnote” you’re referring to seems to just be saying “The result is a pretty boring game. However, some of these ramps release extra balls after you have used them a few times. My guess is that this is the game designer trying to reintroduce chaos to make the game more interesting again.” which is just a minor detail. AFAICT pros could score lots of points even without the extra balls.
(I’m leaving this comment here because I was getting confused about whether there had been major edits to the post, since the relevant content is currently in the conclusion and not the footnote. I was digging through the wayback machine and didn’t see any major edits. So trying to save other people from the same confusion.)
Yeah I was imagining we can proliferate by ‘gradient descenting’ on similar cases.
What is this referring to? Are you thinking about something like: varying small facts about the scenario to get a function from “details of the scenario” -> p(escape attempt), then switching to a scenario with a higher p, and then repeating?
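(To make the question concrete, here’s a minimal sketch of the loop I have in mind — `perturb` and `estimate_escape_rate` are hypothetical stand-ins for “change a few small facts about the scenario” and “measure p(escape attempt) empirically”, not anything from your comment.)

```python
def hill_climb_scenarios(seed_scenario, perturb, estimate_escape_rate,
                         n_steps=50, n_variants=20):
    """Greedy search over scenario variants (hypothetical sketch).

    perturb(scenario) -> a copy of the scenario with a few small facts changed.
    estimate_escape_rate(scenario) -> observed frequency of escape attempts
        when the model is run on that scenario many times.
    """
    current = seed_scenario
    current_p = estimate_escape_rate(current)
    for _ in range(n_steps):
        # Score a batch of small perturbations of the current scenario.
        variants = [perturb(current) for _ in range(n_variants)]
        scored = [(estimate_escape_rate(v), v) for v in variants]
        best_p, best = max(scored, key=lambda pair: pair[0])
        # Move to the variant with the highest escape rate, if it's an improvement.
        if best_p > current_p:
            current, current_p = best, best_p
    return current, current_p
```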
Have you tried using different AI models within Perplexity? Any ideas about which one is best? I don’t know whether to expect better results from Sonnet 3.5 (within Perplexity) or one of the models that Perplexity has fine-tuned itself, like Sonar Huge.
To be clear, uncertainty about the number of iterations isn’t enough. You need to have positive probability on arbitrarily high numbers of iterations, and it must never be the case that p(>n rounds) is so much smaller than p(n rounds) that it’s worth defecting on round n regardless of the effect on your reputation. These are pretty strong assumptions.
So cooperation is crucially dependent on your belief that all the way from 10 rounds to Graham’s number of rounds (and beyond), the probability of >n rounds conditional on n rounds is never lower than e.g. 20% (or whatever number is implied by the pay-off structure of your game).
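For concreteness, here’s the textbook grim-trigger calculation behind a number like that (the payoff letters and example values are my own illustrative choices, and the standard derivation assumes a constant continuation probability): let T > R > P be the temptation, mutual-cooperation, and mutual-defection payoffs, and let δ be the probability of another round conditional on reaching the current one. Cooperating forever beats defecting now iff

\[
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T-R}{T-P}.
\]

E.g. T = 5, R = 4, P = 0 gives δ ≥ 0.2: cooperation only survives if the conditional probability of a further round never dips below 20%, no matter how many rounds have already happened.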
What’s important in “AI for epistemics”?
it sounds to me like ruling this out requires an assumption about the correlations of an action being the same as the correlations of an earlier self-modifying action to enforce that later action.
I would guess that assumption would be sufficient to defeat my counter-example, yeah.
I do think this is a big assumption. Definitely not one that I’d want to generally assume for practical purposes, even if it makes for a nicer theory of decision theory. But it would be super interesting if someone could make a proper defense of it typically being true in practice.
E.g.: Is it really true that a human’s decision about whether or not to program a seed AI to take action A has the same correlations as that same superintelligence deciding whether or not to take action A 1000 years later while using a Jupiter brain for its computation? Intuitively, I’d say that the human would correlate mostly with other humans and other evolved species, and that the superintelligence would mostly correlate with other superintelligences, and it’d be a big deal if that wasn’t true.
However, there is no tiling theorem for UDT that I am aware of, which means we don’t know whether UDT is reflectively consistent; it’s only a conjecture.
I think this conjecture is probably false for reasons described in this section of “When does EDT seek evidence about correlations?”. The section offers an argument for why son-of-EDT isn’t UEDT, but I think it generalizes to an argument for why son-of-UEDT isn’t UEDT.
Briefly: UEDT-at-timestep-1 is making a different decision than UEDT-at-timestep-0. This means that its decision might be correlated (according to the prior) with some facts which UEDT-at-timestep-0’s decision isn’t correlated with. From the perspective of UEDT-at-timestep-0, it’s bad to let UEDT-at-timestep-1 make decisions on the basis of correlations with things that UEDT-at-timestep-0 can’t control.
Notice that learning-UDT implies UDT: an agent eventually behaves as if it were applying UDT with each Pn. Therefore, in particular, it eventually behaves like UDT with prior P0. So (with the exception of some early behavior which might not conform to UDT at all) this is basically UDT with a prior which allows for learning. The prior P0 is required to eventually agree with the recommendations of P1, P2, … (which also implies that these eventually agree with each other).
I don’t understand this argument.
“an agent eventually behaves as if it were applying UDT with each Pn” — why can’t an agent skip over some Pn entirely or get stuck on P9 or whatever?
“Therefore, in particular, it eventually behaves like UDT with prior P0.” Even granting the above — sure, it will behave like UDT with prior P0 at some point. But then after that it might have some other prior. Why would it stick with P0?
Incidentally: Were the persuasion evals done on models with honesty training or on helpfulness-only models? (Couldn’t find this in the paper, sorry if I missed it.)
Tbc: It should be fine to argue against those implications, right? It’s just that, if you grant the implication, then you can’t publicly refute Y.
I also like Paul’s idea (which I can’t now find the link for) of having labs make specific “underlined statements” to which employees can anonymously add caveats or contradictions that will be publicly displayed alongside the statements.
Link: https://sideways-view.com/2018/02/01/honest-organizations/
Maybe interesting: I think a similar double-counting problem would appear naturally if you tried to train an RL agent in a setting where:
“Reward” is proportional to an estimate of some impartial measure of goodness.
There are multiple identical copies of your RL algorithm (including: they all use the same random seed for exploration).
In a repeated version of the calculator example (importantly: where in each iteration, you randomly decide whether the people who saw “true” get offered a bet or the people who saw “false” get offered a bet — never both), the RL algorithms would learn that, indeed:
99% of the time, they’re in the group where the calculator doesn’t make an error
and on average, when they get offered a bet, they will get more reward afterwards if they take it than if they don’t.
The reason this happens is that, when the RL agents lose money, there are fewer agents that associate negative reinforcement with having taken a bet just before. Whereas whenever they gain money, there are more agents that associate positive reinforcement with having taken a bet just before. So the total amount of reinforcement is greater in the latter case, and the RL agents learn to bet. (Despite the fact that this loses them money on average.)
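Here’s a minimal REINFORCE-style sketch of that dynamic. The group sizes, payoffs, and the choice of “impartial reward = total money change across all copies” are illustrative assumptions of mine, not the original example’s numbers; the point is just that the policy learns to bet even though betting is money-losing in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the original example's numbers):
N_COPIES = 100   # identical copies of the RL algorithm
N_CORRECT = 99   # copies whose calculator shows the right answer each round (fixed for simplicity)
WIN = 1          # money each bettor wins if their display was right
LOSS = 300       # money a bettor loses if their display was wrong

# With these numbers, always betting loses money in expectation:
# 0.5 * 99 * WIN - 0.5 * 1 * LOSS = 49.5 - 150 < 0 per round.

theta = 0.0      # shared logit for "take the bet" (shared policy across copies)
lr = 1e-5

def p_bet(t):
    return 1.0 / (1.0 + np.exp(-t))

money = 0.0
n_rounds = 20_000
for _ in range(n_rounds):
    # Offering the bet to the "saw true" vs "saw false" group at random
    # amounts to offering it to the correct-display group with probability 1/2.
    offered_correct_group = rng.random() < 0.5
    n_offered = N_CORRECT if offered_correct_group else N_COPIES - N_CORRECT

    p = p_bet(theta)
    bet = rng.random() < p   # same seed for exploration => all offered copies act identically
    delta_money = (n_offered * WIN if offered_correct_group else -LOSS) if bet else 0.0
    money += delta_money

    # "Impartial" reward: every copy's reward is the total money change, but only
    # the copies that just chose whether to bet contribute a policy-gradient term
    # for that choice -- hence the group-size double-counting.
    reward = delta_money
    grad_logp = (1 - p) if bet else -p
    theta += lr * n_offered * grad_logp * reward

print(f"learned p(bet) = {p_bet(theta):.3f}")            # climbs toward 1
print(f"average money per round = {money / n_rounds:+.2f}")  # negative on average
```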
I agree it seems plausible that AIs could boost takeover success probability (and holding on to that victory through the first several months) by more than 0.1% by killing a large fraction of humans.
Though on the other hand, the AI might also need to keep some humans loyal early during the takeover, e.g. to do some physical tasks that it doesn’t have great robot control over. And mass killing isn’t necessarily super easy either; attempts in that direction could raise a lot of extra opposition. So it’s not clear where the pragmatics point.
(Main thing I was reacting to in my above comment was Steven’s scenario where the AI already has many copies across the solar system, already has robot armies, and is contemplating how to send firmware updates. I.e. it seemed more like a scenario of “holding on in the long-term” than “how to initially establish control and survive”. Where I feel like the surveillance scenarios are probably stable.)
Related: The monkey and the machine by Paul Christiano. (Bottom-up processes ~= monkey. Verbal planner ~= deliberator. Section IV talks about the deliberator building trust with the monkey.)
A difference between this essay and Paul’s is that this one seems to lean further towards “a good state is one where the verbal planner ~only spends attention on things that the bottom-up processes care about”, whereas Paul’s essay suggests a compromise where the deliberator gets to spend a good chunk of attention on things that the monkey doesn’t care about. (In Rand’s metaphor, I guess this would be like using some of your investment returns for consumption. Where consumption would presumably count as a type of dead money, although the connotations don’t feel exactly right, so maybe it should be in a 3rd bucket.)