I’ll post about my views on different numbers of OOMs soon
Sorry, for my comments on this post I’ve been referring to “software only singularity?” only as “will the parameter r > 1 when we first fully automate AI R&D”, not as a threshold for some number of OOMs. That’s what Ryan’s analysis seemed to be referring to.
I separately think that even if initially r > 1, the software explosion might not go on for that long.
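To make concrete what I mean by r, here’s a toy sketch (my own simplification with invented constants, not the actual takeoff model): r is, roughly, the number of doublings of software efficiency you get per doubling of cumulative research input, and once AI R&D is fully automated the research input itself scales with software efficiency, so r > 1 gives accelerating progress and r < 1 gives progress that fizzles.

```python
# Toy sketch (invented constants, heavily simplified): r = doublings of software
# efficiency A per doubling of cumulative research input. With AI R&D fully
# automated, research input scales with A, which collapses to
# dA/dt ~ A^(2 - 1/r): r > 1 accelerates, r < 1 peters out.

def time_to_multiply(r, target=1000.0, dt=1e-3):
    """Crude Euler integration: time for A to grow `target`-fold from A = 1."""
    exponent = 2.0 - 1.0 / r
    A, t = 1.0, 0.0
    while A < target:
        A += dt * A ** exponent
        t += dt
        if t > 100:  # give up if growth is too slow
            return None
    return t

for r in (0.7, 1.0, 1.4):
    t = time_to_multiply(r)
    print(f"r = {r}: " + (f"1000x software at t = {t:.1f}" if t else "still short of 1000x at t = 100"))
```

With r > 1 the toy model blows up in finite time; in practice I’d expect r to fall as software approaches effective limits, which is why the explosion might not run for long even if r > 1 at the point of full automation.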
Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models,
Sorry, I don’t follow why they’re less certain?
based on some first principles reasoning and my understanding of how returns diminished in the semi-conductor case
I’d be interested to hear more about this. The semiconductor case is hard as we don’t know how far we are from limits, but if we use Landauer’s limit then I’d guess you’re right. There’s also uncertainty about how much algorithmic progress we have made and will make.
Why are they more recoverable? Seems like a human who seized power would seek ASI advice on how to cement their power.
Thanks for this!
Compared to you, I more strongly expect we’d find evidence of scheming if it exists.
You argue weak schemers might just play nice. But if so, we can use them to do loads of intellectual labour to build fancy behavioral red-teaming and interpretability tools to catch out the next generation of AIs.
More generally, the plan of bootstrapping to increasingly complex behavioral tests and control schemes seems likely to work. It seems like if one model has spent a lot of thinking time designing a control scheme, then another model would have to be much smarter to zero-shot cause a catastrophe without that scheme detecting it; analogies with humans suggest this, for example.
I agree the easy vs hard worlds influence the chance of AI taking over.
But are you also claiming it influences the badness of takeover conditional on it happening? (That’s the subject of my post)
So you predict that if Claude were in a situation where it knew it had complete power over you and could make you say that you liked it, then it would stop being nice? I think it would continue to be nice in any situation of that rough kind, which suggests it’s actually nice, not just narcissistically pretending.
But a human could instruct an aligned ASI to help it take over and do a lot of damage
That structural difference you point to seems massive. The reputational downsides of bad behavior will be multiplied 100-fold+ for AIs, since any bad behavior reflects on millions of instances and on the company’s reputation.
And it will be much easier to record and monitor AI thinking and actions to catch bad behaviour.
Why is it unlikely that we can detect selfishness? Why can’t we bootstrap from human-level?
One dynamic initially preventing stasis in influence post-AGI is that different people have different discount rates, so those with lower discount rates will slowly gain influence over time.
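A toy illustration of that compounding (all numbers made up): two actors start with equal wealth and earn the same return, but the more patient one consumes a smaller fraction of it each year, so its share of total wealth drifts toward 1.

```python
# Made-up numbers: both actors earn 5%/yr; the patient one consumes 1%/yr of
# wealth, the impatient one 4%/yr. Influence here is proxied by wealth share.
patient = impatient = 1.0
ret, patient_spend, impatient_spend = 0.05, 0.01, 0.04

for year in range(1, 101):
    patient *= 1 + ret - patient_spend
    impatient *= 1 + ret - impatient_spend
    if year % 25 == 0:
        print(f"year {year}: patient actor holds {patient / (patient + impatient):.0%} of total wealth")
```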
Yep, I’m saying you’re wrong about this. If money compounds but you don’t have utility = log($), then you shouldn’t Kelly bet.
Your formula is only valid if utility = log($).
With that assumption the equation compares your utility with and without insurance. Simple!
If you had some other utility function, like utility = $, then you should make insurance decisions differently.
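To spell out the kind of comparison I mean (a sketch in my own notation, not necessarily the exact formula from the post): with wealth $W$, premium $p$, and a potential loss $L$ that strikes with probability $q$, a log-utility agent buys the insurance iff

$$\log(W - p) \;>\; q\,\log(W - L) + (1 - q)\,\log(W).$$

With some other utility function $u$ the comparison is instead $u(W - p) > q\,u(W - L) + (1 - q)\,u(W)$; with linear utility ($u(x) = x$) it collapses to $p < qL$, so a risk-neutral agent only buys insurance priced better than actuarially fair.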
I think the Kelly betting stuff is a big distraction, and that people with utility = $ shouldn’t bet like that. I think the result that Kelly betting maximizes long-term $ bakes in assumptions about utility functions and is easily misunderstood: someone with utility = $ probably goes bankrupt but might become insanely rich, and is happy not to Kelly bet. (I haven’t explained this point properly, but I recall reading about this, and it’s just wrong on its face that someone with utility = $ should follow your formula.)
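A concrete toy version of that claim (bet odds invented; the arithmetic is exact, no simulation needed): on a repeated even-money bet with a 60% win chance, Kelly says stake 20% of wealth each round, while an expected-$ maximizer stakes everything. All-in has far higher expected wealth but near-certain bankruptcy, which is exactly the trade a utility = $ agent is, by assumption, happy to make.

```python
# Toy repeated bet (invented odds): each round you stake a fraction f of wealth;
# with probability 0.6 the stake is doubled, with probability 0.4 it's lost.
# The Kelly fraction for this bet is 0.6 - 0.4 = 0.2. Compare it with all-in
# (f = 1), which is what a pure expected-$ maximizer would choose.
P, ROUNDS = 0.6, 50

def summarize(f):
    expected = (P * (1 + f) + (1 - P) * (1 - f)) ** ROUNDS  # E[final wealth]
    typical = (1 + f) ** (P * ROUNDS) * (1 - f) ** ((1 - P) * ROUNDS)  # wealth after the median number of wins
    return expected, typical

for name, f in [("Kelly (f = 0.2)", 0.2), ("all-in (f = 1.0)", 1.0)]:
    expected, typical = summarize(f)
    print(f"{name}: expected wealth x{expected:,.0f}, typical wealth x{typical:.2f}")

print(f"all-in survival probability over {ROUNDS} rounds: {P ** ROUNDS:.1e}")
```

The all-in strategy’s expected value is carried entirely by the roughly 1-in-10^11 branch where every bet wins; Kelly is what you get if you instead maximize expected log wealth, i.e. utility = log($).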
I enjoyed reading this, thanks.
I think your definition of solving alignment here might be too broad?
If we have superintelligent agentic AI that tries to help its user, but we end up missing out on the benefits of AI because of catastrophic coordination failures or because of misuse, then I think you’re saying we didn’t solve alignment because we didn’t elicit the benefits?
You discuss this, but I prefer to separate out control and alignment: I wouldn’t count us as having solved alignment if we only elicit good behavior via intense/exploitative control schemes. So I’d adjust your alignment definition with the extra requirement that we avoided takeover without using control schemes that are super-intense relative to what’s acceptable to do to humans today. That’s a higher bar, and it separates the definition from the thing we ultimately care about (avoiding takeover and eliciting the benefits), but I think it’s a better definition.
I enjoyed it, and think the ideas are important, but found it hard to follow at points.
Some suggestions:
explain more why self criticism allows one part to assert control
give more examples throughout, especially in the second half. I think some paragraphs don’t have examples and are harder to understand
flesh out examples to make them longer and more detailed
I think your model will underestimate the benefits of ramping up spending quickly today.
You model the size of the $ overhang as constant. But in fact it’s doubling every couple of years as global spending on producing AI chips grows. (The overhang relates to the fraction of chips used in the largest training run, not the fraction of GWP spent on the largest training run.) That means that ramping up spending quickly (on training runs or software or hardware research) gives that $ overhang less time to grow.
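A toy version of the arithmetic (all numbers invented): the overhang is roughly global AI-chip output divided by the chips in the largest run, so if the largest run stays fixed while output doubles every couple of years, the overhang doubles along with it; ramping up the largest run now shrinks that ratio before the chip stock has had time to compound.

```python
# Invented numbers: normalize today's largest training run to 1 unit of AI chips
# and today's annual AI-chip output to 10 units, doubling every 2 years.
# "Overhang" here = how many times bigger a maximum-effort run could be.
largest_run = 1.0
global_output = 10.0
doubling_years = 2

for year in (0, 2, 4, 6):
    total = global_output * 2 ** (year / doubling_years)
    print(f"year {year}: overhang = {total / largest_run:.0f}x the current largest run")
```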
Why are you at 50% that AI kills >99% of people, given the points you make in the other direction?
So, far causally upstream of the human evaluator’s opinion? E.g. an AI counselor optimizing for getting to know you.
I think the “soup of heuristics” stories (where the AI is optimizing something far causally upstream of reward instead of something that is downstream or close enough to be robustly correlated) don’t lead to takeover in the same way
Why does it not lead to takeover in the same way?
AI understands that the game ends after 1908 and modifies accordingly.
Does it? In the game you link it seems like the bot doesn’t act accordingly in the last move phase. Turkey misses a chance to grab Rumania, Germany misses a chance to grab London, and I think France misses something as well.
I meant at any point, but was imagining the period around full automation yeah. Why do you ask?