Independent alignment researcher
I have signed no contracts or agreements whose existence I cannot mention.
Politicians announce all sorts of things on the campaign trail; that usually is not much of an indication of what post-election policy will be.
Seems more likely the drop was from Trump tariff leaks than from DeepSeek’s app.
I also note that 30x seems like an underestimate to me, but also too simplified. AIs will make some tasks vastly easier, but won’t help much with other tasks. We will have a new set of bottlenecks once we reach the “AIs vastly helping with your work” phase. The question to ask is “what will the new bottlenecks be, and who do we have to hire to be prepared for them?”
If you are uncertain, this consideration should lean you much more towards adaptive generalists than the standard academic crop.
There’s the standard software engineer response of “You cannot make a baby in 1 month with 9 pregnant women”. If you don’t have a term in this calculation for the research-hours that must be done serially vs. the research-hours that can be done in parallel, then it will always seem like we have too few people and should invest vastly more in growth growth growth!
If you find that your actual constraint is serial research output, then you may still conclude you need a lot of people, but you will sacrifice a reasonable amount of growth speed to attract better serial researchers.
(Possibly this shakes out to mathematicians and physicists, but I don’t want to bring that conversation into here)
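To make the serial-vs-parallel point concrete, here is a toy Amdahl’s-law-style sketch (a minimal illustration with made-up numbers, not anything from an actual estimate):

```python
# Toy model: wall-clock research time as a function of headcount,
# when some fraction of the work is inherently serial.

def research_time(total_hours: float, serial_fraction: float, researchers: int) -> float:
    """Time to finish if the serial portion cannot be parallelized at all."""
    serial_hours = total_hours * serial_fraction
    parallel_hours = total_hours * (1 - serial_fraction)
    return serial_hours + parallel_hours / researchers

# Hypothetical numbers: 10,000 research-hours of work, 30% of it serial.
for n in (1, 10, 100, 1000):
    print(n, research_time(10_000, 0.3, n))
# 1 -> 10,000 hours; 10 -> 3,700; 100 -> 3,070; 1,000 -> 3,007.
# Past a few dozen people the serial 3,000 hours dominate, which is why
# "hire more people" stops being the answer at some point.
```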
The most obvious one imo is the immune system & the signals it sends.
Others:
- Circadian rhythm
- Age is perhaps a candidate here, though it may be more or less of a candidate depending on whether you’re talking about someone before or after 30
- Hospice workers sometimes talk about the body “knowing how to die”; maybe there’s something to that
If that’s the situation, then why the “if and only if”? If we magically made them all believe they will die if they make ASI, then they would each individually be incentivized to stop it from happening, independent of China’s actions.
I think that China and the US would definitely agree to pause if and only if they can confirm the other is also committing to a pause. Unfortunately, this is a really hard thing to confirm, much harder than with nuclear.
This seems false to me. E.g. Trump for one seems likely to do what the person who pays him the most & is the most loyal to him tells him to do, and AI risk worriers don’t have the money or the political loyalty to compete on either of those criteria with, for example, Elon Musk.
It’s on his LinkedIn at least. Apparently since the start of the year.
I will note this sounds a lot like TurnTrout’s old Attainable Utility Preservation scheme. Not exactly, but enough that I wouldn’t be surprised if a bunch of the math here has already been worked out by him (and possibly, in the comments, a bunch of the failure modes identified).
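(For context, the core of AUP, as I remember it and modulo normalization details, is to penalize an action by how much it shifts the agent’s attainable utility across a set of auxiliary reward functions, roughly

$$R_{\text{AUP}}(s,a) = R(s,a) - \frac{\lambda}{|\mathcal{R}_{\text{aux}}|} \sum_{R_i \in \mathcal{R}_{\text{aux}}} \bigl| Q_{R_i}(s,a) - Q_{R_i}(s,\varnothing) \bigr|$$

where $\varnothing$ is the no-op action; the overlap to check would be between that penalty term and the one proposed here.)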
Engineers: It’s impossible.
Meta management: ~~Tony Stark~~ DeepSeek was able to build this in a cave! With a box of scraps!
Although, I don’t think the first example is great; it seems more like a capability/observation-bandwidth issue.
I think you can have multiple failures at the same time. The reason I think this was also Goodharting is that the failure mode could have been averted if Sonnet had been told “collect wood WITHOUT BREAKING MY HOUSE” ahead of time.
If you put current language models in weird situations & give them a goal, I’d say they do do edge instantiation, even without the supposedly missing “creativity” ingredient. E.g. see Claude Sonnet in Minecraft repurposing someone’s house for wood after being asked to collect wood.
Edit: There are other instances of this too: you can tell Claude to protect you in Minecraft, and it will constantly teleport to your position and build walls around you when monsters are around. Protecting you, yes, but also preventing any movement or fun you may have wanted to have.
I don’t understand why Remmelt going “off the deep end” should affect AI Safety Camp’s funding. That seems reasonable for speculative bets, but not when there’s a strong track record available.
It is; we’ve been limiting ourselves to readings from the Sequence Highlights. I’ll ask around to see if other organizers would like to broaden our horizons.
I mean, one of them’s math built bombs and computers & directly influenced pretty much every part of applied math today, and the other one’s math built math. Not saying he wasn’t smart, but no question bombs & computers are more flashy.
Fixed!
The paper you’re thinking of is probably The Developmental Landscape of In-Context Learning.
Yeah, these are mysteries; I don’t know why. TSMC did get hit pretty hard though, I think.