Agent foundations, AI macrostrategy, human enhancement.
I endorse and operate by Crocker’s rules.
I have not signed any agreements whose existence I cannot mention.
While far from what I hoped for, this is the closest I've managed to find so far: https://www.chinatalk.media/p/is-china-agi-pilled
Overall, the Skeptic makes the stronger case — especially when it comes to China’s government policy. There’s no clear evidence that senior policymakers believe in short AGI timelines. The government certainly treats AI as a major priority, but it is one among many technologies they focus on. And when they speak about AI, they more often than not mean things like industrial automation, as opposed to AGI as Dario would define it. There’s no moonshot AGI project, no centralized push. And the funding gaps between leading Chinese AI labs and their American counterparts remain enormous.
The Believer’s strongest argument is that the rise of DeepSeek has changed the conversation. We’ve seen more policy signals, high-level meetings, and new investment commitments. These suggest that momentum is building. But it remains unclear how long this momentum can be maintained, and whether it will really translate into AGI moonshots. While Xi talks about “two bombs, one satellite”-style mobilization in the abstract, he hasn’t channeled that idea into any concerted AGI push, and there are no signs of any “whole nation” 举国 effort to centralize resources. Rather, the DeepSeek frenzy is again translating into application-focused development, with every product from WeChat to air conditioners now offering DeepSeek integrations.
This debate also exposes a flaw in the question itself: “Is China racing to AGI?” assumes a monolith where none exists. China’s ecosystem is a patchwork — startup founders like Liang Wenfeng and Yang Zhilin dream of AGI while policymakers prioritize practical wins. Investors, meanwhile, waver between skepticism and cautious optimism. The U.S. has its own fractures on how soon AGI is achievable (Altman vs. LeCun), but its private sector’s sheer financial and computational muscle gives the race narrative more bite. In China, the pieces don’t yet align.
The fact that their models are on par with OpenAI’s and Anthropic’s, but open source.
This is perfectly consistent with my “just”: build AI that is useful for whatever they want their AIs to do and don’t fall behind the West, while also not taking Western claims about AGI/ASI/singularity at face value.
You can totally want fancy LLMs without believing in AGI/ASI/singularity.
There are people from the safety community arguing for jail for folks who download open source models.
Who? What proportion of the community are they? Also, all open-source models? Jail for downloading GPT-2?
You can’t have it both ways. Either open source is risky and accelerates timelines, and so should be limited/punished, or open-source AI makes no appreciable change to timelines, and hence it doesn’t need to be regulated.
It seems to me like you’re making a move from “there are people in the AI safety community who hold one view and some who hold the other view” to “the AI safety community holds both of these views”?
I’m not sure why this is.
The most straightforward explanation would be that there are more underexploited niches for top-0.01%-intelligence people than there are top-0.01%-intelligence people.
After thinking about it for a few minutes, I’d expect that MadHatter has disengaged from this community/cause anyway, so that kind of public reveal is not going to hurt them much, whereas it might have a big symbolic/common-knowledge-establishing value.
Self-Other Overlap: https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm
Emergent Misalignment: https://x.com/ESYudkowsky/status/1894453376215388644
He has made vaguely positive comments about Chris Olah, but I think he always/usually caveats them with “capabilities go like this [steep slope], Chris Olah’s interpretability goes like this [shallow slope]” (e.g., on the Lex Fridman podcast and IIRC some other podcast(s)).
ETA:
SolidGoldMagikarp: https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation#Jj5yN2YTp5AphJaEd
He also said that Collin Burns’s DLK was a “highly dignified work”. Ctrl+F “dignified” here; it doesn’t seem to link to the tweet, but it should be findable/verifiable.
I know almost nothing about audio ML, but I would expect one big inconvenience of audio-NN interp to be that a lot of the complexity in sound is difficult to represent visually. Images and text (/token strings) don’t have this problem.
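(The usual workaround, as far as I can tell, is to render audio as a spectrogram, i.e. flatten it into an image. A minimal sketch on a synthetic signal, with all parameters purely illustrative:)

```python
# Minimal sketch: visualizing audio as a spectrogram (synthetic signal,
# illustrative parameters only).
import numpy as np
from scipy.signal import chirp, spectrogram
import matplotlib.pyplot as plt

fs = 16_000                                # sample rate (Hz)
t = np.linspace(0, 2.0, int(2.0 * fs), endpoint=False)
x = chirp(t, f0=200, f1=4000, t1=2.0)      # frequency sweep, 200 Hz -> 4 kHz

f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=512)
plt.pcolormesh(seg_t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of a synthetic chirp")
plt.show()
```

Note that the plot shows log-magnitude only; the phase is discarded entirely, which is one concrete example of the lossiness I have in mind.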
I am confused about what autism is. Whenever I try to investigate this question I end up coming across long lists of traits and symptoms where various things are unclear to me.
Isn’t that the case with a lot of psychological/psychiatric conditions?
Criteria for a major depressive episode include “5 or more depressive symptoms for ≥ 2 weeks”, and there are 9 depressive symptoms, so you could have 2 individuals diagnosed with a major depressive episode but having only one depressive symptom in common.
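A quick brute-force check of that combinatorial point (a minimal sketch, with the 9 symptoms abstracted to indices 0–8; the minimum possible overlap is 5 + 5 − 9 = 1):

```python
# Minimal sketch: two diagnosable symptom profiles (any 5 of the 9 DSM
# depressive symptoms, abstracted to indices 0-8) can share just one symptom.
from itertools import combinations

profile_a = {0, 1, 2, 3, 4}
profile_b = {4, 5, 6, 7, 8}
assert len(profile_a & profile_b) == 1     # only symptom 4 in common

# Brute force over all pairs of 5-symptom profiles: the overlap is never 0,
# since |A| + |B| - 9 = 5 + 5 - 9 = 1.
min_overlap = min(len(set(a) & set(b))
                  for a in combinations(range(9), 5)
                  for b in combinations(range(9), 5))
print(min_overlap)  # -> 1
```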
I know. I just don’t expect it to.
Steganography /j
So it seems to be a reasonable interpretation that we might see human-level AI around the mid-2030s to 2040, which happens to be about my personal median.
What are the reasons your median is mid-2030s to 2040, other than this way of extrapolating the METR results?
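For concreteness, here is the kind of extrapolation I take that to mean, as a minimal sketch: the ~7-month doubling time is METR’s headline figure, while the ~1-hour 2025 anchor, the target horizons, and the slower 12-month scenario are my own illustrative assumptions.

```python
# Minimal sketch of the METR-style extrapolation (all numbers illustrative):
# assume a ~1-hour 50%-success task horizon in early 2025 and see when
# successive doublings reach a "human-level" horizon, under a couple of
# assumed doubling times and target horizons.
import math

ANCHOR_YEAR = 2025.0
ANCHOR_HOURS = 1.0  # assumed ~1-hour horizon at the anchor

for doubling_months in (7.0, 12.0):          # METR reports ~7; 12 if it slows
    for target_hours in (167.0, 2000.0):     # ~1 work-month vs ~1 work-year
        doublings = math.log2(target_hours / ANCHOR_HOURS)
        year = ANCHOR_YEAR + doublings * doubling_months / 12.0
        print(f"{doubling_months:>4.0f}-mo doubling, "
              f"{target_hours:>6.0f}h horizon -> ~{year:.0f}")
# 7-month doubling reaches a work-month horizon ~2029 and a work-year ~2031;
# a 12-month doubling pushes those out to ~2032 and ~2036.
```

Under the more conservative assumptions (slower doubling, a work-year target horizon), the date lands in the mid-2030s, which I’d guess is roughly the shape of the extrapolation being referenced.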
How does the point about Hitler murder plots connect to the point about anthropics?
they can’t read LessWrong or EA blogs
VPNs exist and are probably widely used in China + much of “all this work” is on arXiv etc.
If that was his goal, he has better options.
I’m confused about how to think about this idea, but I really appreciate having this idea in my collection of ideas.
To show how weird English is: English is the only Indo-European language that doesn’t think the moon is female (“la luna”) and spoons are male (“der Löffel”). I mean… maybe not those genders specifically in every language. But some gender in each language.
Persian is ungendered too; it doesn’t even have gendered pronouns.
Writing articles in Chinese for my family members, explaining things like cognitive bias, evolutionary psychology, and why dialectical materialism is wrong.
Your needing to write them seems to suggest that there’s not enough content like that in Chinese, in which case it would plausibly make sense to publish them somewhere?
I’m also curious about how your family received these articles.
I think that the scenario of a war between several ASIs (each merged with its origin country) is underexplored. Yes, there can be a value handshake between ASIs, but their creators will work to prevent this and see it as a type of misalignment.
Not clear to me, as long as they expect the conflict to be sufficiently destructive.
I wonder whether it’s related to this https://x.com/RichardMCNgo/status/1866948971694002657 (ping to @Richard_Ngo to get around to writing this up (as I think he hasn’t done it yet?))
To steelman a devil’s advocate: If your intent-aligned AGI/ASI went something like
and this would be, in an important sense, more democratic, because the people (/demos) would have more influence over their societies.