I wonder how many treaties we signed with the countless animal species we destroyed or decided to torture on a mass scale during our history? Guess those poor animals were bad negotiators and didn’t read the fine print. /s
In an ideal world, well-meaning regulation coming from the EU could become a global standard and really make a difference. In reality, however, I see little value in EU-specific regulations like these. They are unlikely to impact frontier AI companies such as OpenAI, Anthropic, Google DeepMind, xAI, and DeepSeek, all of which are based outside the EU. These firms might simply accept the cost of exiting the EU market if regulations become too burdensome.
While the EU market is significant, in a fast-takeoff, winner-takes-all AI race (as outlined in the AI-2027 forecast), market access alone may not sway these companies’ safety policies. Worse, such regulations could backfire, locking the EU out of advanced AI models and crippling its competitiveness. This could deter other nations from adopting similar rules, further isolating the EU.
As an EU citizen, I view the game theory in an “AGI-soon” world as follows:
Alignment Hard: EU imposes strict AI regulations → frontier companies exit the EU or withhold their latest models, continuing the AI race → unaligned AI emerges, potentially catastrophic for everyone, including Europeans. The regulations prove futile.
Alignment Easy: EU imposes strict AI regulations → frontier companies exit the EU, continuing the AI race → aligned AI creates a utopia elsewhere (e.g., the US), while the EU lags behind, stuck in a technological “stone age.”
Both scenarios are grim for Europe.
I could be mistaken, but the current US administration and leaders of top AI labs seem fully committed to a cutthroat AGI race, as articulated in situational awareness narratives. They appear prepared to go to extraordinary lengths to maintain supremacy, undeterred by EU demands. Their primary constraints are compute and, soon, energy—not money! If AI becomes a national security priority, access to near-infinite resources could render EU market losses a minor inconvenience. Notably, the comprehensive AI-2027 forecast barely mentions Europe, underscoring its diminishing relevance.
For the EU to remain significant, I see two viable strategies:
Fully integrate with US AI efforts, securing a guarantee of equal benefits from aligned superintelligence. This could also give EU AI safety labs a seat at the table in alignment discussions.
Develop an autonomous EU AI leader that excels in both capabilities and alignment research, so the EU can negotiate with the US and China as an equal. This would demand a drastic policy shift, massive investment in data centers and nuclear power, and deregulation, which is likely unrealistic in the short term.
If we accept all the premises of this scenario, what prescriptive actions might an average individual take in their current position at this point in time?
Some random ideas:
Continue investing in NVIDIA and other likely winners in this timeline, particularly now, with the temporary price discount following the recent tariff debacle.
If you are not based in the U.S., consider relocating there. In both scenarios, the U.S. plays a decisive role; in the Slowdown scenario, the benefits would be concentrated in the U.S. and accrue primarily to American citizens.
Attempt to raise awareness of the AI-2027 project, so that major political players can apply more pressure to steer toward the Slowdown timeline.
Are there any other recommendations?
If that were the case, wouldn’t Scott and Daniel have developed the impressive AI-2027 website themselves with the help of AI agents, instead of utilising your human webdev skills? /jk :D
[Question] What are the chances that Superhuman Agents are already being tested on the internet?
The answer surely depends mostly on what his impact on AI development will be, both through his influence on the new administration’s policy and through what he does with xAI. While I understand that his political actions might be mind-killing (to say the least) to many of his former fans, I would much prefer a scenario where Elon has infuriating politics but a positive impact on solving alignment over one with the opposite outcome.
A new open-source model has been announced by the Chinese lab DeepSeek: DeepSeek-V3. It reportedly outperforms both Sonnet 3.5 and GPT-4o on most tasks and is almost certainly the most capable fully open-source model to date.
Beyond the implications of open-sourcing a model of this caliber, I was surprised to learn that they trained it using only 2,000 H800 GPUs! This suggests that, with an exceptionally competent team of researchers, it’s possible to overcome computational limitations.
Here are two potential implications:
Sanctioning China may not be effective if they are already capable of training cutting-edge models without relying on massive computational resources.
We could be in a serious hardware overhang scenario, where we already have sufficient compute to build AGI, and the only limiting factor is engineering talent.
(I am extremely uncertain of this, it was just my reaction after reading about it)
artemium’s Shortform
Perhaps Randolph Carter was right about losing access to dreamlands after your twenties:
When Randolph Carter was thirty he lost the key of the gate of dreams. Prior to that time he had made up for the prosiness of life by nightly excursions to strange and ancient cities beyond space, and lovely, unbelievable garden lands across ethereal seas; but as middle age hardened upon him he felt these liberties slipping away little by little, until at last he was cut off altogether. No more could his galleys sail up the river Oukranos past the gilded spires of Thran, or his elephant caravans tramp through perfumed jungles in Kled, where forgotten palaces with veined ivory columns sleep lovely and unbroken under the moon.
Btw, have you heard about PropheticAI? They are working on a device that is supposed to help with lucid dreaming.
I still think it will be hard to defend against determined and competent adversaries committed to sabotaging our collective epistemics. I wonder if prediction markets could be utilised somehow?
I am not sure the dot-com crash of 2000 is the best way to describe a “fizzle”. The Internet Revolution was a correct hypothesis at the time; it’s just that the 1999 startups were slightly ahead of their time, and the tech fundamentals were not yet ready to support it, so the market was forced to correct expectations. Once the fundamentals (internet speeds, software stacks, web infrastructure, the number of people online, online payments, online-ad business models, etc.) matured in the mid-2000s, the Web 2.0 revolution happened and tech companies became the giants we know today.
I expect most of the current AI startups and business models to fail, and we will see plenty of market corrections, but this will be orthogonal to the ground truth about AI discoveries, which will happen in only a few cutting-edge labs shielded from temporary market corrections.
But coming back to the object-level question: I really don’t have a specific backup plan. I expect that even non-AGI-level AI, built on advances over the current models, will significantly impact various industries, so I will stick with software engineering for the foreseeable future.
My dark-horse bet is on a third country desperately trying to catch up to the US/China just when they are close to reaching an agreement on slowing down progress. Most likely: France.
Why so? My understanding is that if AGI arrives in 2026, it will be based on the current paradigm of training increasingly large LLMs on massive clusters of advanced GPUs. Given that the US has banned selling advanced GPUs to China, how do you expect them to catch up that soon?
To add to this point, the author in question is infamous for having doxxed Scott Alexander and written a hit piece on the rationalist community before:
https://slatestarcodex.com/2020/09/11/update-on-my-situation/
I was also born in a former socialist country, Yugoslavia, which was notable for the prevalence of worker-managed firms in its economy. This made it somewhat unique among socialist countries, which mostly used a more centralized approach with state ownership of entire industries.
While this is somewhat different from worker-owned cooperatives in modern market economies, it does offer a useful data point. The general conclusion is that such firms work a bit better than a typical state-owned firm, but still perform significantly worse economically than the median private company. This is why, despite having plenty of experience with worker-managed firms, almost all ex-YU countries today have economies dominated by fully private companies, and no one is really enthusiastic about repeating the worker-managed experiment.
Also agree about not promoting political content on LW but would love to read your writings on some other platform if possible.
If it reaches that point, the goal for Russia would not be to win but to ensure the other side loses too, and that outcome might be preferable (to them) to a humiliating conventional defeat that could permanently end Russian sovereignty. In the end, the West has far more to lose than Russia, the stakes aren’t that high for us, and they know it.
No. I think everything else is in crappy shape because the nuclear arsenal was always the priority for the Russian defense industry, and most of the money and resources went there. I’ve noticed that the meme “perhaps Russian nukes don’t work” is getting increasingly popular, which could have pretty bad consequences if it spreads and emboldens escalation.
It is like being incentivized to play Russian roulette because you heard the bullets were made in a country that produces some other crappy products.
Looks awesome! Maybe there could be an extended UI that tracks recent research papers (sort of like I did here) or SOTA achievements. But maybe that would ruin the smooth minimalism of the page.
That’s a reasonable concern, but I don’t think it’s healthy to ruminate too much about it. You made a courageous and virtuous move, and it’s impossible to perfectly predict all possible futures from that point onward. If this fails, I presume the failure was overdetermined, and your actions wouldn’t really have mattered.
The only mistake you and your team made, in my opinion, was writing the slowdown scenario for AI-2027. While I know that wasn’t your intention, a lot of people interpreted it as a 50% chance of ‘the US wins global supremacy and achieves utopia,’ which just added fuel to the fire (‘See, even the biggest doomers think we can win! LFG!!!!’).
It also likely hyperstitionized increased suspicion among other leading countries that the US would never negotiate in good faith, making it significantly harder to strike a deal with China and others.