This is a speculative map of a hot discussion topic. I’m posting it in question form in the hope we can rapidly map the space in answers.
Looking at various claims on X and at the AI summit, it seems possible to identify some key counter-regulation narratives and frames that various actors are pushing.
Because a lot of the public policy debate won’t be about “what are some sensible things to do” within a particular frame, but rather about fights for frame control, or “what frame to think in”, it seems beneficial to have at least some sketch of a map of the discourse.
What follows is one example of such a “local map”:
“It’s about open source vs. regulatory capture”
It seems the coalition against AI safety, most visibly represented by Yann LeCun and Meta, has identified “it’s about open source vs. big tech” as a favorable frame in which to argue and build a coalition: open-source advocates who believe in the open-source ideology, academics who want access to large models, and small AI labs and developers who believe they will remain competitive long-term by fine-tuning smaller models and capturing various niche markets. LeCun and others attempt to portray themselves as the force of science and open inquiry, while the scaling labs proposing regulation are cast as evil big tech attempting regulatory capture. Because this seems to be the preferred anti-regulation frame, I will spend most of my time on it.
Apart from the mentioned groups, this narrative also seems memetically fit in a “proudly cynical” crowd that assumes everything anyone does or says is primarily self-interested and profit-driven.
Overall, the narrative has clear problems with explaining away inconvenient facts, including:
Thousands of academics calling for regulation are awkward counter-evidence for x-risk being just a ploy by the top labs.
The narrative strategy seems to be to explain this away by claiming some of the senior academics are simply deluded, while others are also pursuing a self-interested strategy in expectation of funding.
Many of the people explaining AI risk now were publicly concerned about it before founding labs, at a time when doing so was academically extremely unprofitable; some sacrificed standard academic careers.
The narrative move is to just ignore this.
Also, many things are just assumed, for example, that the resulting regulation would actually be in the interest of the frontrunners.
What could be memetically viable counter-arguments within the frame?
Personally, I tend to point out that the motivation to avoid AI risk is completely compatible with self-interest: leaders of AI labs also have skin in the game.
Also, recently I have been asking people to apply the explanatory frame of ‘cui bono’ to the other side as well, namely Meta.
One possible hypothesis here is that Meta just loves open source and wants everyone to flourish.
A more likely hypothesis is that Meta wants to own the open-source ecosystem.
A more complex hypothesis is that Meta doesn’t actually love open source that much, but is following a sensible, self-interested strategy aimed at a dystopian outcome.
To understand the second option, you first need to understand the “commoditize the complement” strategy. This is a business approach where a company aims to drive down the cost, or increase the availability, of goods or services complementary to its own offerings. The outcome is an increase in the value of the company’s own services.
Some famous successful examples of this strategy include Microsoft and PC hardware: PC hardware became a commodity, while Microsoft came close to monopolizing the OS and extracted huge profits. Or Apple’s App Store: the complement to the phone is apps, which have become a cheap commodity under immense competitive pressure, while Apple became the most valuable company in the world. Gwern has a great post on the topic.
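To make the mechanism concrete, here is a toy calculation (all numbers are made up purely for illustration): if a user values the platform-plus-complement bundle at some fixed amount, then the cheaper the complement becomes, the more of that value is left for the platform to capture.

```python
# Toy illustration of "commoditize the complement"; all numbers are invented.
# Assume a user values the bundle (platform + complement) at $100 in total.
BUNDLE_VALUE = 100.0

for complement_price in (60.0, 30.0, 5.0):
    # Whatever the user doesn't spend on the complement is value the
    # platform owner can, in principle, capture for itself.
    platform_capturable = BUNDLE_VALUE - complement_price
    print(f"complement at ${complement_price:5.2f} -> "
          f"platform can capture up to ${platform_capturable:.2f}")
```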
The future Meta aims for is:
Meta becomes the platform of virtual reality (Metaverse).
People basically move there.
Most of the addictive VR content is generated by AIs, which is the complement.
For this strategy to succeed, it’s quite important to have a thriving ecosystem of VR producers competing on which content will be the most addictive, or will hack human brains the fastest. Why an entire ecosystem? Because it fosters more creativity in brain hacking. Moreover, if the content were produced by Meta itself, it would be easier to regulate.
Different arguments push back against ideological open-source absolutism: unless you believe absolutely every piece of information should be freely distributable, you already accept that there are some conditions under which certain information should not be public.
Other clearly important narratives to map seem to include at least:
“It’s about West vs. China”
Hopefully losing traction, with China participating in the recent summit and top scientists from China signing letters calling for regulation.
“It’s about near-term risks vs. hypothetical sci-fi”
Hopefully losing traction, with anyone now being able to interact with GPT-4.
Strong upvoted.
I think that this particular dystopian outcome is Moloch, not “aimed” malevolence; being “aimed at a dystopian outcome” is an oversimplification of a two-level game: complex internal conflict within the company, running in parallel with external conflict with other companies. For example, stronger AI and stronger brain-hacking/auto-analysis lets them reduce the risk of users dropping below 2 hours of use per day (giving their platform a moat and securing the company’s value), while simultaneously reducing the risk of users spending 4+ hours per day, which invites watchdog scrutiny. More AI means more degrees of freedom to reap the benefits of addiction with fewer of the unsightly bits.
I’ve previously described a hypothetical scenario where:
Likewise, even if companies in both the US and China currently seem to eschew brain-hacking paradigms, they might reverse course at any time, especially if brain-hacking truly is the superior move for a company or government to make in the context of the current multimodal ML paradigm, and especially given the current Cold-War-style state of US-China affairs.
Your and Gwern’s “commoditize the complement” point is now a very helpful gear in my model, both for targeted influence tech and for modelling the US and Chinese tech industries more generally. Thank you. Also, I had either forgotten or failed to realize that a thriving community of human creators allows more intense influence strategies to be discovered by multi-armed bandit algorithms, rather than the search being bottlenecked only by the algorithms themselves or by user/sensor data.
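For concreteness, a minimal epsilon-greedy sketch of that bandit point (the “content variants”, engagement rewards, and all numbers are hypothetical stand-ins, not anything a real platform exposes): a larger pool of creator-supplied variants simply gives the bandit a better best arm to discover.

```python
import random

# Minimal epsilon-greedy bandit over "content variants" supplied by a creator
# ecosystem. Rewards are a stand-in for engagement; everything is simulated.

def simulate(num_variants, steps=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    true_rates = [rng.random() for _ in range(num_variants)]  # hidden engagement rates
    counts = [0] * num_variants
    estimates = [0.0] * num_variants
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(num_variants)                          # explore
        else:
            arm = max(range(num_variants), key=estimates.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]      # running mean
        total_reward += reward
    return total_reward / steps, max(true_rates)

# More creator-made variants -> a more "intense" best strategy is available to find.
for pool_size in (10, 100, 1000):
    achieved, best = simulate(pool_size)
    print(f"{pool_size:>4} variants: achieved {achieved:.3f}, best available {best:.3f}")
```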
One possible framing is hubristic overconfidence vs. humble caution.
We aren’t saying AGI will overthrow humanity soon if we’re not careful; we’re saying it could. Everyone saying that’s ridiculous is essentially saying “hold my beer!” while attempting a stunt nobody has ever pulled off before. They could be right that it will be easy enough, they’re smart enough, and they will know when the danger approaches. But they’re gambling the future of all humanity on that confidence.
Experts have widely varying opinions on the dangers of AGI. The humble position is that we don’t know what’s possible, and should therefore act in accordance with a very broad distribution over timelines, alignment difficulty, and achievable levels of coordination.
That framing won’t make the most concerned among us happy. It will result in people who aren’t long-termists wanting to approach AGI fast enough to save themselves or their children from a painful death from natural causes. But it might be an acceptable compromise, while we gather and analyze more information.
How about “It’s not proven yet that vastly super-intelligent machines (i.e. >10x peak human intelligence) are even possible.” as a possible frame?
I can’t see a counterargument to it yet.
Even if we only have smartest-human-level models, you can spawn 100,000 copies running at 10x speed and organize them along the lines of “one model checks whether the output of another model displays cognitive biases”. You may not get a “design nanotech in 10 days” level of capability, but you still get something smarter than any organized group of humans.
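A minimal sketch of the kind of organization I mean, where `propose` and `check_for_bias` are placeholder stand-ins for generator and checker copies of a human-level model (neither is a real API, and the scoring heuristic is fake):

```python
from concurrent.futures import ThreadPoolExecutor

def propose(task: str, seed: int) -> str:
    """Placeholder for one 'generator' copy of the model."""
    return f"candidate answer {seed} for: {task}"

def check_for_bias(answer: str) -> float:
    """Placeholder for a 'checker' copy scoring another copy's output.
    Higher means fewer apparent cognitive biases (fake heuristic)."""
    return 1.0 / (1.0 + len(answer) % 7)

def best_answer(task: str, num_copies: int = 100) -> str:
    # Run many generator copies in parallel, then have checker copies score
    # each candidate and keep the one the checkers object to least.
    with ThreadPoolExecutor(max_workers=32) as pool:
        candidates = list(pool.map(lambda s: propose(task, s), range(num_copies)))
        scores = list(pool.map(check_for_bias, candidates))
    return max(zip(scores, candidates), key=lambda pair: pair[0])[1]

print(best_answer("design a safer reactor"))
```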
Hmm. I’ve not seen any research about that possibility, which is obvious enough that I’d expect to see it if it were actually promising. And naively, it’s not clear that you’d get more powerful results from using 1M times the compute this way, compared to more direct scaling.
I’d put that in the exact same bucket as “not known if it’s even possible”.
Such a possibility is explored at least here: https://arxiv.org/abs/2305.17066, but that’s not the point. The point is that even in a hypothetical world where scaling laws and algorithmic progress hit a wall at smartest-human level, you can do this and get an arbitrary level of intelligence. In the real world, of course, there are better ways.
How do you know that’s possible?
There is definitely enough matter on Earth to sustain an additional 100k human brains with signal speed 1000 m/s instead of 100 m/s? I actually can’t imagine how our understanding of physics would have to be wrong for this not to be possible.
I think you’re using a different sense of the word “possible”. In a simplified physics model, where mass and energy are easily transformed as needed, you can just wave your hands and say “there’s plenty of mass to use for computronium”. That’s not the same as saying “there is an achievable causal path from what we experience now to the world described”.
Did you misunderstand my question?
How does the total mass of the Earth, or “signal speed 1000 m/s instead of 100 m/s”, demonstrate that you know it’s possible?
The only way it could be impossible is if the amount of compute needed to run one smart-as-the-smartest-human model were so huge that we would need to literally disassemble the Earth to run 100,000 copies. That’s quite unrealistic, because a similar amount of compute for actual humans fits inside an actual small cranium.
Why is the amount of matter in a human brain relevant?