Analysis of key AI analogies

The following is an analysis of seven prominent AI analogies: aliens, the brain, climate change, electricity, the Industrial Revolution, the neocortex, and nuclear fission. You can find longer versions of these as separate blog posts on my Substack.

0. Why?

AI analogies have a real-world impact

  • For better or worse, analogies play a prominent role in the public debate about the long-term trajectory and impacts of AI.

  • Analogies play a role in designing international institutions for AI (e.g. CERN, IPCC) and in legal decisions.

  • Analogies as mental heuristics can influence policymakers in critical decisions. Changes in AI analogies can lead to worldview shifts (e.g. Hinton).

  • Having worked with a diverse set of experts, my sense is that their thinking is anchored by wildly different analogies.

Analogies can be misleading

  • Boaz Barak (“Metaphors for AI, and why I don’t like them”) and Matthew Barnett (“Against most, but not all, AI risk analogies”) have already discussed the shortcomings of analogies on this forum.

  • Every individual analogy is imperfect. AI is its own thing, and there is simply no precedent that would closely match the characteristics of AI across 50+ governance-relevant dimensions.

  • Overly relying on a single analogy without considering differences and other analogies can lead to blind spots, overconfidence, and overfitting reality to a preconceived pattern.

Analogies can be useful

  • When facing a complex, open-ended challenge, we do not start with a system model. It is not clear which domain logic, questions, scenarios, risks, or opportunities we should pay attention to. Analogies can be a tool to explore such a future with deep uncertainty.

  • Analogies can be an instrumental tool in advocacy to communicate complex concepts in a digestible and intuitively appealing way.

My analysis is written in the spirit of exploration without prescribing or proscribing any specific analogy. At the same time, as a repository, it may still be of interest to policy advocates.

1. Aliens (full text)

Basic idea

  • comparison to first contact with an alien civilization

  • symbolizing AI’s underlying non-human reasoning processes, masked by human-like responses from RLHF

Selected users

Selected commonalities

  1. Superhuman power potential: Any extraterrestrial civilization we encounter would likely be either far less advanced than us or vastly more advanced; in the latter, technologically mature case, it would be comparable to a potential future digital superintelligence.

  2. Digital life: Popular culture often envisions aliens as humanoid biological beings, but technologically mature aliens are likely digital, both because digital intelligence escapes biological constraints and because digital beings can be more easily transported across space. The closest Earthly equivalent to such digital aliens is artificial intelligence.

  3. Terraforming: Humans shape their environment for biological needs, while “terraforming” by digital aliens would require habitats like electricity grids and data centers, closely resembling a rapid build-out of AI infrastructure. Pathogens from digital aliens would be unlikely to affect humans directly but could impact our information technology.

  4. Consciousness: We understand neural correlates of consciousness in biological systems but not in digital systems. The consciousness of future AI and digital aliens remains a complex and uncertain issue.

  5. Non-anthropomorphic minds: AI and aliens encompass a vast range of possible minds shaped by different environments and selection pressures than human minds. AI can develop non-human strategies, especially when trained with reinforcement learning. AI can have non-human failure modes such as through adversarial attacks. Future AI may have modular and superhuman bandwidth of sensors and effectors.

Selected differences

  1. Origin and human agency: AI originates from humans on Earth, unlike extraterrestrial intelligences. Humans have some control over AI’s development and deployment, unlike unpredictable extraterrestrial encounters. The development of AI is arguably also more gradual than the sudden arrival of mature aliens.

  2. Human-AI Interdependence: Aliens would exist autonomously from our civilization. In contrast, AI is increasingly integrated into human infrastructure, creating mutual dependence. This interdependence shifts from AI depending on humans to humans depending on AI.

  3. “First contact”: A first contact with aliens would likely involve centralized, limited communication for containment or diplomatic-representation purposes. AI-human interaction is decentralized and happens at high speed and volume across millions of devices.

  4. Shared language and familiarity with human culture: AI is trained on human data and deeply embedded in human culture, making communication easier. This familiarity will grow as AI becomes more integrated into personal aspects of human life.

  5. Access to and editability of AI connectome: The weights and biases of artificial neural networks are transparent and editable, allowing for potential alignment with human goals (if we could interpret them). In contrast, the brains of aliens would likely be encrypted or inaccessible.

Bonus trivia: Does worshipping aliens pay off?

  • If we look at intrahuman “first contact” cases, we can find examples of the less developed civilization worshipping the more developed arrivals as “Gods.” On some Pacific islands, cargo cults developed after US planes dropped supplies during the Second World War.

  • The US Air Force has never bothered to return to Melanesia, where islanders have religiously worshipped the US Air Force for its cargo drops for more than 75 years now.

  • Yet, the US Air Force has been conducting an annual, humanitarian “Operation Christmas Drop” for Pacific islands since 1952 - for the non-worshippers of Micronesia.

2. Brain (full text)

Basic idea

  • Artificial neural networks = biological neural networks

  • brain as source of inspiration for AI algorithms and architectures

  • helps to inform the scaling hypothesis and long-term predictions on AI

Selected users

  • biology-inspired or connectionist school of AI (Hinton, Bengio, Sutskever, Hassabis, LeCun etc.)

Selected commonalities

  1. Basic neuron logic: The McCulloch-Pitts model (1943) conceptualized artificial neurons based on simple logical operations, forming the foundation of artificial neural networks. Basic inhibition and excitation: Biological neurons use neurotransmitters like glutamate (excitatory) and GABA (inhibitory). Artificial neurons use positive or negative weights to simulate this effect.

  2. Multimodal neurons: Human neurons respond to specific individuals regardless of representation form (photos, names). AI has developed multimodal neurons that similarly respond to various representations of the same subject.

  3. Reinforcement learning: Inspired by psychology (Skinner, Pavlov) and dopamine pathways in the brain, reinforcement learning in AI involves learning from actions’ consequences via rewards/​punishments. Temporal-Difference Learning: Enhances reinforcement learning by using internal value functions to assess and reinforce behaviors continuously, rather than relying solely on sparse external rewards.

  4. Intelligence increases with scale: Larger brain size (relative to body) correlates with higher intelligence in species. Similarly, AI performance improves predictably with increased training data, compute, and model parameters.

  5. Cultural learning: Cultural learning allows humans to transcend genetic information limits. AI can now directly access and utilize this accumulated human knowledge.
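The neuron logic from point 1 can be made concrete with a minimal sketch. This is illustrative code, not any specific library: a McCulloch-Pitts-style unit sums weighted inputs and fires on crossing a threshold, with positive weights playing the excitatory (glutamate-like) role and negative weights the inhibitory (GABA-like) role.

```python
# Minimal McCulloch-Pitts-style neuron: weighted inputs, hard threshold.
# Positive weights act as excitatory inputs, negative weights as inhibitory.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two excitatory inputs and one strong inhibitory input:
print(mcculloch_pitts([1, 1, 0], [1, 1, -2], 2))  # excitation alone fires -> 1
print(mcculloch_pitts([1, 1, 1], [1, 1, -2], 2))  # inhibition vetoes firing -> 0
```

The strong negative weight reproduces the “veto” character of inhibition: a single active inhibitory input can silence the neuron regardless of excitation.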

Selected differences

  1. Neuron activation: Biological neurons fire in a binary, all-or-nothing manner and accumulate charge over time from their inputs, whereas artificial neurons output continuous activation values. The brain also uses a variety of specialized neurotransmitters and hormones affecting mood and motivation.

  2. Backpropagation: We do not fully understand how learning in the human brain works. However, it does not appear to use backpropagation as its learning algorithm.

  3. Speed: The human brain has no unified clock speed, and neurons usually cannot fire more than about 250 times per second. In contrast, modern computers run on a unified clock and execute instructions several billion times per second. Signals between biological neurons travel at up to 120 meters per second, while signals in a chip can in principle travel optically at up to the speed of light, about 300,000,000 meters per second.

  4. Working memory: Humans have very limited working memory. The most cited study on the capacity of the human brain to hold different elements in mind simultaneously suggests an upper limit of about 7 elements (plus or minus two).

  5. Speed of evolution of size: The computing power going into large AI models grows by about 4.2x per year, and the parameter count grows by about 2.8x per year. In contrast, the average doubling period for brain volume, from Australopithecus to early Homo sapiens, was approximately 1.8 million years.

  6. Access to connectome: We have access to the full connectome of AI models. For comparison, the first fully reconstructed connectome of a biological neural network belongs to the roundworm C. elegans.

  7. Ownership and distribution: Brains are “owned” by individual humans. The infrastructure of artificial neural networks is owned by tech giants, such as Amazon, Microsoft, and Google. There are no “brain billionaires” who own more neocortex than entire countries.
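The speed and scaling gaps in points 3 and 5 can be quantified with some back-of-the-envelope arithmetic. The 3 GHz clock below is an assumed round figure for a modern chip; the other numbers come from the points above.

```python
import math

# Back-of-the-envelope ratios for the speed and scaling gaps above.
NEURON_MAX_FIRING_HZ = 250        # upper bound for biological firing (point 3)
CHIP_CLOCK_HZ = 3e9               # assumed typical modern clock speed
AXON_SIGNAL_M_PER_S = 120         # fastest biological signal speed (point 3)
LIGHT_SPEED_M_PER_S = 3e8         # theoretical optical maximum in a chip (point 3)

clock_ratio = CHIP_CLOCK_HZ / NEURON_MAX_FIRING_HZ        # ~12 million x
signal_ratio = LIGHT_SPEED_M_PER_S / AXON_SIGNAL_M_PER_S  # ~2.5 million x

# Point 5: 4.2x/year compute growth vs ~1.8 million years per brain doubling.
compute_doubling_years = math.log(2) / math.log(4.2)  # ~0.48 years
evolution_ratio = 1.8e6 / compute_doubling_years      # how much faster AI scales

print(f"clock-speed gap:  ~{clock_ratio:,.0f}x")
print(f"signal-speed gap: ~{signal_ratio:,.0f}x")
print(f"AI compute doubles every ~{compute_doubling_years:.2f} years, "
      f"~{evolution_ratio:.1e}x faster than brain-volume evolution")
```

Note that a 4.2x-per-year growth rate implies a doubling time of roughly half a year, which is why the scaling gap versus biological evolution comes out at about six or seven orders of magnitude.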

3. Climate Change (full text)

Basic idea

  • AI requires global cooperation like climate change

  • We need an “IPCC for AI”

Selected users

Selected commonalities

  1. Complexity: Both climate change and AI development are highly complex and uncertain due to their dependence on global human activity with feedback loops and non-linear effects.

  2. Trend and hazards: Climate change is a long-term trend affecting the frequency and intensity of weather-related hazards. Similarly, the widespread diffusion and integration of AI into society increases the risk surface.

  3. Global public goods: For climate change, reducing emissions benefits globally but costs locally, leading to potential free-riding. Similarly, some level of global cooperation is needed to prevent AI arms races and misuse by criminals.

  4. Powerful private sector: Major fossil fuel companies have large market caps, similar to leading AI and tech companies like NVIDIA, TSMC, Microsoft, Alphabet, Amazon, and Meta.

  5. Concerns about existential risk: There are widespread public concerns that climate change could be an existential threat. For AI, a large portion of AI scientists believe that we should take existential risk from AI seriously.

Selected differences

  1. Scientific consensus: Climate science has a strong consensus that climate change is real and caused by humans. In AI, there is more perceived disagreement among researchers about the potential risks and future impacts of AI.

  2. System-orientation: Climate science focuses on the planetary-scale system, led by independent academics and public networks. AI focuses on individual technological artifacts, primarily driven by private sector experts, with less emphasis on system-wide monitoring.

  3. Wizard vs. prophet vision: The climate debate is dominated by the “prophet” vision, advocating for sustainable living and reduced consumption to be within planetary boundaries. The AI debate is more aligned with the “wizard” vision, emphasizing exponential growth and take-off.

  4. Time horizon: Climate change projections and goals extend far into the future, with detailed assessments and long-term targets (e.g., 2100). AI projections are much shorter, with national strategies typically looking only about 10 years ahead.

  5. Speed of change: Climate change progresses slowly, with significant changes taking decades to centuries. AI development is rapid, with exponential growth in capabilities and applications, causing dramatic impacts within a few years.

Bonus trivia: There was a shift from “wizard” to “prophet” vision in climate around 1970

  • The default assumption for the future pre-1970 seemed to be artificial climate control with relatively little concern about inadvertent climate change

  • Today the default vision is that climate change is an existential threat, and there is strong ideological opposition to solar geoengineering

4. Electricity (full text)

Basic idea

  • AI is like electrification (US, ca. 1880-1950)

  • AI is a general-purpose technology

Selected users

Selected commonalities

  1. Cross-industry applications, complements, productivity: Both electricity and AI have some of the classic hallmarks of general-purpose technologies, meaning they have widespread applications across numerous industries, they have innovational complements, and we expect them to boost productivity.

  2. Switch from in-house capacity to an outsourced service: Before widespread electrification, power was primarily generated in-house. After 1900, the availability of cheaper, centrally produced power led to a shift towards outsourcing power production, adopting an electricity-as-a-service model.

    Large cloud providers, who own significant AI hardware, offer AI compute as a service, allowing companies to use AI capabilities without owning the hardware. This trend could lead to a decrease in in-house intellectual labor and an increase in the use of flexible, outsourced AI intelligence acting as “AI remote workers” or an “exocortex” for companies.

Selected differences

  1. No new transmission infrastructure: Electrification was in large part about building a new transmission network connecting every home (water and telecommunications are the other such networks). AI does not require any new transmission network; rather, AI is distributed over existing data networks as part of Internet traffic.

  2. Local vs global market: Electricity is location-dependent due to transmission losses, leading to varying costs and no global market. AI can function globally without transmission losses, resembling the internet’s integrated market but must comply with local laws.

  3. Public utility regulation: The electricity grid and market are heavily regulated as a natural monopoly with public service obligations. AI is not subject to comparable regulation.

  4. There is no free tier of electricity: AI services often offer a free tier, unlike electricity, which always has to be paid for. AI’s cost structure and rapid widespread access differ from electricity’s historical transition from luxury to necessity.

  5. Degree of commodification: Electricity has a uniform quality. AI models differ in significant ways so that AI tokens are not equally commodified. No one will ever run a medical device on electricity from one power plant and then from another power plant just to see if it reaches the same conclusion. In contrast, it is reasonable to ask for a second or even third opinion on medical diagnosis from different AI doctors.

  6. Labor substitution: Electrification was not a labor substitution. In factories it was a transition from one artificial form of energy to another. As such, it never caused significant worries about massive job losses. This stands in contrast to the First Industrial Revolution, in which many laborers lost their jobs. In terms of labor turnover and related societal unrest and pushback, the First Industrial Revolution is a better fit to AI than electrification.

  7. Interpretability, agency, autonomy: Electric current as it comes out of your socket is a controlled and understood physical phenomenon with no cognition, goals, or agency; we understood how electricity functions at the time of electrification. In contrast, the inner workings of large neural networks are still poorly understood. AI companies are not just building general-purpose tools but general-purpose agents that can follow instructions with many intermediate steps and use tools themselves, and we should expect these agents to gain more and more autonomy over time.

Bonus trivia: But AI *is* electricity

  • Literally. On some level, AI is just electrons moving on chips. Or, if you want, a process to turn electricity into heat.

  • However, much like saying humans are water (humans are literally 60% water by weight), this low level of analysis is inadequate for a meaningful analysis.

5. Industrial Revolution (full text)

Basic idea

  • AI will create another Industrial Revolution, which is understood in one of the following ways:

    • productivity revolution in the industrial sector

    • shift of the dominant employment sector

    • GDP growth acceleration

    • The Industrial Revolution replaced human muscle power with mechanical power. AI will do the same for human brain power.

Selected users

Selected commonalities

  1. New knowledge access institutions: The Industrial Revolution saw the emergence of institutions that significantly increased the availability and accessibility of knowledge (e.g., Royal Society & various scientific societies). Similarly, the AI Revolution coincides with increased knowledge accessibility through the Internet and future AI could evolve into advanced personal tutors.

  2. Invention of a new method of invention: The Industrial Revolution coincided with a shift from individual inventors to institutionalized R&D. AI can already be seen as a new method of invention that can predict patterns from vast data sets (e.g., AlphaFold, GNoME). In the future, AI might evolve into “AI scientists,” potentially automating scientific and technological advancement.

  3. Reorganization & deskilling: The Industrial Revolution restructured production to leverage artificial energy and there was a deskilling of labor. AI is expected to automate tasks in the service and knowledge economy that could also deskill labor.

  4. Labor-capital conflict over distribution of surplus: Distribution of productivity gains depends on whether workers or capital owners have more leverage. For the first ca. 50 years of the Industrial Revolution only capital owners benefited, while real wages for workers stagnated or declined during “Engels’ pause.” AI’s productivity gains are expected to displace some human labor, leading to a struggle over surplus distribution. The deskilling effect of AI could weaken workers’ negotiating power, favoring capital owners.

  5. Potential for new political systems & ideologies: The Industrial Revolution led to significant political changes, including the end of feudalism, new imperialism, and the rise of labor movements, socialism, and communism. The AI revolution may similarly lead to new political systems and ideologies. Potential outcomes include universal basic income, techno-authoritarianism, technopolar “snow crash,” AI-led “singleton,” or technocapitalism without humans.

Selected differences

  1. Industrial robots vs. knowledge service LLMs: The Industrial Revolution primarily transformed the industrial sector, while the AI revolution is likely to have a broader impact, particularly on the service sector and knowledge economy.

  2. Demographics: During the Industrial Revolution the population was younger. The older demographics of the AI revolution mean societal issues will center on retirement rather than child labor, high automation may address labor shortages caused by an aging population, and the lower proportion of young, risk-taking individuals may reduce the risk of political instability.

  3. Speed of transformation: The transition from human to artificial cognitive power is expected to be much faster than the transition from human to artificial muscle power during the Industrial Revolution.

  4. Energy-intelligence ratio: The Industrial Revolution expanded available energy, while the AI revolution expands available intelligence. This shift changes the energy-to-intelligence ratio, with AI making intelligence abundant and energy relatively scarce. This dynamic could create significant changes in the economy and human labor’s role.

  5. Potential loss of control over the economy: The Industrial Revolution empowered humanity, but the AI revolution could lead to a gradual loss of human control over the economy. Future AI agents could gain economic and political power through legal and financial means, leading to a world where AI agents vastly outnumber and outperform humans.

Bonus trivia: The humble clover’s contribution to the Industrial Revolution

  • The British agricultural revolution preceded the Industrial Revolution and contributed to urbanization and a workforce for the Industrial Revolution

  • Agricultural productivity at the time was limited by nitrogen. The key innovation of the British agricultural revolution was introducing clover into crop rotation: clover fixes 3-5x more nitrogen than other nitrogen-fixing plants (this was before guano and synthetic ammonia).

  • So, the clover has rightly become a common symbol of good luck in the UK, Ireland and much of Western Europe.

6. Neocortex (full text)

Basic idea

  • evolution of neocortex = evolution of synthetic neocortex

  • limbic system:neocortex = neocortex:AI

Selected users

Selected commonalities

  1. Alignment with goals: The neocortex aligns with personal goals, suggesting personal AIs should too, without necessarily integrating with the brain via a neural interface.

  2. Enhanced planning and prediction: The neocortex’s role in planning and assessing success probabilities suggests that personal AIs could further enhance these abilities, regardless of brain-computer interface integration.

  3. Emergent abilities: Just as the enlarged neocortex brought unforeseen capabilities, scaling artificial neural networks might lead to new, unpredictable cognitive abilities.

Selected differences

  1. Signal speed: A significant speed gap exists between the biological brain and potential exocortex due to the latter’s potential for operating at gigahertz speeds.

  2. Consciousness: The exocortex, unlike the human brain, would likely not possess consciousness, leading to a greater proportion of a person’s total cognition being unconscious.

  3. Speed of brain evolution: The evolutionary doubling period of human brain volume is approximately 1.8 million years. The training compute for large AI models has doubled every 6 months for the last 14 years.

  4. Upper limit of exocortex size: The size of the neocortex is limited by the human skull which is in turn limited by the birth canal. There is no fixed upper limit for exocortex volume.

  5. Distribution and variety of exocortex: The distribution of neocortex among humans is fairly even. There are no “brain billionaires”. In contrast, computing power for the exocortex is distributed unequally within and between countries.

  6. Ownership of exocortex: You own your brain. In contrast, big tech owns the AI cloud capacity.

  7. Autonomous viability: While the neocortex cannot function independently of the human body, large AI models will be increasingly autonomous.

Bonus trivia: Triune Brain Theory

  • The popular idea that the older limbic system controls the more recently evolved and much more powerful neocortex comes from Paul MacLean’s “Triune Brain Theory”

  • The theory suggests that the brain evolved in sequential layers, so that reptiles would have only a “reptilian brain.” Yet all vertebrates share similar brain parts, which have merely reorganized and grown differently.

  • Modern neuroscience, using advanced imaging techniques, shows that high-level brain functions arise from dynamic interactions across multiple brain regions, contradicting the triune brain’s idea of quasi-autonomous parts.

  • Assertions about a hierarchical relationship between the limbic system and the neocortex, in which the “monkey brain” controls the cortex, are misleading. The prefrontal cortex itself plays multiple important roles in motivation.

7. Nuclear fission (full text)

Basic idea

  • nuclear weapons = AI, in terms of factors such as risk, containment, or power

  • some have called for an “IAEA for AI”, a “CERN for AI”, or a “Manhattan Project for AI”

Selected users

Selected commonalities

  1. Ideas of a chain reaction: A nuclear chain reaction occurs when an atom splits into two smaller atoms, releasing 2-3 neutrons and energy, which then cause further splits, creating a self-sustaining cascade. The concept of an intelligence explosion involves an AI system improving itself to the point of becoming vastly superhuman in a short time. This idea is scientifically controversial but positive feedback loops in AI development, especially in compute, data, and algorithms, could potentially lead to such an explosion.

  2. Conflicted Scientists

    1. Concerns and Regrets: Key contributors to nuclear bomb development, like Einstein and Oppenheimer, expressed concerns and regrets about their work’s societal impact. Similar concerns are seen among AI researchers like Geoffrey Hinton and Yoshua Bengio.

    2. Discovery as Motivation: Oppenheimer described the allure of scientific discovery as a motivation that overshadowed concerns about societal impacts. Geoffrey Hinton echoed this sentiment, stating the excitement of discovery drove his (past) research despite concerns.

    3. Shifting publication norms: Initially open, nuclear research publication norms shifted towards secrecy due to the potential dangers. This shift was led by concerned scientists like Leo Szilard. AI research is experiencing a similar shift, with leading labs becoming more conservative about publishing details due to potential misuse risks.

  3. Concerns about existential risk: The destructive potential of nuclear weapons raised existential risk concerns, leading to efforts for international control and the metaphorical “Doomsday Clock.” AI scientists estimate a significant risk of AI leading to catastrophic outcomes, comparable to nuclear war.

  4. One-Worldism: The development of nuclear weapons prompted calls for a world government to manage their existential risks, with notable advocacy from scientists like Albert Einstein. Some have suggested similar ideas for AI (“singleton”, “high-tech panopticon”) though this remains a less mainstream view.

  5. Ideas for international control through supply chain bottlenecks: International control efforts for nuclear technology have largely focused on bottlenecks in the supply chain, particularly uranium enrichment and plutonium production, to prevent weapon proliferation. Similar control efforts are proposed for AI, focusing on the highly concentrated AI hardware supply chain as a point of international governance and verification.
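The chain-reaction logic from commonality 1 can be sketched as a toy model: each generation multiplies the event count by an effective factor k, and the difference between k > 1 and k < 1 is the difference between a self-sustaining cascade and a fizzle. The same exponential logic underlies the (controversial) intelligence-explosion argument, with capability in place of neutron count. The numbers below are purely illustrative.

```python
# Toy chain-reaction model: each generation, every event spawns k
# follow-up events on average (k = effective multiplication factor).
# k > 1 gives a self-sustaining cascade; k < 1 dies out.

def generations(k, n0=1.0, steps=10):
    """Event count per generation for multiplication factor k."""
    counts = [n0]
    for _ in range(steps):
        counts.append(counts[-1] * k)
    return counts

supercritical = generations(k=2.5, steps=10)  # fission-like cascade
subcritical = generations(k=0.9, steps=10)    # fizzles out

print(f"k=2.5 after 10 generations: {supercritical[-1]:.0f}")
print(f"k=0.9 after 10 generations: {subcritical[-1]:.3f}")
```

The policy-relevant feature of this dynamic is its sensitivity near k = 1: small changes in the multiplication factor flip the system between explosive growth and decay, which is why both nuclear control and proposed AI governance focus on keeping the relevant feedback loops below criticality.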

Selected differences

  1. Military vs. private sector: Nuclear fission originated as a military technology during World War II, with its first applications being bombs, followed by military submarines. Civilian use came later and was driven by political responses to Soviet advancements. AI development is led by the private sector and military applications are being adapted from civilian innovations.

  2. Ability to discriminate: Nuclear weapons are too large to effectively discriminate between military and civilian targets. Target lists for nuclear war include major cities, not just military sites. AI in military applications, such as targeting systems, poses legal and ethical challenges but can be designed to discriminate between civilians and combatants.

  3. Deterrence logic: Nuclear weapons operate on deterrence by mutually assured destruction. Their strategic value lies in guaranteeing devastation through second-strike capability, independent of other military strengths. AI lacks a clear deterrence logic comparable to nuclear weapons. Signaling AI power is less straightforward, and AI systems are vulnerable to physical destruction (e.g., data centers).

  4. Ease of proliferation over time: Nuclear proliferation has become somewhat easier due to advances in technology and the spread of civilian nuclear energy, but it remains a challenging and prolonged process for most countries. AI proliferation is becoming exponentially easier due to improvements in hardware and algorithms. The rapid pace of AI advancements makes controlling the spread of any absolute level of capability more difficult, unlike the slower proliferation of nuclear technology.

  5. Autonomy, agency: Nuclear weapons are powerful tools but lack intelligence, self-replication, or self-improvement capabilities. They are fully understood and designed by humans. AI systems possess increasing levels of autonomy and the potential to self-improve and create new technologies. AI represents a new method of invention that can significantly impact various technological areas.

Bonus trivia: The impact of Szilard’s silence is more complex than the narrative

  • There is a popular claim that Szilard’s fight to change publication norms led to Fermi’s self-censorship, which in turn led Germany to cripple its program by choosing heavy water over graphite as a moderator. This narrative originates from Richard Rhodes’ 1986 book on the US nuclear program.

  • In 1989, Mark Walker wrote a book on the German program based on original German sources. Bothe’s measurements of graphite as a moderator did yield misleading results due to insufficient purity. However, Hanle realized that this was caused by contamination and informed the Heereswaffenamt, including instructions for producing sufficiently pure graphite. The German decision to use heavy water rather than pure graphite as a moderator was primarily based on economic considerations.