Yep, I know and understand the model you describe. Let’s call it “AI in a box explodes”. I give it less weight than some other people.
Other models are basically everything else. Some specific examples:
I. A gradually increasing proportion of corporate decision-making is being automated, using systems that are initially slightly better than the current way of managing corporations, but not in a way that gives any one player a decisive strategic advantage. Everything gets faster and faster, but in a continuous way. In this trajectory, geopolitics changes a lot along the way.
The same, more abstractly: the power of existing superagents grows as they become less constrained by running on human brains or having human owners.
Various possible x-risk attractor states here include:
- “ascended economy”-like
- “consequentialist superintelligence in a box” gets constructed later anyway, and explodes, but note that before this there is a hingey period in which both geopolitics and the resources available to alignment research look very different from today
II. Narrow “STEM” AI systems cause progress on powerful technologies (e.g. fusion or nanotech). This has clearly visible results, and leads to regulation.
III. Narrow “persuasion/memetics in silico” systems destabilize politics / social epistemics / … with large consequences (e.g. triggering a great power war).
IV. A narrow “cybersec” AI system causes a major disaster, and the world reacts.
General classes of scenarios are:
- most continuous-takeoff scenarios in which states keep roughly as large a share of power as they have today (which is more than the typical libertarian-leaning LW audience expects)
- most scenarios with a moderately sized, non-x-risk, AI-mediated catastrophe
- CAIS-like worlds
Robin Hanson’s ems always seemed implausible to me as the first route to AGI. At least for me, the basic argument against was always “by the time we know how to run ems, we will have learned enough design tricks from evolution to build non-em AGI”. The debate certainly isn’t the best set of arguments for continuity.
Also, going back to the debate, it’s worth noting that, so far, positive feedback loops around AI have routed mostly through the broader economy, not via AIs editing their own source code. (Eliezer would argue that this is still likely to happen later.)
Also, progress on the most powerful ML models in the past few years usually hasn’t looked like someone having a eureka moment, coding in their garage, and getting surprising results. The largest results have come from labs spending millions of dollars on compute, with teams of people who understood they were doing something big and possibly impactful.
Also, as for who got various predictions right: my impression is that Eric Drexler’s CAIS is closer to how the world actually looks than either Eliezer’s or Robin Hanson’s ideas.