[Update: As of today, Nov. 16 (after checking with Eliezer), I’ve edited the Arbital page to define “pivotal act” the way it’s usually used: to refer to a good gameboard-flipping action, not e.g. ‘AI destroys humanity’. The quote below uses the old definition, where ‘pivotal’ meant anything world-destroying or world-saving.]
Eliezer’s using the word “pivotal” here to mean something relatively specific, described on Arbital:
The term ‘pivotal’ in the context of value alignment theory is a guarded term to refer to events, particularly the development of sufficiently advanced AIs, that will make a large difference a billion years later. A ‘pivotal’ event upsets the current gameboard—decisively settles a win or loss, or drastically changes the probability of win or loss, or changes the future conditions under which a win or loss is determined.
[...]
Examples of pivotal and non-pivotal events
Pivotal events:
non-value-aligned AI is built, takes over universe
human intelligence enhancement powerful enough that the best enhanced humans are qualitatively and significantly smarter than the smartest non-enhanced humans
upload humans and run them at speeds more comparable to those of an AI
prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)
design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)
a complete and detailed synaptic-vesicle-level scan of a human brain results in cracking the cortical and cerebellar algorithms, which rapidly leads to non-value-aligned neuromorphic AI
Non-pivotal events:
curing cancer (good for you, but it didn’t resolve the value alignment problem)
proving the Riemann Hypothesis (ditto)
an extremely expensive way to augment human intelligence by the equivalent of 5 IQ points that doesn’t work reliably on people who are already very smart
making a billion dollars on the stock market
robotic cars devalue the human capital of professional drivers, and mismanagement of aggregate demand by central banks plus burdensome labor market regulations is an obstacle to their re-employment
Borderline cases:
unified world government with powerful monitoring regime for ‘dangerous’ technologies
widely used gene therapy that brought anyone up to a minimum equivalent IQ of 120
Centrality to limited AI proposals
We can view the general problem of Limited AI as having the central question: What is a pivotal positive accomplishment, such that an AI which does that thing and not some other things is therefore a whole lot safer to build? This is not a trivial question because it turns out that most interesting things require general cognitive capabilities, and most interesting goals can require arbitrarily complicated value identification problems to pursue safely.
It’s trivial to create an “AI” which is absolutely safe and can’t be used for any pivotal achievements. E.g. Google Maps, or a rock with “2 + 2 = 4” painted on it.
[...]
Centrality to concept of ‘advanced agent’
We can view the notion of an advanced agent as “agent with enough cognitive capacity to cause a pivotal event, positive or negative”; the advanced agent properties are either those properties that might lead up to participation in a pivotal event, or properties that might play a critical role in determining the AI’s trajectory and hence how the pivotal event turns out.
In the conversations I’ve seen that use the word “pivotal”, the question is usually about pivotal acts we could take to end the acute x-risk period (things that make it the case that random people in the world can’t suddenly kill everyone with AGI or bioweapons or what-have-you). I.e., the focus is specifically on good pivotal acts.
IMO it’s confusing that Eliezer uses the word “pivotal” on Arbital to also refer to ways AI could destroy the world. If we’re talking about stuff like “what’s the easiest pivotal act?” or “how hard do pivotal acts tend to be?”, I’ll give wildly different answers if I’m including ‘ways to destroy the world’ and not just ‘ways to save the world’—destroying the world seems drastically easier to me. And I don’t know of an unambiguous short synonym for ‘good pivotal act’.
(Eliezer proposes ‘pivotal achievement’, but empirically I don’t see people using this much, and it still has the same problem that it re-uses the word ‘pivotal’ for both categories of event, thus making them feel very similar.)
Usually I care about either ‘ways of saving the world’ or ‘ways of destroying the world’—I rarely find myself needing a word for the superset. E.g., I’ll find myself searching for a short term to express things like ‘the first AGI company needs to look for a way-to-save-the-world’ or ‘I wish EAs would spend more time thinking about ways-to-use-AGI-to-save-the-world’. But if I say ‘pivotal’, this will technically include x-catastrophes, which is not what I have in mind.
(On the other hand, the concept of ‘the kind of AI that’s liable to cause pivotal events’ does make sense to me and feels very useful, because I think AGI gets you both the world-saving and the world-destroying capabilities in one fell swoop (though not necessarily the ability to align AGI to actually utilize the capabilities you want). But given my beliefs about AGI, I’m satisfied with just using the term ‘AGI’ to refer to ‘the kind of AI that’s liable to cause pivotal events’. Eliezer’s more-specifically-about-pivotal-events term for this on Arbital, ‘advanced agent’, seems fine to me too.)
Update: Eliezer has agreed to let me edit the Arbital article to follow the more standard usage nowadays, with ‘pivotal acts’ referring to good gameboard-flipping actions. The article will use ‘existential catastrophe’ to refer to bad gameboard-flipping events, and ‘astronomically significant event’ to refer to the superset.
New “pivotal act” page:
The term ‘pivotal act’ in the context of AI alignment theory is a guarded term to refer to actions that will make a large positive difference a billion years later. Synonyms include ‘pivotal achievement’ and ‘astronomical achievement’.
We can contrast this with existential catastrophes (or ‘x-catastrophes’), events that will make a large negative difference a billion years later. Collectively, this page will refer to pivotal acts and existential catastrophes as astronomically significant events (or ‘a-events’).
‘Pivotal event’ is a deprecated term for referring to astronomically significant events, and ‘pivotal catastrophe’ is a deprecated term for existential catastrophes. ‘Pivotal’ was originally used to refer to the superset (a-events), but AI alignment researchers kept running into the problem of lacking a crisp way to talk about ‘winning’ actions in particular, and their distinctive features.
Usage has therefore shifted such that (as of late 2021) researchers use ‘pivotal’ and ‘pivotal act’ to refer to good events that upset the current gameboard—events that decisively settle a win, or drastically increase the probability of a win.
Under this definition, it seems that “nuke every fab on Earth” would qualify as “borderline”, and every outcome that is both “pivotal” and “good” depends on solving the alignment problem.