IMO it’s confusing that Eliezer uses the word “pivotal” on Arbital to also refer to ways AI could destroy the world. If we’re talking about stuff like “what’s the easiest pivotal act?” or “how hard do pivotal acts tend to be?”, I’ll give wildly different answers if I’m including ‘ways to destroy the world’ and not just ‘ways to save the world’—destroying the world seems drastically easier to me. And I don’t know of an unambiguous short synonym for ‘good pivotal act’.
(Eliezer proposes ‘pivotal achievement’, but empirically I don’t see people using this much, and it still has the same problem that it re-uses the word ‘pivotal’ for both categories of event, thus making them feel very similar.)
Usually I care about either ‘ways of saving the world’ or ‘ways of destroying the world’—I rarely find myself needing a word for the superset. E.g., I’ll find myself searching for a short term to express things like ‘the first AGI company needs to look for a way-to-save-the-world’ or ‘I wish EAs would spend more time thinking about ways-to-use-AGI-to-save-the-world’. But if I say ‘pivotal’, this will technically include x-catastrophes, which is not what I have in mind.
(On the other hand, the concept of ‘the kind of AI that’s liable to cause pivotal events’ does make sense to me and feels very useful, because I think AGI gets you both the world-saving and the world-destroying capabilities in one fell swoop (though not necessarily the ability to align AGI to actually utilize the capabilities you want). But given my beliefs about AGI, I’m satisfied with just using the term ‘AGI’ to refer to ‘the kind of AI that’s liable to cause pivotal events’. Eliezer’s more-specifically-about-pivotal-events term for this on Arbital, ‘advanced agent’, seems fine to me too.)
Update: Eliezer has agreed to let me edit the Arbital article to follow more standard usage nowadays, with ‘pivotal acts’ referring to good gameboard-flipping actions. The article will use ‘existential catastrophe’ to refer to bad gameboard-flipping events, and ‘astronomically significant event’ to refer to the superset. Will re-quote the article here once there’s a new version.

New “pivotal act” page:
The term ‘pivotal act’ in the context of AI alignment theory is a guarded term to refer to actions that will make a large positive difference a billion years later. Synonyms include ‘pivotal achievement’ and ‘astronomical achievement’.
We can contrast this with existential catastrophes (or ‘x-catastrophes’), events that will make a large negative difference a billion years later. Collectively, this page will refer to pivotal acts and existential catastrophes as astronomically significant events (or ‘a-events’).
‘Pivotal event’ is a deprecated term for referring to astronomically significant events, and ‘pivotal catastrophe’ is a deprecated term for existential catastrophes. ‘Pivotal’ was originally used to refer to the superset (a-events), but AI alignment researchers kept running into the problem of lacking a crisp way to talk about ‘winning’ actions in particular, and their distinctive features.
Usage has therefore shifted such that (as of late 2021) researchers use ‘pivotal’ and ‘pivotal act’ to refer to good events that upset the current gameboard—events that decisively settle a win, or drastically increase the probability of a win.