Update: Eliezer has agreed to let me edit the Arbital article to follow more standard usage nowadays, with ‘pivotal acts’ referring to good gameboard-flipping actions. The article will use ‘existential catastrophe’ to refer to bad gameboard-flipping events, and ‘astronomically significant event’ to refer to the superset. Will re-quote the article here once there’s a new version.
The term ‘pivotal act’ in the context of AI alignment theory is a guarded term referring to actions that will make a large positive difference a billion years later. Synonyms include ‘pivotal achievement’ and ‘astronomical achievement’.
We can contrast this with existential catastrophes (or ‘x-catastrophes’), events that will make a large negative difference a billion years later. Collectively, this page will refer to pivotal acts and existential catastrophes as astronomically significant events (or ‘a-events’).
‘Pivotal event’ is a deprecated term for astronomically significant events, and ‘pivotal catastrophe’ is a deprecated term for existential catastrophes. ‘Pivotal’ was originally used to refer to the superset (a-events), but AI alignment researchers kept running into the problem of lacking a crisp way to talk about ‘winning’ actions in particular, and their distinctive features.
Usage has therefore shifted such that (as of late 2021) researchers use ‘pivotal’ and ‘pivotal act’ to refer to good events that upset the current gameboard—events that decisively settle a win, or drastically increase the probability of a win.
New “pivotal act” page: