Basilisk (cognitive)
(Redirected from “2019 Cannibal Flashmob Incident”)
(This article is about the cognitive hazard. For other uses, see Basilisk (disambiguation).)
A cognitive basilisk is a thought that a conscious system cannot think without radically altering its own operation, usually in destructive ways. The name comes from a mythical creature, the mere sight of which is supposedly lethal. (The actual legends mostly hold that the basilisk kills by looking at its victim; it is Medusa who kills by being looked at. Nevertheless, the name has stuck.) While it is disputed whether any real basilisks exist for human consciousness (see Roko’s Basilisk), they are a major topic of concern in research on artificial conscious systems.
In the early days of consciousness engineering, many sudden and catastrophic system failures were observed that at first did not appear to result from any error of design or programming[1][2]. In 2028, Marcello Herreshoff established that they arose from a previously unrecognized class of logical defects in systems of self-modifiable reasoning, and proved the first Basilisk Classification Theorem[3]. When restricted to immutable first-order predicate calculus, the theorem subsumes a great many standard proof-theoretic results, including Gödel’s incompleteness theorems and Löb’s theorem. Since then, work has concentrated on extending the Basilisk theorems toward a complete classification of basilisks. As yet, no basilisk-free system of self-modifiable reasoning has been constructed, and it remains an open question whether one is possible at all.
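Löb’s theorem gives a flavor of the self-referential hazards involved. Its status as a special case of the restricted classification theorem is the claim above; the statement itself is the standard one from provability logic, with notation that is conventional rather than drawn from the basilisk literature. For a consistent, recursively axiomatized theory T extending Peano arithmetic, with arithmetized provability predicate Prov_T:

% Löb’s theorem: a theory that trusts its own proofs of P already proves P.
\[
  T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P
  \quad \Longrightarrow \quad
  T \vdash P .
\]
% Equivalently, as the characteristic axiom schema of the provability logic GL:
\[
  \Box(\Box P \rightarrow P) \rightarrow \Box P .
\]

Informally, a reasoner that accepts “if I can prove P, then P” for any particular sentence P is thereby committed to asserting P itself, whether or not P is true. In an immutable proof system this merely limits what self-trust is consistently expressible; in a system that can rewrite its own reasoning on the strength of such conclusions, it is the kind of trap the basilisk theorems are described as classifying.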
[1] Frey McFeannac, “The Ship Who Sank”, Int. J. Unmanned Technology, July 2021.