Why is Roko’s Basilisk any more or any less of a threat than the infinitely many other hypothetically possible scenarios with infinitely many other (good and bad) outcomes?
The idea is that an FAI built on timeless decision theory might automatically behave that way. There’s also Eliezer’s conjecture that any working FAI has to be built on timeless decision theory.