I agree that many beliefs about basilisks are ridiculous, especially beliefs about which decisions are the correct ones to make in response to various scenarios. But it would be a mistake not to believe that an AI-creating civilisation could have a particular failure mode that results in the scenarios referred to as Roko's Basilisk. It isn't even an especially remarkable or unusual failure mode: just a particular instance of extorting those vulnerable to extortion in the name of "the greater good".
"Arguing that an idea could do harm by merely occupying space in a brain is a tremendous discredit to humanity."
The mistake here is 'merely'. I can think of reasons why I would not want each of my covert assets to know all the other assets' false identities. The mere presence of that information in their heads could cause (or allow) other agents to do harm. The basilisk case isn't particularly different in its means of action.