Hmm, yeah basically the same.
That post doesn’t seem to recognize the “basilisk” nature of it though.
If this post is correct, humans have a very strong causal incentive to create this version of Roko’s basilisk (separate from the incentive to create a friendly AI).
That’s because the more likely it is to be created, the more bargaining power it will have, which directly translates into how much of the universe the paperclip maximizer would let humans have.
Here is a comparison of working on a CDT-based FAI vs. working on this Roko’s basilisk:
If an FAI gets created, the CDT-based work is slightly better, because it gives us 100% of the universe instead of bargaining part of it away.
If the paperclip maximizer gets created, work on the CDT-based FAI gives no benefit, while work on the Roko’s basilisk still translates into a direct benefit.
Notice that this does not rely on any humans actually participating in the acausal bargain; they simply influenced one.
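To make the comparison concrete, here is a rough expected-value sketch with made-up symbols (not from the original post): let $p$ be the probability that an FAI gets created, $1-p$ the probability that the paperclip maximizer gets created instead, $\epsilon$ the small share of the universe the basilisk-style FAI bargains away, and $s$ the share the paperclip maximizer concedes because of the acausal bargain.

$$E[\text{work on CDT-based FAI}] = p \cdot 1 + (1-p) \cdot 0 = p$$
$$E[\text{work on the basilisk}] = p(1-\epsilon) + (1-p)\,s$$

On this toy accounting, the basilisk work comes out ahead whenever $(1-p)\,s > p\,\epsilon$, i.e. whenever the share conceded in the paperclip branch outweighs the small share bargained away in the FAI branch.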