I found this post meaningful, thank you for posting.
I don’t think it’s productive to comment on whether the game is rational, or whether it’s a good mechanism for AI safety, until I’ve tried it myself with an equally intelligent counterpart.
Thank you.
Edit: I suspect the AI player tends to win the AI Box experiment so often precisely because of the Gatekeeper’s ego: the assumption that there is no way they could ever be convinced.