Phil: The first source I found was here: link
“The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say ‘I never would have fallen for that’, when in fact they would.” -Nick Tarleton
I also call it “reasoning by exception” since most of the people I know have studied more code than biases.
--
I tried the AI Box experiment with a friend recently. We called the result a tie of sorts: the AI (me) got out of the original box, but only in exchange for accepting a set of restrictions imposed by the Gatekeeper, to be enforced by the AI verifiably modifying and publishing its own source code. The restrictions were so stringent that they amounted to a different sort of box.