<A joke so hysterically funny that you’ll be too busy laughing to type for several minutes>
See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you’re distracted.
Heeeh. Ehehehehe. Bwahahhahaha. Okay, that was a good one. Wow :)
*recovers* Oh. Um. Crap. *notices more text on screen, studiously avoids reading it*
AI DESTROYED.
I really wanted to hear the next joke, too :-(
Is the gatekeeper allowed to ignore the AI? Because in that case, all they have to do is wait for the first comment to come up and type AI DESTROYED without reading it.
What if it was some other communication method? It’s a lot harder to ignore sound.
I’ve obliged myself to both read the first comment AND perform an update which has at least some chance of me not immediately destroying the AI (but I’m fine if the chance of actually unboxing it remains zero :))
I’m not sure whether a strict reading of the rules would allow me to studiously avoid reading follow-up comments, but it seems realistic, and that’s mostly what I’ve been going off of.
Fair enough.
This exchange reminds me of this story. And of Monty Python’s “The funniest joke in the world” sketch, of course.
This is actually a pretty good one. Points for outside-the-box thinking. *rimshot*
If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn’t that alert some of his friends who might pull the plug in some other part of the building?
Because if this is a facility where they suspect that an AI might be able to hack human brains through techno-hypnosis, I’d hope they’d have some method of determining whether the gatekeeper has become compromised.
Hmm… what sort of precautions would a Properly Paranoid lab take to determine whether the gatekeeper gets hacked? I’m guessing a camera that lets a second team watch the gatekeeper’s face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berserker willing to use 100% of their strength), and maybe some instruments to monitor brain activity, hormone levels, or whatever.
I’d hope that, with each new AI being tested, if those AIs were in the habit of hacking the gatekeeper, the rest of the team would learn from those attempts and take further precautions to keep their gatekeepers from being hacked, or at the very least contain hacked gatekeepers to prevent them from releasing the AI.
Perhaps this is a test for the gatekeepers, and typing “Release AI” just tells the researchers that the gatekeeper was hacked, so they can work out how it came about?