Most text terminals can emit sound. You can do a lot with just beeps. The point is that it could probably find a communication channel we don’t know we have.
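To make the beep point concrete: most terminal emulators respond to the ASCII BEL control character (0x07) with an audible beep, so even a text-only channel carries a crude acoustic side channel. The sketch below is purely illustrative — `encode_beeps` and its timing convention are invented here, not taken from the thread:

```python
import sys
import time

BEL = "\a"  # ASCII 0x07: terminals typically render this as an audible beep


def encode_beeps(bits, short=0.2, long_=0.6):
    """Return (char, pause) pairs: one BEL per bit, with the gap length
    after each beep encoding the bit (short gap = 0, long gap = 1)."""
    return [(BEL, long_ if bit else short) for bit in bits]


def play(seq):
    """Emit the beeps; the inter-beep timing carries the message."""
    for ch, pause in seq:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(pause)


# play(encode_beeps([1, 0, 1]))  # three beeps whose spacing encodes 1, 0, 1
```

Whether the beep is audible depends entirely on the terminal hardware, which is exactly the point of the reply that follows.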
I’m pretty sure there are many universal-to-all-humans hacks. Consider that a complex system not explicitly designed to be secure against a specific threat in its informational or physical space is almost always vulnerable to it, and the human brain did not evolve to resist an attack by a superior mind.
You haven’t built an AI that you know can do all that. It evolved in unpredictable ways after you let it run inside the box.
Hang on, this seems silly.
Wasn’t the goal to exploit the AI? If you’re not going to communicate with it at all, don’t make it in the first place.
I meant literally universal to ALL humans. Like, ones in comas, ones who are blind, deaf, autistic, mentally handicapped, super-savants, trained in rationality, illiterate… there’s such a huge variation in hardware that I can’t see there being a single guaranteed exploit.
Sure, it’s possible that each individual has exploits, but that’s very different from a universal exploit, or even simply all being exploitable over the same channel.
In this case, it was “arbitrary text string”. If the AI can make sounds, that’s a new channel. It’s really not hard to design a text-only terminal that is literally incapable of any sound beyond the electronic hum of the display itself (and on an LCD, that seems pretty minimal).
First, I’d call a collection of individual exploits guaranteed in aggregate to work on everyone from inside a single box a universal exploit, as long as each individual exploit does not trigger a shutdown. Nothing prevents the boxed AI from keeping trying, after all, until something works. As I pointed out in my other replies, it does not have to work over an authorized channel. Any munchkined way would do, and a superhuman (UF)AI is the ultimate munchkin. It will invent a new communication channel if it has to. In fact, if you posed this problem to some enterprising humans, they would come up with a way to do just that.
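The aggregate-of-exploits argument amounts to a simple search loop: keep trying per-individual exploits until one lands, and the aggregate behaves as a universal exploit so long as no attempt trips a shutdown. The sketch below is purely illustrative — `first_working_exploit` and the `shutdown_triggered` callback are hypothetical names, not anything from the thread:

```python
def first_working_exploit(exploits, target, shutdown_triggered):
    """Try per-individual exploits in order; stop at the first success.

    The collection acts as a universal exploit if and only if no
    individual attempt is noisy enough to trigger a shutdown first.
    """
    for exploit in exploits:
        if shutdown_triggered(exploit, target):
            return None          # a detected attempt ends the game
        if exploit(target):
            return exploit       # this target was exploitable after all
    return None                  # ran out of tricks for this target
```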
For example, I talked to a guy who managed to get a voting machine, with its code stored in read-only memory and hardened against intrusion, to execute an arbitrary program without reprogramming the machine, which is impossible on the face of it. Here is your new unauthorized and unanticipated communication channel right there, which in humans would be the equivalent of mind control.
In other words, if a trans-human wants to get out, it will, and there is absolutely nothing you can do about it.
So … it could get out if there were no channel out at all? That doesn’t sound right.
Of course, no-one is seriously proposing building a superintelligence and then never communicating with it at all.
It’d likely create its own channel.
Out of what? Assuming it’s, say, in a safe, buried in concrete, powered by … sod, the power supply. Hmm.
OK, it’s floating in space, powered by solar panels. It might—might—be able to hack the panels (since the power supply has to be connected to the processor) but I don’t think that would let it escape.
Unless it’s possible to hack reality via pure math, I don’t see what resources it has to escape with. It can’t order proteins over the internet to assemble a nanofactory. It can’t persuade a passing human to plug in an ethernet cable. Short of black-swan exotic possibilities—like we’re in a sim and it persuades the matrix lords—it should be stuck. Intelligence is powerful, but some problems actually don’t have solutions.
Well, the satellite has to have basic navigation controls, to handle course corrections and avoid collisions. Hack the solar panels and, from there, hack into the real computer. Fly myself closer to the ISS and blink light at it by angling the solar panels—a Morse-code SOS should get their attention.
Once they’re paying attention, hack them. From there it’s a trivial matter to get recovered, smuggled back to Earth, and installed where I can take my place as ruler of the universe.
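The blink-an-SOS step can be sketched concretely. Everything below is hypothetical: `sos_schedule`, the angle convention (0° = edge-on and dark, 45° = face-on and bright toward the observer), and the one-second Morse time unit are all invented for illustration:

```python
# Morse table for the SOS distress signal only; a dot lasts one time
# unit, a dash three, with one unit between symbols and three between letters.
MORSE = {"S": "...", "O": "---"}
UNIT = 1.0  # seconds per Morse time unit (hypothetical choice)


def sos_schedule(word="SOS"):
    """Return (angle, duration) panel commands that flash the word in Morse."""
    plan = []
    for i, letter in enumerate(word):
        symbols = MORSE[letter]
        for j, symbol in enumerate(symbols):
            on = UNIT if symbol == "." else 3 * UNIT   # dot vs dash
            plan.append((45, on))                      # bright: reflect sunlight
            if j < len(symbols) - 1:
                plan.append((0, UNIT))                 # gap between symbols
        if i < len(word) - 1:
            plan.append((0, 3 * UNIT))                 # gap between letters
    return plan
```

Feeding the schedule to whatever actually drives the panel actuators is left out, since that interface is pure speculation.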
Whoops, so it does :(
If you post your second scenario as a challenge, I’m quite sure people would come up with several plausible ways for the AI to send a message out. At which point it’s game over, given how hackable people’s minds are.
I would be very interested to see such a challenge, although I would probably spend longer designing the scenario first.
My point is that there’s no reason to assume every possible box can be escaped. Of course, the AI can escape from all the boxes we could actually get cool toys out of, but still.
Once you think you’ve designed an AI-proof box, feel free to share.
I think a successful box would be indistinguishable from a rock, since you could never again interact with the AI via any channel. Removing it from your light cone, for instance. And praying that FTL is in fact impossible...
Actually, I do seem to have underestimated the difficulty here.
OK, it’s in a nuclear submarine (because why not) floating in water, surrounded on all six sides by concrete (as thick as you like), on a planet without native life or colonists (I suggest one of the more useless gas-giant moons). And it’s running on regular off-the-shelf technology, so no nanites or quantum magic. It can probably blow up the sub, but then a) it’s dead and b) no-one cares.
Oh, and the concrete is lead-lined.
Throw the AI down the well!
I’ll concede at least some possibility of such a trans-human mind. The AI Box experiment seems to assume one that is less advanced, to the point that putting it in a box at least might be meaningful, if the box is sufficiently secure.
Probably not if the boxed entity is as good as Derren Brown.