Patrick, my quantum-key-encrypted supercomputer (assuming this is what is needed to build an AGI) is an intranet, not accessible by anyone outside the system. You could try to corrupt the employees, but that would be akin to trying to shop around a suitcase nuke: 9 out of 10 buyers are really CIA or whoever. Has a nuclear submarine ever been hacked? How will an AGI, even with the resources of the entire Multiverse, hack into a quantum-encrypted communications line (a laser and fibre optics)? It can’t.
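For what it’s worth, the tamper-evidence claim behind quantum key distribution can be illustrated with a toy simulation. A minimal sketch, assuming idealized photons and a naive intercept-resend eavesdropper (no real quantum hardware or library involved):

```python
# Toy BB84 sketch: idealized photons, no real quantum hardware or library.
# The point: an eavesdropper who measures photons in transit disturbs them,
# and the disturbance shows up as errors in the sifted key.
import random

N = 10_000  # photons sent

alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
eve_bases   = [random.choice("+x") for _ in range(N)]
bob_bases   = [random.choice("+x") for _ in range(N)]

def measure(bit, prep_basis, meas_basis):
    """Matching bases read the bit faithfully; mismatched bases give noise."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

# Eve intercepts every photon, measures it, and resends in her own basis.
eve_bits = [measure(b, pa, pe) for b, pa, pe in zip(alice_bits, alice_bases, eve_bases)]
bob_bits = [measure(b, pe, pb) for b, pe, pb in zip(eve_bits, eve_bases, bob_bases)]

# Sifting: keep only positions where Alice and Bob happened to agree on basis.
sifted = [(a, b) for a, b, pa, pb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if pa == pb]
errors = sum(a != b for a, b in sifted)
print(f"sifted-key error rate: {errors / len(sifted):.1%}")  # ~25% -> Eve is caught
```

With an intercept-resend attack, roughly a quarter of the sifted key comes out wrong, which is exactly how the endpoints would notice the eavesdropper.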
I’m trying to brainstorm exactly what physical infrastructures would suffice to make an AGI impotent, assuming the long term. For instance, put all protein products in a long queue with neutron bombs nearby and inspect every product, protein by protein... just neutron-bomb all protein products if an anomaly is detected. Same for the 2050 world’s computer infrastructure. Have computers all wired to self-destruct, with backups in a bomb shelter. If the antivirus program (which might not even be necessary if quantum computers are ubiquitous) detects an anomaly, there go all the computers.
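To make the trigger concrete: a minimal sketch of what “antivirus detects an anomaly” might reduce to, assuming a whitelist of known-good binaries. The whitelist entry, the scan root, and trigger() are my own hypothetical stand-ins:

```python
# Toy sketch of the "antivirus trips the self-destruct" idea: flag any
# file whose SHA-256 digest is not on a whitelist of known-good builds.
import hashlib
from pathlib import Path

WHITELIST = {
    # SHA-256 of the empty file, as a stand-in for real known-good digests
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(root: Path) -> list[Path]:
    """Return every file under root whose digest is not whitelisted."""
    if not root.is_dir():
        return []
    return [p for p in root.rglob("*") if p.is_file() and sha256(p) not in WHITELIST]

def trigger(anomalies: list[Path]) -> None:
    # Stand-in for "there go all the computers"
    raise SystemExit(f"anomaly detected: {anomalies[:3]} ...")

if anomalies := scan(Path("/opt/lab")):  # hypothetical root to audit
    trigger(anomalies)
```

Note the design burden: the whitelist has to enumerate every legitimate binary in existence, because anything else trips the trigger.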
I’m smarter than a grizzly or Ebola, but I’m still probably dead against either. That disproves your argument. More importantly, drafting such defenses probably has a higher EV of societal good than the AGI case alone would imply, because humans will almost certainly try these sorts of attacks.
I’m not saying every defense will work, but please specifically disprove the defenses I’ve written. It might help e-security some day. There is an opportunity to do this here, since as far as I know these conversations aren’t happening in many other forums; but singularitarians are dropping the ball because of a political cognitive bias: they want to build their software, like it or not.
Another defense: once/if a science of AGI is established, determine the minimum run-time needed, on the most powerful computers not under surveillance, to make an AGI. Have all computers built to radioactively decay before that run-time is achieved.
Another run-time defense: don’t allow distributed computing applications to use beyond a certain number of nodes.
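The cap itself is trivial to state in code; a toy scheduler sketch follows, with entirely hypothetical names. Enforcement against fake identities and uncooperative jurisdictions is the actual problem:

```python
# Toy sketch of the node-cap defense: a scheduler that refuses to enlist
# more than MAX_NODES workers for any single distributed job.
MAX_NODES = 1_000  # hypothetical cap

class NodeCapExceeded(Exception):
    pass

class Scheduler:
    def __init__(self) -> None:
        self._jobs: dict[str, set[str]] = {}

    def enlist(self, job_id: str, node_id: str) -> None:
        nodes = self._jobs.setdefault(job_id, set())
        if node_id not in nodes and len(nodes) >= MAX_NODES:
            raise NodeCapExceeded(f"job {job_id!r} already holds {MAX_NODES} nodes")
        nodes.add(node_id)

sched = Scheduler()
for i in range(MAX_NODES):
    sched.enlist("protein-sim", f"node-{i}")   # fills the cap
# sched.enlist("protein-sim", "node-extra")    # would raise NodeCapExceeded
```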
I can understand dismissing the after-AGI defenses, but to categorically dismiss the pre-AGI defenses...
My thesis is that the computer hardware required for AGI is so advanced, that the technology of the day can ensure surveillence wins, if it is desired not to construct an AGI. Once you get beyond the cognitive bias that thought is computation, you start to appreciate how far into the future AGI is, and that the prime threat of this nature is from conventional AI programmes.
bambi, I don’t know anything about hacking culture, but I doubt kids need to read a decision-theory blog to learn what a logic bomb is (whatever that is). Posting specific software code, on the other hand...
please specifically disprove the defenses I’ve written.
Expense.
People will not pay for the extensive defenses you have suggested… at least not until it’s been proven necessary… i.e., when it’s already too late.
Even then they’ll bitch and moan about the inconvenience, and why wouldn’t you? A hair-trigger bomb on every computer on the planet, ready to go off the moment it “detects an anomaly”?
Have you any idea how many bugs there are in computer applications? Would you trust your life (you’ll die in the bomb too) to your computer not crashing due to some dodgy malware your kid downloaded while surfing for pron?
Even if it’s just on the computers that are running the AGI (and AGI programmers are almost as susceptible to malware), it would still be nigh-on-impossible to “detect an anomaly”.
What’s an anomaly? How do we determine it?
Any program that tried to examine its own code looking for an anomaly would have to simulate the running of the very code it was testing… thus creating the potential for it to actually become the anomalous program itself.
…it’s not actually possible to determine what will happen in a program any other way (and even then I’d be highly dubious).
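This is essentially the halting problem / Rice’s theorem. A sketch of the standard diagonalization, with the perfect detector explicitly hypothetical:

```python
# Why a perfect "anomaly detector" can't exist: the halting problem /
# Rice's theorem in miniature. Suppose is_anomalous(source) were total
# and always correct; the program text below then behaves in whichever
# way contradicts the detector's verdict on it.

def is_anomalous(source: str) -> bool:
    # Hypothetical perfect detector; no total, always-correct one can
    # exist. The stub only makes this sketch runnable.
    raise NotImplementedError("no such detector can exist")

GOTCHA = """
if is_anomalous(GOTCHA):
    pass                      # judged anomalous -> act innocently
else:
    do_something_anomalous()  # judged innocent -> misbehave (hypothetical)
"""

# Whichever verdict is_anomalous(GOTCHA) returns is contradicted by
# GOTCHA's own behaviour, so the detector cannot be both total and
# correct. Simulation (actually running the code) is the only general
# probe, and that is exactly the regress described above.
```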
So… nice try, but sadly not really feasible to implement.
:)
Upvoted for this line: “I’m smarter than a grizzly or Ebola, but I’m still probably dead against either.”
It’s very important to remember: Intelligence is a lot—but it’s not everything.
A very simple algorithm will let you protect yourself from any wild animals, viruses, etc using nothing but your intelligence… just use it to figure out how not to encounter them!