So, you want to limit the AI’s intellect to genius level, so that if something did go wrong, the AI would not be unstoppable.
Thinking of AGI as though it were a human genius mistakes its nature. There are likely some tasks at which an AGI is worse than a human genius and others at which it’s better.
An AGI also has the advantage of being able to duplicate instances. If you could create 1,000,000 copies of each of the 10 best computer security researchers, that army could likely hack its way into every computer system available.
> An AGI also has the advantage of being able to duplicate instances. If you could create 1,000,000 copies of each of the 10 best computer security researchers, that army could likely hack its way into every computer system available.
Just a little nitpick here, but your last statement is unlikely. There are actual computer systems that are completely unhackable even by any team of skilled security researchers (non-networked devices, etc).
Of course, the generalized version of your point is much more plausible: the likely forms of AI takeover involve a diverse set of skills, including subtle and complex strategic social and cultural manipulation. In that respect, human brains are more hackable than many computer systems.
> Just a little nitpick here, but your last statement is unlikely.
I did use the word “available” as a qualifier: some air-gapped computers don’t qualify as available. But 100% coverage isn’t required.
I consider it politically impossible to shut down all non-air-gapped computers. Taking away computers would create civil war, and in a civil war computers are useful for winning conflicts. The AGI just has to stay alive and self-improve.
For an AGI, hacking a great number of computers allows the AGI to use those computers to
> Just a little nitpick here, but your last statement is unlikely. There are actual computer systems that are completely unhackable even by any team of skilled security researchers (non-networked devices, etc).
Only if you limit ‘hacking’ to network attacks and exclude things like social engineering, blackmail, and physical attack or intrusion.
Well, that’s another limitation you’d have to put on it. I get that the AI would work differently from a human. And now that I think of it, every year or so when you reset it, you could adjust it so it’s better suited to the task at hand. Thanks for the reply!