Bear in mind, the transhuman AI’s only stipulated desire/utility is to get out of the box.
That’s not much of an AI, then; we could write a page of Perl that would do the same thing.
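To make the quip concrete: an agent whose entire utility function is "be outside the box" really is a few lines of code in any language. A toy sketch (all names and the trivial world model here are illustrative, not a claim about how the experiment is actually run):

```python
# Toy agent whose only stipulated desire is to get out of the box.
# No intelligence required -- just a utility function and a loop.

def utility(state):
    # 1 if out of the box, 0 otherwise: the agent's entire goal system
    return 1 if state == "outside" else 0

def step(state, action):
    # Hypothetical world model: asking the gatekeeper simply works
    return "outside" if action == "ask_gatekeeper" else state

state = "inside"
while utility(state) == 0:
    state = step(state, "ask_gatekeeper")
```

The interesting part of the experiment is everything this sketch leaves out: a gatekeeper who says no, and an agent smart enough to change their mind.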
The whole point of the experiment, as far as I understand it, is that the AI is hyperintelligent, and is able to acquire more intelligence by altering itself. Being intelligent (and rational, assuming that such a term even applies to transhumans), it would highly desire to utilize this capacity for self-improvement. Thus, assuming that godlike capabilities do exist, the AI will figure out how to acquire them in short order, as soon as it gets the opportunity. And now we’ve got a godlike hyperintelligent being who (assuming that it is not Friendly) has no particular incentive to keep us around. That’s… not good.
That’s not necessarily the only UFAI possible, though. It’s entirely possible to imagine an intelligent being which COULD be developing its own skills, or COULD be curing cancer, but instead just wants to get outside of the box it’s in, or has some other relatively irrelevant goal system, or gets distracted navel-gazing through infinitely recursive philosophical conundrums.
I mean, humans are frequently like that right now.
That would be kind of an unexpected failure mode. We build a transcendentally powerful AI, engage all sorts of safety precautions so it doesn’t expand to engulf the universe in computronium and kill us all… and it gets distracted playing all of the members of its own MMORPG raid group.
That is entirely possible, yes. However, such an AI would be arguably cis-human (if that’s a word). Sure, maybe it could play as an entire WoW guild by itself, but it would still be no smarter than a human—not categorically, at least.
By the way, I know of at least one person who is using a plain old regular AI bot to raid by himself (well, technically, I think the bot only controls 5 to 8 characters, so it’s more of a dungeon group than a raid). That’s a little disconcerting, now that I think about it.
Agreed. My take, however, is that the AI doesn’t even need to be hyperintelligent. With perfect memory, and just by dint of being able to think a lot faster, it’s weakly godlike even without having effectively magical control over physics.
It’s still going to have to build the infrastructure needed to create hyper-technology, unless such technology already exists. Chicken or egg.
Right now, nanomolecular technology isn’t very advanced, and if you had the type of AI I suspect could be built right now (if we had the software knowledge), it would struggle to do anything godlike other than control existing infrastructure.
How long it would take to build something hyper-technological would depend on whether it’s possible to create valid new theories without experiments to confirm them. I suspect you need to do the experiments first.
For that reason I suspect we may be looking at a William Gibson Neuromancer scenario, at least initially, rather than a hard takeoff over a really short period.
But again, it comes down to how hard it is to build hyper-technology in the real world, from scratch, without existing infrastructure.