Even if the AGI is not told to hold back (e.g., it is simply told to compute as many digits of Pi as possible), I consider it a far-fetched assumption that any AGI would intrinsically care to take over the universe as fast as possible in order to compute as many digits of Pi as possible. Sure, if all of that is presupposed, then it will happen, but I don't see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but that are given simple goals, will in my opinion just bob up and down as slowly as possible.
It seems to be a kind-of irrelevant argument, since the stock market machines, query answering machines, etc. that humans actually build mostly try to perform their tasks as quickly as they can. There is not much idle thumb-twiddling in the real world of intelligent machines.
It doesn't much matter what machines that are not told to act quickly would do: we want machines to do things fast, and will build them that way.