I guess that depends on what level of AI we’re talking about. I mean, it’s true in a literal sense, but starting from a certain point they might approximate magic very well.
Insert analogy with humans and dogs here. Or a better example for this situation: think of a poker game. It’s got “laws”, both “man-made” (the rules) and “natural” (probability). Even if all the other players are champions, if one player can instantly compute exactly all the probabilities involved, clearly see every external physiological stress marker on the other players (while showing none), has an excellent understanding of human nature, knows all previous games of all players, and is smart enough to integrate all of that in real time, that player will basically always win, without “breaking the laws”.
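The “instantly compute exactly all the probabilities involved” part is the one piece we already know machines can do. As a minimal sketch (assuming a toy “highest card wins” game rather than real poker, with a hypothetical `win_probability` helper):

```python
import random

# Toy model: one hole card each, highest rank wins (NOT real poker rules).
RANKS = range(2, 15)  # 2..10, J=11, Q=12, K=13, A=14
SUITS = "cdhs"
DECK = [(r, s) for r in RANKS for s in SUITS]

def win_probability(my_card, trials=100_000, seed=0):
    """Monte Carlo estimate of beating one random opponent card."""
    rng = random.Random(seed)
    remaining = [c for c in DECK if c != my_card]
    wins = sum(my_card[0] > rng.choice(remaining)[0] for _ in range(trials))
    return wins / trials

print(round(win_probability((14, "c")), 3))  # ace of clubs: ~48/51 ≈ 0.941
```

Real hold’em equity calculators do the same thing over shared community cards and opponent hand ranges; the principle is identical, just with a bigger state space.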
if it is rare for a large company to have a 500 times absolute efficiency advantage for everything at the same time, then it is very unlikely an AI will have such an advantage; and there’s your answer. The same factors that make such advantages hard to accumulate today will likely prevent an AI from accumulating them in the future.
I’m not convinced. If the AI were subject to the same factors a large company is subject to today, we wouldn’t need AIs. Note that a large company is basically a composite agent made of people, plus the programs those people can write. That is, the class of inventive problems it can solve is the class that fits inside a single human brain, even if it can work on several such problems in parallel. Also, communication bandwidth between the thinking nodes (i.e., the humans) is far worse than the bandwidth inside a brain, and those nodes all have interests of their own that can diverge sharply from those of the company itself.
Basically, saying that an AGI is limited by the same factors as a large company is a bit like saying that a human is limited by the same factors as a powerful pack of chimps. And yet, if they manage to survive an initial preparation period, a human can pretty much “conquer” any pack of chimps they want to. (E.g., capture them, kill them, or cut down trees and build a house with a moat.)
If you think about it, in a way, chimps (or Hominoidea in general) already had their singularity, and they have no idea what’s going on whenever we’re involved.
You are proposing that AIs are magic genies. Take your poker example. While a computer program can certainly quickly calculate all the probabilities involved, and can probably develop a reasonable strategy for bluffing, that’s as far as our knowledge goes.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players or have an excellent understanding of human nature. How is a computer going to do this? Humans can’t. Humans can’t predict the behavior of dogs or chimpanzees and they’re operating on a level way below ours.
It’s not enough to say “But of course the AI will figure this out. It’s smarter than us, so it will figure out this thing that eludes all humans.” Show me how it’s going to do all these things, and then you’re treating the issue seriously. Otherwise you’re just assigning it magic powers by fiat.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players
See, for one example of a stress marker, the video-magnification work that makes a person’s pulse visible in ordinary footage. That’s an order of magnitude above noticing blushes. Dogs have a much better sense of smell, and technology exists to match that. You could probably detect the pulse of every person in a noisy room with nothing but an array of sufficiently accurate microphones.
Note that human intellect was sufficient to discover the technique, and the technique is powerful enough to let unaided human senses see the movements directly; you don’t even need to examine Fourier transforms and the like.
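The core of pulse detection from video can be sketched as a band-limited spectral peak search. This is a toy illustration, not the actual magnification algorithm; the ~0.7–4 Hz heart-rate band, the synthetic trace, and the `estimate_pulse_hz` helper are all my assumptions:

```python
import numpy as np

# Toy sketch: the "signal" is a 1-D intensity trace, e.g. the average
# brightness of a face region per video frame. A resting heart rate
# lives roughly in 0.7-4 Hz (42-240 bpm), so we look for the strongest
# frequency in that band.
def estimate_pulse_hz(trace, fs, low=0.7, high=4.0):
    """Return the dominant frequency (Hz) within the heart-rate band."""
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    return float(freqs[band][np.argmax(spectrum[band])])

# Synthetic demo: a faint 1.2 Hz pulse (72 bpm) buried in noise,
# sampled at a typical 30 fps video frame rate.
fs = 30.0
t = np.arange(0, 20, 1 / fs)  # 20 seconds of "video"
rng = np.random.default_rng(0)
trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)
print(estimate_pulse_hz(trace, fs))  # close to 1.2
```

The actual video-magnification work goes further: it amplifies the band-passed variation and re-renders it into the frames, which is what lets human eyes see the motion directly.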
Humans can’t predict the behavior of dogs or chimpanzees and they’re operating on a level way below ours.
I can’t and you can’t. But dog and chimpanzee experts can predict lots of things I couldn’t. And experts on human behavior can predict lots of things about humans that might seem impossible to non-trained humans. Psychiatrists and psychologists can often deduce, with decent confidence, lots of things from seemingly innocuous facts, despite whatever mess their discipline might be in as a whole. Sociopaths can often manipulate people despite (allegedly) not feeling the emotions they manipulate. Salesmen are often vilified for selling buyers things they don’t want, and the fact that there are consistently better and worse salesmen indicates that it’s not just luck. Hell, I can predict lots of things about people I know well despite not being smarter than them.
Note that the hypothetical poker player (or whatever) doesn’t need to predict perfectly. They just need to do it much better than humans. And the fact that expert human poker players have been known to win poker tournaments without looking at their cards is evidence that even human-level prediction is hugely useful.
Hell, Eliezer allegedly got out of the box using only a text channel; he didn’t even have the luxury of looking at the person to judge the emotional effects of his messages.