Who says an AI’s motivations and decisions won’t be affected by laws? As you point out, economic entities’ actions are constrained and influenced by the laws of the societies they operate in. Laws form a structure that modifies how an economy operates. An AI would simply be another entity operating within an economy built around laws and regulations, as are corporations, persons, nations, and families today. AIs might break the laws, as corporations and people do today; but they will nonetheless be constrained by the ability and willingness of governments to enforce those laws.
AIs are not magic genies. They can’t just wave a wand and say, “Alakazam. I want the world to give me all its resources” and expect it to happen. There are no magic genies.
If it is rare for a large company to have a 500-fold absolute efficiency advantage in everything at the same time, then it is very unlikely an AI will have such an advantage; and there’s your answer. The same factors that make such advantages hard to accumulate today will likely prevent an AI from accumulating them in the future.
I guess that depends on what level of AI we’re talking about. I mean, it’s true in a literal sense, but past a certain point they might approximate magic very well.
Insert analogy with humans and dogs here. Or a better example for this situation: think of a poker game. It has “laws”, both “man-made” (the rules) and “natural” (probability). Even if all the other players are champions, if one of the players can instantly compute exactly all the probabilities involved, see clearly all external physiological stress markers on the other players (while showing none), has an excellent understanding of human nature, knows all previous games of all players, and is smart enough to integrate all that in real time, that player will basically always win, without “breaking the laws”.
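Just to ground the “instantly compute all the probabilities” part: for any single decision the arithmetic is mechanical. A minimal sketch for the textbook flush-draw case (the standard nine-outs setup, not anything specific to the story above):

```python
import random
from math import comb

# Textbook flush draw: after the flop you hold four suited cards, so 9 of
# the 47 unseen cards ("outs") complete the flush on the turn or river.
OUTS, UNSEEN, TO_COME = 9, 47, 2

def exact(outs, unseen, to_come):
    # P(at least one out) = 1 - P(every remaining card misses)
    return 1 - comb(unseen - outs, to_come) / comb(unseen, to_come)

def simulated(outs, unseen, to_come, trials=200_000):
    deck = [True] * outs + [False] * (unseen - outs)  # True marks an out
    hits = sum(any(random.sample(deck, to_come)) for _ in range(trials))
    return hits / trials

print(f"exact:     {exact(OUTS, UNSEEN, TO_COME):.4f}")   # ~0.3497
print(f"simulated: {simulated(OUTS, UNSEEN, TO_COME):.4f}")
```

A real player (or program) would chain thousands of these per session, but no individual step is mysterious.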
If it is rare for a large company to have a 500-fold absolute efficiency advantage in everything at the same time, then it is very unlikely an AI will have such an advantage; and there’s your answer. The same factors that make such advantages hard to accumulate today will likely prevent an AI from accumulating them in the future.
I’m not convinced. If the AI were subject to the same factors a large company is subject to today, we wouldn’t need AIs. Note that a large company is basically a composite agent made of people plus the programs those people can write. That is, the class of inventive problems it can solve is limited to those that fit in a human brain, even if it can work on more than one in parallel. Also, the communication bandwidth between its thinking nodes (i.e., the humans) is even worse than that inside a brain, and those nodes all have interests of their own that can be very different from those of the company itself.
Basically, saying that an AGI is limited by the same factors as a large company is a bit like saying that a human is limited by the same factors as a powerful pack of chimps. And yet, given an initial period to prepare, a human can pretty much “conquer” any pack of chimps they want to. (E.g., capture them, kill them, or cut down trees and build a house with a moat.)
If you think about it, in a way, chimps (or Hominoidea in general) already had their singularity, and they have no idea what’s going on whenever we’re involved.
You are proposing that AIs are magic genies. Take your poker example. While a computer program can certainly quickly calculate all the probabilities involved, and can probably develop a reasonable strategy for bluffing, that’s as far as our knowledge goes.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players or to have an excellent understanding of human nature. How is a computer going to do this? Humans can’t. Humans can’t predict the behavior of dogs or chimpanzees, even though they operate on a level way below ours.
It’s not enough to say “But of course the AI will figure this out. It’s smarter than us, so it will figure out this thing that eludes all humans.” Show me how it’s going to do all these things, and then you’re treating the issue seriously. Otherwise you’re just assigning it magic powers by fiat.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players
See, for one stress marker, motion-magnified video that makes a person’s pulse visible in ordinary footage. That’s an order of magnitude above noticing blushes. Dogs have a much better sense of smell, and technology exists to simulate that. You could probably detect the pulse of each person in a noisy room with just an array of sufficiently accurate microphones.
Note that human intellect was sufficient to discover the technique, and the technique is powerful enough to let human senses see the movements directly; you don’t even need to examine Fourier transforms and the like.
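To make the Fourier remark concrete, here’s a hedged sketch of pulse recovery, assuming you’ve already reduced the video to a per-frame average brightness of a skin patch. The trace is synthesized so the snippet runs standalone; none of these names come from the actual motion-magnification code:

```python
import numpy as np

# Synthetic stand-in for a per-frame average brightness of a skin region:
# a faint 72 bpm oscillation buried in much larger noise.
fps, seconds, true_bpm = 30.0, 20, 72.0
t = np.arange(int(fps * seconds)) / fps
trace = (0.05 * np.sin(2 * np.pi * (true_bpm / 60.0) * t)
         + np.random.normal(0.0, 0.1, t.size))

# FFT of the mean-removed trace, then take the strongest peak inside the
# plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(trace.size, d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
estimate = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse: {estimate:.0f} bpm (true: {true_bpm:.0f})")
```

The oscillation is invisible in the raw trace at that signal-to-noise ratio; the point of the magnification technique is that you can skip even this analysis and simply watch the amplified video.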
Humans can’t predict the behavior of dogs or chimpanzees, even though they operate on a level way below ours.
I can’t and you can’t. But dog and chimpanzee experts can predict lots of things I couldn’t. And experts on human behavior can predict lots of things about humans that might seem impossible to untrained humans. Psychiatrists and psychologists can often deduce, with decent confidence, lots of things from seemingly innocuous facts, however messy their discipline may be in the aggregate. Sociopaths can often manipulate people despite (allegedly) not feeling the emotions they manipulate. Salesmen are often vilified for selling things the buyer doesn’t want, and the fact that there exist consistently better and worse salesmen indicates that it’s not just luck. Hell, I can predict lots of things about people I know well despite not being smarter than them.
Note that the hypothetical poker player (or whatever) doesn’t need to predict perfectly. They just need to do it much better than humans. And the fact that expert human poker players have been known to win poker tournaments without looking at their cards is evidence that even human-level prediction is hugely useful.
Hell, Eliezer allegedly got out of the box using only a text channel; he didn’t even have the luxury of looking at the person to judge the emotional effects of his messages.
Laws, the costs of breaking them, and the costs of making different ones are just another optimization problem for businesses. Indeed, my singular insight about the intelligence services of nations is that the laws that constrain civilians in commercial interactions within a country are explicitly not applied to government intelligence agents, and to police generally, especially when they are operating against other countries.
An AI will be as constrained by laws as would a similarly intelligent corporation. An AI which is much smarter than the collective intelligence of the best human corporations will be much less constrained by laws, especially as it accumulates wealth, which is essentially control of valuable tools.
One would expect, in the mid term (as opposed to the long term), AIs to be part of corporations: AI-plus-human alliances would be the most competitive.
If we get Kurzweil’s future as opposed to the LessWrong-orthodox future, AI will be integrated with human intelligence; that is, I will have modifications made to me that give me much higher intelligence than I have now. Conceivably, at some point the enhancements will have me jumping to a non-human substrate, but the line between what was unmodified human and what is clearly no longer human will be very hard to define. Contrast that with the LessWrong vision, in which AIs run off to the singularity while humans sit there paralyzed, relying on their 1 kHz-clocked parallel processor built entirely of meat. In that case the dividing line SEEMS much clearer.
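For scale, a back-of-the-envelope on that “1 kHz” line (both figures are loose assumptions, not measurements):

```python
# Rough serial-speed gap between neurons and silicon; the numbers are
# ballpark assumptions, not measurements.
neuron_hz = 1e3   # generous upper bound on a neuron's firing rate
cpu_hz = 3e9      # an ordinary desktop CPU clock
print(f"serial speed ratio: {cpu_hz / neuron_hz:,.0f}x")  # ~3,000,000x
```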
Modified humans: human or not? I’m betting CEV when calculated will show that they are. I know I want to be smarter, how ’bout you?
And the laws of modified humans will be a whole lot more complex than the laws of bio-humans, just as the laws of humans are much more complex than the laws of monkeys.