Humans built Google. They did it by clubbing together. This seems like a powerful approach.
I meant powerful in the Eliezer sense of “ability to achieve its goals”. Google is just a manifestation of the power of the humans that built it (and maintain it), and of the links that webmasters have put up and crafted to be Google-friendly; it has no goals of its own.
Until we have built a common vocabulary (that cuts the world at its joints), most conversations will unfortunately be pointless.
No. If you take that approach, then you’ll just be saying that about every GAI, no matter how powerful. Google’s engineers cannot solve the problems that Google solves. They can’t even hold the problem (which includes links between millions of websites) in their heads. They CAN hold in their heads the problem of creating something that can solve it. Within Google’s domain, humans aren’t even players.
Even allowing a human the time, notepaper and procedural knowledge to do what Google does, that’s not a human solving the same problem; that’s a human implementing the abstract computation that is Google.
Humans can and do generate optimization processes that are more powerful than they are.
This may sound harsher than I intend: I see your proposed law as a privileged hypothesis, offered without evidence, defending the notion that humans must somehow be special.
To spell things out—a problem with a law saying that “a system can’t develop a system more powerful than itself by anything other than chance” is that it is pretty easy to do exactly that.
Two humans can (fairly simply) make more humans, and then large groups of humans can have considerably more power than the original pair of humans did.
For example, no human can remember the whole internet and answer questions about its content—but a bunch of humans and their artefacts can do just that.
This is an example of synergy—the power of collective intelligence.
I can solve more problems when I have a hammer than when I don’t; I can be synergistic with a hammer, so you don’t need other people for synergy. This just means that power depends upon the environment.
Let’s define the power P of a system S as a function P(S, E), with E being the environment. So when I talk about one system being more powerful than another, I mean that P(S1, E) > P(S2, E) for all E, or at least for the vast majority of environments, or on average. It is not sufficient to show a single case.
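To make that comparison concrete, here is a minimal sketch in Python of the two criteria, assuming we had some hypothetical scoring function power(S, E) that measures how well a system achieves its goals in a given environment; the names here are illustrative, not a real API.

```python
# A minimal sketch, assuming a hypothetical scoring function power(system, environment)
# that measures how well a system achieves its goals in that environment.
from statistics import mean
from typing import Callable, Iterable

def strictly_more_powerful(power: Callable[[object, object], float],
                           s1: object, s2: object,
                           environments: Iterable[object]) -> bool:
    """The strong criterion: P(S1, E) > P(S2, E) for every environment sampled."""
    return all(power(s1, e) > power(s2, e) for e in environments)

def more_powerful_on_average(power: Callable[[object, object], float],
                             s1: object, s2: object,
                             environments: Iterable[object]) -> bool:
    """The weaker criterion: higher expected power across the sampled environments."""
    envs = list(environments)
    return (mean(power(s1, e) for e in envs) >
            mean(power(s2, e) for e in envs))
```

Either test depends on sampling environments broadly; a single favourable E establishes nothing, which is the point being made above.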
I don’t think that organizations of humans have a coherent goal structure, so they don’t have a well-defined power.
Why don’t you think organizations have “coherent goals”? They certainly claim to have them. For instance, Google claims it wants “to organize the world’s information and make it universally accessible and useful”. Its actions seem to be roughly consistent with that. What is the problem?
They really don’t maximise that value… you’d get closer to the mark if you added in words like profit and executive pay.
But the main reason I don’t think they have a coherent goal is that they may evaporate tomorrow. If the goal-seeking agents that make them up decide there is somewhere better to fulfill their goals, they can just up and leave, and the goal does not get fulfilled. An organisation has to balance the varied goals of the agents inside it (which constantly change as it takes on new people) with its business goals, if it is to survive. Sometimes no one making up an organisation wants it to survive.
Organisms die as well as organisations. That doesn’t mean they are not goal-directed.
Nor do organisms act entirely harmoniously. Every animal hosts millions of bacterial symbionts, which have their own reproductive ends. Animal bodies are infected with pathogens, which make them sneeze, cough and scratch. Also, animals are uneasy coalitions of genes—some of which (e.g. segregation distorters) want to do things other than helping the organism reproduce. So, if you rule out companies on those grounds, organisms seem unlikely to qualify either.
In practice, both organisms and companies are harmonious enough for goal-seeking models to work as reasonably good predictors of their behaviour.
If I want to predict what a company will do, I look at the board of directors, CEO, upper management and powerful unions, not the previous actions of the company. This lets me predict whether they will refocus the company on something new or sell it off to be gutted.
Companies are not the only agents which can be so dissected. You could similarly examine the brain of an animal—or examine the source code of a robot.
However, treating agents in a behavioural manner—as input-process-output black boxes—is a pretty conventional method of analysing them.
Sure, it has some disadvantages. If an organism is under attack by a pathogen and is near to death, its previous behaviour may not be an accurate predictor of its future actions. However, that is not usually the case—and there are corresponding advantages.
For example, you might not have access to the organism’s internal state—in which case a “black box” analysis would be attractive.
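For illustration only, here is a toy sketch in Python contrasting the two stances—predicting an agent from its observed behaviour alone versus inspecting its internals; ToyAgent and its hidden goal are made-up names, not anyone’s real model of organisms or companies.

```python
# Purely illustrative toy example; ToyAgent and its hidden goal are assumptions,
# not a real model of organisms or companies.

class ToyAgent:
    """An agent whose behaviour is driven by hidden internal state."""
    def __init__(self, goal: float):
        self._goal = goal          # internal state we may not have access to

    def act(self, observation: float) -> float:
        return self._goal          # simply outputs its goal, regardless of the observation

def black_box_prediction(past_outputs: list[float]) -> float:
    """Behavioural stance: predict the next action from observed behaviour alone."""
    return sum(past_outputs) / len(past_outputs)

def white_box_prediction(agent: ToyAgent) -> float:
    """'Examine the source code' stance: predict by reading the internal state."""
    return agent._goal
```

When the internals are unavailable, only the first kind of prediction is possible; when they are available, the second is usually sharper—which is roughly the trade-off described above.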
Anyway, your objections don’t look too serious to me. Companies typically behave in a highly goal-directed manner—broadly similar to the way in which organisms behave—and for similar reasons.
Yes, yes. It is all a continuum. That doesn’t change the fact that I don’t use the intentional stance on businesses. I view them either as (a) designed, inflexible systems that provide me services in exchange for money, or (b) systems modified by human actors for their own interests.
I’ll very rarely say “Google is trying to do X” or that “Microsoft knows Y”. I do say such things of humans and animals; in that respect I see a firm dividing line between them and organisations in the type of system I treat them as.
I think the human brain has a whole bunch of circuitry designed to understand other agents—as modified versions of yourself.
That circuitry can also be pushed into use for understanding the behaviour of organisations, companies and governments—since those systems have enough “agency” to make the analogy more fruitful than confusing.
My take on the issue is that this typically results in more insight for less effort.
Critics might say this is anthropomorphism—but IMO, it pays.