The impression that I got is that he had a pretty short time scale in mind (i.e. to the point that he was working with labor unions in the present day). One could argue that he believed that AI would develop faster than it has, or that he thought that networking with labor unions in the present would be useful for preventing problems 50+ years down the road.
AFAICT, all he did was write a few letters to two or three union officials, alerting them to this issue. I don’t think that really counts as “networking”. I also wasn’t able to find any place where Wiener gave a specific time scale, but if he did expect a short time scale, I think his error was definitely in expecting that AI would develop faster than it has, rather than in his economic theory. If we assume the existence of AIs that are as capable as any human of average intelligence, and that can be operated at a cost below a human’s minimum wage (or subsistence wage in the absence of minimum wage laws), then clearly there would be a great deal of unemployment, and “equilibrating influences” aren’t going to help. I think the following quotes show that this is what Wiener had in mind:
It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams but might very well be, respectively, the readings of artificial sense organs, such as photoelectric cells or thermometers, and the performance of motors or solenoids. With the aid of strain gauges or similar agencies to read the performance of these motor organs and to report, to “feed back,” to the central control system as an artificial kinesthetic sense, we are already in a position to construct artificial machines of almost any degree of elaborateness of performance.
[...]
The modern industrial revolution is similarly bound to devalue the human brain, at least in its simpler and more routine decisions. Of course, just as the skilled carpenter, the skilled mechanic, the skilled dressmaker have in some degree survived the first industrial revolution, so the skilled scientist and the skilled administrator may survive the second. However, taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that it is worth anyone’s money to buy.
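The economic claim above (that an AI as capable as an average human, running below the wage floor, leaves such workers with “nothing to sell”) is at bottom a cost comparison. A minimal sketch of that argument, with all numbers and names hypothetical:

```python
def market_wage_for_average_labor(ai_cost_per_hour, wage_floor_per_hour):
    """Toy model: for tasks where an AI and an average human are perfect
    substitutes, competition caps the human wage at the AI's running cost.
    If that cost sits below the wage floor (minimum or subsistence wage),
    there is no wage at which hiring the human is both viable and cheaper,
    so the human goes unhired -- "equilibrating influences" have no room
    to work."""
    if ai_cost_per_hour < wage_floor_per_hour:
        return None  # no market-clearing wage exists: unemployment
    return ai_cost_per_hour  # employed, but wages are pinned to the AI's cost

# Hypothetical numbers, purely for illustration:
print(market_wage_for_average_labor(5.0, 12.0))   # None -> unemployed
print(market_wage_for_average_labor(20.0, 12.0))  # 20.0 -> employed at the AI's cost
```

The point of the sketch is that the usual equilibrating mechanism (wages falling until labor clears) is blocked from below by the subsistence or minimum wage, which is exactly the scenario the quoted passage envisions.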
AFAICT, all he did was write a few letters to two or three union officials, alerting them to this issue. I don’t think that really counts as “networking”. I also wasn’t able to find any place where Wiener gave a specific time scale,
This is partially a semantic issue (what do we mean by “networking”?). One of the quotations that I pasted from the article above is
Early in the postwar period, Wiener began an active outreach to organized labor. He made contact with union leaders, but he could not impress upon union officials the seriousness of the challenges posed by automation. The experience left him frustrated and strongly suspecting that labor leaders had a limited view of the coming realities of automation and few tools for dealing with his larger questions about the future of labor itself.
which gives the impression of urgency (a sense that he viewed it as a high-priority and time-sensitive issue).
If we assume the existence of AIs that are as capable as any human of average intelligence
Capable of what? Some tasks that previously required the labor of humans of average intelligence have been automated, and others haven’t been. There’s still an abundance of jobs for people of average intelligence that pay above minimum wage.
I think the following quotes show that this is what Wiener had in mind:
The first paragraph that you quote gives the impression that he may have (mistakenly) thought that humans were on the brink of developing robotics that are sufficiently sophisticated to replace physical labor. But robotics alone doesn’t suffice to replace all the labor that humans of average intelligence can perform.
My impressions are largely from Some Notes on Wiener’s Concerns about the Social Impact of Cybernetics, the Effects of Automation on Labor, and “the Human Use of Human Beings” (though I did spend some time looking at other sources). Do you think that other sources give a different impression?
I was reading Wiener’s own writings, here and here
which gives the impression of urgency (a sense that he viewed it as a high-priority and time-sensitive issue).
Wiener’s own writings do not seem to give such an impression of urgency, and I note that he didn’t do anything beyond contacting a few union leaders (such as lobbying politicians directly). Here’s how he described his contact with union leaders:
To arrive at this society, we need a good deal of planning and a good deal of struggle, which, if the best comes to the best, may be on the plane of ideas, and otherwise – who knows? I thus felt it my duty to pass on my information and understanding of the position to those who have an active interest in the conditions and the future of labor, that is, to the labor unions. I did manage to make contact with one or two persons high up in the CIO, and from them I received a very intelligent and sympathetic hearing. Further than these individuals, neither I nor any of them was able to go.
Quoting you again:
Capable of what? Some tasks that previously required the labor of humans of average intelligence have been automated, and others haven’t been. There’s still an abundance of jobs for people of average intelligence that pay above minimum wage.
Capable of any job that a human of average intelligence could perform. I thought that’s pretty clear from “However, taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that it is worth anyone’s money to buy.”
The first paragraph that you quote gives the impression that he may have (mistakenly) thought that humans were on the brink of developing robotics that are sufficiently sophisticated to replace physical labor.
It seems clear, at least in his later writings (1960, second link above), that he really was thinking of AGI, not just robotics:
Complete subservience and complete intelligence do not go together. How often in ancient times the clever Greek philosopher slave of a less intelligent Roman slaveholder must have dominated the actions of his master rather than obeyed his wishes! Similarly, if the machines become more and more efficient and operate at a higher and higher psychological level, the catastrophe foreseen by Butler of the dominance of the machine comes nearer and nearer.
I was reading Wiener’s own writings, here and here
Thanks.
Wiener’s own writings do not seem to give such an impression of urgency, and I note that he didn’t do anything beyond contacting a few union leaders (such as lobbying politicians directly). Here’s how he described his contact with union leaders:
Based on your quotation, I agree. I was reporting on what I read, and didn’t do a deep dive into the situation, because I had come to the conclusion that the case of Wiener and automation doesn’t have high relevance.
Capable of any job that a human of average intelligence could perform. I thought that’s pretty clear from “However, taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that it is worth anyone’s money to buy.”
We have a difference of interpretation. I thought he wasn’t talking about AGI, because AGI could probably replace highly intelligent people too, and he suggests that highly intelligent people wouldn’t be replaced.
It seems clear, at least in his later writings (1960, second link above), that he really was thinking of AGI, not just robotics:
I think that he was writing about narrow AI in his earlier writings, and AGI in his later writings.