I think anthropomorphism is the worst of all. I have now seen programs “trying to do things”, “wanting to do things”, “believing things to be true”, “knowing things” etc. Don’t be so naïve as to believe that this use of language is harmless. It invites the programmer to identify himself with the execution of the program and almost forces upon him the use of operational semantics.
-- Edgar Dijkstra, The Fruits of Misunderstanding
I know who Dijkstra was, respect him greatly, and agree with most of that article, and indeed, most of everything he wrote. But this is something I disagree about. He would (here) have us speak of a computer’s “store” instead of its “memory”, and there were various other substitutions that he would have us make. All that that would achieve would be to develop a parallel vocabulary, one for computing machines and one for thinking beings, and an injunction to always use the right vocabulary for the right context.
What it is for a human being to try things, want things, believe things, know things, etc. is different from what it is for a program to do these things. But they also have an amount of commonality that makes insisting on separate vocabulary an unproductive ritual.
So when, for example, a compiler complains to me (must I say “issues an error message”?) that it couldn’t find a file, I want it to give me the answers to questions such as “why did you look for that file?” (i.e. show me the place where you were instructed to access it), “what were you looking for?” (i.e. show me the file name exactly as you received it), “where did you look for it?” (i.e. show me the directory search path in force at the point where you looked for it), and “why did you look there?” (i.e. show me where you got that search path from). This seems to me an entirely natural and unproblematic way of speaking, and not at all in conflict with his larger message, which is of fundamental importance for programming: that programming is a mathematical activity which, when done right, carries mathematical guarantees of correctness.
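As a minimal sketch of what answering those four questions might look like (all names and structure here are hypothetical, not any real compiler’s interface), an error report could simply carry the four answers alongside the message:

    # Hypothetical sketch only: no real compiler exposes this exact interface.
    from dataclasses import dataclass

    @dataclass
    class FileNotFoundReport:
        requested_at: str        # why did you look for it? (the line that asked for the file)
        requested_name: str      # what were you looking for? (the name exactly as received)
        search_path: list[str]   # where did you look? (the directories actually tried)
        path_origin: str         # why did you look there? (where the search path came from)

        def explain(self) -> str:
            return (
                f"error: could not find '{self.requested_name}'\n"
                f"  requested at:       {self.requested_at}\n"
                f"  searched in:        {', '.join(self.search_path)}\n"
                f"  search path set by: {self.path_origin}"
            )

    report = FileNotFoundReport(
        requested_at='main.c:3: #include "util.h"',
        requested_name="util.h",
        search_path=["./include", "/usr/local/include"],
        path_origin='"-I./include" on the command line, plus the built-in default',
    )
    print(report.explain())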
That message is especially important to the task of designing superintelligent machines.
BTW, it’s “Edsger”.
I thought the most interesting part of the quote was the proposed link between “empathizing” reasoning and operational semantics.
I don’t know if this was your intent when you chose the username, but I subconsciously prepend “Pfft.” to the beginning of all your comments and read them in a dismissive tone.
Ha, yeah that’s an unintended effect.
How so?
A computer is a mathematical machine, mathematics made physical. It is built of logic gates, devices which compute certain outputs as mathematical functions of their inputs. This is what they are designed to be, and in comparison with all the other physical devices mankind has contrived, they operate with phenomenal reliability.
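A toy illustration of that point, a sketch of my own rather than anything from the thread: a gate modelled as a pure Boolean function, with larger functions built by composing it, just as hardware composes physical gates:

    # Toy model: a logic gate is nothing but a function from inputs to an output.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    # Every other Boolean function can be built by composing NAND with itself.
    def not_(a: bool) -> bool:
        return nand(a, a)

    def and_(a: bool, b: bool) -> bool:
        return not_(nand(a, b))

    def xor(a: bool, b: bool) -> bool:
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    # The composite is still a deterministic function: same inputs, same output, every time.
    assert [xor(a, b) for a in (False, True) for b in (False, True)] == [False, True, True, False]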
Mathematics operates with absolute certainty. (Anyone quoting Eliezer’s password is invited to go away and not come back until they’ve devised a new foundation for probability theory in which P(A|A) < 1.) Physical realisations can fall short. But an ordinary desktop computer can operate for weeks at a time without any hardware glitches. If you multiply the number of gates by the clock speed by the duration, that comes to somewhere in the region of 10 to the 24th operations—approximately Avogadro’s number—every one of which worked as designed. When your program goes wrong, hardware error isn’t the way to bet.
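Where a figure of that order comes from, as a back-of-envelope multiplication with assumed round numbers (a billion gates, a 3 GHz clock, two weeks of uptime), illustrative rather than measured:

    # Illustrative back-of-envelope count, with assumed round numbers.
    gates = 1e9                      # order of a billion logic gates (assumed)
    clock_hz = 3e9                   # ~3 GHz clock (assumed)
    seconds = 2 * 7 * 24 * 3600      # two weeks of uptime

    operations = gates * clock_hz * seconds
    print(f"{operations:.1e} gate-operations")
    # ~3.6e24, within an order of magnitude of Avogadro's number (6.0e23)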
If the basic semiconductor gate were not so reliable, if each gate failed “only” one in a million times, you would have millions of errors every second and computing on the scale of today would hardly be possible. This is one reason we don’t use valves any more. (Another is that they’re too big.) Above a certain size, the proportion of operating time taken up by replacing burnt-out valves approaches 100%.
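Two quick calculations on the same assumed figures make both points; the valve numbers (lifetime per valve, minutes to find and replace one) are illustrative assumptions of mine:

    # Continuing with the assumed figures above.
    gate_ops_per_second = 1e9 * 3e9            # gates * clock rate (assumed)

    # 1) If each gate operation failed "only" one time in a million:
    errors_per_second = gate_ops_per_second * 1e-6
    print(f"{errors_per_second:.0e} errors per second")   # ~3e12: "millions" is an understatement

    # 2) Valves: with N valves, an assumed lifetime of T hours each, and r hours to
    #    find and replace one, roughly N*r/T of the machine's time goes on replacement.
    valve_lifetime_hours = 5000                # assumed mean lifetime of one valve
    replace_hours = 0.25                       # assumed 15 minutes to locate and swap it
    for n_valves in (1_000, 10_000, 20_000, 100_000):
        fraction = min(1.0, n_valves * replace_hours / valve_lifetime_hours)
        print(f"{n_valves:>7} valves -> {fraction:.0%} of time spent on replacement")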
Programs constructed on top of that hardware are themselves mathematical objects, physically realised. When you write a program to accomplish a precisely defined task, if you get the program right, it will do the right thing every single time. It will not be “stressed” by hard inputs, as a bridge is stressed by a heavy load. It will not need to be “maintained”, as a car must be maintained. These physical metaphors do not apply to mathematical objects. A correctly written program “just works”, a hacker expression of high praise.
This is not an easy thing to accomplish. It can be accomplished, but a prerequisite is to realise that you are engaged in a mathematical activity, and to know how to approach the task as such.
The mathematical nature of the discipline was recognised from the very start by the founders of computing. Turing and von Neumann were mathematicians, and von Neumann explicitly referred back to Leibniz’s idea of a calculus ratiocinator, in the sense of both a mechanical method of reasoning and a machine for performing it. That reached its mathematical fulfilment in the late 19th and early 20th century with the development of mathematical logic, and its physical fulfilment with the development of the general purpose computer.
I also don’t think this is a concern. It’s just analogy, metaphor, figurative language, which is more or less what the human mind runs on. I also don’t think it leads to real anthropomorphization in the minds of those using it; it’s more just a useful shorthand. Compare something I overheard once about atoms of a certain reactive element “wanting” to bond with other atoms. I don’t think either party was ascribing agency to those atoms in this case; rather, “it wants X” is commonly understood as a useful shorthand for “it behaves as if it wanted X”.
Edit: see also: http://catb.org/jargon/html/anthropomorphization.html
Thus it is common to hear hardware or software talked about as though it has homunculi talking to each other inside it, with intentions and desires. Thus, one hears “The protocol handler got confused”, or that programs “are trying” to do things, or one may say of a routine that “its goal in life is to X”. Or: “You can’t run those two cards on the same bus; they fight over interrupt 9.”
One even hears explanations like “… and its poor little brain couldn’t understand X, and it died.” Sometimes modelling things this way actually seems to make them easier to understand, perhaps because it’s instinctively natural to think of anything with a really complex behavioral repertoire as ‘like a person’ rather than ‘like a thing’.
At first glance, to anyone who understands how these programs actually work, this seems like an absurdity. As hackers are among the people who know best how these phenomena work, it seems odd that they would use language that seems to ascribe consciousness to them. The mind-set behind this tendency thus demands examination.
The key to understanding this kind of usage is that it isn’t done in a naive way; hackers don’t personalize their stuff in the sense of feeling empathy with it, nor do they mystically believe that the things they work on every day are ‘alive’. To the contrary: hackers who anthropomorphize are expressing not a vitalistic view of program behavior but a mechanistic view of human behavior.
And, while we’re on the subject, here’s a classic:
“Let me see if I understand your thesis. You think we shouldn’t anthropomorphize people?”
-- Sidney Morgenbesser to B. F. Skinner
(via Eliezer, natch.)