A typo is when you think you have written down X, but actually you have written down Y. Are we not squarely in a map-territory discrepancy?
And, speaking from personal experience, those can be very painful to debug, because right until the very moment of realization you are prepared to swear that the problem must be something very subtle, since quite obviously you wrote down what you meant to write down.
If you’re lucky, and work in a statically typed language, the compiler will catch typos for you. Lucky because, for the compiler to catch it, the typo has to be a) in an identifier (the compiler doesn’t check strings or integer constants), and b) one that results in an undefined identifier (a typo can just as easily turn one defined identifier into another defined identifier) or c) in an identifier of the wrong type.
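To make that concrete, here is a toy sketch in C (the variable names are mine, purely hypothetical) of the case the compiler can’t help you with:

```c
#include <stdio.h>

int main(void) {
    int total = 0;
    int tally = 0;   /* another int that happens to be in scope */

    for (int i = 1; i <= 10; i++) {
        total += i;  /* total ends up as 55 */
    }

    /* A typo like "totall" would be an undefined identifier, and the
       compiler would reject it. But "tally" is a defined identifier of
       the right type, so this compiles cleanly and prints 0, not 55. */
    printf("%d\n", tally);

    return 0;
}
```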
I don’t know what you mean by a language-specific error; what I can come up with is also a map-territory discrepancy: you think the array-indexing convention is 1-based when in fact it is 0-based.
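A toy illustration in C (not from any real code, just to show the shape of the mistake):

```c
#include <stdio.h>

int main(void) {
    int scores[3] = {10, 20, 30};

    /* With a mistaken 1-based mental model, "the first element" reads as
       scores[1]; in C that is really the second element, and scores[3]
       would be out of bounds entirely. */
    printf("actual first element: %d\n", scores[0]);         /* 10 */
    printf("what a 1-based reader grabs: %d\n", scores[1]);  /* 20 */

    return 0;
}
```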
The more down-to-earth formulation is “every bug is in fact a programmer mistake”.
It’s almost not worth mentioning… but my experience in a different domain says otherwise. Namely the game of Go: one of the first, most basic rules you learn in Go is “players alternate play”. Or, to put it another way, “you ain’t going to get to play two moves in a row”. Every player is supposed to know this… and yet I have seen one strong player totally humiliate others by pointing out that they played exactly as if they hoped to get two moves in succession.
Every bug is in fact a programmer mistake, and everyone knows this… so why does everyone behave as if they thought differently?
If we’re thinking about programming as a way of deeply understanding a problem—or at least as a process which leads to understanding a problem as a necessary step—then we have these errors that don’t reflect a misunderstanding of the problem. They may reflect a misunderstanding of the language, or a typo, which I realize are still map-territory discrepancies (isn’t any mistake?) but have nothing to do with the problem you’re trying to solve.
In a way, I suppose I’m nitpicking. But it also needs to be said because when debugging, you need to be aware of two levels of differences: differences between what the correct solution is and what you think it is, and differences between what the program does and what you think it does.
This comes up a lot when I’m grading mathematical proofs. Sometimes the mistake is a faulty step: for instance, an assumption of something that shouldn’t be assumed, or maybe only a partial solution is found. Sometimes, the mistake is in the presentation: the idea of the proof matches the correct idea, but a key step is unexplained, or a final answer is wrong due to an error in arithmetic. I think it definitely matters which kind of error the students are making.
The big difference between a typo in writing and a typo in code is that in the first case the hardware that does the interpretation transparently covers up the mistake (which is why editing is a hard job, btw). In the second case the consequences can be more severe, and are likely to crop up later and inconvenience more people. Code is unforgiving.
As a case study we could consider the latest “bug” to have a noticeable effect on LW. Someone released this code into production believing that it worked, which turned out to be very different from the reality.