I have to disagree on Python; I think consistency and minimalism are the most important things in an “introductory” language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don’t think about.
I’d lean toward either C (for learning the “pushing electrons around silicon” end of things) or Scheme (for learning the “abstract conceptual elegance” end of things). It helps that both have excellent learning materials available.
Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some “mainstream” programming and wants to stretch their brain.
You make some good points, but I still disagree with you. For someone who’s trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I’ve always thought that the quickest way to learn programming was to do programming, and until you’ve been doing it for a while, you won’t understand it.
Well, I admit that my thoughts are colored somewhat by an impression—acquired by having made a living from programming for some years—that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase “software engineer” with a straight face! But I’ll leave it at that, lest I start quoting Dijkstra.
Back on topic, I do agree that being able to start doing things quickly—both in terms of producing interesting results and getting rapid feedback—is important, but not the most important thing.
I want to achieve an understanding of the basics without necessarily becoming a productive programmer. I want to get a grasp of the underlying nature of computer science, not merely the ability to mechanically write and parse code to solve certain problems. The big picture and underlying nature are what I’m looking for.
I agree that many people do not understand; they really only learnt how to use something mechanically. How much does the average person know about how one of our simplest tools works, the knife? What does it mean to cut something? What does the act of cutting accomplish? How does it work?
We all know how to use this particular tool. We think it is obvious, thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms behind the things we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear to be simple and obvious.
I want to get a grasp of the underlying nature of computer science,
Then you do not, in fact, need to learn to program. You need an actual CS text, covering finite automata, pushdown machines, Turing machines, etc. Learning to program will illustrate and fix these concepts more closely, and is a good general skill to have.
Sipser’s Introduction to the Theory of Computation is a tiny little book with a lot crammed in. It’s also quite expensive, and advanced enough to make most CS students hate it. I have to recommend it because I adore it, but why start there, when you can start right now for free on wikipedia? If you like it, look at the references, and think about buying a used or international copy of one book or another.
I echo the reverent tones of RobinZ and wnoise when it comes to The Art of Computer Programming. Those volumes are more broadly applicable, even more expensive, and even more intense. They make an amazing gift for that computer scientist in your life, but I wouldn’t recommend them as a starting point.
Well, they’re computer sciencey, but they are definitely geared to approaching from the programming, even “Von Neumann machine” side, rather than Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.
Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!
You seem to already know Lisp, so probably not. Read the table of contents. If you haven’t written an interpreter, then yes.
The point in this context is that when people teach computability theory from the point of view of Turing machines, they wave their hands and say “of course you can emulate a Turing machine as data on the tape of a universal Turing machine,” and there’s no point to fill in the details. But it’s easy to fill in all the details in λ-calculus, even a dialect like Scheme. And once you fill in the details in Scheme, you (a) prove the theorem and (b) get a useful program, which you can then modify to get interpreters for other languages, say, ML.
SICP is a programming book, not a theoretical book, but there’s a lot of overlap when it comes to interpreters. And you probably learn both better this way.
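Concretely, the “useful program” you end up with is just an evaluator. Below is a minimal sketch in Haskell rather than Scheme (the construction is the same); it handles only the pure untyped calculus, and the type and function names are made up for illustration.

```haskell
-- A minimal sketch of the "fill in the details" exercise: an evaluator for
-- the untyped lambda calculus. Term, subst and eval are invented names.
import Data.List (delete, union)

data Term
  = Var String          -- x
  | Lam String Term     -- \x. body
  | App Term Term       -- function application
  deriving Show

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (Lam x b) = delete x (freeVars b)
freeVars (App f a) = freeVars f `union` freeVars a

-- Capture-avoiding substitution: subst x s t == t[x := s].
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y    = s
  | otherwise = Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y b)
  | y == x              = Lam y b                 -- x is shadowed; stop
  | y `elem` freeVars s = Lam y' (subst x s b')   -- rename to avoid capture
  | otherwise           = Lam y (subst x s b)
  where
    y' = freshName y (freeVars s `union` freeVars b)
    b' = subst y (Var y') b

-- Generate a bound-variable name not in the given list.
freshName :: String -> [String] -> String
freshName y used =
  head [n | n <- map ((y ++) . show) [0 :: Int ..], n `notElem` used]

-- Normal-order evaluation to weak head normal form.
eval :: Term -> Term
eval (App f a) =
  case eval f of
    Lam x b -> eval (subst x a b)
    f'      -> App f' a
eval t = t

-- Example: (\x. x) applied to (\y. y) reduces to \y. y.
main :: IO ()
main = print (eval (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
```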
I almost put this history lesson in my previous comment: Church invented λ-calculus and proposed the Church-Turing thesis that it is the model of all that we might want to call computation, but no one believed him. Then Turing invented Turing machines, showed them equivalent to λ-calculus and everyone then believed the thesis. I’m not entirely sure why the difference. Because they’re more concrete? So λ-calculus may be less convincing than Turing machines, hence pedagogically worse. Maybe actually programming in Scheme makes it more concrete. And it’s easy to implement Turing machines in Scheme, so that should convince you that your computer is at least as powerful as theoretical computation ;-)
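And the other direction really is a small exercise, as claimed. Here is a rough sketch of a Turing-machine simulator, again in Haskell rather than Scheme (the point carries over); the rule encoding and all names are invented for the example.

```haskell
-- A sketch of simulating a Turing machine in a functional language.
import qualified Data.Map as M

data Move = L | R deriving Show

-- Transition table: (state, symbol read) -> (new state, symbol written, head move).
type Rules = M.Map (String, Char) (String, Char, Move)

-- The tape as a zipper: cells to the left (reversed), current cell, cells to the right.
data Tape = Tape [Char] Char [Char]

blank :: Char
blank = '_'

moveHead :: Move -> Tape -> Tape
moveHead L (Tape (l:ls) c rs) = Tape ls l (c:rs)
moveHead L (Tape []     c rs) = Tape [] blank (c:rs)
moveHead R (Tape ls c (r:rs)) = Tape (c:ls) r rs
moveHead R (Tape ls c [])     = Tape (c:ls) blank []

-- Run until no rule applies; return the final tape contents.
run :: Rules -> String -> Tape -> String
run rules state (Tape ls c rs) =
  case M.lookup (state, c) rules of
    Nothing                 -> reverse ls ++ [c] ++ rs
    Just (state', c', move) -> run rules state' (moveHead move (Tape ls c' rs))

-- Example: a one-state machine that overwrites 1s with 0s moving right,
-- halting when it reaches a blank. Prints "0000_".
main :: IO ()
main = putStrLn (run rules "s" (Tape [] '1' "111"))
  where
    rules = M.fromList [(("s", '1'), ("s", '0', R))]
```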
Um… I think it’s a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not.
EDIT: It’s interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who’s not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.
Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.
Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it’d be like programming in Brainfuck. It was von Neumann’s insights leading to the stored-program architecture that made computing remotely sensible.
There’s plenty of ridiculously opaque models of computation (Post’s tag machine, Conway’s Life, exponential Diophantine equations...) but I can’t begin to imagine one that would be more comprehensible than untyped lambda calculus.
I’m pretty sure that Eliezer meant that Turing machines are better for giving novices a “model of computation”. That is, they will gain a better intuitive sense of what computers can and can’t do. Your students might not be able to implement much, but their intuitions about what can be done will be better after just a brief explanation. So, if your goal is to make them less crazy regarding the possibilities and limitations of computers, Turing machines will give you more bang for your buck.
A friend of mine has invented a “Game of Lambda” played with physical tokens which look like a bigger version of the hexes from wargames of old, with rules for function definition, variable binding and evaluation. He has a series of exercises requiring players to create functions of increasing complexity; plus one, factorial, and so on. Seems to work well.
You realize you’ve just called every computer scientist inhuman?
Turing machines are something one can easily imagine implementing in hardware. The typical encoding of some familiar concepts into lambda calculus takes a bit of getting used to (natural numbers as functions which compose their argument (as a function) n times? If-then-else as function composition, where “true” is a function returning its first argument, and “false” is a function returning its second? These are decidedly odd). But lambda calculus is composable. You can take two definitions and merge them together nicely. Combining useful features from two Turing machines is considerably harder. The best route to usable programming there is the UTM + stored code, which you have to figure out how to encode sanely.
If-then-else as function composition, where “true” is a function returning its first argument, and “false” is a function returning its second? These are decidedly odd.
Of course, not so odd for anyone who uses Excel...
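For concreteness, here is a small sketch of the encodings described a couple of comments up (Church booleans, if-then-else, and Church numerals) written as Haskell definitions; the names are made up for illustration, and RankNTypes is only needed so the encodings can be given type synonyms. Subtraction is deliberately left out; see the next comment.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church booleans select one of their two arguments;
-- Church numerals apply a function n times.
type ChurchBool = forall a. a -> a -> a
type ChurchNat  = forall a. (a -> a) -> a -> a

true, false :: ChurchBool
true  t _ = t      -- "true" returns its first argument
false _ f = f      -- "false" returns its second argument

-- If-then-else is just application of the boolean to the two branches.
ifThenElse :: ChurchBool -> a -> a -> a
ifThenElse b t f = b t f

zero, one, two :: ChurchNat
zero _ z = z
one  f z = f z
two  f z = f (f z)

succ' :: ChurchNat -> ChurchNat
succ' n f z = f (n f z)

plus :: ChurchNat -> ChurchNat -> ChurchNat
plus m n f z = m f (n f z)

-- Convert back to an ordinary Int to check the encoding.
toInt :: ChurchNat -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (plus two (succ' one)))   -- prints 4
```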
Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)
And no looking it up, that’s cheating! Took me the better part of a day to figure it out, it’s a real mind-twister.
Maybe pure lambda calculus is not humanly comprehensible, but general recursion is as comprehensible as Turing machines, yet Gödel rejected it. My history should have started when Church promoted that.
I think that λ-calculus is about as difficult to work with as Turing machines. I think the reason that Turing gets his name in the Church-Turing thesis is that they had two completely different architectures that had the same computational power. When Church proposed that λ-calculus was universal, I think there was a reaction of doubt, and a general feeling that a better way could be found. When Turing came to the same conclusion from a completely different angle, that appeared to verify Church’s claim.
I can’t back up these claims as well as I’d like. I’m not sure that anyone can backtrace what occurred to see if the community actually felt that way or not; however, from reading papers of the time (and quite a bit thereafter—there was a long period before near-universal acceptance), that is my impression.
Actually, the history is straight-forward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did.
ETA: well, it’s straight-forward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing’s paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton.
It’s also worth noting that Curry’s combinatory logic predated Church’s λ-calculus by about a decade, and also constitutes a model of universal computation.
It’s really all the same thing in the end anyhow; general recursion (e.g., Curry’s Y combinator) is on some level equivalent to Gödel’s incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
I know the principles but have never taken the time to program something significant in the language. Partly because it just doesn’t have the libraries available to enable me to do anything I particularly need to do and partly because the syntax is awkward for me. If only the name ‘lisp’ wasn’t so apt as a metaphor for readability.
Are you telling me lambda calculus was invented before Turing machines and people still thought the Turing machine concept was worth making ubiquitous?
I’m betting it was hard for the first computer programmers to implement recursion and call stacks on early hardware. The Turing machine model isn’t as mathematically pure as lambda calculus, but it’s a lot closer to how real computers work.
Why not? People have a much easier time visualizing a physical machine working on a tape than visualizing something as abstract as lambda-calculus. Also, the Turing machine concept neatly demolishes the “well, that’s great in theory, but it could never be implemented in practice” objections that are so hard to push people past.
Because I am biased to my own preferences for thought. I find visualising the lambda-calculus simpler because Turing Machines rely on storing stupid amounts of information in memory because, you know, it’ll eventually do anything. It just doesn’t feel natural to use a kludgy technically complete machine as the very description of what we consider computationally complete.
Oh, I agree. I thought we were talking about why one concept became better-known than the other, given that this happened before there were actual programmers.
I, unfortunately, am merely an engineer with a little BASIC and MATLAB experience, but if it is computer science you are interested in, rather than coding, count this as another vote for SICP. Kernighan and Ritchie is also spoken of in reverent tones (edit: but as a manual for C, not an introductory book—see below), as is The Art of Computer Programming by Knuth.
I have physically seen these books, but not studied any of them—I’m just communicating a secondhand impression of the conventional wisdom. Weight accordingly.
Kernighan and Ritchie is a fine book, with crystal clear writing. But I tend to think of it as “C for experienced programmers”, not “learn programming through C”.
TAoCP is “learn computer science”, which I think is rather different than learning programming. Again, a fine book, but not quite on target initially.
I’ve only flipped through SICP, so I have little to say.
TAoCP and SICP are probably both computer science—I recommended those particularly as being computer science books, rather than elementary programming. I’ll take your word on Kernighan and Ritchie, though—put that one off until you want to learn C, then.
Merely an engineer? I’ve failed to acquire a leaving certificate of the lowest kind of school we have here in Germany.
Thanks for the hint at Knuth, though I already came across his work yesterday. Kernighan and Ritchie are new to me. SICP is officially on my must-read list now.
A mechanical engineering degree is barely a qualification in the field of computer programming, and not at all in the field of computer science. What little knowledge I have I acquired primarily through having a very savvy father and secondarily through recreational computer programming in BASIC et al. The programming experience is less important than the education, I wager.
Do you think that somebody in your field will, in the future, be able to get by without computer programming? While talking to neuroscientists I learnt that it is almost impossible to get what you want, in time, by explaining what you need to a programmer who has no degree in neuroscience while you yourself know nothing about computer programming.
I’m not sure what you mean—as a mechanical engineer, 99+% of my work involves purely classical mechanics, no relativity or quantum physics, so the amount of programming most of us have to do is very little. Once a finite-element package exists, all you need is to learn how to use it.
I’ve just read the abstract on Wikipedia and I assumed that it might encompass what you do.
Mechanical engineers design and build engines and power plants...structures and vehicles of all sizes...
I thought computer modeling and simulation might be very important in the early stages, shortly followed by field tests with miniature models. Even there you might have to program the tools that give shape to the ultimate parts. Though I guess if you work in a highly specialized area, that is not the case.
I couldn’t build a computer, a web browser, a wireless router, an Internet, or a community blog from scratch, but I can still post a comment on LessWrong from my laptop. Mechanical engineers rarely need to program the tools, they just use ANSYS or SolidWorks or whatever.
Edit: Actually, the people who work in highly specialized areas are more likely to write their own tools—the general-interest areas have commercial software already for sale.
Bear in mind that I’m not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren’t for beginners.) I’d suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn’t cover.
‘Follow conversations’, indeed. That’s what I mean. Being able to grasp concepts that involve ‘symbolic computation’ and information processing by means of formal language. I don’t aim at actively taking part in productive programming. I don’t want to become a poet, I want to be able to appreciate poetry, perceive its beauty.
Take English as an example. Only a few years ago I seriously started to learn English. Before that, I could merely chat while playing computer games, lol. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”—Edsger W Dijkstra.
More modern versions aren’t that bad, and it’s not quite fair to tar them with the same brush, but I still wouldn’t recommend learning any of them for their own sake. If there is a need (like modifying an existing codebase), then by all means do.
Dijkstra’s quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn’t actually a bad language at all. On the other hand, it also lacks much of the “easy to pick up and experiment with” aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.
Yeah, you won’t be able to be very productive regarding bottom-up groundwork. But you’ll be able to look into existing works and gain insights. Even if you forget a lot, something will stick and help you to pursue a top-down approach. You’ll be able to look into existing code, edit it, and regain lost knowledge or learn new things more quickly.
Agree with where you place Python, Scheme and Haskell. But I don’t recommend C. Don’t waste time there until you already know how to program well.
Given a choice of what I would begin with if I had my time again, I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general). Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though, I don’t think that’s really what XiXiDu is looking for.
Agree on where C is useful and got the same impression about the applicability to XiXiDu’s (where on earth does that name come from?!?) goals.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn’t meet your ‘minimalist’ ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I’ve converted to primarily using a language that relies on duck-typing.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.
“Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.”—Alan Kay
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:
It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
Templates are a clunky, disappointing imitation of real metaprogramming.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
Combining error handling via exceptions with manual memory management is frankly absurd.
The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.
I could elaborate further, but it’s too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical “real” OO language, but I’d probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).
ETA: Well, that came out awkwardly verbose. Apologies.
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
I’m sure I could manage 1k before I considered the point settled and moved on to a language that isn’t a decades old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals then C++ will give you that over a broader area of nuts and bolts.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
This is the one point I disagree with, and I do so both on the assertion ‘almost uniformly’ and also on the concept itself. As far as experts in object-oriented programming go, Bertrand Meyer is considered one, and his book ‘Object-Oriented Software Construction’ is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design, not inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven’t gotten around to it yet.
I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn’t suggest for learning purposes, either.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Well, the problem isn’t really multiple inheritance itself, it’s the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn’t really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we’ve all seen trick questions about “okay, which method will this call?”). Something closer to a simple type predicate, like the interfaces in Google’s Go language or like Haskell’s type classes, is much less painful here. Or of course duck typing, if static type-checking isn’t your thing.
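As a minimal sketch of that last alternative, here is dispatch-by-type with a Haskell type class and no inheritance hierarchy; the class and the two instance types are invented purely for the example.

```haskell
-- Dispatch by type without an inheritance hierarchy: a type class acts as
-- the kind of "simple type predicate" described above.
class Describable a where
  describe :: a -> String

-- Two otherwise unrelated types can both satisfy the predicate;
-- there is no parent class and no question of which override "wins".
newtype Celsius = Celsius Double
data Point = Point Double Double

instance Describable Celsius where
  describe (Celsius t) = show t ++ " degrees C"

instance Describable Point where
  describe (Point x y) = "point at " ++ show (x, y)

-- Code written against the predicate works for any instance.
report :: Describable a => a -> String
report x = "value: " ++ describe x

main :: IO ()
main = mapM_ putStrLn [report (Celsius 21.5), report (Point 1 2)]
```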
Compositional code reuse in objects—what I meant by “implementation inheritance”—also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
Note that “multiple inheritance” makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it’s generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of “parent” types.
Consider the following types:
Tree structures containing values of some type A.
Lists containing values of some type A.
Text strings, stored as immutable lists of characters.
Text strings as above, but with a maximum length of 255.
The generic tree and list types are both abstract containers; say they both implement a mapping operation that uses a projection function to transform every element from type A to some type B, leaving the overall structure unchanged. Both can declare this as an interface, but there’s no shared implementation or obvious subtyping relationship.
The text strings can’t implement the above interface (because they’re not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren’t subtypes of the list, though, because it’s mutable.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
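For what it’s worth, here is a rough transcription of that example into Haskell; the names are invented, and since Haskell has no subtyping, the last relationship is only approximated with a smart constructor and an explicit embedding function.

```haskell
-- Tree and List, each containing values of some type a.
data Tree a = Leaf | Node (Tree a) a (Tree a)
newtype List a = List [a]

-- The shared "projection" interface: transform every element, leaving the
-- shape alone. In Haskell this is just the standard Functor class; there is
-- no subtyping relationship between Tree and List.
instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

instance Functor List where
  fmap f (List xs) = List (map f xs)

-- A text string stored as an immutable list of characters. It cannot be a
-- Functor (it is not parameterized), but it reuses the list implementation.
newtype Text = Text (List Char)

textLength :: Text -> Int
textLength (Text (List cs)) = length cs

-- The length-limited string: conceptually a subtype of Text. The closest
-- Haskell idiom is a smart constructor plus an explicit embedding.
newtype ShortText = ShortText Text

mkShortText :: List Char -> Maybe ShortText
mkShortText (List cs)
  | length cs <= 255 = Just (ShortText (Text (List cs)))
  | otherwise        = Nothing

-- Any function taking a Text can take a ShortText via this embedding.
toText :: ShortText -> Text
toText (ShortText t) = t

main :: IO ()
main = do
  let Node _ x _ = fmap (* 2) (Node Leaf (21 :: Int) Leaf)
  print x                                     -- 42, via the shared interface
  case mkShortText (List "hello") of
    Just s  -> print (textLength (toText s))  -- 5
    Nothing -> putStrLn "too long"
```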
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option.
Of course, but I’m more considering ‘languages to learn that make you a better programmer’.
I remain unconvinced that C++ has anything to offer in these cases;
Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens
I don’t agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma ‘multiple inheritance is bad’ and don’t allow generics enforce bad habits while at the same time insisting that they are the True Way.
and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
I think I agree on this note, with certain restrictions on what counts as ‘civilized’. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps python too.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn’t do it in Java or .NET (except Eiffel.NET).
I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
Sometimes objects just are more than one type.
This argues for interfaces, not multiple implementation inheritance.
And implementation inheritance can easily be emulated by containment and method forwarding (a minimal sketch follows below), though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that’s trivial in Smalltalk or Objective-C...
The hard part that no language has a good solution for is objects which can be the same type in two (or more) different ways.
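As a small illustration of the containment-and-forwarding idea mentioned just above, here is a sketch using plain records of functions in Haskell; the Counter type and its fields are invented for the example.

```haskell
-- Containment plus method forwarding in place of implementation inheritance.
import Data.IORef

-- An "object" as a record of methods.
data Counter = Counter
  { increment :: IO ()
  , current   :: IO Int
  }

newCounter :: IO Counter
newCounter = do
  ref <- newIORef 0
  return Counter
    { increment = modifyIORef ref (+ 1)
    , current   = readIORef ref
    }

-- "Inherits" the counter's behaviour by containing one and forwarding to it,
-- while wrapping the forwarded method with logging. The record update keeps
-- every field we do not override, which is the forwarding shortcut.
loggingCounter :: Counter -> Counter
loggingCounter inner =
  inner { increment = putStrLn "incrementing" >> increment inner }

main :: IO ()
main = do
  c <- fmap loggingCounter newCounter
  increment c
  increment c
  current c >>= print   -- prints 2
```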
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the possible risk of tetanus.
It does handle the diamond inheritance problem as well as can be expected—the renaming feature is quite nice. Though related, this isn’t what I’m concerned with. AFAICT, it really doesn’t handle it in a completely general way. (Given that Eiffel has a type system you can drive a bus through (covariant vs. contravariant arguments), I prefer Sather, though the renaming feature there is more persnickety—harder to use in some common cases.)
Consider a lattice (in the order-theoretic sense). It is a semilattice in two separate, dual ways: with the join operation, and with the meet operation. If we have generalized semilattice code and we want to pass it a lattice, which one should be used? How about if we want to use the other one?
In practice, we can call these a join-semilattice, and a meet-semilattice, have our function defined on one, and create a dual view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or …
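A sketch of that “dual view” wrapper in Haskell, with class and type names invented for the example (nothing here is a real library API):

```haskell
-- Generic code is written against one semilattice operation; a newtype
-- wrapper provides the dual view of the same carrier.
class JoinSemilattice a where
  join :: a -> a -> a

class MeetSemilattice a where
  meet :: a -> a -> a

-- Integers form a lattice: max is the join, min is the meet.
newtype IntLattice = IntLattice Int deriving Show

instance JoinSemilattice IntLattice where
  join (IntLattice x) (IntLattice y) = IntLattice (max x y)

instance MeetSemilattice IntLattice where
  meet (IntLattice x) (IntLattice y) = IntLattice (min x y)

-- The dual view: the wrapper's "join" is the underlying meet.
newtype DualView a = DualView a deriving Show

instance MeetSemilattice a => JoinSemilattice (DualView a) where
  join (DualView x) (DualView y) = DualView (meet x y)

-- Generic code written once against the join interface.
joinAll :: JoinSemilattice a => a -> [a] -> a
joinAll = foldr join

main :: IO ()
main = do
  print (joinAll (IntLattice 0) (map IntLattice [3, 1, 4]))                      -- IntLattice 4
  print (joinAll (DualView (IntLattice 9)) (map (DualView . IntLattice) [3, 1])) -- DualView (IntLattice 1)
```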
There is a math abstraction called a monoid, for an associative operator with identity. Haskell has a corresponding typeclass, with such things as lists as instances, with catenation as the operator, and the empty list as identity. I don’t have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code.
So, suppose we want to make Integers an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways. There is no natural duality here we can take advantage of (as we could with the lattice example). The consensus in the community has been to not make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, with thus trivial conversion costs. There is also a newtype for dual monoids, formalizing a particular duality idea similar to the lattice case (this switches left and right—monoids need not be commutative, as the list example should show). There are also newtypes that label Bools as using the operation “and” or the operation “or”; this is actually a case of the lattice duality above.
For this simple case, it’d be easy enough to just explicitly pass in the operation. But for more complicated typeclasses, we can bundle a whole lump of operations in a similar manner.
I’m not entirely happy with this either. If you’re only using one of the interfaces, then that wrapper is damn annoying. Thankfully, e.g. Sum Integer can also be made an instance of Num, so that you can continue to use * for multiplication, + for addition, and so forth.
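For reference, this is roughly what the wrappers described above look like in use; Sum and Product here are the actual newtypes from Data.Monoid.

```haskell
-- The same Integers, made a monoid in two different ways via newtype wrappers.
import Data.Monoid (Product (..), Sum (..))

main :: IO ()
main = do
  print (getSum     (foldMap Sum     [1, 2, 3, 4 :: Integer]))  -- 10
  print (getProduct (foldMap Product [1, 2, 3, 4 :: Integer]))  -- 24
  -- mconcat works on any monoid; the wrapper picks which instance is meant.
  print (getSum (mconcat (map Sum [5, 6, 7 :: Integer])))       -- 18
```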
I don’t think Sather is a viable language at this point, unfortunately.
Yes, C is useful for that, though C-- and LLVM are providing new paths as well.
I personally think C will stick around for a while because getting it running on a given architecture provides a “good enough” ABI that is likely to be stable enough that HLLs’ FFIs can depend on it.
I put C++ as a “learn only if needed language”. It’s extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++ given by which features the writers are willing to use, and which are considered too dangerous.
Recommendations on the above? Books, essays...
Elsewhere wnoise said that SICP and Knuth were computer science, but additional suggestions would be nice.
For my undergraduate work, I used two books. The first is Jan L. A. van de Snepscheut’s What Computing Is All About. It is, unfortunately, out-of-print.
The second was Elements of the Theory of Computation by Harry Lewis and Christos H. Papadimitriou.
You probably should have spelled out that SICP is on the λ-calculus side.
Gah. Do I need to add this to my reading list?
Alligator Eggs is another variation on the same theme.
It’s much of a muchness; in pure form, both are incomprehensible for nontrivial programs. Practical programming languages have aspects of both.
Wikipedia says lambda calculus was published in 1936 and the Turing machine was published in 1937.
I think the link you want is to the history of the Church-Turing thesis.
The history in the paper linked from this blog post may also be enlightening!
Any opinion on the 2nd edition of Elements?
Nope. I used the first edition. I wouldn’t call it a “classic”, but it was readable and covered the basics.
Do you think that somebody in your field, in the future, will get around computer programming? While talking to neuroscientists I learnt that it is almost impossible to get what you want, in time, by explaining what you need to a programmer who has no degree in neuroscience while you yourself don’t know anything about computer programming.
I’m not sure what you mean—as a mechanical engineer, 99+% percent of my work involves purely classical mechanics, no relativity or quantum physics, so the amount of programming most of us have to do is very little. Once a finite-element package exists, all you need is to learn how to use it.
I’ve just read the abstract on Wikipedia and I assumed that it might encompass what you do.
I thought computer modeling and simulations might be very important in the early stages. Shortly following field tests with miniature models. Even there you might have to program the tools that give shape to the ultimate parts. Though I guess if you work in a highly specialized area, that is not the case.
I couldn’t build a computer, a web browser, a wireless router, an Internet, or a community blog from scratch, but I can still post a comment on LessWrong from my laptop. Mechanical engineers rarely need to program the tools, they just use ANSYS or SolidWorks or whatever.
Edit: Actually, the people who work in highly specialized areas are more likely to write their own tools—the general-interest areas have commercial software already for sale.
Bear in mind that I’m not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren’t for beginners.) I’d suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn’t cover.
‘Follow conversations’, indeed. That’s what I mean. Being able to grasp concepts that involve ‘symbolic computation’ and information processing by means of formal language. I don’t aim at actively taking part in productive programming. I don’t want to become a poet, I want to be able to appreciate poetry, perceive its beauty.
Take English as an example. Only a few years ago I seriously started to learn English. Before I could merely chat while playing computer games LOL. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”—Edsger W Dijkstra.
More modern versions aren’t that bad, and it’s not quite fair to tar them with the same brush, but I still wouldn’t recommend learning any of them for their own sake. If there is a need (like modifying an existing codebase), then by all means do.
Dijkstra’s quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn’t actually a bad language at all. On the other hand, it also lacks much of the “easy to pick up and experiment with” aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.
Yeah, you won’t be able to be very productive regarding bottom-up groundwork. But you’ll be able to look into existing works and gain insights. Even if you forgot a lot, something will be stuck and help you to pursue a top-down approach. You’ll be able to look into existing code, edit it and regain or learn new and lost knowledge more quickly.
Agree with where you place Python, Scheme and Haskell. But I don’t recommend C. Don’t waste time there until you already know how to program well.
Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though I don’t think that’s really what XiXiDu is looking for.
Agree on where C is useful and got the same impression about the applicability to XiXiDu’s (where on earth does that name come from?!?) goals.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn’t meet your ‘minimalist’ ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I’ve converted to primarily using a language that relies on duck-typing.
“Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.”—Alan Kay
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:
It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
Templates are a clunky, disappointing imitation of real metaprogramming.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
Combining error handling via exceptions with manual memory management is frankly absurd.
The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.
I could elaborate further, but it’s too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical “real” OO language, but I’d probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).
ETA: Well, that came out awkwardly verbose. Apologies.
I’m sure I could manage 1k before I considered the point settled and moved on to a language that isn’t a decades-old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, excepting, obviously, the complexity-based problems. Similarly, any reasons I would actually give for why C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals, then C++ will give you that over a broader area of nuts and bolts.
This is the one point I disagree with, both on the assertion ‘almost uniformly’ and on the concept itself. Bertrand Meyer is considered an expert in object-oriented programming, and his book ‘Object-Oriented Software Construction’ is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are problems of implementation and poor language design, not something inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven’t gotten around to it yet.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn’t suggest for learning purposes, either.
Well, the problem isn’t really multiple inheritance itself, it’s the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn’t really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we’ve all seen trick questions about “okay, which method will this call?”). Something closer to a simple type predicate, like the interfaces in Google’s Go language or like Haskell’s type classes, is much less painful here. Or of course duck typing, if static type-checking isn’t your thing.
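To make the type-class flavour of this concrete, here’s a minimal Haskell sketch; the class and type names are invented for illustration, not taken from any library:

    -- Two unrelated types implement the same class; dispatch follows the
    -- argument's type, with no common ancestor and no hierarchy involved.
    class Describable a where
      describe :: a -> String

    data Circle = Circle Double
    data User   = User String

    instance Describable Circle where
      describe (Circle r) = "a circle of radius " ++ show r

    instance Describable User where
      describe (User name) = "the user " ++ name

    -- Generic code written against the class, not against any hierarchy.
    greet :: Describable a => a -> String
    greet x = "Hello, " ++ describe x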
Compositional code reuse in objects—what I meant by “implementation inheritance”—also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
Note that “multiple inheritance” makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it’s generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of “parent” types.
Consider the following types:
Tree structures containing values of some type A.
Lists containing values of some type A.
Text strings, stored as immutable lists of characters.
Text strings as above, but with a maximum length of 255.
The generic tree and list types are both abstract containers; say they both implement a map operation that uses a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there’s no shared implementation or obvious subtyping relationship.
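In type-class terms, that shared interface might look something like the following sketch (names invented; it’s essentially Haskell’s standard Functor class). Each container supplies its own implementation, and neither is a subtype of the other:

    class Mappable f where
      mapElements :: (a -> b) -> f a -> f b   -- transform every element, keep the structure

    data List a = Nil | Cons a (List a)
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    instance Mappable List where
      mapElements _ Nil         = Nil
      mapElements f (Cons x xs) = Cons (f x) (mapElements f xs)

    instance Mappable Tree where
      mapElements _ Leaf         = Leaf
      mapElements f (Node l x r) = Node (mapElements f l) (f x) (mapElements f r)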
The text strings can’t implement the above interface (because they’re not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren’t subtypes of the list, though, because it’s mutable.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
Of course, but I’m more considering ‘languages to learn that make you a better programmer’.
Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
I don’t agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma ‘multiple inheritance is bad’ and don’t allow generics enforce bad habits while at the same time insisting that they are the True Way.
I think I agree on this note, with certain restrictions on what counts as ‘civilized’. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps Python too.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn’t do it in Java or .NET (except Eiffel.NET).
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
This argues for interfaces, not multiple implementation inheritance. And implementation inheritance can easily be emulated by containment and method forwarding, though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that’s trivial in Smalltalk or Objective-C...
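A rough sketch of the containment-and-forwarding idea, using Haskell for consistency with the examples later in the thread (the Stack type and its functions are made up for illustration):

    -- Stack reuses the list implementation by containing a list and
    -- forwarding operations to it, rather than by inheriting from it.
    newtype Stack a = Stack { contents :: [a] }

    push :: a -> Stack a -> Stack a
    push x (Stack xs) = Stack (x : xs)     -- forward to the list's cons

    pop :: Stack a -> Maybe (a, Stack a)
    pop (Stack [])       = Nothing
    pop (Stack (x : xs)) = Just (x, Stack xs)

    size :: Stack a -> Int
    size (Stack xs) = length xs            -- forward to the list's length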
The hard part that no language has a good solution for is objects which can be the same type in two (or more) different ways.
I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the possible risk of tetanus.
Upvoted purely for the image.
Eiffel does (in, obviously, my opinion).
It does handle the diamond inheritance problem as well as can be expected; the renaming feature is quite nice. Though related, this isn’t what I’m concerned with: AFAICT, it really doesn’t handle it in a completely general way. (Given a type system you can drive a bus through (covariant vs. contravariant arguments), I prefer Sather, though the renaming feature there is more persnickety, and harder to use in some common cases.)
Consider a lattice. It is a semilattice in two separate, dual ways: with the join operation, and with the meet operation. If we have generalized semilattice code and we want to pass it a lattice, which one should be used? How about if we want to use the other one?
In practice, we can call these a join-semilattice, and a meet-semilattice, have our function defined on one, and create a dual view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or …
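A rough sketch of that dual-view wrapper idea, with invented names for the class and the two views:

    -- One class with a single associative, commutative, idempotent operation.
    class Semilattice a where
      combine :: a -> a -> a

    -- Generic code written against the single semilattice operation.
    combineAll :: Semilattice a => a -> [a] -> a
    combineAll = foldr combine

    -- Bool forms a lattice; wrap it two ways to say which semilattice we mean.
    newtype JoinView = JoinView Bool deriving Show
    newtype MeetView = MeetView Bool deriving Show

    instance Semilattice JoinView where
      combine (JoinView x) (JoinView y) = JoinView (x || y)   -- join = logical or

    instance Semilattice MeetView where
      combine (MeetView x) (MeetView y) = MeetView (x && y)   -- meet = logical and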
There is a math abstraction called a monoid, for an associative operator with identity. Haskell has a corresponding typeclass, with such things as lists as instances, with catenation as the operator, and the empty list as identity. I don’t have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code.
So, suppose we want to make Integer an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways, and there is no natural duality here we can take advantage of (as we could with the lattice example). The consensus in the community has been not to make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, and thus have trivial conversion costs. There is also a newtype for dual monoids, formalizing a particular duality idea similar to the lattice case (it switches left and right; monoids need not be commutative, as the list example should show). There are also ones that label Bools as using the operation “and” or the operation “or”; this is actually a case of the lattice duality above.
For this simple case, it’d be easy enough to just explicitly pass in the operation. But for more complicated typeclasses, we can bundle a whole lump of operations in a similar manner.
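For what it’s worth, a tiny example of how those wrappers get used; this should match the standard Data.Monoid API, but treat it as a sketch:

    import Data.Monoid (Sum (..), Product (..))

    -- Sum and Product wrap the same underlying number, each selecting a
    -- different Monoid instance for it.
    totalSum :: Integer
    totalSum = getSum (mconcat (map Sum [1, 2, 3, 4]))              -- 10, via (+) and 0

    totalProduct :: Integer
    totalProduct = getProduct (mconcat (map Product [1, 2, 3, 4]))  -- 24, via (*) and 1

    main :: IO ()
    main = print (totalSum, totalProduct)   -- (10, 24)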
I’m not entirely happy with this either. If you’re only using one of the interfaces, then that wrapper is damn annoying. Thankfully, e.g. Sum Integer can also be made an instance of Num, so that you can continue to use * for multiplication, + for addition, and so forth.
Sather looks interesting but I haven’t taken the time to explore it. (And yes, covariance vs contravariance is a tricky one.)
Both these languages also demonstrate the real (everyday) use for C… you compile your actual code into it.
I don’t think Sather is a viable language at this point, unfortunately.
Yes, C is useful for that, though C-- and LLVM are providing new paths as well.
I personally think C will stick around for a while because getting it running on a given architecture provides a “good enough” ABI, one likely to be stable enough that HLLs’ FFIs can depend on it.
I put C++ as a “learn only if needed language”. It’s extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++ given by which features the writers are willing to use, and which are considered too dangerous.
Yeah, C is probably mandatory if you want to be serious with computer programming. Thanks for mentioning Scheme, haven’t heard about it before...
Haskell sounds really difficult. But the more I hear how hard it is, the more intrigued I am.