As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.
I’m not completely illiterate. I know the ‘basics’ of programming. Nevertheless, I want to start from the very beginning.
I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as a starting point that I can build upon.
I’m thinking about starting with Processing and Lua. What do you think?
In an amazing coincidence, many of the suggestions you get will be the suggester’s current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They’ll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.
Others have said this but you can’t hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won’t enjoy it.
So, what language should you learn first? Well the answer is… (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, “I’m trying to do X.” instead of, “I’m having a problem getting feature blah blah blah to work.”)
I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as a starting point that I can build upon.
I paused after reading this. The main way people learn to program is by writing programs and getting feedback from peers/mentors. If you’re not coding something you find interesting, it’s hard to stay motivated for long enough to learn the language.
My advice is to learn a language that a lot of people learn as a first language. You’ll be able to take advantage of tutorials and support geared toward newbies. You can always learn “cooler” languages later, but if you start with something advanced you might give up in frustration. Common first languages in CS programs are Java and C++, but Python is catching on pretty quickly. It also helps if your first language is used by people you already know. That way they’ll be able to mentor/advise you.
Finally, I should give some of my background. I’ve been writing code for a while. I write code for work and leisure. My first language was QBasic. I moved on to C, C++, TI-BASIC, Perl, PHP, Java, C#, Ruby, and some others. I’ve played with but don’t really know Lisp, Lua, and Haskell. My favorite language right now is Python, but I’m probably still in the honeymoon phase since I’ve been using it for less than a year.
Argh, see what I said at the start? I recommended Python and my favorite language is currently Python!
Motivation is not my problem these days. It was throughout my youth, which is partly why I completely failed at school. Now an almost primal fear of staying dumb, and a nagging curiosity to gather knowledge, learn and understand, trump any lack of motivation or boredom. Seeing how far above the average person you people here at lesswrong.com are makes me strive to approximate your wit.
In other words, knowing the basics of a programming language like Haskell is already motivation enough, when the average Joe is hardly self-aware but a mere puppet. I don’t want to be one of them anymore.
If motivation is no longer a problem for you, that could be something really interesting for the akrasia discussions. What changed so that motivation is no longer a problem?
Being an eyewitness to your own motives and your own growing up is a tough exercise to carry out accurately.
I believe it would be of no help in the discussions you mention. It is rather something inherent, something neurological.
I grew up in a very religious environment. Any sense of significance, and my goals, were mainly set to focus on being a good Christian. Although I assume it never reached my ‘inner self’, I consciously tried to motivate myself toward this particular goal out of fear of dying. But on a rather unconscious level it never worked; this goal was always ineffectual.
At the age of 13, my decision to become vegetarian changed everything. With all my heart I came to the conclusion that something was wrong about all the pain and suffering. A sense for human suffering was still effectively dimmed, due to a whole life of indoctrination telling me that our pain is our own fault. But what about the animals? Why would an all-loving God design the universe this way? To cut a long story short, while still believing, it made me abandon this God. With the arrival of the Internet here in Germany I then learnt that there was nothing to abandon in the first place... I guess I won’t have to go into details here.
Anyway, that was just one of the things that changed. I’m really bad when it comes to social things, so I suffered a lot in school; it wasn’t easy. Those problems with other kids, a lack of concentration, and the fact that I always found the given explanations counterintuitive and hard to follow dimmed any motivation to learn more. All these problems caused me to associate education with torture; I wanted it to end. Still, curiosity was always a part of my character. I was probably the only kid who liked to watch documentaries and the news at an early age.
Then there is the mental side I mentioned at the beginning. These are probably the most important reasons for all that happened and happens in my life. I have quite a few tics and psychological problems. When I was a kid I suffered from Tourette syndrome, which didn’t help in school either. Many other urges are still prevalent. I pretty much have to consciously think about a lot of things that other people just do and decide upon unconsciously. Take sleeping: each time, I have to tell myself why there are more reasons to sleep now than in favor of further evaluation. Or how, when, and about what do I start to think, and when do I stop and decide? How do I set the threshold? For me it is inherently very low; the slightest stimulus triggers a high tide of possibilities. Like when you look up some article on Wikipedia, you can click through forever. There is much more... I hope you see what I mean by mental problems.
I could refine the above or go on for long, but I will just stop now. You see, my motivation is complex and pretty much based on my mental problems and curiosity. I love playing games, but I cannot push myself to play more than a few minutes. Then there’s this fear, and the urge to think of what else is out there, what I could be missing, and what could happen if I just enjoy playing this game. I have to do it... I’m not strong enough not to care. Take this reply as an example: I really had to push myself to answer, but I also had an urge to write it. It’s a pain. Though now the fear of how much time it takes up, and of what else I could be doing, has grown stronger.
Bottom line: my motivation is a mixture of curiosity, inclination, mental problems, my youth, relief, not staying dumb, fear of being wrong again about the nature of reality, and so on. Really, the only problem I have with learning programming right now is that there are so many other problems in my head, not my ‘motivation’. I often don’t find the time to read more than one page of a book per day.
I’m sorry if this post sounds a bit confused; I’m not having the best day today. Also, just ask if you have further questions. I should probably think about it a bit more thoroughly anyway. But now you have some idea. I hope...
P.S.
Another milestone that changed everything was discovering Orion’s Arm. It was so awesome, I just had to learn more. That basically led me to get into science, transhumanism, and later OB/LW.
Thank you very much for writing this up. It wouldn’t surprise me a bit if akrasia has a neurological basis, and I’m a little surprised that I haven’t seen any posts really looking at it from that angle. Dopamine?
And on the other hand, your story is also about ideas and circumstances that undercut motivation.
Those who restrain desire, do so because theirs is weak enough to be restrained.
—William Blake
I haven’t read up on the akrasia discussions. I don’t believe in intelligence. I believe in efficiency regarding goals stated in advance. It’s all about what we want and how to achieve it. And what we want is merely ‘the line of least resistance’.
Whatever intelligence is, it can’t be intelligent all the way down. It’s just dumb stuff at the bottom.
—Andy Clark
The universe really just exists. And it appears to us that it is unfolding because we are part of it. We appear to each other to be free and intelligent because we believe that we are not part of it.
There is a lot of talk here on LW on how to become less wrong. That works. Though it is not a proactive approach but simply trial and error, allowed for by the generally large error tolerance of our existence.
It’s all about practicability, what works. If prayer worked, we’d use it if we wanted to use it.
Narns, Humans, Centauri… we all do what we do for the same reason: because it seems like a good idea at the time.
—G’Kar, Babylon 5
Anything you learn on lesswrong.com you’ll have to apply by relying on fundamental non-intelligent processes. You can only hope to be lucky enough to learn enough in time to avoid fatal failure, since no possible system can use advanced heuristics to tackle, or even evaluate, every stimulus. For example, at what point are you going to use Bayesian statistics? You won’t even be able to evaluate the importance of all the data so as to judge when to apply more rigorous tools. You can only be a passive observer who is waiting for new data from experience. And until new data arrives, rely on prior knowledge.
A man can do what he wants, but not want what he wants.
—Arthur Schopenhauer
Thus I don’t think that weakness of will exists. I also don’t think that you can do anything but your best. What is the right thing to do always depends on what you want. You never do something that you do not want. Only in retrospect, or on average, might we want something else. On that basis we then conclude that what we did was wrong and that we knew better. But what was really different at that time was what we wanted, and that changed the truth value of what we, contemplating at present, in retrospect know to be the best thing to do.
So what is it that can help us deal with akrasia? Nothing. In the future we might be able to strengthen our goals, so that what we want at the time we apply that amplification is what we are going to want forever, or at least until something even stronger shifts our desires again.
If we could deliberately seize control of our pleasure systems, we could reproduce the pleasure of success. That would be the end of everything.
—Marvin Minsky
I’m happy with how it is right now. I’m very happy that there is what we call akrasia. If there wasn’t, I’d still be religious.
I think the path outlined in ESR’s How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn’t a bad idea at all. Perl is no longer important; you probably need never learn it.
First, I do not think that learning to program computers must be part of a decent education. Many people learn to solve simple integrals in high-school, but the effect, beyond simple brain-training, is nil.
For programming it’s the same. Learning to program well takes years. I mean years of full-time studying/programming etc.
However, if you really want to learn programming, the first question is not the language, but what you want to do. You learn one language until you have built up some self-confidence, then learn another. The “what” typically breaks down very early. Sorry, I cannot give you any hints on this.
And, first exercise, you should post this question (or search for answers to this question, as it has been posted already too many times) on the correct forums for programming questions. Finding those forums is the first start into learning programming. You’ll never be able to keep all the required facts for programming in your head.
I’ve never heard of Processing, but I like Lua (more than Python), and Lisp. However, even Java is just fine. Don’t get into the habit of thinking that mediocre languages inhibit your progress. At the beginning, nearly all languages are more advanced than you.
What I want is to be able to understand, to attain a more intuitive comprehension of, concepts associated with other fields that I’m interested in, which I assume are important. As a simple example, take this comment by RobinZ. It’s not that I don’t understand that simple statement; as I said, I already know the ‘basics’ of programming, and I thoroughly understand it. Just so you get an idea.
In addition to reading up on all the lesswrong.com sequences, I’m mainly into mathematics and physics right now. That’s where I have the biggest deficits. I see my planned ‘study’ of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a ‘Turing machine’.
And I do not agree that the effect is nil. I believe that programming is one of the foundations necessary to understand. I believe that there are four cornerstones underlying human comprehension, from which you can go everywhere: Mathematics, Physics, Linguistics, and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important: information.
Well, now that I understand your intentions a little bit better (and having read through the other comments), I seriously want to second the recommendation of Scheme.
Use DrScheme as your environment (zero hassle), and go through SICP and HTDP. Algorithms are nice, Knuth’s series and so on, but that may be more than you are asking for. Project Euler is a website where you can find some inspiration for problems you may want to solve. Scheme as a language has the advantage that you will not need time to wrap your head around ugly syntax (most languages, except for Lua and maybe Python), memory management (C), or mathematical purity (Haskell, Prolog). AFAIK it also distinguishes between exact numbers (rational numbers, limited only by RAM) and inexact numbers (floating point), which is regularly a source of confusion for people trying to write numeric code for the first time. The trade-offs are quite different for professional programmers, though.
Consider finding a Coding Dojo near your location.
There is a subtle but deep distinction between learning a programming language and learning how to program. The latter is more important and abstracts away from any particular language or any particular programming paradigm.
To get a feeling for the difference, look at this animation of Paul Graham writing an article—crossing the chasm between ideas in his head and ideas expressed in words. (Compared to personal experience this “demo” simplifies the process of writing an article considerably, but it illustrates neatly what books can’t teach about writing.)
What I mean by “learning how to program” is the analogue of that animation in the context of writing code. It isn’t the same as learning to design algorithms or data structures. It is what you’ll learn about getting from algorithms or data structures in your head to algorithms expressed in code.
Coding Dojos are an opportunity to pick up these largely untaught skills from experienced programmers.
I agree with everything Emile and AngryParsley said. I program for work and for play, and use Python when I can get away with it. You may be shocked that, like AngryParsley, I will recommend my favorite language!
I have an additional recommendation though: to learn to program, you need to have questions to answer. My favorite source for fun programming problems is ProjectEuler. It’s very math-heavy, and it sounds like you might like learning the math as much as learning the programming. Additionally, every problem, once solved, has a forum thread opened where many people post their solutions in many languages. Seeing better solutions to a problem you just solved on your own is a great way to rapidly advance.
As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.
After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.
Just keep in mind that when starting out learning programming, it’s probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.
Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that’s a pretty good reason to choose it.
Even if you don’t have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play WoW, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I’d recommend it. If you play with web pages, JavaScript...
At least that’s my advice for one style of learning, a style suggested by your mention of those two languages, but almost the opposite of your “Nevertheless, I want to start from the very beginning,” which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don’t work. That’s not to disagree with wnoise about the value of Haskell, either practical or educational, but I’m skeptical about it as an introduction.
ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.
Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.
The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.
So you end up with newcomers to Haskell trying to simultaneously:
Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
Unlearn existing habits from other languages
Learn about intimidating math-y-sounding things
And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.
But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another “monad tutorial” filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.
The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
I’m going through SICP now. I’m not getting as much out of it as I expected, because much of it I already know, is uninteresting to me since I expect lazy evaluation due to Haskell, or is just tedious (I got sick pretty quick with the authors’ hard-on for number theory).
SICP is nice if you’ve never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don’t immediately make you yawn just by looking at them.
Scheme isn’t much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.
Haskell does tend to ruin one for other languages, though lately I’ve been getting slightly frustrated with some of Haskell’s own limitations...
Personally, I’m a big fan of Haskell. It will make your brain hurt, but that’s part of the point—it’s very good at easily creating and using mathematically sound abstractions. I’m not a big fan of Lua, though it’s a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it’s not a bad choice at all.
Toss in another vote for Haskell. It was my first language (and back before Real World Haskell was written); I’m happy with that choice—there were difficult patches, but they came with better understanding.
I wouldn’t recommend Haskell as a first language. I’m a fan of Haskell, and the idea of learning Haskell first is certainly intriguing, but it’s hard to learn, hard to wrap your head around sometimes, and the documentation is usually written for people who are at least computer science grad student level. I’m not saying it’s necessarily a bad idea to start with Haskell, but I think you’d have a much easier time getting started with Python.
Python is open source, thoroughly pleasant, widely used and well-supported, and is a remarkably easy language to learn and use, without being a “training wheels” language. I would start with Python, then learn C and Lisp and Haskell. Learn those four, and you will definitely have achieved your goal of learning to program.
And above all, write code. This should go without saying, but you’d be amazed how many people think that learning to program consists mostly of learning a bunch of syntax.
I have to disagree on Python; I think consistency and minimalism are the most important things in an “introductory” language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don’t think about.
I’d lean toward either C (for learning the “pushing electrons around silicon” end of things) or Scheme (for learning the “abstract conceptual elegance” end of things). It helps that both have excellent learning materials available.
Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some “mainstream” programming and wants to stretch their brain.
You make some good points, but I still disagree with you. For someone who’s trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I’ve always thought that the quickest way to learn programming was to do programming, and until you’ve been doing it for a while, you won’t understand it.
Well, I admit that my thoughts are colored somewhat by an impression—acquired by having made a living from programming for some years—that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase “software engineer” with a straight face! But I’ll leave it at that, lest I start quoting Dijkstra.
Back on topic, I do agree that being able to start doing things quickly—both in terms of producing interesting results and getting rapid feedback—is important, but not the most important thing.
I want to achieve an understanding of the basics without necessarily being able to be a productive programmer. I want to get a grasp of the underlying nature of computer science, not just be able to mechanically write and parse code to solve certain problems. The big picture and the underlying nature are what I’m looking for.
I agree that many people do not understand; they really only learnt how to mechanically use something. How much does the average person know about how one of our simplest tools works, the knife? What does it mean to cut something? What does the act of cutting accomplish? How does it work?
We all know how to use this particular tool. We think it is obvious, and thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms of the things we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear to be simple and obvious.
I want to get a grasp of the underlying nature of computer science,
Then you do not, in fact, need to learn to program. You need an actual CS text, covering finite automata, pushdown machines, Turing machines, etc. Learning to program will illustrate and fix these concepts more closely, and is a good general skill to have.
Sipser’s Introduction to the Theory of Computation is a tiny little book with a lot crammed in. It’s also quite expensive, and advanced enough to make most CS students hate it. I have to recommend it because I adore it, but why start there, when you can start right now for free on Wikipedia? If you like it, look at the references, and think about buying a used or international copy of one book or another.
I echo the reverent tones of RobinZ and wnoise when it comes to The Art of Computer Programming. Those volumes are more broadly applicable, even more expensive, and even more intense. They make an amazing gift for that computer scientist in your life, but I wouldn’t recommend them as a starting point.
Well, they’re computer sciencey, but they are definitely geared to approaching from the programming, even “Von Neumann machine” side, rather than Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.
Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!
You seem to already know Lisp, so probably not. Read the table of contents. If you haven’t written an interpreter, then yes.
The point in this context is that when people teach computability theory from the point of view of Turing machines, they wave their hands and say “of course you can emulate a Turing machine as data on the tape of a universal Turing machine,” and there’s no point in filling in the details. But it’s easy to fill in all the details in λ-calculus, even in a dialect like Scheme. And once you fill in the details in Scheme, you (a) prove the theorem and (b) get a useful program, which you can then modify to get interpreters for other languages, say, ML.
SICP is a programming book, not a theoretical book, but there’s a lot of overlap when it comes to interpreters. And you probably learn both better this way.
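To make that concrete, here is a minimal sketch in Haskell (my own illustration, not from the thread; the type and function names are made up) of the kind of interpreter being talked about: a call-by-value evaluator for the untyped λ-calculus, small enough that every detail really is filled in.

```haskell
data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

data Value = Closure String Term Env
type Env = [(String, Value)]   -- variable bindings, innermost first

-- Call-by-value evaluation with an environment of closures.
eval :: Env -> Term -> Value
eval env (Var x)   = maybe (error ("unbound variable: " ++ x)) id (lookup x env)
eval env (Lam x b) = Closure x b env
eval env (App f a) = case eval env f of
  Closure x b cenv -> eval ((x, eval env a) : cenv) b

-- ((\x. \y. x) (\z. z)) (\w. w w)  should reduce to  \z. z
demo :: Term
demo = App (App (Lam "x" (Lam "y" (Var "x"))) (Lam "z" (Var "z")))
           (Lam "w" (App (Var "w") (Var "w")))

main :: IO ()
main = case eval [] demo of
  Closure x b _ -> print (Lam x b)   -- prints: Lam "z" (Var "z")
```

Extending the Term type and the eval cases is exactly the “modify it to get interpreters for other languages” step.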
I almost put this history lesson in my previous comment: Church invented λ-calculus and proposed the Church-Turing thesis that it is the model of all that we might want to call computation, but no one believed him. Then Turing invented Turing machines, showed them equivalent to λ-calculus and everyone then believed the thesis. I’m not entirely sure why the difference. Because they’re more concrete? So λ-calculus may be less convincing than Turing machines, hence pedagogically worse. Maybe actually programming in Scheme makes it more concrete. And it’s easy to implement Turing machines in Scheme, so that should convince you that your computer is at least as powerful as theoretical computation ;-)
Um… I think it’s a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not.
EDIT: It’s interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who’s not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.
Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.
Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it’d be like programming in Brainfuck. It was von Neumann’s insights leading to the stored-program architecture that made computing remotely sensible.
There’s plenty of ridiculously opaque models of computation (Post’s tag machine, Conway’s Life, exponential Diophantine equations...) but I can’t begin to imagine one that would be more comprehensible than untyped lambda calculus.
I’m pretty sure that Eliezer meant that Turing machines are better for giving novices a “model of computation”. That is, they will gain a better intuitive sense of what computers can and can’t do. Your students might not be able to implement much, but their intuitions about what can be done will be better after just a brief explanation. So, if your goal is to make them less crazy regarding the possibilities and limitations of computers, Turing machines will give you more bang for your buck.
A friend of mine has invented a “Game of Lambda” played with physical tokens which look like a bigger version of the hexes from wargames of old, with rules for function definition, variable binding and evaluation. He has a series of exercises requiring players to create functions of increasing complexity; plus one, factorial, and so on. Seems to work well.
You realize you’ve just called every computer scientist inhuman?
Turing machines are something one can easily imagine implementing in hardware. The typical encoding of some familiar concepts into lambda calculus takes a bit of getting used to (natural numbers as functions which compose their argument (as a function) n times? If-then-else as function composition, where “true” is a function returning its first argument, and “false” is a function returning its second? These are decidedly odd). But lambda calculus is composable. You can take two definitions and merge them together nicely. Combining useful features from two Turing machines is considerably harder. The best route to usable programming there is the UTM + stored code, which you have to figure out how to encode sanely.
If-then-else as function composition, where “true” is a function returning its first argument, and “false” is a function returning its second? These are decidedly odd)
Of course, not so odd for anyone who uses Excel...
Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)
And no looking it up, that’s cheating! Took me the better part of a day to figure it out, it’s a real mind-twister.
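If it helps to have something executable to poke at, here is a minimal sketch in Haskell (my own illustration, not from the thread) of the Church booleans and numerals; subtraction is deliberately left out, since that is the puzzle.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church booleans: "true" returns its first argument, "false" its second.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- Church numerals: n applies its first argument n times to its second.
type Church = forall a. (a -> a) -> a -> a

zero, one, two :: Church
zero _ z = z
one  f z = f z
two  f z = f (f z)

suc :: Church -> Church            -- successor
suc n f z = f (n f z)

add :: Church -> Church -> Church  -- addition: apply f (m + n) times
add m n f z = m f (n f z)

toInt :: Church -> Int             -- decode back to an ordinary Int
toInt n = n (+ 1) 0

main :: IO ()
main = do
  print (toInt (add two (suc one)))  -- 4
  print (true "first" "second")      -- "first"
```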
Maybe pure lambda calculus is not humanly comprehensible, but general recursion is as comprehensible as Turing machines, yet Gödel rejected it. My history should have started when Church promoted that.
I think that λ-calculus is about as difficult to work with as Turing machines. I think the reason that Turing gets his name in the Church-Turing thesis is that they had two completely different architectures that had the same computational power. When Church proposed that λ-calculus was universal, I think there was a reaction of doubt, and a general feeling that a better way could be found. When Turing came to the same conclusion from a completely different angle, that appeared to verify Church’s claim.
I can’t back up these claims as well as I’d like. I’m not sure that anyone can backtrace what occurred to see if the community actually felt that way or not; however, from reading papers of the time (and quite a bit thereafter—there was a long period before near-universal acceptance), that is my impression.
Actually, the history is straight-forward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did.
ETA: well, it’s straight-forward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing’s paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton.
It’s also worth noting that Curry’s combinatory logic predated Church’s λ-calculus by about a decade, and also constitutes a model of universal computation.
It’s really all the same thing in the end anyhow; general recursion (e.g., Curry’s Y combinator) is on some level equivalent to Gödel’s incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
I know the principles but have never taken the time to program something significant in the language. Partly because it just doesn’t have the libraries available to enable me to do anything I particularly need to do and partly because the syntax is awkward for me. If only the name ‘lisp’ wasn’t so apt as a metaphor for readability.
Are you telling me lambda calculus was invented before Turing machines and people still thought the Turing machine concept was worth making ubiquitous?
I’m betting it was hard for the first computer programmers to implement recursion and call stacks on early hardware. The Turing machine model isn’t as mathematically pure as lambda calculus, but it’s a lot closer to how real computers work.
Why not? People have a much easier time visualizing a physical machine working on a tape than visualizing something as abstract as lambda-calculus. Also, the Turing machine concept neatly demolishes the “well, that’s great in theory, but it could never be implemented in practice” objections that are so hard to push people past.
Because I am biased to my own preferences for thought. I find visualising the lambda-calculus simpler because Turing Machines rely on storing stupid amounts of information in memory because, you know, it’ll eventually do anything. It just doesn’t feel natural to use a kludgy technically complete machine as the very description of what we consider computationally complete.
Oh, I agree. I thought we were talking about why one concept became better-known than the other, given that this happened before there were actual programmers.
I, unfortunately, am merely an engineer with a little BASIC and MATLAB experience, but if it is computer science you are interested in, rather than coding, count this as another vote for SICP. Kernighan and Ritchie is also spoken of in reverent tones (edit: but as a manual for C, not an introductory book—see below), as is The Art of Computer Programming by Knuth.
I have physically seen these books, but not studied any of them—I’m just communicating a secondhand impression of the conventional wisdom. Weight accordingly.
Kernighan and Ritchie is a fine book, with crystal clear writing. But I tend to think of it as “C for experienced programmers”, not “learn programming through C”.
TAoCP is “learn computer science”, which I think is rather different than learning programming. Again, a fine book, but not quite on target initially.
I’ve only flipped through SICP, so I have little to say.
TAoCP and SICP are probably both computer science—I recommended those particularly as being computer science books, rather than elementary programming. I’ll take your word on Kernighan and Ritchie, though—put that one off until you want to learn C, then.
Merely an engineer? I’ve failed to acquire a leaving certificate of the lowest kind of school we have here in Germany.
Thanks for the hint at Knuth, though I already came across his work yesterday. Kernighan and Ritchie are new to me. SICP is officially on my must-read list now.
A mechanical engineering degree is barely a qualification in the field of computer programming, and not at all in the field of computer science. What little knowledge I have I acquired primarily through having a very savvy father and secondarily through recreational computer programming in BASIC et al. The programming experience is less important than the education, I wager.
Do you think that somebody in your field will, in the future, be able to get by without computer programming? While talking to neuroscientists I learnt that it is almost impossible to get what you want, in time, by explaining what you need to a programmer who has no degree in neuroscience, while you yourself don’t know anything about computer programming.
I’m not sure what you mean—as a mechanical engineer, over 99% of my work involves purely classical mechanics, no relativity or quantum physics, so the amount of programming most of us have to do is very little. Once a finite-element package exists, all you need is to learn how to use it.
I’ve just read the abstract on Wikipedia and I assumed that it might encompass what you do.
Mechanical engineers design and build engines and power plants...structures and vehicles of all sizes...
I thought computer modeling and simulations might be very important in the early stages, shortly followed by field tests with miniature models. Even there you might have to program the tools that give shape to the final parts. Though I guess if you work in a highly specialized area, that is not the case.
I couldn’t build a computer, a web browser, a wireless router, an Internet, or a community blog from scratch, but I can still post a comment on LessWrong from my laptop. Mechanical engineers rarely need to program the tools, they just use ANSYS or SolidWorks or whatever.
Edit: Actually, the people who work in highly specialized areas are more likely to write their own tools—the general-interest areas have commercial software already for sale.
Bear in mind that I’m not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren’t for beginners.) I’d suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn’t cover.
‘Follow conversations’, indeed. That’s what I mean. Being able to grasp concepts that involve ‘symbolic computation’ and information processing by means of formal language. I don’t aim at actively taking part in productive programming. I don’t want to become a poet, I want to be able to appreciate poetry, perceive its beauty.
Take English as an example. Only a few years ago I seriously started to learn English. Before I could merely chat while playing computer games LOL. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”—Edsger W Dijkstra.
More modern versions aren’t that bad, and it’s not quite fair to tar them with the same brush, but I still wouldn’t recommend learning any of them for their own sake. If there is a need (like modifying an existing codebase), then by all means do.
Dijkstra’s quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn’t actually a bad language at all. On the other hand, it also lacks much of the “easy to pick up and experiment with” aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.
Yeah, you won’t be able to be very productive regarding bottom-up groundwork. But you’ll be able to look into existing works and gain insights. Even if you forget a lot, something will stick and help you to pursue a top-down approach. You’ll be able to look into existing code, edit it, and regain or learn new and lost knowledge more quickly.
Agree with where you place Python, Scheme and Haskell. But I don’t recommend C. Don’t waste time there until you already know how to program well.
Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though I don’t think that’s really what XiXiDu is looking for.
Agree on where C is useful and got the same impression about the applicability to XiXiDu’s (where on earth does that name come from?!?) goals.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn’t meet your ‘minimalist’ ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I’ve converted to primarily using a language that relies on duck-typing.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.
“Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.”—Alan Kay
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:
It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
Templates are a clunky, disappointing imitation of real metaprogramming.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
Combining error handling via exceptions with manual memory management is frankly absurd.
The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.
I could elaborate further, but it’s too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical “real” OO language, but I’d probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).
ETA: Well, that came out awkwardly verbose. Apologies.
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
I’m sure I could manage 1k before I considered the point settled and moved on to a language that isn’t a decades-old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the level of the bare fundamentals, then C++ will give you that over a broader range of nuts and bolts.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
This is the one point I disagree with, and I do so both on the assertion ‘almost uniformly’ and on the concept itself. As far as experts in object-oriented programming go, Bertrand Meyer is considered one, and his book ‘Object-Oriented Software Construction’ is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design, not inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven’t gotten around to it yet.
I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn’t suggest for learning purposes, either.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Well, the problem isn’t really multiple inheritance itself, it’s the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn’t really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we’ve all seen trick questions about “okay, which method will this call?”). Something closer to a simple type predicate, like the interfaces in Google’s Go language or like Haskell’s type classes, is much less painful here. Or of course duck typing, if static type-checking isn’t your thing.
Compositional code reuse in objects—what I meant by “implementation inheritance”—also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
Note that “multiple inheritance” makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it’s generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of “parent” types.
Consider the following types:
Tree structures containing values of some type A.
Lists containing values of some type A.
Text strings, stored as immutable lists of characters.
Text strings as above, but with a maximum length of 255.
The generic tree and list types are both abstract containers; say they both implement mapping a projection function over their elements, transforming every element from type A to some type B but leaving the overall structure unchanged. Both can declare this as an interface, but there’s no shared implementation or obvious subtyping relationship.
The text strings can’t implement the above interface (because they’re not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren’t subtypes of the list, though, because it’s mutable.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
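For what it’s worth, here is a rough sketch of how that example falls out in Haskell (my own illustration with made-up type names, and ignoring the mutability wrinkle, since Haskell lists are immutable): the “projection” interface is just Functor, code reuse is containment, and the subtype relationship is an explicit conversion.

```haskell
{-# LANGUAGE DeriveFunctor #-}

import Data.Char (toUpper)

-- The two generic containers share only an interface (Functor, i.e. the
-- projection-function operation), not an implementation or a subtype relation.
data Tree a = Leaf | Node (Tree a) a (Tree a) deriving (Show, Functor)
newtype List a = List [a]                     deriving (Show, Functor)

-- A text string reuses the list implementation by containment; it cannot offer
-- the Functor interface because it is not parameterized over its element type.
newtype Str = Str (List Char) deriving Show

-- A length-limited string can stand in for a Str only via an explicit
-- conversion; the smart constructor enforces the invariant.
newtype Str255 = Str255 Str deriving Show

toStr :: Str255 -> Str
toStr (Str255 s) = s

mkStr255 :: String -> Maybe Str255
mkStr255 cs
  | length cs <= 255 = Just (Str255 (Str (List cs)))
  | otherwise        = Nothing

upcase :: Str -> Str
upcase (Str xs) = Str (fmap toUpper xs)

main :: IO ()
main = do
  print (fmap (+ 1) (Node Leaf (1 :: Int) Leaf))    -- project over a Tree
  print (fmap (* 2) (List [1, 2, 3 :: Int]))        -- project over a List
  mapM_ (print . upcase . toStr) (mkStr255 "hello") -- a Str255 used as a Str
```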
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option.
Of course, but I’m more considering ‘languages to learn that make you a better programmer’.
I remain unconvinced that C++ has anything to offer in these cases;
Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens
I don’t agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma ‘multiple inheritance is bad’ and don’t allow generics enforce bad habits while at the same time insisting that they are the True Way.
and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
I think I agree on this note, with certain restrictions on what counts as ‘civilized’. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps python too.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn’t do it in Java or .NET (except Eiffel.NET).
I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
Sometimes objects just are more than one type.
This argues for interfaces, not multiple implementation inheritance.
And implementation inheritance can easily be emulated by containment and method forwarding, though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that’s trivial in Smalltalk or Objective-C...
The hard part that no language has a good solution for are objects which can be the same type two (or more) different ways.
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the possible risk of tetanus.
It does handle the diamond inheritance problem as well as can be expected—the renaming feature is quite nice. Though related, this isn’t what I’m concerned with. AFAICT, it really doesn’t handle that in a completely general way. (Given the type system you can drive a bus through (covariant vs. contravariant arguments), I prefer Sather, though the renaming feature there is more persnickety—harder to use in some common cases.)
Consider a lattice. It is a semilattice in two separate, dual ways: with the join operation, and with the meet operation. If we have generalized semilattice code and we want to pass it a lattice, which one should be used? How about if we want to use the other one?
In practice, we can call these a join-semilattice, and a meet-semilattice, have our function defined on one, and create a dual view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or …
There is a math abstraction called a monoid: an associative operator with an identity. Haskell has a corresponding typeclass, with such things as lists as instances, with concatenation as the operator and the empty list as the identity. I don’t have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code.
So, suppose we want to make Integers an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways. There is no natural duality here we can take advantage of (as we could with the lattice example). The consensus in the community has been to not make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, with thus trivial conversion costs. There is also a newtype for dual monoids, formalizing a particular duality idea similar to the lattice case (this switches left and right—monoids need not be commutative, as the list example should show). There are also newtypes that label Bools as using the operation “and” or “or”; this is actually a case of the lattice duality above.
For this simple case, it’d be easy enough to just explicitly pass in the operation. But for more complicated typeclasses, we can bundle a whole lump of operations in a similar manner.
I’m not entirely happy with this either. If you’re only using one of the interfaces, then that wrapper is damn annoying. Thankfully, e.g. Sum Integer can also be made an instance of Num, so that you can continue to use * for multiplication, + for addition, and so forth.
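Concretely, the newtype trick looks like this; a minimal sketch using the real Data.Monoid wrappers:

```haskell
import Data.Monoid (Product (..), Sum (..))

main :: IO ()
main = do
  -- Integer itself is not a Monoid; Sum and Product each pick one of the
  -- two obvious instances, and getSum/getProduct unwrap the result.
  print (getSum     (mconcat (map Sum     [1, 2, 3, 4 :: Integer])))  -- 10
  print (getProduct (mconcat (map Product [1, 2, 3, 4 :: Integer])))  -- 24
```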
I don’t think Sather is a viable language at this point, unfortunately.
Yes, C is useful for that, though c—and LLVM are providing new paths as well.
I personally think C will stick around for a while because getting it running on a given architecture provides a “good enough” ABI that is likely to be stable enough that HLLs FFIs can depend on it.
I put C++ as a “learn only if needed language”. It’s extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++ given by which features the writers are willing to use, and which are considered too dangerous.
Thanks, I’ll sure get into those languages. But I think I’ll just try and see if I can get into Haskell first. I’m intrigued after reading the introduction.
Even if you are not in a position to use Haskell in your programming projects, learning Haskell can make you a better programmer in any language.
Haskell has good support for parallel and multicore programming.
I’d weakly recommend Python: it’s free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple HTML pages…), is well-designed, has features you’ll encounter in other languages (classes, functional programming…), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it.
But a lot of those advantages apply to languages like Ruby.
If you want to go into more exotic languages, I’d suggest Scheme over Haskell, it seems more beginner-friendly to me.
It mostly depends on what occasions you’ll have to use it: if you have a website, JavaScript might be better; if you like making game mods, go for Lua. It also depends on who you know that can answer questions. If you have a good friend who’s a good teacher and a Java expert, go for Java.
I recommend Python as well. Python has clean syntax, enforces good indentation and code layout, has a large number of very useful libraries, doesn’t require a lot of boilerplate to get going but still has good mechanisms for structuring code, has good support for a variety of data structures built in, a read-eval-print loop for playing around with the language, and a lot more. If you want to learn to program, learn Python.
(Processing is probably very good, too, for interesting you in programming. It gives immediate visual feedback, which is nice, but it isn’t quite as general purpose as Python. Lua I know very little about.)
That being said, Python does very little checking for errors before you run your code, and so is not particularly well suited for large or even medium sized, complex programs where your own reasoning is not sufficient to find errors. For these, I’d recommend learning other languages later on. Java is probably a good second language. It requires quite a bit more infrastructure to get something up and running, but it has great libraries and steadily increasing ability to track down errors in code when it is compiled.
After that, it depends on what you want to do. I would recommend Haskell if you are looking to stretch your mind (or OCaml if you are looking to stretch it a little less ;-)). On the other hand, if you are looking to write useful programs, C is probably pretty good, and will teach you more about how computers work. C++ is popular for a lot of applications, so you may want to learn it, but I hate it as an unprincipled mess of language features haphazardly thrown together. I’d say exactly the same thing about most web languages (JavaScript (which is very different from Java), Ruby, PHP, etc.). Perl is incredibly useful for small things, but very hard to reason about.
(As to AngryParsley’s comment about people recommending their favorite languages, mine are probably C, Haskell and OCaml, which I am not recommending first.)
Being an eyewitness to your own motives and your own growing up makes it a tough exercise to draw accurate conclusions.
I believe that it would be of no help in the mentioned discussions. It is rather inherent, something neurological.
I grew up in a very religious environment. Any significance, any goals of mine, were mainly set to focus on being a good Christian. Although I assume it never reached my ‘inner self’, I consciously tried to motivate myself toward this particular goal out of fear of dying. But on a rather unconscious level it never worked; this goal has always been ineffectual.
At the age of 13, my decision to become vegetarian changed everything. With all my heart I came to the conclusion that something is wrong about all the pain and suffering. A sense for human suffering was still effectively dimmed, due to a whole life of indoctrination telling me that our pain is our own fault. But what about the animals? Why would an all-loving God design the universe this way? To cut a long story short, still believing, it made me abandon this God. With the onset of the Internet here in Germany I then learnt that there was nothing to abandon in the first place...I guess I won’t have to go into details here.
Anyway, that was just one of the things that changed. I’m really bad when it comes to social things. Thus I suffered a lot in school; it wasn’t easy. Those problems with other kids, a lack of concentration, and the fact that I always found the given explanations counterintuitive and hard to follow dimmed any motivation to learn more. All these problems caused me to associate education with torture; I wanted it to end. Though curiosity was always a part of my character. I’ve probably been the only kid who liked to watch documentaries and the news at an early age.
Then there is the mental side I mentioned at the beginning. These are probably the most important reasons for all that happened and happens in my life. I have quite a few tics and psychological problems. When I was a kid I suffered from Tourette syndrome, which didn’t help in school either. But many other urges are still prevalent. I pretty much have to consciously think about a lot that other people might just do and decide upon unconsciously. Like sleeping: I pretty much have to tell myself each time why there are more reasons to sleep now than to keep evaluating. Or how, when, and about what I start to think, and when I stop and decide. How do I set the threshold? For me it is inherently very low; the slightest stimulus triggers a high tide of possibilities. Like when you look up some article on Wikipedia, you can click through forever. There is much more... I hope you see what I mean by mental problems.
I could refine the above or go on for long. I will just stop now. You see, my motivation is complex and pretty much based on my mental problems and curiosity. I love playing games, but I cannot push myself to play more than a few minutes. Then there’s this fear and urge to think of what else is there, what I could be missing and what could happen if I just enjoy playing this game. I have to do it...I’m not strong enough not to care. Take this reply as an example, I really had to push myself to answer but also had an urge to write it. It’s a pain. Though now the fear of how much time it takes up and what else I could do grew stronger.
Bottom line is that my motivation is a mixture of curiosity, inclination, mental problems, my youth, relief, not staying dumb, fear of being wrong again about the nature of reality, and so on. Really, the only problem I have with learning programming right now is that there are so many other problems in my head, not my ‘motivation’. I often don’t find the time to read more than one page of a book per day.
I’m sorry if this post sounds a bit confused, not having the best day today. Also just ask if you have further questions. I should probably think about it a bit more thoroughly anyway. But now you have some idea. I hope...
P.S. Another milestone that changed everything was discovering Orion’s Arm. It was so awesome, I just had to learn more. That basically led me to get into science, transhumanism, and later OB/LW.
Thank you very much for writing this up. It wouldn’t surprise me a bit if akrasia has a neurological basis, and I’m a little surprised that I haven’t seen any posts really looking at it from that angle. Dopamine?
And on the other hand, your story is also about ideas and circumstances that undercut motivation.
Those who restrain desire, do so because theirs is weak enough to be restrained. —William Blake
I haven’t read up on the akrasia discussions. I don’t believe in intelligence. I believe in efficiency regarding goals stated in advance. It’s all about what we want and how to achieve it. And what we want is merely ‘the line of least resistance’.
Whatever intelligence is, it can’t be intelligent all the way down. It’s just dumb stuff at the bottom. —Andy Clark
The universe really just exists. And it appears to us that it is unfolding because we are part of it. We appear to each other to be free and intelligent because we believe that we are not part of it.
There is a lot of talk here on LW on how to become less wrong. That works. Though it is not a proactive approach but simply trial and error allowed for by the mostly large error tolerance of our existence.
It’s all about practicability, what works. If prayer worked, we’d use it if we wanted to use it.
Narns, Humans, Centauri… we all do what we do for the same reason: because it seems like a good idea at the time. —G’Kar, Babylon 5
Anything you learn on lesswrong.com you’ll have to apply by relying on fundamental non-intelligent processes. You can only hope to be lucky enough to learn enough in time to avoid fatal failure, since no possible system can use advanced heuristics to tackle, or even evaluate, every stimulus. For example, at what point are you going to use Bayesian statistics? You won’t even be able to evaluate the importance of all data so as to judge when to apply more rigorous tools. You can only be a passive observer who is waiting for new data from experience. And until new data arrives, rely on prior knowledge.
A man can do what he wants, but not want what he wants. —Arthur Schopenhauer
Thus I don’t think that weakness of will exists. I also don’t think that you can do anything but your best. What is the right thing to do always relies on what you want. You never do something that you do not want. Only in retrospect, or on average, might we want something else. On that basis we then conclude that what we did was wrong and that we knew better. But what really was different at the time was what we wanted, which changes the truth value of what we, contemplating at present, in retrospect take to be the best thing to do.
So what is it that can help us deal with akrasia? Nothing. In the future we might be able to strengthen our goals, so that what we want at the time of applying the amplification of our goals is what we’re going to want forever. Or at least until something even stronger shifts our desires again.
If we could deliberately seize control of our pleasure systems, we could reproduce the pleasure of success. That would be the end of everything. —Marvin Minsky
I’m happy with how it is right now. I’m very happy that there is what we call akrasia. If there wasn’t, I’d still be religious.
I think the path outlined in ESR’s How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn’t a bad idea at all. Perl is no longer important; you probably need never learn it.
First, I do not think that learning to program computers must be part of a decent education. Many people learn to solve simple integrals in high-school, but the effect, beyond simple brain-training, is nil.
For programming it’s the same. Learning to program well takes years. I mean years of full-time studying/programming etc.
However, if you really want to learn programming, the first question is not the language, but what you want to do. You learn one language until you have built up some self-confidence, then learn another. The “what” typically breaks down very early. Sorry, I cannot give you any hints on this.
And, as a first exercise, you should post this question (or search for answers to it, as it has been posted too many times already) on the appropriate forums for programming questions. Finding those forums is the first step toward learning programming. You’ll never be able to keep all the facts required for programming in your head.
I’ve never heard of Processing, but I like Lua (more than Python), and Lisp. However, even Java is just fine. Don’t get into the habit of thinking that mediocre languages inhibit your progress. At the beginning, nearly all languages are more advanced than you.
What I want is to be able to understand, to attain a more intuitive comprehension of, concepts associated with other fields that I’m interested in, which I assume are important. As a simple example, take this comment by RobinZ. Not that I don’t understand that simple statement. As I said, I already know the ‘basics’ of programming and thoroughly understand it. Just so you get an idea.
In addition to reading up on all the lesswrong.com sequences, I’m mainly into mathematics and physics right now. That’s where I have the biggest deficits. I see my planned ‘study’ of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a ‘Turing machine’.
And I do not agree that the effect is nil. I believe that programming is one of the foundations necessary to understand. I believe that there are 4 cornerstones underlying human comprehension. From there you can go everywhere: Mathematics, Physics, Linguistics and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important, information.
Well, now that I understand your intentions a little bit better (and having read through the other comments), I seriously want to second the recommendation of Scheme.
Use DrScheme as your environment (zero hassle), go through SICP and HTDP. Algorithms are nice, Knuth’s series and so on, but that may be more than you are asking for. Project Euler is a website where you can find some inspiration for problems you may want to solve. Scheme as a language has the advantage that you will not need time to wrap your head around ugly syntax (most languages, except for Lua, maybe Python), memory management (C), or mathematical purity (Haskell, Prolog). AFAIK it also distinguishes between exact numbers (rationals, limited only by RAM) and inexact numbers (floating point), which is a regular source of confusion for people trying to write numeric code for the first time. The trade-offs are quite different for professional programmers, though.
edit: welcome to the web, using links!
Consider finding a Coding Dojo near your location.
There is a subtle but deep distinction between learning a programming language and learning how to program. The latter is more important and abstracts away from any particular language or any particular programming paradigm.
To get a feeling for the difference, look at this animation of Paul Graham writing an article—crossing the chasm between ideas in his head and ideas expressed in words. (Compared to personal experience this “demo” simplifies the process of writing an article considerably, but it illustrates neatly what books can’t teach about writing.)
What I mean by “learning how to program” is the analogue of that animation in the context of writing code. It isn’t the same as learning to design algorithms or data structures. It is what you’ll learn about getting from algorithms or data structures in your head to algorithms expressed in code.
Coding Dojos are an opportunity to pick up these largely untaught skills from experienced programmers.
I agree with everything Emile and AngryParsley said. I program for work and for play, and use Python when I can get away with it. You may be shocked that, like AngryParsley, I will recommend my favorite language!
I have an additional recommendation though: to learn to program, you need to have questions to answer. My favorite source for fun programming problems is ProjectEuler. It’s very math-heavy, and it sounds like you might like learning the math as much as learning the programming. Additionally, every problem, once solved, has a forum thread opened where many people post their solutions in many languages. Seeing better solutions to a problem you just solved on your own is a great way to rapidly advance.
As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.
After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.
Just keep in mind that when starting out learning programming, it’s probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.
Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that’s a pretty good reason to choose it.
Even if you don’t have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play wow, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I’d recommend it. If you play with web pages, javascript...
At least that’s my advice for one style of learning, a style suggested by your mention of those two languages, but almost opposite from your “Nevertheless, I want to start from the very beginning,” which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don’t work. That’s not to disagree with wnoise about the value of Haskell, whether practical or educational, but I’m skeptical about it as an introduction.
ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.
Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.
The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.
So you end up with newcomers to Haskell trying to simultaneously:
Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
Unlearn existing habits from other languages
Learn about intimidating math-y-sounding things
And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.
But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another “monad tutorial” filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.
The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
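For what it’s worth, the abstraction itself fits in a handful of lines. A hand-rolled sketch (the primed names are mine, chosen to avoid clashing with the real Monad class, and lookupAge is a made-up toy function), using computations that may fail as the running example:

    -- The whole interface: wrap a plain value, and chain a computation
    -- onto an already-wrapped value.
    class Monad' m where
      return' :: a -> m a
      bind'   :: m a -> (a -> m b) -> m b

    -- One instance: computations that may fail (the built-in Maybe type).
    instance Monad' Maybe where
      return'          = Just
      bind' Nothing  _ = Nothing
      bind' (Just x) f = f x

    -- Two lookups that can each fail, chained without writing the
    -- nested case analysis by hand.
    lookupAge :: String -> Maybe Int
    lookupAge name = lookup name [("alice", 34), ("bob", 27)]

    doubleAge :: String -> Maybe Int
    doubleAge name = lookupAge name `bind'` \age -> return' (age * 2)

And that is exactly the sort of thing that looks like a plain null check or early return in most other languages, which is part of why the punchline feels so anticlimactic.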
I learnt about Lua through Metaplace, which is now dead. I heard about Processing via Anders Sandberg.
I’m always fascinated by data visualisation. I thought Processing might come in handy.
Thanks for mentioning SICP. I’ll check it out.
I’m going through SICP now. I’m not getting as much out of it as I expected, because much of it I already know, is uninteresting to me since I expect lazy evaluation due to Haskell, or is just tedious (I got sick pretty quick with the authors’ hard-on for number theory).
SICP is nice if you’ve never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don’t immediately make you yawn just by looking at them.
Scheme isn’t much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.
Haskell does tend to ruin one for other languages, though lately I’ve been getting slightly frustrated with some of Haskell’s own limitations...
Personally, I’m a big fan of Haskell. It will make your brain hurt, but that’s part of the point—it’s very good at easily creating and using mathematically sound abstractions. I’m not a big fan of Lua, though it’s a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it’s not a bad choice at all.
Toss in another vote for Haskell. It was my first language (and back before Real World Haskell was written); I’m happy with that choice—there were difficult patches, but they came with better understanding.
Thanks, I didn’t know about Haskell, sounds great. Open source and all. I think you already convinced me.
I wouldn’t recommend Haskell as a first language. I’m a fan of Haskell, and the idea of learning Haskell first is certainly intriguing, but it’s hard to learn, hard to wrap your head around sometimes, and the documentation is usually written for people who are at least computer science grad student level. I’m not saying it’s necessarily a bad idea to start with Haskell, but I think you’d have a much easier time getting started with Python.
Python is open source, thoroughly pleasant, widely used and well-supported, and is a remarkably easy language to learn and use, without being a “training wheels” language. I would start with Python, then learn C and Lisp and Haskell. Learn those four, and you will definitely have achieved your goal of learning to program.
And above all, write code. This should go without saying, but you’d be amazed how many people think that learning to program consists mostly of learning a bunch of syntax.
I have to disagree on Python; I think consistency and minimalism are the most important things in an “introductory” language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don’t think about.
I’d lean toward either C (for learning the “pushing electrons around silicon” end of things) or Scheme (for learning the “abstract conceptual elegance” end of things). It helps that both have excellent learning materials available.
Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some “mainstream” programming and wants to stretch their brain.
You make some good points, but I still disagree with you. For someone who’s trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I’ve always thought that the quickest way to learn programming was to do programming, and until you’ve been doing it for a while, you won’t understand it.
Well, I admit that my thoughts are colored somewhat by an impression—acquired by having made a living from programming for some years—that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase “software engineer” with a straight face! But I’ll leave it at that, lest I start quoting Dijkstra.
Back on topic, I do agree that being able to start doing things quickly—both in terms of producing interesting results and getting rapid feedback—is important, but not the most important thing.
I want to achieve an understanding of the basics without necessarily being able to be a productive programmer. I want to get a grasp of the underlying nature of computer science, rather than being able to mechanically write and parse code to solve certain problems. The big picture and underlying nature is what I’m looking for.
I agree that many people do not understand; they really only learnt how to mechanically use something. How much does the average person know about how one of our simplest tools works, the knife? What does it mean to cut something? What does the act of cutting accomplish? How does it work?
We all know how to use this particular tool. We think it is obvious, thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms for that we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear to be simple and obvious.
Then you do not, in fact, need to learn to program. You need an actual CS text, covering finite automata, pushdown machines, Turing machines, etc. Learning to program will illustrate and fix these concepts more closely, and is a good general skill to have.
Recommendations on the above? Books, essays...
Sipser’s Introduction to the Theory of Computation is a tiny little book with a lot crammed in. It’s also quite expensive, and advanced enough to make most CS students hate it. I have to recommend it because I adore it, but why start there, when you can start right now for free on wikipedia? If you like it, look at the references, and think about buying a used or international copy of one book or another.
I echo the reverent tones of RobinZ and wnoise when it comes to The Art of Computer Programming. Those volumes are more broadly applicable, even more expensive, and even more intense. They make an amazing gift for that computer scientist in your life, but I wouldn’t recommend them as a starting point.
Elsewhere wnoise said that SICP and Knuth were computer science, but additional suggestions would be nice.
Well, they’re computer sciencey, but they are definitely geared toward approaching the subject from the programming, even “von Neumann machine”, side rather than from Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.
For my undergraduate work, I used two books. The first is Jan L. A. van de Snepscheut’s What Computing Is All About. It is, unfortunately, out-of-print.
The second was Elements of the Theory of Computation by Harry Lewis and Christos H. Papadimitriou.
Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!
You probably should have spelled out that SICP is on the λ-calculus side.
Gah. Do I need to add this to my reading list?
You seem to already know Lisp, so probably not. Read the table of contents. If you haven’t written an interpreter, then yes.
The point in this context is that when people teach computability theory from the point of view of Turing machines, they wave their hands and say “of course you can emulate a Turing machine as data on the tape of a universal Turing machine,” and there’s no point in filling in the details. But it’s easy to fill in all the details in λ-calculus, even in a dialect like Scheme. And once you fill in the details in Scheme, you (a) prove the theorem and (b) get a useful program, which you can then modify to get interpreters for other languages, say, ML.
SICP is a programming book, not a theoretical book, but there’s a lot of overlap when it comes to interpreters. And you probably learn both better this way.
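To make “filling in the details” concrete, here is a minimal sketch of a normal-order evaluator for the untyped λ-calculus. It is written in Haskell only because that is the thread’s other running example language (Scheme would do just as well), and it is an illustration rather than a production interpreter; all the names below are made up here:

    import Data.List (delete, union)

    data Term = Var String        -- x
              | Lam String Term   -- \x. body
              | App Term Term     -- function application
              deriving Show

    freeVars :: Term -> [String]
    freeVars (Var x)   = [x]
    freeVars (Lam x t) = delete x (freeVars t)
    freeVars (App t u) = freeVars t `union` freeVars u

    -- Capture-avoiding substitution: subst x s t replaces x with s inside t.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)
      | x == y    = s
      | otherwise = Var y
    subst x s (App t u) = App (subst x s t) (subst x s u)
    subst x s (Lam y t)
      | x == y                 = Lam y t                  -- x is shadowed, stop
      | y `notElem` freeVars s = Lam y (subst x s t)
      | otherwise              = Lam y' (subst x s (subst y (Var y') t))
      where y' = head [y ++ show i | i <- [0 :: Int ..], (y ++ show i) `notElem` (freeVars s ++ freeVars t)]

    -- Normal-order reduction to weak head normal form.
    eval :: Term -> Term
    eval (App t u) = case eval t of
      Lam x body -> eval (subst x u body)
      t'         -> App t' u
    eval t = t

    -- Example: eval (App (Lam "x" (Var "x")) (Var "y"))  ==>  Var "y"

Everything a hand-wavy Turing-machine argument glosses over (what substitution really is, how renaming avoids variable capture) has to be spelled out here, and the result doubles as a usable program.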
I almost put this history lesson in my previous comment:
Church invented λ-calculus and proposed the Church-Turing thesis that it is the model of all that we might want to call computation, but no one believed him. Then Turing invented Turing machines, showed them equivalent to λ-calculus and everyone then believed the thesis. I’m not entirely sure why the difference. Because they’re more concrete? So λ-calculus may be less convincing than Turing machines, hence pedagogically worse. Maybe actually programming in Scheme makes it more concrete. And it’s easy to implement Turing machines in Scheme, so that should convince you that your computer is at least as powerful as theoretical computation ;-)
Um… I think it’s a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not.
EDIT: It’s interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who’s not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.
Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.
Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it’d be like programming in Brainfuck. It was von Neumann’s insights leading to the stored-program architecture that made computing remotely sensible.
There’s plenty of ridiculously opaque models of computation (Post’s tag machine, Conway’s Life, exponential Diophantine equations...) but I can’t begin to imagine one that would be more comprehensible than untyped lambda calculus.
I’m pretty sure that Eliezer meant that Turing machines are better for giving novices a “model of computation”. That is, they will gain a better intuitive sense of what computers can and can’t do. Your students might not be able to implement much, but their intuitions about what can be done will be better after just a brief explanation. So, if your goal is to make them less crazy regarding the possibilities and limitations of computers, Turing machines will give you more bang for your buck.
A friend of mine has invented a “Game of Lambda” played with physical tokens which look like a bigger version of the hexes from wargames of old, with rules for function definition, variable binding and evaluation. He has a series of exercises requiring players to create functions of increasing complexity; plus one, factorial, and so on. Seems to work well.
Alligator Eggs is another variation on the same theme.
You realize you’ve just called every computer scientist inhuman?
Turing machines are something one can easily imagine implementing in hardware. The typical encoding of some familiar concepts into lambda calculus takes a bit of getting used to (natural numbers as functions which compose their argument, as a function, n times? If-then-else as function composition, where “true” is a function returning its first argument, and “false” is a function returning its second? These are decidedly odd). But lambda calculus is composable. You can take two definitions and merge them together nicely. Combining useful features from two Turing machines is considerably harder. The best route to usable programming there is the UTM plus stored code, which you have to figure out how to encode sanely.
Just accept the compliment. ;)
Of course, not so odd for anyone who uses Excel...
Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)
And no looking it up, that’s cheating! Took me the better part of a day to figure it out, it’s a real mind-twister.
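To keep the puzzle unspoiled, here is a hedged sketch of just the encodings mentioned above (booleans, numerals, successor, addition), transcribed into Haskell; predecessor and subtraction are left as the exercise. The names are mine (suc, because succ is taken by the Prelude):

    -- Church booleans: "true" returns its first argument, "false" its second.
    true, false :: a -> a -> a
    true  x _ = x
    false _ y = y

    -- Church numerals: n f z applies f to z exactly n times.
    zero, one, two :: (a -> a) -> a -> a
    zero _ z = z
    one  f z = f z
    two  f z = f (f z)

    -- Successor: apply f one extra time.
    suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
    suc n f z = f (n f z)

    -- Addition: apply f "m more times" on top of n applications.
    add :: ((a -> a) -> a -> a) -> ((a -> a) -> a -> a) -> (a -> a) -> a -> a
    add m n f z = m f (n f z)

    -- Convert back to an ordinary Int to check the arithmetic:
    toInt :: ((Int -> Int) -> Int -> Int) -> Int
    toInt n = n (+ 1) 0
    -- toInt (add two (suc two))  ==>  5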
It’s much of a muchness; in pure form, both are incomprehensible for nontrivial programs. Practical programming languages have aspects of both.
Maybe pure lambda calculus is not humanly comprehensible, but general recursion is as comprehensible as Turing machines, yet Gödel rejected it. My history should have started when Church promoted that.
I think that λ-calculus is about as difficult to work with as Turing machines. I think the reason that Turing gets his name in the Church-Turing thesis is that they had two completely different architectures that had the same computational power. When Church proposed that λ-calculus was universal, I think there was a reaction of doubt, and a general feeling that a better way could be found. When Turing came to the same conclusion from a completely different angle, that appeared to verify Church’s claim.
I can’t back up these claims as well as I’d like. I’m not sure that anyone can backtrace what occurred to see if the community actually felt that way or not; however, from reading papers of the time (and quite a bit thereafter—there was a long period before near-universal acceptance), that is my impression.
Actually, the history is straight-forward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did.
ETA: well, it’s straight-forward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing’s paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton.
EDIT2: so maybe that is what you said originally.
It’s also worth noting that Curry’s combinatory logic predated Church’s λ-calculus by about a decade, and also constitutes a model of universal computation.
It’s really all the same thing in the end anyhow; general recursion (e.g., Curry’s Y combinator) is on some level equivalent to Gödel’s incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
I know the principles but have never taken the time to program something significant in the language. Partly because it just doesn’t have the libraries available to enable me to do anything I particularly need to do and partly because the syntax is awkward for me. If only the name ‘lisp’ wasn’t so apt as a metaphor for readability.
Are you telling me lambda calculus was invented before Turing machines and people still thought the Turing machine concept was worth making ubiquitous?
Wikipedia says lambda calculus was published in 1936 and the Turing machine was published in 1937.
I’m betting it was hard for the first computer programmers to implement recursion and call stacks on early hardware. The Turing machine model isn’t as mathematically pure as lambda calculus, but it’s a lot closer to how real computers work.
I think the link you want is to the history of the Church-Turing thesis.
The history in the paper linked from this blog post may also be enlightening!
Why not? People have a much easier time visualizing a physical machine working on a tape than visualizing something as abstract as lambda-calculus. Also, the Turing machine concept neatly demolishes the “well, that’s great in theory, but it could never be implemented in practice” objections that are so hard to push people past.
Because I am biased toward my own preferences for thought. I find visualising the lambda calculus simpler, because Turing machines rely on storing stupid amounts of information in memory so that, you know, they’ll eventually do anything. It just doesn’t feel natural to use a kludgy, technically complete machine as the very description of what we consider computationally complete.
Oh, I agree. I thought we were talking about why one concept became better-known than the other, given that this happened before there were actual programmers.
Any opinion on the 2nd edition of Elements?
Nope. I used the first edition. I wouldn’t call it a “classic”, but it was readable and covered the basics.
I, unfortunately, am merely an engineer with a little BASIC and MATLAB experience, but if it is computer science you are interested in, rather than coding, count this as another vote for SICP. Kernighan and Ritchie is also spoken of in reverent tones (edit: but as a manual for C, not an introductory book—see below), as is The Art of Computer Programming by Knuth.
I have physically seen these books, but not studied any of them—I’m just communicating a secondhand impression of the conventional wisdom. Weight accordingly.
Kernighan and Ritchie is a fine book, with crystal clear writing. But I tend to think of it as “C for experienced programmers”, not “learn programming through C”.
TAoCP is “learn computer science”, which I think is rather different than learning programming. Again, a fine book, but not quite on target initially.
I’ve only flipped through SICP, so I have little to say.
TAoCP and SICP are probably both computer science—I recommended those particularly as being computer science books, rather than elementary programming. I’ll take your word on Kernighan and Ritchie, though—put that one off until you want to learn C, then.
Merely an engineer? I’ve failed to acquire a leaving certificate of the lowest kind of school we have here in Germany.
Thanks for the hint at Knuth, though I already came across his work yesterday. Kernighan and Ritchie are new to me. SICP is officially on my must-read list now.
A mechanical engineering degree is barely a qualification in the field of computer programming, and not at all in the field of computer science. What little knowledge I have I acquired primarily through having a very savvy father and secondarily through recreational computer programming in BASIC et al. The programming experience is less important than the education, I wager.
Yes, of course. Misinterpreted what you said.
Do you think that somebody in your field will, in the future, be able to get around computer programming? While talking to neuroscientists I learnt that it is almost impossible to get what you want, in time, by explaining what you need to a programmer who has no degree in neuroscience, while you yourself don’t know anything about computer programming.
I’m not sure what you mean—as a mechanical engineer, 99+% of my work involves purely classical mechanics, no relativity or quantum physics, so the amount of programming most of us have to do is very little. Once a finite-element package exists, all you need is to learn how to use it.
I’ve just read the abstract on Wikipedia and I assumed that it might encompass what you do.
I thought computer modeling and simulations might be very important in the early stages, shortly followed by field tests with miniature models. Even there you might have to program the tools that give shape to the ultimate parts. Though I guess if you work in a highly specialized area, that is not the case.
I couldn’t build a computer, a web browser, a wireless router, an Internet, or a community blog from scratch, but I can still post a comment on LessWrong from my laptop. Mechanical engineers rarely need to program the tools, they just use ANSYS or SolidWorks or whatever.
Edit: Actually, the people who work in highly specialized areas are more likely to write their own tools—the general-interest areas have commercial software already for sale.
Bear in mind that I’m not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren’t for beginners.) I’d suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn’t cover.
‘Follow conversations’, indeed. That’s what I mean. Being able to grasp concepts that involve ‘symbolic computation’ and information processing by means of formal language. I don’t aim at actively taking part in productive programming. I don’t want to become a poet, I want to be able to appreciate poetry, perceive its beauty.
Take English as an example. Only a few years ago I seriously started to learn English. Before I could merely chat while playing computer games LOL. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”—Edsger W Dijkstra.
More modern versions aren’t that bad, and it’s not quite fair to tar them with the same brush, but I still wouldn’t recommend learning any of them for their own sake. If there is a need (like modifying an existing codebase), then by all means do.
Dijkstra’s quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn’t actually a bad language at all. On the other hand, it also lacks much of the “easy to pick up and experiment with” aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.
Yeah, you won’t be able to be very productive regarding bottom-up groundwork. But you’ll be able to look into existing works and gain insights. Even if you forget a lot, something will stick and help you to pursue a top-down approach. You’ll be able to look into existing code, edit it, and regain or learn new and lost knowledge more quickly.
Agree with where you place Python, Scheme and Haskell. But I don’t recommend C. Don’t waste time there until you already know how to program well.
Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though I don’t think that’s really what XiXiDu is looking for.
Agree on where C is useful and got the same impression about the applicability to XiXiDu’s (where on earth does that name come from?!?) goals.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn’t meet your ‘minimalist’ ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I’ve converted to primarily using a language that relies on duck-typing.
“Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.”—Alan Kay
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:
It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
Templates are a clunky, disappointing imitation of real metaprogramming.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
Combining error handling via exceptions with manual memory management is frankly absurd.
The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.
I could elaborate further, but it’s too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical “real” OO language, but I’d probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).
ETA: Well, that came out awkwardly verbose. Apologies.
I’m sure I could manage 1k before I considered the point settled and moved on to a language that isn’t a decades old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals then C++ will give you that over a broader area of nuts and bolts.
This is the one point I disagree with, and I do so both on the assertion ‘almost uniformly’ and also on the concept itself. As far as experts in object-oriented programming go, Bertrand Meyer is considered an expert, and his book ‘Object-Oriented Software Construction’ is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design, and not inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven’t gotten around to it yet.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn’t suggest for learning purposes, either.
Well, the problem isn’t really multiple inheritance itself, it’s the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn’t really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we’ve all seen trick questions about “okay, which method will this call?”). Something closer to a simple type predicate, like the interfaces in Google’s Go language or like Haskell’s type classes, is much less painful here. Or of course duck typing, if static type-checking isn’t your thing.
Compositional code reuse in objects—what I meant by “implementation inheritance”—also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
Note that “multiple inheritance” makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it’s generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of “parent” types.
Consider the following types:
Tree structures containing values of some type A.
Lists containing values of some type A.
Text strings, stored as immutable lists of characters.
Text strings as above, but with a maximum length of 255.
The generic tree and list types are both abstract containers; say they both implement an operation that uses a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there’s no shared implementation or obvious subtyping relationship.
The text strings can’t implement the above interface (because they’re not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren’t subtypes of the list, though, because it’s mutable.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
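To illustrate just the first two bullets in Haskell terms (a deliberately narrow sketch; the subtyping half of the example has no direct Haskell analogue, and the names Mappable, project, and relabel are made up here): the shared projection interface is an ordinary type class, and the tree and list types implement it with no inheritance relationship between them. It is essentially the standard Functor class written out by hand.

    -- The shared interface: transform every element with a projection
    -- function while leaving the overall structure unchanged.
    class Mappable f where
      project :: (a -> b) -> f a -> f b

    data Tree a = Leaf | Node (Tree a) a (Tree a)
    newtype List a = List [a]

    instance Mappable Tree where
      project _ Leaf         = Leaf
      project g (Node l x r) = Node (project g l) (g x) (project g r)

    instance Mappable List where
      project g (List xs) = List (map g xs)

    -- Generic code written against the interface; no hierarchy involved.
    relabel :: Mappable f => f Int -> f String
    relabel = project show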
Of course, but I’m more considering ‘languages to learn that make you a better programmer’.
Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
I don’t agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma ‘multiple inheritance is bad’ and don’t allow generics enforce bad habits while at the same time insisting that they are the True Way.
I think I agree on this note, with certain restrictions on what counts as ‘civilized’. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps python too.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn’t do it in Java or .NET (except Eiffel.NET).
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
This argues for interfaces, not multiple implementation inheritance. And implementation inheritance can easily be emulated by containment and method forwarding, though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that’s trivial in Smalltalk or Objective-C...
The hard part that no language has a good solution for are objects which can be the same type two (or more) different ways.
I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the attendant risk of tetanus.
Upvoted purely for the image.
Eiffel does (in, obviously, my opinion).
It does handle the diamond inheritance problem about as well as can be expected; the renaming feature is quite nice. Though related, that isn’t what I’m concerned with, and as far as I can tell it doesn’t handle it in a completely general way. (Given a type system you can drive a bus through (covariant vs. contravariant arguments), I prefer Sather, though its renaming feature is more persnickety: harder to use in some common cases.)
Consider a lattice. It is a semilattice in two distinct, dual ways: under the join operation and under the meet operation. If we have generic semilattice code and we want to pass it a lattice, which of the two should be used? And what if we want to use the other one?
In practice, we can call these a join-semilattice, and a meet-semilattice, have our function defined on one, and create a dual view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or …
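A hedged Haskell sketch of that dual-view trick (the class names and the Dual wrapper below are mine for illustration, not a particular library’s API):

    -- Two separate interfaces for the two halves of a lattice.
    class JoinSemilattice a where
      join :: a -> a -> a

    class MeetSemilattice a where
      meet :: a -> a -> a

    -- Generic code written against one of the interfaces.
    joinAll :: JoinSemilattice a => a -> [a] -> a
    joinAll = foldr join

    -- Integers form a lattice in two dual ways: max is the join, min the meet.
    instance JoinSemilattice Int where join = max
    instance MeetSemilattice Int where meet = min

    -- The dual-view wrapper: the meet of a becomes the join of Dual a,
    -- so the same generic code can use the other half of the lattice.
    newtype Dual a = Dual { getDual :: a }

    instance MeetSemilattice a => JoinSemilattice (Dual a) where
      join (Dual x) (Dual y) = Dual (x `meet` y)

    -- joinAll 0 [3, 7, 2]                            ==> 7 (uses max)
    -- getDual (joinAll (Dual 9) (map Dual [3, 7]))   ==> 3 (uses min)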
There is a math abstraction called a monoid, for an associative operator with identity. Haskell has a corresponding typeclass, with such things as lists as instances, with catenation as the operator, and the empty list as identity. I don’t have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code.
So, suppose we want to make Integers an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways, and there is no natural duality here we can take advantage of (as there was with the lattice example). The consensus in the community has been not to make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, and thus have trivial conversion costs. There is also a newtype for dual monoids, formalizing a duality idea similar to the lattice case (it swaps left and right; monoids need not be commutative, as the list example should show). There are also newtypes that mark Bools as using the operation “and” or the operation “or”; this is actually an instance of the lattice duality above.
For this simple case, it’d be easy enough to just explicitly pass in the operation. But for more complicated typeclasses, we can bundle a whole lump of operations in a similar manner.
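Concretely, with the newtypes Data.Monoid provides (only the sample numbers below are made up):

    import Data.Monoid (Sum(..), Product(..))

    -- Same representation as Integer, but the Monoid instance picks the
    -- (+, 0) monoid...
    additive :: Integer
    additive = getSum (foldMap Sum [1, 2, 3, 4])                -- 10

    -- ...or the (*, 1) monoid.
    multiplicative :: Integer
    multiplicative = getProduct (foldMap Product [1, 2, 3, 4])  -- 24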
I’m not entirely happy with this either. If you’re only using one of the interfaces, then that wrapper is damn annoying. Thankfully, e.g. Sum Integer can also be made an instance of Num, so that you can continue to use * for multiplication, + for addition, and so forth.
Sather looks interesting but I haven’t taken the time to explore it. (And yes, covariance vs contravariance is a tricky one.)
Both these languages also demonstrate the real (everyday) use for C… you compile your actual code into it.
I don’t think Sather is a viable language at this point, unfortunately.
Yes, C is useful for that, though C-- and LLVM are providing new paths as well.
I personally think C will stick around for a while because getting it running on a given architecture provides a “good enough” ABI that is likely to be stable enough that high-level languages’ FFIs can depend on it.
I put C++ as a “learn only if needed language”. It’s extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++ given by which features the writers are willing to use, and which are considered too dangerous.
Yeah, C is probably mandatory if you want to be serious about computer programming. Thanks for mentioning Scheme, I hadn’t heard of it before...
Haskell sounds really difficult. But the more I hear how hard it is, the more intrigued I am.
Thanks, I’ll sure get into those languages. But I think I’ll just try and see if I can get into Haskell first. I’m intrigued after reading the introduction.
If I get stuck, I’ll take the route you mentioned.
Relevant answer to this question here, recently popularized on Hacker News.
I’d weakly recommend Python: it’s free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple HTML pages …), is well designed, has features you’ll encounter in other languages (classes, functional programming …), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it.
But a lot of those advantages apply to languages like Ruby.
If you want to go into more exotic languages, I’d suggest Scheme over Haskell, it seems more beginner-friendly to me.
It mostly depends on what occasions you’ll have to use it: if you have a website, JavaScript might be better; if you like making game mods, go for Lua. It also depends on who you know that can answer questions. If you have a good friend who’s a good teacher and a Java expert, go for Java.
My first language was, awfully enough, GW-Basic. It had line numbers. I don’t recommend anything like it.
My first real programming language was Perl. Perl is… fun. ;)
I recommend Haskell (more fun) or Ruby (more mainstream).
I recommend Python as well. Python has clean syntax, enforces good indentation and code layout, has a large number of very useful libraries, doesn’t require a lot of boilerplate to get going but still has good mechanisms for structuring code, has good support for a variety of data structures built in, a read-eval-print loop for playing around with the language, and a lot more. If you want to learn to program, learn Python.
(Processing is probably very good, too, for interesting you in programming. It gives immediate visual feedback, which is nice, but it isn’t quite as general purpose as Python. Lua I know very little about.)
That being said, Python does very little checking for errors before you run your code, and so is not particularly well suited for large, or even medium-sized, complex programs where your own reasoning is not sufficient to find errors. For these, I’d recommend learning other languages later on. Java is probably a good second language. It requires quite a bit more infrastructure to get something up and running, but it has great libraries and a steadily increasing ability to track down errors in code when it is compiled.
After that, it depends on what you want to do. I would recommend Haskell if you are looking to stretch your mind (or OCaml if you are looking to stretch it a little less ;-)). On the other hand, if you are looking to write useful programs, C is probably pretty good, and will teach you more about how computers work. C++ is popular for a lot of applications, so you may want to learn it, but I hate it as an unprincipled mess of language features haphazardly thrown together. I’d say exactly the same thing about most web languages (JavaScript (which is very different from Java), Ruby, PHP, etc.). Perl is incredibly useful for small things, but very hard to reason about.
(As to AngryParsley’s comment about people recommending their favorite languages, mine are probably C, Haskell and OCaml, which I am not recommending first.)
Those two seem great, Lua in particular seems to match exactly the purpose you describe.