All this is simple to look up—programming is not fluent speech, it is writing.
Have you ever tried learning a foreign language? Maybe it was easy for you—I know people who seem to have a natural aptitude for it—but for me, it was basically a long, painful slog through dictionary-land. Yes, from a strictly algorithmic standpoint, you could look up every word you intend to read or write; but this works very poorly for most humans.
If you consider your language a good fit to your task at any time, you are likely just not asking for the best.
I think your demands might be a bit too strict. I am perfectly ok with using a language that is a good, though not 100% perfect, fit for my task. Sometimes, I would even settle for an inferior language, if doing so grants me access to more powerful libraries that free me from extra work. Sure, I could “ask for the best”, but I have other goals to accomplish.
But C syntax is plainly malicious even in assignments...
How so? Perhaps you were thinking of C++, which is indeed malicious?
But it should also be clear that you are not learning some magical set of all basic concepts, just the concepts that are simplest to learn at the beginning.
I agree with you that there’s no magical silver bullet set of concepts, but I also believe that some concepts are vastly more important than others, regardless of how easy they are to learn. For example, the basic concept you internalize when learning assembly is that (roughly speaking) the computer isn’t a magical genie with arbitrary rules—instead, it’s a bag of circuits that moves electrons around. This idea seems trivial when written down, but internalizing it is key to becoming a successful programmer. It also leads naturally to understanding pointers, on which the vast majority of other languages—yes, even Scheme—are built. I doubt that you can properly understand things like type inference without first understanding bits and pointers.
English and French (I usually forget the latter and recover it whenever I have a proximate use for it). My native language is Russian. When learning French, it is a big relief that most words keep the same translation across many contexts. This multi-translation problem is way more annoying than simply looking up words.
Sometimes, I would even settle for an inferior language, if doing so grants me access to more powerful libraries that free me from extra work
This actually confirms my point. You will have to choose an inferior language from time to time, and its lack of tools for adapting the language to your task is either local incompetence of the language authors, or a lack of resources for the language's development, or the language community's arrogance.
But C syntax is plainly malicious even in assignments…
How so? Perhaps you were thinking of C++, which is indeed malicious?
“i += i++ + ++i;” can be reliably compiled, but its result cannot be predicted (the behavior is undefined). There are many actual everyday examples, like “if(a=b);”.
Of course, it is not even close to C++, which takes malicious semantics a few levels up.
basic concept you internalize when learning assembly is that (roughly speaking) the computer isn’t a magical genie with arbitrary rules
leads naturally to understanding pointers, on which the vast majority of other languages
Any command-line programming environment will make you internalize that the computer has some rules and that it does what you order—literally.
x86 assembly is quite arbitrary anyway. Maybe LLVM assembly (which is closer to a “pointer machine” than to a “random access machine”) would be nicer. After all, high-level languages use specially wrapped pointers even in their implementations.
I doubt that you can properly understand things like type inference without first understanding bits and pointers.
You cannot properly understand some performance implications, maybe. But the actual input-output correspondence can be grokked anyway. Of course, only for higher-order functions is there a strict proof that they can be understood without a proper understanding of imperative semantics.
This multi-translation problem is way more annoying than simply looking up words.
It’s possible that you are much better at automatically memorizing words than I am.
You will have to choose an inferior language from time to time, and its lack of tools for adapting the language to your task is either local incompetence or lack of resources or arrogance.
Wait… what? Are you saying that, when I have some practical task to finish, the best solution is to pick the most elegant language, disregarding all other options—and that not doing so makes me arrogant? I am pretty sure this isn’t right. For example, my current project involves some Bluetooth communication and data visualization on Windows machines. There are libraries for Java and C# that fulfill all my Bluetooth and graphical needs; the Python library is close, but not as good. Are you saying that, instead of C#, I should just pick Scheme or Haskell or something, and implement my own Bluetooth stack and drawing APIs? I am pretty sure that’s not what you meant...
“i += i++ + ++i;” can be reliably compiled but not predicted.
OK, that’s a good point; I forgot about those pre-/post-increments, because I avoid them myself. They’re pretty terrible.
On the other hand, the regular assignment operator does make sense; the rules that let you say “if(a=b)” also let you say “a=b=c”. The result of an assignment expression is the value that was stored. I don’t see this as a bad thing, though it might’ve been better to use “eq” or some other token for the comparison operator instead of “==”.
Any command-line programming environment will make you internalize that the computer has some rules and that it does what you order—literally.
True, and that’s a good lesson too, but programming in assembly lets you get close (though not too uncomfortably so) to the actual hardware. This allows you to internalize the idea that at least some of these rules are not arbitrary. Instead, they stem from the fact that, ultimately, your computer is an electron-pushing device which is operating under real-world constraints. This is important, because arbitrary rules are something you have to memorize, whereas physical constraints are something you can understand.
You are right about x86 assembly, though, which is why I mentioned “a small microcontroller” in my original post. Their assemblies tend to make more sense.
But the actual input-output correspondence can be grokked anyway.
You are right, though this depends on which problem you’re solving. If you approach the programming language completely in the abstract, then yes, you can understand things like input-output correspondence from a strictly algebraic point of view. What you won’t understand, though (at least, not as readily), is why all these language features were created in the first place, and which problems they are designed to solve. But if you never intend to write practical programs that perform applied tasks, maybe that’s OK.
It’s possible that you are much better at automatically memorizing words than I am.
Or simply annoyed by different things.
Are you saying that, when I have some practical task to finish, the best solution is to pick the most elegant language, disregarding all other options—and that not doing so makes me arrogant?
Sorry for the unclear phrasing. I mean that a language’s lack of tools is the language’s arrogance.
rules that let you say “if(a=b)” also let you say “a=b=c”
The gain of “a=b=c;” over “a=c; b=c;” is not much; the former syntax simplifies the injection of vulnerabilities (intentional or accidental).
Instead, they stem from the fact that, ultimately, your computer is an electron-pushing device which is operating under real-world constraints
You are right about x86 assembly, though, which is why I mentioned “a small microcontroller” in my original post. Their assemblies tend to make more sense
I have written in C for these microcontrollers—physical constraints visibly leak into the language, so if you are learning C anyway, you could delay learning assembly.
why all these language features were created in the first place, and which problems they are designed to solve. But if you never intend to write practical programs that perform applied tasks, maybe that’s ok.
If you learn just Scheme and OCaml, you can still understand what a type system and type inference give you.
You can appreciate a steam engine without knowing nuclear physics, after all.
I mean that a language’s lack of tools is the language’s arrogance.
I’m still not sure what you mean by that. Are you suggesting that all languages should make all possible tools available? For example, should every language, including C, JavaScript, Java, C#, Ruby, Python, Dart, etc., provide a full suite of Bluetooth communication libraries? I agree that it would be really neat if this were the case, but IMO it’s highly impractical. Languages are (so far) written by humans, and humans have a limited amount of time to spend on them.
The gain of “a=b=c;” over “a=c; b=c;” is not much; the former syntax simplifies the injection of vulnerabilities (intentional or accidental).
What do you mean by “injection of vulnerabilities”? Also, “a=b=c;” is more correctly rendered as “b=c; a=b;”. This makes it possible to use shorthand such as “if ((answer = confirmRequest()) == CANCEL) … ”.
so if you are learning C anyway, you could delay learning assembly.
Sure, you could delay it, but it’s best to learn it properly the first time. There are certain essential things that are easy to do with assembly that are harder to do with C: for example, balancing your branches so that every iteration of the main loop takes the same number of cycles.
If you learn just Scheme and OCaml, you can still understand what a type system and type inference give you.
If you were a person who only knew Scheme, how would you explain “what type inference gives you”, and why it’s useful?
I mean that a language’s lack of tools is the language’s arrogance.
I’m still not sure what you mean by that. Are you suggesting that all languages should make all possible tools available?
It was a clarification of a specific phrase in my previous comment. The original phrase answers both your questions: I specifically said that it can be a lack of resources or competence, not only arrogance. And this is specifically about tools that let you tailor the language to your task, so that there is no problem the language prohibits you from solving. Somebody can always write a Bluetooth library.
certain essential things that are easy to do with assembly that are harder to do with C: for example, balancing your branches so that every iteration of the main loop takes the same number of cycles
This is not essential for many applications, even with what is now called microcontrollers. Learning optimization at that level is something you can do once you already have a good grasp of the other concepts.
If you were a person who only knew Scheme, how would you explain “what type inference gives you”, and why it’s useful?
Type inference allows you to write with strict typechecks and catch some kinds of errors without cluttering the code with type specifications for every variable.
And this is specifically about tools that let you tailor the language to your task, so that there is no problem the language prohibits you from solving. Somebody can always write a Bluetooth library.
That makes sense, and I do wish that more languages supported more capabilities, but I think it’s unrealistic to expect all languages to support all, or even most, or even some large fraction of the real-world tasks that are out there. There are vastly more tasks than there are languages: graphics (raster, vector, and 3D, on various systems), sound, desktop user interfaces, Bluetooth, TCP/IP networking, bio-sequence alignment, finance, distributed computation, HTML parsing and rendering, SQL access… and that’s just the stuff I’ve had to handle this month!
Learning optimization at that level is something you can do once you already have a good grasp of the other concepts.
I think the opposite is true: performing this kind of optimization (even on a “toy” program) is exactly the kind of task that can help you internalize those concepts.
Type inference allows you to write with strict typechecks and catch some kinds of errors without cluttering the code with type specifications for every variable.
I agree with you there, but I’ll play Devil’s Advocate, in my attempt to adopt the perspective of someone who only knows Scheme. So, can you give me an example of some Scheme code where the strict typechecks you mentioned are truly helpful? To me (or, rather, my Schemer’s Advocate persona) this sounds inelegant. In Scheme, most entities are pairs, or data structures built of pairs, anyway. Sure, there are a few primitives, but why should I worry about 5 being different from 5.0 or “5”? That sounds like a job for the interpreter.
That makes sense, and I do wish that more languages supported more capabilities, but I think it’s unrealistic to expect all languages to support all, or even most, or even some large fraction of the real-world tasks that are out there
You didn’t quite understand my point. The language per se should not directly support, say, Bluetooth—because Bluetooth will change in incompatible ways. A language can live without a Bluetooth library—why not; there is always FFI for dire cases. The question is about allowing a nice API to be defined if the need arises. More or less any metaprogramming tool that is not constrained in what it can create would do: those who want to use it will wrap it in a layer that is nice to use, and you can then just incorporate their work.
Common Lisp didn’t have any object system in the first edition of the standard; CLOS was prototyped using macros, documented, and then that documentation was basically included in the standard. Of course, macro use could be made somewhat more clumsy or more explicit for whatever reason (to make overuse easier to control, for example); that is not a problem. The problem arises when you have zero ways to do something—for example, to define a non-trivial iteration pattern.
So, can you give me an example of some Scheme code where the strict typechecks you mentioned are truly helpful? To me (or, rather, my Schemer’s Advocate persona) this sounds inelegant. In Scheme, most entities are pairs, or data structures built of pairs, anyway. Sure, there are a few primitives, but why should I worry about 5 being different from 5.0 or “5”? That sounds like a job for the interpreter.
Sorry? I was talking about things that help to catch errors. In any small snippet, the errors are simple enough to find, which makes small examples unillustrative. It only helps when you have some kind of wrong assignment in 1K+ lines of code.