This might be counter-intuitive and impractical for self-teaching, but for me it was an assembly language course that made how things work behind the scenes finally ‘click’. It doesn’t have to be much, and you’ll probably never use it again, but the concepts will help your broader understanding.
If you can be more specific about which parts baffle you, I might be able to recommend something more useful.
Nothing in particular baffles me; I can get through the material well enough. It’s just that I prefer starting from a solid and thorough grasp of all the fundamentals and working up from there, rather than jumping head-first into the middle of a subject and then working backwards to fill in any gaps as needed. I also prefer understanding why things work rather than just knowing that they do.
Which fundamentals do you have in mind? There are multiple levels of “fundamentals” and they fork, too.
For example, the “physical execution” fork will lead you to delve into assembly language and the basic operations that processors perform. The “computer science” fork, on the other hand, will lead you in a very different direction, perhaps to LISP’s lambdas and ultimately to things like the Turing machine.
Whatever fundamentals are necessary to understand the things that I’m likely to come across while programming (I’m hoping to go into data science, if that makes a difference). I don’t know enough to know which particular fundamentals are needed for this, so I guess that’s actually part of the question.
Well, if you’ll be going into data science, it’s unlikely that you will care greatly about the particulars of the underlying hardware. This means the computer-science branch is more useful to you than the physical-execution one.
I am still not sure what kind of fundamentals you want. The issue is that the lowest abstraction level is trivially simple: you have memory, which can store and retrieve values (numbers, basically), and you have a processing unit, which understands sequences of instructions for performing logical and mathematical operations on those values. That’s it.
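If it helps to see that picture as code, here is a toy sketch in Python (purely illustrative; the instruction names are invented, and a real processor obviously doesn’t run Python):

```python
# A toy model of "memory plus a processing unit".
# Only an illustration of the idea, not how a real CPU works.

memory = [0] * 16            # storage: a row of numbered cells holding numbers

# A "program" is just a sequence of instructions over those cells.
program = [
    ("SET", 0, 5),           # put 5 into cell 0
    ("SET", 1, 7),           # put 7 into cell 1
    ("ADD", 2, 0, 1),        # cell 2 = cell 0 + cell 1
    ("PRINT", 2),            # show what's in cell 2
]

# The "processing unit": walk through the instructions and carry them out.
for instr in program:
    op = instr[0]
    if op == "SET":
        _, dest, value = instr
        memory[dest] = value
    elif op == "ADD":
        _, dest, a, b = instr
        memory[dest] = memory[a] + memory[b]
    elif op == "PRINT":
        print(memory[instr[1]])   # prints 12
```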
The interesting parts, and the ones from which understanding comes (IMHO), are somewhat higher in the abstraction hierarchy. They are often referred to as programming language paradigms.
The major paradigms are imperative (Fortran, C, Perl, etc.), functional (LISP), logical (Prolog), and object-oriented (Smalltalk, Ruby).
They are noticeably different in that writing non-trivial code in different paradigms requires you to… rearrange your mind in particular ways. The experience is often described as a *click*, an “oh, now it all makes sense” moment.
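To make the difference a bit more tangible, here is the same tiny task (summing the squares of the even numbers in a list) written first in an imperative style and then in a functional style. It’s only a rough illustration in Python, which is not the home language of either paradigm:

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Imperative style: describe step by step *how* to build the result,
# mutating a running total as you go.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: describe *what* the result is, by composing
# functions over the data, with no explicit loop or mutation.
total_fp = reduce(lambda acc, x: acc + x,
                  map(lambda n: n * n,
                      filter(lambda n: n % 2 == 0, numbers)),
                  0)

print(total, total_fp)   # both print 20
```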
I guess a good starting point might be: Where do I go to learn about each of the different paradigms? Again, I’d like to know the theory as well as the practice.
Google is your friend. You can start e.g. here or here.
I understand what you mean here, but in programming, it sometimes makes sense to do things this way. For example, in my introduction to programming course, I used dictionaries/hashes to write some programs. Key-value pairs are important for writing certain types of simple programs, but I didn’t really understand how they worked. Almost a year later, I took an algorithms course, learned about hash functions and hash maps, and finally understood them. It wouldn’t have made sense to refrain from using the tool until I’d learned how to implement it, and it was really rewarding to finally understand it (there’s a rough sketch of the idea at the end of this comment).
I always like to learn things from the ground up too, but this way just works sometimes in programming.
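For what it’s worth, here is roughly the picture that eventually clicked for me: a toy hash map in Python using separate chaining. It’s only a sketch; real dictionary implementations handle resizing, collisions, and deletion far more cleverly.

```python
# A very rough sketch of what a dictionary/hash map does internally:
# hash the key to pick a bucket, then store/search key-value pairs there.

class ToyHashMap:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket_for(self, key):
        # The hash function turns a key into a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket_for(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # otherwise add a new pair

    def get(self, key):
        for k, v in self._bucket_for(key):
            if k == key:
                return v
        raise KeyError(key)

m = ToyHashMap()
m.put("apples", 3)
m.put("pears", 5)
print(m.get("apples"))   # 3
```

The key idea is just that the hash function turns an arbitrary key into a small bucket index, so a lookup only has to search one short bucket instead of the whole collection.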