I can’t really point to anything I learned while getting my CS degree that was particularly instrumental to being able to “really program”, and for that matter I don’t think I could “really program” until some time after I had graduated. Practice counts for a lot here, especially considering how ungrounded and unreliable the current pedagogy of “software engineering” is. Practice and perfectionism did much more for me as a programmer than any course I took in college.
I grew intellectually through that study, but I’ve upped my practical effectiveness enormously in the last few years by working with great CS people and absorbing all I can of their mindset.
From my vantage point as a CS grad who maybe wishes he majored in math, physics, or chemistry instead, it looks to me like you’re getting the best of both worlds. CS programs tend to be very focused on producing capable programmers, but programming is largely an operational set of knowledge that can’t yet be reliably taught in a classroom. My wife is currently pursuing a chemistry degree, and I’m downright jealous of the information density in her courses compared to mine. CS pedagogy is simply immature compared to older fields, full of conflicting opinions, heuristics, and transient industry buzzwords. An exercise: Ask three randomly chosen CS majors what object-orientation is and why it’s so great, and compare their answers.
I’m waffling on the regret because the degree did lead directly to an enjoyable, well-paying job. But most of the time I spent in CS classes feels wasted in comparison to the maturity and density of math and science classes.
As a simple counter-point, my experience is nearly the exact opposite of yours. I felt that I got a lot out of my CS classes. Not every CS class, mind you, but enough that the version of me without a CS degree and the version of me with one would hardly be comparable.
An exercise: Ask three randomly chosen CS majors what object-orientation is and why it’s so great, and compare their answers.
While we are not strictly random, I can give 1 of 3:
Object-oriented programming is the concept of designing code around simple, efficient, and reusable objects that can work together to accomplish a larger goal. Compare this against sequential programming, which is essentially a long list of code that is only useful for one specific task.
I think a more accurate instrumental description of OOP is that it’s code that invokes operations in terms of abstract data types, while having code organization based on concrete data types.
This definition spans everything from prototype-based OO to inheritance to interface polymorphism and generic functions, and various combinations thereof.
That having been said, I don’t know very many CS people or industry programmers who would give such a concise definition, unless they’ve seen OO done in say, Python, Java, JavaScript, Lisp, Haskell, Eiffel, Dylan, and C -- or at least enough other languages to see the OO forest distinct from the trees. Academics are likely to babble about a bunch of stuff that doesn’t matter, while industry folks are likely to babble about how cool OO is or that it’s just “how it’s done”.
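To make the definition above concrete, here is a minimal sketch of its interface-polymorphism flavor in Java (my choice of language; the shape example and every name in it are only illustrative):

    // A toy sketch of "operations invoked via an abstract data type,
    // code organized by concrete data type". Names are invented here.
    import java.util.List;

    interface Shape {                    // the abstract data type callers program against
        double area();
    }

    class Circle implements Shape {      // concrete data type: its code lives with its data
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Rect implements Shape {
        private final double w, h;
        Rect(double w, double h) { this.w = w; this.h = h; }
        public double area() { return w * h; }
    }

    class Demo {
        // The caller only ever names the abstract type; dispatch finds the concrete code.
        static double totalArea(List<Shape> shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area();
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(totalArea(List.of(new Circle(1.0), new Rect(2.0, 3.0))));
        }
    }

The totalArea caller never mentions Circle or Rect; that is the “operations in terms of abstract data types” half, while each concrete class keeps its own code next to its own data.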
I should probably be downvoted for OT-ness, but...
Object-oriented programming is the concept of designing code around simple, efficient, and reusable objects that can work together to accomplish a larger goal.
I think this characterization is more affective than descriptive.
Compare this against sequential programming, which is essentially a long list of code that is only useful for one specific task.
This seems to be a non-standard use of the term “sequential”, especially considering that most popular object-oriented languages are imperative, executing one statement after another in sequence. The usual comparison is against structured programming, which could possibly be described as the practice of designing code around simple, efficient, and reusable functions that can work together to accomplish a larger goal.
Fair enough. “Sequential” was a term from memory. I assume it came from my classes, but what you are saying makes sense. In all honesty, I am not one to define a term like OOP. I use it and know it, but mostly from the principle of “I know it when I see it” not from strict terminology.
“Sequential,” to me, did not mean simply executing one statement after another in sequence. It is a confusing use of the word, and possibly non-standard. I am not one for terms in general, so take my descriptions with a lot of salt.
As for OT, I figured that in a discussion about CS, CS is fair game. Especially this far down the thread. You’re good in my book.
In all honesty, I am not one to define a term like OOP.
I’m not a big fan of the terminology myself. That’s why I chose OO for the exercise… it’s presented (at least it was when I was in school) as this critically important concept/technique/whatever, but in common use the term doesn’t seem to denote much of anything. In comparison, I don’t see much of this happening in chemistry or physics.
I am not one for terms in general, so take my descriptions with a lot of salt.
I’d caution against throwing the baby (concise terminology) out with the bathwater (vague industry buzzwords) here. For example, I’ve seen a lot of cynicism directed at the term “ajax” because of its associated buzz, despite it having a relatively unambiguous, useful meaning. Good terminology facilitates effective chunking and communication.
For what it’s worth, in my experience about the only difference between OOP and old-school structured imperative programming is that OOP design revolves around semi-atomic, opaque chunks of data, with limited sets of operations that are allowed on a given type of data. In contrast, non-OOP imperative programming typically revolves around a hierarchical breakdown of the task into subtasks implemented as procedures, which the programmer invokes sequentially and passes smallish, transparent bits of data to.
The two are essentially isomorphic, but one or the other may be more natural depending on how well your problem domain decomposes into either 1) a series of subtasks or 2) a collection of self-contained data with a limited range of sensible actions.
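A toy illustration of that isomorphism, in Java (the payroll example and its names are mine, not anything from the thread): the same arithmetic, decomposed both ways.

    // 1) Procedural decomposition: transparent data, free-standing steps,
    //    the caller sequences the subtasks and passes the numbers around.
    final class Payroll {
        static double applyRaise(double salary, double pct) { return salary * (1 + pct); }
        static double applyTax(double salary, double rate)  { return salary * (1 - rate); }
    }

    // 2) OO decomposition: an opaque chunk of data plus the limited set of
    //    operations that are allowed on it.
    final class Salary {
        private double amount;                        // hidden; only these methods touch it
        Salary(double amount) { this.amount = amount; }
        void raise(double pct)       { amount *= (1 + pct); }
        double afterTax(double rate) { return amount * (1 - rate); }
    }

    class Iso {
        public static void main(String[] args) {
            double s = Payroll.applyRaise(50_000, 0.10);   // procedural version
            System.out.println(Payroll.applyTax(s, 0.25));

            Salary sal = new Salary(50_000);               // OO version, same arithmetic
            sal.raise(0.10);
            System.out.println(sal.afterTax(0.25));
        }
    }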
For instance, procedures like C’s standard string functions would be more natural in an OOP system because they define a limited set of sensible operations on a fragile data structure, whereas things like Singleton objects and static methods in OOP languages are a hack for things that are more sensibly non-OOP.
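On the static-method side, the usual giveaway looks something like this (again just an illustrative sketch, not from the thread):

    // A "utility class" of static methods is procedural code wearing OO syntax;
    // there is no object here at all. Used as MathUtil.hypot(3, 4), exactly like
    // calling a free function in C.
    final class MathUtil {
        private MathUtil() {}                         // never instantiated
        static double hypot(double a, double b) { return Math.sqrt(a * a + b * b); }
    }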
Most other details about OOP (like inheritance) will inspire more religious wars than anything else.
OOP proponents usually claim that structured programming projects become too complex for any individual or group to manage at around 100,000 lines of code, but the only references my Google-fu was able to dig up for that claim are twenty-some years out of date:
C. Jones, Programming Productivity, McGraw-Hill, New York, New York, 1986.
C. Jones, Editor, Tutorial Programming Productivity: Issues for The Eighties, Second Edition, IEEE Catalog No. EHO239-4, IEEE Computer Society Press, Washington, DC, 1986.
Object-oriented programming is the concept of designing code around simple, efficient, and reusable objects that can work together to accomplish a larger goal.
Removing the ‘fluff’ from this sentence, we get: “Object-oriented programming is designing code around objects,” which looks awfully close to a tautology.
Mmm… well, the “fluff” was there for a reason. “Simple” means easy to understand and not particularly complex; “efficient” means the object does one thing and one thing well; “reusable” means the object is not tied down into any particular infrastructure; “work together to accomplish a larger goal” means that an object is designed to work with other objects, not designed to solve a big problem. I suppose I could have expounded on the terms but I didn’t figure anyone cared enough.
I’m still not terribly convinced anyone actually cares enough.
The qualifiers (simple, efficient, reusable) distinguish good OO code from bad OO code; they have nothing to do with OO in general. Bad programmers will write object-oriented code that is complex, inefficient, and non-reusable. Likewise, “working together to accomplish a goal” applies just as much to subroutines in an imperative language or to functions in a functional programming language.
Programming is maths in a way. As evidence I give you the Curry-Howard correspondence.
I really wish my CS degree had included type theory and Coq and the like.
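For what it’s worth, the correspondence fits in a line or two; here it is in Lean rather than Coq, purely as an illustration:

    -- Curry-Howard in miniature: a function that takes a : A and f : A → B and
    -- returns a B is, read as a proof, exactly modus ponens.
    theorem modus_ponens (A B : Prop) (a : A) (f : A → B) : B := f a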
Hmm, parent is at −2. I would be curious how anybody could actually believe (and justify) that OOP