Agreed on where C is useful, and I got the same impression about its applicability to XiXiDu’s (where on earth does that name come from?!?) goals.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn’t meet your ‘minimalist’ ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I’ve converted to primarily using a language that relies on duck-typing.
I’m interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.
“Actually I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind.” —Alan Kay
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:
It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
Templates are a clunky, disappointing imitation of real metaprogramming.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
Combining error handling via exceptions with manual memory management is frankly absurd.
The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.
I could elaborate further, but it’s too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical “real” OO language, but I’d probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).
ETA: Well, that came out awkwardly verbose. Apologies.
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can’t prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
I’m sure I could manage 1k before I considered the point settled and moved on to a language that isn’t a decades-old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those, of course, eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals, then C++ will give you that over a broader area of nuts and bolts.
Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
This is the one point I disagree with, and I do so both on the assertion ‘almost uniformly’ and on the concept itself. As far as experts in object-oriented programming go, Bertrand Meyer is certainly considered one, and his book ‘Object-Oriented Software Construction’ is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are problems of implementation and poor language design, not something inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven’t gotten around to it yet.
I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn’t suggest for learning purposes, either.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Well, the problem isn’t really multiple inheritance itself, it’s the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn’t really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we’ve all seen trick questions about “okay, which method will this call?”). Something closer to a simple type predicate, like the interfaces in Google’s Go language or like Haskell’s type classes, is much less painful here. Or of course duck typing, if static type-checking isn’t your thing.
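A quick Haskell sketch of that type-predicate style (the Describable class and the shape types here are made up purely for illustration): the dispatch is driven entirely by the argument’s type, with no inheritance hierarchy in sight.

```haskell
-- Sketch only: the typeclass acts as a predicate ("this type supports
-- describe"), and the implementation is chosen by the argument's type.
class Describable a where
  describe :: a -> String

newtype Circle = Circle Double
newtype Square = Square Double

instance Describable Circle where
  describe (Circle r) = "circle of radius " ++ show r

instance Describable Square where
  describe (Square s) = "square of side " ++ show s

-- Generic code needs only the predicate, not a common ancestor class.
report :: Describable a => a -> String
report x = "shape: " ++ describe x
```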
Compositional code reuse in objects—what I meant by “implementation inheritance”—also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
Note that “multiple inheritance” makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it’s generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of “parent” types.
Consider the following types:
Tree structures containing values of some type A.
Lists containing values of some type A.
Text strings, stored as immutable lists of characters.
Text strings as above, but with a maximum length of 255.
The generic tree and list types are both abstract containers; say they both implement a mapping operation that uses a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there’s no shared implementation or obvious subtyping relationship.
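For instance, a rough Haskell sketch of that shared interface (the Mappable class and the constructors are invented for this example; in Haskell proper this role is essentially played by Functor): one projection interface, two unrelated implementations, and no subtyping anywhere.

```haskell
-- Sketch: one "projection" interface, two structurally unrelated containers.
class Mappable f where
  remap :: (a -> b) -> f a -> f b

data Tree a = Leaf | Node (Tree a) a (Tree a)
data List a = Nil  | Cons a (List a)

instance Mappable Tree where
  remap _ Leaf         = Leaf
  remap f (Node l x r) = Node (remap f l) (f x) (remap f r)

instance Mappable List where
  remap _ Nil          = Nil
  remap f (Cons x xs)  = Cons (f x) (remap f xs)
```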
The text strings can’t implement the above interface (because they’re not parameterized with a generic type), but both string types could happily reuse the implementation of the generic list; they aren’t subtypes of the list, though, because the list is mutable and the strings are not.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option.
Of course, but I’m more considering ‘languages to learn that make you a better programmer’.
I remain unconvinced that C++ has anything to offer in these cases;
Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens
I don’t agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma ‘multiple inheritance is bad’ and don’t allow generics enforce bad habits while at the same time insisting that they are the True Way.
and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
I think I agree on this note, with certain restrictions on what counts as ‘civilized’. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps Python too.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn’t do it in Java or .NET (except Eiffel.NET).
I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
Sometimes objects just are more than one type.
This argues for interfaces, not multiple implementation inheritance.
And implementation inheritance can easily be emulated by containment and method forwarding, though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that’s trivial in Smalltalk or Objective-C...
The hard part that no language has a good solution for is objects which can be the same type in two (or more) different ways.
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.
I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the possible risk of tetanus.
Upvoted purely for the image.
Eiffel does (in, obviously, my opinion).
It does handle the diamond inheritance problem as well as can be expected—the renaming feature is quite nice. Though related, that isn’t quite what I’m concerned with, and AFAICT it doesn’t handle that problem in a completely general way. (Given a type system you can drive a bus through (covariant vs. contravariant arguments), I prefer Sather, though the renaming feature there is more persnickety—harder to use in some common cases.)
Consider a lattice. It is a semilattice in two separate, dual ways: with the join operation, and with the meet operation. If we have generalized semilattice code and we want to pass it a lattice, which one should be used? And what if we want to use the other one?
In practice, we can call these a join-semilattice and a meet-semilattice, have our function defined on one, and create a dual-view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or …
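A rough Haskell sketch of that dual-view wrapper idea (the Semilattice class and the Join/Meet wrappers are invented for this example): the generic code is written once against a single semilattice operation, and the caller chooses which half of the lattice to expose by wrapping.

```haskell
-- Sketch: one semilattice interface; a lattice is used through either wrapper.
class Semilattice a where
  (\/) :: a -> a -> a                    -- associative, commutative, idempotent

newtype Join a = Join { getJoin :: a }   -- view a lattice through its join
newtype Meet a = Meet { getMeet :: a }   -- view a lattice through its meet

instance Ord a => Semilattice (Join a) where
  Join x \/ Join y = Join (max x y)

instance Ord a => Semilattice (Meet a) where
  Meet x \/ Meet y = Meet (min x y)

-- Generic semilattice code, written once against the single operation.
combineAll :: Semilattice a => [a] -> a
combineAll = foldr1 (\/)

-- getJoin (combineAll (map Join [3, 1, 2]))  ==  3
-- getMeet (combineAll (map Meet [3, 1, 2]))  ==  1
```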
There is a math abstraction called a monoid: an associative operator with an identity. Haskell has a corresponding typeclass; lists are an instance, with concatenation as the operator and the empty list as the identity. I don’t have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code.
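A minimal sketch of that generic use, relying only on the standard Monoid class (mconcat folds a list using whatever operation and identity the element type provides):

```haskell
import Data.Monoid (Monoid, mconcat)

-- Generic code written once against the monoid abstraction: it assumes only
-- an associative operation with an identity element.
joinAll :: Monoid m => [m] -> m
joinAll = mconcat

-- Lists are an instance, with concatenation as the operation and [] as identity.
flattened :: [Int]
flattened = joinAll [[1, 2], [3], [4, 5]]   -- [1,2,3,4,5]
```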
So, suppose we want to make Integer an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways, and there is no natural duality here we can take advantage of (as there was in the lattice example). The consensus in the community has been not to make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, and thus have trivial conversion costs. There is also a newtype for dual monoids, formalizing a particular duality idea similar to the lattice case (it switches left and right—monoids need not be commutative, as the list example should show). There are also newtypes that label Bools as using the operation “and” or the operation “or”; this is actually a case of the lattice duality above.
For this simple case, it’d be easy enough to just explicitly pass in the operation. But for more complicated typeclasses, we can bundle a whole lump of operations in a similar manner.
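Concretely, a small sketch using the actual Sum, Product, and All newtypes from Data.Monoid (the helper names totalOf, productOf and allTrue are mine): the wrapper selects which monoid instance of the same underlying values gets used.

```haskell
import Data.Monoid (Sum(..), Product(..), All(..), mconcat)

-- The same underlying Integers, two different monoids, chosen by the wrapper:
totalOf, productOf :: Integer
totalOf   = getSum     (mconcat (map Sum     [1, 2, 3, 4]))   -- 10, via (+, 0)
productOf = getProduct (mconcat (map Product [1, 2, 3, 4]))   -- 24, via (*, 1)

-- The Bool labels mentioned above: All combines with (&&), identity True.
allTrue :: Bool
allTrue = getAll (mconcat (map All [True, False, True]))      -- False
```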
I’m not entirely happy with this either. If you’re only using one of the interfaces, then that wrapper is damn annoying. Thankfully, e.g. Sum Integer can also be made an instance of Num, so that you can continue to use * for multiplication, + for addition, and so forth.
Sather looks interesting but I haven’t taken the time to explore it. (And yes, covariance vs contravariance is a tricky one.)
Both these languages also demonstrate the real (everyday) use for C… you compile your actual code into it.
I don’t think Sather is a viable language at this point, unfortunately.
Yes, C is useful for that, though C-- and LLVM are providing new paths as well.
I personally think C will stick around for a while because getting it running on a given architecture provides a “good enough” ABI that is likely to be stable enough that HLLs’ FFIs can depend on it.
I put C++ as a “learn only if needed” language. It’s extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++, determined by which features the writers are willing to use and which are considered too dangerous.