I must add that many of the objections I have to using C++ also apply to C, though the complexity-based ones obviously do not. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn’t suggest for learning purposes, either.
Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Well, the problem isn’t really multiple inheritance itself, it’s the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn’t really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we’ve all seen trick questions about “okay, which method will this call?”). Something closer to a simple type predicate, like the interfaces in Google’s Go language or like Haskell’s type classes, is much less painful here. Or of course duck typing, if static type-checking isn’t your thing.
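In Ruby terms, for instance, duck typing gives you exactly this kind of dispatch with no hierarchy at all. A minimal sketch (the shape classes here are invented for illustration):

```ruby
# Two unrelated classes; neither inherits from the other,
# yet both respond to #area, so either works below.
class Circle
  def initialize(r)
    @r = r
  end

  def area
    3.14159 * @r * @r
  end
end

class Square
  def initialize(s)
    @s = s
  end

  def area
    @s * @s
  end
end

# Dispatch is by capability, not ancestry: anything that
# responds to #area is acceptable, inheritance never enters it.
def total_area(shapes)
  shapes.sum(&:area)
end
```

There is no shared parent, no "which method does this call" puzzle: each object carries exactly one `area`, and the caller only cares that it exists.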
Compositional code reuse in objects—what I meant by “implementation inheritance”—also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby: importing desired bits of functionality into an object rather than muddying type relationships with implementation details.
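A minimal Ruby sketch of that idea (module and class names invented for illustration): a module carries the shared behavior, and otherwise unrelated classes import it.

```ruby
# A mixin: a bundle of reusable behavior with no place
# of its own in any type hierarchy.
module Describable
  def describe
    "#{self.class.name}: #{summary}"
  end
end

# Two unrelated classes import the same functionality.
# Each supplies the #summary the mixin relies on.
class Invoice
  include Describable

  def summary
    "unpaid"
  end
end

class Ticket
  include Describable

  def summary
    "open"
  end
end
```

`Invoice` and `Ticket` share an implementation without either claiming to be a subtype of anything; the reuse is invisible to their type relationships.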
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
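As a concrete (invented) example of how that goes wrong, consider a subclass that strengthens a method's precondition. Structurally it inherits; behaviorally it is not a subtype, because code written against the parent's contract can now fail:

```ruby
# The contract of Stack#push: always accepts a value.
class Stack
  def initialize
    @items = []
  end

  def push(x)
    @items << x
    self
  end

  def size
    @items.size
  end
end

# A "subtype" that strengthens push's precondition.
# The hierarchy asserts an is-a relationship that the
# behavior does not honor: an LSP violation.
class BoundedStack < Stack
  def push(x)
    raise "full" if size >= 2
    super
  end
end
```

Any caller that pushes three items onto what it believes is a `Stack` breaks when handed a `BoundedStack`; the class relationship promises substitutability that the contract denies.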
Note that “multiple inheritance” makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it’s generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of “parent” types.
Consider the following types:
- Tree structures containing values of some type A.
- Lists containing values of some type A.
- Text strings, stored as immutable lists of characters.
- Text strings as above, but with a maximum length of 255.
The generic tree and list types are both abstract containers; say they both implement a map operation, applying a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there’s no shared implementation or obvious subtyping relationship.
The text strings can’t implement the above interface (because they’re not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren’t subtypes of the list, though, because the generic list is mutable and the strings are not.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
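For contrast, here is a rough Ruby sketch of those relationships outside a single hierarchy (all names invented; duck typing stands in for the declared interface, a mixin for the reused implementation):

```ruby
# Reusable list behavior, pulled out as a mixin so the string
# can import the implementation without subtyping List.
module ListOps
  def length
    items.size
  end

  def at(i)
    items[i]
  end
end

# Mutable generic list: reuses ListOps and implements the
# projection interface (duck-typed #project).
class List
  include ListOps
  attr_reader :items

  def initialize(items)
    @items = items
  end

  def project(&f)
    List.new(@items.map(&f)) # same structure, new element type
  end

  def push(x)
    @items << x
  end
end

# Generic tree: implements the same #project interface with its
# own structure; no code shared with List, no subtyping.
class Tree
  attr_reader :value, :children

  def initialize(value, children = [])
    @value = value
    @children = children
  end

  def project(&f)
    Tree.new(f.call(value), children.map { |c| c.project(&f) })
  end
end

# Immutable string: imports ListOps for implementation reuse,
# but is deliberately NOT a subclass of the mutable List,
# and declares no #project.
class Str
  include ListOps
  attr_reader :items

  def initialize(chars)
    @items = chars.freeze
  end
end

# The length-limited string IS a behavioral subtype of Str:
# every LimitedStr can stand in wherever a Str is expected.
class LimitedStr < Str
  def initialize(chars)
    raise ArgumentError, "too long" if chars.size > 255
    super
  end
end
```

Each of the three mechanisms lands where it belongs: `#project` is a duck-typed interface, `ListOps` is imported implementation, and only `LimitedStr < Str` is an actual subtype.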
> Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option.
Of course, but I’m more considering ‘languages to learn that make you a better programmer’.
> I remain unconvinced that C++ has anything to offer in these cases;
Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
> and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens
I don’t agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma ‘multiple inheritance is bad’ and don’t allow generics enforce bad habits while at the same time insisting that they are the True Way.
> and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
I think I agree on this note, with certain restrictions on what counts as ‘civilized’. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps Python too.
> Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn’t do it in Java or .NET (except Eiffel.NET).