The Old OOP
As my friend Jacob Gabrielson once put it, advocating Object-Oriented Programming is like advocating Pants-Oriented Clothing. // Steve Yegge
I was taught Object-Oriented Programming (hereinafter “OOP”) long ago. Looking back now, it turned out that it may have done more harm than good to my software design skills. I had to re-learn everything the right way.
To summarize, here is why my education on OOP was problematic:
- Inheritance was given too much focus, while polymorphism was not covered enough.
- The language used was C++, which has non-virtual methods by default.
I will explain why these are problems shortly. But first I want to state an important observation: what I was taught was the “old OOP”, while what we have now is a “new OOP”. The differences between them are quite important.
OOP has changed
Of course I cannot put all the blame on my teachers. On the contrary, they did a good job of keeping me interested in computer science during my school days, despite the heavy competition from other subjects, such as physics and chemistry.
It is just that we, as an industry, have learned a thing or two since then. The very understanding of what OOP stands for has changed during the last three decades.
Unfortunately, the bad designs that the “old OOP” gave life to have scarred me and many programmers I know. Some of them even decided to “never again” use OOP, or to be wary of SW systems that claim to be OOP.
I am teaching such “old-timers” that it is time to re-learn objects. I am trying to break the negative predisposition towards these three letters, to show that modern OOP is different. And yet, it is still just another tool applicable in some situations, and that is why you have to learn it anew. Doing so will give you the freedom to choose how, when, and why to apply these design principles, and when to abstain from them.
Evidence of the change
But that wasn’t real communism! // Twitter, maybe
But has the understanding of OOP really changed over time?
Looking at the evolution of programming languages, I’d say it definitely has. Compare the mainstream programming languages created 30 years ago with those created in the last decade. The “mainstream” part here is important, because even 30 years ago there were languages that exposed “proper OOP”; it is just that very few people used them.
Many things that were optional are now default and vice versa; certain things are now prohibited or made harder to express, and some things were made easier. The core concepts are the same, but important practicalities have changed. Specific examples that immediately come to mind:
- Namespaces (≈ encapsulation) are no longer tied to classes and are generally easier to use.
- Protection (≈ encapsulation) in hierarchies has become more balanced. For example, fewer languages offer the “protected” scope, and some of them even forgo the “private” scope. At the same time, you are no longer required to expose the guts of your class in the declaration of its interface.
- Inheritance has been clarified in many problematic cases, e.g. multiple inheritance, base-class code reuse versus interface inheritance, and including source headers (which transitively inherit everything they include) versus importing just the interface definitions.
- Polymorphism is more prominent and easier to use, arguably at the cost of higher risk of run-time “refused bequest” exceptions.
Inheritance was given too much focus
Even classic books on SW design advise preferring composition over inheritance. The problem is, they do not write it in LARGE ENOUGH LETTERS.
I believe this is a big reason why OOP sucks for so many people. We should explain it over and over again when re-teaching programmers. The “old OOP”, as I was taught it, put too much focus on behavior inheritance as a nifty way to reuse code, i.e. to reduce duplication. It rarely focused on the price that came with it: source code coupling.
The “new OOP” should explain how to achieve better designs by:
- Having flatter inheritance hierarchies.
- Using inheritance as a mechanism for declaring and enforcing interfaces.
- Avoiding using inheritance as a mechanism for behavior reuse unless it is absolutely proven to be better than alternatives.
Non-virtual methods are harmful to understanding OOP
Another misconception that I held for a long time is that objects and methods are essentially equivalent to data structures and the functions operating on them. Objects appeared to be syntactic sugar for keeping data and code together. Using C++ as the language to learn OOP only cemented this notion.
But a critically important detail is missing here.
Objects should contain function pointers, not functions. In other words, methods should be virtual. The external caller of a method does not know what exact effect the call will cause.

This simple indirection allows for essential complexity reduction in programs. All the type-dependent switches in every place a non-virtual method would be called turn into just calling a virtual method. There is no need to repeatedly make the same decision about which function to call for a given data structure at hand. The decision has already been implicitly made once, when constructing that object instance, and it travels around the program within that object.
This is a very big reason why teaching OOP using C++ is harmful. In C++, methods are non-virtual by default, meaning they do not define an interface that can be employed at runtime. Making a method virtual was taught as something you do not do in C++ unless it is absolutely unavoidable. Compare that to all the languages coming after C++ (starting with Java), and to many OOP languages predating C++, which make methods virtual by default.
Even if you look at a 100% pure C system, such as the Linux kernel, you will find widespread usage of virtual methods. The filesystem, memory management, and dynamic module subsystems all have many struct types which define data storage members alongside function pointers for behavior to be dynamically resolved at runtime.
Why was such a strange design decision made for C++? Apparently, C++ had to compete with C in speed in the 1980s. An indirect function call was considered too steep a performance hit at that time. Hence the “optimization by default” of having methods resolved at compile time.
The damage this decision did to the SW design habits of those who learned C++ as their first OOP language cannot be ignored, and it is clearly visible in the software we work with.
Mixing inheritance into it did not make things easier to understand either. For a long time, I thought that it was inheritance that gave virtual methods their value: because you could reuse the implementation of base-class methods. It turned out to be almost the opposite: virtual methods are valuable because you can call them on almost unrelated entities sharing no implementation, as long as these entities agree on the interface.
I guess this is what I mean by The Old OOP:
- Not recognizing non-virtual methods as the premature optimization they are.
- Too much focus on inheritance as a means of code reuse instead of as a way to define interfaces.
- Not explaining the role of polymorphism in reducing control flow duplication.