Sunday, July 17, 2011

Why inheritance for code reuse should be avoided

The question of when to use inheritance and when not to has been asked many times. I have noticed that the answers tend to fall into one of three categories:

1. Use it as much as you can. It's a convenient way to maximize code reuse with very few keystrokes. This answer is seldom given explicitly, but judging by the code I see, it is a widespread opinion.

2. There are situations when inheritance is appropriate, and situations when it isn't. Namely, inheritance is appropriate to express IS-A relationships.

3. Never use it for the purpose of code reuse. Some take it even further and say never use it at all. Inheritance is flawed by design.

Throughout my life as a programmer, I have gone through all of these stages, in order. I think most of the controversy focuses on inheritance for the purpose of code reuse, and I'll start with that.

Inheritance for the purpose of code reuse
The first answer is easy to discard, as explained in this answer on Programmers Stack Exchange. Systematically using inheritance quickly leads to excessively large objects. The author of the code probably won't see this as a problem at first, and may indeed remain oblivious to it, which makes the trap all too easy to fall into.

The second answer begs for an answer to the question "what is an IS-A relationship?". I find that all IS-A relationships can be expressed as HAS-A, and vice-versa. Taking the usual example of cars, we can say a car has four wheels, and it is a four-wheeled vehicle. A car has an engine, and it is an engine-powered vehicle. Which is the correct way of expressing relationships between cars and their components?
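To make the ambiguity concrete, here is a minimal F# sketch (the type names are mine, purely for illustration) expressing the same relationship both ways:

// IS-A: a car is an engine-powered vehicle.
type EnginePoweredVehicle() =
    member this.Start() = printfn "engine started"

type CarAsSubtype() =
    inherit EnginePoweredVehicle()

// HAS-A: a car has an engine.
type Engine() =
    member this.Start() = printfn "engine started"

type CarAsComposite() =
    let engine = Engine()
    member this.Start() = engine.Start()

Both versions expose the same functionality; nothing in the domain dictates which one is "correct".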

Instead of relying on IS-A vs HAS-A, one can use the Liskov Substitution Principle (LSP), as pointed out in this other answer.

Informally, the principle states that if B inherits from A, then B must be usable in all places where A is used.

This is a bit of a loose definition, and even the more formal definition by Barbara Liskov and Jeanette Wing stated below invites further questions.

Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.

What is the proof system assumed here? What kind of sentences can be expressed? Do the axioms include knowledge about the program?

Informally, how should "usable" be interpreted in the informal statement of the principle?

I don't think there is a single fit-all answer to this question. There are a number of aspects to consider, which all affect the initial development time and degree of maintainability of a software project.

Language Semantics.
Note that a language which allows querying the concrete type of an object makes it impossible to meet the LSP:

// f discriminates on the concrete type of its argument,
// so instances of B are not substitutable for instances of A.
let f (x : A) =
    match x with
    | :? B -> failwith "Failing!"
    | _ -> x.DoSomething()

Obviously, an instance of B cannot be used with f, as it would cause the program to behave differently than when provided with instances of A (or of some other subtype of A).

Notion of equivalence.
Saying that instances of B can be used instead of instances of A means that the behavior of the software when B is used is "observably equivalent" to its behavior when A is used.

There are different levels of strictness regarding equivalence. Including internal state in the observable behavior is a bad idea, as this forbids method overriding and polymorphism.
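For instance, here is a minimal sketch (the types and the caching idea are mine, for illustration) where an override is observably equivalent to the base implementation yet differs in internal state:

type A() =
    abstract DoSomething : int -> int
    default this.DoSomething(x) = x * x

// B returns the same results as A, but keeps a cache. If internal
// state were part of the observable behavior, this perfectly
// reasonable override would be ruled out.
type B() =
    inherit A()
    let cache = System.Collections.Generic.Dictionary<int, int>()
    override this.DoSomething(x) =
        match cache.TryGetValue(x) with
        | true, v -> v
        | _ ->
            let v = x * x
            cache.[x] <- v
            v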

Even restricting oneself to "user-visible events" may be too strict, as this closes the door to plug-ins that extend or modify functionality.

In the end, I think a popular but very weak notion is whether the program crashes or not. And even that may be too strong, as it's probably acceptable for instances of B to work where instances of A crash (OK, that's nit-picking).

Scope.
Does "all places where A is used" refer to the places as they exist in the current form of the software, in the current and expected future forms, or in all possible forms?

There is a bit of overlap with the first aspect. When assuming the scope is limited to the current state, and if no concrete type tests on instances of A are made anywhere, then meeting the LSP seems feasible.

Assuming the second alternative (current and expected future forms) requires either a clear idea of what the future looks like, or a coding standard prohibiting the use of all language constructs potentially incompatible with the LSP.

If you are designing a library or framework to be used by other parties you don't control, you are condemned to the last option, "all possible forms". Unless you have a low level of ambition on the notion of equivalence, the LSP is impossible to meet in this setting.

There may well be other aspects I haven't thought of, but I hope I made it clear that the LSP isn't quite as simple as it may seem at a first glance. Using inheritance only "when it conforms to the LSP" often equates to "never" if one really cares about the LSP. Moreover, it requires an understanding of the principle itself as well as clearly identifiable decisions on all the points mentioned above, two conditions which are not met that often out-there-in-the-real-world.

Note that it's possible and indeed desirable to specify limitations on language features, equivalence and scope on a per-type basis. Doing it on a project-wide level would probably make it impossible to respect the LSP.

Inheritance limited to interfaces
My position on inheritance limited to interfaces is not quite settled yet. I can't put my finger on the exact problem, so I'll leave it to another day to express my thoughts on the subject in detail.

In short, it seems to me this kind of inheritance brings little, if anything at all, over composition. The "keystroke minimality" argument applies only to subtyping between interfaces, which I have had little practical use for so far.
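For reference, this is what subtyping between interfaces looks like in F# (the interface names are mine, for illustration):

type IShape =
    abstract Area : float

// ICircle subtypes IShape: any ICircle can be used where an IShape
// is expected, with no forwarding code whatsoever.
type ICircle =
    inherit IShape
    abstract Radius : float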

Alternatives to inheritance
Composition is often cited as the right way to take advantage of code reuse. To put it mildly, mentioning it often fails to trigger the enthusiasm of proponents of inheritance for code reuse. It's easy to understand why:

type A() =
   ...
   member this.DoSomething1() = ...
   ...
   member this.DoSomething15() = ...

// Composition: every method must be forwarded by hand.
type OhForGodsSake_B() =
   let an_a : A = ...
   member this.DoSomething1() = an_a.DoSomething1()
   ...
   member this.DoSomething15() = an_a.DoSomething15()

// Inheritance: one line buys all fifteen methods.
type WhatABreeze_B() =
   inherit A()


Indeed, there is something to be said for the inheritance-based version. I wish F# had some way to achieve the conciseness of inheritance through composition, but it doesn't. This may be an oversight, or it may be by design. As Matthieu M. writes in his answer, forcing verbosity can be beneficial:

[It] means that at least a couple keystrokes are needed for each method, and suddenly people begin to think about whether or not it's such a great idea to reuse this "fat" class for just a small piece of it.

I would add that this would also force the author of B to think about whether all 15 DoSomething() methods really are needed in B, or if it's enough to provide just a few of them.
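A hedged sketch of what that pruning might look like (SlimB and the choice of forwarded methods are mine, for illustration): forward only the operations B's clients actually need, and keep the wrapped instance reachable for everything else:

type SlimB(an_a : A) =
    // Only the operations clients actually use are forwarded.
    member this.DoSomething1() = an_a.DoSomething1()
    member this.DoSomething2() = an_a.DoSomething2()
    // The rest of A stays reachable, but the dependency on the
    // "fat" class is now explicit at every call site.
    member this.Inner = an_a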

By the way, should the methods in B have the same names as their counterparts in A? If A is "VehicleWithWheels" and B is "Airplane", it makes sense to expose "VehicleWithWheels.Brake" as "Airplane.BrakeWithWheels", since airplanes have other means of braking. It's easy to miss that subtle point when using inheritance.

All-in-all, I don't think I should value my laziness and comfort over cleaner code, at least not in a professional setting. Still, I can't help being lazy... Programming isn't all about writing clean code, it's about having fun too.

When dealing with implementation relationships between concrete types and interfaces, I am happy to see them as "HAS-A" relationships, or more precisely (and verbosely) "OFFERS-FUNCTIONALITY-AS-SPECIFIED-BY-INTERFACE". This can be achieved in two ways in F#:

// Explicit implementation
type B1 =
   ...
   interface A with
      member this.DoSomething() = ...

// Composition using object expressions
type B2 =
   ...
   member this.AsA() =
      { new A with
           member x.DoSomething() = ... // Note "x" vs "this". "this" refers to B2.
      }

The first approach can be used when it's clear B1 can only provide A in one single sensible way at all times; the second approach is more general and flexible. One might wonder whether the second approach shouldn't always be preferred. Where do extremes end? ;)
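A usage sketch, assuming A is an interface with a single DoSomething method:

let useA (a : A) = a.DoSomething()

let demo (b1 : B1) (b2 : B2) =
    useA (b1 :> A)     // explicit implementation: upcast to A
    useA (b2.AsA())    // composition: request an A view explicitly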

Friday, July 15, 2011

OOP without classes in F#

Did you know that F# lets you use some object-oriented features with types that are not classes? These include methods, dynamic dispatch, open recursion and encapsulation.

Methods are functions or procedures which are tightly coupled to a type of object.

Dynamic dispatch is the ability to execute different implementations of an operation depending on the concrete types of arguments. A special case which often has dedicated syntax is when the operation is a method, and the argument controlling the dispatch is the object itself.

Open recursion makes the object available to its methods through an identifier, typically called this or self.

Encapsulation restricts access to the internal data and operations of an object to a designated area of the code. Typically this area is the set of methods of the object itself; F# works at the module level instead.

The code below shows how a vector type can be defined using records.

[< CustomComparison >]
[< CustomEquality >]
type Vector2 =
    private { x : float
              y : float }
with
    // Constructors
    static member New(x, y) = { x = x; y = y }
    static member Zero = { x = 0.0; y = 0.0 }

    // Accessors
    member this.X = this.x
    member this.Y = this.y
    member this.Length = let sq x = x * x in sqrt(sq this.x + sq this.y)

    // Methods
    member this.CompareTo(other : obj) =
        match other with
        | null -> nullArg "other"
        | :? Vector2 as v -> this.CompareTo(v)
        | _ -> invalidArg "other" "Must be an instance of Vector2"

    member this.CompareTo(other : Vector2) =
        let dx = lazy (this.x - other.x)
        let dy = lazy (this.y - other.y)

        if dx.Value < 0.0 then -1
        elif dx.Value > 0.0 then +1
        elif dy.Value < 0.0 then -1
        elif dy.Value > 0.0 then +1
        else 0
        
    // Overrides for System.Object
    override this.ToString() = sprintf "(%f, %f)" this.x this.y
    // Equals must return false for null or foreign types, not throw.
    override this.Equals(other) =
        match other with
        | :? Vector2 as v -> this.CompareTo(v) = 0
        | _ -> false
    override this.GetHashCode() = (this.x, this.y).GetHashCode()

    // Explicit interface implementations
    interface System.IComparable<Vector2> with
        member this.CompareTo(other) = this.CompareTo(other)

    interface System.IComparable with
        member this.CompareTo(other) = this.CompareTo(other)

    interface System.IEquatable<Vector2> with
        member this.Equals(other) = this.CompareTo(other) = 0

The use of private on the record's fields makes the representation (not the type itself or its methods) private to the enclosing module (thanks to Anton for pointing that out in the comments).
Members can access the object through an identifier (this), so open recursion is covered.
Vector2 can be used as one would expect to be able to use an object, as illustrated below.
Dynamic dispatch is exercised by Array.sortInPlace (which calls CompareTo).


let playWithIt() =
    let v = Vector2.New(0.0, 2.0)
    // Use the public accessors; the representation itself is private.
    let v2 = Vector2.New(v.Y, -v.X)
    printfn "%s and %s have %s lengths"
        (v.ToString())
        (v2.ToString())
        (if abs(v.Length - v2.Length) < 1e-6 then "close" else "different")
        
    let rnd = System.Random()
    let vs = [| for i in 1..10 -> Vector2.New(rnd.NextDouble(), rnd.NextDouble()) |]
    Array.sortInPlace vs
    printfn "%A" vs

I cannot write an entire article about OOP and not mention inheritance. F#-specific datatypes do not support inheritance, but I will try to show in a later post why this is not as big a problem as one might think.

Friday, July 8, 2011

Reference-counting just isn't enough and actually is harmful

I recently came across a blog entry titled "Why Garbage Collection is not Necessary and actually Harmful".

I shared that opinion in my C++ days, when I thought no better language could be written. A few years later, I have drastically changed my opinion on the subject. I am not going to address all the points made against garbage collection, and will instead focus on one issue: the claim that reference-counting is a viable alternative.

Reference-counting is an automatic memory management system where memory allocated to an object is freed when the object is no longer referred to. C++ makes it relatively easy to replace pointers by wrappers that act like pointers, with the addition that reference counts in objects are automatically increased and decreased, as wrappers are created and destroyed.

Reference-counting by itself is flawed, as it does not account for cycles. The aim of automatic memory management is to deallocate memory when it is no longer reachable from the program. Reference-counting deallocates memory when it is no longer reachable from reference-counted objects. The two coincide only in the absence of cycles (and provided a number of other conditions are met, e.g. all reference-counted objects are referenced through wrappers).
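A minimal C++ sketch of the problem, using std::shared_ptr as the wrapper (the same applies to boost::shared_ptr):

#include <memory>

struct B;

struct A {
    std::shared_ptr<B> b;
};

struct B {
    std::shared_ptr<A> a;
};

int main() {
    auto a = std::make_shared<A>();
    auto b = std::make_shared<B>();
    a->b = b;   // the two objects now refer to each other,
    b->a = a;   // so both reference counts are 2
    return 0;
    // On exit, both counts drop to 1 and never reach 0:
    // the cycle keeps itself alive, and the memory leaks.
}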

Edaqa says:

simple reference counting also covers the vast majority of memory management needs. Sweeping collection has a distinct feature of being able to resolve cyclic dependencies. That is when A points to B and B points to A. Reference counting cannot manage this in the general case.

Obviously if you have such a feature you will make use of it. I don’t see however that cyclic dependencies are a significant obstacle in most programs. Such object relationships can normally be resolved, or significantly mitigated, by one of a number of other techniques.

I strongly object to the claim that cyclic dependencies are not a significant obstacle in most programs. It may be so in version 1.0 of a program, but even that is stretching it. Reference-counting based memory reclamation can introduce enough chaos into a software product that it may have to be retired earlier than expected.

Potential cyclic dependencies are hard to detect.

It might seem at first that it should be possible, even easy, to detect cycles by looking at the class diagram. In practice, detecting loops involving more than 3 steps is, in my experience, tricky. Maybe not when you first design your classes, but certainly when you or someone else alters the design later. A developer who does not have the entire class diagram at his disposal or in his mind simply cannot know whether an addition introduces a cycle or not.

Actual cyclic dependencies are harder to detect.

Cycles at the class level are not necessarily problematic. They are moreover very common. Consider for instance the usual definition of lists:

template<typename T>
struct Node {
    T data;
    Node<T>* next;   // the type refers to itself: a cycle at the class level
};

Although it is in theory possible to create a cyclic list, this should never happen in any run of the program. How does one convince oneself that this is indeed the case? The problem is solved for lists through the use of a safe API, but more complex home-made types seldom offer one.

There is no easy way to deal with cycles.

In order to avoid memory leaks, cycles must be broken in one way or another. This involves manually decreasing the count of an object, or simply not accounting for a reference.

The problem is: which link in the cycle should one pick? Obviously, the link must be part of the cycle. Finding a candidate can be difficult by itself, for the reasons mentioned above, and there is another restriction: the candidate link must never be part of a cycle-free path in any run of your program, or you might end up deallocating objects that are still in use.
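The usual fix in the shared_ptr world, sketched here for illustration with std::weak_ptr, is to demote one link of the cycle to a non-owning reference; the difficulty discussed above is deciding which link can safely be demoted:

#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;   // owning link
};

struct Child {
    std::weak_ptr<Parent> parent;   // non-owning back link: breaks the cycle
};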

Update (May 2012): Jon Harrop points out in the comments that there are languages which prevent creating cycles, and that there are algorithms to detect and collect cycles. In this post I was mostly focused on C++ and the traditional way of doing ref-counting, namely using shared_ptr.

Conclusion

The real problem with reference-counting and avoiding cycles is that there is no good way to convince oneself of the absence of bugs through static analysis or inspection of source code. Absence of cycles is a dynamic property of program runs, and is therefore difficult to analyse. Testing might help, but to my knowledge, few write and maintain test cases specifically for the purpose of validating reference counting.

What makes matters worse is that it's very easy to mistake reference-counting pointer wrappers for substitutes to classic pointers. Systematically using the wrappers often ends up introducing memory leaks. Fixing them introduces dangling pointers.

Mark-and-sweep is no silver bullet, but at least it's a bullet aimed in the right direction.