Nothing is something

Inspired by Sandi Metz’s video about the Null Object pattern and related things.

Language of integers without zero

Imagine you have a programming language with a type for non-zero integers of arbitrary length, denoted by the type int_t. That is, the language natively supports creating, storing, retrieving, representing, and producing such values as results of calculations.

What this language does not support natively is zero. How bad could such a limitation be? It really depends on what operations you would need to do with the numbers in your program.

If you only need to add and multiply numbers, you will most likely be fine. Sums and products of positive integers are positive integers, and numerical results of arbitrary magnitude are handled by the language, so we do not need to concern ourselves with them in our programs. As long as we can express our problem domain in positive integers and additions, multiplications and comparisons over them, we do not need to complicate our programs with anything extra.

The need for zero

But here comes a new requirement: our programs will have to use subtraction. When subtracting two numbers, you sometimes end up with a negative number or with zero as the result. Negative numbers are still covered by int_t, but our language does not implement zero, which means we have to handle the situation of a zero result in our programs ourselves. First of all, we need a way to record the fact that a calculation produced a zero. A data structure representing a “zero or a number” value would be the first step:

typedef struct {
    bool is_zero;
    int_t number; // only valid when is_zero == false
} enhanced_int_t;

From here, using it can go in two directions.

  1. We can leave the responsibility of checking for zeros to the clients of enhanced_int_t. In every place where a zero can appear as an input or a result, an if check has to be added to handle the special case. Which is to say, most of the code that does anything beyond a few trivial things has to be littered with handling of the corner case (see the sketch after this list).
  2. We can hide the decision making into an abstract data type, thus centralizing the handling of zeros inside a set of functions defined over enhanced_int_t.
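A minimal sketch of what the first direction looks like at a single call site, assuming hypothetical int_equal() and int_sub() primitives over the native int_t and two positive int_t inputs a and b (all names here are invented for illustration):

// Hypothetical client code for direction 1: every call site checks for zero itself.
enhanced_int_t difference;
if (int_equal(a, b)) {                 // a - b would be zero, which int_t cannot represent
    difference.is_zero = true;
} else {
    difference.is_zero = false;
    difference.number = int_sub(a, b); // safe: the result is guaranteed non-zero
}
// ...and the same dance repeats at every call site where a zero can appear.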

Abstract data type

Having an abstract data type means that the guts of the type may no longer be accessed by client code. Instead, we provide a number of functions/methods representing the basic operations over numbers: creation, addition, multiplication, subtraction, comparison, etc.

All of these functions receive values of enhanced_int_t (or references/pointers to such structures, depending on how the memory management model of the language works) and return other values of enhanced_int_t.

It is very convenient to require from the start that these methods may not mutate their inputs. We can then treat instances of enhanced_int_t as true values, i.e., things without identities but with a defined comparison operation.

In such a situation, our implementation needs to provide only a single allocated instance of enhanced_int_t representing zero: const enhanced_int_t zero = { .is_zero = true /* number left unspecified */ };.

Sort of a singleton, albeit benign because it is immutable.

All the operations should hide the special nature of zero from clients of enhanced_int_t. What they should provide is the expected semantics of a+b, a*b, a-b, and a==b. The comparison can still be used against “another” zero value obtained from elsewhere to detect when we have a zero result, but the necessity for that is now limited to far fewer places (instead of figuratively everywhere in the client code).
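As an illustration, here is a minimal sketch of how subtraction could centralize the zero handling. The names enhanced_sub, int_neg, int_equal and int_sub are assumptions of this sketch (primitives over the native int_t), not a prescribed API; zero is the shared constant defined above.

// Subtraction that hides zero handling inside the abstract data type.
enhanced_int_t enhanced_sub(enhanced_int_t a, enhanced_int_t b) {
    if (b.is_zero) return a;                                              // a - 0 == a
    if (a.is_zero) return (enhanced_int_t){ false, int_neg(b.number) };  // 0 - b == -b
    if (int_equal(a.number, b.number)) return zero;                      // the single shared zero value
    return (enhanced_int_t){ false, int_sub(a.number, b.number) };       // guaranteed non-zero
}

Client code can now call enhanced_sub without ever checking whether the result happens to be the special zero value.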

The need for infinity

Life is great again. But here comes another requirement: support integer division. Our existing enhanced_int_t can get one new operation to divide two enhanced (zero or int_t) numbers.

Given that the algorithm for division can be expressed in subtractions, multiplications and comparisons (plus table lookups for speed), we might not even need to look into the “guts” of the existing abstract data type: we can implement it using the already available operations over enhanced_int_t. Or, for the sake of performance, we might allow ourselves an exception and look into the internal representation; no big deal, as it changes nothing in what follows.

But we all know that one case is not handled: the result of division by zero is representable in neither int_t nor enhanced_int_t. So we have the same two alternatives: 1) pollute client code with checks for a “division-by-zero” result, or 2) hide them inside yet another abstract data type:

typedef struct {
    bool is_zero;
    bool is_infinity;
    int_t number; // only valid when is_zero == false and is_infinity == false
} enlightened_int_t;

The internal representation can vary. E.g., the two boolean flags can be compressed into a single tri-state enum. It does not matter as long as we can tell the two “special” cases and the “normal” case apart.
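For example, one equivalent representation compresses the two flags into a single tag (the names below are invented only for illustration):

// An equivalent internal representation using one tag instead of two flags.
typedef enum { KIND_ZERO, KIND_INFINITY, KIND_NUMBER } int_kind_t;

typedef struct {
    int_kind_t kind;   // which of the three cases this value represents
    int_t number;      // only valid when kind == KIND_NUMBER
} enlightened_int_alt_t;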

As before, we can supply clients with a set of methods over enlightened_int_t, now including division, that will handle the case of something being divided by zero. That division can return a new “special” value, infinity_int, of which we need only one copy in the whole program (even if we had more, the clients would not be able to tell them apart, because we do not provide an identity comparison operation).
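A minimal sketch of such a division follows. The names enlightened_div, int_div (the native truncating division over int_t) and int_abs_less (magnitude comparison) are assumptions of this sketch, as are the shared zero_int and infinity_int constants; the rules chosen for the infinity operands are just one possibility, which the next section is about.

// Division that hides both special cases inside the abstract data type.
enlightened_int_t enlightened_div(enlightened_int_t a, enlightened_int_t b) {
    if (b.is_zero)     return infinity_int;           // anything / 0 -> the shared infinity value
    if (a.is_infinity) return infinity_int;           // infinity / finite -> infinity (one possible rule)
    if (a.is_zero || b.is_infinity) return zero_int;  // 0 / x and finite / infinity -> zero
    // The quotient of two non-zero finite numbers can still be zero (e.g. 1 / 2
    // truncates to 0), which int_t cannot hold, so the normal path re-checks for it.
    if (int_abs_less(a.number, b.number)) return zero_int;
    return (enlightened_int_t){ false, false, int_div(a.number, b.number) };
}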

Freedom of defining the rules

How would we redefine the operations to handle the new value of infinity_int? Here comes another benefit of using an abstract data type: we can define them in whichever way suits our clients best. We do not have to follow any prescribed rulesets of mathematics or logic (although knowing them should help us avoid introducing internal inconsistencies into the rules we create).

Possible examples of how to treat the new special case of infinity.

  1. Treat infinity_int as “the biggest number” and implement arithmetic with saturation: a + ∞ = ∞, a × ∞ = ∞, a - ∞ = ∞, ∞ == ∞ etc. (see the sketch after this list).
  2. Treat arithmetic operations with infinity_int as prohibited and throw an exception for any calculation attempt beyond the comparison operation. Users of these operations will only occasionally have to compare against infinity in the “normal” code paths, and mostly handle it in their exception handling logic.
  3. Support both cases above and offer clients one set of operations for “saturated” logic (no exceptions) and another set of operations for “finite” logic (an exception to represent accessing infinity).
  4. Support a combination of all above (e.g., only allow adding to infinity but not subtracting it, if that is what the business logic dictates), or anything else really, but in a centralized manner (e.g. log every situation when infinity and zeros are encountered in enlightened_int_t through a side channel).
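To make option 1 concrete, here is a minimal sketch of saturated addition over enlightened_int_t. The names enlightened_add, int_add, int_neg and int_equal are assumptions of this sketch, as are the shared zero_int and infinity_int constants:

// Option 1: infinity behaves as "the biggest number"; operations saturate to it.
enlightened_int_t enlightened_add(enlightened_int_t a, enlightened_int_t b) {
    if (a.is_infinity || b.is_infinity) return infinity_int;  // anything + infinity saturates
    if (a.is_zero) return b;                                   // 0 + b == b
    if (b.is_zero) return a;                                   // a + 0 == a
    // Two non-zero finite numbers may still cancel out to zero (e.g. 3 + (-3)).
    if (int_equal(a.number, int_neg(b.number))) return zero_int;
    return (enlightened_int_t){ false, false, int_add(a.number, b.number) };
}

The “finite” flavor from option 2 would differ only in the first branch: instead of returning infinity_int, it would signal an error through whatever mechanism the language offers.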

Now we have handled two special cases in a centralized manner. This frees the client code from repeated mundane checks. There will still be places where clients of the abstract data type have to handle a special case for specific values, but those will more likely be tied to the business rules of the client application, rather than to limitations of the language or machine these rules are implemented on.

Not only for number theorists

With a bit of imagination, the same approach can be applied to concepts with more internal structure than numbers (let’s pretend for a millisecond that numbers are easy and our business domain cases are hard).

A collection of purchase orders implemented as an abstract data type can be instructed to handle cases of empty or absent data in a centralized manner, e.g., by making them unrepresentable or by handling them according to the domain’s business rules (providing a safe default, throwing an exception, logging it, etc.). A database connection may have the special case of a lost connection represented by a “zero” connection object that returns empty results for all queries sent to it.
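As a sketch of the second example, such a “zero” connection can be just another object behind the same interface; all the names below (db_connection_t, query, lost_connection, etc.) are invented for illustration:

#include <stddef.h>

// Clients use a connection only through this interface and never inspect its internals.
typedef struct db_connection db_connection_t;
struct db_connection {
    // Returns the number of rows produced by the query.
    size_t (*query)(db_connection_t *self, const char *sql, void *rows_out);
};

// The "zero" connection: a perfectly valid object that answers every query with nothing.
static size_t lost_query(db_connection_t *self, const char *sql, void *rows_out) {
    (void)self; (void)sql; (void)rows_out;
    return 0; // an empty result instead of an error the caller must remember to check
}

static db_connection_t lost_connection = { lost_query };

// When the real connection drops, hand out &lost_connection instead of NULL;
// client code keeps calling conn->query(conn, ...) without any special checks.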

Language Null versus abstract data type’s Null Object

How is it better than representing a special situation as the NIL/NULL/None/nullptr/(void *)0 of many programming languages? Because, with a properly supplied null object of the target domain, we retain control over the behavior of operations applied to it. We can “teach” our own null object to react to them properly. Compared to that, the language-provided syntax for “no value” has fixed semantics: usually either throwing a “failed to dereference” or “does not have method” exception, or, in the case of C, undefined program behavior. Allowing language-provided primitive values to reach client code shifts the burden of checking onto that code. And there is much more client code using an abstraction than code implementing that abstraction.

Another benefit is that we can handle more than one distinct class of special situations for our data type separately. The language Null has to compress them into one thing, forcing the client code to disambiguate them.

Boolean or Enum are not much better than Null

Returning the language Null is not the only way to make maintenance of client code hard. Returning a non-opaque enlightened_int_t (i.e., allowing client code to inspect its internals) is almost as bad. Remember that accessing the number field inside it yields an undefined result unless the boolean flags for the special cases (is_zero and is_infinity) have been checked beforehand. It is very similar to returning a possibly-Null reference: dereferencing it or invoking a method on it before the nullness check has been performed causes undesirable effects. By exposing special cases to client code, we burden it with handling them.

In this regard, enums used to enumerate special cases can be seen as a generalization of a boolean flag that does not make the problem go away. It is still the client code’s responsibility not to forget to discriminate its behavior on the enum value.
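A sketch of the burden this places on a client, using the tri-state representation from above (print_value and the assumed print_int_t printer for int_t are invented for illustration):

#include <stdio.h>

// Every client that receives a non-opaque value has to repeat this dispatch.
void print_value(enlightened_int_alt_t v) {
    switch (v.kind) {
    case KIND_ZERO:     printf("0\n");          break;
    case KIND_INFINITY: printf("infinity\n");   break;
    case KIND_NUMBER:   print_int_t(v.number);  break; // assumed printer for int_t
    }
    // Forgetting a case, or reading v.number in the wrong branch,
    // is entirely on the client's conscience.
}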

On (im)mutability

While we sometimes have to have mutable objects to represent entities of the target domain and operations that change the internal state of already existing objects, objects for special cases benefit from being immutable. Immutability permits the optimization of using a statically allocated singleton object to represent the single null value, which in turn simplifies handling of identity comparisons.

But immutability requires designing an API that returns new objects (or references to objects) instead of mutating its inputs in place. That ensures that we can return a (reference to the) null object when an operation over some inputs leads to a special-case situation.

Sometimes it is not really possible to know how many “special” objects we will need (maybe a null object has to use/store a value that becomes known only at runtime). In this case, of course, the singleton approach to their allocation is not viable, and you will likely end up with multiple equal but not identical null objects alive at the same time. Provided they are still immutable and your code relies not on identity comparison but on value comparison, it will still be fine.
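A minimal sketch of such a value comparison for enhanced_int_t (the names enhanced_equal and int_equal are, again, assumptions of this sketch):

// Value equality: two zeros are equal even if they are distinct objects in memory.
bool enhanced_equal(enhanced_int_t a, enhanced_int_t b) {
    if (a.is_zero || b.is_zero) return a.is_zero == b.is_zero;
    return int_equal(a.number, b.number); // assumed comparison over the native int_t
}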

Postscriptum

I planned for this post to be about the Null Object pattern, but it turned out to become an essay about abstract data types. Or maybe about both. Huh.


Written by Grigory Rechistov in Uncategorized on 02.01.2024. Tags: null object, null object pattern, special case, abstract data types


Copyright © 2024 Grigory Rechistov