**OO vs Functional. The Differentiator.**

I've been reading the blogs of a guy called Steve Yegge lately - his blog posts are always *very* long and tend to ramble a lot... so in honour of that, this one's going to be very long too!

**Object modelling**

Question: In an object oriented program, is it correct to model a Square by "sub-classing" a Rectangle?

That is, suppose you're writing some sort of GUI program in some object oriented language such as C#. Is the following a good idea or a bad idea?

```
class Square : Rectangle
```

I was listening to an interview with Bob Martin (aka "Uncle Bob") where they talked at great length about this.

The problem is, your "Rectangle" class might look like this:

```
class Rectangle
{
    private int length;
    private int height;

    public void SetLength(int length) { this.length = length; }
    public void SetHeight(int height) { this.height = height; }

    // other methods
}
```

When your Square class extends Rectangle, it should always keep its length and height equal - otherwise it's not a square.

After much discussion about "The Liskov Substitution Principle", Uncle Bob concluded: "I would say that there's no reason to say that there's a relationship between squares and rectangles at all".

Seems a shame though that such a "natural" concept should not work in this case.

You might say that this is a rather artificially contrived example, and you'd be right. What about this then:

```
class Array<Triangle> : Array<object>
```

All triangles are objects (after all, everything is an object), therefore an array of triangles "is-a" array of objects. Sounds perfectly natural and correct. And indeed, if you take a look at the .NET library, they've made their arrays work this way - the following compiles without any warnings or errors:

```
object[] array = new string[1];
```

Yet, there's a problem. What if my next line says this?

```
array[0] = 2;
```

I've somehow managed to stick a number into my string array! Well, actually, I haven't - the code fails at runtime with an exception complaining that a string array can't accept an integer... But our intuition about correct modelling has failed us again.
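Java arrays behave exactly the same way, so the same trap is easy to demonstrate there too - a sketch (in Java, which the original version of my differentiator used; C# throws its own equivalent exception at the same point):

```java
// Java arrays are covariant, just like .NET arrays: the assignment below
// compiles fine, but writing an Integer through the Object[] view fails
// at runtime with an ArrayStoreException.
public class Covariance {
    public static void main(String[] args) {
        Object[] array = new String[1];   // compiles: a String[] "is-a" Object[]
        try {
            array[0] = 2;                 // autoboxes to Integer; not a String!
            System.out.println("stored");
        } catch (ArrayStoreException e) {
            System.out.println("ArrayStoreException");
        }
    }
}
```

The type checker lets the write through; only the runtime check on the actual array saves us.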

The problem here is **mutability.** I wanted to shout it aloud when listening to Uncle Bob going on and on.

An **immutable** square is an **immutable** rectangle. An **immutable** array of triangles is an **immutable** array of objects. The intuitive relationships *work* when we keep things immutable.

OK - so when writing OO code, it's best to try and keep all of your objects immutable - then, generally, you can just listen to your intuition and concentrate on writing correct code. For the last couple of years, I've been writing code with a strong preference for "immutable" objects, except in a few special cases.
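To make the intuition concrete, here's a minimal sketch (in Java; the class names are mine, not from the interview) of how the square/rectangle relationship becomes safe once the classes are immutable:

```java
// With immutable shapes there are no setters, so there's no way to stretch
// a Square into a non-square - the Liskov Substitution Principle holds again.
class Rectangle {
    private final int length;
    private final int height;
    Rectangle(int length, int height) { this.length = length; this.height = height; }
    int area() { return length * height; }
}

class Square extends Rectangle {
    Square(int side) { super(side, side); }   // the invariant is enforced at construction
}

public class ImmutableShapes {
    public static void main(String[] args) {
        Rectangle r = new Square(3);          // substitution is now safe
        System.out.println(r.area());         // 9
    }
}
```

Every operation a Rectangle supports, a Square supports with the same meaning, because nothing can be mutated after construction.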

**A symbolic differentiator**

A few years back, for a project at work, I ended up having to write a "differentiation" program. This is a program where you give it a symbolic formula, such as "f(x)=3x^{2}sin(x)", and then ask it "what's the derivative of f with respect to x?"...

df/dx = 6xsin(x) + 3x^{2}cos(x)

There are two aspects to this problem. One is "parsing" - taking a string representing the above expression and converting it into an expression "tree" which we can then use. The "parsing" bit is generally quite messy and actually orthogonal to the problem. In the case of what I was doing at work, there was no need to even write a parser - we were taking an existing structure and converting it directly into an "expression tree".

So I just started directly with what my "expression" tree needs to look like, and came up with something like this (the original program was in Java - I've done things this time in C# but there's really hardly any difference between the two as far as a program like this is concerned).

```
interface IExpression
{
    IExpression Differentiate(string wrt);
    string ToString();
}
```

We have two types of "simple" expressions: variables, and constants.

The derivative of a constant function with respect to any variable is always 0:

```
class Constant : IExpression
{
    private readonly string value;

    public Constant(string value) { this.value = value; }

    public IExpression Differentiate(string wrt) { return new Constant("0"); }

    public override string ToString() { return value; }
}
```

Whereas the derivative of a variable with respect to itself is 1... with respect to anything else it's 0:

```
class Variable : IExpression
{
    private readonly string name;

    public Variable(string name) { this.name = name; }

    public IExpression Differentiate(string wrt) { return new Constant(name == wrt ? "1" : "0"); }

    public override string ToString() { return name; }
}
```

The derivative of the sum is the sum of the derivatives:

```
class Sum : IExpression
{
    private readonly IExpression lhs;
    private readonly IExpression rhs;

    public Sum(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }

    public IExpression Differentiate(string wrt)
    {
        return new Sum(lhs.Differentiate(wrt), rhs.Differentiate(wrt));
    }

    public override string ToString() { return string.Format("({0} + {1})", lhs, rhs); }
}
```

And the derivative of the product of two expressions follows "the product rule":

```
class Product : IExpression
{
    private readonly IExpression lhs;
    private readonly IExpression rhs;

    public Product(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }

    public IExpression Differentiate(string wrt)
    {
        return new Sum(new Product(lhs.Differentiate(wrt), rhs),
                       new Product(rhs.Differentiate(wrt), lhs));
    }

    public override string ToString() { return string.Format("{0} * {1}", lhs, rhs); }
}
```

So our strategy is to build up a tree of expressions, differentiate the tree, and print out the results.

Here's the complete program, which is pretty close to what I used once before:

```
interface IExpression
{
    IExpression Differentiate(string wrt);
    string ToString();
}

class Constant : IExpression
{
    private readonly double value;

    public Constant(double value) { this.value = value; }

    public IExpression Differentiate(string wrt) { return new Constant(0); }

    public override string ToString() { return string.Format("{0}", value); }
}

class Variable : IExpression
{
    private readonly string name;

    public Variable(string name) { this.name = name; }

    public IExpression Differentiate(string wrt) { return new Constant(name == wrt ? 1 : 0); }

    public override string ToString() { return name; }
}

class Sum : IExpression
{
    private readonly IExpression lhs;
    private readonly IExpression rhs;

    public Sum(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }

    public IExpression Differentiate(string wrt)
    {
        return new Sum(lhs.Differentiate(wrt), rhs.Differentiate(wrt));
    }

    public override string ToString() { return string.Format("({0} + {1})", lhs, rhs); }
}

class UnaryMinus : IExpression
{
    private readonly IExpression arg;

    public UnaryMinus(IExpression arg) { this.arg = arg; }

    public IExpression Differentiate(string wrt) { return new UnaryMinus(arg.Differentiate(wrt)); }

    public override string ToString() { return string.Format("-{0}", arg); }
}

class Sin : IExpression
{
    private readonly IExpression arg;

    public Sin(IExpression arg) { this.arg = arg; }

    public IExpression Differentiate(string wrt)
    {
        // Again, we need to use the chain rule to take into
        // account the derivative of our function
        return new Product(new Cos(arg), arg.Differentiate(wrt));
    }

    public override string ToString() { return string.Format("sin({0})", arg); }
}

class Cos : IExpression
{
    private readonly IExpression arg;

    public Cos(IExpression arg) { this.arg = arg; }

    public IExpression Differentiate(string wrt)
    {
        return new Product(new UnaryMinus(new Sin(arg)), arg.Differentiate(wrt));
    }

    public override string ToString() { return string.Format("cos({0})", arg); }
}

class Product : IExpression
{
    private readonly IExpression lhs;
    private readonly IExpression rhs;

    public Product(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }

    public IExpression Differentiate(string wrt)
    {
        return new Sum(new Product(lhs.Differentiate(wrt), rhs),
                       new Product(rhs.Differentiate(wrt), lhs));
    }

    public override string ToString() { return string.Format("{0} * {1}", lhs, rhs); }
}

public class Differentiator
{
    // Some helper functions to make construction of expressions reasonably
    // simple.
    private static IExpression v(string name) { return new Variable(name); }
    private static IExpression c(double value) { return new Constant(value); }
    private static IExpression product(IExpression lhs, IExpression rhs) { return new Product(lhs, rhs); }
    private static IExpression sum(IExpression lhs, IExpression rhs) { return new Sum(lhs, rhs); }
    private static IExpression sin(IExpression arg) { return new Sin(arg); }
    private static IExpression cos(IExpression arg) { return new Cos(arg); }

    public static void Main(string[] args)
    {
        // 3*x^2*sin(x)
        IExpression e = product(c(3), product(product(v("x"), v("x")), sin(v("x"))));
        System.Console.WriteLine("{0}", e);
        System.Console.WriteLine("{0}", e.Differentiate("x"));
    }
}
```

I first wrote a differentiation program along the lines above about 4 years ago. At the time, I thought it was one of the most elegant programs I'd written in a while. I even thought it was a great example of object oriented style.

Since then, I've come to realise that there are quite a few problems with it:

- I kept having to remember to implement "the chain rule". Notably, I couldn't just write "d[cos(x)]/dx = -sin(x)"... I had to remember to multiply by the derivative of the argument itself.
- It seems like most of my code is constructing objects! I'm still amazed by how much code I've written that's like this:

```
class Product : IExpression
{
    private readonly IExpression lhs;
    private readonly IExpression rhs;

    public Product(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }
```

That's 7 lines of code that does nothing except to say "initialize the Product instance with its members".

I hadn't noticed it before - I was "blind" to it, but when writing OO code, over and over I'll go through the following steps:

- Declare a class, which is usually implementing some interface
- Declare the members of the class
- Write a constructor for the class that initializes those members
- Construct the class somewhere else in the code

It gets really tedious. I'm a pretty fast typist, but you reach a "terminal velocity" at which you can't actually type fast enough to keep up with what you're trying to write.

I recently had to write another quite recursive and complicated algorithm at work. It was a lot more complicated than this differentiator - it didn't have quite as nice and clean an abstraction behind it... I implemented it in a similar style - lots of immutable objects (immutable objects are good, remember?) - yet I found myself writing *loads* and *loads* of the same boilerplate "construction" code...

I've also been reading a lot about programming lately. Here are some statements I've come across recently:

I kind of already knew about the first idea - except I'd always seen things the other way around: closures were just a special case of objects, and objects were the more powerful of the two, because you can declare multiple methods on an object (Differentiate, Evaluate, etc.), whereas a closure only gives you one implicit "method" (if you view a closure as an object).

Here's one thing you can do with objects that you can't do with "functions that know their context". Suppose we also wanted to evaluate our expressions against some set of "variable bindings" - so that we could plot them, or something like that. It's very easy to just add another method to our "IExpression" interface:

```
double Evaluate(IDictionary<string, double> bindings);
```
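A sketch of what implementing that extra method might look like on a couple of the node classes (translated to Java here; the details are my own illustration, not code from the original program):

```java
import java.util.Map;

// Each node type implements evaluate alongside its other behaviour:
// a Variable looks itself up in the bindings, a Sum adds its children.
interface IExpression {
    double evaluate(Map<String, Double> bindings);
}

class Variable implements IExpression {
    private final String name;
    Variable(String name) { this.name = name; }
    public double evaluate(Map<String, Double> bindings) { return bindings.get(name); }
}

class Sum implements IExpression {
    private final IExpression lhs, rhs;
    Sum(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }
    public double evaluate(Map<String, Double> bindings) {
        return lhs.evaluate(bindings) + rhs.evaluate(bindings);
    }
}

public class EvaluateDemo {
    public static void main(String[] args) {
        IExpression e = new Sum(new Variable("x"), new Variable("x"));
        System.out.println(e.evaluate(Map.of("x", 2.0)));   // 4.0
    }
}
```

Each new operation slots in next to the old ones, dispatching on the same class hierarchy.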

But this can go wrong too. What tends to happen is that more and more methods get added to your interface:

```
interface IExpression
{
    IExpression Differentiate(string wrt);
    double Evaluate(IDictionary<string, double> bindings);
    void RenderToLatex(ILatexWriter latex);
    IExpression Simplify();
    void CollectVariables(IList<string> variables);
    string ToString();
}
```

Before long you realise that all you've really done is stumble into the "Visitor" pattern all over again.
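For reference, here's a minimal sketch of that "Visitor" shape (in Java; the names are illustrative, not from my original program). Each operation becomes its own visitor class instead of yet another method on the interface:

```java
// The Visitor pattern: the expression hierarchy exposes a single accept
// method, and each operation (evaluate, differentiate, render, ...) is
// packaged as a separate visitor with one method per node type.
interface Visitor<T> {
    T visitConstant(Constant c);
    T visitSum(Sum s);
}

interface IExpression {
    <T> T accept(Visitor<T> v);
}

class Constant implements IExpression {
    final double value;
    Constant(double value) { this.value = value; }
    public <T> T accept(Visitor<T> v) { return v.visitConstant(this); }
}

class Sum implements IExpression {
    final IExpression lhs, rhs;
    Sum(IExpression lhs, IExpression rhs) { this.lhs = lhs; this.rhs = rhs; }
    public <T> T accept(Visitor<T> v) { return v.visitSum(this); }
}

// Evaluation is one visitor; differentiation would simply be another.
class Evaluator implements Visitor<Double> {
    public Double visitConstant(Constant c) { return c.value; }
    public Double visitSum(Sum s) { return s.lhs.accept(this) + s.rhs.accept(this); }
}

public class VisitorDemo {
    public static void main(String[] args) {
        IExpression e = new Sum(new Constant(1), new Constant(2));
        System.out.println(e.accept(new Evaluator()));   // 3.0
    }
}
```

New operations no longer touch the node classes - which is exactly the point where "multiple methods on one hierarchy" stops being an advantage of objects.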

By this point, the advantage of being able to have multiple methods all following the same "hierarchy" becomes less obvious.

The second statement really threw me. Its author works on functional languages, and for him to make such a bold claim made me realise I needed to teach myself a functional programming language.

**Differentiator in Scheme**

So, with all this in mind, I've decided to teach myself Scheme. And I figured that this "symbolic differentiator" would be a good first program to write.

OK - so here it is.

```
; Join a list of arguments with the given operator symbol,
; collapsing single-element lists
(define join-sym
  (lambda (sym args)
    (if (eq? (length args) 1)
        (car args)
        (cons sym args))))

; Make a function which simply adds the specified symbol to the
; beginning of the list of arguments
(define symbol-op
  (lambda (symbol)
    (lambda args (cons symbol args))))

; Rules for differentiating different operators
; Associated with each symbol is a function
; The function should return a list of each of the derivatives
; of its arguments
(define diff-rules
  (list
   (cons '* ; d[xyz]/dx = yz  d[xyz]/dz = xy
         (lambda args
           (map (lambda (arg) (join-sym '* (remove arg args))) args)))
   (cons '-
         (lambda args ; d[x - y]/dx = 1, d[x - y]/dy = -1
           (if (eq? (length args) 1)
               (list -1)
               (list 1 -1))))
   (cons '/
         (lambda (x y)
           (let ((ddx `(/ 1 ,y))                ; d[x/y]/dx = 1 / y
                 (ddy `(/ (- ,x) (* ,y ,y))))   ; d[x/y]/dy = -x / (y * y)
             (list ddx ddy))))
   (cons '^
         (lambda (x y)
           (let ((ddx `(* ,y (^ ,x (- ,y 1))))  ; d[x^y]/dx = y . x^(y - 1)
                 (ddy `(* (ln ,x) (^ ,x ,y))))  ; d[x^y]/dy = ln x . x^y
             (list ddx ddy))))
   (cons 'sin (lambda (x) (list `(cos ,x))))
   (cons 'cos (lambda (x) (list `(- (sin ,x)))))
   (cons 'ln  (lambda (x) (list `(/ 1 ,x))))
   (cons '+   (lambda args (map (lambda (x) 1) args)))))

(define lookup
  (lambda (sym alist)
    (let ((spot (assq sym alist)))
      (if spot
          (cdr spot)
          (raise (format "No rule for symbol ~s" sym))))))

(define make-tree-evaluator
  (lambda (reducer terminal)
    (letrec ((evaluator
              (lambda (arg)
                (if (pair? arg)
                    (let* ((op (car arg))
                           (args (cdr arg))
                           (evaled-args (map evaluator args)))
                      (reducer op args evaled-args))
                    (terminal arg)))))
      evaluator)))

(define eval-rules
  (list `(+ . ,+)
        `(- . ,-)
        `(* . ,*)
        `(/ . ,/)
        `(sin . ,sin)
        `(cos . ,cos)
        `(ln . ,log)))

(define apply-op
  (lambda (rules op args)
    (apply (lookup op rules) args)))

(define make-differentiator
  (lambda (wrt)
    (let ((list-diff
           (lambda (op args diff-args)
             (let ((grad-args (apply-op diff-rules op args))
                   (inner-product
                    (lambda (x y)
                      (join-sym '+ (map (lambda (x y) (list '* x y)) x y)))))
               (inner-product grad-args diff-args))))
          (terminal (lambda (arg) (if (eq? arg wrt) 1 0))))
      (make-tree-evaluator list-diff terminal))))

(define make-evaluator
  (lambda (bindings)
    (let ((list-eval
           (lambda (op args evaled-args)
             (apply-op eval-rules op evaled-args)))
          (terminal
           (lambda (arg)
             (if (number? arg) arg (lookup arg bindings)))))
      (make-tree-evaluator list-eval terminal))))

(define differentiate
  (lambda (e wrt)
    (display (format "d/d~s ~s:" wrt e))
    (newline)
    (display ((make-differentiator wrt) e))
    (newline)))
```

Just to prove that it works, here's the output for a few expressions:

```
d/dx x:
1
d/dx (* x 2):
(+ (* 2 1) (* x 0))
d/dx (* x x):
(+ (* x 1) (* x 1))
d/dx (^ 2 x):
(+ (* (* x (^ 2 (- x 1))) 0) (* (* (ln 2) (^ 2 x)) 1))
d/dx (^ x x):
(+ (* (* x (^ x (- x 1))) 1) (* (* (ln x) (^ x x)) 1))
d/dx (^ x (/ 100 x)):
(+ (* (* (/ 100 x) (^ x (- (/ 100 x) 1))) 1) (* (* (ln x) (^ x (/ 100 x))) (+ (* (/ 1 x) 0) (* (/ (- 100) (* x x)) 1))))
d/dx (sin x):
(* (cos x) 1)
d/dx (cos (* x x)):
(* (- (sin (* x x))) (+ (* x 1) (* x 1)))
d/dq (* x y z q b):
(+ (* (* y z q b) 0) (* (* x z q b) 0) (* (* x y q b) 0) (* (* x y z b) 1) (* (* x y z q) 0))
```

There's more code there than actually executes - I put evaluation in there too alongside differentiation.

I was going to write more about all of the thought processes that went into this - but it's getting late and I have work tomorrow! So here are a few brief thoughts:

In earlier versions, I attempted to implement the "object oriented" dispatch mechanism described in articles such as this one. However, I concluded that it's cleaner to "look up" the function to execute based on the input symbol. This ties in with my "Visitor" comments above: when I'm "differentiating" the expression, there's one sort of behaviour I want to polymorphically dispatch on; when I'm "evaluating" the expression, there's another. So I don't think the "single-dispatch" mechanism in languages like Java and C++ buys you much for this particular problem.

If you take a close look at my "diff-rules", I've solved the problem of having to "remember" to implement the chain rule. Each of my differentiation "functions" only has to return its derivative with respect to each of its arguments. This is also known as the "grad" of that function. So, the "rule" for "sin x" simply has to say "cos x" - it doesn't need to recursively "differentiate" x and multiply the two together.
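The same table-driven design can be sketched outside Scheme too. Here's a hypothetical Java miniature (my own illustration - expressions as nested lists, a couple of operators only) in which each rule returns just the "grad", and the generic walker applies the chain rule exactly once:

```java
import java.util.*;
import java.util.function.Function;

// A miniature of the Scheme design: rules return the derivative of an
// operator with respect to each of its arguments; the recursive walker
// multiplies each grad entry by the derivative of that argument and sums.
public class TableDiff {
    static final Map<String, Function<List<Object>, List<Object>>> RULES = new HashMap<>();
    static {
        RULES.put("+", args -> List.of(1, 1));                      // d[x+y]/dx = 1, d[x+y]/dy = 1
        RULES.put("*", args -> List.of(args.get(1), args.get(0)));  // d[x*y]/dx = y, d[x*y]/dy = x
        RULES.put("sin", args -> List.of((Object) List.of("cos", args.get(0))));
    }

    static Object diff(Object e, String wrt) {
        if (!(e instanceof List)) {                // terminal: a variable or a number
            return e.equals(wrt) ? 1 : 0;
        }
        List<?> node = (List<?>) e;
        String op = (String) node.get(0);
        List<Object> args = new ArrayList<>(node.subList(1, node.size()));
        List<Object> grads = RULES.get(op).apply(args);
        // Inner product: sum over i of grad_i * d[arg_i]/d[wrt] -- the chain
        // rule, written once here instead of inside every rule.
        List<Object> sum = new ArrayList<>();
        sum.add("+");
        for (int i = 0; i < args.size(); i++) {
            sum.add(List.of("*", grads.get(i), diff(args.get(i), wrt)));
        }
        return sum;
    }

    public static void main(String[] args) {
        // d[x*x]/dx -> the same shape as the Scheme output: (+ (* x 1) (* x 1))
        System.out.println(diff(List.of("*", "x", "x"), "x"));
    }
}
```

Note that the `sin` rule only has to say "cos" - the walker supplies the multiplication by the argument's derivative.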

You'll notice too that I didn't even have to implement the "product rule". Actually, if I'd restricted my "product" rule to only accept two arguments, as I did in the C# example earlier, then it would have been almost comically trivial - "(lambda (x y) (list y x))". That is, d[x*y]/dx = y and d[x*y]/dy = x. The only reason the version in the full listing is more complicated is to handle the case of more than two arguments (the derivative with respect to each argument is the product of all of the other arguments).

It all follows from what's called the "multi-variate chain rule". That is, if you have a function of two or more things, which are themselves functions of something else, then the derivative of the overall function is nothing more than the sum of the products of the derivatives of each of the arguments:

d[f(x(t), y(t))]/dt = df/dx * dx/dt + df/dy * dy/dt
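For instance, taking f(x, y) = xy with x and y both functions of t, the multi-variate chain rule hands back the product rule directly:

```latex
\frac{d}{dt}\bigl[x(t)\,y(t)\bigr]
  = \frac{\partial(xy)}{\partial x}\frac{dx}{dt}
  + \frac{\partial(xy)}{\partial y}\frac{dy}{dt}
  = y\,\frac{dx}{dt} + x\,\frac{dy}{dt}
```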

This means that my differentiator automatically handles notorious expressions like "(^ x x)" (x^{x}), without any special coding required.

There's a whole elegant "vector" abstraction for doing calculus at this level... The "multi-variate" chain rule can really be thought about as the normal "chain" rule - but with "vectors" for the derivatives, and we're just performing the "inner product". It actually even generalizes further - you can have matrix derivatives, and even higher "tensor" derivatives. There's an elegant "tensor calculus" that can be used to generalize all of this... this is why Maths is so interesting and I need to learn more of it, but for now, because I don't really know much of what I'm talking about in this area, I'll stop!

**Conclusions**

I haven't really written enough Scheme to give a solid verdict yet. There is a place for associating "operations" with "types" - so I think an "object oriented" approach probably still has merit for some things... But I also think a language that **only** gives you objects and interfaces as a way to approach all programming problems is too deficient.

Although the initial learning curve was steep, I really liked the elegance of working in Scheme. It's important to get a good editor - I recommend PLT Scheme, which provides a nice GUI environment.

In the process of teaching myself I've been reading the classic Scheme text "Structure and Interpretation of Computer Programs" ... a book written nearly 30 years ago. It's incredible, yet a bit depressing, how many of the ideas you still see being "hyped" were already discussed in academia 30 years earlier.

Yet, there's a lot of interesting stuff happening right now. You can now write .NET applications in F#, which is based on OCaml. I recently heard a very interesting podcast interview with Brendan Eich, the inventor of JavaScript (a language that has gotten a bad rap).

More and more, I am coming around to the view that these strongly type-safe languages (Java, C#, C++) have been a practical diversion while compiler/interpreter technology caught up... hopefully in 10 years we won't be using or needing them. More elegant, more powerful and more efficient languages are out there, or will be soon.