Here I explain "tell, don't ask" in terms of when and why it's valuable, without resorting to the metaphors and simplistic heuristics often used in teaching object-oriented design.
One of the widely respected principles of object-oriented programming is "Tell, don't ask." In the unhelpful metaphorical language often used to describe OO design principles, it is said that, rather than asking objects about their state, you should tell them what to do: send them a message, don't ask them a query. Especially don't ask them a query and then use the results of the query to decide how to modify their state; if you're doing that, you should instead move the logic into that object.
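To make that concrete, here is a hypothetical sketch; the account object and its method names are invented for illustration. The "asking" version queries the object's state and then decides from outside how to modify it; the "telling" version moves that decision into the object itself:

// Asking: read the state, decide out here, then poke the new state back in.
if (account.getBalance() >= price) {
  account.setBalance(account.getBalance() - price);
}

// Telling: send one message and let the object apply its own logic.
account.debit(price);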
The problem with all of this is that, so stated, it doesn't have enough context. What are we trying to achieve with tell-don't-ask? When might the drawbacks of telling exceed the drawbacks of asking, if ever? How can you tell when you've missed an opportunity to tell instead of asking? What do you do when you think you want the benefits of tell-don't-ask but you can't figure out how to get telling to do what you want? That's what this post is about.
In the land of compiler implementation, especially the Scheme school, tell-don't-ask is known as programming in "continuation-passing style", or "CPS". It turns out that, if you have polymorphic method calls or closures, and you don't have a stack-depth limit (or you have tail-call elimination), there is actually no limit to the expressiveness of a program that never uses any return values. You can take a program written with return values and transform it, completely mechanically, into a program in which no function has a return value.
In a sense, this is obvious: machine code doesn't, generally speaking, have functions or return values, just subroutine call and return instructions. So to execute a program, a compiler must transform it, mechanically, into a form that doesn't need return values. Typically, return addresses and function arguments are pushed onto a stack in RAM, or stuffed into registers and then later placed on the stack. There are various ways to think of the resulting process. CPS takes the position that the return-address and local-variable data on the stack for the calling function is essentially a closure, which is invoked as the last thing that the callee does.
That observation shows how to transform a function call into a CPS function call: you take the rest of the caller, the part that runs after the call returns, and package it up into a closure that you pass to the function. For example, given this JavaScript function:
function add_numbering(parent, domnode) {
  parent.numbering[parent.numbering.length-1]++;
  var number = parent.numbering.join(".") + ") ";
  insert_before(text(number), domnode.firstChild);
}
Suppose we want to transform the concatenation of the trailing ") " into CPS style. It's an invocation of the built-in string-concatenation operation +, which yields the return value we are going to turn into a text node and insert into the document. A generic CPS version of + would take an additional argument that is a closure to which to pass the concatenation result:
function concat(a, b, k) { k(a+b) }
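Called in isolation (this call is purely illustrative and not part of the transformation), the result arrives in the closure rather than as a return value:

concat("1.2", ") ", function (number) { console.log(number); });  // logs: 1.2)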
And we can make the transformation by packaging up the tail end of the function like so:
function add_numbering(parent, domnode) {
  parent.numbering[parent.numbering.length-1]++;
  concat(parent.numbering.join("."), ") ", function(number) {
    insert_before(text(number), domnode.firstChild);
  });
}
If you've used Node.js, this should look really familiar; Node uses explicit continuation-passing style for its I/O functions. It does this because one of the freedoms CPS gives you (to be listed later!) is the freedom to suspend a computation and resume it later.
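For instance, reading a file with Node's fs module (the filename here is made up) passes the rest of the computation in as a callback instead of returning the file's contents; nothing blocks while the read is pending, and the closure is invoked later with the result:

var fs = require("fs");

fs.readFile("outline.html", "utf8", function (err, data) {
  // This closure is the continuation: the computation is suspended here
  // and resumed with (err, data) once the read completes.
  if (err) throw err;
  console.log(data.length + " characters read");
});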
Generalizing this, if you have some function call