I've been thinking a bit about why closures are called closures, and not something else. It's mostly uninformed speculation – actually, looking it up would be the least entertaining option – but since I'm not trying to be canonical I think that's okay.
I looked to math for inspiration (closures as programming constructs basically come from Lisp, I think, which basically comes from Church's Lambda Calculus, which is rather more mathematical than the FORTRAN/RATFOR/Algol/C line of language development. Edit: I'm wrong, it seems; see btilly's comment). A set that's closed under a binary operator is one that you "can't get out of" – applying the operator to any two elements of the set will always produce another element in the set. There are other definitions on MathWorld that involve calculus or topology, but they seem to express basically the same idea.
Take, for example, the set of natural numbers and the addition operation. N is closed under addition – adding two nats will never give you a non-nat. More interestingly (barely), take the set {0,1} and the multiplication operation. {0,1} is closed under multiplication: the only possible products are 0×0=0, 0×1=0, and 1×1=1, all of which land back in the set.
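Just to make that concrete, here's a throwaway Perl check (the hash-as-set representation is my own choice, nothing canonical about it) that brute-forces every product of two elements and makes sure each one lands back in the set:

use strict;
use warnings;

my %set = map { $_ => 1 } (0, 1);    # the set {0,1}, stored as hash keys

for my $a (keys %set) {
    for my $b (keys %set) {
        my $product = $a * $b;
        die "not closed: $a * $b = $product\n"
            unless exists $set{$product};
    }
}
print "{0,1} is closed under multiplication\n";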
So basically, a set that's closed under an operation gives us a function, and all the data we'll ever need to evaluate that function. Hmm... that sounds familiar.
Now let's look at closures in computer languages, like, oh, let's say Perl. A closure in Perl looks something like this:
{
    my $x = 0;
    sub foo {
        # foo is closed over $x
        return $x++;
    }
}

What have we here? Well, foo taken by itself is just a function. Inside foo's scope, we see a reference to something called $x – but $x isn't declared inside foo's scope. foo by itself isn't closed; only the combination of foo and the enclosing scope's $x is closed. So, foo plus (union) $x is a closure.
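To watch the closure doing its job (this bit is mine, just for illustration), call foo a few times and note that $x survives between calls even though its enclosing block finished running long ago:

print foo(), "\n";    # 0
print foo(), "\n";    # 1
print foo(), "\n";    # 2 -- $x persists between calls because foo closed over it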
Now that we've started to talk about scopes and variables, we're drifting away from our simple pleasant mathematical model of closures as functions and sets: we don't care about foo's domain or codomain (which we cared about implicitly when we started talking about binary operators on sets) – we care about symbol table hits and memory reads and writes. (In a language without side effects, we get a bit closer to the mathematical ideal, which is one reason why functional-programming zealots tend to make such a big deal about assignments.) So here, a "closure" is "a function, and all the memory it needs to access when it's evaluated".
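Here's one way to see that "memory" framing (again, my own example, not anything from the entry above): two anonymous subs can close over the same lexical, and a write made through one is visible through the other.

my ($inc, $get);
{
    my $count = 0;                   # the shared memory both closures need
    $inc = sub { $count++ };         # writes to $count
    $get = sub { return $count };    # reads $count
}
$inc->() for 1 .. 3;
print $get->(), "\n";                # 3 -- both subs closed over the same $count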
Computer-language closures are also closely (heh) related to predicate-logic sentences with free variables, but I can't find any appropriate terminology so I'm not going there. :-)
{
    my $x = 'default';
    sub xify_foo {
        my $foo = shift;
        $x = $foo->$x(@_);
    }
}
$foo isn't a free variable
FoxtrotUniform on 2004-11-02T01:11:28
In your example, xify_foo by itself has one free variable: $x. $foo isn't free in xify_foo: it's given by the caller. Hence, xify_foo only needs to close over $x. Technically (at least by my way of thinking when I wrote the journal entry), the xify_foo closure has all the memory space it needs: $x in the closure, and $foo on the stack. That is, of course, nit-picking.
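For what it's worth, here's a made-up caller (the Widget package and its method names are my invention, purely to show where each variable comes from, and it assumes the block above has already been compiled): $foo arrives fresh as an argument on every call, while $x is remembered by the closure from one call to the next.

package Widget;
sub new      { return bless {}, shift }
sub default  { return 'coloured' }    # $x is 'default' on the first call
sub coloured { return 'plaid' }       # ...and 'coloured' on the second

package main;
my $widget = Widget->new;
xify_foo($widget);    # calls $widget->default, so $x becomes 'coloured'
xify_foo($widget);    # calls $widget->coloured, so $x becomes 'plaid'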
This is the sort of thing I was handwaving about when I mentioned predicate logic. I suppose I should've gone into more detail.
:-)
Closures were borrowed from the lambda calculus by Scheme circa 1975 or so, and after that they rapidly spread through other Lisp dialects. Now the only commonly used Lisp that does not support closures is Emacs Lisp. (Which also doesn't do the proper tail calls popularized by Scheme.)
As for why a closure is called a closure, I think that the phrase, "closes over the environment" is highly suggestive...