It's amazing how often the same idea resurfaces in computing. Perhaps computing is so difficult to fathom because it exhibits a kind of fractal nature: small subpatterns repeating in larger contexts.
Here's a concrete example: what is the proper granularity for task X? Here are two usability scenarios for using an ATM:
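First, an exacting, step-by-step version (the specific steps here are illustrative):

1. Customer inserts a bank card.
2. System reads the card and prompts for a PIN.
3. Customer enters a PIN.
4. System validates the PIN and displays a menu of transactions: withdrawal, deposit, or balance inquiry.
5. Customer selects a withdrawal.
6. System prompts for an amount, in multiples of $20.
7. Customer enters an amount.
8. System verifies the account balance, dispenses the cash, and prints a receipt.
9. System returns the card; customer takes the card, cash, and receipt.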
Here's a more concise task description:
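(Again, the wording is illustrative:)

1. Customer identifies herself to the machine.
2. Customer chooses a transaction and supplies any necessary details.
3. System performs the transaction and returns the card.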
This is a long-winded example of use case modeling, as described by Larry Constantine in _The Peopleware Papers_ (yes, I'm still reading that book) and by Alistair Cockburn in _Writing Effective Use Cases_. The first use case is very exacting, but actually wrong in a few places. And if this seemingly precise use case ignores a step (how about balance transfers?), then a significant portion of the scenario needs to be updated.
The second use case, on the other hand, is much more high-level. It is at least as clear about what is going on (to someone who understands how ATMs operate), but it leaves out the very details that are quite specific, and possibly quite wrong. How difficult would it be to add a new type of transaction to this scenario? Not difficult at all.
Now let's turn to a more concrete example. Look at this Perl code, something that came up in a code review tonight at the DC.pm meeting:
open (FILE, "some-filename.txt");
while (<FILE>) {                      # read one line at a time
    chomp;                            # strip the trailing newline from $_
    push (@interesting_values, $_);   # collect the line
}
close (FILE);
That code is a trivial example of something written at the wrong level of granularity, much like the first use case. The while loop is unnecessary, and it actually obscures the intent of the code. Here are two possible ways to clean it up:
open(FILE, "some-filename.txt");
push(@interesting_values, <FILE>);   # read the whole file in list context
chomp(@interesting_values);          # strip the newline from every element
close(FILE);

Or, even more compactly:

open(FILE, "some-filename.txt");
chomp(@interesting_values = <FILE>);   # read and chomp in a single statement
close(FILE);
Now, all three versions of this particular code are reasonable. However, the last two take advantage of the fact that a file can be read in list context, and as a result they remove a control structure from the code. That makes the last two examples more Perlish; they might be less straightforward to the beginning Perl programmer, but to the journeyman Perl programmer the intent is clearer, because the code focuses less on how we want something done (the statements within a while loop) and more on what we want done (common Perl builtin functions getting the job done).
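As an aside, the same what-we-want-done style carries over to lexical filehandles and the three-argument open (available since Perl 5.6); here's a minimal sketch, reusing the filename from the example above:

open(my $fh, '<', 'some-filename.txt')   # lexical filehandle, three-arg open
    or die "Can't open some-filename.txt: $!";
chomp(my @interesting_values = <$fh>);   # read and chomp in one statement
close($fh);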
Tonight is the first time I've recognized this pattern in a new light. The issue of writing coarse-grained versus fine-grained use case scenarios is the same issue as overstating versus understating intent in code. Learning the right level of granularity takes time, and using it well is the mark of a journeyman programmer.
Re: Old habits die hard
ziggy on 2001-11-07T17:09:57
We've seen the while loop example so many times in so many different programming languages that it's become an "entrenched meme" of sorts. The difference with Perl is that it supports both the old C-style tell-the-computer-exactly-what-to-do-and-be-very-explicit-about-each-and-every-step approach to programming and the shorter, more concise get-this-done style of programming. There are advantages to the second style: it takes less code, and therefore conveys intent concisely instead of blindly yielding to the program counter.
Abstracting the details into a subroutine is one way to approach the problem, but the question remains: which details? Do we throw a while loop into a sub because it performs a specific, well-understood function? In this case, I would (and often do, in subs like read_lines($filename) or read_text($filename); a sketch follows below). But down that road lies madness, and subs like chomp_list_elements(@list), which merely re-create behaviors Perl already provides.
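Here's a minimal sketch of what such a read_lines() could look like; the name comes from the comment above, but the body is an illustrative guess:

sub read_lines {
    my ($filename) = @_;
    open(my $fh, '<', $filename)
        or die "Can't open $filename: $!";
    chomp(my @lines = <$fh>);   # slurp in list context, strip newlines
    close($fh);
    return @lines;
}

# The whole code review example then collapses to one line:
my @interesting_values = read_lines('some-filename.txt');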