Last night, Paul Ceruzzi came to talk to the DC Perl Mongers. His topic, appropriately enough, was the history of computing. (His book, A History of Modern Computing, is about to be updated with a second edition covering important events through 2001.)
Towards the end of the evening, one of the people in the group asked why computing didn't start before the 1940s. After all, Charles Babbage was doing his work in the 1850s, and Herman Hollerith had automated the 1890 census with punched cards. So why didn't computing take off until Alan Turing entered the scene?
It turns out the answer is quite interesting, and it sheds light on just how profound Turing's contributions to society were. Prior to Turing, innovations in calculating machines were focused on alleviating the tedium of calculation. Babbage used clockwork to compute numbers accurately, and Hollerith used plugboards to automate sorting and counting. Both of these gentlemen worked with a base-10 representation of the world, and both built machines that were configured at the start of a computation (setting the columns in Babbage's case, wiring the plugboard in Hollerith's) and then allowed to run until completion.
In the early 20th century, electronic switching devices such as the vacuum tube entered the scene. However, those tinkering with computing in some way, shape, or form were still laboring under the paradigm that "computing machines" were devices configured at setup time that dealt in base-10 values.
This was still true through the end of WWII. I forget exactly what Konrad Zuse was doing, but I'm pretty sure he hadn't hit upon the concept of a stored-program architecture. J. Presper Eckert and John Mauchly were using vacuum tubes and other electronic switching devices to ferry base-10 values around ENIAC (which was "programmed" by connecting switches at the start of a computation). Much of the work on the Harvard Mark I was on hold during the war, but I recall that it, too, worked from a configurable start state and shoved around base-10 values (for creating tables of Bessel functions; when was the last time you needed to refer to one of those?).
So, what did Turing do that changed the world? He linked together a couple of seemingly unrelated chains of thought and added a few of his own. First, he gave meaning to seemingly meaningless base-2 arithmetic, letting engineers do interesting things with electronic switches and relays (and greatly simplifying the design and development of computing hardware). Second, he introduced the concept of a stored program that sat in the computer's memory. All of a sudden, computers became much more malleable, since it was now possible to save state and restore it without spending hours plugging wires into plugboards just to start a long summation, integration, or tabulation.
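Just to make the stored-program idea concrete, here's a toy sketch in Python. It isn't any historical machine's instruction set (the opcodes and memory layout are invented for illustration); the point is simply that the program lives in the same memory as the data, so changing the computation means writing different numbers into memory rather than re-plugging wires.

    MEM = [0] * 32

    # Invented opcodes, purely for illustration.
    LOAD, ADD, STORE, HALT = 1, 2, 3, 4

    # Program: add MEM[20] and MEM[21], store the result in MEM[22], halt.
    MEM[0:7] = [LOAD, 20, ADD, 21, STORE, 22, HALT]
    MEM[20], MEM[21] = 2, 3      # the data being summed

    def run(mem):
        pc, acc = 0, 0           # program counter and accumulator
        while True:
            op = mem[pc]
            if op == LOAD:       # acc = mem[address]
                acc = mem[mem[pc + 1]]; pc += 2
            elif op == ADD:      # acc += mem[address]
                acc += mem[mem[pc + 1]]; pc += 2
            elif op == STORE:    # mem[address] = acc
                mem[mem[pc + 1]] = acc; pc += 2
            elif op == HALT:
                return acc

    print(run(MEM))              # prints 5; poke different numbers into MEM to change the program

Because the program is just data, it can be loaded, saved, or swapped out as easily as the numbers it operates on, which is exactly the malleability the plugboard machines lacked.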
But you probably knew all that already. :-)
"The question is why I did not use this concept in 1939 if I already knew about it. Well, at that time it would have been senseless to try to build that sort of machine, as the necessary facilities were simply not available. For example, storage capacity was not big enough to cope - an efficient program memory needs to be able to store several thousand words."
That's from Konrad Zuse, actually, from the second page.
Re:It's weird isn't it?
chaoticset on 2002-03-07T19:25:13
Speaking of self-modifying code, Malbolge is an interestingly bizarre language, courtesy of Ben Olmstead.