On the bright side...

scrottie on 2010-03-26T06:36:40

My six month contract is up. It's been a busy six months.

I tell people that consulting is feast or famine. I've enjoyed the feast. I've been eating good food from the farmer's market. Before that, I killed a 25 pound sack of rice and a 50 pound one, along with crazy amounts of dried black beans. I've put on weight. I was down to 155 pounds for a while.

The firewall/gateway machine was apparently suffering some severe hardware problem that caused it to crash more and more frequently. I got it booting memtest only to see that memtest was reporting the same unhandled page faults that Linux was reporting. Daaamn. That Toughbook was replaced with another. I have to find somewhere to recycle a 20 pound blob of magnesium now. That's just one of my misadventures these guys had to endure; it's taken care of now, so the next guys won't have to.

Speaking of Toughbooks, poor old fluffy, the CF-R1, got replaced and retired, first with a CF-51 and then with a CF-73. The 51 proved too bulky to haul around under my own power all over creation. It actually worked the backpack zipper loose once and leapt out, hitting the sidewalk at a good speed (and survived, minus a chink in its armour). I could have pulled the 32 GB Compact Flash card out of fluffy and stuck it directly into the 51 and then the 73, but fluffy was still on a 2.4 kernel, and upgrading to 2.6 was one of the goals. 2.6 wasn't setting up AGP correctly on the thing, so I had to keep it on 2.4; glibc had long since passed 2.4 by, and that was making it difficult to use things like alien and klik to install software on it. That and the 256 megs of RAM made upgrading critical.

I decided to try to upgrade the existing OS and keep the software. That proved to be a huge time sink and a disaster. If I had it to do over again, I'd have just done a clean install. I used to be able to manage that, but low-level Linux deps have gotten far more complex. Worse, Slackware on these Toughbooks with the Intel WiFi hardware -- and this blows my mind, but it's perfectly replicable -- loses my packets in Chicago. I don't think it's the Intel WiFi either, though the firmware crashes constantly and constantly gets restarted. traceroute in Knoppix on the same machine shows almost no packet loss or lag; traceroute in Slackware on the same machine shows massive packet loss, terrible network performance, and high ping times. It may or may not impact wired connections. It may not really have anything to do with the router in Chicago; the WiFi stack may have just been systematically losing replies and leaning heavily on retransmits of unack'd packets. This problem proved disastrous. Trips to Seattle and Minnesota, as well as in-town coffee shops (often when the home network was down!), left me stranded without network.

An income gave me the chance to upgrade hardware, and that chance bit me in the ass. But, on the bright side, it's sorted out, and next go, I won't be fighting with this one. I should be able to squeeze a couple of years out of this machine. And I have a spare. These puppies cost me $60 each on eBay. I've been bashing the Linux users who act like Microsoft users, reinstalling the latest version of the OS at the first sign of trouble, but I have to give it to these guys for being plucky, jumping into battle with just the tools du jour, and doing a fantastic job of wielding them.

The thing with Slackware and Chicago reminds me of a certain FreeBSD NAT at a small company in Scottsdale that absolutely would not speak to a Windows webserver that one of their clients needed for work.

I got to spend some more time with jQuery and I love it. I almost hate to say it, but HTML (heh, DHTML) is turning into a first-class windowing toolkit. Compared to AWT or Swing or most things built on top of X, all of the redraw event and clipping stuff is hidden from you and still optimized, and HTML+CSS is far richer for describing a GUI than the XML that can be used to build GTK windows. HTML isn't a programming language, but it's a fantastic example of a declarative language nevertheless. It declares things. Perl does things, one at a time. Creating apps that run in a web browser feels like a terrible abstraction inversion, but I have to remember that these things change with time. Hell, Apples run "display PDF" and render HTML widgets in the toolbar. Anything is possible. I was playing with JavaScript in Netscape 1.2 (or was that 1.3?). Almost everything was arrays with numeric subscripts. Elements didn't have ids. You'd diddle document.forms[5]. Things were buggy beyond description. It's come a looong way, baby.

I got to spend some serious quality time with DBIx::Class. I tried hard to be open-minded, but this has really cemented my feelings about ORMs. SQL is a powerful, expressive language. We're living in an age that finally values domain-specific languages. Regex is one; it rocks. SQL is another. It rocks at declarative "programming" against relational datasets. Trying to replace SQL with Perl is dumb. That would be like trying to rewrite a good awk script in QBASIC. Or like writing a Perl program to, step by step, add visual elements to a screen (hey, that's what Java Swing does!). Sure, QBASIC is a more general-purpose language, but it does not do what awk does, at least not cleanly, and that's in the simple case. In the complex case, it's just downright painful. I know people don't like to mix Perl and SQL, but for chrissakes, we're working with HTML, JavaScript and CSS already, and probably lots of other things.

There are some useful abstractions in DBIx::Class. My stabs at abstracting DBI dealt with those rather nicely, I think. I should release some of that. I guess I was doubting that it's still relevant, but I think it is. One thing DBIx::Class does do that's neat is deploy the DDL (data definition) for you. If you deploy Perl somewhere and point it at a database, it'll create the tables, constraints, and so on. Sweet!

I described using DBIx::Class as reminding me of a Mr. Bean episode where he winds up trying to carry a sofa and other furniture home in a tiny car and has to rig up a system for steering, braking, and accelerating using broom sticks that he actuates from on top of the sofa on top of the car. All of the indirection did not help; it didn't even just get in the way; it made the job almost impossible, comically so. Rather than just using identifiers that exist in the database, relations get nicknames in the Schema files. With the auto-generated stuff, you're referring to foreign tables by using the name that that table uses to refer to the current table's primary key. So much of my hackery has been based on colliding my own design sense with others' and anticipating good designs that I find it almost impossible to anticipate a bad design; I have a hard time wrapping my head around anything I cannot fathom. But I'm now better educated in this department, enough to brush up my code and release it.
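
To make the gripe concrete, here's a tiny sketch of the same join done both ways. The two-table artist/cd schema is made up for illustration -- it's not anything from the job -- and the My::Schema classes are assumed to exist with a has_many relationship nicknamed "cds":

    #!/usr/bin/perl
    # Sketch only: plain DBI+SQL versus DBIx::Class, against a
    # hypothetical two-table schema (artist, cd).
    use strict;
    use warnings;
    use DBI;

    # Plain DBI: the SQL is right there and says exactly what it means.
    my $dbh  = DBI->connect('dbi:SQLite:dbname=music.db', '', '',
                            { RaiseError => 1 });
    my $rows = $dbh->selectall_arrayref(
        'SELECT a.name, c.title FROM artist a JOIN cd c ON c.artist_id = a.id',
    );
    print "$_->[0]: $_->[1]\n" for @$rows;

    # DBIx::Class: the same join, spelled through relationship nicknames
    # declared in the Schema classes. "cds" is whatever name the Artist
    # class gave its has_many relationship -- not a database identifier.
    # my $schema = My::Schema->connect('dbi:SQLite:dbname=music.db');
    # $schema->deploy;    # the genuinely neat part: creates the tables,
    #                     # constraints, and so on from the Schema classes
    # my @cds = $schema->resultset('Artist')
    #                  ->find({ name => 'Gary Numan' })
    #                  ->cds;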

Then I got to spend some quality time with git. I did sit down and read about its internal datastructures. Bootstrapping knowledge is hard. A lot of stuff on the Web is misleading. This happens with young technologies -- people who really shouldn't pretend to be experts do. Tutorials assume overly simplistic scenarios that get blown away when you're working with people who know what they're doing. You need to know a certain amount to be able to see signs that something is misleading or incorrect.

I think git's reputation stems from the sort of alpha geek who early-adopted git. These people get excited by cool technology but lack some human-to-human teaching instinct. They declare things to be "easy!" very readily and rattle off strings of commands that can easily go wrong if considerations aren't taken into account. They have no idea they're doing this. I'm generalizing from several git users I've been exposed to for some time, here. I think Perl users tend to fit this same class. We're so good at what we do, and we've been doing it for so long, that we forget the pitfalls that novices step into. Every problem is "easy!". We jump to give the correct bit of code but fail to communicate what's needed to conceptualize what's happening or to otherwise generalize the knowledge. To those on the outside, this creates the impression that the technology in question is overly fickle and overly complex -- somewhat ironically, since the attempt was to portray it as easy. Anything unpredictable and hard to conceptualize is going to seem "hard".

But I'm beginning to be able to conceptualize this thing. At least one individual in the git camp showed enough self-awareness here to communicate that understanding git's datastructures is the key to understanding git. git does awesome things, no doubt, and with power comes a certain amount of necessary complexity. That complexity, be it in git or Perl, cannot be swept under the rug.
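
And it really is a small model once you poke at it. Here's a throwaway sketch, runnable from inside any checkout, that uses the plumbing commands to walk the structure: a commit names a tree, a tree names blobs and subtrees, and that's most of it:

    #!/usr/bin/perl
    # Sketch: inspect git's datastructures with the plumbing commands.
    use strict;
    use warnings;

    chomp(my $commit = `git rev-parse HEAD`);
    my $object = `git cat-file -p $commit`;  # tree hash, parents, author
    print "commit $commit:\n$object";

    my ($tree) = $object =~ /^tree (\w+)/m;  # pull out the tree's hash
    print "\ntree $tree contains:\n";
    print `git cat-file -p $tree`;           # one line per blob or subtree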

Then there's Zeus. I spent a week, on and off, just looking all over creation for the WSDL, to use as a cheat to figure out what the data being sent to the thing was supposed to look like. Turns out that even though the Zeus site insists that it has it, it doesn't but it can be found in the most unlikely place -- the help screen of the Zeus machine itself. Even though Zeus's admin and API are implemented in Perl, there are no Perl examples in the docs of using the datastructures. The various resources that must get built to set up a functioning loadbalancer through the API are numerous, haphazard, and badly non-normalized. The documentation lists almost every function as taking (Char[] names, Char[] values). Bloody thanks a lot. Names of which other resource? What's valid for the values? Sorting out the API took a lot of the same sort of reverse engineering I was doing right around 2000 trying to puzzle out the numerous bank credit card processing gateways before Authorize.net came along, published some *good* documentation, and ran everyone else out of business overnight (even though Authorize.net ran on Windows and had outages that would sometimes be long enough that the credit card companies reversed the charges -- something like 5 days). It's always good to get the chance to work with a product that's *expensive*. I can play with Linux all day long but you have to get a job at the right place to get to touch certain things. I should do an online article detailing what I've learned.
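
For flavor, here's roughly what talking to the thing from Perl looks like with SOAP::Lite. The host, port, namespace, and method name below are illustrative, not gospel -- pull the real ones from the WSDL on the box's own help screen:

    #!/usr/bin/perl
    # Sketch only: a ZXTM-style SOAP call. Endpoint, namespace, and
    # method are illustrative; check the WSDL for the real thing.
    use strict;
    use warnings;
    use SOAP::Lite;

    my $soap = SOAP::Lite
        ->proxy('https://zxtm.example.com:9090/soap')     # hypothetical host
        ->uri('http://soap.zeus.com/zxtm/1.0/Pool/');     # namespace, from memory

    # The docs' dreaded (Char[] names, Char[] values) pattern: parallel
    # arrays of resource names and the settings to apply to each.
    my $result = $soap->call(
        'setNodes',                                       # hypothetical method
        SOAP::Data->name( names  => ['Intranet Pool'] ),
        SOAP::Data->name( values => [ ['10.0.0.1:80', '10.0.0.2:80'] ] ),
    );
    die $result->faultstring if $result->fault;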

Oh yeah. And I got to spend a little time with Nagios. The logic for sucking Perl plugins in is just cranky. It caches failure -- a classic anti-pattern -- and it doesn't signal failure in any useful way. I actually doubt that it knows the difference itself.
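
For reference, the contract a plugin is supposed to honor is dead simple -- one line of output and an exit code of 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN) -- which is part of why botching failure handling around it is so jarring. A sketch, with a made-up check and made-up thresholds:

    #!/usr/bin/perl
    # Minimal Nagios plugin sketch. All the failure signalling Nagios
    # understands is the exit code plus one line of output, whether the
    # plugin runs standalone or gets sucked into the embedded interpreter.
    use strict;
    use warnings;

    my $load = (split ' ', `cat /proc/loadavg`)[0];   # illustrative check
    if (!defined $load) { print "LOAD UNKNOWN\n";           exit 3; }
    if ($load >= 8)     { print "LOAD CRITICAL - $load\n";  exit 2; }
    if ($load >= 4)     { print "LOAD WARNING - $load\n";   exit 1; }
    print "LOAD OK - $load\n";
    exit 0;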

I have to thank these guys for investing in me to learn one thing after another. I wish I could repay that investment.

I haven't lost my touch in tickling bizarre bugs.

I practiced being nice, at the urging of friends. I'm still a deer in the headlights when confronted with hu-mans; I can never conceptualize what's going on in their heads, and I think my stress in facing them stresses them back out. It's like if you encounter an alien in the woods and you're all like "ARHGH!" and he's all like "ARHGGH!" and so you're all like "ARGHGH!". It's just no good. Even being nice, I have to learn how to distribute warm fuzzies. My normal model seems to be to antagonize people into answering questions. If things don't make sense, I tease people for being apparently silly. People *hate* being unintentionally, apparently silly, so this is a fantastic way to get them to answer things -- they stop what they're doing and vigorously explain away. Putting down that tool was hard. Learning other tools, I expect, will also be hard. Anyway, this was a welcome chance to experiment with that one.


Zeus zxtm

jjore on 2010-03-27T18:11:31

I use Zeus zxtm's SOAP API. IIRC, when I look in the wrong place I get documentation like "this is a char[]". The SOAP API documentation does actually get around to naming what the available keys are. Thus far I've never actually had to resort to reverse engineering because the entire API really did seem to be documented (if lightly fragmented).

I'm using WSDLs from Rake in Ruby. Works fine.

Nagios

thickas on 2010-03-29T08:51:22

Yeah, I did it.

At least all the bad bits (in the embedded stuff released with Netsaint 6.x and Nagios 1-3.x).

A local .au guy solved a problem (of wasting a copy after a fork) for Perl plugins. I think he did a good thing.

Unfortunately, I couldn't (and still can't) see how to preserve the existing REPL semantics (ie schedule a plugin, fork, exec the plugin and return whatever exec() returns to Nagios) and not re-eval the Perl plugin each time it's scheduled.
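
Roughly that, in miniature (plugin path and args illustrative):

    #!/usr/bin/perl
    # Sketch: schedule, fork, exec the plugin as an ordinary program,
    # and hand its exit status back. This is all Nagios gets: 0-3.
    use strict;
    use warnings;

    my $plugin = '/usr/local/nagios/libexec/check_ping';  # illustrative
    my @args   = ('-H', 'localhost', '-w', '100,20%', '-c', '500,60%');

    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {
        exec($plugin, @args) or die "exec: $!";  # child becomes the plugin
    }
    waitpid($pid, 0);
    printf "plugin exited %d\n", $? >> 8;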

Also, I was stupid enough to think that, with no adequate knowledge of Perl internals and in the face of well-known warnings (eg leaks from embedded interpreters), I could offer something.

As you say, there are many people who should shut up. I am definitely one, so I don't know why I am bothering with this.

Other than to say that a published author and a thoughtful man like yourself could do a much better job.

Ethan has never declined a patch for embedded Perl. Go right ahead.

Cheering!!

Lastly, on some of the other remarks, here's what Gary Numan is quoted as saying in http://en.wikipedia.org/wiki/Gary_numan#Personal_life "Polite conversation has never been one of my strong points. Just recently I actually found out that I'd got a mild form of Asperger's syndrome which basically means I have trouble interacting with people. For years, I couldn't understand why people thought I was arrogant, but now it all makes more sense."

Re:Nagios

scrottie on 2010-04-09T22:40:21

I'm sorry, you did what now? You wrote the logic that sucks plug-in code into Nagios? I'm confused.

REPL means something other than "schedule a plugin, fork, exec the plugin, ...". REPL means read, eval, print, loop. That's the style of environment provided by old BASIC interpreters that accepted a command, ran it, and printed the output. Forth and a number of other systems are famous for it, and Python does it too (as a default when run without other arguments).
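
The whole pattern is a few lines of Perl -- a toy version, obviously:

    #!/usr/bin/perl
    # A toy read-eval-print loop, for illustration.
    use strict;
    use warnings;

    print "> ";
    while (my $line = <STDIN>) {              # read
        my $result = eval $line;              # eval
        if ($@) { print "error: $@" }         # print (the P isn't "parse")
        else    { print defined $result ? "$result\n" : "undef\n" }
        print "> ";                           # ...and loop
    }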

Did I say that many people should shut up? It's quite possible (not being sarcastic) but I don't remember saying that in this context.

Okay, I'm taking that bold "much" as sarcastic. Sorry if I hit a nerve.

Everyone has Asperger's. Technology seems to induce it and we're all surrounded by technology.

I don't remember what I wrote about Nagios and I don't care to go look, but I think it was along the lines of "I had trouble with it and couldn't figure one thing out and couldn't find good diagnostics".

The unavoidable situation is that 90% of the people who use your software have no ability to improve it; of the remaining 10%, 90% have no personal or professional interest or motivation; and of those remaining, 90% don't have the time. I have neither the time nor the interest nor the motivation. That's not a reflection on Nagios. That doesn't mean it's not worthy of someone's time and attention. I'm simply busy with other things. But that's not going to stop me from remarking on my experiences with it, especially when talking about it in the context of what I've been doing -- and trying to do -- lately.

Talk is cheap and easy. Treat it as such.

Part of the reason for my writing that was reaffirming to myself that, despite my struggles, I'm working on marketable technologies. Nagios is often requested. Commercial or free, people who can master the hard parts of desired technologies have something to offer.

You said you were advised against the approach taken. Treating plugins as ordinary programs muddles the semantics. The mechanism for reloading them is ill defined. What they are and aren't allowed to do is ill defined. Etc. You're aware of this. Don't be surprised if people happen to notice it ;)