I am not you. Please keep that in mind.

Ovid on 2008-08-29T15:52:52

You know, it's OK to suggest that I do X instead of Y. There's a good chance that I'll agree with you. However, there's also a damned good chance that I have a reason for doing Y. Those reasons might be legacy, environmental, personal, whatever. At the end of the day, if I insist upon continuing to do Y, deal with it. Why do people who don't feel obligated to help me still seemingly feel obligated to tell me what I'm doing wrong?

Case in point: I've had some people tell me, in no uncertain terms, that I shouldn't have written Test::Aggregate the way I chose to. They raised some fair points and had valid concerns. But I have a test suite which takes almost an hour without it and 21 minutes with it. I have reasons for this choice. We've sometimes been frustrated with it, but I don't think anyone on our team would give it up. If you don't like my decisions, you don't have to make 'em yourself.
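For context, the basic usage looks something like this (a minimal sketch based on the module's synopsis; "aggtests" is just a directory of .t files chosen for aggregation):

    use Test::Aggregate;

    # Run all of the .t files in aggtests/ in a single perl process,
    # avoiding the cost of spawning a fresh interpreter per test file.
    my $tests = Test::Aggregate->new( { dirs => 'aggtests' } );
    $tests->run;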

Some people don't understand why I don't sprinkle magical pixie programming dust and just make my code perform better. I challenge them to come here so we can find out what dust they're snorting. It's not always the case that you can just make things work the way you want them to. You have to compromise, and I am astonished at purists who readily acknowledge many areas where "good enough" is appropriate, so long as it's not an area they care about. Just because a problem isn't obviously NP-complete doesn't mean a perfect solution is always within reach.

We work in different environments. We face different challenges. They aren't always technical ones, either. It's OK to tell someone to "quit" if they're being asked to do something unethical. It's NOT OK to tell someone to quit just because their company is afraid to upgrade the MySQL database to the latest version. And yet this is the sort of asinine response I often see when someone is asking for help. (Ever think that the programmer might have a husband and kids and it's the only job in a small town?)

If someone needs help, it's OK to make suggestions, but for cryin' out loud, get off your damned high horse when you condescendingly tell them "you're doing it wrong".


Well said!

soulchild on 2008-08-29T16:41:51

There's this zero-tolerance attitude many coders exhibit that allows only perfect code. It might work in the lab, but as soon as you throw it into a production environment, problems surface. And often nobody's willing to change the implementation, because that would involve a workaround which might break the perfect design.

Telling somebody that the database they've been using for the last ten years is the culprit and that they should switch to another one (just to make this one module work correctly) is just plain ridiculous!

I have the feeling that the Perl community could learn a thing or two from the PHP folks, who seem to be better these days at just getting the job done.

Re:Well said!

Ovid on 2008-08-29T19:16:36

Funny you should mention PHP, because I was thinking about a blog post defending PHP programmers. At the end of the day, those people who rip on PHP (and its programmers) rarely bother to ask why it's so successful. Many people rail against the issues with PHP, but I doubt that most, if any, of them have produced perfect code. Hell, we're Perl programmers. We might defend the lack of method and subroutine signatures because "that method is deep in the system and can't get bad arguments", but that's a lousy excuse. Sometimes we need to just crank something out, and if perfection is your only criterion, I don't want you on my team.

If you could start all over, back in 1994

brian_d_foy on 2008-08-30T15:15:54

I still have in my mind some sort of graphical test tool for Perl, but I don't know quite what I want it to do or what new features, techniques, and consequences would come out of it.

Test::Aggregate certainly comes out of the consequences of mainstream Perl testing and the framework that most everyone uses. It's what works right now: "Do what you can, with what you have, where you are".

Without any goal or intention of actually implementing something new: if you could start all over with whatever test harness you wanted, what would you want it to do and how would you like it to work?

For instance, I'd love to have something that lets me cherry-pick tests. Testing the way we currently do, I can't do that. It's not a tool issue, because I've written all those .t files to fit the current tools and modules.

Re:If you could start all over, back in 1994

Ovid on 2008-08-30T18:45:28

Off the top of my head, there are a few things I would want. First, I'd want tests installed with modules. That would allow us to test our current installation, not just the module we intend to install.
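Something like this, say (a sketch only; the installed-tests path is hypothetical, since nothing installs tests there today, but the harness side already exists):

    use TAP::Harness;
    use File::Find;

    # Hypothetical layout: distributions install their t/ directories
    # alongside the modules, under a well-known tests/ root.
    my $installed_tests = '/usr/local/lib/perl5/site_perl/tests';

    # Gather every installed .t file ...
    my @tests;
    find( sub { push @tests, $File::Find::name if /\.t\z/ }, $installed_tests );

    # ... and run them against the *installed* modules.
    my $harness = TAP::Harness->new( { verbosity => 0 } );
    $harness->runtests(@tests);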

Second, I'd want a way to track tests over time. That would be difficult and I'm not sure how we would do it, but basically, if a test like some-cpan-module/t/foo.t fails in a particular way that we can live with and it fails in the same way after we upgrade other modules, we at least have a controlled environment. Right now, even if we could install the tests, we have no way of knowing whether particular failures are behavioral changes or whether they were always there. Yes, all tests should pass, but right now we don't even know whether those tests would fail, much less pass. If we knew they would fail, we could devise better strategies for dealing with them (that's why we have a "patches" directory at work which comes first in our @INC, thereby ensuring that our local changes are at least somewhat safer).
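The "patches" trick, for those who haven't seen it, is nothing more than @INC ordering (a minimal sketch; the path and module name are made up):

    # use lib prepends to @INC, so a patched copy of a CPAN module in
    # /work/patches is found before the version installed system-wide.
    use lib '/work/patches';

    # If /work/patches/Some/CPAN/Module.pm exists, it wins.
    use Some::CPAN::Module;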

As for cherry-picking tests, that's one of the reasons we want to add 'tags' to the next version of TAP. You could tag tests as 'database' and run only the database tests, if desired. Test::Class is very close to being able to do this now, but it's not quite there yet.
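You can approximate it with Test::Class today via its TEST_METHOD environment variable, which restricts a run to methods whose names match a regex (a sketch; the class and method names are invented):

    # Run only the "database" tests:
    #   TEST_METHOD='database_.*' prove -l t/my_app.t
    package My::App::Test;
    use base 'Test::Class';
    use Test::More;

    # Encoding a "tag" in the method name gives a poor man's tagging
    # scheme, since TEST_METHOD filters test methods by name.
    sub database_connects : Test(1) {
        ok 1, 'can talk to the database';
    }

    sub formats_report : Test(1) {
        ok 1, 'nothing to do with the database';
    }

    My::App::Test->runtests;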

For those who think that testing is simple, imagine trying to run and maintain a test suite for your entire installation. That significantly increases the difficulties involved, particularly after installing huge packages which might pull in hundreds of dependencies. Some of them (Jifty, I think) ignore dependency test failures on the theory that if their own tests pass, that's good enough. That potentially leaves people with broken modules installed and no clear way of dealing with it.

I might add that lately I've been making major changes to Test::Harness' App::Prove::State module. I'm giving it a proper API so that later I can subclass it and capture test run information in a database. This should provide interesting ways of dealing with some of these issues.
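Something along these lines, perhaps (purely a sketch; the subclass and its helper are invented, and the exact hook points depend on how the API shakes out):

    package App::Prove::State::DB;
    use base 'App::Prove::State';

    # Hypothetical: piggy-back on the point where prove saves its state
    # so each run is also recorded to a database for later analysis.
    sub commit {
        my $self = shift;
        $self->_record_run_in_database;    # invented helper, not a real method
        return $self->SUPER::commit(@_);
    }

    1;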