OSCON proposals are due in a few weeks; what should I do? I tend to undervalue my past work, assume nobody wants to hear about THAT again, and worry that my current stuff is incomplete. So some more objective suggestions are welcome.
"Simple Ways To Be A Better Programmer" was popular so I'll probably submit a tweaked version of that. Or maybe split it up into two tutorials, one about "code" and one about "people". I'm always surprised at how many people find the refactoring and effective version control stuff so... new.
I'm tempted to submit something like "Human Interface Design For Programmers" to run through how the basic principles of interface design provide a way to think about what is and is not good style and good APIs. There's too much "this feels right to me" going on resulting in a lot of non-productive arguments that cannot be resolved.
Something about testing, I guess. I've been reading Steve Krug's "Don't Make Me Think" and I'm very impressed that he wrote a book about the web without mentioning any code at all! Thus it is universal and timeless (uhh, in web years anyway). I've been pondering how to do that with testing.
I'm submitting a talk to a govt workshop in Norway about how CPAN illustrates a way to coordinate without centralizing. A grand talk on how CPAN works and why it's so damned awesome and why everyone else keeps screwing up their attempts at reimplementing it would be nice.
An idea I got from Josh Schachter that wasn't accepted at Pgh.pm might be fun at OSCON, "That Sucked". It's a "how I learned from failure" discussion, but if nothing else it's nice to have a bunch of gurus up on stage talking about how they fucked up, just like mere mortals. I'd love to run a session of that.
Something several people have asked me to do is a tutorial for a specific sort of newbie programmer: how to go from writing single-file programs to multi-file distributions, with all the necessary complexities that go along with it.
Uhh, what else? Not a whole lot of Perl in there.
One thing which surprises a lot of developers after they've been doing testing for a while is discovering that testing doesn't just make their code more reliable, it makes them a better programmer. Your functions tend to do less, but do it better. You learn to decouple things. You learn that your function with 13 arguments might be poorly designed. You learn how to write less code and get more done. You learn how to refactor safely. You design better APIs. You learn first-hand the pain of violating the Liskov Substitution Principle. As a result, a talk explaining the evolution of a TDD programmer might be worthwhile.
Testing best practices might also be good. Lots of people talk about how to test, but they ignore best practices while writing the tests themselves. There's also the curious thing that best practices for test code are subtly different from those for production code. For example, when writing code, you learn that keeping data out of your code is a good thing. However, your tests need fixtures, so the data lives right there. But how do you manage those fixtures? Keeping standardized fixture data in one place is a good refactoring, but it can resemble a strange "action at a distance" when running your tests. That's one of many areas where managing a large test suite can be cumbersome. (And no talk about "best practices for testing" would be complete without explaining the benefits of inherited tests.)
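To make the fixture point concrete, here's a minimal sketch of the "standardized fixture data in one place" idea. The names (`%FIXTURES`, `fixture`) are invented for illustration, not from any particular module:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical centralized fixture store: one place to change test data,
# at the cost of the data living far from the tests that use it.
my %FIXTURES = (
    customer => { name => 'Jane Doe', balance => 100 },
);

sub fixture {
    my $name = shift;
    # Hand out a shallow copy so one test can't corrupt another's data.
    return { %{ $FIXTURES{$name} } };
}

my $customer = fixture('customer');
is $customer->{name}, 'Jane Doe', 'fixture provides the standard data';

$customer->{balance} -= 40;    # mutate our copy...
my $fresh = fixture('customer');
is $fresh->{balance}, 100, '...but each test gets a pristine copy';
```

The copy-on-fetch is the part that prevents the worst of the action-at-a-distance: tests share the definition of the data, not the data itself.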
Re:Talks I Want to See (if not give)
schwern on 2008-01-10T12:26:20
Yes, something that shows the symbiotic relationship between tests and coding and debugging would be useful. Not sure how to formulate that.
I'd never heard of the Liskov Substitution Principle, but after reading the Wikipedia article it sounds like it's related to the protocol good neighbor principle "be lax in what you receive, strict in what you output".
Re:Talks I Want to See (if not give)
Ovid on 2008-01-10T13:06:52
Liskov is something that confuses a lot of programmers because it's often explained in arcane ways. Consider the explanation from the Wikipedia link you provided:
Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T.
Well, that's true, but since most of us aren't computer scientists, it can be confusing, particularly since many programmers don't realize that classes are merely types and operators are merely shortcuts for methods or messages.
One easy way to explain it is to state that any place you can use a class or an instance of a class, you should be able to drop in a subclass or an instance of a subclass and still get correct results. In other words, the subclass should still present the same interface as the superclass while still being able to extend it.
That's not to say that you can't change the behavior of an overridden method, but you should be able to use it the same way. For example:
foreach my $payment (@payments) {
$order->apply_payment($payment);
}

You might have a Payment abstract class with subclasses Payment::Credit::Card, Payment::Credit::Account, Payment::Check, Payment::Cash, etc. Clearly each of those subclasses will behave differently internally, but if they alter the parent interface then you get the following problem:
foreach my $payment (@payments) {
    if ( $payment->isa('Payment::IncompatibleType') ) {
        # do something different
    }
    else {
        $order->apply_payment($payment);
    }
}

And you wind up duplicating the special case logic all over the place.
Of course, calling isa is a code smell and should be investigated carefully.
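A sketch of the alternative: push the special case down into the subclass so the caller's loop never needs an isa() check. The class names and the surcharge rule here are invented for illustration:

```perl
use strict;
use warnings;

package Payment;
sub new    { my ( $class, %args ) = @_; return bless {%args}, $class }
sub amount { return $_[0]->{amount} }

package Payment::Cash;
our @ISA = ('Payment');

package Payment::Credit::Card;
our @ISA = ('Payment');

# Override the behavior, not the interface: a hypothetical 2% card
# surcharge is handled inside the subclass.
sub amount { my $self = shift; return $self->{amount} * 1.02 }

package main;

my @payments = (
    Payment::Cash->new( amount => 100 ),
    Payment::Credit::Card->new( amount => 100 ),
);

# The caller treats every payment identically; Liskov holds.
my $total = 0;
$total += $_->amount for @payments;
print "$total\n";    # 202
```

Every subclass answers the same amount() call, so the foreach loop from the earlier example works unchanged no matter what payment types show up later.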