There are many ways to categorize tests, but there's a basic distinction between developer tests and customer tests. The latter are also known as "acceptance" tests: they're about the customer experience. They're complete, end-to-end integration tests that basically assert "if p, then q." Done properly, they are also redundant tests.
We're fortunate because we have an embedded tester on our team. He writes acceptance tests constantly. We've been struggling to manage our test suite effectively, and not paying proper attention to his role has been part of the problem: we're losing independent verification.
Consider the following function:
sub pi { 22/7 }
That's clearly wrong. Now imagine the test:
is pi(), 22/7, 'pi should be correct';
That test is useless: it passes, but it's wrong. Even if, for some reason, 22/7 were an acceptable value of π for you, the test is still wrong because it uses the same method of calculating π as the code it's checking. One way of dealing with this is to assert the raw value:
is pi(), 3.14285714285714, 'pi should be correct';
Hard-coding the expected value might still look wrong for this trivial example, but the approach pays off when you look at this:
my %recip_of = (
    1 => 1,
    2 => .5,
    4 => .25,
);

while ( my ( $num, $recip ) = each %recip_of ) {
    is recip($num), $recip, "The reciprocal of $num should be correct";
}
Even though the values are hardcoded, you at least have independent verification. That's very important when testing. Merely duplicating your logic means duplicating your bugs.
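For contrast, here's what the logic-duplicating version of that test would look like. It computes the expected value with the same expression the implementation presumably uses, so any bug in recip() is silently shared by the test:

# Anti-pattern: the expected value is derived with the same logic
# the code under test uses, so a shared bug still passes.
for my $num ( 1, 2, 4 ) {
    is recip($num), 1 / $num, "The reciprocal of $num should be correct";
}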
That's why we've had a problem with our acceptance tests. They verify much of the behavior that our developer tests already verify and are thus redundant, but here's what we've been doing: an acceptance test fails? Fix the code!
That's wrong. If an acceptance test fails, you absolutely must make sure you have developer tests which replicate the failure. If you don't, write them. Preferably, have completely separate frameworks for developer and acceptance tests (acceptance tests for Web-based systems can use Watir or Selenium). Acceptance tests should mirror the production environment and customer experience as closely as possible and should not be under developer control. Otherwise, you lose independent verification.
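As a rough sketch of what the acceptance side of that separation might look like: the test drives a real browser against a deployed instance of the application and never touches the code's internals. This assumes Selenium::Remote::Driver and Test::More, a Selenium server on its default host and port, and a hypothetical staging URL and login form; none of these details come from our actual setup.

use strict;
use warnings;
use Test::More;
use Selenium::Remote::Driver;

# Hypothetical staging URL; acceptance tests should run against an
# environment that mirrors production, not a developer sandbox.
my $base_url = 'http://staging.example.com';

# Assumes a Selenium server listening on its default host and port.
my $driver = Selenium::Remote::Driver->new;
$driver->get("$base_url/login");

# Exercise the application the way a customer would: through the UI.
# The form field names here are made up for illustration.
$driver->find_element( 'username', 'name' )->send_keys('test_customer');
$driver->find_element( 'password', 'name' )->send_keys('sekrit');
$driver->find_element( 'login',    'name' )->click;

like $driver->get_title, qr/Welcome/, 'customer can log in through the UI';

$driver->quit;
done_testing();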
We've fallen into the nasty habit of fixing our code as soon as an acceptance test fails. We should be able to run just our developer tests and feel reasonably confident in the health of our system, but we can't, so we have to rely on the acceptance tests as well. That reliance adds 15 minutes to our test run.
If we don't have a corresponding developer test failing for every failing acceptance test, we have a problem. When you run a code coverage report, you should do it separately for developer and acceptance tests (see the sketch below). By maintaining independent verification, you increase your chances of finding bugs and improve the overall reliability of your code.
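One way to keep those coverage numbers separate, assuming Devel::Cover and a layout that splits the suites into hypothetical t/developer and t/acceptance directories, is to point each run at its own coverage database:

# Developer tests write to their own coverage database and report.
HARNESS_PERL_SWITCHES=-MDevel::Cover=-db,cover_db_developer prove -r t/developer
cover cover_db_developer

# Acceptance tests get a separate database, so redundant coverage stays visible.
HARNESS_PERL_SWITCHES=-MDevel::Cover=-db,cover_db_acceptance prove -r t/acceptance
cover cover_db_acceptance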