In DBD::CSV, certain assumptions are made: each table lives in a file named after the table, fields are comma-delimited, and lines end in "\r\n".
All is not lost, though, as one of my more than capable coworkers discovered a while back. You can override all these on a per-database-handle, per-table basis. For example, my employee table is in the file "EMPLOYEE.TXT" and has Macintosh line endings. I just:
    $dbh->{csv_tables}->{employee} = {
        file => "EMPLOYEE.TXT",
        eol  => "\r",
    };
That works great, but now I'm working on a database with 20 tables. Every one of them is in a .csv file, has UNIX line endings, and is semicolon-delimited instead of comma-delimited. A big loop at the beginning of the program suggested itself, but didn't feel natural.
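For the record, the loop approach would look something like this. The table names here are made up, and a plain hash stands in for $dbh->{csv_tables} so the sketch runs standalone:

```perl
use strict;
use warnings;

# Hypothetical table names; the real list would have all 20.
my @tables = qw(employee department payroll);

# In the real program this would be %{ $dbh->{csv_tables} }.
my %csv_tables;

for my $table (@tables) {
    $csv_tables{$table} = {
        file     => "$table.csv",  # every table is in a .csv file
        eol      => "\n",          # UNIX line endings
        sep_char => ";",           # semicolon-delimited
    };
}
```

It works, but it does all 20 tables up front whether the program touches them or not.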
Then, a sure sign that I'm finally starting to think like a real Perl programmer, the obvious solution presented itself: tie the $dbh->{csv_tables} hash. Of course!
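A minimal sketch of that tie, building on Tie::StdHash. The class name and defaults are mine, and DBD::CSV may poke at the hash in ways this doesn't cover, so treat it as an illustration rather than drop-in code:

```perl
use strict;
use warnings;

package DefaultCsvTables;
use Tie::Hash;                    # provides Tie::StdHash
our @ISA = ('Tie::StdHash');

# FETCH lazily fills in the per-table defaults the first time a
# table is looked up, so no up-front loop over 20 tables is needed.
sub FETCH {
    my ($self, $table) = @_;
    $self->{$table} ||= {
        file     => "$table.csv",
        eol      => "\n",
        sep_char => ";",
    };
    return $self->{$table};
}

package main;

# Standalone demo; in the real program it would be
#   tie %{ $dbh->{csv_tables} }, 'DefaultCsvTables';
tie my %csv_tables, 'DefaultCsvTables';
```

Any table can still be overridden individually by storing its attributes explicitly, since Tie::StdHash's STORE behaves normally.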
As far as I know, I have never tied a hash in production code. In fact, I think I've only done it once or twice just to experiment with the feature. I'm glad tying is finally presenting itself as a tool in my arsenal.
In related news, I just discovered the ecode tag.
If you work with CSV databases, you know that validating the database can be a pain. I wrote a CSV database validation program that you might find useful. You can develop the schema in a syntax very similar to SQL, designate unique fields (such as IDs) and foreign-key constraints, and even specify a regex to validate individual fields against. Very useful, IMHO.