Working in a directory with a large number of files:
$ time ls -1          # GNU ls
real    2m22.169s
user    0m2.080s
sys     0m5.850s

$ time /bin/ls -1     # Solaris ls
real    0m18.013s
user    0m1.230s
sys     0m1.390s

$ time ~/perlls.pl    # Custom Perl program
real    0m22.355s
user    0m1.200s
sys     0m1.260s
Interesting. I'd been fretting about how slow the standard utilities were, until I remembered that I had installed GNU versions of them in my ~/bin, presumably because they were better. I'm sure GNU ls does something like load the entire directory into memory before printing it out, in order to use a "different" implementation method and avoid copyright infringement.
Maybe I should install busybox...
For the record, I ended up using a custom Perl program to do what I was trying to do, and it still seemed the fastest solution (even including development time for the program).
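The actual perlls.pl isn't reproduced here, so this is only a guess at its shape: a minimal sketch that opendir()s the directory and prints each name as readdir() hands it back, with no sorting, no stat() calls, and no column layout.

#!/usr/bin/perl
# Hypothetical reconstruction of a bare-bones unsorted lister,
# not the original perlls.pl.
use strict;
use warnings;

my $dir = shift @ARGV || '.';
opendir(my $dh, $dir) or die "Can't open $dir: $!\n";
while (defined(my $name = readdir($dh))) {
    next if $name eq '.' || $name eq '..';   # skip the usual dot entries
    print "$name\n";                         # one name per line, directory order
}
closedir($dh);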
Update: it makes more sense now that I've posted the proper figures, instead of duplicating the Solaris ls figures. That turned out to be a bug with Cygwin cut and paste...
-Dom
Re:Turn off sorting
jdavidb on 2004-06-01T20:51:02
That's a -1, not a -l. The reason I added -1 to the test was to make sure none of the ls versions was spending extra time trying to compute the number of filenames that could fit on a line or whatever. (And also because my little hacked-up Perl version couldn't do that.)
I posted to the bugs mailing list for GNU coreutils (the combination of fileutils, which includes ls, and two other packages), and they responded by asking for the version of my GNU ls and my OS. It turns out that ls from GNU coreutils 5.0 (my version) has this problem, but the latest version does not. As always, the solution seems to be to upgrade.
:) Yikes. I'm the only person I've ever known who has filed a bug report on ls!