The filesystem was tested with millions of files, so your total file count
is fine. Keeping the per-directory file count where you have it will speed
up random access.
The two issues you are most likely to run into are:
1) If these are all on a single volume, many backup apps use one backup
stream per volume (check with your vendor to verify this). So, if you have
2 million files on one volume, they are backed up sequentially; if they are
on 2 volumes, they are read 2 at a time, and so on. If you will be backing
this data up, keep it in mind.
2) Access via a gui. Most gui file managers (e.g. Explorer) enumerate all
files in a folder when it is opened. This can take a very long time as the
file count in a directory increases (minutes). If you aren't accessing via
a gui (i.e. you go through the cmd line or a script), then this isn't an
issue; see the sketch after this list.
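For reference, here is a minimal Python sketch of the kind of scripted
access that sidesteps the gui problem: os.scandir streams directory entries
one at a time instead of building the whole list up front the way Explorer
does. The path at the bottom is just a placeholder, not anything from your
setup.

    import os

    def count_files(path):
        # os.scandir yields entries lazily (on Windows it is backed by the
        # FindFirstFile/FindNextFile APIs), so memory use stays flat even
        # when the directory holds hundreds of thousands of files.
        total = 0
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_file(follow_symlinks=False):
                    total += 1
        return total

    if __name__ == "__main__":
        # Placeholder path -- point this at your own data directory.
        print(count_files(r"D:\data\files"))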
One other suggestion is to try to keep the disk cluster size as close to
the average file size as possible. This minimizes disk seeks (which on
large file count volumes can be a big perf hit) and fragmentation. So, if
the average file size is <4k, use 4k clusters; <8k, use 8k. Larger than 8k,
you will have to make a call as to how much space you are willing to waste
in pursuit of performance. 16k clusters are a good size for larger files
(and on Win2k3 that is what Shadow Copy uses anyway), the problem being
that if your files are all 50k, then you will lose 14k per file
(16k * 3 = 48k, too small; 16k * 4 = 64k; 64k - 50k = 14k wasted). On the
plus side, the file can then grow into that unused 14k without inducing
any fragmentation.
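If it helps to put numbers on that trade-off, here is a rough Python sketch
of the same slack arithmetic; the 50k file and the cluster sizes are just
the example figures from above, so plug in your own averages.

    import math

    def slack_per_file(file_size_kb, cluster_kb):
        # KB left unused in the last, partially filled cluster of a file.
        clusters = math.ceil(file_size_kb / cluster_kb)
        return clusters * cluster_kb - file_size_kb

    # Reproduce the 50k-file example: 4k -> 2k, 8k -> 6k, 16k -> 14k wasted.
    for cluster_kb in (4, 8, 16, 32, 64):
        waste = slack_per_file(50, cluster_kb)
        print(f"{cluster_kb:2d}k clusters -> {waste:2d}k wasted per 50k file")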
Pat