To expand just a bit: NTFS relies on a B-tree to locate files, using the
directory as an index, so doubling the number of files increases the
worst-case lookup count by only one. If you have 1 million files in a
directory, NTFS may need to do (at most) about 20 lookups to find one
(2^20 ~= 1 million), and going to 4 million would only add 2 more
(2^22 ~= 4 million).
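
A quick back-of-the-envelope sketch of that doubling arithmetic in Python
(this just illustrates the log2 math above, not the actual on-disk index
layout):

import math

def worst_case_lookups(file_count):
    # Smallest number of halving steps so that 2^steps >= file_count
    return math.ceil(math.log2(file_count))

for n in (100_000, 1_000_000, 4_000_000, 5_000_000):
    print(n, "files -> at most", worst_case_lookups(n), "lookups")

Doubling the file count adds one step and quadrupling adds two, which is
why the curve flattens out so quickly.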

We have quite a few folks running in the 100k-to-1-million range without
problems, but they are not relying on GUI-based file management. Normally
these files are accessed programmatically (e.g. web server image storage)
and performance is acceptable. I know of a few installations with 5 million
files where normal operations are OK. Backups and restores are actually the
problem there: once you get into large file counts (millions), file-level
backups stop performing well, and volume-level backups become necessary
because of the seek overhead of jumping between files.
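
As an aside on the programmatic-access point: when a single directory holds
hundreds of thousands of files, it helps to stream the directory entries
rather than build the whole listing in memory. A minimal Python sketch (the
D:\images path is just a made-up example):

import os

def count_big_files(directory, threshold=1_000_000):
    # os.scandir() streams entries; on Windows the file size comes back with
    # the enumeration itself, so entry.stat() usually avoids an extra system call.
    count = 0
    with os.scandir(directory) as entries:
        for entry in entries:
            if entry.is_file(follow_symlinks=False) and entry.stat().st_size > threshold:
                count += 1
    return count

print(count_big_files(r"D:\images"))  # made-up path for illustration
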
I am a bit curious about what the original poster's actual needs are. How
many files are we talking about? How are they being created (e.g.
consolidation from multiple other machines, programmatically created, etc.)?
Pat