NTFS performance on Win2k server


klagron

Does anyone know if it is possible to use Win2k to store
a lot of files in one directory?
The filenames are long and almost all the same size.

I know it's not a good choice, but we already have 256 directories
with 100,000 files each. Is it possible to use 65,536
directories?

/Klas
 
Hi, Klas.

See this page from the Win2K Pro Resource Kit online:
File Systems
http://www.microsoft.com/windows200...techinfo/reskit/en-us/prork/prdf_fls_pxjh.asp

Or the XP version:
Size Limitations in NTFS and FAT File Systems
http://www.microsoft.com/technet/tr...prodtechnol/winxppro/reskit/prkc_fil_tdrn.asp

Remember that this kind of limitation depends on the FILE system (FAT32 vs.
NTFS), not on the OPERATING system (Win9x/ME vs. Win2K/XP).

Since Windows treats a folder/directory as "just another file", the only
practical limit on the number of directories these days is the size of the
volume - and one HD can contain several volumes.

RC
 
The filesystem was tested with millions of files, so the total file count is
fine. Keeping the per-directory file count where you have it will speed up
random access.
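
As a rough sketch of how you could get to 65,536 directories (nothing in the
thread prescribes this; the root path, the MD5 hash, and the two-level layout
are all illustrative), hash each filename and use the digest to pick a bucket:

    import hashlib
    import os

    ROOT = r"D:\data"  # hypothetical root; substitute your volume

    def bucket_path(filename):
        # Two hex byte pairs of an MD5 digest give 256 * 256 = 65,536
        # buckets; ~26 million files average out to ~400 files per directory.
        digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
        return os.path.join(ROOT, digest[:2], digest[2:4], filename)

    path = bucket_path("report-2003-10-17.xml")
    os.makedirs(os.path.dirname(path), exist_ok=True)

The advantage of hashing over, say, bucketing on the first letters of the
filename is that the buckets stay roughly even no matter how the names are
distributed.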

The two issues you are most likely to run into are:
1) If these are all on a single volume, many backup apps use one backup
stream per volume (check with your vendor to verify this). So if you have 2
million files on one volume, they are backed up sequentially; if they are
on 2 volumes, they are read 2 at a time, etc. If you will be backing this
data up, keep that in mind.

2) Access via a GUI. Most GUI file managers (e.g. Explorer) enumerate all
the files in a folder when it is opened. This can take a loooong time
(minutes) as the file count in a directory increases. If you aren't
accessing via a GUI (e.g. via the command line), then this isn't an issue.
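
To make point 2 concrete, here is a minimal sketch (Python; the path is
hypothetical) of enumerating a huge directory lazily and stopping early,
instead of materializing all 100,000 entries the way Explorer effectively
does:

    import os

    def first_n_entries(directory, n):
        # os.scandir yields entries one at a time, so we can stop after n
        # results instead of enumerating the whole directory up front.
        with os.scandir(directory) as it:
            for i, entry in enumerate(it):
                if i >= n:
                    return
                yield entry.name

    for name in first_n_entries(r"D:\data\00\00", 10):  # hypothetical path
        print(name)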

One other suggestion is to keep the disk cluster size as close to the
average file size as possible. This minimizes disk seeks (which on
large-file-count volumes can be a big perf hit) and fragmentation. So, if
the average file size is <4k, use 4k clusters; <8k, use 8k. Above 8k, you
have to make a call about how much wasted space you are willing to accept in
pursuit of performance. 16k clusters is a good size for larger files (and on
Win2k3 is what Shadow Copy uses anyway); the problem is that if your files
are all 50k, you will lose 14k per file (16 * 3 = 48k, too small; 16 * 4 =
64k; 64k - 50k = 14k wasted). On the plus side, such a file could grow into
that slack without inducing fragmentation.
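
That slack-space arithmetic generalizes to a one-liner; here is a sketch
(Python; the helper name is just for illustration) that rounds a file up to
a whole number of clusters and reports the waste:

    import math

    def wasted_bytes(file_size, cluster_size):
        # A file always occupies a whole number of clusters on disk.
        clusters = math.ceil(file_size / cluster_size)
        return clusters * cluster_size - file_size

    # The example above: 50k files on 16k clusters waste 14k apiece.
    print(wasted_bytes(50 * 1024, 16 * 1024))  # -> 14336 bytes (14k)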

Pat
 