Problems with large directories

Thread starter: SteveM
Can someone provide or direct me to the
limitations/problems of NTFS and directories with
hundreds of thousands of files?

I have a web service that saves thousands of thumbnail
images a day into a single directory. Access to that
directory is becoming excessively slow. I know that
splitting the files into subdirectories helps, but I was
wondering if there is a tweak or something else that
would help.
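For example, the subdirectory approach might look something like this (a rough sketch; the root path and file names here are made up, not from the actual service):

```python
import hashlib
from pathlib import Path

def thumb_path(root: str, name: str, depth: int = 2) -> Path:
    """Spread files across nested subdirectories using a hash prefix,
    so no single directory accumulates hundreds of thousands of entries."""
    digest = hashlib.md5(name.encode()).hexdigest()
    # depth=2 gives root/ab/cd/name, i.e. 256 * 256 = 65536 buckets
    parts = [digest[i * 2:i * 2 + 2] for i in range(depth)]
    return Path(root, *parts, name)

# The path is deterministic, so lookups need no extra index:
p = thumb_path("/thumbs", "product_12345.jpg")
```

Because the bucket is derived from the file name alone, both the writer and the reader compute the same path without any lookup table.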

Please advise!
-Steve
 
Hi,

There are indeed limitations. The so-called MFT (Master File Table)
grows in proportion to the disk size and the number of files on the disk.

I have seen production systems with more than 13 million images on them.
The only problem there was with NT4 SP4!

You can check (in the registry):
- NtfsDisableLastAccessUpdate
- NtfsDisable8dot3NameCreation
- MS Knowledge Base article 174619
http://support.microsoft.com/defaul...port/kb/articles/q174/6/19.asp&NoWebContent=1

For exact descriptions you will find plenty of information on the
internet using these keywords.
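Assuming the standard FileSystem values are meant, both tweaks can be applied with a .reg file like this (a sketch; a reboot is required for them to take effect, and back up the registry first):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
; skip updating the last-access timestamp on every file read
"NtfsDisableLastAccessUpdate"=dword:00000001
; stop generating short 8.3 names for every new file
"NtfsDisable8dot3NameCreation"=dword:00000001
```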

Cheers Marcel
 
Thank you for your help. I did a lot of research on this
last night and found some interesting facts. One curious
one was KB article 130839:

http://support.microsoft.com/default.aspx?scid=KB;en-us%3Bq130839

This article nails my situation to a T, but offers no
resolution.

A few quick questions: if all my files average 1 KB each,
what should my cluster size be? Some sources say a 2 KB
cluster is the smallest you want to go to ensure
everything ends on even boundaries, but that wastes a lot
of disk space. I have also heard that files of 1 KB and
smaller are actually stored resident inside the MFT and
only referenced from the directory. Is this true? Is
there a way to force these files to be stored normally,
and if so, would that help performance?

Also, if I bump NtfsMftZoneReservation to 4 to make room
for the 1 KB files stored in the MFT, will that help?
What happens when the 1 KB files take up more than 50% of
the drive? Are they then stored outside the MFT?
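For reference, NtfsMftZoneReservation lives in the same FileSystem registry key; the documented values run from 1 (the default, roughly 12.5% of the volume) up to 4 (roughly 50%). A .reg fragment for the maximum reservation would look like this (a sketch, not a recommendation for every system):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
; 1 = ~12.5% (default), 2 = ~25%, 3 = ~37.5%, 4 = ~50% of the
; volume reserved for the MFT zone; takes effect after a reboot
"NtfsMftZoneReservation"=dword:00000004
```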

Thanks for the help!
-Steve
 
Good morning...

There are in fact more cases to check than the MS article describes,
but you are on the right track! :-)

- If your files are 1 kB in size, your clusters should be 2 kB. Remember,
it is a trade-off between lost disk space and lost performance. Good choice!
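The space side of that trade-off is easy to quantify. A quick back-of-the-envelope calculation (the 1,000-byte file size is an assumption for illustration):

```python
def slack_bytes(file_size: int, cluster: int) -> int:
    """Bytes wasted when a file occupies whole clusters on disk."""
    clusters_used = -(-file_size // cluster)  # ceiling division
    return clusters_used * cluster - file_size

# A 1,000-byte thumbnail in 2 KB clusters wastes 1,048 bytes:
print(slack_bytes(1000, 2048))   # -> 1048
# The same file in 4 KB clusters wastes 3,096 bytes:
print(slack_bytes(1000, 4096))   # -> 3096
```

Across hundreds of thousands of thumbnails, the difference between 2 KB and 4 KB clusters adds up to hundreds of megabytes of slack.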

- Don't forget to think about a virus scanner... you may have to exclude
certain file extensions from scanning.

- The MFT can become fragmented, full, or even corrupt, all of which is
bad. The MFT is always bound to a partition and is created when the volume
is formatted with NTFS. There is a handy tool, NTFSINFO, which you can find
on the internet, that shows you the actual size of the MFT (Master File
Table). That should give you useful feedback. If the MFT is full, new data
can no longer be stored, but you can still read! :-)

- The fastest, affordable but still expensive, solution is to create a
RAID 10 device: mirrored pairs of disks striped together. Some RAID
controllers can do that. The controller should have plenty of onboard RAM,
the so-called write-back cache, and it should be enabled. I'm talking
about SCSI... sorry!

- I built two RAID 10 arrays and set up the application to access both.
That way you get more spindles. The two RAID 10 arrays are on different
SCSI buses! Yep!

Cheers Marcel
 