10% wastage on 10 volumes is one whole hard disk wasted. Significant? Hell
yes, it increases costs! You should know that, considering it's the job
you do.
That's the worst case. And since the average slack is only about half a
cluster per file, if the files are 2M you'd have to fill on the order of
100 HDs before a whole extra one is lost to wastage, which really is a
shrug.
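To put rough numbers on that (assuming 32K FAT32 clusters and the 2M
average file size mentioned above; my figures, adjust to taste), the
back-of-an-envelope sum looks like this:

  #include <stdio.h>

  int main(void)
  {
      /* Illustrative figures only: 32K clusters, 2M average files. */
      const double cluster   = 32.0 * 1024;
      const double file_size = 2.0 * 1024 * 1024;
      const double slack     = cluster / 2;         /* avg waste per file */
      const double fraction  = slack / file_size;   /* fraction lost      */

      printf("slack per file : %.0f bytes\n", slack);
      printf("space wasted   : %.2f%%\n", fraction * 100.0);
      printf("HDs filled before one whole HD is lost: ~%.0f\n",
             1.0 / fraction);
      return 0;
  }

That comes out at well under 1%, i.e. on the order of 128 full HDs
before a whole one goes to slack.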
I know about the shuffling it performs. It is not as terrible as you make
it seem. For the benefit of the others reading this thread: basically,
about 12% of the volume is set aside up front for the MFT, to prevent it
from fragmenting; apart from the 16 metafiles, that zone isn't even used
at this point. As and when the space is needed for data, the MFT zone is
adjusted. Nothing very "problematic" or sinister in there.
And if the MFT is caught halfway through this process?
Perhaps I don't see the bad mileage from FATxx that you do, because I
don't use one big C: for everything. All of my FATxx-based systems
have C: with 4K clusters, limited to just under 8G, and that's where
most of the write traffic goes. The bulk of the capacity is in E:,
which is FAT32 with big clusters, but there's not much traffic there.
I'm certainly not recommending 120G HDs be set up as one big FAT32
volume; even though that still has maintainability advantages over
NTFS, the reliability gap may be as wide as you claim.
There is one factor that could lead to software crashes, and that is
uncertainty about free space. An app may query the system for free
space, be told there's enough, and then dump on the HD without
checking for success. Normally it would be concurrent traffic and
disk compression that would cause this "oops", but FAT32 (not FAT16)
does add an extra factor: the free-space value that is buffered in the
volume's boot record, which so often gets bent after a bad exit.
Then again, AFAIK startup always checks and recalculates this value;
it's one of the extra overheads of FAT32 at boot time.
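The defensive habit, from the app side, is to treat the free-space
figure as advisory and check every write anyway. A minimal Win32 sketch
(the path, size and error handling are made up for illustration):

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      ULARGE_INTEGER avail, total, totalFree;
      const DWORD needed = 4 * 1024 * 1024;   /* pretend we want 4M  */
      static char  buf[4 * 1024 * 1024];      /* dummy payload       */
      DWORD written = 0;

      /* Step 1: the advisory check - this figure can already be stale
         (or, on FAT32 after a bad exit, simply wrong). */
      if (!GetDiskFreeSpaceExA("C:\\", &avail, &total, &totalFree) ||
          avail.QuadPart < needed) {
          fprintf(stderr, "not enough free space reported\n");
          return 1;
      }

      /* Step 2: the write itself must still be checked; other traffic
         may have eaten the space in the meantime. */
      HANDLE h = CreateFileA("C:\\dump.tmp", GENERIC_WRITE, 0, NULL,
                             CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
      if (h == INVALID_HANDLE_VALUE)
          return 1;

      if (!WriteFile(h, buf, needed, &written, NULL) || written != needed)
          fprintf(stderr, "write failed or was short - don't assume it stuck\n");

      CloseHandle(h);
      return 0;
  }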
I have had an NTFS volume with as little as 25MB free out of 10GB
running with absolutely no problems over a long period of time.
Well I should hope so, as that's the mileage I'd expect in FATxx as
well. It shouldn't blink even if free space fell to 5M. C: might
look like it blew up with 25M to go, but that's probably because a few
seconds earlier it may have been down to 0k free due to a temp or swap
splurge... then again, I've seen 0k-free C: and I haven't seen data
carnage. The data carnage I see is usually where RAM has been flaky
for some time, or there's been a malware strike, or the HD is failing,
or the PC was overclocked, etc.
FATxx doesn't just fall over for no good reason, from what I've seen,
though persistent issues from bad exits might compound into
cross-links later. I'm not sure that does happen, but it's possible -
though I'd expect to see a lot more cross-links if that were the case.
Lost data, OK. How about corrupt data?
Depends what has gone wrong. Corrupt data can be better than none,
especially where text is concerned; OTOH, half a .DLL is no bread at all.
It's important to know what is corrupted and what is not - and that is
what I have against "auto-fixing" junk. It's the equivalent of
throwing the needles back into the haystack.
The data remains intact. I have had power failures while working with
files and defragging. The files remain - albeit at the cost of the
changes. No issues there. I don't know what you're trying to
project here, or why you're projecting it so wrongly.
What happened to the incomplete transaction? That's the data I want
back. I don't want some clueless fixer deciding for me that it's
corrupted and therefore should be discarded without a trace.
It's very easy to look bulletproof if you simply destroy everything
that may be damaged. You could do that in FATxx as well - in fact,
the duhfault behaviour is close - simply by maintaining a list of
files open for writes, which are automatically deleted on bootup.
But that is not data preservation.
I don't understand this statement. Cut the flowery language and start
explaining.
Throwing away any transaction that is not complete is not data
preservation. Basically, you get the worst-case auto-fixing Scandisk
result, i.e. as if you'd let Scandisk automatically fix all errors and
throw away all lost cluster chains.
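To make that "list of files open for writes, deleted on bootup" scheme
concrete, here's a crude sketch (the log name and layout are invented;
it's an illustration of why this isn't preservation, not something to
deploy):

  #include <stdio.h>
  #include <string.h>

  #define INTENT_LOG "C:\\WRITELOG.TXT"   /* invented name */

  /* Before opening any file for writing, record its name. */
  static void log_write_intent(const char *path)
  {
      FILE *log = fopen(INTENT_LOG, "a");
      if (log) { fprintf(log, "%s\n", path); fclose(log); }
  }

  /* At boot: anything still listed was open for writing when the
     power went - it *might* be damaged, so this scheme just deletes
     it.  Looks bulletproof; in fact any half-written data that could
     have been salvaged is thrown away with it. */
  static void purge_on_boot(void)
  {
      char line[260];
      FILE *log = fopen(INTENT_LOG, "r");
      if (!log) return;
      while (fgets(line, sizeof line, log)) {
          line[strcspn(line, "\r\n")] = '\0';
          remove(line);                 /* no questions asked */
      }
      fclose(log);
      remove(INTENT_LOG);
  }

  int main(void)
  {
      purge_on_boot();                          /* pretend: at startup */
      log_write_intent("C:\\DOCS\\REPORT.DOC"); /* invented example    */
      /* ... write the file; on a clean close, clear its log entry ... */
      return 0;
  }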
All of this nonsense about transaction rollback "doing away with the
need" for disk maintenance utilities hinges on the only problem being
the interruption of sane file operations.
What about damage from other causes, such as wild disk writes, bad RAM
corrupting the content and addresses of writes, program crashes, and
deliberate malware raw disk writes as per Witty?
Transaction rollback can do nothing useful here, because these issues
are below the "transaction" level of abstraction. It's like pointing
to a "silence please" sign in a library and thinking this will stop
you getting hit during a bombing raid.
In circumstances like that, the LAST thing you need is a dumbo
auto-fixer that ASSumes all anomalies it sees should be resolved as if
they were interrupted sane file operations.
I don't know why you are propagating falsehoods. Maybe it's because by
implementing FAT32 you get more problems to attend to, and hence more
business.
Nope. I'm busy enough as it is, and I see plenty of NTFS systems with
sick HDs, "do I have a virus?" and messed-up file systems. The FATxx
systems are far quicker to deal with, and have better results. That's
the experience on which I base my recommendations.
Your site is as vague as the rest of your posts (I had not seen it
earlier), and the statement that NTFS is for people who need to hide data
more than recover it is just plain condescending. Don't you understand that
even a basic org would want to implement best practices: public and private
data, templates and files?
Who's talking about "orgs"? This is consumerland we are talking
about; NT is no longer sold only to professionally-administered
installations. Yes, with pro backup and so on, you may care less
about what happens to live data - and some installations will indeed
see unauthorised access as a bigger crisis than data loss. For those
priorities and that environment, NTFS is a useful *part* of what makes
up a secure and robust system.
It's also interesting that you refer to my site as "vague", given that the
only evidence offered that the NTFS file system is structurally less prone
to corruption has been "transaction rollback!" and "because MS says so".
I've yet to see any structural detail to support that claim, whereas
my site does delve into that level in the FATxx recovery topics.
Perhaps that's why you and your biz will always cater to small/medium-sized
clients. Unless you think big, you don't get big.
Quite a revealing comment that - implies we should all seek to serve
only the biggest clients, and that small clients aren't worthy of
serious attention. Also implies "one size fits all", i.e. that it is
appropriate to foist solutions derived in big business on small
clients, because bigger is better.
Yes, I will probably stay with small clients, because they interest me
in the same way that large clients do not. XP Home is intended for
this market, and it is from that perspective that I will assess it.
As far as the sig goes, regarding certainty: an uncertain person, or one
in two minds (with a disclaimer hanging around his neck), will never take
crucial decisions - it shows a lack of confidence in yourself.
Nope. When you're certain, you stop looking, and it takes longer to
realise that your assumption base has been kicked out from under you -
and then you fall further, and harder.
It's dumb-ass certainty that preceded the Titanic, Hindenburg and
other disasters. It's thinking that code was certain to behave as
designed or intended that leads to holes and exploits - the holes
would be there even without the certainty, but the doubt would have
led to better contingency planning, such as being able to turn off a
damaged service without crippling the whole system.
This latter-day awareness is dawning on MS, as the documentation of XP
SP2 illustrates. Broken certainties have become so common that the OS
needs updating almost as often as av software did a few years ago;
there has to be a mechanism in place to do this *routinely*. SP2's
documentation speaks for the first time of how to manage exploits
*before* patches become available, which is refreshingly realistic.
Essentially, your advice could be viewed as a disservice to society
and the computing fraternity in general, since you're stuck in a
generation no one really cares about.
Nope. When NTFS gets the maintenance tools it deserves, it may be
seen as a best-for-all-jobs solution. But right now, I'd rather have
a maintainable, recoverable and formally-av-scannable solution.
As your current generation has not been able to prevent hardware from
getting flaky, and has notably failed to make malware a thing of the
past, I'll prefer something that offers management of these crises.
-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...