Bill said:
> Cost vs. reliability with a touch of security involved in that equation,
Absolutely.
> some of which I suspect are minimal factors for you so you are ignoring
> them for others.
Actually, I'm probably considerably better aware of them than you are,
considering the content of your post.
> Network storage makes the storage hardware and every part of the network
> (including network administration) a chain of points of failure,
Ah, but not *single* points of failure in the sense that a typical local
hard disk is. There's a major difference, as anyone even passingly
familiar with the effects of component redundancy on MTTF should understand.
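
To make the MTTF point concrete, here's a minimal Python sketch using
the standard first-order approximation for a repairable redundant pair
(MTTF_pair ~= MTTF^2 / (2 * MTTR)); the component figures are invented
for illustration, not taken from any particular disk:

    # Standard first-order approximation: the pair fails only when the
    # second component dies during the first one's repair window.
    def mttf_redundant_pair(mttf_hours, mttr_hours):
        return mttf_hours ** 2 / (2 * mttr_hours)

    single = 500_000.0                                 # one disk's rated MTTF, hours
    pair = mttf_redundant_pair(single, mttr_hours=24)  # a day to replace and rebuild
    print(f"single disk:   {single / 8760:,.0f} years")  # ~57 years
    print(f"mirrored pair: {pair / 8760:,.0f} years")    # ~595,000 years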
(Plus, of course, the fact that the network exists *anyway* for other
purposes such as communication and data-sharing: it's not as if you
could get along without it if you didn't centralize the workstation
storage. And any well-run corporate network is already at least
somewhat insulated from single points of failure, with redundant paths
among switches and offices such that typical failures - if they are
user-visible at all - at worst require switching your Ethernet cable
from one wall jack to an adjacent one.)
> and
> points of attack on security.
Au contraire, I'm afraid: the central and (easily ensured) competent
management of security of a server (and the network to get to it) makes
it considerably *more* secure than any individual workstation - and
insulates at least the read-only data used by each workstation from any
unpleasant software (or flaky hardware) that the workstation's user may,
deliberately or inadvertently, allow to run there.
> That means that not all cases have the
> same solution, "least capital expense" isn't the same as lowest TCO,
Exactly. For example, the TCO of storage is estimated to be close to an
order of magnitude higher than the purchase cost of the storage - and
that's for *centralized high-end* storage, so for the kind of
inexpensive storage we're talking about here (e.g., something
resembling an Isilon NAS) the multiple has to be at least two orders of
magnitude.
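
To put rough numbers on that reasoning (the dollar figures below are
invented; only the ratios matter): management cost is mostly people, so
it doesn't shrink much when the hardware gets cheaper.

    def tco_multiple(purchase_per_gb, mgmt_per_gb):
        # TCO expressed as a multiple of purchase cost
        return (purchase_per_gb + mgmt_per_gb) / purchase_per_gb

    # If TCO ~= 10x purchase for high-end storage, management accounts
    # for ~9x of it.
    print(tco_multiple(50.0, 450.0))  # 10.0 - hypothetical high-end $/GB
    print(tco_multiple(5.0, 450.0))   # 91.0 - ~10x cheaper hardware, so
                                      #        roughly two orders of magnitude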
Centralize that inexpensive storage and the saved management costs not
only pay for the network needed to distribute it but buy more general
centralized workstation support in the bargain.
The cost of a failed local hard disk containing significant amounts of
work backed up casually, if at all, runs anywhere from $1000 on up:
that's a conservative estimate of one person's overhead - salary,
benefits, workspace - for a day's work, pretty much ignoring the value
of any data lost since the last backup, plus the cost of the person who
has to deal with the resulting problem (installing a new disk, OS, and
all relevant applications, and then helping the user get everything
personalized back to the way it used to be). The far more common cost,
however, is simply that of setting up the workstation in the first
place (in the absence of any hardware failure at all) - which can rival
the hardware cost of the workstation itself (not just the disk inside
it).
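
The arithmetic behind that $1000 floor, with assumed figures (a fully
loaded cost of about $220K/year spread over about 220 working days):

    loaded_cost_per_year = 220_000   # salary + benefits + workspace, assumed
    working_days = 220
    lost_user_day = loaded_cost_per_year / working_days   # $1000

    rebuild_hours = 4    # admin time: new disk, OS, apps, personalization (assumed)
    admin_rate = 75      # $/hour, assumed
    print(lost_user_day + rebuild_hours * admin_rate)     # $1300, before counting
                                                          # any lost data at all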
> particularly if the cost of downtime or data exposure is high.
Right again - it's amazing how you can draw such garbage conclusions
from basically correct input.
Centralized storage largely eliminates workstation downtime due to
storage problems - and with suitable snapshot-style facilities (let
alone the various 'continuous data protection' mechanisms which are
beginning to appear) can significantly (or in the case of CDP
completely) protect the workstation user from *any* loss of persistent
data, even due to fumble-fingers or active malware.
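
The recovery-point difference between the two is easy to model. A toy
sketch (no particular vendor's mechanism implied): periodic snapshots
can lose up to one interval's worth of updates, while CDP journals
every write and so loses essentially nothing.

    class SnapshotStore:
        """Keeps a point-in-time copy every `interval` seconds."""
        def __init__(self, interval):
            self.interval = interval
            self.snapshots = []                  # list of (time, state-copy)

        def maybe_snapshot(self, now, state):
            if not self.snapshots or now - self.snapshots[-1][0] >= self.interval:
                self.snapshots.append((now, dict(state)))

        def restore(self, crash_time):
            # Latest snapshot at or before the crash: up to `interval`
            # seconds of work is simply gone.
            ok = [s for t, s in self.snapshots if t <= crash_time]
            return ok[-1] if ok else {}

    class CDPStore:
        """Journals every write; the recovery point is effectively zero."""
        def __init__(self):
            self.journal = []                    # list of (time, key, value)

        def write(self, now, key, value):
            self.journal.append((now, key, value))

        def restore(self, crash_time):
            state = {}
            for t, k, v in self.journal:
                if t <= crash_time:
                    state[k] = v
            return state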
> The cost of a PC class system is not much more than the cost of
> terminals, because fewer people use terminals.
Right yet again. Which is why it makes little sense to place CPUs back
in the data center along with the data they access: unlike storage,
cycles are *not* as effectively provided remotely, are *not* as easily
time-shared without potential loss of performance, and *are* easily
replaceable with complete transparency (e.g., if someone's diskless
desktop unit dies, it can be replaced in 5 minutes with another equally
usable one).
The servers that can be used to provide that centralized data access are
based upon the same very inexpensive components that the PCs themselves
are based on, and running low on server power (you won't run out of
server *capacity* unless you're saving a bundle by using far
fewer/smaller disks than you would have been using in individual
workstations: central capacity planning is just another benefit of the
system) just means plugging in a few more servers.
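
To see why the shared pool is unlikely to run short, consider the
aggregate arithmetic (utilization figures assumed for illustration):
individual disks must each be sized for their own worst case, while a
central pool only has to cover actual aggregate use plus headroom.

    workstations = 500
    local_disk_gb = 250      # what you'd buy per seat (assumed)
    avg_used_gb = 40         # typical actual use per seat (assumed)
    headroom = 1.5           # size the pool 50% above aggregate use

    local_total = workstations * local_disk_gb           # 125,000 GB bought
    pool_needed = workstations * avg_used_gb * headroom  #  30,000 GB suffices
    print(pool_needed / local_total)                     # 0.24 - under a quarter
                                                         # of the raw local space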
The only real remaining argument for operating independence beyond that
provided by diskless workstations is for systems, such as laptops, which
are customarily disconnected from the network. And even there it's
questionable whether it makes sense to treat *corporate* (rather than
personal) laptops more as independent entities that receive periodic
central services like backup than as caching devices which can survive
periods of independence but are basically part of the normal structure
(with all data up to the last point of disconnection mirrored centrally).
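
A toy sketch of that caching model (no real sync protocol implied): the
laptop works against its local cache while disconnected and mirrors
every change centrally the moment it reconnects.

    class CachingLaptop:
        def __init__(self, central):
            self.central = central           # stands in for the corporate store
            self.cache = dict(central)
            self.pending = []                # writes made while disconnected
            self.connected = True

        def write(self, key, value):
            self.cache[key] = value
            if self.connected:
                self.central[key] = value    # mirrored immediately
            else:
                self.pending.append((key, value))

        def disconnect(self):
            self.connected = False

        def reconnect(self):
            self.connected = True
            for key, value in self.pending:  # replay offline changes
                self.central[key] = value
            self.pending.clear()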
A lot of NAS vendors haven't quite gotten to where start-ups like Isilon
are yet, but it won't take them long (IBM is sort of there already,
albeit with its focus on very large systems and expensive storage
hardware; HP's mid-range EVA line at least offers the kind of
incremental expandability required, though in a box with hard limits on
eventual size and, again, major up-front costs; and NetApp's integration
of Spinnaker is a step in the same direction). Windows isn't yet
necessarily as amenable to diskless operation as it might be, but that
won't take *all* that much work (a great deal of it merely involves
suitably segregating read-only from updatable data - as more
time-sharing-oriented OSs have always done).
But those aren't *architectural* problems.
- bill