Journalling on a filesystem doesn't help here - but if the OS supports
COW snapshotting then that allows online disk imaging. I was referring
to offline disk imaging to a file, but it is interesting to know that
online disk imaging is also possible. I assume that's what third-party
apps like True Image use?
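For the offline case, imaging is essentially a raw block-for-block copy of the unmounted device. A minimal sketch, with a scratch file standing in for the real disk (any /dev/sdX name is only a placeholder) so it can run without root:

```shell
# Minimal sketch of offline disk imaging as a raw block copy.
# A scratch file stands in for the real (unmounted) block device.
disk=$(mktemp)                    # stand-in for the real /dev/sdX
image=$(mktemp)                   # the image file being produced
dd if=/dev/urandom of="$disk" bs=1M count=4 status=none   # fake disk contents
dd if="$disk" of="$image" bs=1M status=none               # the imaging step itself
cmp -s "$disk" "$image" && echo "image matches source"
rm -f "$disk" "$image"
```

On a real machine the input would be the unmounted block device, and the filesystem must stay static while the copy runs - which is exactly the gap that VSS/COW snapshots close for the online case.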
Yes, as well as Windows' own pre-installed backup utilities. Some backup
utilities have been around since before VSS (like the one I'm using,
Macrium Reflect), so they came up with their own snapshotting services;
they can use either their own or the Microsoft one.
This, I think, is the key point - it is not something people would much
want to do. It's the sort of thing that can be useful to experts on odd
occasions. But to include support for mounting a filesystem as read-only
in Windows would mean interfaces would need to be created, options would
need to be added to Device Manager, documentation and training would
have to support it, etc., etc. The actual /implementation/ would be
trivial. But the cost of everything around the feature would be too high
for something so rarely needed.
I don't think it's even too useful to experts, whether it's an odd
occasion or not. Unix experts find it useful because that's the way
things were done on Unix for years, and it was necessary there. A lot of
repair on Unix is done manually after the basic automatic repairs fail
(like using alternate superblocks to fix a badly munched filesystem,
which requires human intelligence). On Windows that sort of manual
repair work isn't necessary; the utilities are a bit more intelligent.
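The alternate-superblock repair mentioned above can be sketched on a plain file image, so no root or spare disk is needed. This assumes e2fsprogs (mke2fs, e2fsck) is installed; the block numbers follow from the 1 KiB block size chosen here, where the first backup superblock sits at block 8193:

```shell
# Sketch of manual fsck work: repairing an ext2 image whose primary
# superblock is damaged, by falling back to a backup superblock.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mke2fs -F -q -b 1024 "$img"                       # 16384 blocks -> 2 groups
mke2fs -n -F -b 1024 "$img" | grep -i superblock  # list where backups live
dd if=/dev/zero of="$img" bs=1024 seek=1 count=1 conv=notrunc status=none
                                   # clobber the primary superblock (block 1)
e2fsck -p "$img" || true           # automatic ("preen") repair gives up here
e2fsck -y -b 8193 -B 1024 "$img" || true  # manual repair via backup at 8193
e2fsck -n "$img" && echo "filesystem recovered"
rm -f "$img"
```

The `|| true` guards are there because e2fsck deliberately exits non-zero both when it refuses the damaged superblock and when it has corrected errors; the final read-only pass confirms the filesystem is clean again.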
I can't think of a time when the file system repair utilities couldn't
fix everything themselves in Windows. The only interaction necessary was
whether you want to run the quick repair or the thorough repair. In
fact, I've found the NTFS repair utilities in Linux to be much simpler
to use than the various Unix filesystem fsck's.
Windows won't even let you mount a filesystem unless it's been marked
clean by the filesystem repair utils. I know I can mount NTFS in Linux
as read-only if it hasn't been fully repaired. But in Windows the same
filesystem has to be totally repaired before it can be mounted. I've
found it useful to mount an NTFS filesystem read-only in Linux just to
make sure all of the files I'm looking for are still there, but I'd
eventually find out the same thing anyways after I repaired and mounted
it properly in Windows. It's just a matter of patience and how much of
it you have.
That's one reason - but there are others. For example, / is re-mounted
as read-only during shutdown, so that it is static while services like
raid and lvm are closing. That's not really user-visible, but helpful to
the system.
That's just an internal issue for the OS itself. That's how Unix handles
it; Windows has its own way of doing it.
But I think the most common user-visible use of read-only
mounts is when booting from a live CD. It means you have access to your
files, but can be sure that you won't accidentally change anything. This
is especially useful if you are working with a Windows machine that has
had trouble - you probably don't want to mount the NTFS drives in
read-write mode, or do any file system checks from within Linux, as the
Linux NTFS drivers are not fully capable of handling unclean NTFS
shutdowns.
I did talk about that above. The Linux NTFS repair utilities aren't that
bad anymore, and as I said, they seem to have fewer manual repair modes
than Linux's own filesystem fsck's. As I said above, I sometimes do like
to boot into Linux to quickly view the contents of an unrepaired NTFS
filesystem, but I could just as easily let the filesystem get repaired
properly in Windows, which would take a little while depending on the
complexity of the repairs and the size of the filesystem.
There is no such thing as "server" and "desktop" editions of Linux. Some
distributions are aimed at particular targets, and come pre-installed
with different utilities, or provide more server-oriented or
desktop-oriented options during installation. But the capabilities of
the system are the same.
So that simply means, despite the long argument, that there _are_
separate server and desktop editions.
It's not any different than Windows. The server and desktop editions of
Windows are basically the same in the end too, just packaged differently.
LVM has been around since 1998, and Linux software raid has been around
since at least 1996 - that's the oldest reference I could find. Layered
block devices and raid are not new to Linux (though there have been
continual improvements and enhancements, and there are more in the works).
You'll find that internal RAID schemes in hardware storage arrays are
not nearly as sophisticated as all of these software RAID schemes, but
nobody really misses the sophistication.
These things are more for server markets than for desktop users. And
people using Windows servers are used to paying lots of money - you
don't need sophisticated software raid if you just buy a hardware raid
solution (though Linux software raid is more flexible than hardware raid
cards). And people wanting more complicated setups will either buy
third-party systems, or use Linux, or both.
In my Solaris days, I saw people continuing to use Veritas Volume
Manager along with hardware raid arrays. They'd map a simple software
volume on top of a hardware raid volume for easy management in a shared
cluster filesystem arrangement. The software volumes had features that
allowed them to manage cluster filesystems. So they'd be paying for both
the expensive hardware array and the expensive software volume manager
for the same purpose.
Yousuf Khan