Windows RAID

  • Thread starter: Tom Del Rosso
Tom Del Rosso said:
Has anyone else modified XP to give it the RAID 5 capability of Windows
Server?
According to the result of his test (porting the Windows RAID to another
system), it looks like Dynamic Disks are a lot more robust than I expected.

http://www.tomshardware.com/reviews/windowsxp-make-raid-5-happen,925.html

In principle, software RAID is more reliable and robust
than hardware RAID, as the overall system is simpler and
better adjusted. My experience with Linux software RAID
confirms that. Unless MS messed up (again), there is no
reason why Windows software RAID should have robustness
problems either.

Arno
 
Haven't tried it, mind you; now that I have Windows 7 Ultimate, the RAID
capability is already enabled by default. However, Windows will not
build RAID arrays using drives attached via USB, only on internal drives.

It is a mystery to me why these morons at MS cannot treat storage
uniformly like everybody else. Or maybe this is just to reduce
support calls.
Yes, in fact, I prefer software RAID over hardware RAID, as the software
RAID is portable between systems and you don't have to worry about
maintaining the same hardware controllers. You could change the
motherboard, the processor, everything, and as long as Windows runs on
that hardware, your RAID will be able to come back with you.

Indeed. Performance should also be fine, unless you use SSDs in
your RAID; there it may depend on your controllers and
other factors. But with SSDs, most hardware controllers will
also be slow.

Arno
 
Arno said:
In principle, software RAID is more reliable and robust
than hardware RAID, as the overall system is simpler and
better adjusted. My experience with Linux software RAID
confirms that. Unless MS messed up (again), there is no
reason why Windows software RAID should have robustness
problems either.

You know Linux RAID, but I gather that you don't have experience with
Windows Dynamic Disks. The fact that they enable RAID and resizable
partitions at the same time makes me think it might be a fragile design,
because they might have changed too much at once when developing the format,
but that's just a suspicion and I'm becoming less suspicious of it.
 
David Brown wrote
Arno wrote
RAID using external disks has an additional challenge - you need to
have the disks online before enabling the raid. That would require an
interface and procedure to separate the concepts of plugging in a
drive, and mounting a file system
Yes.

- something that neither windows nor its users are ready for.
Wrong.

Even with mounting after the drives are all connected, it's quite easy for a user to get the ordering wrong and end up
with degraded or failed arrays.
Yes.

Remember, windows has to be dumbed down to avoid
confusing users - it doesn't really cater for expert users.

Oh bullshit, given Windows Server and NT and 2K.
I've no doubt that MS /could/ have supported raid on removable drives - it may even have been easier for them to do
so. But they can't give users a tool that is too risky.

They can and do in plenty of areas.

No reason why those mounting issues can't be dealt with safely, but it's
understandable if they chose not to support RAID in that config as well.

It isn't the only thing that deliberately isn't supported with external removable drives.
 
David Brown wrote
Rod Speed wrote
When you plug a removable disk into a windows system, it is "mounted"
- it is given a drive letter.

Not necessarily, it doesn't have to be with modern versions of Win.
The only exception I know of is if the drive (or partitions on it) are unformatted, or in a format Windows doesn't
understand.

Then you need to get out more. It hasn't been like that for a long time now.
AFAIK, there is no way to mount or unmount the file system on a removable disk.

You're wrong.
It's possible to "mount" a file system on a directory (junction points, if that's the right term)

No it isnt.
- but that's something very rarely used.

Wrong, again.
There are a lot of things I can easily do with disks and filesystems in Linux that are (to my knowledge) impossible on
Windows

Wrong, again.
- even server versions (which, I agree, have more advanced features). An example here would be different raid types
on different types of media, or layered raids.

Wrong, again.
There isn't any technical reason why Windows couldn't
support, for example, raid 5 built on top of raid 1 mirrors.

You can do that if you want to.
But the more of that sort of thing you add, the bigger the chance of people getting confused or having accidents.

And Win doesn't stop you doing that.
Well, they gave users Internet Explorer :-)

Pathetic.

No reason why those mounting issues can't be dealt with safely, but it's understandable if they chose not to support RAID in that config as well.
I agree that the decision is both deliberate and understandable. But it is sacrificing flexibility for expert users
to avoid non-experts
making mistakes.
Nope.

MS is a business - they will balance costs like
support calls against the worth of features.

**** all Win users bother to pay for support.
 
RAID using external disks has an additional challenge - you need to have
the disks online before enabling the raid. That would require an
interface and procedure to separate the concepts of plugging in a drive,
and mounting a file system - something that neither windows nor its
users are ready for.

I know that. But in a sane OS design the RAID layer will not
know whether a disk is external or not.
Even with mounting after the drives are all connected, it's quite easy
for a user to get the ordering wrong and end up with degraded or failed
arrays.

If that is the case with MS software RAID, then they truly messed
it up. The order of the disks should be detected correctly by the
software. With Linux software raid, you can plug them in in any order,
on any storage interface you like. Autodetection will assemble
the RAID correctly.
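
For reference, on Linux that amounts to something like the following
sketch (the device names are only examples; each member's superblock
records its role, so the order you give them in never matters):

mdadm --assemble --scan     # find and assemble every array whose members are present
# or name the members explicitly, in any order you like:
mdadm --assemble /dev/md0 /dev/sdd1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat            # confirm the array came up complete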
Remember, windows has to be dumbed down to avoid confusing
users - it doesn't really cater for expert users. I've no doubt that MS
/could/ have supported raid on removable drives - it may even have been
easier for them to do so. But they can't give users a tool that is too
risky.

And so they break the whole storage design. Figures.
Well, Windows is the toy in OS-land.

The biggest problem with software raid on windows is that you can't use
it for your windows system drive. It's only for data drives (or
additional program drives in non-standard directories).

What? That is its main usage scenario! Stupid, stupid.
No such limitation on Linux.

Arno
 
I don't mean the physical order of the disks (according to the linked
article, you can re-arrange the disks as you fancy, just as with most
raid systems) - I think I was a bit unclear. I mean the order of
actions - if you've started mounting the filesystem(s) before assembling
the raid, or started assembling the raid before you've connected all the
drives or waited for them to be registered by the OS. You can get this
sort of thing wrong with other systems too - if you have a Linux raid
set with metadata format 0.90 (at the end of the disk) and use the drive
as an LVM physical volume, then at boot time it could be that LVM grabs
the disk before md.


So that is why the newer metadata is at the start. I wondered about that,
since the end is much better when only considering RAID.

As to starting an incomplete array, the solution is rather obvious:
"You are attempting to start an incomplete array. This removed redundancy
and may cause data loss. Are you sure you want to do this?"
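
Linux md already works that way, roughly like this (md0 and the member
names are just examples):

mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1        # one member of three missing: refuses to start
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1  # insist, accepting the degraded state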

As Windows is targeted at inept users, I can understand making
the safe behaviour the default. Not having the other behaviour
available though marks Windows as an unprofessional tool. I do
know there are people that understand what they are doing
and still have to use windows. Inconveniencing them is
unforgivable.

Arno


 
David Brown wrote
Yousuf Khan wrote
Yes, but that removes the drive from the system. If you plug a USB stick with a FAT32 (or NTFS) file system into a
Windows PC, it automatically gets assigned a drive letter

Not necessarily.
- the file system is mounted. If you "safely remove device", the device is disconnected - it is not just the file
system that is unmounted. It's gone from the list of usb devices, and gone from the "disk management" part of
"computer management".

And can be scanned for again.
I haven't tried this with a USB device with multiple partitions, as I don't have one handy, but it would be
interesting to see what happens then.

Nothing special.
Of course, you may well ask /why/ someone would want to attach a
device and not mount the filesystems on it, or unmount a filesystem
without removing the device. I can think of several use-cases that I have used on Linux systems, that I cannot - as
far as I know - do on Windows.

You're wrong, again.
One is for rescuing disks or filesystems with problems - I prefer to
make a low-level copy of the partition before attempting filesystem
repair or other recovery, and before even mounting the filesystem
(which may then trigger a repair).

Thats optional with Win.
On Linux, I use dd (or dd_rescue) to copy the whole partition to an image file on another disk. That file can then be
copied again, and mounted as a loopback device - if something
goes wrong, at least no more damage is done.
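
A sketch of that workflow, assuming GNU ddrescue, an ext4 filesystem,
and example device names:

ddrescue /dev/sdb1 sdb1.img sdb1.map   # copy past read errors, logging progress to a map file
cp sdb1.img work.img                   # keep the first image pristine; repair a second copy
fsck.ext4 -fy work.img                 # filesystem repair runs against the image file
mkdir -p /mnt/rescue
mount -o loop work.img /mnt/rescue     # mount the repaired copy through a loop device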

You can do that in Win, trivially.
Another is for use with virtual machines - sometimes you want them to have direct access to a partition on a disk
without the host OS being involved.

You can do that in Win, trivially.
And of course there are the simple cases of sometimes wanting to mount a filesystem as read-only to avoid accidents,
or using
alternative mount options (a concept unknown in Windows, AFAIK).

You're wrong, again.
I haven't used Unix, as such, for a couple of decades (SunOS and
Solaris count as "Unix", don't they?). But on Linux, layering is
standard practice and doesn't involve any extra software.
The most common layering arrangement is LVM and md raid. This is the setup I have on a server:
Three disks, each partitioned into three partitions (sd?1 at 1G, sd?2
at 2G, sd?3 filling the rest of the disk). The three sd?1 partitions are tied together as a three-disk RAID1 set md0
for /boot. The point of using RAID1 here is that from the bootloader's viewpoint, it can view
the disk as a simple one. Next, the three sd?2 partitions are tied as
a three-disk RAID10,far set md1 for swap. Then the remaining three
sd?3 partitions are a three-disk RAID10,far set md2. This is used as a
physical volume for an LVM setup. Then there are a number of logical
partitions (for /, /home, /var, and for the various light-weight
openvz virtual machines on the host) on the LVM base.
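Creating that layout is roughly the following (a sketch with example
device and volume names, not the exact commands used):

# /boot: three-way mirror; 0.90 metadata sits at the end of each member,
# so the bootloader just sees an ordinary filesystem
mdadm --create /dev/md0 --level=1 --raid-devices=3 --metadata=0.90 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
# swap: three-disk RAID10 with the "far" layout
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=3 \
      /dev/sda2 /dev/sdb2 /dev/sdc2
mkswap /dev/md1
# main pool: RAID10,far again, handed to LVM as a physical volume
mdadm --create /dev/md2 --level=10 --layout=f2 --raid-devices=3 \
      /dev/sda3 /dev/sdb3 /dev/sdc3
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 20G -n root vg0    # then one logical volume per filesystem: /, /home, /var, ...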
This setup has a lot of positive sides. It is redundant - any single
disk can die without stopping the system, or even making it much
slower (disk throughput drops by about a third), and it will boot
without problems. It is flexible - each filesystem is on its own
logical partition, and I can increase the size quickly while the
filesystem is still mounted and online (decreasing the size needs to
take the filesystem offline). I can take snapshots of the filesystems (good for backup purposes, or while testing
changes). Disk reads are on average faster than if the three disks were in a RAID0 set, because data is mostly read
from the faster outer cylinders - this also improves access times. Disk writes are a bit slower since two copies must
always be written.
Here's another setup I made while testing and learning about raid. It
was a temporary setup - I don't have any servers with enough disks to
make this a reality, but if I /did/ have such a big server, it would be a possible arrangement (though I'd use raid6
instead of raid5). I made a set of 4 image files of 512MB on a tmpfs system (that's basically a
ram-based file system, backed if necessary by swap), with these
configured as loopback devices so that the OS can use them just like
disks or partitions. I then made a set of 4 single-disk raid1
"mirrors", each using one of the loopback devices. All four raid1
"mirrors" were then used to form a raid5 set. I put a filesystem on
that raid5 set, mounted it, and put some files there (purely for
testing). The beauty of making the raid5 on the single-disk raid1
sets comes when swapping out disks, such as for upgrading the size of
the array or when swapping out a failing drive. So I made another set of
4 loopback devices at 1G each. Then for each raid1 set, I added a 1G
"disk" to the existing 512MB disk and let it synchronise (at 512M).
Then I removed the 512MB disks, and re-sized each raid1 array to 1G.
(In a real system, you can do that operation one raid1 at a time, or
in parallel if you have enough free disk slots.) Then I resized the
raid5 array to use the 1G raid1 sets, then resized the filesystem.
Now, that all sounds very complicated - especially compared to the
traditional method of just swapping out a disk with a new bigger one
and letting the raid5 rebuild, then moving on to the next disk. But at every stage, my system had at least one disk
redundancy - with the
traditional method, you are vulnerable to disaster during each
rebuild. And it would work just as well with raid6.
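In mdadm terms, each per-mirror swap runs roughly like this (md10 is
one of the single-disk raid1 sets, md20 the raid5 on top; all names
are examples):

mdadm /dev/md10 --add /dev/sdf1               # the new, larger disk joins as a spare
mdadm --grow /dev/md10 --raid-devices=2       # promote it: a real 2-way mirror, resync starts
# after the resync finishes:
mdadm /dev/md10 --fail /dev/sde1 --remove /dev/sde1   # retire the old, small disk
mdadm --grow /dev/md10 --raid-devices=1 --force       # back to a single-member mirror
mdadm --grow /dev/md10 --size=max             # let the mirror use the new disk's full size
# once every mirror has been upgraded:
mdadm --grow /dev/md20 --size=max             # grow the raid5 into the larger members
resize2fs /dev/md20                           # and finally grow the filesystem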

Another way to resize a raid5 array safely in Linux is to add an extra
drive (external USB if you want) and temporarily turn the array into a
raid6 with an odd layout - keeping the second parity on the external
disk (this keeps the other disks in standard raid5 layout, and avoids
any unnecessary data movement). This gives you redundancy while you
swap out the disks in the main array, waiting for a resync between
each disk change. Finally, you can remove the external disk and move
back to raid5.
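That trick maps onto mdadm roughly as follows (md0 and sdx1 are example
names; --layout=preserve is what keeps the original members in raid5
layout, with the extra parity confined to the new disk):

mdadm /dev/md0 --add /dev/sdx1                       # the temporary external disk
mdadm --grow /dev/md0 --level=6 --layout=preserve    # raid5 -> raid6 without relocating data
# ... swap out the main disks one at a time, waiting for each resync ...
mdadm --grow /dev/md0 --level=5                      # drop back to plain raid5
mdadm /dev/md0 --remove /dev/sdx1                    # and retire the helper disk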
Oh, and Linux supports raid6 in software - for free. And it may
support triple-parity raid too. And it supports RAID10 sets on two
or three disks, with a layout that is significantly faster for most
operations than traditional RAID10.
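A two-disk RAID10,far set, for instance, is just (example names again):

mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda1 /dev/sdb1   # two copies of everything; reads stripe across both drives' fast halves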
Are these things sophisticated? Yes. Are they useful? Absolutely.
Are they possible with Windows? The answer is somewhere between "no",
and "yes, if you have enough money".

You're wrong, again.
 
David Brown wrote
Yousuf Khan wrote
eSATA is just SATA with an external cable,

Wrong, again.
http://en.wikipedia.org/wiki/ESATA#eSATA
typically connected to the motherboard's SATA plugs

Wrong, again.
http://en.wikipedia.org/wiki/ESATA#eSATA
- the OS can't tell the difference.
Hotplugging makes a difference - I don't know how Windows handles hotplugging on SATA.

It handles it fine.
Linux "system disk" - its / mount - can be on LVM, RAID5, or whatever
you fancy. No problems there.
 
David Brown wrote
Rod Speed wrote
eSATA electrical specs are a slight tightening of the original SATA specs

Significantly tightened in fact.
I think in practice you would be very hard pushed to find a SATA port that didn't also conform to the eSATA specs.

You're wrong, again.
The plug is slightly different and eSATA cables are better shielded

And can legally be much longer.
that's obviously irrelevant to the OS.

Irrelevant to that error of yours.
Actually, that's common practice unless your motherboard (or add-in card) has an explicit eSATA port.

And hordes of them do, for a reason.
If you actually /read/ the wikipedia article,

Course I read it, you pathetic excuse for a bullshit artist.
it also mentions "passive adaptors" - i.e., a cable/plug changer. The maximum cable
length is reduced to 1m instead of 2m if the interface only conforms to SATA specs.

So you are just plain wrong with your original claim.
This is, of course, the only relevant point here.

Nope, you ****ed up, again.
Does it treat the drive as "removable" (i.e., it has a "safe eject" but no dynamic disk support) or as "fixed"?

It can be set up either way.
 
David Brown wrote
Rod Speed wrote
Well, if all the drive letters are currently in use, then it won't.

That isn't the only situation where it isn't.

There has been for a long time now a check box in Disk Management
that specifies whether it gets a drive letter or not.
Other than that, this is the behaviour I have always seen.

Then you need to get out more.
If you know situations when this will not happen, feel free to give examples

Just did.
- without more information, your comments are useless.

You never ever could bullshit your way out of a wet paper bag.
You can re-scan by unplugging and re-inserting the disk. Do you know of any other way?

Yep, you don't have to do a thing physically to the drive.
What I'm curious about is whether "safely remove device" will unmount
all the partitions/filesystems, or if it is possible to unmount them separately.

Safely remove is for physical devices, not for partitions.
I assume you mean that they /can/ be done on Windows.
Yep.

If that's true, I would genuinely like to know how.

The most obvious example is when cloning hard drives.
I don't want to hear just "you're wrong" or "that's trivial"

No one gives a flying red **** what you might or might not want to hear.
no one can lend any credence to such comments.

That is a bare faced lie.
But if you can say /how/ it is done, then I'll listen. I'll admit to being wrong

How odd that you didn't when your nose was rubbed in the basics with eSATA.

You in fact just desperately attempted to bullshit your way out of your predicament, again.
but only if you can prove it, not just claim it.

Just did.

You have to explicitly repair the file system, it isn't done automatically.

You use an app that can do that, like True Image or one of the countless other imagers.
I know that I can give direct partition, disk, or usb device access
to a virtual machine (in VirtualBox, for example). But the host OS
gets to it first (mounting the filesystem), before it is accessible
to the virtual machine. I'd like to be able to avoid that.

And you can, trivially.
Again, how?

By using the OS config capabilities.

Even you should have noticed the read-only flag in Win file systems.
Do you care to expand,

You dont need enough money to do that with Win.
by showing how to achieve similar setups without additional cost (other than Windows - server version if you like),

You use a free one, stupid.
or was this just over-enthusiasm?

Nope.
 
The OS can tell the difference, if the drive is marked as removable.
That's one of the features of eSATA, hotplugging.

Ahem, normal SATA is also hotpluggable if the controller supports
it. I am doing this regularly under Linux. In fact it only depends
on the controller, not the (e)SATA type.
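
Under Linux the hotplug handling is exposed through sysfs; a sketch,
with a placeholder disk name:

for host in /sys/class/scsi_host/host*; do
    echo '- - -' > "$host/scan"           # ask each (AHCI) host adapter to rescan its ports
done
echo 1 > /sys/block/sdX/device/delete     # cleanly detach a disk before unplugging it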
Anyways, I just tried an experiment on my Windows 7, I just turned on
the eSATA-attached drive in the middle of a running session. It came on,
the Windows Autoplay started on it, and it worked pretty much like any
USB-attached drive would. So my feeling is that eSATA would
get treated much like USB drives.

Well, it should be. In Linux it is.
It works fine as long as you're using the controllers in native AHCI
mode, rather than the compatibility IDE mode.
Yes, I forgot about the mini-bootloader system in Linux. That's
something that most other Unixes didn't have either (although they had
customized BIOSes which served a similar function).
Windows does have a mini-bootloader now too.
Actually it's had a couple of them, in Win2K/XP it had NTLDR, and
now with Vista/7 it's got BOOTMGR. Still I'd say it would be a
challenge to load it off of a software raid5 partition. However, it
might be relatively easy for it to boot off of a mirrored partition.

Well, until it gets to be a supported feature, I would count
it as basically not present. Incidentally, Linux has had support for
booting off RAID1 for a long time, since a RAID1 member looks like
an ordinary drive with the 0.90 RAID superblock version, which
is stored at the end of the disk. Unless you start writing,
you do not need the RAID to be running, and the bootloader does not
write.

Arno
 
David Brown wrote
Rod Speed wrote
You insert a USB device, it gets a drive letter.

Not necessarily.
You go to Disk Management and specify that it should /not/ get a letter.

You've mangled that utterly with removable drives.
That's fair enough, and I know about that

So you were just plain wrong, again.
but the drive has first been given a letter and then you remove it.

Wrong, again.
I'm guessing that Windows will remember that decision next time it sees the same disk, but you've still not given a
way to take a disk and attach it without it getting a letter.

Wrong, again.

If you start with an unpartitioned and unformatted drive, when
the Win system first sees it, it has no drive letter at all.

When you partition and format it in Disk Management, you
specify whether you want it to have a drive letter at that time.
Since you can't explain how,

You're lying, again. You do that in the Device Manager.
I'll take that as a "no".

More fool you, again.
How do you clone hard drives from windows?

No one said anything about 'from Windows'.
/That/ was my big mistake?

Didn't say that.
You're pathetic.

You never ever could bullshit your way out of a wet paper bag.
It is started automatically if the system knows that it needs checking

Wrong, again.
(such as through an unclean shutdown, or if the force check flag is set).

Wrong, again.
Can such third-party apps access disks or partitions before the OS gets them?

Irrelevant to what you said there about what you can do with Linux.
Can they copy to an image file, which can then be mounted
and used as a normal filesystem?

Yep, the better ones can.

You can also do that with iso files too.
Third-party apps means that it is /possible/ with Windows,

Doesn't have to be a third-party app.
but certainly not "trivial"

Course it's completely trivial; any decent imager can do that.
"trivial" means it is functionality built into the OS and standard OS utilities, and easily accessible.

Wrong, again. We have different words for a reason.
Ah, yes, I'd forgotten about the free raid6 implementation for
Windows. And the free raid10 implementation that works with two or
three disks (or any other number, not just the standard multiple of 4). And the free windows raid system that works
even on the system disk. And the free logical volume manager. And the free lightweight virtualiser.

Like I said, you never ever could bullshit your way out of a wet paper bag.
 
David Brown wrote
Rod Speed wrote
OK, I can see that. I was trying to find out if there is a way to
avoid mounting the filesystem at least once even though there is a
valid windows-recognised filesystem on the USB stick. But I didn't
specify that condition in my question.

You never ever could bullshit your way out of a wet paper bag.
I can find no way to do that

Your problem.
"refresh" and "scan for hardware changes" do not find the "safely removed" USB device that is still plugged in.

Then the Win you are doing that on is ****ed.

So you have egg all over your face, again.
This part of the discussion was about the limits of Windows compared to other systems

Everyone can see for themselves that you are lying, again.
- didn't you notice?

You never ever could bullshit your way out of a wet paper bag.
So let me get this straight

Not even possible, you are much too stupid for that.
- it is, to your knowledge, not possible to clone hard drives from within Windows?

Completely irrelevant to what is being discussed, which is whether it is
perfectly possible for an app to do that if it wants to.
That would confirm my suspicions.

Pigs arse it would.
I said "I can do this on Linux". You said "You can do that in Win,
trivially - using apps like True Image". But it seems you /can't/ do
what I asked from within Windows.

Irrelevant. What the OS itself can do is completely irrelevant to what
is being discussed, you pathetic excuse for a lying bullshit artist.
If I understand the web pages for True Image correctly, you can back up a /working/ partition and filesystem from
within Windows.

You can do a hell of a lot more than just that.
But to clone a hard disk, you need to boot from a True Image CD - which apparently runs Linux.

Irrelevant to your stupid pig ignorant claim.
I know about doing it with iso files (I find "Daemon Manager" very useful for that).

So you have egg all over your face, yet again.
But I don't know how to take an image file of a FAT32 or NTFS
filesystem and mount it as a directly accessible file system from
within Windows.

Irrelevant to what is being discussed.

If an app can do it, it can be done.
I didn't see any indication of this in True Image's feature lists,

Then you need new glasses, BAD.
but obviously you know its features much better than I.

And with Win in spades.
So tell me straight up - can I do the following within Windows:

Irrelevant to what is being discussed.
dd if=/dev/sdb1 of=fs.img    # raw copy of the partition into an image file
mkdir m                      # a directory to use as a mount point
mount -o loop fs.img m       # attach the image via a loop device and mount it
ls m                         # the image now behaves like any other filesystem
In Linux, I've made an image of a disk partition to a file - without any risk of modifying the partition.

Any decent imaging app can do that.
I've then mounted the image as a normal read-write file system, accessible from any applications just like any other
file system.

Any decent imaging app can do that.
Can I do the same on Windows?

Any decent imaging app can do that.
Can I do it with True Image, or other software?

Any decent imaging app can do that.
Do you mean I can mount image files as without any third-party applications?

Irrelevant to what is being discussed.
So which of these "decent imagers" are /not/ third-party?

Irrelevant to what is being discussed.
I.e., which "decent imagers" are included in a standard off-the-shelf Windows installation?

Irrelevant to what is being discussed.
Let's simplify this to something even you can understand.

You never ever could bullshit your way out of a wet paper bag.
Can I set up a raid6 array within Windows, for no cost other than the disks?

Irrelevant to what is being discussed.
Obviously it is /possible/ with Windows, by buying an expensive hardware raid6 card.

That ain't the only way to do it.
Can I set up a raid10 array within Windows on two disks?

Irrelevant to what is being discussed.
To my knowledge, /no/ hardware raid card supports this, so you can't even buy a solution.

Irrelevant to what is being discussed.
It seems that some hardware raid cards (such as Intel's server cards) will do some striping if you build raid1 with
more than two disks, but still don't take full advantage of the disks' bandwidth.

Irrelevant to what is being discussed.
 
David Brown wrote
Yousuf Khan wrote
Journalling on a filesystem doesn't help here - but if the OS supports
COW snapshotting then that allows online disk imaging. I was referring to offline disk imaging to a file, but it is
interesting to know that online disk imaging is also possible.

And dd doesn't do that.
I assume that's what third-party apps like True Image use?

Yes, TI does, but not all of them do, particularly the dinosaurs.
Changing the permissions is something very different from mounting the
filesystem as read-only.
Nope.
This, I think, is the key point
Nope.

- it is not something people would much want to do.

But you can do it anyway.
It's the sort of thing that can be useful to experts on odd occasions.

It isn't just useful to experts on odd occasions.
But to include support for mounting a filesystem as read-only in windows would mean interfaces would need to be
created,
options would need to be added to Device Manager, documentation and training would have to support it, etc., etc.

Wrong, again. Particularly with training, because you claim that
only experts would use it. They don't need training.
The actual /implementation/ would be trivial. But the cost of everything around the feature would be too high for
something so rarely needed.

Wrong, again.
I know I have never actually felt the need to mount a filesystem as read-only in Windows. But I also know that on the
occasions I /have/ wanted to mount something read-only, I've had Linux CDs available for the job.

It's possible to do it with Win, even if you don't realise that.
That's one reason - but there are others. For example, / is re-mounted as read-only during shutdown, so that it is
static while services like
raid and lvm are closing. That's not really user-visible, but helpful
to the system. But I think the most common user-visible use of
read-only mounts is when booting from a live CD. It means you have
access to your files, but can be sure that you won't accidentally
change anything. This is especially useful if you are working with a
Windows machine that has had trouble - you probably don't want to
mount the NTFS drives in read-write mode, or do any file system
checks from within Linux, as the Linux NTFS drivers are not fully capable of handling unclean NTFS shutdowns.
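
That read-only mount is a one-liner from a live CD (assuming the
ntfs-3g driver; the device name is an example):

mount -t ntfs-3g -o ro /dev/sda2 /mnt/windows   # nothing on the NTFS volume can be modified
ls /mnt/windows                                 # inspect the files without any risk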

You can do that in Win.
There is no such thing as "server" and "desktop" editions of Linux.

Wrong, again.
Some distributions are aimed at particular targets, and come
pre-installed with different utilities, or provide more server-oriented or desktop-oriented options during
installation.

Same thing, different words.
But the capabilities of the system are the same.

Wrong, again.
LVM has been around since 1998, and Linux software raid has been around since at least 1996 - that's the oldest
reference I could find. Layered block devices and raid are not new to Linux (though there have been continual
improvements and enhancements, and there are more in the works).

Just like with any OS.
User and administrator tools for conveniently managing these systems
is a different matter - these have come from server-oriented distributions or even third-party apps.

Just like with Win.
There are still a few features that are available with top-range
third-party systems that you can't yet do natively in Linux.

Just like with Win.
A couple of examples are hierarchical volume management, and transparent
de-duplication (there are a few projects for this, but nothing mainstream).

Just like with Win.
These things are more for server markets than for desktop users. And people using windows servers are used to paying
lots of money

Wrong, again.
- you don't need sophisticated software raid if you just buy a hardware raid solution (though Linux software raid is
more flexible than hardware raid cards).

So your claim has just blown up in your face and covered you with black stuff, again.
And people wanting more complicated setups will either buy third-party systems, or use Linux, or both.

Or neither.
 
David said:
Rod Speed wrote
<snip the moody teenager rants>

Like I said, you never ever could bullshit your way out of a wet paper bag.

What you actually snipped was your face down in the mud, as usual.
Well, I know I've learned a bit in this thread - Windows has a few capabilities that I didn't know about before.
Again.

Linux is still more flexible,

That's a lie, most obviously with COW imaging.
with a cleaner and more consistent way of handling
filesystems, block device layering, raid, etc.

Another lie.
And it will still be the first port of call when I want to do something out of the ordinary.

You have always been, and always will be, completely and utterly irrelevant.
But Windows can do a bit more than I thought, especially
when using additional software at additional expense.

Doesn't have to be at additional expense.
But as usual, I've learned more from helpful, informative posters
(Yousuf in particular) rather than the grumpy kid with his automated
replies. Rod, it's clear that you know a lot about Windows, and some
types of programs (True Image in particular). But when you can't
communicate, it's of little help to anyone.

Everyone can see for themselves that you are desperately attempting to
bullshit your way out of your predicament, just like you always do when
you have got done like a ****ing dinner, time after time after time.

Anyone with any balls would just take it on the chin without this sort of childish shit.
 
Journalling on a filesystem doesn't help here - but if the OS supports
COW snapshotting then that allows online disk imaging. I was referring
to offline disk imaging to a file, but it is interesting to know that
online disk imaging is also possible. I assume that's what third-party
apps like True Image use?

Yes, as well as Windows' own pre-installed backup utilities. Some backup
utilities have been around since before VSS (like the one I'm using,
Macrium Reflect), and they came up with their own snapshotting services,
so they can use either their own, or the Microsoft one.
This, I think, is the key point - it is not something people would much
want to do. It's the sort of thing that can be useful to experts on odd
occasions. But to include support for mounting a filesystem as read-only
in windows would mean interfaces would need to be created, options would
need to be added to Device Manager, documentation and training would
have to support it, etc., etc. The actual /implementation/ would be
trivial. But the cost of everything around the feature would be too high
for something so rarely needed.

I don't think it's even too useful to experts, whether it's an odd
occasion or not. Unix experts find it useful because that's the way
things were done on Unix for years, and was necessary there. A lot of
repair on Unix is done manually after the basic automatic repairs fail
(like using alternative Superblocks to fix a badly munched filesystem,
requires human intelligence). On Windows that sort of manual repair work
isn't necessary; the utilities are a bit more intelligent.

I can't think of a time when the file system repair utilities couldn't
fix everything themselves in Windows. The only interaction necessary was
whether you want to run the quick repair or the thorough repair. In
fact, I've found the NTFS repair utilities in Linux to be much simpler
to use than the various Unix filesystem fsck's.

Windows won't even let you mount a filesystem unless it's been marked
clean by the filesystem repair utils. I know I can mount NTFS in Linux
as read-only if it hasn't been fully repaired. But in Windows the same
filesystem has to be totally repaired before it can be mounted. I've
found it useful to mount an NTFS filesystem read-only in Linux just to
make sure all of the files I'm looking for are still there, but I'd
eventually find out the same thing anyways after I repaired and mounted
it properly in Windows. It's just a matter of patience and how much of
it you have.
That's one reason - but there are others. For example, / is re-mounted
as read-only during shutdown, so that it is static while services like
raid and lvm are closing. That's not really user-visible, but helpful to
the system.

That's just an internal issue for the OS itself. That's how Unix handles
it, Windows has its own other way of doing it.
But I think the most common user-visible use of read-only
mounts is when booting from a live CD. It means you have access to your
files, but can be sure that you won't accidentally change anything. This
is especially useful if you are working with a Windows machine that has
had trouble - you probably don't want to mount the NTFS drives in
read-write mode, or do any file system checks from within Linux, as the
Linux NTFS drivers are not fully capable of handling unclean NTFS
shutdowns.

I did talk about that above. The Linux NTFS repair utilities aren't that
bad anymore, and as I said, they seem to have less manual repair modes
than Linux's own filesystem fsck's. As I said above, I sometimes do like
to boot into Linux to quickly view the contents of an unrepaired NTFS
filesystem, but I could just as easily let the filesystem get repaired
properly in Windows which would take a little while depending on
complexity of repairs and size of the filesystem.
There is no such thing as "server" and "desktop" editions of Linux. Some
distributions are aimed at particular targets, and come pre-installed
with different utilities, or provide more server-oriented or
desktop-oriented options during installation. But the capabilities of
the system are the same.

So that simply means, despite the long argument, that there _are_
separate server and desktop editions.

It's not any different than Windows. The server and desktop editions of
Windows are basically the same in the end too, just packaged differently.
LVM has been around since 1998, and Linux software raid has been around
since at least 1996 - that's the oldest reference I could find. Layered
block devices and raid are not new to Linux (though there have been
continual improvements and enhancements, and there are more in the works).

You'll find that internal RAID schemes in hardware storage arrays are
not nearly as sophisticated as all of these software RAID schemes, but
nobody really misses the sophistication.
These things are more for server markets than for desktop users. And
people using windows servers are used to paying lots of money - you
don't need sophisticated software raid if you just buy a hardware raid
solution (though Linux software raid is more flexible than hardware raid
cards). And people wanting more complicated setups will either buy
third-party systems, or use Linux, or both.

In my Solaris days, I've seen people continuing to use Veritas Volume
Manager along with hardware raid arrays. They'd map a simple software
volume on top of a hardware raid volume for easy management in a shared
cluster filesystem arrangement. The software volumes had features that
allowed them to manage cluster filesystems. So they'd be paying for both
the expensive hardware array and the expensive software volume manager
for the same purpose.

Yousuf Khan
 
David said:
Yousuf Khan wrote
Not really, no.

Fraid so.
The kernel is the same, and you can install server-oriented or desktop-oriented packages on either sort of distro.

And you can choose either approach with Win too.
So while particular features or apps may be developed with a
particular target audience in mind, they are available to all users.

Same with Win, you get to choose which version you want to use.
The different flavours of Windows are also basically the same.

Wrong, again.
But there are differences in the apps and utilities that you get - many of which you cannot get for the other
editions.

**** all in fact.
There are also differences in the kernels,

Wrong, again. With Win7 there is no difference whatsoever.
though many restrictions are from licensing rather
than technical issues or changes in the code.

Because they choose to charge more for the fancier features, stupid.
For example, different editions have limitations on the number of cores you can use,

Wrong, again.
or the size of memory supported.

That's just because of the limitations of 32-bit systems.
Some editions (aimed at small, cheap notebooks) have limitations on the number of applications you can run at a time.
And you can't swap between them - you can (I believe) use a server system as a desktop,

Course you can.
but you can't install a desktop edition of Windows and use it like a full-featured server edition.

Because the full featured system costs more, stupid.
You can't just install the additional apps (and pay an appropriate new license fee) and turn it into a Windows server.

Wrong, again. That's what in-place upgrades do.
 