Re-post: RAID Question

  • Thread starter: Ron

Ron

Greetings, gentlemen. I wonder if you might educate me in an area in which
I lack experience. As far as the principles of RAID, I am OK. Indeed I
have configured and run several arrays -- both with cards and integrated --
over the last few years. But I have never had to deal with this issue, and
perhaps a little preparation will save me some trouble!

The SATA connectors on the mobo are labelled PRI and SEC, whereas in the SiI
BIOS, the drives are labelled 0 and 1. Would it be logical to assume that
the drive that is attached to the PRI port is drive 0?

And so...having initially installed everything onto the PRI drive [a few
months ago], I recently added a second drive, and I asked the BIOS to create
a mirrored array. After a helluva long time it finished, and it seems I
have a perfectly functioning RAID1.

If I unplugged one of the drives, no doubt the BIOS would inform me somehow
upon boot-up. But if I left it alone, I assume the comp would still boot up
normally. (Yes?)

Then, after running for a while, let's say I shut down and re-connect the
second drive. Again I presume I'd see some sort of message from the SiI
BIOS...but would it continue booting if I left it alone? Or would it
continue booting, but auto-rebuild during/after boot-up? Or would it halt
and wait for me to choose "rebuild"? And if it DID halt -- or I halted
it -- would the "rebuild" command do exactly that?

New scenario: what if one of the drives fails? Would the aforementioned
events unfold in basically the same manner? And how would I know which
drive had failed?

Assuming that I *was* able to determine which drive had failed, and assuming
that the comp would run normally for a week [whilst I procured a
replacement] would the BIOS rebuild the array once I added the new drive?

And what if I decide to revert to a single drive? Once I remove one of the
two drives, I assume that I would need to DELETE the array in the SiI
Can I do this without loss of data [on the remaining drive]?

Lastly, are the SATA drives hot-swappable? Admittedly I do not need to
avoid downtime like a large server might...but I'm curious. (I would never
dream of hot-swapping a PATA drive!)

Many thanks for your patience & expertise.
Ron
 
Ron said:
Greetings, gentlemen. I wonder if you might educate me in an area in which
I lack experience. As far as the principles of RAID, I am OK. Indeed I
have configured and run several arrays -- both with cards and integrated --
over the last few years. But I have never had to deal with this issue, and
perhaps a little preparation will save me some trouble!

The SATA connectors on the mobo are labelled PRI and SEC, whereas in the SiI
BIOS, the drives are labelled 0 and 1. Would it be logical to assume that
the drive that is attached to the PRI port is drive 0?

Seems logical.
If you're using a Silicon Image controller, they have a GUI utility that
provides a wealth of info and can be configured to send an email after an
event, e.g. a failure:
http://www.siimage.com/products/sataraid.asp
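The email-on-event idea behind that utility boils down to alerting only on a status *transition*, not on every poll. A minimal sketch of that logic; `monitor` and the status strings are hypothetical stand-ins for whatever the real SATARaid tool actually reads from the controller:

```python
# Sketch of the email-on-event idea: poll array status and alert only
# when the array *enters* a non-healthy state, not on every reading.
# The status strings here are illustrative, not the controller's real ones.

def needs_alert(previous: str, current: str) -> bool:
    """Alert only on the transition out of a healthy state."""
    return current != "healthy" and previous == "healthy"

def monitor(statuses, send_alert):
    """Feed successive status readings; call send_alert on degradation."""
    previous = "healthy"
    for current in statuses:
        if needs_alert(previous, current):
            send_alert(f"RAID status changed: {previous} -> {current}")
        previous = current

alerts = []
monitor(["healthy", "healthy", "degraded", "degraded", "rebuilding"],
        alerts.append)
print(alerts)  # exactly one alert, at the healthy -> degraded transition
```

In a real deployment `send_alert` would be an SMTP call; passing `alerts.append` here keeps the sketch self-contained.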


 
Ron said:
Greetings, gentlemen. I wonder if you might educate me in an area in which
I lack experience. As far as the principles of RAID, I am OK. Indeed I
have configured and run several arrays -- both with cards and integrated --
over the last few years. But I have never had to deal with this issue, and
perhaps a little preparation will save me some trouble!

The SATA connectors on the mobo are labelled PRI and SEC, whereas in the SiI
BIOS, the drives are labelled 0 and 1. Would it be logical to assume that
the drive that is attached to the PRI port is drive 0?

And so...having initially installed everything onto the PRI drive [a few
months ago], I recently added a second drive, and I asked the BIOS to create
a mirrored array. After a helluva long time it finished, and it seems I
have a perfectly functioning RAID1.

If I unplugged one of the drives, no doubt the BIOS would inform me somehow
upon boot-up. But if I left it alone, I assume the comp would still boot up
normally. (Yes?)

Then, after running for a while, let's say I shut down and re-connect the
second drive. Again I presume I'd see some sort of message from the SiI
BIOS...but would it continue booting if I left it alone? Or would it
continue booting, but auto-rebuild during/after boot-up? Or would it halt
and wait for me to choose "rebuild"? And if it DID halt -- or I halted
it -- would the "rebuild" command do exactly that?

New scenario: what if one of the drives fails? Would the aforementioned
events unfold in basically the same manner? And how would I know which
drive had failed?

Assuming that I *was* able to determine which drive had failed, and assuming
that the comp would run normally for a week [whilst I procured a
replacement] would the BIOS rebuild the array once I added the new drive?

Everything you say up to here makes sense and, from my experience, should work as
you describe. Otherwise, why bother with RAID if there's no recovery method?

I mostly use software RAID, and it has procedures for breaking the mirror
without destroying the data. Theoretically you could use either drive of a
mirror, as they are identical, right? A simple test would be to install
just enough OS to run the RAID and try everything. To test my system, I ran a
minimal OS and yanked a drive, then replaced it to make sure it worked. I
have also upgraded to larger drives by replacing the drives one at a time
(replace one, rebuild; replace two, rebuild). The RAID size didn't change,
but I added new partitions to the larger drives and gained capacity without
disturbing the original RAID.
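The drive-at-a-time replacement procedure above can be sketched as a toy model. This is purely illustrative, not any real RAID driver: each "drive" is just a dict of block number to data, and a rebuild is a straight copy from the surviving mirror half:

```python
# Toy model of replace-one/rebuild, replace-two/rebuild on a RAID1 mirror.
# Each "drive" is a dict of block number -> data; rebuild copies every
# block from the surviving drive onto the replacement.

def rebuild(survivor: dict, replacement: dict) -> None:
    """Copy the survivor's blocks onto the (blank) replacement drive."""
    replacement.clear()
    replacement.update(survivor)

# Build a two-drive mirror and write some blocks to both sides.
drive0, drive1 = {}, {}
for block, data in enumerate([b"boot", b"os", b"docs"]):
    drive0[block] = data
    drive1[block] = data

# Replace drive0 with a blank (larger) drive and rebuild from drive1...
drive0 = {}
rebuild(drive1, drive0)

# ...then replace drive1 and rebuild from the freshly rebuilt drive0.
drive1 = {}
rebuild(drive0, drive1)

assert drive0 == drive1 == {0: b"boot", 1: b"os", 2: b"docs"}
print("mirror intact after both replacements")
```

The point the model makes: as long as one side of the mirror survives each swap, the data never has to leave the array, which is exactly why the one-at-a-time order matters.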
And what if I decide to revert to a single drive? Once I remove one of the
two drives, I assume that I would need to DELETE the array in the SiI
Can I do this without loss of data [on the remaining drive]?

This I would test for sure. The Promise SATA controller uses a different
driver for a single drive than for the RAID configuration. If you run it in
RAID mode with one drive, it takes forever to boot.
Lastly, are the SATA drives hot-swappable? Admittedly I do not need to
avoid downtime like a large server might...but I'm curious. (I would never
dream of hot-swapping a PATA drive!)

No - at least my manual (K8V) says NO.

Finally, if you're running Linux or any Windows Server version, software RAID5
will give you better performance and capacity. You need a minimum of three
drives, though.
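The three-drive minimum follows from how RAID5 stores redundancy: each stripe holds data chunks plus one parity chunk (the XOR of the others), so any single lost chunk can be recomputed from the survivors. A minimal illustration in plain Python, with nothing controller-specific assumed:

```python
# Why RAID5 needs at least three drives: a stripe stores N-1 data chunks
# plus one parity chunk (XOR of the data), and any one missing chunk can
# be rebuilt by XOR-ing all the remaining chunks together.

def xor_chunks(chunks):
    """XOR a list of equal-length byte chunks together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data0 = b"AAAA"                        # chunk on drive 0
data1 = b"BBBB"                        # chunk on drive 1
parity = xor_chunks([data0, data1])    # parity chunk on drive 2

# Lose the drive holding data1; recover it from the two survivors.
recovered = xor_chunks([data0, parity])
assert recovered == data1
print("recovered:", recovered)  # recovered: b'BBBB'
```

With only two drives there would be one data chunk and one parity chunk per stripe, which is just mirroring; the capacity advantage only appears from three drives up.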
 
Ron said:
Greetings, gentlemen. I wonder if you might educate me in an area in
which I lack experience. As far as the principles of RAID, I am OK.
Indeed I have configured and run several arrays -- both with cards and
integrated -- over the last few years. But I have never had to deal with
this issue, and perhaps a little preparation will save me some trouble!

The SATA connectors on the mobo are labelled PRI and SEC, whereas in the
SiI BIOS, the drives are labelled 0 and 1. Would it be logical to assume
that the drive that is attached to the PRI port is drive 0?

Yeah, Primary, secondary; one, two; 0, 1, whatever! :-p
And so...having initially installed everything onto the PRI drive [a few
months ago], I recently added a second drive, and I asked the BIOS to
create a mirrored array. After a helluva long time it finished, and it
seems I have a perfectly functioning RAID1.
Good.

If I unplugged one of the drives, no doubt the BIOS would inform me
somehow upon boot-up. But if I left it alone, I assume the comp would
still boot up normally. (Yes?)

From a recent post, I believe that it just works: no error messages,
nothing. Though in that case the failure may have happened while Windows was
already running. There is a Windows utility that can send emails, pop up
alerts, etc.
Then, after running for a while, let's say I shut down and re-connect the
second drive. Again I presume I'd see some sort of message from the SiI
BIOS...but would it continue booting if I left it alone? Or would it
continue booting, but auto-rebuild during/after boot-up? Or would it halt
and wait for me to choose "rebuild"? And if it DID halt -- or I halted
it -- would the "rebuild" command do exactly that?

Couldn't tell you. I would suspect that the BIOS just carries on (why not?
that's kinda the point of RAID: it makes failures "transparent") and that the
utility lets you know.
New scenario: what if one of the drives fails? Would the aforementioned
events unfold in basically the same manner? And how would I know which
drive had failed?

I would think so.
Assuming that I *was* able to determine which drive had failed, and
assuming that the comp would run normally for a week [whilst I procured a
replacement] would the BIOS rebuild the array once I added the new drive?

I would suspect so.
And what if I decide to revert to a single drive? Once I remove one of
the two drives, I assume that I would need to DELETE the array in the SiI
BIOS? Can I do this without loss of data [on the remaining drive]?

I would hope so :-p
Lastly, are the SATA drives hot-swappable? Admittedly I do not need to
avoid downtime like a large server might...but I'm curious. (I would
never dream of hot-swapping a PATA drive!)

In theory? The spec says they can be. In practice, not so much. I'm not sure
whether the SiI BIOS supports it, and most likely your drives do not. If you
don't mind a bit of corruption (unless you're running a journaling
filesystem -- gotta love Linux), pulling a SATA power connector shouldn't
really do any damage, as the connectors are designed for it, but I'd be
reluctant unless the drive says it can handle it.
Many thanks for your patience & expertise.
Ron

Sorry I have few definite answers; I've never run RAID. Hopefully somebody
with hands-on experience will be able to fill in the blanks.

Ben
 
Many thanks for replies. I will need to experiment a little, I reckon!
Perhaps when I get my next box up & running.

Regards,
Ron
 