So, with 100 drives the probability is 100%? It just doesn't work that way,
sorry.
The discussion is about MTBFs. If a single drive has an MTBF of 10000
hours and you have 100 drives in the system then the MTBF of the system
would be 100 hours. That doesn't mean that the system will fail once every
100 hours like clockwork; it just means that on average you would see a
drive failure every 100 hours. The consequences of a drive failure depend
on the RAID configuration.
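A minimal sketch of that scaling in Python (the 10000-hour MTBF and the
assumption that failures are independent with a constant per-hour rate of
1/MTBF are just the made-up numbers and simplifications from above):

    drive_mtbf = 10000.0          # hours, made-up figure
    n_drives = 100

    # With N identical, independent drives the combined failure rate is N
    # times higher, so the system MTBF shrinks by a factor of N.
    system_mtbf = drive_mtbf / n_drives
    print(system_mtbf)            # 100.0 -> on average one failure per 100 hours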
In a JBOD system with two drives, each drive has a 1/10000 chance of
failing in any one hour (using a made-up MTBF number of 10000 hours), and a
single drive failure costs you 50% of your data. The probability of a
failure that costs you 100% of your data, i.e. both drives failing in the
same hour, is 1/(10000 * 10000). In a RAID0 system the failure of either
drive costs you 100% of your data, so the probability of a 100% data loss
in any given hour is roughly 2/10000.
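The same per-hour numbers in code (still assuming independent failures and
a per-hour failure probability of 1/MTBF, which is a simplification):

    p = 1.0 / 10000               # per-hour failure probability of one drive

    # Two-drive JBOD: one particular drive failing loses half the data,
    # both failing in the same hour loses all of it.
    p_jbod_half = p               # 1/10000
    p_jbod_all  = p * p           # 1/100,000,000

    # Two-drive RAID0: either drive failing loses everything.
    p_raid0_all = 1 - (1 - p) ** 2    # ~2/10000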
In a RAID1 system the loss of a single drive doesn't cost you any data; in
order to lose 100% of your data you must have a double failure. As stated
earlier, the probability of both drives failing in any given hour is
1/(10000 * 10000). However, once you've experienced a single drive failure
you are left vulnerable, because the remaining good drive has a 1/10000
probability of failing. So in figuring out what your chances are of losing
your data in a RAID1 system, you not only have to take into account the
Mean Time Between Failures but also the Mean Time To Repair. If you
immediately shut the system down and replace the bad disk then your
probability of losing all your data on a single two-drive RAID1 system in
any given hour is close to 1/(10000 * 10000). However, if you keep the
system running and it takes on average 100 hours before you replace the
bad drive, then the probability of losing all your data in any given hour
is roughly 100/(10000 * 10000), or 1/1000000, which works out to a mean
time between data losses of about 1000000 hours.
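And the RAID1 repair-window arithmetic as a sketch (following the post's
own figures: 10000-hour MTBF, 100-hour MTTR, independent failures):

    mtbf = 10000.0                # hours per drive, made up
    mttr = 100.0                  # hours the degraded array keeps running
    p = 1.0 / mtbf                # per-hour failure probability of one drive

    # Replace the failed drive immediately: data is lost only if both
    # drives die in the same hour.
    p_loss_immediate = p * p              # 1e-08 per hour

    # Leave the degraded array running for ~100 hours on average: the
    # surviving drive has a mttr/mtbf chance of dying before the repair.
    p_loss_delayed = p * (mttr / mtbf)    # 1e-06 per hour
    print(1 / p_loss_delayed)             # ~1,000,000 hours between data losses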