RayLopez99 said:
On Thursday, December 6, 2012 10:31:13 PM UTC+2, Paul wrote:
[good to know stuff]
Tx Paul and others.
Found a site on SSDs, run by a guy who has been writing about them on his website for over 10 years.
http://www.storagesearch.com
I reproduce one article below (he's got a ton) that I found interesting. The issue it addresses is 'write endurance'. For most apps you read more than you write, something like 5:1, but assume it's 1:1 and make worst-case assumptions: for a relatively modern "March 2007" SSD, the author's figure is "51 years" before a 'rogue program' causes a write-endurance failure (since I code, this is a concern for me). But since the author appears to be an SSD enthusiast, you have to figure some propaganda is possible. So instead of 2 M cycles I figure 100k cycles before failure/saturation, which cuts his 51 years by a factor of 20: 51 years / 20 = about 2.6 years. That is actually still good; remember, we're taking the worst case.
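Since I code anyway, here's a quick back-of-envelope in Python to check that arithmetic. The 64 GB capacity, 80 MB/s sustained write speed and 2 million cycle rating are taken from the article quoted below; the 100k figure is just my own pessimistic assumption, so treat the output as illustrative only.

    # Worst case: a rogue program writing flat out, perfect wear leveling.
    capacity_bytes  = 64e9          # 64 GB drive (article's figure)
    write_rate_bps  = 80e6          # 80 MB/s sustained writes (article's figure)
    secs_per_year   = 3600 * 24 * 365

    def life_in_years(cycles):
        # total bytes you can write, divided by the write rate
        return cycles * capacity_bytes / write_rate_bps / secs_per_year

    print(life_in_years(2000000))   # article's 2M cycle rating -> about 51 years
    print(life_in_years(100000))    # my pessimistic 100k cycles -> about 2.5 years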
The other question I have is whether my old mechanical drive, a Western Digital 500 GB drive from a few years ago, will work on the 3 Gb/s SATA 2.0 ports of my mobo, found here:
http://www.asus.com/Motherboards/Intel_Socket_1155/P8H67M_LE/#overview I think the answer is clearly yes (I would be shocked otherwise), which leaves the 2 x SATA 6 Gb/s connectors for the SATA III SSD. In fact, now that I write this, I could put both the old WD HD and the new SSD on the two SATA III 6 Gb/s connections, since I only have two drives = two connections.
So now the issue is how to set up the SSD. I am going to do a clean install of Windows 7 Professional, so that simplifies things. I assume there's a setting in BIOS to handle SSD? From this thread:
http://www.tomshardware.co.uk/forum/254452-14-does-require-special-drivers-bios-settings I see I should Google "change windows to ahci". Doing that, I get this Wikipedia link:
http://en.wikipedia.org/wiki/Advanced_Host_Controller_Interface and I get this MSFT link:
http://support.microsoft.com/kb/922976#method1
and this link:
http://www.ithinkdiff.com/how-to-enable-ahci-in-windows-7-rc-after-installation/
Question for the board: without reading these articles, because I kinda know what they are getting at, is it fair to say that for a clean installation of Windows 7 Pro, if I change the BIOS setting to "AHCI" before I do the clean install (I assume I can do this), then I'll not have any problems when I install Windows 7 for the first time? I think the answer is "YES".
Finally, the issue is: what do you do with your "C:" drive, do you keep it on the SSD or make it a traditional HD? I think the answer is the former. I think you store data like video and photos on your "D" (mechanical) HD, except for stuff that needs to load fast, which for me is my source code and libraries in Visual Studio; that goes on the SSD (C) drive.
One more noob question: my "D" drive will be a mechanical drive, a SATA-600 WDC WD5000AAKX-001CA0. As I said above, I assume I can put it on the SATA 3 Gb/s connections of the mobo (I think this is called SATA II) and the new SSD on the SATA III 6 Gb/s connections, or, since I have two SATA III connections, put both drives on those. Does this mean the old mechanical drive will somehow slow down the new SSD? I assume 'No', since each SATA port is an independent point-to-point link, not like the old parallel ATA drives where two devices shared a master/slave ribbon. Of course if data is on both the C: and D: drives then you'll get a bottleneck from the D: drive, but that's a different issue.
One more Noob question: what size drive for C:, the SSD? Newegg / Amazon sells 240 GB for USD$220, and 120 GB for half that. If I have a 500 GB HD for D:, I think 240 GB is big enough for "C", yes? 1:2 ratio seems about right in my mind.
Yet one more noob question: I've heard that if you get a crash, power surge, or failure on an SSD, the data is wiped out. But if I back up with Acronis to an external USB (traditional) HD, then of course I can reinstall Windows, reinstall Acronis, and restore my image files, yes? That should be an obvious yes, just double checking.
Thanks in advance for any answers.
RL
This article was written March, 2007--
http://www.storagesearch.com/ssdmyths-endurance.html
The nightmare scenario for your new server acceleration flash SSD is that a piece of buggy software written by the maths department in the university or the analytics people in your marketing department is launched on a Friday afternoon just before a holiday weekend - and behaves like a data recorder continuously writing at maximum speed to your disk - and goes unnoticed.
How long have you got before the disk is trashed?
For this illustrative calculation I'm going to pick the following parameters:-
Configuration:- a single flash SSD. (Using more disks in an array could increase the operating life.)
Write endurance rating:- 2 million cycles. (The typical range today for flash SSDs is from 1 to 5 million. The technology trend has been for this to get better.
When this article was published, in March 2007, many readers pointed out the apparent discrepancy between the endurance ratings quoted by most flash chipmakers and those quoted by high-reliability SSD makers - using the same chips.
In many emails I explained that such endurance ratings could be sample tested and batches selected or rejected from devices which were nominally guaranteed for only 100,000 cycles.
In such filtered batches typically 3% of blocks in a flash SSD might only last 100,000 cycles - but over 90% would last 1 million cycles. The difference was managed internally by the controller using a combination of over-provisioning and bad block management.
Even if you don't do incoming inspection and testing / rejection of flash chips, over 90% of memory in large arrays can have endurance which is 5x better than the minimum quoted figure.
Since publishing this article, many oems - including Micron - have found the market demand big enough to offer "high endurance" flash as standard products.)
AMD marketed "million cycle flash" as early as 1998.
Sustained write speed:- 80M bytes / sec (That's the fastest for a flash SSD available today and assumes that the data is being written in big DMA blocks.)
Capacity:- 64G bytes - that's about an entry-level size. (The bigger the capacity - the longer the operating life - in the write endurance context.)
Today single flash SSDs are available with 160G capacity in 2.5" form factor from Adtron and 155G in a 3.5" form factor from BiTMICRO Networks.
Looking ahead to Q108 - 2.5" SSDs will be available up to 412GB from BiTMICRO. And STEC will be shipping 512GB 3.5" SSDs.
To get that very high speed the process will have to write big blocks (which also simplifies the calculation).
We assume perfect wear leveling which means we need to fill the disk 2 million times to get to the write endurance limit.
2 million (write endurance) x 64G (capacity) divided by 80M bytes / sec gives the endurance limited life in seconds.
That's a meaningless number - which needs to be divided by seconds in an hour, hours in a day etc etc to give...
The end result is 51 years!
But you can see how, just a handful of years ago, when write endurance was 20x less than it is today and disk capacities were smaller, this could have been a real limitation.
For real-life applications refinements are needed to the model which take into account the ratio and interaction of write block size, cache operation and internal flash block size. I've assumed perfect cache operation - and sequential writes - because otherwise you don't get the maximum write speed. Conversely if you aren't writing at the maximum speed - then the disk will last longer. Other factors which would tend to make the disk last longer are that in most commercial server applications such as databases - the ratio of reads to writes is higher than 5 to 1. And as there is no wear-out or endurance limit on read operations - the implication is to increase the operating life by the read to write ratio.
As a sanity check - I found some data from Mtron (one of the few SSD oems who do quote endurance in a way that non specialists can understand). In the data sheet for their 32G product - which incidentally has 5 million cycles write endurance - they quote the write endurance for the disk as "greater than 85 years assuming 100G / day erase/write cycles" - which involves overwriting the disk 3 times a day.
How to interpret these numbers?
With current technologies write endurance is not a factor you should be worrying about when deploying flash SSDs for server acceleration applications - even in a university or other analytics intensive environment.
Your "WD Blue 500 GB SATA Hard Drives ( WD5000AAKX)" is 126MB/sec max sustained.
You can put that on a SATA II port, without really hindering its performance.
http://www.wdc.com/global/products/specs/?driveID=896&language=1
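Back-of-envelope, if you want numbers (a rough Python sketch;
the link rates are nominal and I'm only accounting for the
8b/10b encoding, not the rest of the protocol overhead):

    # Usable payload on a SATA link is roughly line rate * 8/10.
    sata2_bytes = 3.0e9 * 8 / 10 / 8   # ~300 MB/s on a 3 Gb/s port
    sata3_bytes = 6.0e9 * 8 / 10 / 8   # ~600 MB/s on a 6 Gb/s port
    wd_blue     = 126e6                # WD5000AAKX max sustained, per the spec sheet

    print(wd_blue / sata2_bytes)       # ~0.42 of a SATA II port
    print(wd_blue / sata3_bytes)       # ~0.21 of a SATA III port

So the mechanical drive doesn't care which port it sits on;
only the SSD has anything to gain from the 6 Gb/s connectors.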
*******
Your main concern when picking a disk controller operating mode
might be which modes support TRIM.
Otherwise, AHCI might help with server-style workloads, and
then mainly on servers using rotating hard drives.
The thing is, since a SSD has virtually zero seek time, and
no "head movement", there's no need to reorder operations to
optimize head movement, and complete the operations out of order.
I don't see how AHCI is a win otherwise, for an SSD. Just as
defrag of an SSD isn't necessary, even if the "colored graph"
in your third-party defragger says otherwise. It isn't necessary,
because the seek time is so low. You can read 8000 fragments just
as fast as reading one contiguous file (more or less). In terms
of user perception, I doubt you'd notice the difference.
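To put rough numbers on that (my assumptions: ~0.1 ms per
random access on an SSD, ~12 ms average seek plus rotational
latency on a rotating drive):

    # Extra time spent hopping between 8000 fragments of a file.
    fragments   = 8000
    ssd_access  = 0.0001   # seconds per random access (assumption)
    hdd_seek    = 0.012    # seconds per seek on a mechanical drive (assumption)

    print(fragments * ssd_access)   # ~0.8 s of overhead on the SSD
    print(fragments * hdd_seek)     # ~96 s of overhead on the rotating drive

Under a second of overhead is why the colored graph doesn't
matter on an SSD, and a minute and a half is why it does on
a mechanical drive.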
http://en.wikipedia.org/wiki/TRIM
"Windows 7 only supports TRIM for ordinary (AHCI) drives
and does not support this command for PCI-Express SSDs
that are different type of device, even if the device
itself would accept the command"
I'm not going to stick my neck out and say that's the only
way to get TRIM. But it might be a safe assumption. The
Windows built-in AHCI driver is MSAHCI, but it's also possible
to install a custom AHCI driver (like the one from Intel) and
perhaps get TRIM through that instead. I haven't memorized all
the details.
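If you want to confirm TRIM is actually on after the install,
Windows 7 exposes the setting through fsutil. You can just run
the command in an elevated prompt; the Python wrapper below is
only for convenience:

    # Query Windows 7's "delete notify" (TRIM) setting.
    # DisableDeleteNotify = 0 means TRIM commands are being sent,
    # DisableDeleteNotify = 1 means they are not.
    import subprocess

    out = subprocess.check_output(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"])
    print(out.decode(errors="replace"))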
Changing disk operating modes in Windows 7, is a matter of
"re-arming" the disk discovery process at startup, by modifying
some registry settings. Otherwise, once the OS gets "comfortable"
with a certain discovered driver, it stops trying all of them
at startup. And that's where the "re-arm" comes in.
The situation gets slightly more complicated, if you install
your own copy of the Intel driver at some point (IASTOR versus
IASTORV versus RST ???). There is undoubtedly information
out there for you to look up on the topic.
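For what it's worth, the KB922976 procedure boils down to
setting the Start value of the AHCI driver's service to 0
(boot start) before you flip the BIOS. A sketch of the idea,
assuming the in-box MSAHCI driver and Intel's iaStorV; back
up the registry, run it elevated, and check the KB article
rather than trusting my memory:

    # Re-arm AHCI driver discovery per KB922976, then reboot
    # and switch the BIOS from IDE to AHCI.
    import winreg   # _winreg on Python 2

    SERVICES = r"SYSTEM\CurrentControlSet\Services"

    for svc in ("msahci", "iaStorV"):
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                SERVICES + "\\" + svc,
                                0, winreg.KEY_SET_VALUE) as key:
                winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 0)
            print(svc, "set to boot start")
        except OSError:
            print(svc, "not present, skipped")

None of that is needed for a clean install with the BIOS
already set to AHCI; Windows picks the right driver the
first time.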
*******
If money is no object, maybe the fluff in that storage article
can be achieved. But you and I will be buying commodity disks.
For decent size (you quote "240 GB for USD$220, and 120 GB for half that"),
those will be MLC flash drives. With write endurance in the 3500 to 5000
cycle range. Intel released some small 20GB drives in SLC. Those
might have a higher write endurance, but you'd need a crapload
of those in some RAID mode, to make an OS drive.
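Re-running the article's arithmetic with commodity MLC
numbers shows how much the endurance rating matters. These
are rough assumptions (240 GB drive, 3000 cycles, ~250 MB/s
sustained sequential writes for a SATA III SSD, ~20 GB/day
of ordinary desktop writing):

    # The article's "rogue program" worst case, MLC edition.
    capacity  = 240e9    # bytes
    cycles    = 3000     # MLC write endurance (3000..5000 range)
    wr_speed  = 250e6    # bytes/sec, flat-out sequential writes
    daily_use = 20e9     # bytes/day, ordinary desktop load

    total_bytes = capacity * cycles           # writable before wear-out

    print(total_bytes / wr_speed / 86400)     # ~33 days, rogue program
    print(total_bytes / daily_use / 365)      # ~99 years at 20 GB/day

So the "51 years" headroom was a property of the endurance
rating, not of flash in general. A runaway writer can chew
through a commodity MLC drive in weeks, while sane desktop
use still outlives the warranty.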
If the price of the whimsical SSD is high enough, people can
make DRAM based storage devices, with no wearout mechanism
at all for an equivalent price. So there's a cap on how much
they can charge for a whimsical SSD, how much overprovisioning
they can do and so on. Material cost still matters.
You might not see specialized enterprise drives at your local
computer shop. You might not even see them advertised on the web.
And even if you did, they'd be thousands of dollars if they
had whimsical write endurance. Nobody in their right mind sorts
through flash chips and "picks out good ones just for your drive".
Sorting is done at the silicon fab, and perhaps they're graded there
to have fewer initial defects, but the physics of the devices don't
change. They all come off the same wafer. If you read the research
papers, about what affects the ability to write to flash, you'd get
an entirely different perspective on pushing the claimed write
endurance.
And as each generation doubles or quadruples density, the
write endurance drops. The ECC code becomes slightly longer,
to cover for the inevitable errors in each read. The code allows
a trivial number of errors to be corrected, in the same sense
that the codes used on CDs, allow a scratched or dust coated
CD to continue to be readable. The ECC code is picked in each
generation, with an eye to the error characteristic. But personally,
I'm not all that enthusiastic about the direction all of this
is taking. I'd rather have the capacity of drives remain fixed,
and generational changes improve the quality of drives, rather
than have a "4TB MLC SSD" with the reliability of toilet paper.
If I need crappy 4TB drives, I can get that in a mechanical
drive.
http://en.wikipedia.org/wiki/Multi-level_cell
You know, as soon as TLC devices become available, they'll be
crammed into commodity drives, and the write endurance will drop
some more.
http://www.anandtech.com/show/5067/understanding-tlc-nand/2
I don't see a problem with buying SSDs, as long as you have
the right mindset. Take my handling of USB flash sticks. I
don't store the only copy of data on them. I assume I'll plug
one in some day, and it'll be completely dead. My copy on the
hard drive, will save me. While SSDs talk a great show,
you can still find people who wake up in the morning,
switch on, and the BIOS can't find the drive. For those
moments, you have backups (and warranty support for your
SSD).
Ask "John Doe" poster, what he thinks of SSDs so far.
Before buying, read the reviews carefully. If an SSD
has obvious controller or firmware design flaws, it
might show up in the Newegg or Amazon customer reviews.
And then you get some pre-warning, before you become
a victim.
Paul