Alojzy said:
Paul, I don't quite see your point.
In this benchmark
http://tinyurl.com/l3po79 the SATA->IDE adapter was
tested while connecting an A-Data SATA SSD:
http://www.adata.com.tw/en/product_show.php?ProductNo=ASE1SAMPL
which surely should not introduce any bottleneck, since these disks are
really very fast...
BTW, why is it that for e.g. Transcend SSDs, with two very similar
(analogous) models whose only difference is that one is SATA and the
other PATA, there is such a huge difference in transfer rate, one not
justified merely by the IDE/PATA interface limitations?
Please, consider these 2 specs:
1.) IDE
http://tinyurl.com/ks4n9t
Performance:
  SLC - Read up to 74MB/s, Write up to 62MB/s (8GB to 32GB)
        Read up to 80MB/s, Write up to 70MB/s (64GB)
  MLC - Read up to 74MB/s, Write up to 45MB/s (32GB, 64GB)
        Read up to 68MB/s, Write up to 46MB/s (128GB)
2.) SATA
http://tinyurl.com/loawg2
Performance:
  SLC - Read up to 150MB/s, Write up to 90MB/s (8GB)
        Read up to 150MB/s, Write up to 100MB/s (16GB)
        Read up to 150MB/s, Write up to 120MB/s (32GB)
        Read up to 170MB/s, Write up to 140MB/s (64GB)
  MLC - Read up to 150MB/s, Write up to 50MB/s (16GB)
        Read up to 150MB/s, Write up to 90MB/s (32GB to 192GB)
It's the same drive family, but the SATA version is at least 2x faster
than the IDE one - why? The PATA version should be capped only by the
100MB/s PATA interface limit, not at ~70MB/s like the above...
OK, I'll explain the difference between the "burst" measurement in HDTune,
versus a "sustained" measurement such as your CrystalDiskMark 2.2 result here.
http://forum.cdrinfo.pl/attachments...iz-133mb-s-od-gory-bench_sata_pata_bridge.jpg
                                  cable         (parts of hard drive)
Southbridge_UDMA6_133MB/sec -----------------> cache_RAM ---> platters

Transfer rate "burst"       |<------- 119MB/sec ------>|
Transfer rate "sustained"   |<--------------- 45MB/sec -------------->|
When you do a "burst" test, the transfer is very small, so small, in fact,
that it fits easily in the cache_RAM chip on the hard drive controller
board. By doing a burst transfer test, you're presumably getting as close
to the cable limits as possible. The cache_RAM should at least be fast
enough to devour the data at the cable rate; otherwise, there isn't much
point installing a cache RAM at all.
If a longer transfer is done, that is a "sustained" transfer. At that
point, the cache_RAM chip is full, and the transfer bogs down. The
transfer is then limited by the rate at which the heads can move data to
and from the platters. In the case of the Maxtor drive I tested, that
rate is about 45MB/sec.
By using a burst transfer test, I was able to observe 119MB/sec being
transferred. If the storage device were designed such that the
head-to-platter interface was not a limitation, then perhaps I would also
see 119MB/sec on a sustained transfer. The CrystalDiskMark test appears
to be doing a sustained transfer test, and as such, doesn't help me
predict what would happen if the storage device had no internal limits.
HDTune or some other "burst" test is what you want for determining how
fast it could have gone with a better controller design.
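To put some numbers on that, here is a rough sketch in Python, using my
119MB/sec and 45MB/sec figures; the 8MB cache size is only an assumption
for illustration, not a measured value.

# Toy model of "burst" vs "sustained" transfer rate.
# Assumed numbers: 119 MB/s cable/cache rate, 45 MB/s head-to-platter rate,
# and a hypothetical 8 MB cache_RAM (real cache sizes vary per drive).

CABLE_RATE = 119.0   # MB/s, Southbridge <-> cache_RAM
PLATTER_RATE = 45.0  # MB/s, cache_RAM <-> platters
CACHE_MB = 8.0       # MB of on-drive cache (assumption)

def measured_rate(transfer_mb):
    """Average MB/s seen by the host for a transfer of the given size."""
    if transfer_mb <= CACHE_MB:
        # "Burst": everything fits in the cache, so the host only
        # sees the cable/cache speed.
        return CABLE_RATE
    # "Sustained": the first CACHE_MB fills at cable speed, the rest
    # drains at the platter rate, which dominates for large transfers.
    time_s = CACHE_MB / CABLE_RATE + (transfer_mb - CACHE_MB) / PLATTER_RATE
    return transfer_mb / time_s

for size in (1, 8, 64, 1024):
    print(f"{size:5d} MB transfer -> {measured_rate(size):6.1f} MB/s")

With those assumptions, small transfers report roughly the cable rate and
large transfers converge on the platter rate, which is exactly the gap
between a burst figure and a sustained figure.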
*******
When you engineer busses, there are some things to consider. I first
saw some of these effects more than 20 years ago.
If I take any single data bus standard, it may successfully transfer
at a decent rate:
        bus
host --------- device        Transfer rate 40MB/sec
Now, if I concatenate two busses (with a bridge chip between them if
necessary), the transfer rate does not have to match the characteristics
of either bus. We were shocked when we first saw this, but it didn't
take long to figure out why. (Back in those days, we didn't have a lot
of fancy simulation tools, and paper analysis methods aren't always
that good at detecting these problems.)
         bus                 bus
host ------------ bridge ------------ device        Transfer rate 1MB/sec
The reason this kind of thing happens is that pipelining is not being
used during transfers. Some aspect of the transfer is serialized. For
each data item sent in the forward direction, the host waits for an
acknowledgment before handling the next one.
I used to study things like this in the lab. Some of the first computers
we built highlighted these problems, which is how I learned about them.
In more recent years, some of the Tundra Semiconductor documentation for
some of their products went into detail about the same kinds of effects -
concatenated busses that did not perform anywhere near their limits. Few
companies provide the level of detail offered in the Tundra documentation.
When busses are designed, it is important that the protocol does not
limit the ability to send data. That involves more than just the
clock rate and width of the bus. It also involves the protocol choices:
when acknowledgments are sent, and so on. The same principles used in
TCP networking apply, but at the bus level.
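As a rough illustration of the effect (all the numbers here are invented,
and real bus protocols are more involved), here is a Python sketch
comparing a stop-and-wait transfer across two bridged busses with a fully
pipelined one:

# Toy comparison: stop-and-wait vs pipelined transfers over two
# concatenated busses joined by a bridge. All numbers are invented.

BLOCK_BYTES = 512      # bytes per data item
WIRE_RATE = 40e6       # bytes/s raw rate of each bus
HOP_LATENCY = 100e-6   # s per hop, including bridge turnaround (pessimistic)
HOPS = 2               # two busses in series

def stop_and_wait_rate():
    """Host sends one block, then waits for the acknowledgment to come
    all the way back before sending the next block."""
    send_time = BLOCK_BYTES / WIRE_RATE + HOPS * HOP_LATENCY
    ack_time = HOPS * HOP_LATENCY   # the ack is tiny, so latency dominates
    return BLOCK_BYTES / (send_time + ack_time)

def pipelined_rate():
    """Host keeps blocks in flight; once the pipe is full, the wire rate
    is the only limit and the round-trip latency is hidden."""
    return WIRE_RATE

print(f"stop-and-wait: {stop_and_wait_rate() / 1e6:5.1f} MB/s")
print(f"pipelined:     {pipelined_rate() / 1e6:5.1f} MB/s")

With these deliberately pessimistic numbers, the stop-and-wait case
collapses to roughly 1MB/sec, while the pipelined case stays at the wire
rate - the same kind of collapse as in the example above.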
I don't know all the details of SATA to IDE or IDE to SATA bridging,
and whether there are any rate-limiting steps in the protocol.
IDE, on the one hand, allows streaming of data with no headers involved.
SATA is a packet protocol, so presumably involves a data packet and
an acknowledgment packet. Can the protocols be pipelined? Can you
start sending a packet if the IDE interface hasn't finished streaming
it yet? Are there any steps which can serialize the transfer process and
slow it down? I don't know. I haven't designed one.
If you were in a well-equipped lab, you might use a digital storage
scope and a logic analyzer to determine when things are sent and what
responses come back. You could get trace information as to exactly
what happened.
In design, people also use simulation to predict how the product will
respond. In some cases, manufacturers are honest about these results and
give potential customers some warnings about limitations. For example,
there is a Silicon Image part with an internal 110MB/sec transfer rate
limitation, due to the internal processor used to handle operations. But
that kind of honesty is rarely displayed by companies; when there is an
issue with the way something works, you simply *avoid* stating the level
of performance. That is how dishonesty in datasheets happens. The
marketing people will politely ask you to remove those kinds of details.
Something else to remember about SSD flash designs is that they
currently *do not* use caching on the interface. When data comes in, it
is processed immediately. That was stated in an Anandtech article about
SSDs. When a RAM chip is seen inside an SSD design, it is being used for
other purposes, such as handling wear leveling and tracking which blocks
are free, and so on. So caching is currently not part of the design. The
implication is that the device continues to accept data until the flash
controller needs to write it out. At that point, there could be a delay.
So the SSD may not be as extravagantly designed as the controller on a
hard drive is, and as such, the response of the SSD is not as simple as
the hard drive case. The SSD can stop because of internal activity, so
you may not see as smooth a transfer pattern as you might see on a
hard drive.
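As a very crude picture of that stop-and-go behaviour, here is a Python
sketch; the interface rate, buffer size, and flush time are all invented
numbers, not taken from any real SSD.

# Toy model of an SSD that accepts data at interface speed until its
# small internal buffer is full, then stalls while the flash controller
# programs the data out. All numbers are invented.

INTERFACE_RATE = 150.0   # MB/s the host can push over SATA
BUFFER_MB = 4.0          # MB the controller accepts before it must flush
FLUSH_TIME_S = 0.05      # s the controller spends programming flash

def average_rate(transfer_mb):
    """Average MB/s over a whole transfer, counting the flush stalls."""
    flushes = int(transfer_mb // BUFFER_MB)
    time_s = transfer_mb / INTERFACE_RATE + flushes * FLUSH_TIME_S
    return transfer_mb / time_s

for size in (2, 16, 256):
    print(f"{size:4d} MB -> {average_rate(size):6.1f} MB/s average")

Once the stalls kick in, the average sags well below the interface rate,
and the instantaneous pattern looks jagged rather than smooth.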
The fact that HDTach and HDTune do not currently report correct results
for SSDs means that it is pretty hard to decide anything about SSDs. And
that means you're reduced to sustained transfer tests, like the kind you
can run with a stopwatch (sketched below). But sustained tests don't tell
you anything about how much room there is for improvement.
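A stopwatch-style sustained write test can be as simple as the following
Python sketch; the file path and size are placeholders, and the fsync
matters so the OS cache doesn't flatter the result.

# A crude "stopwatch" sustained-write test: write a large file and time it.
# TEST_FILE and SIZE_MB are placeholders; point TEST_FILE at the drive
# under test, and make SIZE_MB much larger than any cache in the path.

import os
import time

TEST_FILE = "testfile.bin"     # placeholder path on the drive under test
SIZE_MB = 1024                 # total amount to write
CHUNK = b"\0" * (1024 * 1024)  # 1 MB of data per write

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())       # force the data out of the OS cache
elapsed = time.time() - start

print(f"wrote {SIZE_MB} MB in {elapsed:.1f} s -> {SIZE_MB / elapsed:.1f} MB/s sustained")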
Paul