Serial ATA: Is it 1.5Gbps or 150MBps?

  • Thread starter: Anonymous Joe

Anonymous Joe

Built a new PC, used a single 160GB SATA drive. Noticed the drive said
"Serial ATA" with "1.5Gbps" under it. The software provided with the
motherboard confirmed this.

Why have I heard people quote it as 150MB/sec? If it is 1.5Gbps, that would be
gigabits, so it is either 1500Mb/sec or 1536Mb/sec (depending on how they
define a gigabit). Just converting that to megabytes, you quickly see it is
either 187MB/sec or 192MB/sec.... That's far from 150MB/sec. It seems the only
way to get 1.5Gbps to come out to 150MB/sec is if a gigabit = 1000 megabits and
10 bits = 1 byte. That is so wrong, I cannot imagine any forum would allow such
measurements!
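
Just to make the two possible gigabit definitions explicit, here is that
conversion as a quick Python sketch (nothing SATA-specific in it, just the
unit arithmetic):

    # 1.5 Gbps under the two common readings of "giga"
    rate_decimal = 1.5 * 1000   # 1500 Mb/s if 1 gigabit = 1000 megabits
    rate_binary  = 1.5 * 1024   # 1536 Mb/s if 1 gigabit = 1024 megabits

    # naive conversion at 8 bits per byte
    print(rate_decimal / 8)     # 187.5 MB/s
    print(rate_binary / 8)      # 192.0 MB/s

    # the only way to land on 150 MB/s is to divide 1500 Mb/s by 10
    print(rate_decimal / 10)    # 150.0 MB/s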
 
Anonymous said:
Built a new PC, used a single 160GB SATA drive. Noticed the drive said
"Serial ATA" with "1.5Gbps" under it. The software provided with the
motherboard confirmed this.

SATA is currently 150MB/sec. 1.5Gb/sec is just wrong.


-WD
 
Anonymous said:
Built a new PC, used a single 160GB SATA drive. Noticed the drive said
"Serial ATA" with "1.5Gbps" under it. The software provided with the
motherboard confirmed this.

Why have I heard people quote it as 150MB/sec? If it is 1.5Gbps, that would be
gigabits, so it is either 1500Mb/sec or 1536Mb/sec (depending on how they
define a gigabit). Just converting that to megabytes, you quickly see it is
either 187MB/sec or 192MB/sec.... That's far from 150MB/sec. It seems the only
way to get 1.5Gbps to come out to 150MB/sec is if a gigabit = 1000 megabits and
10 bits = 1 byte. That is so wrong, I cannot imagine any forum would allow such
measurements!

Well, typically when converting from bits to bytes on
serial transmissions, you allow a certain number of bits
as "overhead" anyway. (e.g. a 56kbps modem usually
delivered 5.6KB/s transfer rates, roughly 10 bits per
byte)

Also, the product is branded as SATA/150, not
SATA/1.5...
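
For what it's worth, the modem example is just that divide-by-ten in action; a
tiny Python sketch (the 10-bits-per-byte figure is the usual async rule of
thumb, not anything measured):

    modem_bps = 56_000       # 56 kbps line rate
    print(modem_bps / 10)    # ~5600 bytes/s, i.e. roughly 5.6 KB/s
    print(modem_bps / 8)     # 7000 bytes/s, the naive figure that ignores framing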
 
Toshi1873 said:
Well, typically when converting from bits to bytes on
serial transmissions, you allow a certain number of bits
as "overhead" anyway. (e.g. a 56kbps modem usually
delivered 5.6KB/s transfer rates, roughly 10 bits per byte)

That's due to a different effect: the start and stop bits added by async framing.
 
SATA is currently 150MB/sec. 1.5Gb/sec is just wrong.


-WD


1.5Gb/sec works out to about 187MB/sec if you just divide by eight. As far as
marketing performance specs go, the numbers are close. Not that you'll see
anything like this speed in a real system for a long time. I just put a Maxtor
160GB SATA drive on an Asus A7N8X mobo and HDtest showed 40MB/sec compared to
30MB/sec for a good WD PATA drive. (Numbers from memory, don't quote me.)
 
Toshi1873 said:
Well, typically when converting from bits to bytes on
serial transmissions, you allow a certain number of bits
as "overhead" anyway. (e.g. a 56kbps modem usually
delivered 5.6KB/s transfer rates, roughly 10 bits per
byte)

Also, the product is branded as SATA/150, not
SATA/1.5...

Ah, ok, that makes sense. I do have to say, though, that the 160GB Maxtor SATA
I used did say "Serial ATA" and "1.5Gbps" under it. While that is not
explicitly SATA/150 or SATA/1.5 either way, it sure leans toward 1.5Gbps....
 
Why in the world would you divide by 10?

There are EIGHT, count'em EIGHT (8) BITS in a BYTE.

You divide by 8.

Tom
 
I'm not trying to start a fight, but there ARE 8 bits in a byte. Surely
you're joking that you think it is 10.

Tom
 
Most high-speed serial buses use this kind of 8b/10b block coding: 10 bit times
on the wire for every 8 data bits. It is needed to keep the line DC-balanced
and to give the receiver enough transitions to recover its clock. I think FDDI
was the first to do something like it over 10 years ago (with the similar 4b/5b
code), then Fast Ethernet.
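
A small sketch of what that coding overhead does to the payload rate, assuming
the usual efficiencies (4/5 for 4b/5b, 8/10 for 8b/10b); the line-rate figures
are the standard ones for Fast Ethernet and SATA:

    # payload rate = line rate x coding efficiency
    def payload_mbps(line_mbps, data_bits, code_bits):
        return line_mbps * data_bits / code_bits

    print(payload_mbps(125, 4, 5))    # Fast Ethernet: 125 Mb/s line, 4b/5b -> 100.0 Mb/s
    print(payload_mbps(1500, 8, 10))  # SATA: 1500 Mb/s line, 8b/10b -> 1200.0 Mb/s (150 MB/s)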
 
Tom said:
I'm not trying to start a fight, but there ARE 8 bits in a byte. Surely
you're joking that you think it is 10.

Well, actually there are however many bits in a byte the machine designer
chose to put there. All the currently popular machines have 8-bit bytes so
8 bits has come to be assumed but there is nothing sacred about that
number.

When talking about data communications it's important to consider exactly
what you mean by "throughput". If you count every bit that goes down the
wire you get one number. If you discount the bits that carry the overhead
of the data-link protocol then you get another number. If you discount the
bits that carry the overhead of the transport protocol you get a third, and
so on. In data communications a byte is often assumed to be ten bits to
allow for protocol overhead and get a more realistic view of actual
throughput.
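
A rough sketch of that layering in Python; the line rate and 8b/10b figure
match SATA, but the framing and transport percentages below are made-up
illustrative numbers, not real protocol overheads:

    line_rate_mbps  = 1500.0                  # raw bits on the wire
    after_encoding  = line_rate_mbps * 0.8    # 8b/10b line coding
    after_framing   = after_encoding * 0.95   # hypothetical data-link framing overhead
    after_transport = after_framing * 0.97    # hypothetical transport-protocol overhead

    for label, rate in [("raw", line_rate_mbps), ("after encoding", after_encoding),
                        ("after framing", after_framing), ("after transport", after_transport)]:
        print(f"{label}: {rate:.0f} Mb/s = {rate / 8:.1f} MB/s")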
 
A quick visit to www.serialata.org brings the following info:
1500MHz embedded clock
x 1 bit per clock
x 80% for 8b10b encoding
/ 8 bits per byte
= 150 Mbytes/sec
The key words are MHz, embedded clock, and 8b10b encoding. The data is
encoded/scrambled so the receiver can regenerate the clock from the bit stream
and to minimize RF emissions. No wonder you get less data throughput than the
MHz number would suggest.
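
The same calculation, written out as a short Python check (the numbers are
exactly the ones quoted above):

    clock_mhz = 1500             # embedded clock
    bits_per_clock = 1
    encoding_efficiency = 0.8    # 8b10b: 8 data bits per 10 line bits
    bits_per_byte = 8

    mbytes_per_sec = clock_mhz * bits_per_clock * encoding_efficiency / bits_per_byte
    print(mbytes_per_sec)        # 150.0 Mbytes/sec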
 
Tom Scales said:
Why in the world would you divide by 10?

There are EIGHT, count'em EIGHT (8) BITS in a BYTE.

Yup. Pity that it isn't bytes that get sent over a serial interface, but bits.
With modems you used to call them "baud". Depending on the (length of) start
and (number of) stop bits, you divide by between 9 and 11.5 to get the number
of bytes.
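
Those divisors just fall out of the framing; a small sketch using a few classic
UART settings as examples (the 7-bit case gives the low end of that range, per
character rather than per 8-bit byte):

    # bits on the wire per character for async framing:
    # 1 start bit + data bits + optional parity bit + stop bits
    def bits_per_char(data_bits, parity_bits, stop_bits):
        return 1 + data_bits + parity_bits + stop_bits

    print(bits_per_char(8, 0, 1))    # 8N1   -> 10
    print(bits_per_char(8, 0, 1.5))  # 8N1.5 -> 10.5
    print(bits_per_char(8, 1, 1.5))  # 8E1.5 -> 11.5
    print(bits_per_char(7, 0, 1))    # 7N1   -> 9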
 
J. Clarke said:
Well, actually there are however many bits in a byte the machine designer
chose to put there. All the currently popular machines have 8-bit bytes so
8 bits has come to be assumed but there is nothing sacred about that number.

That may be so for words, but not bytes.
The PDP* had 12-bit words but a byte was still 8 bits, afaik.
 
That may be so for words, but not bytes.
The PDP* had 12-bit words but a byte was still 8 bits, afaik.

From the Jargon File (aka The New Hacker's Dictionary)

byte /bi:t/ n.

[techspeak] A unit of memory or data equal to the amount used to
represent one character; on modern architectures this is usually 8
bits, but may be 9 on 36-bit machines. Some older architectures used
`byte' for quantities of 6 or 7 bits, and the PDP-10 supported `bytes'
that were actually bitfields of 1 to 36 bits! These usages are now
obsolete, and even 9-bit bytes have become rare in the general trend
toward power-of-2 word sizes.

Historical note: The term was coined by Werner Buchholz in 1956 during
the early design phase for the IBM Stretch computer; originally it was
described as 1 to 6 bits (typical I/O equipment of the period used
6-bit chunks of information). The move to an 8-bit byte happened in
late 1956, and this size was later adopted and promulgated as a
standard by the System/360. The word was coined by mutating the word
`bite' so it would not be accidentally misspelled as bit. See also
nybble.


Does this put an end to this thread, please?
 