SATA performance vs standard IDE for small database server

Bib

I'm looking at building a small database box for departmental use and
am curious if anyone has looked into if SATA would work better than
standard IDE for an application with a good number of simultaneous
read/write requests. There's a box doing something similar right now
in the office with regular IDE and I am very dissatisfied with its
performance. At certain times of the day it'll seem to freeze (even
in a shell!) for 2-3 seconds while it processes a lot of drive I/O.
Is this partially alleviated with SATA, or not really?

SCSI would be the way to go, but it's not in the budget for the spec,
sadly.

Thanks
Barclay McInnes
 
Bib said:
I'm looking at building a small database box for departmental use and
am curious if anyone has looked into if SATA would work better than
standard IDE for an application with a good number of simultaneous
read/write requests.

SATA vs std IDE isn't the issue. The speed of the HD is the issue. The
fastest ATA HD is the WDC Raptor and it happens to be SATA. Use Raptors.
There's a box doing something similar right now
in the office with regular IDE and I am very dissatisfied with its
performance. At certain times of the day it'll seem to freeze (even
in a shell!) for 2-3 seconds while it processes a lot of drive I/O.
Is this partially alleviated with SATA, or not really?

Use Raptors.
 
Previously Bib said:
I'm looking at building a small database box for departmental use and
am curious if anyone has looked into if SATA would work better than
standard IDE for an application with a good number of simultaneous
read/write requests. There's a box doing something similar right now
in the office with regular IDE and I am very dissatisfied with its
performance. At certain times of the day it'll seem to freeze (even
in a shell!) for 2-3 seconds while it processes a lot of drive I/O.
Is this partially alleviated with SATA, or not really?

Not with the current generation. Command queuing is only in the next
SATA standard. It is also doubtful whether it will reach SCSI's
performance level, IMO. Today SATA gives you about the performance of
IDE, with some problems thrown in for a not entirely mature
technology.
SCSI would be the way to go, but it's not in the budget for the spec,
sadly.

More memory could also help. Or splitting the database over several
disks. Maybe also move to a different OS or different database software.
What are you using?

Another problem is that good server performance and good interactive
responsiveness are two different things.

Arno
 
Arno Wagner said:
Not with the current generation.
Command queuing is only in the next SATA standard.
Nonsense.

It is also doubtful whether it will reach SCSI's performance level, IMO.

Your opinion obviously isn't worth a thing when you say things like that above.
 
Arno Wagner said:
Not with the current generation. Command queuing is only in the next
SATA standard. It is also doubtful whether it will reach SCSI's
performance level, IMO. Today SATA gives you about the performance of
IDE, with some problems thrown in for a not entirely mature
technology.

Native command queuing is available on current SATA drives. SiI has it in
their 3114 and 3124 controllers.
Management incompetence. You have to pay for performance. But not much more
than SATA Raptors.
 
Native command queuing is available on current SATA drives. SiI has it in
their 3114 and 3124 controllers.

Since it is the HDD that has to do the command queuing for it to be
effective (only the HDD knows its geometry and can improve performance
by reordering access), having it in the computer-side controller does
not help performance-wise. RAID-access optimisation (what I deduce
the SiI chips you mention have) is something else, since it is not HDD
specific.
Management incompetence. You have to pay for performance. But not much more
than SATA Raptors.

Agreed here. Get SCSI or stay slow. It is something else if you have
mostly linear access. For heavy random access, SCSI is massively ahead
of (S)ATA.

Arno
 
Bib said:
I'm looking at building a small database box for departmental use and
am curious if anyone has looked into if SATA would work better than
standard IDE for an application with a good number of simultaneous
read/write requests. There's a box doing something similar right now
in the office with regular IDE and I am very dissatisfied with its
performance. At certain times of the day it'll seem to freeze (even
in a shell!) for 2-3 seconds while it processes a lot of drive I/O.
Is this partially alleviated with SATA, or not really?

Check that box and make sure that DMA is enabled for the drive that is
showing that response.
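
If that box runs Linux, hdparm can tell you quickly. A minimal sketch, assuming
the IDE drive sits at /dev/hda and hdparm is installed (run as root; adjust the
device node for your box):

    # Check whether DMA is on for an IDE drive, and switch it on if it isn't.
    # "hdparm -d <dev>" prints a line like " using_dma = 1 (on)".
    import subprocess

    DEVICE = "/dev/hda"  # assumption: first IDE drive

    out = subprocess.check_output(["hdparm", "-d", DEVICE]).decode()
    if "(on)" in out:
        print("DMA already enabled on", DEVICE)
    else:
        print("DMA is off; enabling it (hdparm -d1)")
        subprocess.check_call(["hdparm", "-d1", DEVICE])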

Now, that said, if you are using single SATA drives there's not going to be
a whole lot of difference due to SATA. However, there are some other
considerations--first, 10,000 RPM ATA drives are available only with SATA
interfaces, not PATA. Second, the latest generation of ATA RAID
controllers is available only for SATA--their performance should be better
than the previous generation. Between the two, you can get somewhat better
performance out of SATA than out of PATA.
 
Bib said:
I'm looking at building a small database box for departmental use and
am curious if anyone has looked into if SATA would work better than
standard IDE for an application with a good number of simultaneous
read/write requests. There's a box doing something similar right now
in the office with regular IDE and I am very dissatisfied with its
performance. At certain times of the day it'll seem to freeze (even
in a shell!) for 2-3 seconds while it processes a lot of drive I/O.
Is this partially alleviated with SATA, or not really?

SCSI would be the way to go, but it's not in the budget for the spec,
sadly.

Have you logged any system activity with perfmon?

Some databases can be optimized by using multiple disks. For example,
Microsoft has recommendations for configuring a SQL database, backups, and
logs across multiple disks.

I've seen SQL database performance improve by enabling caching on controllers
as well, specifically Compaq SCSI and Fibre Channel arrays. But caching is
not recommended for some applications.
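
If it's a Windows box, you don't even have to collect the data by hand in the
perfmon GUI; typeperf can log counters to a CSV that perfmon will open later.
A rough sketch driven from Python -- the counter list, sampling interval and
output path are just assumptions:

    # Log a few disk-related performance counters to CSV via typeperf.
    import subprocess

    counters = [
        r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
        r"\PhysicalDisk(_Total)\% Disk Time",
        r"\Memory\Pages/sec",
    ]

    # one sample every 15 seconds, 240 samples = one hour of data
    subprocess.check_call(["typeperf"] + counters +
                          ["-si", "15", "-sc", "240",
                           "-o", r"C:\perflogs\disk.csv"])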
 
Arno Wagner said:
Since it is the HDD that has to do the command queuing for it to be
effective (only the HDD knows its geometry and can improve performance
by reordering access),

What exactly did you not get in:
"Native command queuing is available on current SATA drives"?
having it in the computer-side controller does not help performance-wise.

It does when it keeps tag (pun intended) of the outstanding I/O instead of
the CPU.
RAID-access optimisation (what I deduce the SiI chips you mention have)
is something else, since it is not HDD specific.

Deduce all you want.
Agreed here. Get SCSI or stay slow.

The second-generation Raptor is just as fast as any 10k rpm SCSI drive.
You won't get 15k rpm SCSI for 10k rpm prices.
It is something else if you have mostly linear access. For heavy random access,
SCSI is massively ahead of (S)ATA.

LOL. Time for bed, Arnie, you're obviously losing it.
 
Bib said:
I'm looking at building a small database box for departmental use and
am curious if anyone has looked into if SATA would work better than
standard IDE for an application with a good number of simultaneous
read/write requests. There's a box doing something similar right now
in the office with regular IDE and I am very dissatisfied with its
performance. At certain times of the day it'll seem to freeze (even
in a shell!) for 2-3 seconds while it processes a lot of drive I/O.
Is this partially alleviated with SATA, or not really?

SCSI would be the way to go, but it's not in the budget for the spec,
sadly.

SATA/PATA really won't make a difference. Specs that matter
are access times (under 8 ms is best) along with 8 MB
caches. RPM matters more for long-duration transfer
rates (which may or may not matter).

Raptors are probably a good bet (I don't know their
specs offhand).

A 3-year warranty is also a must.

The "freezing" problem you're experiencing is most likely
a problem with your motherboard chipset. I have a VIA
motherboard here that is *horrible* at doing multiple
things at once compared to my older Abit KT7-RAID,
which handles multiple PCI activities without a hitch.

RAID1 is a must for a departmental file server...
ideally with a hot spare drive.

So make sure you get a good-quality motherboard (Tyan),
preferably with 64-bit PCI slots to match up with a
*quality* PATA/SATA RAID controller (e.g. the 3Ware
Escalades). A good server motherboard will cost $250 or
so (anyone seen cheaper?) and the 3Ware 2-port Escalade
7000-2 PCI RAID card can be had for $100. Or get the
4-port SATA PCI card ($300 or so) along with the hot-swap
bays ($250) and do a RAID5 setup with a hot spare.

Pricing for all that will probably run around $2500-
$3000 for a 4 drive system, good case, good motherboard,
drives and the RAID card. Equivalent SCSI solution
would be more like $5500-$7500.
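
(For what it's worth, if even the RAID card doesn't fit the budget, Linux
software RAID can cover the RAID1-plus-hot-spare part. A rough mdadm sketch,
with device names that are purely assumptions; run as root:)

    # Build a RAID1 array with one hot spare using Linux software RAID (mdadm).
    import subprocess

    subprocess.check_call(
        ["mdadm", "--create", "/dev/md0",
         "--level=1", "--raid-devices=2", "--spare-devices=1",
         "/dev/sda1", "/dev/sdb1",   # the two mirrored members
         "/dev/sdc1"])               # the hot spare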
 
Toshi1873 said:
SATA/PATA really won't make a difference. Specs that matter
are access times (under 8 ms is best) along with 8 MB
caches. RPM matters more for long-duration transfer
rates (which may or may not matter).

NO, RPM matters most for average access time and NOT for STR.
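
(For reference: average rotational latency is half a revolution, which is why
spindle speed shows up directly in access time. A quick back-of-the-envelope
in Python, purely illustrative:)

    # Average rotational latency = time for half a revolution.
    for rpm in (5400, 7200, 10000, 15000):
        latency_ms = 0.5 * (60.0 / rpm) * 1000.0
        print("%5d rpm -> ~%.1f ms average rotational latency" % (rpm, latency_ms))
    # prints roughly 5.6, 4.2, 3.0 and 2.0 ms respectively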
 