Jase said:
It's really hard to say because the application will be a database driven
website. It will be transactionally heavy and the maximum load depends on
the peak number of simultaneous users I guess - and this depends on how
popular the site is. I would hazard a very rough guess at around 50-100
simultaneous users for the first year or so.
This depends on your definition of simultaneity.
I often have a fair number of users more or less simultaneously
looking at my site (in the sense that they've arrived and haven't yet
left), but from the computer's point of view, there's nobody using the
system 99.99% of that time, as the load generated by each user is
extremely small (an occasional burst of a couple page loads, and
that's it).
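To put rough numbers on that point, here's a back-of-envelope sketch. The figures (pages per visit, visit length) are illustrative assumptions, not measurements:

```python
# Back-of-envelope: how many "simultaneous" visitors translate into
# actual requests per second. All numbers are illustrative assumptions.

def requests_per_second(concurrent_users, pages_per_visit, visit_seconds):
    """Average request rate if each user loads a few pages over a visit."""
    return concurrent_users * pages_per_visit / visit_seconds

# 100 users "on the site" at once, each loading 5 pages over a
# 10-minute visit:
rate = requests_per_second(100, 5, 600)
print(f"{rate:.2f} requests/sec on average")  # well under 1 req/sec
```

Even 100 concurrent visitors works out to less than one page load per second on average, which is why the server is idle from the computer's point of view almost all the time.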
Something else you may wish to keep in mind is that, unless you have a
lot of bandwidth to your user community, the speed of your Net
connection is likely to limit your capacity a lot more than your
server performance. Your connection will saturate long before you run
out of CPU and probably long before you're pushing your disks to their
limits as well.
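A quick way to see this is to compute how many page views per second the uplink itself can carry. The connection speed and page size below are hypothetical:

```python
# Rough check of whether the Net connection or the server saturates
# first. Figures are illustrative assumptions, not measurements.

def max_pages_per_second(uplink_mbps, avg_page_kb):
    """Page views per second a connection can sustain before saturating."""
    uplink_bytes_per_sec = uplink_mbps * 1_000_000 / 8
    return uplink_bytes_per_sec / (avg_page_kb * 1024)

# A 2 Mbit/s uplink serving 50 KB pages:
print(f"{max_pages_per_second(2, 50):.1f} pages/sec")
```

A few pages per second is far below what even modest hardware can generate, so the pipe, not the CPU or disks, is usually the first bottleneck.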
The majority of transactions will be SQL select statements, although a
significant minority will be updates and inserts.
Relational databases generate a lot of I/O, but there are still a lot
of variables to consider. If you already have statistics showing the
details of your user activity, they can help you plan an appropriate
server configuration to handle it.
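As a sketch of turning such statistics into a disk-load estimate: the select-heavy mix is from the thread, but the specific rates and fractions below are assumptions. Writes tend to matter most, since reads are often served from cache while updates and inserts must eventually hit the disk:

```python
# Sketch: estimate the write load on the disks from activity stats.
# The select-heavy workload mix is from the thread; the numbers are
# assumptions for illustration.

def disk_writes_per_second(tx_per_second, write_fraction):
    """Writes (updates/inserts) that must eventually reach the disk."""
    return tx_per_second * write_fraction

# 20 transactions/sec, of which 20% are updates or inserts:
print(disk_writes_per_second(20, 0.20))  # 4.0 writes/sec
```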
Well the application will not be business critical, although obviously the
more uptime the better.
In that case--if you can survive a few hours down on rare
occasions--you can save a lot of money by not going with high-end SCSI
server disk drives and the like. Mission-critical servers cannot
afford to be down at all, so much of the money spent configuring such
systems goes into drives with extremely high reliability (as opposed
to extremely high capacity--it's difficult to have both), and into
redundancy such as RAID arrays that keep the system up even if any one
drive fails.
Essentially, the last 0.1% of uptime can cost almost as much as the
first 99.9%, because ensuring that a system is _always_ up is
extremely expensive. If it only has to be up "most of the time,"
you can save a lot of money.
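It helps to see what those fractions mean in hours. A small sketch:

```python
# What "the last 0.1% of uptime" means in hours of downtime per year.

def downtime_hours_per_year(uptime_fraction):
    return (1 - uptime_fraction) * 365 * 24

for nines in (0.999, 0.9999):
    print(f"{nines:.2%} uptime -> "
          f"{downtime_hours_per_year(nines):.2f} h/year down")
```

99.9% uptime still allows nearly nine hours of downtime a year; squeezing that to under an hour (99.99%) is where the serious spending starts.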
Are there any AMD 64 939 "server" quality motherboards out there?
I don't know. I believe MSI and a few others are known for their
server motherboards. Motherboards from any reputable vendor aren't
likely to fail very often if you treat them well, but it's also true
that a motherboard failure can take your server offline for quite a
while, since it usually means replacing the motherboard, and you can't
just slide one out and slide another one in.
Will the Western Digital Raptors be suitable disks for a transactionally
heavy database server? Or would I be much better going for SCSI?
The most reliable disks are often SCSI disks, simply because users who
need high reliability (for servers) also tend to want the other
advantages of the SCSI interface, so vendors usually put the two
together.
The more inexpensive desktop drives have a ton of capacity but they
aren't as reliable overall. They may run for ten years, or they may
fail after ten days. For desktops this isn't too much of an issue,
but it's important for servers. Similarly, servers need interfaces
that can handle very high data and connection rates, so something like
a USB interface obviously wouldn't do, and server disks have to have
high rotational speeds to deliver the data rates.
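The rotational-speed point can be made concrete: on average the right sector is half a revolution away, so latency falls directly with RPM. A quick sketch:

```python
# Why server disks spin faster: average rotational latency is half a
# revolution, so it drops directly as RPM rises.

def avg_rotational_latency_ms(rpm):
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 2 * 1000  # half a rev, in ms

print(f"7200 RPM:  {avg_rotational_latency_ms(7200):.2f} ms")   # ~4.17 ms
print(f"10000 RPM: {avg_rotational_latency_ms(10000):.2f} ms")  # 3.00 ms
```

For a transaction-heavy database doing many small random accesses, that per-access saving is multiplied by every seek, which is part of what the SCSI price premium buys.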
The price difference is a bit alarming. At my local computer
warehouse, a 250 GB Seagate 7200 RPM drive is about €119. A 300 GB
Hitachi SCSI drive at 10000 RPM is €899!
Yeah I would be looking to have 3GB of RAM.
Fortunately RAM is cheap. However, if the server is _very_ heavily
loaded, fast and/or reliable RAM might be best, and that can be more
expensive.
Remember that most fast CPUs today never come anywhere close to their
potential speeds because they spend a lot of their time waiting for
the system RAM to react. RAM as fast as the CPU costs a fortune.
But in this area remember that you need lots of RAM to help reduce the
I/O traffic to the disks; the RAM doesn't have to be fast from the
CPU's viewpoint. So you can get by with ordinary RAM from the corner
computer store in most cases. Your system is much more likely to have
trouble handling I/O than it is to have trouble handling the
processing load, in most scenarios.
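The reasoning above can be sketched numerically: each cache hit in RAM avoids a disk access that is orders of magnitude slower, so the hit rate (i.e., the amount of RAM) dominates, not the RAM's speed. The timings below are typical rough figures, not measurements:

```python
# Why plenty of ordinary RAM beats fast RAM for a database server:
# every cache hit avoids a disk access thousands of times slower.
# Timings are typical rough figures, not measurements.

def effective_access_ms(hit_rate, ram_ms=0.0001, disk_ms=8.0):
    """Average data-access time given a RAM cache hit rate."""
    return hit_rate * ram_ms + (1 - hit_rate) * disk_ms

print(f"90% hits: {effective_access_ms(0.90):.3f} ms")
print(f"99% hits: {effective_access_ms(0.99):.3f} ms")
```

Going from a 90% to a 99% hit rate cuts the average access time about tenfold; making the RAM itself twice as fast changes almost nothing, because the disk term dominates.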
Keep in mind also that any desktop PC configuration today is a hundred
times faster than the mainframe systems that kept hundreds or
thousands of users happy a few decades ago. So CPU is not likely to
be a problem. Most CPU power on desktops is spent just driving the
video display; since a server doesn't need a fancy video display, a
lot more CPU power can be dedicated to handling remote users, and with
several billion instructions per second being routine today, that's a
_lot_ of remote users.
Another, straightforward way of looking at things is: How much will a
failure or overload (drop in response time) on your server actually
cost you in terms of lost business? That will give you some idea of
how much you can and should spend on the server. If three hours of
downtime will cost you one purchase worth $40, you can easily afford
to cut costs on hardware and tolerate the possibility of that
downtime. If downtime costs you $500,000 a minute (and yes, some of
the largest online systems can cost that much), you should spare no
expense in setting up your servers.
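That rule of thumb is simple arithmetic. The dollar figures below are the thread's own illustrative examples:

```python
# Expected annual downtime cost bounds what extra reliability is worth.
# Dollar figures are the thread's illustrative examples.

def annual_downtime_loss(downtime_hours_per_year, revenue_lost_per_hour):
    return downtime_hours_per_year * revenue_lost_per_hour

# Small site: ~9 h/year down (99.9% uptime), losing roughly $15/hour:
print(f"${annual_downtime_loss(9, 15):,.0f}/year")  # $135/year
# Large site: the same 9 h/year at $500,000 per *minute*:
print(f"${annual_downtime_loss(9, 500_000 * 60):,.0f}/year")
```

If the expected annual loss is $135, a €899 SCSI drive never pays for itself; if it is in the hundreds of millions, almost any hardware expense does.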