4-way Opteron vs. Xeon-IBM X3 architecture

  • Thread starter: Yousuf Khan

Yousuf Khan

There was a lot of hay made about how a 4-way Xeon system with the IBM
X3 chipset beat out a similarly configured 4-way Opteron system from HP
(albeit with much greater cost, $1.83M vs. $0.48M). IBM ran the TPC-C
tests with 64-bit Windows and 64-bit DB2; HP did it with 64-bit Windows
and 64-bit SQLserver. The HP Opteron machine managed 202,551 tpmC vs.
221,017 for the IBM Xeon machine, beating the Opteron by over 9%!

Well, HP just redid those tests, and the HP Opteron system managed
236,054 in TPC-C this time. So what was the difference? Did HP use
better Opterons? No, they used Opteron 880s before, and they continued
to use them this time. Did AMD improve the Hypertransport in the
meantime? Nope, still the same revision of Hypertransport they had
previously. So what was it? HP replaced the SQLserver with IBM's own
DB2. So this wasn't so much about Opteron beating Xeon+X3 by 6.8%, but
that DB2 beat SQLserver by 16.5%!
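For what it's worth, the percentage gaps quoted above check out; here's a quick sketch (the scores are the tpmC figures from the submissions discussed above):

```python
# tpmC scores from the TPC-C submissions discussed above
hp_sqlserver = 202_551   # HP Opteron DL585 with SQL Server 2005
ibm_xeon_db2 = 221_017   # IBM Xeon + X3 chipset system with DB2
hp_db2       = 236_054   # HP Opteron DL585 rerun with DB2

def pct_gain(new: int, old: int) -> float:
    """Percentage improvement of `new` over `old`."""
    return (new / old - 1) * 100

print(f"Xeon+X3/DB2 over Opteron/SQLserver: {pct_gain(ibm_xeon_db2, hp_sqlserver):.1f}%")  # 9.1%
print(f"Opteron/DB2 over Xeon+X3/DB2:       {pct_gain(hp_db2, ibm_xeon_db2):.1f}%")        # 6.8%
print(f"DB2 over SQLserver (same HP box):   {pct_gain(hp_db2, hp_sqlserver):.1f}%")        # 16.5%
```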

IBM has been using these tests as a means of demonstrating the
superiority of its X3 architecture over Hypertransport, on the hardware
side. But it was secretly downplaying the superiority of DB2 over other
databases, on the software side.

Journal of Pervasive 64bit Computing: HP beat IBM at its own game
http://sharikou.blogspot.com/2005/12/hp-beat-ibm-at-its-own-game.html

Yousuf Khan
 
Did you actually look at the submission? Are the two HP systems fully
identical? I rather doubt it...

DK
 
David said:
Did you actually look at the submission? Are the two HP systems fully
identical? I rather doubt it...

Very close to identical, here's a list of the components from each HP
machine:

component: SQLserver system; DB2 system
OS: Windows Server 2003 x64; same
DB: SQLserver 2005 (x64); DB2 UDB v8.2
CPU: 4 x Opteron 880 (2.4Ghz, DC, 1MB L2); same
RAM: 32GB; same
Disks: 15,005GB; 13,984GB

The slight difference is in total disk space (the faster system
actually has *less* disk space). And of course the major difference is
the database software.

Yousuf Khan
 
David said:
Did you actually look at the submission? Are the two HP systems fully
identical? I rather doubt it...

Well, now that it's been proven that these two systems are pretty nearly
identical: what were you thinking would've been the difference?

Yousuf Khan
 
Yousuf said:
Well, now that it's been proven that these two systems are pretty nearly
identical: what were you thinking would've been the difference?

Storage, clients and their configurations, the OS. Memory timings
could make a difference.

One thing people don't realize is that most K8 systems have to
downclock the memory when they are > half loaded with DRAM. The IBM
submission is quite a bit older, so there could easily have been
improvements since then.

DK
 
Storage, clients and their configurations, the OS. Memory timings
could make a difference.

One thing people don't realize is that most K8 systems have to
downclock the memory when they are > half loaded with DRAM.

Obviously your lack of experience with K8 is revealed through your
ignorance - AMD has written specs to cater for "average" DIMMs; high
quality memory makes a difference.
The IBM
submission is quite a bit older, so there could easily have been
improvements since then.

Just go look at the tpc results - the DB2/SQL performance gap is well
illustrated with other systems too.
 
One thing people don't realize is that most K8 systems have to
downclock the memory when they are > half loaded with DRAM.

Obviously your lack of experience with K8 is revealed through your
ignorance - AMD has written specs to cater for "average" DIMMs; high
quality memory makes a difference.

That is apparently not true:

http://h18004.www1.hp.com/products/servers/proliantdl585/specifications.html

Up to 32GB @ DDR400, 48GB @ DDR333 and 128GB @ DDR266.

Now, to me that looks an awful lot like they are decreasing memory
speeds as capacity goes up. In fact, since they only sell the system
with DDR333 and DDR400, it looks to me like if you want 128GB, you have
to downclock the memory.
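The speed/capacity tiers from that spec page can be written down as a tiny lookup. This is a sketch of HP's published DL585 tiers only; AMD's actual limits are specified per rank count and per channel, not by total capacity:

```python
# DDR speed tiers as listed on HP's DL585 spec page (quoted above).
# The tier boundaries are read off that page, not from AMD's formal spec,
# which is written in terms of ranks per memory channel.
def max_ddr_speed(capacity_gb: int) -> int:
    """Fastest supported DDR data rate (MHz) at a given total capacity."""
    if capacity_gb <= 32:
        return 400   # DDR400 / PC3200
    elif capacity_gb <= 48:
        return 333   # DDR333 / PC2700
    elif capacity_gb <= 128:
        return 266   # DDR266 / PC2100
    raise ValueError("beyond the DL585's listed 128GB maximum")

print(max_ddr_speed(32), max_ddr_speed(48), max_ddr_speed(128))  # 400 333 266
```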

Funny, that's pretty much what I said...

I'm sure that HP is fully capable of getting high quality DIMMs, so
please don't give me that line. It is quite clear that to fully load
the system with memory, you have to decrease the speed.

This is quite common, by the way: to fully load a p/iSeries system,
you have to drop down from DDR2 to DDR1...
Just go look at the tpc results - the DB2/SQL performance gap is well
illustrated with other systems too.

I agree that generally DB2 scores better, but there are only a few data
points to draw on. Like 3-5.

DK
 
David said:
One thing people don't realize is that most K8 systems have to
downclock the memory when they are > half loaded with DRAM. The IBM
submission is quite a bit older, so there could easily have been
improvements since then.

The IBM submission might be quite a bit older, but it's also for a
system that won't even be available till March 2006! While the HP
systems are available immediately. So in effect an upcoming future
system is older than an already-available present system! Time dilation
effects, apparently. :-)

Yousuf Khan
 
That is apparently not true:

http://h18004.www1.hp.com/products/servers/proliantdl585/specifications.html

Up to 32GB @ DDR400, 48GB @ DDR333 and 128GB @ DDR266.

Now, to me that looks an awful lot like they are decreasing memory
speeds as capacity goes up. In fact, since they only sell the system
with DDR333 and DDR400, it looks to me like if you want 128GB, you have
to downclock the memory.

They're specs fer chrissakes - experience has shown differently, if you'd
only look around.
Funny, that's pretty much what I said...

I'm sure that HP is fully capable of getting high quality DIMMs, so
please don't give me that line. It is quite clear that to fully load
the system with memory, you have to decrease the speed.

That'd be a first if an OEM acquired only high quality memory.
This is quite common by the way, to fully load a p/iSeries system, you
have to drop down from DDR2 to DDR1...

Of course it's common - you load up a memory bus and you generally have to
back off on timings... even Intel.
I agree that generally DB2 scores better, but there are only a few data
points to draw on. Like 3-5.

Dunno what "Like 3-5" means - there's enough evidence of similar systems
with either that it *always* scores better.
 
That is apparently not true:
They're specs fer chrissakes - experience has shown differently, if you'd
only look around.

Experience? What experience? Sadly, I don't have dozens of these guys
running around in my house. Can you show systems with benchmarks that
use 128GB of memory running at full speed?

Look at the TPC-C submission:
http://tpc.org/results/individual_results/HP/HP_DL585_4P_2.4DC_DB2_ES.PDF

They are using PC2700, not 3200.
That'd be a first if an OEM acquired only high quality memory.

You're assuming that HP uses a particular quality of memory; I don't
know whether they do or not.
Of course it's common - you load up a memory bus and you generally have to
back off on timings... even Intel.

Can you cite examples in the 2P & 4P realm of Intel based systems
running the memory slower to achieve high capacity? I cannot, but I
may very well be overlooking stuff.

I know IBM's xSeries does not trade memory speed vs. capacity.
Dunno what "Like 3-5" means - there's enough evidence of similar systems
with either that it *always* scores better.

What I mean is that there are perhaps 3-5 systems with TPC-C
submissions that use both MS SQL and DB2. I haven't even gotten down
to examining the differences in memory, processors, OS, etc.

Now certainly, there are a couple of cases of directly comparable
systems being used with both DBMS, and usually DB2 scores higher.
However, I can only think of IBM's x366 and the HP DL585.

Anyway, that's hardly the point. The issue I brought up was that
memory timings tend to vary a bit more for K8 systems since they are
capacity dependent.

DK
 
Yousuf said:
The IBM submission might be quite a bit older, but it's also for a
system that won't even be available till March 2006! While the HP
systems are available immediately. So in effect an upcoming future
system is older than an already-available present system! Time dilation
effects, apparently. :-)

That's obviously the result of proximity to the Jobs Reality Distortion
field...

DK
 
That's obviously the result of proximity to the Jobs Reality Distortion
field...

Hmm, K8, IBM, March 2006, HP, "available immediately", Jobs? What
*are* you talking about? Backtracking with the Kanter Smoke Field?
 
Storage, clients and their configurations, the OS. Memory timings
could make a difference.

The two HP systems have *very* similar (though not 100% identical)
storage configurations. OS is the same, though optimizations and
settings could be slightly different. Both the HP Opteron systems
also used PC2700 memory.
One thing people don't realize is that most K8 systems have to
downclock the memory when they are > half loaded with DRAM.

The two HP systems used identical processors, system boards and
memory, so I doubt that had any impact on things.
The IBM
submission is quite a bit older, so there could easily have been
improvements since then.

If by "quite a bit older" you mean 5 weeks, then I guess you might
have a point. However practically speaking I don't think IBM made too
many changes between Oct. 31, 2005, when they submitted their result,
and Dec. 5, 2005 when HP submitted their latest Opteron results.
 
Experience? What experience? Sadly, I don't have dozens of these guys
running around in my house. Can you show systems with benchmarks that
use 128GB of memory running at full speed?

The 128GB is not the issue - you said "> half-loaded" so what's important
is how you get to 128GB and AMD's specs talk of number of ranks of memory.
There are certainly Web sites that have tried successfully to run at DDR400
with a full rank count on the memory channels - sorry but I did not
bookmark them. Obviously the number of CPUs matters as well with AMD...
how you divide your 128GB up.
Look at the TPC-C submission:
http://tpc.org/results/individual_results/HP/HP_DL585_4P_2.4DC_DB2_ES.PDF

They are using PC2700, not 3200.

No way to know what they tried and possibly failed with, or if they're just
following specs. I'd say for tpc that loss is not a big deal - if the
clock speed has to be reduced with an obvious loss in bandwidth... with
quality memory, the latency can also be reduced (i.e. improved) so that
there's no increase in delay time, which would be a bit more important
(than bandwidth) for a typical tpc load.
You're assuming that HP uses a particular quality of memory; I don't
know whether they do or not

I'm assuming that they buy their memory DIMMs much like any other large OEM
- by the boatload and for a price point.
Can you cite examples in the 2P & 4P realm of Intel based systems
running the memory slower to achieve high capacity? I cannot, but I
may very well be overlooking stuff.

Unless you have a very recent DIB system, the Intel Xeon multi-drop FSB is
derated (vs. an equivalent P4 system) anyway. Is it currently 533MT/s or
667MT/s?... I don't recall... so no in general, a reduction in memory
channel speed is not necessary. I see no point in running FSB/memory
non-clock-locked.
I know IBM's xSeries does not trade memory speed vs. capacity.


What I mean is that there are perhaps 3-5 systems with TPC-C
submissions that use both MS SQL and DB2. I haven't even gotten down
to examining the differences in memory, processors, OS, etc.

Now certainly, there are a couple of cases of directly comparable
systems being used with both DBMS, and usually DB2 scores higher.
However, I can only think of IBM's x366 and the HP DL585.

It's a while since I looked and I don't have time right now, but this issue
has been discussed here before wrt tpc benchmarks - it's more than
"usually"... it's quite consistent.
Anyway, that's hardly the point. The issue I brought up was that
memory timings tend to vary a bit more for K8 systems since they are
capacity dependent.

And you tried to use it to counter the argument that the difference in
performance is due to the DB2 advantage over SQL Server... which is the
real point. I've no experience with Opteron s940s but certainly when
configuring an Athlon64 system, I *do* try to keep the rank count down,
because AMD does have specs which I'd rather not bump into and because we
have to deal with commodity mbrds; OTOH, as noted above I don't think the
capacity dependency is as bad or as important as you're suggesting.
 
Experience? What experience? Sadly, I don't have dozens of these guys
running around in my house.
The 128GB is not the issue - you said "> half-loaded" so what's important
is how you get to 128GB and AMD's specs talk of number of ranks of memory.
There are certainly Web sites that have tried successfully to run at DDR400
with a full rank count on the memory channels - sorry but I did not
bookmark them. Obviously the number of CPUs matters as well with AMD...
how you divide your 128GB up.

Websites? No website has a server with 128GB of memory for testing, be
serious. Besides, even if they did, it doesn't matter. What HP, Sun
and IBM will sell and support matter; what specially engineered and
tweaked boxes they send to reviewers is irrelevant.
No way to know what they tried and possibly failed with, or if they're just
following specs. I'd say for tpc that loss is not a big deal - if the
clock speed has to be reduced with an obvious loss in bandwidth... with
quality memory, the latency can also be reduced (i.e. improved) so that
there's no increase in delay time, which would be a bit more important
(than bandwidth) for a typical tpc load.

Actually I think you'll find that bandwidth is a lot more important for
TPC-C than unloaded latency (which is what you are referring to).
Loaded latency is key for TPC-C, and that is strongly influenced by
bandwidth.
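The loaded-vs-unloaded distinction can be illustrated with a toy M/M/1 queue. This is purely an illustration, not a model of any real memory controller: as offered load approaches the available bandwidth, latency blows up, so extra bandwidth shows up directly as lower loaded latency.

```python
# Toy M/M/1 model: service rate ~ memory bandwidth (requests per unit time),
# arrival rate ~ offered load. Mean time in system W = 1 / (mu - lambda).
def loaded_latency(bandwidth: float, load: float) -> float:
    assert load < bandwidth, "system must be in equilibrium (utilization < 1)"
    return 1.0 / (bandwidth - load)

load = 0.8  # fixed offered load
for bw in (1.0, 1.2):  # +20% bandwidth, roughly DDR333 -> DDR400
    print(f"bandwidth={bw:.1f}: loaded latency={loaded_latency(bw, load):.2f}")
# At load 0.8, going from bandwidth 1.0 to 1.2 cuts loaded latency from
# 5.00 to 2.50 - a much bigger effect than the 20% bandwidth bump alone suggests.
```

The point of the sketch is only that near saturation, loaded latency is dominated by bandwidth headroom, which is why the two quantities are hard to separate in a benchmark that runs at equilibrium.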

[snip]
I'm assuming that they buy their memory DIMMs much like any other large OEM
- by the boatload and for a price point.

Perhaps, perhaps not. Given how much they charge for said memory, they
can clearly afford to use good DRAMs.

[snip]
Unless you have a very recent DIB system, the Intel Xeon multi-drop FSB is
derated (vs. an equivalent P4 system) anyway.

As it so happens I do have such a box : ) The dual busses are useful
for keeping pace with initial multicore designs. Once Intel does
shared caches, they won't have as many problems with FSB speeds.

Anyway, the older Lindenhurst systems have a full speed bus (800MT/s)
for 2P. Twin castle (for 4P) uses dual busses and is also at 800MT/s I
think, but someone should fact check that.
Is it currently 533MT/s or
667MT/s?... I don't recall... so no in general, a reduction in memory
channel speed is not necessary. I see no point in running FSB/memory
non-clock-locked.

The Blackford chipset can do 1066MT/s and 1333MT/s for each bus, it
will come out in March. The current chipset does 800MT/s for two
sockets IIRC.

[snip]
It's a while since I looked and I don't have time right now, but this issue
has been discussed here before wrt tpc benchmarks - it's more than
"usually"... it's quite consistent.

I agree that every sign I've seen points to the fact that for TPC-C,
DB2 is better than MSSQL. However, there are 2-3 comparable systems
tested with both. Also, most of the DB2 submissions are for AIX
systems, which for now, tend to perform better than others. Again, you
can see trends, but there are so many other variables contributing to
them; the only way to control for OS, memory, storage, etc. is to get
identical or near identical systems for comparison. Unfortunately,
there are only 2-3 of those at best.
And you tried to use it to counter the argument that the difference in
performance is due to the DB2 advantage over SQL Server... which is the
real point. I've no experience with Opteron s940s but certainly when
configuring an Athlon64 system, I *do* try to keep the rank count down,
because AMD does have specs which I'd rather not bump into and because we
have to deal with commodity mbrds; OTOH, as noted above I don't think the
capacity dependency is as bad or as important as you're suggesting.

I really think it depends. For some people it matters, for some it
doesn't. I happen to think that HP's score would improve if they were
capable of using full speed memory, since it would boost their
bandwidth by a fair amount, ~20-25%.
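As a sanity check on that figure, the peak-bandwidth gain implied by the module ratings sits right around the low end of that range:

```python
# Rated peak bandwidth per channel, taken from the module names (MB/s)
pc2700 = 2700   # DDR333
pc3200 = 3200   # DDR400
gain = (pc3200 / pc2700 - 1) * 100
print(f"PC3200 over PC2700: +{gain:.1f}% peak bandwidth")  # about +18.5%
```

By clock ratio (400/333) it works out to ~20%, so the 20-25% above is at the optimistic end.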

DK
 
Websites? No website has a server with 128GB of memory for testing, be
serious. Besides, even if they did, it doesn't matter. What HP, Sun
and IBM will sell and support matter; what specially engineered and
tweaked boxes they send to reviewers is irrelevant.

You're not making sense wrt AMD memory arrangements - capacity is only an
issue in that it loads up the channel with multiple ranks. Where did you
get this stuff? Why did you bring up an extreme case with 128GB of memory?
Is that the basis of support for your contentions?
Actually I think you'll find that bandwidth is a lot more important for
TPC-C than unloaded latency (which is what you are referring to).
Loaded latency is key for TPC-C, and that is strongly influenced by
bandwidth.

I think not. If TPC benefited greatly from cache then I don't see its
applicability to the target market... which would be err, futile.
[snip]
I'm assuming that they buy their memory DIMMs much like any other large OEM
- by the boatload and for a price point.

Perhaps, perhaps not. Given how much they charge for said memory, they
can clearly afford to use good DRAMs.

Huh? You really think price is related to cost?
[snip]
Unless you have a very recent DIB system, the Intel Xeon multi-drop FSB is
derated (vs. an equivalent P4 system) anyway.

As it so happens I do have such a box : ) The dual busses are useful
for keeping pace with initial multicore designs. Once Intel does
shared caches, they won't have as many problems with FSB speeds.

Ah, so what is this here "box"? I wasn't aware that there were any DIB
systems in the err, wild... apart from IBM Hurricane based?... or are you
saying you have access to Intel prototype systems?
Anyway, the older Lindenhurst systems have a full speed bus (800MT/s)
for 2P. Twin castle (for 4P) uses dual busses and is also at 800MT/s I
think, but someone should fact check that.

Yeah please do check it... and the effective bandwidth. From what I see
Xeons just don't do very well in bandwidth tests compared with equivalent
speed P4s. In fact AMD can well afford to drop a speed grade, if
necessary, and still be well ahead.
The Blackford chipset can do 1066MT/s and 1333MT/s for each bus, it
will come out in March. The current chipset does 800MT/s for two
sockets IIRC.

Can do?... or will do? I'm interested in what can be done practically and
not some projected vapor using a memory channel technology which is looking
less and less practical, the more we see of it. As for the current
chipsets, the results I've seen do not reflect the supposed theoretical
MT/s.
[snip]
It's a while since I looked and I don't have time right now, but this issue
has been discussed here before wrt tpc benchmarks - it's more than
"usually"... it's quite consistent.

I agree that every sign I've seen points to the fact that for TPC-C,
DB2 is better than MSSQL. However, there are 2-3 comparable systems
tested with both. Also, most of the DB2 submissions are for AIX
systems, which for now, tend to perform better than others. Again, you
can see trends, but there are so many other variables contributing to
them; the only way to control for OS, memory, storage, etc. is to get
identical or near identical systems for comparison. Unfortunately,
there are only 2-3 of those at best.

The *point* is that the differences between DB2 and SQL Server systems are
massive compared with any hardware differences - a few MHz here or there is
not going to come close to those kinds of gaps. Insisting on "identical"
is unnecessary... and piddling.
I really think it depends. For some people it matters, for some it
doesn't. I happen to think that HP's score would improve if they were
capable of using full speed memory, since it would boost their
bandwidth by a fair amount, ~20-25%.

Bandwidth boosts just do *not* result in huge differences in effective
performance... as reflected in the HP score as is.
 
You're not making sense wrt AMD memory arrangements - capacity is only an
issue in that it loads up the channel with multiple ranks. Where did you
get this stuff? Why did you bring up an extreme case with 128GB of memory?
Is that the basis of support for your contentions?

I specifically demonstrated that several memory capacities require the
user to lower the speed. 48GB+ for HP boxes, and I don't think many
other folks have K8 boxes that go more than 32GB.
I think not. If TPC benefited greatly from cache then I don't see its
applicability to the target market... which would be err, futile.

TPC-C definitely benefits from bandwidth, although not linearly since
cache helps provide effective bandwidth. Your second sentence is
entirely incomprehensible...
Huh? You really think price is related to cost?

Well, actually it is, just weakly. My point was that if it was a
quality of DRAM/DIMM issue, HP would be able to simply use higher
cost/quality memory. However, they don't...which seems to imply it's
not a quality issue and has more to do with the rank limitations and
the design of the memory controller and DDR bus.
[snip]
Can you cite examples in the 2P & 4P realm of Intel based systems
running the memory slower to achieve high capacity? I cannot, but I
may very well be overlooking stuff.

Unless you have a very recent DIB system, the Intel Xeon multi-drop FSB is
derated (vs. an equivalent P4 system) anyway.

As it so happens I do have such a box : ) The dual busses are useful
for keeping pace with initial multicore designs. Once Intel does
shared caches, they won't have as many problems with FSB speeds.

Ah, so what is this here "box"? I wasn't aware that there were any DIB
systems in the err, wild... apart from IBM Hurricane based?... or are you
saying you have access to Intel prototype systems?

Yes, I have a Bensley system sitting next to my desk. I wrote an
article about it, using at least one or two relevant benchmarks:

http://www.realworldtech.com/page.cfm?ArticleID=RWT112905011743

You can find SPECjbb2005 results in there for a very modestly
configured box, that is one benchmark that I can use. Also, I believe
Intel's reference Xeon MP platform uses two busses. However, the
independence of the busses in Blackford is... minimal. The two busses
are independent when using a snoop filter (i.e. a partial directory).
Blackford does not use a snoop filter; Green Creek (the workstation
version) does.

Feel free to suggest other benchmarks for future inclusion...
Yeah please do check it... and the effective bandwidth. From what I see
Xeons just don't do very well in bandwidth tests compared with equivalent
speed P4s. In fact AMD can well afford to drop a speed grade, if
necessary, and still be well ahead.

I suspect that the timing for the bus may be slightly different, but
they are the same MT/s for 1-2P systems. Also, most P4 systems don't
use registered, ECC DIMMs, while most Xeon systems do.

Generally, Intel's bus works for 3 drops (i.e. MCH, and 2 processors),
it starts to drop off in speed at more drops. The problem with their
systems right now is that the Paxville MPUs have 2 bus interfaces, so a
2 socket system can have as many as 5 controllers trying to talk.
Can do?... or will do? I'm interested in what can be done practically and
not some projected vapor using a memory channel technology which is looking
less and less practical, the more we see of it. As for the current
chipsets, the results I've seen do not reflect the supposed theoretical
MT/s.

Can do. It can also do 1333MT/s, but Dempsey does not do 1333, while
Woodcrest does. The system I have uses two 1066MT/s busses.
[snip]
It's a while since I looked and I don't have time right now, but this issue
has been discussed here before wrt tpc benchmarks - it's more than
"usually"... it's quite consistent.

I agree that every sign I've seen points to the fact that for TPC-C,
DB2 is better than MSSQL. However, there are 2-3 comparable systems
tested with both. Also, most of the DB2 submissions are for AIX
systems, which for now, tend to perform better than others. Again, you
can see trends, but there are so many other variables contributing to
them; the only way to control for OS, memory, storage, etc. is to get
identical or near identical systems for comparison. Unfortunately,
there are only 2-3 of those at best.

The *point* is that the differences between DB2 and SQL Server systems are
massive compared with any hardware differences - a few MHz here or there is
not going to come close to those kinds of gaps. Insisting on "identical"
is unnecessary... and piddling.

I guess it depends on how much you are willing to assume or know about
DBMS. I honestly don't consider myself an expert, so I tend to place
more faith in statistics than other sources. If Jim Gray came along
and said that DB2 performs better than MSSQL, I'd probably believe him,
but it's hard to ascertain someone's level of experience over the
internet.

Bandwidth boosts just do *not* result in huge differences in effective
performance... as reflected in the HP score as is.

It all depends on the system...if you look at queuing theory models
(which apply to TPC-C, since the benchmark has to be in equilibrium),
increasing bandwidth can be very helpful. Obviously,
d(tmpc)/d(bandwidth) < 1 usually, but it would be a really interesting
experiment to take a K8 and P4 based system and then underclock the
memory and see how benchmark scores vary. I just don't use or know
TPC-C...nor do I have a license, nor do I have clients, nor do I have
the requisite number of disks.

David
 
I specifically demonstrated that several memory capacities require the
user to lower the speed. 48GB+ for HP boxes, and I don't think many
other folks have K8 boxes that go more than 32GB.

You "demonstrated" nothing. What you did was cite some specs from a Web
site which may or may not be accurate or up to date -- even Intel's (and
AMD's) site, including their datasheets, is riddled with errors -- and
apparently extrapolated to all AMD systems.
TPC-C definitely benefits from bandwidth, although not linearly since
cache helps provide effective bandwidth. Your second sentence is
entirely incomprehensible...

Oh, "TPC" above meant the benchmark suite. If it's still incomprehensible,
I dunno what to say but your comment about cache/bandwidth is backwards to
me... if it even applies much to the problem at hand at all. At any rate,
in case you hadn't noticed, Netburst is err, bust.
Well, actually it is, just weakly. My point was that if it was a
quality of DRAM/DIMM issue, HP would be able to simply use higher
cost/quality memory. However, they don't...which seems to imply it's
not a quality issue and has more to do with the rank limitations and
the design of the memory controller and DDR bus.

It's the high volume OEM market - they get parts which conform to some LCD
(lowest common denominator).
I suspect that the timing for the bus may be slightly different, but
they are the same MT/s for 1-2P systems. Also, most P4 systems don't
use registered, ECC DIMMs, while most Xeon systems do.

Generally, Intel's bus works for 3 drops (i.e. MCH, and 2 processors),
it starts to drop off in speed at more drops. The problem with their
systems right now is that the Paxville MPUs have 2 bus interfaces, so a
2 socket system can have as many as 5 controllers trying to talk.

The performance drop-off I see in various benchmark tests is more than can
be accounted for by registering & ECC - I brought it up here a while back
when the GamePC tests were being discussed. At first I thought they had
blundered on configuration & setup but later wasn't so sure.
Can do. It can also do 1333MT/s, but Dempsey does not do 1333, while
Woodcrest does. The system I have uses two 1066MT/s busses.

IMO this is prototype stuff... not to be compared with field systems. :-)
[snip]

It's a while since I looked and I don't have time right now, but this issue
has been discussed here before wrt tpc benchmarks - it's more than
"usually"... it's quite consistent.

I agree that every sign I've seen points to the fact that for TPC-C,
DB2 is better than MSSQL. However, there are 2-3 comparable systems
tested with both. Also, most of the DB2 submissions are for AIX
systems, which for now, tend to perform better than others. Again, you
can see trends, but there are so many other variables contributing to
them; the only way to control for OS, memory, storage, etc. is to get
identical or near identical systems for comparison. Unfortunately,
there are only 2-3 of those at best.

The *point* is that the differences between DB2 and SQL Server systems are
massive compared with any hardware differences - a few MHz here or there is
not going to come close to those kinds of gaps. Insisting on "identical"
is unnecessary... and piddling.

I guess it depends on how much you are willing to assume or know about
DBMS. I honestly don't consider myself an expert, so I tend to place
more faith in statistics than other sources. If Jim Gray came along
and said that DB2 performs better than MSSQL, I'd probably believe him,
but it's hard to ascertain someone's level of experience over the
internet.

With some general knowledge/expertise in computing I don't think one needs
to be an expert in the guts of DB systems to look at a few data points and
recognize a pattern... especially one so glaring as this.
 
I specifically demonstrated that several memory capacities require the
user to lower the speed.
You "demonstrated" nothing. What you did was cite some specs from a Web
site which may or may not be accurate or up to date -- even Intel's (and
AMD's) site, including their datasheets, is riddled with errors -- and
apparently extrapolated to all AMD systems.

Ok, then why don't you find some AMD systems which can run 128GB of
memory at PC3200 speeds? My claim at least has some evidence, you have
been spouting all sorts of denials, but with NO EVIDENCE to back it up.
IMO this is prototype stuff... not to be compared with field systems. :-)

Absolutely, but I'll wager money that Bensley will do 1066MT/s when it
comes out.
With some general knowledge/expertise in computing I don't think one needs
to be an expert in the guts of DB systems to look at a few data points and
recognize a pattern... especially one so glaring as this.

A pattern? 2 data points don't form a pattern, be serious.

DK
 