Non-intel benchmarks on Conroe vs AMD's AM2 FX62

http://www.hexus.net/content/item.php?item=5692&page=3

Well, it doesn't look like smoke and mirrors anymore does it? :P

The bit I'm curious about is, is there any mistake in the Sciencemark
2.0 latency results? Without an onboard memory controller, the Conroe
E6600 is faster than the FX-62. Strangely enough, the faster E6700 has
higher latencies than either of the former.

It depends on what you're measuring the latency of. I don't have Sciencemark
-- does it cost $$? :-) -- and have never bothered to look at its details,
but it is not possible that this is DRAM latency being reported.
Certainly when using the exact same DIMMs connected to the exact(?) same
chipset at the exact same clockrate, with the exact same FSB, one should
not expect latency to improve by ~38%.

Why did they use 2GB of memory with the FX-62?... to slow it down?

I still have to say, as I did the last time, that even at 1024x768, it's
always been my impression that the video card dominates the measured
performance of FPS in game-play benchmarks. There's something else besides
the CPU at play here.
 
http://www.hexus.net/content/item.php?item=5692&page=3

Well, it doesn't look like smoke and mirrors anymore does it? :P

The bit I'm curious about is, is there any mistake in the Sciencemark
2.0 latency results? Without an onboard memory controller, the Conroe
E6600 is faster than the FX-62. Strangely enough, the faster E6700 has
higher latencies than either of the former.

OK, we'll see what AMD comes up with two months later. At the least it
will be a speed grade, possibly two or more. However, most likely
Intel will be ahead until AMD gets 65nm ready (December?). My bet:
July will be a good time to buy INTC - it will probably slide to around
15 or less by then, and January will be the time to dump INTC and
use the profits to buy AMD.

NNN
 
OK, my error, but I am biased (towards AMD), so: they give no data on the board,
maybe they used slow memory, wrong settings...
Intel cannot be better :-)

They gave all the details on page 2 of the article, you just have to
go back a page from the link L.Angel provided:

http://www.hexus.net/content/item.php?item=5692&page=2

The FX-62 chip used the most and the fastest memory of any of the
systems tested, 2GB of PC2-8000 memory vs. 1GB of PC2-5300 for the two
Intel systems and 1GB of PC-3200 for the Socket 939 Athlon64 system.

There were a few slightly odd disparities, like the two AMD-based
systems used different hard drives and monitors than the two
Intel-based systems, but that shouldn't have much of an effect on
performance.

I think even you may need to face facts: it looks like Intel will have
the upper hand when it comes to performance.
 
Ok, but the FX62 is the slowest sort of AMD's fastest speed grade.
;-)
OK, my error, but I am biased (towards AMD), so: they give no data on the board,
maybe they used slow memory, wrong settings...
Intel cannot be better :-)

Until all NDAs are off, they ship to anyone who wants one (price?),
and there are *INDEPENDENT* results, who cares what Intel
bench-marketeering says?
 
There were a few slightly odd disparities, like the two AMD-based
systems used different hard drives and monitors than the two
Intel-based systems, but that shouldn't have much of an effect on
performance.

According to the authors' notes in their forum, the systems were
physically located in two different countries; that's why the drives and
monitors are different.
 
On a sunny day (Tue, 23 May 2006 19:12:26 GMT) it happened
[email protected] (The little lost angel) wrote in
<[email protected]>:

There is a nice, mainly command-line oriented (but it has X) Linux
CD image you can download from www.grml.org (it is running my system).
You can also install that one to hard disk.

I use this because the rest seems to get or pick up bloat.

I hate the bloat that comes with most distributions. But I went with
Ubuntu at the moment because I need something that I can eventually
use as both a normal laptop/desktop OS (for other people in the family
as well!) and a small server.

Since Ubuntu is based on Debian, which is supposed to be good in
server environments, I picked it to reduce the additional confusion I
might get from trying to learn two different OSes at the same time.

Only had it for a day or two, still trying to figure out why/how
Firefox 1.5 wouldn't work on it. That's the main problem with Linux, I
guess: you can't just download the latest app and expect it to run
just like the Windows versions do.
 
I still have to say, as I did the last time, that even at 1024x768, it's
always been my impression that the video card dominates the measured
performance of FPS in game-play benchmarks. There's something else besides
the CPU at play here.

I think supposedly at lower resolutions, modern cards aren't really
ruffled by the load and the limit is on the CPU. Keep in mind the
7900GTX used in the systems is almost as capable as a PAIR of 7800GTs
in SLI just barely a year ago. If you look at the higher-resolution
tests where they turned on all the AA/AF, you see they all plateau at
around the same point, so the CPU should be primarily responsible for
the difference at the lower res.
 
krw said:
Ok, but the FX62 is the slowest sort of AMD's fastest speed grade.
;-)


Until all NDAs are off, they ship to anyone who wants one (price?),
and there are *INDEPENDENT* results, who cares what Intel
bench-marketeering says?

Ah, anyone who is trying to determine whether it would be wise to buy a
processor now, or wait another few months??
 
George Macdonald said:
It depends on what you're measuring the latency of. I don't have Sciencemark
-- does it cost $$? :-) -- and have never bothered to look at its details,
but it is not possible that this is DRAM latency being reported.
Certainly when using the exact same DIMMs connected to the exact(?) same
chipset at the exact same clockrate, with the exact same FSB, one should
not expect latency to improve by ~38%.

Why did they use 2GB of memory with the FX-62?... to slow it down?

I still have to say, as I did the last time, that even at 1024x768, it's
always been my impression that the video card dominates the measured
performance of FPS in game-play benchmarks. There's something else besides
the CPU at play here.

Rubbish. There are plenty of benchmark tables showing that when games are
run at a low resolution with a high-end graphics card, then CPU performance
largely determines the outcome.
 
The little lost angel said:
I think supposedly at lower resolutions, modern cards aren't really
ruffled by the load and the limit is on the CPU. Keep in mind the
7900GTX used in the systems is almost as capable as a PAIR of 7800GTs
in SLI just barely a year ago. If you look at the higher-resolution
tests where they turned on all the AA/AF, you see they all plateau at
around the same point, so the CPU should be primarily responsible for
the difference at the lower res.

People still use 1024x768?? <shudder>
 
krw said:
Horseshit. Anyone delaying a purchase because of this FUD is as
stupid as they come. You?

Shows how much you know. There have already been a lot of independent
benchmarks confirming the results that Intel originally showed. Hence,
anyone who went ahead and purchased a system thinking that the original
benchmarks were FUD has shown how stupid they in fact are.

 
krw said:
Shows how much you know. There have already been a lot of independent
benchmarks confirming the results that Intel originally showed.

No, there have been *NONE*. The NDAs are still in place.
Hence, anyone who went ahead and purchased a system thinking that the original
benchmarks were FUD has shown how stupid they in fact are.

"Shows how much you know."
 
krw said:
No, there have been *NONE*. The NDAs are still in place.

Garbage. Check out:

http://forumz.tomshardware.com/hard...Data-Core-Duo-Core-Extreme-ftopict183765.html

For your information, Intel needs to release the chips to various
manufacturers, and these chips then sometimes get into the hands of other
people (e.g., some Conroes have even been sold on eBay!). People who get the
chips through these channels aren't bound by an NDA, and also quite often
don't even have a website that Intel could penalize.
 
The little lost angel said:
Although the same page claims that it's an ungraceful way
to exit and most wm/desktops provide a better way... the
problem is finding it I guess! :P

Yes, that is usually the problem. But CAB (Ctrl-Alt-Backspace) works on all x86 X.
How's that to be done? Sorry, I'm practically a noob with
C and last touched any kind of assembly before I first came
into the NG :/

I'd have to re-write it to use the RDTSC instruction.
Not difficult, but I'm not sure what would be gained.
Maybe I'll try.

-- Robert
 
The little lost angel said:
Since Ubuntu is based on Debian, which is supposed to be
good in server environments, I picked it to reduce the
additional confusion I might get from trying to learn two
different OSes at the same time.

There is some confusion between distros, but it isn't horrible.
Deb/Ub, RH, and most of the rest have SysV style inits and
/etc/rc.d startup scripts. It is rumoured that this mess'o'symlinks
is easier to maintain (at least across a cluster), but I cannot
abide the complexity. So I still run Slackware after 10+ years.
Only had it for a day or two, still trying to figure out
why/how Firefox 1.5 wouldn't work on it. That's the main
problem with Linux, I guess: you can't just download the
latest app and expect it to run just like the Windows versions do.

It probably is missing dependencies [libraries]. One of the main
reasons to use the OS supplier's packaging system. Sometimes these
can be downright horrible. I wanted to compile and install the
latest GNUMERIC spreadsheet. I needed to dl & build 10 libs to
turn my KDE system GNOME enough for GNUMERIC to compile and run.
This is a little extreme, but you get the idea.

-- Robert
 
Jan Panteltje said:
The system, in Linux, is non-idle very often; you will
note a zillion modules and daemons running (`ps avx`, `lsmod`).
So the chance that some process runs during one of the test
runs is not zero, making the results different.

Not on my system. I know everything loaded & running. I put
it there! Of course, KDE starts up ghosts, and I probably
should change to something lighter, like fvwm or twm.
Not on my system! Things will get VERY slow if you type
`nice -n -19 yes` in an xterm.

without `yes`:
hdparm -T /dev/hdc
/dev/hdc: Timing cached reads: 740 MB in 2.00 seconds = 369.98 MB/sec

with `yes`:
hdparm -T /dev/hdc
/dev/hdc: Timing cached reads: 416 MB in 2.05 seconds = 203.11 MB/sec

This is amazingly fast with an IO hog like -19 `yes`.
`hdparm` is still running at half speed! By rights,
it ought only to be running at 5%.

-- Robert
 
I think supposedly at lower resolutions, modern cards aren't really
ruffled by the load and the limit is on the CPU. Keep in mind the
7900GTX used in the systems is almost as capable as a PAIR of 7800GTs
in SLI just barely a year ago. If you look at the higher-resolution
tests where they turned on all the AA/AF, you see they all plateau at
around the same point, so the CPU should be primarily responsible for
the difference at the lower res.

So you're calling 1024x768 a low resolution for game benchmarking? My
point is that the benchmarks at 640x480 have always been where CPU
differences were most marked - even 1024x768 was too high.
 
Rubbish. There are plenty of benchmark tables showing that when games are
run at a low resolution with a high-end graphics card, then CPU performance
largely determines the outcome.

Exactly my point - 1024x768 is not that low a resolution for game
benchmarks... otherwise the benchmarks would not be run at 640x480 to show
the difference.
 