Intel backs away from FB-DIMMs

  • Thread starter: YKhan
I don't think so. It uses a custom motherboard that will only be
produced in low volume. Low volume = high cost.

We'll see the pricing and the volume a few months after it's released.
You may or may not be right on this.
Secondly, if AMD prices their 4x4 CPUs higher than Opterons, nobody
will buy them. If they price their 4x4 CPUs lower than Opteron, then
nobody would bother buying Opteron (unless they feel they really need
ECC).

Even less will they bother with 1S Intel servers if they can get
2S for not much more.
Uh huh. Sure it is. Have you considered the fact that AMD is going to
have to compete with all of Intel's old P4-based systems, which are
priced EXTREMELY low?

You mean liquidation of obsolete SKUs? There would not be too many
takers. Computers are almost perishables these days, and not too many
would try to save a buck buying something that has already started to rot
;-)
Intel's Woodcrest, meanwhile, can command higher
premiums because it offers higher performance.
Much higher premiums for not all that much extra performance.
You really need to think these things through better. 4x4 is not going
to threaten anyone, except AMD's existing product lines. Having 4
processors is useless for gaming; there's really no way around it.

Why is it limited to gaming only? Besides, gamers dig "cool": they pay
exorbitant prices for paint jobs and cosmetic parts that would not
give 0.01 fps extra, and pay extra for brightly colored
motherboards etc. that have the same specs as regularly priced and
colored parts. It's a "mine is bigger than yours" thing, and two A64s are
surely "bigger" (not necessarily faster at single-threaded games, but
"bigger") than one C2D.
It's going to sell in ridiculously small numbers.
Count the numbers when they are in the accounting ledger, not when
they are in your crystal ball.
Let me know when that happens. I won't be holding my breath.


You should try to use 'reality' glasses, not 'virtual unreality' ones.
The performance gap between Woodcrest and socket F opterons is far too
large for 4x4 to make a difference. Even worse, if the workload
requires a lot of memory, 4x4 will basically be stuck going to disk.

Most of these systems will start as 32-bit, and will not be able to
use more than 4GB anyway. Even in 64-bit mode there's no immediate
Woodcrest advantage. A single K8 can handle up to 8GB of unbuffered RAM,
16GB if both sockets have their own RAM banks. So far I could not make my
old rig run out of its 1GB and swap to HDD, even with 2
instances of Visual Studio .NET and 1 VB6 open and a movie being
encoded in the background, not counting the usual few IE/Firefox
windows, Word, email, etc. (my bad habit is to keep at least 2 rows of
apps on the task bar).
Opteron does not 'slightly trail' Woodcrest.
Depends on the benchmark. At least not as badly as Netbust trails K8.
Suggest away, and I'll be ignoring you. I have yet to hear a
compelling argument for what market 4x4 addresses, how it will do so,
and how AMD will avoid shooting themselves in the foot.

DK

The market will decide if there are any arguments and how compelling
they are.

NNN
 
If buying decisions were made on performance only, Conroe would take
We'll see the pricing and the volume a few months after it's released.
You may or may not be right on this.

I'm sorry, but there is no way 4x4 is going to be cheap. It's a
premium gaming solution aimed at gamers with no sense. It is designed
for SLI, which is already a rather niche market. What percentage of
desktop systems use SLI?

The whole product is aimed at the high end of a segment, which means
very expensive. The limiting factor may in fact be the price of
comparable Opteron systems.

Does anyone know if 4x4 boards will be sold through Newegg as bare
parts? Or will they just be offered through Alienware, Falcon NW, etc.?
Even less will they bother with 1S Intel servers if they can get
2S for not much more.

That's not possible; try to come up with realistic scenarios. 4x4 is
going to be a premium product.
You mean liquidation of obsolete SKUs? There would not be too many
takers. Computers are almost perishables these days, and not too many
would try to save a buck buying something that has already started to rot
;-)

That's right, it's a firesale on old Intel products. Of course,
according to AMD that firesale has caused significant price erosion in
the market.
Much higher premiums for not all that much extra performance.

Err? Look at the SPECjbb2005 scores. That is not a small gap.
Why is it limited to gaming only? Besides, gamers dig "cool": they pay
exorbitant prices for paint jobs and cosmetic parts that would not
give 0.01 fps extra, and pay extra for brightly colored
motherboards etc. that have the same specs as regularly priced and
colored parts. It's a "mine is bigger than yours" thing, and two A64s are
surely "bigger" (not necessarily faster at single-threaded games, but
"bigger") than one C2D.

I don't think so. Gamers will want what has the highest performance
for gaming. 4x4 will have lower gaming performance than C2D.
Count the numbers when they are in the accounting ledger, not when
they are in your crystal ball.

You are of course entitled to your own opinion.
Most of these systems will start as 32-bit, and will not be able to
use more than 4GB anyway. Even in 64-bit mode there's no immediate
Woodcrest advantage. A single K8 can handle up to 8GB of unbuffered RAM,
16GB if both sockets have their own RAM banks.

Woodcrest can go up to 64GB for one processor.
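
For what it's worth, the arithmetic behind both ceilings can be sketched
out. The DIMM sizes and slot counts below are period-typical assumptions
on my part, not figures from this thread:

# Rough memory-capacity arithmetic for the ceilings quoted above.
# Assumed: 2GB unbuffered DDR2 DIMMs, 4 slots per K8 socket;
# 4GB FB-DIMMs, 16 slots on a Bensley-class Woodcrest board.
k8_one_socket_gb = 4 * 2      # 8GB with one socket's banks populated
k8_two_socket_gb = 2 * 4 * 2  # 16GB if both sockets have own banks
woodcrest_gb = 16 * 4         # 64GB of FB-DIMM behind the MCH
print(k8_one_socket_gb, k8_two_socket_gb, woodcrest_gb)  # 8 16 64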
So far I could not make my
old rig run out of its 1GB and swap to HDD, even with 2
instances of Visual Studio .NET and 1 VB6 open and a movie being
encoded in the background, not counting the usual few IE/Firefox
windows, Word, email, etc. (my bad habit is to keep at least 2 rows of
apps on the task bar).

Depends on the benchmark. At least not as badly as Netbust trails K8.

Show me several (3-5) server benchmarks where Opteron is within 5% of
(or exceeds) Woodcrest.
Woodcrest JBB2005: 111K
Opteron JBB2005: 68K

http://www.spec.org/osg/jbb2005/results/res2006q3/jbb2005-20060623-00145.txt
http://www.spec.org/osg/jbb2005/results/res2006q3/jbb2005-20060815-00183.txt
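
For scale, a quick sanity check on the size of that gap, using nothing
but the two published scores above:

# Gap between the two SPECjbb2005 scores cited above.
woodcrest_bops = 111_000
opteron_bops = 68_000
gap = (woodcrest_bops - opteron_bops) / opteron_bops
print(f"Woodcrest leads by {gap:.0%}")  # roughly 63%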

The P4-based Xeons trail Opteron, but only very
slightly on SPECjbb2005. Even if the P4 is the low-end/value model,
which it is, it will still depress the price of Opteron systems.

DK
 
I'm sorry, but there is no way 4x4 is going to be cheap. It's a
premium gaming solution aimed at gamers with no sense. It is designed
for SLI, which is already a rather niche market. What percentage of
desktop systems use SLI?

The numerical "percentage" is irrelevant. How can you so totally miss
this? The corporate desktop may be large in numbers; as far as margins go,
it's a bit of a wash. Why do you think Intel is trying to emulate SLI and
Crossfire? Why did Dell buy Alienware?

Your contempt for gamers with $$ says more about you than the gamers. In a
way I admire them and credit them for driving the industry forward in many
ways - their "overclock the hell out of it" activity has spawned a whole
new market and woken up all the old industry stodgies. As a result, we
have components of superior quality, with better (signalling) margins, and
building & running a stable baseline system is now a much easier
proposition.

Oh, and SLI is also a workstation market - CAD people are now saying
"hey... nice"! Try to find a full-featured non-economy mbrd without the
SLI feature - they barely exist any longer.... SLI is now mainstream desktop,
whether you use the dual video or not.
The whole product is aimed at the high end of a segment, which means
very expensive. The limiting factor may in fact be the price of
comparable Opteron systems.

Does anyone know if 4x4 boards will be sold through Newegg as bare
parts? Or will they just be offered through Alienware, Falcon NW, etc.?

Why would they *not* be sold through NewEgg et al.?
That's not possible; try to come up with realistic scenarios. 4x4 is
going to be a premium product.

Uhh, you *think* it's not possible. As you said below, you are entitled to
"think" that.
That's right, it's a firesale on old Intel products. Of course,
according to AMD that firesale has caused significant price erosion in
the market.

It's hurt Intel too... possibly more than it's hurt AMD. Those old P4s are
for the kiddy market... the Wally World crowd... even more so now that
nobody is going to want a "5B" model P4.
You are of course entitled to your own opinion.

As are you... and me. Personally I have some doubts about 4x4 but none of
us know exactly what AMD is up to. Without that knowledge, your outright
rejection is more a sign of your apparently rabid allegiance than anything
else.
 
I don't think so. Gamers will want what has the highest performance
for gaming. 4x4 will have lower gaming performance than C2D.

At high resolutions (and that's how real gamers play - not at
800x600 anymore), any marginally good CPU that is capable of supplying
the GPU(s) with the data they need will get about the same frame rate
as a top-of-the-line CPU, give or take a fraction of an fps about as
large as rounding error. Games are video-bound. Most C2Ds will go
with Intel chipsets that are compatible with neither SLI nor Xfire.
Unless NVDA (possible, but don't hold your breath) and ATI (snowball in
hell) modify their drivers, C2Ds with a single graphics board will lose
out to even a midrange A64 with SLI/Xfire graphics, and do so really
badly. NVDA is, and for the foreseeable future will be, a marginal
player in Intel chipsets (marginalized by no one else but INTC), and
if I am not mistaken it's Intel's intent to lock ATI out of the C2D
market completely. Oh, yeah, don't forget also VIA and SIS ;-))))))) Any
hope that an i740-II will be comparable to NVDA's and ATI's best and
fastest? Make no mistake - 4x4, NVDA- or ATI-based, will have 2
PCIEx16 slots - that's even cheaper to implement than 2 CPU sockets.
And 4x4, besides being fast, will also be cool (not in terms of low
heat dissipation ;) Wouldn't it be cool for somebody to say "I'm
gaming at full speed and have a virus scan running in the
background"? Why would anyone do that? Just because he can, and
others can not.
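
To put the video-bound argument in the simplest possible terms, here is
a toy frame-time model (all the millisecond numbers are invented for
illustration): in a pipelined renderer a frame ships at the rate of the
slower of the CPU and GPU stages, so past a point a faster CPU buys
nothing.

# Toy model: throughput is set by the slower of the two stages.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=5.0, gpu_ms=16.0))  # ~62.5 fps, GPU-bound
print(fps(cpu_ms=3.0, gpu_ms=16.0))  # still ~62.5 fps despite a faster CPU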

NNN
 
George,

In an effort to steer this conversation in a productive direction, I'd
like to simply stop talking about any specific products.
The numerical "percentage" is irrelevant. How can you so totally miss
this?

No, volume is essential for semiconductors. Aggregate costs generally
look like:

C(q) = F + qV(q)

V(q) is your variable cost and should be relatively low; F is almost
always very high. For an MPU, F might be in the range of $50-500M depending
on whether it is brand new from the ground up, a shrink, a compaction, a
modification of an existing product, etc.

The key element here is that in order to profit:

ASP >= F/q + V(q)

When you have low volume products you cannot amortize your fixed cost.
This is why HP believed PA-RISC to be economically unsustainable in the
long run.

Low-volume markets, like gamers, require high prices, because otherwise
they wouldn't be profitable.
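
To make the amortization point concrete, a minimal sketch; the dollar
figures below are assumptions for illustration, not real product data:

# Break-even ASP from the inequality above: ASP >= F/q + V(q).
def min_asp(F, q, V):
    return F / q + V

F = 100e6  # assumed $100M development cost (mid-range of $50-500M)
V = 40.0   # assumed $40 variable (wafer/test/package) cost per chip

print(min_asp(F, 50_000_000, V))  # 50M units: ~$42, mass market works
print(min_asp(F, 500_000, V))     # 500K units: ~$240, niche must price high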
The corporate desktop may be large in numbers; as far as margins go,
it's a bit of a wash.

Yes but the corporate desktop market is:
1. High volume (i.e. amortize F)
2. More lucrative than the home market
3. Attached with service contracts, where the real $$$$ are
Why do you think Intel is trying to emulate SLI and
Crossfire? Why did Dell buy Alienware?

Intel is not emulating SLI and Xfire. Dell bought Alienware because
the ROI was higher than investing in their existing business.
Your contempt for gamers with $$ says more about you than the gamers. In a
way I admire them and credit them for driving the industry forward in many
ways - their "overclock the hell out of it" activity has spawned a whole
new market and woken up all the old industry stodgies. As a result, we
have components of superior quality, with better (signalling) margins, and
building & running a stable baseline system is now a much easier
proposition.

I appreciate the impact of gamers on the industry, but I think you give
them too much credit.
Oh, and SLI is also a workstation market - CAD people are now saying
"hey... nice"! Try to find a full-featured non-economy mbrd without the
SLI feature - they barely exist any longer.... SLI is now mainstream desktop,
whether you use the dual video or not.

The workstation market uses different GPUs, with vastly higher price
points, ECC and server chips.

DK
 
George,

In an effort to steer this conversation in a productive direction, I'd
like to simply stop talking about any specific products.


No, volume is essential for semiconductors. Aggregate costs generally
look like:

C(q) = F + qV(q)

V(q) is your variable cost and should be relatively low; F is almost
always very high. For an MPU, F might be in the range of $50-500M depending
on whether it is brand new from the ground up, a shrink, a compaction, a
modification of an existing product, etc.

The key element here is that in order to profit:

ASP >= F/q + V(q)

When you have low volume products you cannot amortize your fixed cost.
This is why HP believed PA-RISC to be economically unsustainable in the
long run.

Low-volume markets, like gamers, require high prices, because otherwise
they wouldn't be profitable.

All fine and dandy, but irrelevant. 4x4 will use plain-vanilla AM2
chips, A64 or Opteron 1xxx. No extra CPU and/or chipset development
and/or production cost involved. Is it too much of an expense to add
an extra cHT link and another socket to a motherboard? And (for the
highest end of it) extra RAM banks? Modify the BIOS to account for a
2nd CPU?
Yes but the corporate desktop market is:
1. High volume (i.e. amortize F)
2. More lucrative than the home market
3. Attached with service contracts, where the real $$$$ are


Intel is not emulating SLI and Xfire. Dell bought Alienware because
the ROI was higher than investing in their existing business.


I appreciate the impact of gamers on the industry, but I think you give
them too much credit.


The workstation market uses different GPUs,
But the same PCIEx16 slots to stick the Quadro/FireGL cards in
with vastly higher price
points, ECC and server chips.
ECC is not limited to buffered RAM; you can get it unbuffered as well.
Server chips were used only because there was no consumer-level
chip (since the P3 was retired) capable of 2-socket operation. 4x4
solves this. Besides, the Opteron 1xxx *is* a server chip, if this formal
differentiation is necessary for any workstation maker for marketing
or whatever other reason.

Rgds,
NNN
 
George,

In an effort to steer this conversation in a productive direction, I'd
like to simply stop talking about any specific products.

Yeah, it didn't take long to find some benchmarks where Woodcrest is only
"slightly" faster and even gets pipped.:-)
No, volume is essential for semiconductors. Aggregate costs generally
look like:

C(q) = F + qV(q)

V(q) is your variable cost and should be relatively low; F is almost
always very high. For an MPU, F might be in the range of $50-500M depending
on whether it is brand new from the ground up, a shrink, a compaction, a
modification of an existing product, etc.

The key element here is that in order to profit:

ASP >= F/q + V(q)

When you have low volume products you cannot amortize your fixed cost.
This is why HP believed PA-RISC to be economically unsustainable in the
long run.

Low-volume markets, like gamers, require high prices, because otherwise
they wouldn't be profitable.

No, simplistic equations don't cut it, I'm afraid. High volume is only
bread 'n' butter and often sold at a slight loss; the prestige and profit
are in high ASP... as well illustrated by AMD's K6 era.
Yes but the corporate desktop market is:
1. High volume (i.e. amortize F)
2. More lucrative than the home market
3. Attached with service contracts, where the real $$$$ are

But you have to make *some* profit to amortize.
Intel is not emulating SLI and Xfire. Dell bought Alienware because
the ROI was higher than investing in their existing business.

So what is Intel's err, "Bifurcated PCI Express Graphics"? For Dell,
Alienware is the "low volume" high ASP/margin with credibility.
I appreciate the impact of gamers on the industry, but I think you give
them too much credit.


The workstation market uses different GPUs, with vastly higher price
points, ECC and server chips.

I'm afraid your i-sandbox is broken: ECC is no sweat - I know, I just did a
small Athlon64 server: a few $$ extra for unbuffered ECC. The CAD GPUs are
close enough to the high-end game graphics GPUs that they piggy-back on the
same base technology; if the latter did not exist as a product, neither
would the former... and there are many CAD professionals who use the
mainstream "consumer" graphics because they work fine for many CAD apps.
You don't seem to appreciate how much AMD, nVidia & ATi have blurred the
high-power desktop & workstation market - server "chips" are not really
necessary.
 
Looks like now that Intel has dropped support for FB-DIMM, AMD is no
longer going to bother with it either.

The Tech Report - AMD to forgo FB-DIMM adoption?
http://techreport.com/onearticle.x/10791

Yeah, but their "evidence" is all Inquirer articles. In fact the only
references to microbuffer I can find on a quick search point back to the
Inquirer. Discussions on some of the web fora are confused: some say
microbuffer is a Rambus technology; others say it's Intel's; some of the
same discussions also say that Intel is getting set to buy out Rambus.
Whadya wanna believe? :-)
 
George said:
Yeah, it didn't take long to find some benchmarks where Woodcrest is only
"slightly" faster and even gets pipped.:-)

OK, bring on those benchmarks. I haven't seen any in this thread. I
simply am tired of AMD versus Intel debates. It's kind of sad really.
No, simplistic equations don't cut it, I'm afraid. High volume is only
bread 'n' butter and often sold at a slight loss; the prestige and profit
are in high ASP... as well illustrated by AMD's K6 era.

Um, no. Look at the high-prestige and high-ASP products throughout
history:

VAX
Minicomputers
Proprietary RISC systems
Most pre-RISC designs

What do they all have in common? They're dying! Here's a hint: high
margin and low volume don't work when you have to compete with Intel.
Just ask DEC, HP, Sun, SGI or any number of other companies.
But you have to make *some* profit to amortize.

That depends. IBM basically gives away hardware to get consulting.
Profits on corporate desktops are surely higher than those on consumer
desktops.
So what is Intel's err, "Bifurcated PCI Express Graphics"? For Dell,
Alienware is the "low volume" high ASP/margin with credibility.

You mean the Bad Axe systems with 2 PCIe graphics ports?
I'm afraid your i-sandbox is broken: ECC is no sweat - I know, I just did a
small Athlon64 server: a few $$ extra for unbuffered ECC. The CAD GPUs are
close enough to the high-end game graphics GPUs that they piggy-back on the
same base technology

So? Totally different drivers and optimizations. Why don't you try
and take two of these 'identical' GPUs, and then benchmark them against
each other. You might be surprised.
if the latter did not exist as a product, neither
would the former

They certainly did for a long time. It wasn't until very recently
that the last of the professional graphics vendors gave up to ATI and
NV.
... and there are many CAD professionals who use the
mainstream "consumer" graphics because they work fine for many CAD apps.

Good for them.
You don't seem to appreciate how much AMD, nVidia & ATi have blurred the
high-power desktop & workstation market - server "chips" are not really
necessary.

Good luck getting expensive workstation applications qualified on a
GeForce. Why don't you talk with a Schlumberger engineer about that?

DK
 
So? Totally different drivers and optimizations. Why don't you try
and take two of these 'identical' GPUs, and then benchmark them against
each other. You might be surprised.

I think it's generally true that the high-end CAD GPUs trail the gaming
GPUs by at least a little while in terms of release. So you need to
compare what's currently available in each segment, and then the results
are not as clear as to which is faster.
They certainly did for a long time. It wasn't until very recently
that the last of the professional graphics vendors gave up to ATI and
NV.


Good for them.

It is very good for them - equal or higher performance at a much lower
cost.
Good luck getting expensive workstation applications qualified on a
GeForce. Why don't you talk with a Schlumberger engineer about that?

DK

Not all expensive workstation applications are only qualified for the
extreme top-end GPUs. In the 3D world there are a few that are qualified
for GeForce. Lightwave is one of them. Any arguments about whether
it is professional-level should be directed to the studios using it
for feature films, television and commercials - sounds pro to me.

Ryan
 
David Kanter said:
OK, bring on those benchmarks. I haven't seen any in this thread. I
simply am tired of AMD versus Intel debates. It's kind of sad really.


Um, no. Look at the high-prestige and high-ASP products throughout
history:

VAX
Minicomputers
Proprietary RISC systems
Most pre-RISC designs

What do they all have in common? They're dying! Here's a hint: high
margin and low volume don't work when you have to compete with Intel.
Just ask DEC, HP, Sun, SGI or any number of other companies.

The rumors of the death of PowerPC are greatly exaggerated.
Likewise, Sun is a pretty lively corpse.

Now speaking of low volume, how 'bout that Itanium?
 
Del said:
The rumors of the death of PowerPC are greatly exaggerated.
Likewise, Sun is a pretty lively corpse.

I don't recall noting in there that PPC is dead or dying. It
certainly is changing, as is zSeries. IBM uses both as vehicles to
sell software services and support.

Sun can only sell services and support, and it is unclear to me whether
SPARC will live. However, it is clear that SPARC experienced a
precipitous decline and may continue to decline.
Now speaking of low volume, how 'bout that Itanium?

I'd look at my full quote. IPF does face challenges due to volume
issues, but it doesn't compete head to head with Intel/x86 in the same
way that SPARC and x86 do. More specifically, Intel's strategy is/was
to make a corpse out of Sun's proprietary hardware, ditto for IBM.
Their strategy is not to kill IPF (so far as we know).

DK
 
George said:
Yeah, but their "evidence" is all Inquirer articles. In fact the only
references to microbuffer I can find on a quick search point back to the
Inquirer. Discussions on some of the web fora are confused: some say
microbuffer is a Rambus technology; others say it's Intel's; some of the
same discussions also say that Intel is getting set to buy out Rambus.
Whadya wanna believe? :-)

Intel buying out Rambus? Shades of 1999, Batman!

Yousuf Khan
 
David said:
I'd look at my full quote. IPF does face challenges due to volume
issues, but it doesn't compete head to head with Intel/x86 in the same
way that SPARC and x86 do. More specifically, Intel's strategy is/was
to make a corpse out of Sun's proprietary hardware, ditto for IBM.
Their strategy is not to kill IPF (so far as we know).

So how exactly do SPARC and x86 compete against each other in a way that
Itanium doesn't with x86? Is there a Windows for SPARC? Is there a
Windows for Itanium?

Yousuf Khan
 
OK, bring on those benchmarks. I haven't seen any in this thread. I
simply am tired of AMD versus Intel debates. It's kind of sad really.

What *is* sad is that nobody can utter AMD around here without being
pounced on by you... and now you have the nerve to tell us you're tired of
the debate.....Ô_õ
Um, no. Look at the high-prestige and high-ASP products throughout
history:

Uhh, we're talking about the Opterons & Xeons here as high ASP, with
Celerons & Semprons at the other end of the spectrum... the, umm, sawdust
of the processor world.
VAX
Minicomputers
Proprietary RISC systems
Most pre-RISC designs

What do they all have in common? They're dying! Here's a hint: high
margin and low volume don't work when you have to compete with Intel.
Just ask DEC, HP, Sun, SGI or any number of other companies.

What they all have in common is that they're not even close to a "Personal
Computer" and don't have microprocessor CPUs, apart from a couple of
bottom-of-range aberrations. VAX has been dead for years, and DEC hasn't
existed since they changed their name to Digital and got first screwed by
Intel and then purchased by, err, Carleton.
That depends. IBM basically gives away hardware to get consulting.
Profits on corporate desktops are surely higher than those on consumer
desktops.

IBM's strategy has nothing to do with the subject at hand; switching the
focus is not helping you here.
You mean the Bad Axe systems with 2 PCIe graphics ports?


So? Totally different drivers and optimizations. Why don't you try
and take two of these 'identical' GPUs, and then benchmark them against
each other. You might be surprised.

Contriving a quote of "identical" which I never used is worthy of the
Kentster... at his worst. nVidia has been coy about revealing specific
differences between Quadro and GeForce but they are both based on the same
chip with "feature" differences.

It seems like one of the differences is overlay planes, which is actually
an SGI legacy feature, which any software mfr can work around with zero
penalty. Another is number of clip regions and it appears that
Vista/DirectX 10 is going to require that more be added to GeForce
anyway... just another little scandalous "secret" of the video chip
industry.
They certainly did for a long time. It wasn't until very recently
that the last of the professional graphics vendors gave up to ATI and
NV.

We live in different times now - an annual roll-out of premium GPUs with
defeatured product following in the schedule. That the video chip mfrs get
away with adding $1000 to the price for "CAD-enabled" for such a meagre
difference in feature set is a disgrace. Oh and another is that they offer
no drivers for Windows Server... !unbelievable!... that you're supposed to
spend more for a crippled version of the GPU chip, which has server
drivers.
Good for them.

Yes it is - it stimulates the CAD industry all around and keeps the big
guys on their toes if some guy can cook up the next big CAD app in his den.
Good luck getting expensive workstation applications qualified on a
GeForce. Why don't you talk with a Schlumberger engineer about that?

I don't need to - I know some of the guys in the trenches. They are
happily doing advanced surfacing and other engineering & design tasks with
PCs with mostly consumer-grade GPUs. Between Quadro and GeForce, the
hardware differences are minimal: if you have a "design center" with many
different, possibly unanticipated apps, you may have to get Quadro to cater
for the oddballs; for a single shop, you buy software which does not pander
to the video chip industry. The usual laws will dictate which software
mfrs survive.
 
So how exactly do SPARC and x86 compete against each other in a way that
Itanium doesn't with x86? Is there a Windows for SPARC? Is there a
Windows for Itanium?

Why don't you *actually* read what I wrote? It should be rather clear.

DK
 
What *is* sad is that nobody can utter AMD around here without being
pounced on by you... and now you have the nerve to tell us you're tired of
the debate.....Ô_õ


Uhh, we're talking about the Opterons & Xeons here as high ASP, with
Celerons & Semprons at the other end of the spectrum... the, umm, sawdust
of the processor world.


What they all have in common is that they're not even close to a "Personal
Computer" and don't have microprocessor CPUs, apart from a couple of
bottom-of-range aberrations. VAX has been dead for years, and DEC hasn't
existed since they changed their name to Digital and got first screwed by
Intel and then purchased by, err, Carleton.


IBM's strategy has nothing to do with the subject at hand; switching the
focus is not helping you here.


Contriving a quote of "identical" which I never used is worthy of the
Kentster... at his worst. nVidia has been coy about revealing specific
differences between Quadro and GeForce but they are both based on the same
chip with "feature" differences.

It seems like one of the differences is overlay planes, which is actually
an SGI legacy feature, which any software mfr can work around with zero
penalty. Another is number of clip regions and it appears that
Vista/DirectX 10 is going to require that more be added to GeForce
anyway... just another little scandalous "secret" of the video chip
industry.


We live in different times now - an annual roll-out of premium GPUs with
defeatured product following in the schedule. That the video chip mfrs get
away with adding $1000 to the price for "CAD-enabled" for such a meagre
difference in feature set is a disgrace. Oh and another is that they offer
no drivers for Windows Server... !unbelievable!... that you're supposed to
spend more for a crippled version of the GPU chip, which has server
drivers.


Yes it is - it stimulates the CAD industry all around and keeps the big
guys on their toes if some guy can cook up the next big CAD app in his den.


I don't need to - I know some of the guys in the trenches. They are
happily doing advanced surfacing and other engineering & design tasks with
PCs with mostly consumer-grade GPUs. Between Quadro and GeForce, the
hardware differences are minimal: if you have a "design center" with many
different, possibly unanticipated apps, you may have to get Quadro to cater
for the oddballs; for a single shop, you buy software which does not pander
to the video chip industry. The usual laws will dictate which software
mfrs survive.

IIRC, in the AGP era some GF cards could be easily modded, sometimes as
easily as a BIOS flash, to think of themselves as Quadros and behave
like Quadros. Even a GF-MX would mod into an entry-level "professional"
card, whatever its designation. All the advanced driver features, usually
grayed out or just not present, would all of a sudden become available
- if we believe the posts at the links provided by the Inq's likes.
Not sure though if this still applies, especially to multi-GPU setups.

NNN
 