Mojo said:
Hi guys
Now that I'm looking at buying nForce4 SLI, let's go back in time to my
first big 3D card buy... the 3dfx Voodoo2. I bought two to use for SLI,
and they were not cheap back then. Those Voodoos rocked: the best and
fastest on the market. How does a company go from being number one to out
of business in just a few years?
The rise and fall of empires. Makes for good bathroom reading... It's
happening on a macroscale. Check yer six...
Anyone remember the history of 3dfx and what happened to them?
Several classic FUs:
1) Voodoo Rush (Harvey Fong, then at Hercules, had a whole skivvie-load of
hatemail for that one...)
2) Banshee, another top seller (NOT!)
3) Being late with V5, positioning it against the GF2
4) Top-heavy management
5) Buying STB and using them as the sole point of prod/distro, effectively
cutting every other board partner out of the picture; those partners went
to Nvidia, and the rest is history. Of course, it resulted in the death of
one of the better OEMs out there, while 3dfx was busy lining up both
barrels on both big toes and blowing them into pulp as a final salute. STB
was the prom queen in a world of hinge-toothed beyatches, prolly the only
one without pigeon-toes, a fishy smell, and a big nose-pimple like Diamond
or Creative (or *anyone* who had S3 in the sack)... note that this is not
to imply STB was the prettiest bowlegged ginch at the meat market, just
the least skanky, FWIW... then they both showed up late to the dance
underdressed, with V5 and no makeup...
5.5) The proprietary API. Now don't get me wrong, Glide was (is) easy to
use; just about any decent codemonkey could whip something up in short
order (see the sketch after this list), but it had its obvious
limitations, the hardware being foremost. OpenGL and DirectX are the two
biggies, and it's been that way since DX7, when M$ started getting serious
about their old RenderMorphics API and stealing ideas from OpenGL through
Fahrenheit, the same way M$ always does: offer cooperation, then co-opt
the ideas and fold them into your own IP so it looks like the originator
has no leg to stand on.

They got this business model from the Japanese, I'd imagine, who practice
it on their less-loved rivals. With Microsoft, who has no such restraint,
the less-loved rival is *everyone else*, including you. It is wise to use
the long-handled spoon when supping with devils and demons of their ilk.
But enough about who we really love to hate, and vice versa. This is about
Glide being predestined to failure.
Which, incidentally, it was, in no small part due to a certain
aforementioned competitor, but also because it was essentially a
convenient (and, at the time, effective) way to mask noncompliance with
either other API, whose future was and is set in stone. Resistance really
was futile. It was a foregone conclusion that Glide would be killed off
someday; expecting M$ not to advance their agenda of global proliferation
is like standing in front of a moving train.

3dfx pulled out one rabbit and overworked it into starvation, so the poor
bugger was a bonerack by the time it got pulled out of Hats #4, 5, and 6,
and it couldn't drag that creaky wagon uphill for another season. So they
got walked over by Nvidia, who simply had a faster product, a more
complete OpenGL feature set (good Linux support too...), etc. They were
off their game, so they skinned Mr. Bunny, sold the meat, lined their
pockets with fur, and gave Nvidia a much-needed lesson in display
filtering quality, among other things. Then ATI finally pulled their thumb
out in time to shovel ashes on 3dfx's grave and bite Nvidia in the ass,
lest they sit around too long.

And it's been back-and-forth ever since (except with OpenGL and Linux,
where I have yet to see ATI reach Nvidia's level of support and challenge
their dominance), complete with "Star-Bellied Sneetches" fanboiism,
"leaked" memos, dirty driver pocket pool, payola to da Futuremarket (the
Standards & Poor's of the graphics world, with all that implies... can you
say "protection racket"?), the whole shebang. Just another day of business
as usual.
6) Insufficient clock speed: Feature limitations could have been forgiven
(no EMBM, no trilinear, no T&L, etc.) if they had managed to get 200+ MHz
yields. The "w00t factor" might have pulled them out of the fire even if
they were a little late to the party. V5 @ 200+ MHz in its day? W00t! Any
questions?
Bulky design: "Scalable Architecture" my left one; as if anyone but
someone with four or more of 'em on their pro-level card would care. It's
all better off on one chip, within practical limitations.
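Since I invoked the "any decent codemonkey" line about Glide above, here's
roughly what the Glide 2.x boilerplate looked like, reconstructed from
memory, so take the exact signatures and constants with a grain of salt;
this is a sketch, not gospel. The point is what's *missing*: no
extension-wrangling, no pixel-format negotiation, no T&L setup, because
the hardware did none of that. You open the board and throw screen-space
triangles at it.

/* Minimal Glide 2.x triangle -- a sketch from memory; field names and
   signatures may differ slightly between Glide releases. */
#include <glide.h>

int main(void)
{
    GrVertex a, b, c;

    grGlideInit();                /* init the library              */
    grSstSelect(0);               /* pick the first Voodoo board   */
    grSstWinOpen(0, GR_RESOLUTION_640x480, GR_REFRESH_60Hz,
                 GR_COLORFORMAT_ARGB, GR_ORIGIN_UPPER_LEFT, 2, 1);

    /* Screen-space vertices: Glide rasterizes, period. Transform and
       lighting are your problem (hence "no T&L" in item 6). */
    a.x = 320; a.y = 100; a.r = 255; a.g = 0;   a.b = 0;
    b.x = 500; b.y = 380; b.r = 0;   b.g = 255; b.b = 0;
    c.x = 140; c.y = 380; c.r = 0;   c.g = 0;   c.b = 255;

    /* (Depending on default state you may also need a grColorCombine()
       call to get iterated-RGB Gouraud shading.) */
    grBufferClear(0, 0, GR_WDEPTHVALUE_FARTHEST);
    grDrawTriangle(&a, &b, &c);
    grBufferSwap(1);              /* flip on next vertical retrace */

    grGlideShutdown();
    return 0;
}

Easy, yes, but every line of that is welded to 3dfx silicon, which is
exactly why it was never going to outrun OpenGL and DirectX.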
There's still driver development going on for the V5...
While we're at it, let's pick another paperweight out of the
sack...Rendition! If they'd ever got the silicon right, the drivers might
have been a little better, methinks...
Now I'se gots to go start helpin' fixin' some vittles for this here...uhh...
"extended nuclear family"...happy turkey-day, y'all...