RDRam

  • Thread starter: Diane
Scott Alfter <salfter@salfter said:
It's a 2MB ISA SVGA card based on the Trident TVGA8900D. The eight metal
boxes on the left are the card's memory. They're about a quarter-inch tall
and have 23 pins each in a 5x5 grid pattern (with two pins missing,
presumably for keying). Is this some sort of standard memory technology in
an oddball package, or is it something truly weird?

Are the cubes stamped IBM by any chance? The last time I saw memory
packaged like that, it was on memory expansion boards for the IBM PS/2
model 70 and 80 computers.

Here: found a pic. Look at the memory cards on the left of the
computer, sandwiched between the power supply and the left hand hard
disk:

http://john.ccac.rwth-aachen.de:8000/alf/ps2_80311/ps2_80311_2_full.jpeg
 
On Wed, 15 Dec 2004 13:20:35 +0200, "cquirke (MVP Win9x)"
Unless they foresee the GPU as generating system input in some way?

red herring, in any case: any PCI Express link has pairs running in both
directions.
Yep. I only understand the last bit about AGP clock; also thinking
that whenever they up the data rate, they have to drop voltage to stop
the wires frying, and I'm wondering at what point VR will be too
granular to maintain voltage consistency.

So does PCI Ex solve this by adding more physical wires? That's
interesting if so, given the original "parallel for data speed"
approach that swung to the "serial to avoid cross-talk and reduce pin
count" phase we are currently enjoying with S-ATA and USB.

Other than the Big Fat Hose implementation (16 bits), PCI Express uses fewer
wires than PCI or PCI-X Mode 1.

A "link" (ie: a connection between endpoint devices) consists of low-voltage
unidirectional differential pairs running what is essentially 2 to 2.5 gigabit
full-duplex network layers. The minimum link implementation uses two wire
pairs for a one-bit, full-duplex link, for 2-2.5Gb DTR (uncooked, each way).
Otoh, BFHs consist of 32 pairs - quickly approaching 64-bit PCI/PCI-X
implementations wrt wire & pin count, but boasting up to 40Gb DTR
(~ 5 Gigabytes/second per BFH).
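A quick sketch in Python (illustrative only, using the per-lane figures quoted above) of how pairs, wires, and raw data-transfer rate scale with link width:

```python
# Illustrative arithmetic for PCI Express link widths, using the figures
# quoted above: 2.5 Gb/s raw signalling per lane in each direction, and
# two unidirectional differential pairs (four wires) per lane.
GBPS_PER_LANE = 2.5   # raw rate, each direction
PAIRS_PER_LANE = 2    # one transmit pair + one receive pair
WIRES_PER_PAIR = 2    # differential signalling

def link_figures(lanes):
    """Return (differential pairs, signal wires, raw Gb/s each way)."""
    pairs = lanes * PAIRS_PER_LANE
    wires = pairs * WIRES_PER_PAIR
    raw_gbps = lanes * GBPS_PER_LANE
    return pairs, wires, raw_gbps

print(link_figures(1))   # minimum link: (2, 4, 2.5)
print(link_figures(16))  # "Big Fat Hose": (32, 64, 40.0)
```

The x16 case reproduces the 32 pairs and ~40 Gb DTR cited above; turning that into a bytes-per-second figure additionally depends on encoding overhead, discussed later in the thread.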
Thinking back on it (especially the initial rocky and costly rollout),
PCI's been a pretty good bus. It gained traction here in around 1995,
so it's served us for 10 years - ?as long as ISA-16.

We resist change.

/daytripper (Blindly, in most cases...)
 

Are the cubes stamped IBM by any chance? The last time I saw memory
packaged like that, it was on memory expansion boards for the IBM PS/2
model 70 and 80 computers.

They're unmarked.
Here: found a pic. Look at the memory cards on the left of the
computer, sandwiched between the power supply and the left hand hard
disk:

http://john.ccac.rwth-aachen.de:8000/alf/ps2_80311/ps2_80311_2_full.jpeg

Those look similar. Maybe IBM dumped a bunch of unmarked parts when the
PS/2 tanked, and they got snapped up by some videocard manufacturer who then
put out a card that undercut the competition.

_/_
/ v \ Scott Alfter (remove the obvious to send mail)
(IIGS( http://alfter.us/ Top-posting!
\_^_/ rm -rf /bin/laden >What's the most annoying thing on Usenet?

 
I can intuit what you mean, but can't quite grasp the difference
between several interconnects and a bus. I presume it means that
traffic on the same wires (I assume they are the same wires?) is
mediated in a different way or at a different level?


Now I get it! It's that other devices are now crowding PCI into
obsolescence, e.g. Giga-LAN, S-ATA etc., so instead of AGP + PCI slots,
we need at least 3 x fast slots. PCI-Ex sounds better designed to
handle this gracefully, i.e. allocate width as needed without having
to shatter old standards and set new ones (as AGP ?x now does)

Yes well basically, it's not a multi-drop bus with shared wires and
connectors - with only two devices per interconnect you get to crank up the
clock and go (multi-)serial, which involves some protocol overhead. The
peak bandwidth claimed for PCI-Express is 4GB/sec in its 16-lane (video)
form but it's been suggested that the protocol overhead reduces that
considerably. It's hard to find the truth since it's all under the PCISig
umbrella now and downloads cost $$.
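For what it's worth, the headline 4GB/sec for x16 drops straight out of the 8b/10b line code that PCI Express 1.x uses (10 bits on the wire for every 8 payload bits); packet-level protocol overhead then cuts further into it, which is presumably the reduction being suggested. A back-of-the-envelope check in Python:

```python
# Back-of-the-envelope: x16 PCI Express payload bandwidth after 8b/10b
# line coding (but before packet/protocol overhead, which reduces it further).
RAW_GTPS_PER_LANE = 2.5   # raw signalling rate per lane, each direction

def effective_gbytes_per_sec(lanes):
    raw_gbps = lanes * RAW_GTPS_PER_LANE
    data_gbps = raw_gbps * 8 / 10   # 8b/10b: 80% of the raw rate is payload
    return data_gbps / 8            # bits -> bytes

print(effective_gbytes_per_sec(16))  # 4.0 -- the headline x16 figure
```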

Note that Intel has hidden the AGP docs now - the AGP 3.0 final spec is
still here:
http://www.intel.com/technology/agp/downloads/Spec_1_0_final_Sep10.pdf but
I got that from an external link - couldn't find any path to it from the
Intel Web site, which keeps taking you to their PCI-Express blurb. Some of
the older AGP specs can still be gotten here:
http://www.motherboards.org/articles/tech-planations/920_4.html.

Ultimately, it shakes out to:
- the highest CPU clock the CPU('s cache) can handle
- the highest RAM clock the current RAM standard can handle
- a high standard for bits that have to be in the case (PCI Ex?)
- a standard for bits that have to be outside the case (USB?)
- a standard for bits that are wire-less

The trend will be to either toss stuff out of the case (so that dumb
retail can sell them safely) or build it into the mobo, and ultimately
processor core, as Moore's Law allows. Perhaps at some stage we won't
have the "has to be inside the case for speed" layer at all.

I *think* that's what Infiniband wanted you to think but M$ said no and it
seems to be stewing on a back burner for the moment.
In the original VL-Bus vs. PCI sense, I doubt if we will ever see a
"local bus" again, given how RAM out-paces other cards and devices.

In what sense is PCI Ex not a mezzanine bus?

It depends on who implements it and the topology they adopt I suppose but
compared with recent AGP implementations, you don't have a single
PCI-Express hanging off the "North" hub, like we had a single AGP - you
have a set of PCI-Express connects hanging off it instead... with the usual
I/O, including PCI, hanging off the "South" hub.
Unless they foresee the GPU as generating system input in some way?

I'm still trying to figure why on that?... other than, I guess it allows
UMA for the bottom end.
Yep. I only understand the last bit about AGP clock; also thinking
that whenever they up the data rate, they have to drop voltage to stop
the wires frying, and I'm wondering at what point VR will be too
granular to maintain voltage consistency.

So does PCI Ex solve this by adding more physical wires? That's
interesting if so, given the original "parallel for data speed"
approach that swung to the "serial to avoid cross-talk and reduce pin
count" phase we are currently enjoying with S-ATA and USB.

It's more of the same.... serial point to point but someone who has access
to the docs could elaborate better than I. What's funny is that if you go
to Intel's PCI-Express page
http://www.intel.com/technology/pciexpress/devnet/desktop.htm and click on
What is PCI-Express, you get a document extolling the virtues of 3GIO - no
mention of PCI Express at all.
Thinking back on it (especially the initial rocky and costly rollout),
PCI's been a pretty good bus. It gained traction here in around 1995,
so it's served us for 10 years - ?as long as ISA-16.

Yes it has been good and considering the appearance back then of
proprietary, VL and then EISA "alternatives", it was just in the nick of
time.
And yet it seems that VIA still couldn't get the hang of it, as
recently as a few years back (the UIDE corruption scandal).

Was that the PCI Latency thing? I've never had trouble with a VIA chipset
but I've never tried to use any esoteric PCI cards with them. While VIA
was not blameless, a lot of their bad name came from PCI card vendors who
tried to ignore them - IOW if it worked with an Intel chipset, that was
good enough.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Timmy? He was shilling here as recently as two weeks ago (December 4)
...at least that's the date according to the server I use.

Yep and I'm sure you also know who it was who "fed the troll".;-)

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
EDO 32-bit SIMMs, yes. EDO DIMMs is another matter...

EDO DIMMs were rare, but not nearly as rare as RDRAM. Crucial will
still sell you EDO DIMMs if you're willing to pay a bit of a price
premium.
in just about
all cases, mobos stayed SIMMs for EDO, DIMMs for SDRAM, and where both
were supported, both types of slots (just don't use both at once).

The cruel and unusual thing about these Dells was that they were
i430HX (lovely 64M+ chipset, shame it pre-dates SDRAM) but had only
DIMM slots, when there was no reason not to have only SIMM slots.

Yup, that was common on 430HX chipsets, I saw a number of them with
EDO-only DIMM slots. Dell definitely was not the only one doing this.
They didn't see too much use on desktops, much more common on
workstations and low-end servers (primarily because the main selling
point of the HX chipset was to use more memory than most desktop users
could afford at that time).
At least I think they were i430HX; they may have been pre-SDRAM Slot
One, i.e. the old PPro-generation i440FX. That, too, would have no
reason to have DIMM slots, as there's no SDRAM support.

The 440FX-based boards for PPros and EARLY PII systems (the 440LX
chipset was delayed for a couple months after the PII's release) used
EDO DIMMs almost exclusively. I remember supporting some VERY
troublesome Compaq Deskpro systems in the day that used these DIMMs
(the problem was not the DIMMs so much as that the PPro chips ran
REALLY hot and there was no CPU fan or case fan... and we tried to
deploy them in a factory where temperatures varied greatly from winter
to summer).
Sure; I'm on the fringe by drawing the lines that I do. If I didn't
have those requirements, I'd have been even less likely to i820.

Having worked a brief bit with the Intel i8xx chipsets at the time, I
would have gone for VIA instead due to stability reasons. I've never
been a big fan of VIA chipsets, but the early i8xx drivers stank, and
VIA's were better. By the time the i815 came out, though, they had
turned things around.

Mind you, at this same time AMD's early Athlon chips were head and
shoulders over anything Intel was producing, so the point was kind of
moot.
True; at one time, I contemplated using i810 in such cases (this was
before Intel had done the 815, i.e. set the precedent for SVGA+AGP)
but as it turned out, I didn't have anyone who needed that niche.
Even the office folks went i440BX and SVGA card (often i740).

I've still got some old Compaq Deskpro i810 based systems at the
office, and they are *REALLY* good systems (Intel had their drivers
straightened out by the time they shipped). The integrated video was
crap for gaming, but it gave a rather decent picture for 2D stuff (ok,
not up to Matrox standards, but it was better than what nVidia had to
offer at the time and with better drivers than ATI had). Performance
was just about on-par with the i815 chipset with an add-in card too.
Top it off with these being one of the first systems I used with a
7200rpm drive and the fact that they were packed into a small form
factor box, and I was rather impressed with them.
 
Yup, that was common on 430HX chipsets, I saw a number of them with
EDO-only DIMM slots. Dell definitely was not the only one doing this.

It's the only one we caught here. Specifically, I caught some medical
sware house trying to push these Dells at the members of the
representative organization I belonged to. I was co-opted onto Exco
to sanity-check this sort of thing, and kicked some inviting butt.

I really can't see the point in EDO DIMM mobos. Either you're
shipping those chipsets when they were "fresh", in which case EDO
SIMMs were abundant, or you should have moved to i440LX (if Intel CPU
and chipset is your thing, that is).
They didn't see too much use on desktops, much more common on
workstations and low-end servers (primarily because the main selling
point of the HX chipset was to use more memory than most desktop users
could afford at that time).

Well, by the time SIMMs were getting rare, so was traditional Socket7,
esp. on the high-end. I can't see "we need a strong PC that will run
over 64M RAM, so let's stay on traditional Socket 7 that won't run PII
and won't run new non-Intel >66MHz-based CPUs either".
The 440FX-based boards for PPros and EARLY PII systems (the 440LX
chipset was delayed for a couple months after the PII's release) used
EDO DIMMs almost exclusively.

Yes, but the ones I saw were SIMMs, not DIMMs.
Having worked a brief bit with the Intel i8xx chipsets at the time, I
would have gone for VIA instead due to stability reasons.

Were those the early "PIII should be RDRAM" horrors? I passed that
whole era by, sticking with i440BX, and when the same thing happened
more forcefully on P4, I mainly stayed off P4 until there were
Celerons and i845G to drop the platform out of the clouds.

I did a couple of pre-S478-Celeron P4s, though I don't know if any
were S423 or whatever it was. Seemed to me, the original
RDRAM-dominated S423 was the same sort of early-adopter graveyard as
P60/66 and PPro (except unlike PPro, it didn't rock much at anything)


---------- ----- ---- --- -- - - - -
On the 'net, *everyone* can hear you scream
 
It's the only one we caught here. Specifically, I caught some medical
sware house trying to push these Dells at the members of the
representative organization I belonged to. I was co-opted onto Exco
to sanity-check this sort of thing, and kicked some inviting butt.

I really can't see the point in EDO DIMM mobos. Either you're
shipping those chipsets when they were "fresh", in which case EDO
SIMMs were abundant, or you should have moved to i440LX (if Intel CPU
and chipset is your thing, that is).

Once the 440LX chipset arrived it rendered all these designs obsolete
pretty much overnight; they all pre-date it. The trick was that you
could get more memory in a system using EDO DIMMs than you could using
EDO SIMMs. That's why they were used mainly on workstations with the
HX and FX chipsets.
Well, by the time SIMMs were getting rare, so was traditional Socket7,
esp. on the high-end. I can't see "we need a strong PC that will run
over 64M RAM, so let's stay on traditional Socket 7 that won't run PII
and won't run new non-Intel >66MHz-based CPUs either".

These systems all predate the PII, with the exception of a VERY small
number of the first run of PII systems while the 440LX was delayed.

Getting more than 128MB of memory on a board using SIMMs was very
difficult to do, but much easier using DIMMs, hence the main reason
for the design.

Case-in-point, if you have a look at the EDO memory sold at Crucial,
they have 32MB EDO SIMMs and 256MB EDO DIMMs. Even if you only had
half as many DIMMs you could still get 4 times as much memory in the
system.
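The capacity arithmetic checks out (module sizes from the Crucial example above; the 8-SIMM and 4-DIMM socket counts are illustrative assumptions):

```python
# Illustrative: why EDO DIMMs allowed far more total memory than EDO SIMMs.
# Module sizes are the largest mentioned above; socket counts are assumed.
SIMM_MB = 32    # largest EDO SIMM in the Crucial example
DIMM_MB = 256   # largest EDO DIMM in the Crucial example

simm_total = 8 * SIMM_MB   # e.g. a board with 8 SIMM sockets -> 256 MB
dimm_total = 4 * DIMM_MB   # half as many DIMM sockets -> 1024 MB

print(dimm_total // simm_total)  # 4 -- "4 times as much memory"
```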
Yes, but the ones I saw were SIMMs, not DIMMs.

Many of them had EDO DIMMs. It happened, it was a bit odd, but
nothing too crazy. As mentioned previously you can still find EDO
DIMMs for not too outrageous prices (from Crucial they are half the
price of SIMMs on a per-MB basis).
Were those the early "PIII should be RDRAM" horrors?

Partly, though it also included the first round of i810 chipsets
(SDRAM). It had nothing to do with the memory interface, more to do
with the fact that the chipset was TOTALLY different from the old
northbridge/southbridge design of the i440BX and all previous
chipsets. The drivers for these new chipsets stank, but to be fair to
Intel, they DID fix their drivers (unlike VIA who had semi-crappy
drivers for about 10 years).
 
Once the 440LX chipset arrived it rendered all these designs obsolete
pretty much overnight; they all pre-date it.

Yep. Then the i440BX's 100MHz support made 'LX an embarrassment :-)
The trick was that you could get more memory in a system using
EDO DIMMs than you could using EDO SIMMs.

Ahhhh - *now* I get it. It looks like Dell overproduced those mobos,
then, which is odd because you'd have expected their close
relationship with Intel to have let them know to the month when i440LX
would be out. Still, being a big proprietary bland lame OEM, any
overproduced mobos they may have, they'd be stuck with; can't dump in
the generic market.

Hence attempts to push them on unsuspecting un-tech-savvy
professionals in the 3rd-world. Who turned out to be not unsuspecting
enough, and it cost the parties concerned++
These systems all predate the PII, with the exception of a VERY small
number of the first run of PII systems while the 440LX was delayed.

I built one or two of those early-adopter i440FX/PII, or at least
quoted on them (can't remember if there were any takers). What drove
the move to PII, even before the original (and ghastly!) Celeron, was
the availability of affordable AGP cards.
Getting more than 128MB of memory on a board using SIMMs was very
difficult to do, but much easier using DIMMs

Yes, I got it :-)

The thought of waiting for MemTest 11 to chug through 256M RAM at
iP55C-233 speed is uuurgh, but I can see the need.
Partly, though it also included the first round of i810 chipsets
(SDRAM). It had nothing to do with the memory interface, more to do
with the fact that the chipset was TOTALLY different from the old
northbridge/southbridge design of the i440BX and all previous
chipsets. The drivers for these new chipsets stank, but to be fair to
Intel, they DID fix their drivers (unlike VIA who had semi-crappy
drivers for about 10 years).

Yes, to have to try this week's beta 4-in-1 to carry your UIDE's
lifeblood is just way off the acceptability map for me. I don't want
to buy it until you've finished making it, thanks!

(cue thread drift into sware "we'll patch it later" rantfest) <g>


--------------- ----- ---- --- -- - - -
Tech Support: The guys who follow the
'Parade of New Products' with a shovel.
 
Yep. Then the i440BX's 100MHz support made 'LX an embarrassment :-)

Eventually yes, though not for 6+ months.
Ahhhh - *now* I get it. It looks like Dell overproduced those mobos,
then, which is odd because you'd have expected their close
relationship with Intel to have let them know to the month when i440LX
would be out.

They DID move to the i440LX the month it was out, but that was
something like 2 years after the i430HX and i440FX came out. There
was a LOT of time in between for EDO DIMMs to make an appearance.
Hence attempts to push them on unsuspecting un-tech-savvy
professionals in the 3rd-world. Who turned out to be not unsuspecting
enough, and it cost the parties concerned++

Not really. When these systems were sold there weren't exactly a lot of
other options. It was that or the i430VX chipset with its 64MB
cacheable limit. We're not talking about the era of 500MHz machines
here.
I built one or two of those early-adopter i440FX/PII, or at least
quoted on them (can't remember if there were any takers). What drove
the move to PII, even before the original (and ghastly!) Celeron, was
the availability of affordable AGP cards.

Or more to the point, Intel's none-too-gentle nudging towards the PII
due to them not producing any AGP-capable socket 7 chipsets. The
i440FX, of course, did not support AGP. Really the i440LX was the
chipset to bring the PII to market... it was just a couple of months
late.
Yes, I got it :-)

The thought of waiting for MemTest 11 to chug through 256M RAM at
iP55C-233 speed is uuurgh, but I can see the need.

Hence the reason why you can still find EDO DIMMs today. There was
demand and it was the only (easy) solution available. I suppose you
COULD have designed a board with 16 SIMM sockets or some such
nonsense, but 4 EDO DIMMs was more economical and sensible.
 