Greg Lindahl
daytripper said:Someone obviously never heard about 32b PCI's 64b "Dual Address Cycle"...
Hush! He was sounding very convincing, let's not burst his bubble.
-- greg
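The Dual Address Cycle in question lets a 32-bit PCI bus present a 64-bit
address in two address phases: the low 32 bits go out first with the DAC
command, the high 32 bits follow with the real bus command. A minimal sketch
of that split (illustrative only, not any real bus-driver API):

#include <stdint.h>
#include <stdio.h>

/* Illustrative split of a 64-bit DMA address into the two 32-bit
 * address phases of a PCI Dual Address Cycle. Not a real driver. */
static void split_dac_address(uint64_t addr, uint32_t *low, uint32_t *high)
{
    *low  = (uint32_t)(addr & 0xFFFFFFFFu);
    *high = (uint32_t)(addr >> 32);
}

int main(void)
{
    uint32_t low, high;
    split_dac_address(0x123456780ULL, &low, &high);   /* above 4 GB */
    printf("phase 1 (low) = 0x%08X, phase 2 (high) = 0x%08X\n", low, high);
    return 0;
}

A single address cycle suffices whenever the high dword is zero, which is why
DAC only matters for targets above 4 GB.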
Bengt said:In comp.arch, Rupert Pigott
Actually, there are only two systems vendors for SPARC systems (Sun
and Fujitsu) but more for Itanium systems (IBM, HP, Dell, SGI,
NEC...). Raw processors are not what most people buy, so the
argument isn't totally stupid.
In theory and legally you can clone SPARC, build Solaris systems to
compete directly with Sun. In practice, forget it. Sun has too much
momentum (Fujitsu has managed by building bigger systems than Sun).
"In Theory, there is no difference between Theory and Practice, in
Practice, there is."
Alexander Grigoriev said:In Windows, you can do it in different ways:
1. A driver can allocate a contiguous non-pageable buffer and do DMA
transfers to/from it. The buffer can be requested to be in the lower 4 GB, or
even in the lower 16 MB, if you need to deal with legacy DMA.
2. A buffer may originate from an application. In this case, the MapTransfer
function may move the data from/to a buffer in low memory (the bounce buffer
you've mentioned) if a device is unable to do DMA above 4 GB. This is done by
the HAL, and drivers don't need to bother. If a device supports scatter-gather,
only some parts of the transfer may need bounce buffers.
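A rough sketch of case 1 in WDM terms, assuming a bus-master PCI device that
can only address the low 4 GB; the device object pdo and the buffer length
BUF_LEN are placeholders for the sketch, not part of any particular driver:

#include <wdm.h>

#define BUF_LEN (64 * 1024)   /* placeholder size */

/* Ask the HAL for a contiguous, non-pageable DMA buffer that the
 * device can actually reach. Clearing Dma64BitAddresses tells the
 * HAL the device cannot address above 4 GB. */
PVOID AllocateDma32Buffer(PDEVICE_OBJECT pdo, PHYSICAL_ADDRESS *logical,
                          PDMA_ADAPTER *adapterOut)
{
    DEVICE_DESCRIPTION dd;
    ULONG mapRegisters;
    PDMA_ADAPTER adapter;

    RtlZeroMemory(&dd, sizeof(dd));
    dd.Version = DEVICE_DESCRIPTION_VERSION;
    dd.Master = TRUE;               /* bus-master DMA                  */
    dd.ScatterGather = TRUE;
    dd.Dma32BitAddresses = TRUE;    /* can address the low 4 GB        */
    dd.Dma64BitAddresses = FALSE;   /* ...but nothing above it         */
    dd.InterfaceType = PCIBus;
    dd.MaximumLength = BUF_LEN;

    adapter = IoGetDmaAdapter(pdo, &dd, &mapRegisters);
    if (adapter == NULL)
        return NULL;

    *adapterOut = adapter;
    /* Contiguous, non-pageable, and guaranteed reachable by the device. */
    return adapter->DmaOperations->AllocateCommonBuffer(
        adapter, BUF_LEN, logical, FALSE /* CacheEnabled */);
}

For case 2, the same adapter object's MapTransfer/GetScatterGatherList
routines are what copy data through map registers (bounce buffers), and only
for the fragments the device can't reach directly.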
Nick said:|>
|> > What is actually wanted is the ability to have multiple segments,
|> > with application-specified properties, where each application
|> > segment is inherently separate and integral. That is how some
|> > systems (especially capability machines) have worked.
|>
|> That's what paging is for, and, IMHO, a vastly superior system that
|> gives you memory attributes while still resulting in a linear
|> address space.
|>
|> Having segmentation return would be to me like seeing the Third
|> Reich make a comeback. Segmentation was a horrible, destructive
|> design atrocity that was inflicted on x86 users because it locked
|> x86 users into the architecture.
|>
|> All I can do is hope the next generation does not ignore the
|> past to the point where the nightmare of segmentation happens
|> again.
|>
|> Never again !
I suggest that you read my postings before responding. It is very
clear that you do not understand the issues. I suggest that you
read up about capability machines before continuing.
You even missed my point about the read-only and no-execute bits,
which are in common use today. Modern address spaces ARE segmented,
but only slightly.
Regards,
Nick Maclaren.
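Nick's point about read-only and no-execute bits can be seen even from user
space; a minimal POSIX sketch (illustrative only) in which one page of a flat
mapping is made read-only while its neighbour stays writable:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* The address range is one linear mapping, but each page carries its
 * own read/write/execute attributes -- "slightly segmented". */
int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    unsigned char *p = mmap(NULL, 2 * pagesz, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy((char *)p, "writable data");

    /* Turn the first page read-only; the second stays writable.      */
    /* Neither page is executable, which is the usual NX arrangement. */
    mprotect(p, pagesz, PROT_READ);

    printf("%s\n", (char *)p);  /* reading is still fine              */
    p[pagesz] = 42;             /* writing the second page is fine    */
    /* p[0] = 0;  would now fault: the "segment" boundary is a page.  */

    munmap(p, 2 * pagesz);
    return 0;
}

The address space stays linear; the "segmentation" is nothing more than
per-page attribute bits.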
Sander said:What does file size have to do with 32-bit vs 64-bit? The OS I run on my desktop
has supported file sizes in excess of 4GB since at least 1994, when I
switched to it, *including* on plain vanilla x86 hardware.
Yousuf Khan said:The advantage of memmaps is that after you've finished the
initial setup of the call, you no longer have to make any more OS calls to
get further pieces of the file; they are just demand-paged in, like
virtual memory. Actually, not _just like_ virtual memory: it _is_ actual
virtual memory. This saves many stages of context switching along the way.
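A minimal sketch of what that looks like with POSIX mmap (the file name
data.bin is just a placeholder): one setup call, and afterwards the file's
pages are demand-paged in on plain memory accesses, with no further
read()/lseek() calls.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);   /* placeholder file name */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    /* One setup call; afterwards the file behaves like ordinary memory. */
    const unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                  MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* No read() calls: each access may fault a page in from the file. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    printf("checksum: %lu\n", sum);

    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}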
Scott Moore said:I have heard the arguments over and over and over (and over) again.
Obviously you didn't live through the bad old days of segmentation,
or you would not be advocating it.
Scott Moore said:It enables you to memory map files.
Yousuf Khan said:Peruse the members list for Sparc International Inc., the consortium
entrusted with maintaining the Sparc standards:
http://www.sparc.com/
It's considerably more than just Sun and Fujitsu. Some people are actually
building Sparcs for embedded applications.
I meant general-purpose systems, like those built by Dell, IBM, HP.
What defines a platform there is operating-system-on-hardware and
application compatibility with it. For example, Windows-on-x86 is what
we can thank (or blame) for still having x86 in the way we do.
Compare for example the following:
Solaris-on-SPARC
Linux-on-SPARC
OpenBSD-on-SPARC
NetBSD-on-SPARC
FreeBSD-on-SPARC
Linux-on-IA64
and which is the more open platform? I think it's clear that
HPUX-on-IA64 is more closed than any of these and Linux-on-x86_64 is
more open.
In comp.arch Bengt Larsson said:In comp.arch, Rupert Pigott
Actually, there are only two systems vendors for SPARC systems (Sun
and Fujitsu) but more for Itanium systems (IBM, HP, Dell, SGI,
NEC...). Raw processors are not what most people buy, so the
argument isn't totally stupid.
In theory and legally you can clone SPARC, build Solaris systems to
compete directly with Sun. In practice, forget it. Sun has too much
momentum (Fujitsu has managed by building bigger systems than Sun).
"In Theory, there is no difference between Theory and Practice, in
Practice, there is."
As far as I recall, FX!32 came out a long time after the P6 core was
introduced. P6's first generation, the PPro, was already obsolete, and they
were already into the second generation, the PII. The PPro was introduced in
late 1995. I don't think FX!32 came out till sometime in 1997.