Interesting read about upcoming K9 processors

  • Thread starter: Yousuf Khan
Ken Hagan said:
Surely if Win32 were 64-bit clean, MS wouldn't have had to ship separate
Win64 headers, which they did, to the general horror of everyone who
expected a 64-bit "long".

Many of us have been working with 64-bit desktop CPU's, OS's, and C
compilers for over a decade now. Yeah, there are decisions to be made.
This was hardly the first time this particular decision was made that way.

Tim.
 
Yousuf Khan said:
I just remembered that pretty soon there will be dual-core Opterons too, so
that in itself will add another level of NUMA to take into account. Internal
chip-connect vs. Hypertransport connect vs. customized board-to-board
connect.

Does an OS really need to be aware of the difference between two cores on
the same chip?

Linux has a concept of a NUMA "node", where all of the processors in a node
are considered equivalent. It'll still try to schedule threads on the same
CPU they last ran on, but next it will try other CPUs in the same node
before giving up and sending it to any available node.

IIRC, the code already understands two-CPU nodes, because that is how
Intel's SMT chips are handled. Treating K8 CMP the same way sounds correct,
once AMD releases specs on how to recognize dual-core chips.
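
(A minimal sketch of that preference order, purely for illustration; the
cpu_t structure and pick_cpu() function below are invented and this is
not the actual Linux scheduler code.)

/* Sketch of the CPU-selection preference described above: prefer the
 * CPU the thread last ran on, then an idle CPU in the same NUMA node,
 * then any idle CPU anywhere.  All names here are made up. */
#include <stdio.h>

#define NCPUS 4

typedef struct {
    int id;
    int node;   /* NUMA node this CPU belongs to            */
    int idle;   /* nonzero if the CPU currently has no work */
} cpu_t;

/* Pick a CPU for a thread that last ran on cpus[last]. */
static int pick_cpu(const cpu_t *cpus, int ncpus, int last)
{
    int i;

    if (cpus[last].idle)                     /* 1: same CPU (cache warm) */
        return last;

    for (i = 0; i < ncpus; i++)              /* 2: another CPU, same node */
        if (cpus[i].idle && cpus[i].node == cpus[last].node)
            return i;

    for (i = 0; i < ncpus; i++)              /* 3: any idle CPU           */
        if (cpus[i].idle)
            return i;

    return last;                             /* nothing idle: stay put    */
}

int main(void)
{
    /* Two two-CPU nodes (SMT or dual-core): CPUs 0,1 in node 0; 2,3 in node 1. */
    cpu_t cpus[NCPUS] = {
        {0, 0, 0}, {1, 0, 1}, {2, 1, 1}, {3, 1, 1}
    };

    /* Thread last ran on CPU 0, which is busy: expect CPU 1 (same node). */
    printf("chosen CPU: %d\n", pick_cpu(cpus, NCPUS, 0));
    return 0;
}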

S
 
Oops, forgot to mention the Itanic port as well.
I don't know how you could come to any of these conclusions, given the
public information anyway.

Assuming MS is at least marginally competent, maintaining source that is
cleanly portable between 32-bit and 64-bit systems is required when they
have versions shipping for both sizes, and have (on and off) ever since NT
was created.
I agree with the others here. Force the issue by eliminating those who
won't convert. It seems Linux has done a reasonable job of supporting
AMD64 *without* all the resources M$ can bring to bear.

AMD64 is just another platform, and the third (or higher) platform supported
is of marginal cost compared to the second. The free software world has had
to contend with dozens of platforms for over two decades, and so the fact
Linux (and all the common apps) ported over cleanly is hardly surprising.

MS is in another boat; the NT kernel might be portable, but all the other OS
"stuff" and sundry apps may not be, and MS has never had more than one
platform with significant revenue that has forced them to adopt clean coding
practices.
Ah, so we have another conspiracy theory. So you're on the "incompetence"
side?

I am a firm believer in Hanlon's Razor: never attribute to malice what can
be adequately explained by stupidity.

S
 
Stephen said:
Does an OS really need to be aware of the difference between two
cores on the same chip?

Linux has a concept of a NUMA "node", where all of the processors in
a node are considered equivalent. It'll still try to schedule
threads on the same CPU they last ran on, but next it will try other
CPUs in the same node before giving up and sending it to any
available node.

IIRC, the code already understands two-CPU nodes, because that is how
Intel's SMT chips are handled. Treating K8 CMP the same way sounds
correct, once AMD releases specs on how to recognize dual-core chips.

I'm sure as a first cut, not treating them specially is the right way to go.
But eventually everybody tries to optimize down to the bone. AMD is even
suggesting that not treating Hypertransport as NUMA, but as simple SMP, is
quite acceptable, and this suggestion is likely to hold for dual-cores too
(probably even more so).

Yousuf Khan
 
Regarding Microsoft's decision to keep "long" at 32-bits for Win64...


Tim said:
Many of us have been working with 64-bit desktop CPU's, OS's, and C
compilers for over a decade now. Yeah, there are decisions to be
made.
This was hardly the first time this particular decision was made that
way.

Ah, then I've just benefitted from the venerable maxim that the fastest
way to learn anything is to post wrong information to Usenet. I was
under the impression that everyone else had jumped the other way on this
one.

I am also under the impression that sticking to 32 bits hasn't actually
helped MS achieve their aims of source portability, since the rules on
MSDN suggest *very* strongly to me that all client code has to be
re-written to use their silly typedefs (like INT_PTR and LONG_PTR) in
any case. Therefore, the only consequence of the 32-bit long has been to
make it impossible to write C89 or C++98 code for that platform.

Doubtless you or someone else will now disabuse me of that opinion as
well.
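
(As a rough illustration of the breakage being described, standard C99
only: the Windows typedefs LONG_PTR/INT_PTR from <basetsd.h> are merely
mentioned in comments, with intptr_t standing in for them here.)

/* Sketch of why a 32-bit "long" on a 64-bit platform (LLP64) breaks
 * old Win32-style code.  Compiles with any C99 compiler. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int x = 42;

    /* Classic Win32 idiom: smuggle a pointer through a long
     * (e.g. in a window's extra data or a message parameter).
     * Under LP64 this happens to work; under LLP64 (Win64) a
     * long is 32 bits and the pointer is truncated:
     *
     *     long cookie = (long)&x;   -- loses the top 32 bits on Win64
     *
     * What the Win64 rules require instead: an integer type defined
     * to be pointer-sized -- LONG_PTR/INT_PTR on Windows, intptr_t
     * in standard C99. */
    intptr_t cookie = (intptr_t)&x;
    int *p = (int *)cookie;

    printf("sizeof(long)=%zu  sizeof(void*)=%zu  *p=%d\n",
           sizeof(long), sizeof(void *), *p);
    return 0;
}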
 
Ken Hagan said:
Regarding Microsoft's decision to keep "long" at 32-bits for Win64...




Ah, then I've just benefitted from the venerable maxim that the fastest
way to learn anything is to post wrong information to Usenet. I was
under the impression that everyone else had jumped the other way on this
one.

Most had. But the 64-bit Alpha with the DEC C compiler has 32-bit longs
(and 64-bit long longs).

Some will interpret Microsoft's decision to use 32-bit longs as evidence
that DEC/Cutler's influence in Alpha NT still lingers. But there are other
valid reasons to choose it (many of them related to portability under
an established set of coding rules).
I am also under the impression that sticking to 32 bits hasn't actually
helped MS achieve their aims of source portability, since the rules on
MSDN suggest *very* strongly to me that all client code has to be
re-written to use their silly typedefs (like INT_PTR and LONG_PTR) in
any case.

C has always been a basket case with respect to integer sizes. Until
recently you had to choose someone's silly typedefs or choose to
chart your own course. The new standards do help... but only if you use
them.
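
(For reference, a minimal example of what "the new standards" provide --
C99 <stdint.h>/<inttypes.h> -- if you do use them.)

/* Fixed-width and pointer-sized typedefs plus matching printf macros,
 * so nobody has to invent their own INT32/UINT64 typedefs any more. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int32_t  a = 1000000;          /* exactly 32 bits, where provided         */
    int64_t  b = (int64_t)a * a;   /* exactly 64 bits                          */
    intptr_t p = (intptr_t)&a;     /* wide enough to hold a data pointer       */

    printf("a=%" PRId32 "  b=%" PRId64 "  p=%" PRIdPTR "\n", a, b, p);
    return 0;
}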

Tim.
 
Tim Shoppa wrote:

(snip)
Most had. But the 64-bit Alpha with the DEC C compiler has 32-bit longs
(and 64-bit long longs).

I thought it was 32 bit int and 64 bit long. It broke a lot of
programs expecting 32 bit longs. I presume you mean OSF1.
There was (and is) also Alpha/VMS which may have had 32 bit long,
but I don't remember that long long even existed at the time.
Some will interpret Microsoft's decision to use 32-bit longs as evidence
that DEC/Cutler's influence in Alpha NT still lingers. But there are other
valid reasons to choose it (many of them related to portability under
an established set of coding rules).

-- glen
 
Most had. But the 64-bit Alpha with the DEC C compiler has 32-bit longs
(and 64-bit long longs).

DEC C on Digital Unix has (and always had) 64-bit longs; same for
Compaq C on Linux-Alpha. If other OSs had 32-bit longs, they probably also
had 32-bit pointers (WNT).

Followups to comp.arch

- anton
 
Most had. But the 64-bit Alpha with the DEC C compiler has 32-bit longs
(and 64-bit long longs).

Are you SURE? I am pretty sure that I investigated that, and found
it to be an urban myth. Yes, there is such a mode, but the normal
one is I32LP64, just like any sane system.
Some will interpret Microsoft's decision to use 32-bit longs as evidence
that DEC/Cutler's influence in Alpha NT still lingers. But there are other
valid reasons to choose it (many of them related to portability under
an established set of coding rules).

The first statement is doubtful. When it was claimed, a lot of people
provided evidence that it was an aberrant decision, and that Microsoft
compilers more often used I32LP64 than IL32LLP64.

The second statement is wrong. I am the person who investigated that,
and all of the actual evidence is that the portability problems caused
by IL32LLP64 are FAR greater than those caused by I32LP64. While I did
not investigate Microsoft's code, I did inspect the relevant coding
standards, and the problems my investigation detected would have arisen
as much in that as in the programs I did look at.
C has always been a basket case with respect to integer sizes. Until
recently you had to choose someone's silly typedefs or choose to
chart your own course. The new standards do help... but only if you use
them.

They do help - but only if you are not interested in serious (i.e.
long-term and widespread) portability! If you are, C99 is a disaster,
where C90 was merely a basket case.

The point is that, for such portability, you need to choose sizes by
PROPERTY not bit count. I.e. "The smallest signed integer type that
can address the largest allocatable object" or "the smallest integer
type that can hold A_INT_T_MAX*A_INT_T_MAX".

Now, making <float.h> usable by the preprocessor and the new symbols
for the limits DOES help, but the ghastly bit-count sizes are quite
useless. Despite repeated claims, nobody has ever produced evidence
that they help competent programmers - though I do agree that they
do help bozos.

The claim that they help with networking and other interfaces is
completely bogus, as they don't specify endianness (or some other
critical properties in some cases). Nor do structures specify
padding and alignment. So portable programs STILL have to do their
own packing.
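
(A small sketch of what choosing by property, and doing your own packing,
looks like in practice; the typedef names index_t, count_t and wire32_t
are invented purely for illustration.)

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Types chosen by property, not bit count: */
typedef ptrdiff_t      index_t;   /* signed type for indexing and pointer differences */
typedef size_t         count_t;   /* unsigned type that can hold any object size       */
typedef uint_least32_t wire32_t;  /* smallest type with at least 32 bits, for a field  */

/* Serialise a 32-bit field as big-endian bytes: endianness and padding
 * are fixed by this code, not by hoping a struct layout matches. */
static void put_be32(unsigned char *buf, wire32_t v)
{
    buf[0] = (unsigned char)(v >> 24);
    buf[1] = (unsigned char)(v >> 16);
    buf[2] = (unsigned char)(v >> 8);
    buf[3] = (unsigned char)(v);
}

int main(void)
{
    unsigned char packet[4];
    put_be32(packet, 0x12345678u);
    printf("%02x %02x %02x %02x\n",
           (unsigned)packet[0], (unsigned)packet[1],
           (unsigned)packet[2], (unsigned)packet[3]);
    return 0;
}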


Regards,
Nick Maclaren.
 
Are you SURE? I am pretty sure that I investigated that, and found
it to be an urban myth. Yes, there is such a mode, but the normal
one is I32LP64, just like any sane system.

DEC C V6.0-001 (which was rather current as of late 1998) under Alpha VMS 7.2:

$ type junk.c
#include <stdio.h>
main () {
int x;
long y;
long long z;
printf("int is %d bytes,long is %d bytes, long long is %d bytes\n",
sizeof(x),sizeof(y),sizeof(z));
}
$ cc junk.c
$ link junk
$ run junk
int is 4 bytes,long is 4 bytes, long long is 8 bytes

The compiler has a whole array of switches for changing default sizes,
not just of various int and float types, but also of the pointers.
The first statement is doubtful. When it was claimed, a lot of people
provided evidence that it was an aberrant decision, and that Microsoft
compilers more often used I32LP64 than IL32LLP64.

The second statement is wrong. I am the person who investigated that,
and all of the actual evidence is that the portability problems caused
by IL32LLP64 are FAR greater than those caused by I32LP64. While I did
not investigate Microsoft's code, I did inspect the relevant coding
standards, and the problems my investigation detected would have arisen
as much in that as in the programs I did look at.

I'm not really trying to defend those choices. Just pointing out that
similar choices were made over a decade ago. And that I kinda understand
why those choices were made (I was porting hundreds of thousands of lines
of C code written in the "all the world's a VAX" mode, and the defaults
made sense to *me*!) But I agree that they are not the most natural
choices if you know the whole world's moving (or in my case, has moved)
to 64 bit CPU's and OS's.

Keep in mind that in Alpha assembler-talk, a "word" is 16 bits and a "longword"
is 32 bits. The "quadword" is 64 bits. Yes, the first two are
anachronisms from PDP-11 days. Again, I'm not defending those choices,
I'm just pointing out that they were made before and will be made again
despite how strongly we feel they are anachronisms.

Tim.
 
That readjustment, which also included raised prices for a couple of
models, was on Monday and I'm going by prices paid for recent "shopping".
The price *has* been holding quite well for AMD64 CPUs compared with their
historical curves and even Intel's - even at the old higher prices, they
were definitely in quite tight supply.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who,
me??

Yeah, but I think this still shows that they are starting to ramp up. What
this price adjustment shows me is that they are ready to take the A64 into
the mainstream market (by this I mean the mid-range PC market, of course).
Prior to this price adjustment even the "low end" A64's were considered to
be parts reserved for high-end/enthusiast boxes... Now the prices are low
enough that just last Friday I got around to buying an A64 3000+ and a
fairly nice Chaintech board to go with it.

Carlo
 
The Alpha and MIPS ports of WNT were 32-bit. The 64-bit WNT for Alpha
never was released publicly (and AFAIK WNT/MIPS died before work
on 64-bit WNT started). However, WNT for IA-64 is 64-bit, so 64-bit
WNT exists ("WNT" here includes W2K, WXP etc.).

Of course that's why I challenged the assertion based on the evidence
given. PPC/XBox-2 was a real red-herring thrown in there.
How does a good marketing machine and the ability to buy competitors
help in driver development? Or what kind of resources are you thinking
of that MS has and that Linux lacks?

- Piles of $$

- Marketing muscle that can force compliance with the (perhaps implied)
threat of impending death.

- Compatibility listings

- About 50 billion ways
Linux has profited here from something that is sometimes seen as a
disadvantage: That hardware vendors usually don't do Linux drivers, so
Linux developers had to write them themselves. Making them work on
64-bit kernels is less work than writing them from scratch or getting
the hardware vendor to do the 64-bit port (for a Windows version that is
not released yet).

So your belief is that it's somehow *harder* for the hardware vendor
to write a 64b driver (for Win) than a freebie programmer (Linux),
perhaps working without a complete set of specifications? Yes, I
believe it's amazing that Linux has prospered so well. This seems to
support my contention that M$ isn't trying as hard as it perhaps could.
To make 32-bit apps to work under a 64-bit OS, you need to convert the
32-bit system calls to 64-bit ones, and to provide all the 32-bit
libraries (and a way for the apps to link to the right libraries).

Or leave them 32bit until they "graduate". There is nothing preventing
32bit apps from running on a 64bit OS. Not everything has to be 64b on
day one.
If you have a more complex infrastructure (as Debian does), you have to
adapt that as well (and that's why Debian has a pure64 port, with the
multiarch i386/amd64 port waiting for upstream extensions to the
infrastructure last time I looked). However, WNT should have already
tackled this for the IA-64 port, no?

Again, there is something fishy.
Followups to comp.arch.

..chips included back in.
 
Some things are designed to be easily portable (such as DB2, perhaps?),
while others are not. The incompetence does not have to be with today's
coders, but could be due to yesterday's designers...

I surely *hope* M$'s architects learned something from OS/2 days. NT was
a complete re-write and one would suspect that they learned a few lessons
along the way.
 
DEC C V6.0-001 (which was rather current as of late 1998) under Alpha VMS 7.2:

I remember now (and have just got a colleague to check). Yes, VMS
uses that model, but Tru64 uses the normal I32LP64 one. As far as I
know, the sum total of C compilers on systems that anyone normal has
ever heard of that use IL32LLP64 is two, and both of those are relics
(i.e. I believe that Microsoft's future direction is I32LP64).
I'm not really trying to defend those choices. Just pointing out that
similar choices were made over a decade ago. And that I kinda understand
why those choices were made (I was porting hundreds of thousands of lines
of C code written in the "all the world's a VAX" mode, and the defaults
made sense to *me*!) But I agree that they are not the most natural
choices if you know the whole world's moving (or in my case, has moved)
to 64 bit CPU's and OS's.

Oh, I am not arguing with DEC's VMS choice - not at all - what I am
referring to is the decision to break all of the working C90 code
to support an essentially unused option. And the fact that the
claim that it was necessary to do so to avoid breaking existing
code was the CONVERSE of the truth.


Regards,
Nick Maclaren.
 
Keith said:
I agree with the others here. Force the issue by eliminating those who
won't convert. It seems Linux has done a reasonable job of supporting
AMD64 *without* all the resources M$ can bring to bear.

Define "support"? Linux has the advantage that it is still not for the
average home user so they lose compaibility with old apps and old hardware
and not care because the people running 64big Linux on AMD64 won't care.
Windows on the other hand doesn't that advantage. If they don't have at
least 95% support for everything XP supports at launch it's a real issue for
Microsoft.

Carlo
 
Carlo Razzeto wrote:

[SNIP]
Define "support"? Linux has the advantage that it is still not for the
average home user so they lose compaibility with old apps and old hardware
and not care because the people running 64big Linux on AMD64 won't care.

That is not true. Besides if it *were* true, Microsoft have done all
this and got the T-Shirt with IA-64 already, surely... Linux has too.
Windows, on the other hand, doesn't have that advantage. If they don't
have at least 95% support for everything XP supports at launch, it's a
real issue for Microsoft.

MS has no excuse, they've had well over a decade to get this right
during which they have had considerable clout over the hardware vendors
and they have a *lot* more resource to throw at the problem than the
Linux bods ($$$ for specialist dev. platforms etc).


Cheers,
Rupert
 
DEC C V6.0-001 (which was rather current as of late 1998) under Alpha VMS 7.2:
....
int is 4 bytes,long is 4 bytes, long long is 8 bytes

What is the pointer size? My guess is that this is the classic ILP32
model that we all know from the VAX. How much 64-bit support does VMS
have, BTW?
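
(For what it's worth, a one-file check in the same spirit as junk.c above
answers the pointer-size question directly; no output is claimed here for
any particular compiler, you have to run it to see.)

#include <stdio.h>

int main(void)
{
    printf("int=%d long=%d long long=%d void*=%d bytes\n",
           (int)sizeof(int), (int)sizeof(long),
           (int)sizeof(long long), (int)sizeof(void *));
    return 0;
}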

Followups to comp.arch

- anton
 
Carlo Razzeto said:
Define "support"? Linux has the advantage that it is still not for the
average home user so they lose compaibility with old apps and old hardware
and not care because the people running 64big Linux on AMD64 won't care.

I am running Linux/AMD64 (Fedora Core 1). My old apps still run (for
the really old ones (pre-libc6) I have to copy the dynamic libraries
from old distributions, however). E.g., the oldest binary on my
system is:

-rwxr-xr-x 1 anton root 1551772 Nov 1 1994 Mosaic

[/usr/local/bin:2107] file Mosaic
Mosaic: Linux/i386 demand-paged executable (ZMAGIC)

After copying the old dynamic loader and four libraries from the old
Slackware 2.1 tree, it just ran (an easier alternative would have been
to add the Slackware lib directories to the library search paths).

In contrast, I recently wanted to run one of my middle-aged games
(Magic the Gathering from Microprose (from 1997), patched with the
latest patch I could find), but it would not run properly on WME: it
did not react properly to user input, and it crashed after a short
while. Some of the middle-aged games still work nicely, though (e.g.,
Grand Prix Legends).

My old hardware still works on Linux/AMD64 (as far as the new hardware
supports it; I finally had to give up my Soundblaster Pro card,
because the new motherboard has no ISA slots). In particular, my
NE2000 clone (ethernet card) still works on Linux/AMD64, while it does
not with W98, or WME (it did work with W95, but unfortunately W95
stopped working once I got a CPU with 1000MHz or more); well, at least
I am safe from the various dangers the Internet has for Windows.

Followups set to comp.arch (an advocacy group would be more
appropriate, but I don't read any of them at the moment).

- anton
 
Keith said:
So your belief is that it's somehow *harder* for the hardware vendor
to write a 64b driver (for Win) than a freebie programmer (Linux),
perhaps working without a complete set of specifications?

What I was comparing was an MS developer who has no driver yet (since
the 32-bit driver was supplied as binary by the hardware vendor) to a
Linux developer who has a working 32-bit Linux driver. In this case I
think the Linux developer has the easier time.

As for your comparison, the hardware vendor may also have a hard time
(they probably have no know-how for 64-bit ports, and probably often a
culture that makes this difficult).
Or leave them 32bit until they "graduate".

Not sure what you mean with "them", but if you are thinking about the
system calls, with a 64-bit kernel you have to convert at some point.

Or do you suggest completely partitioning the machine in a 32-bit part
with a 32-bit kernel and a 64-bit part with a 64-bit kernel, without
sharing any resources? That would be very hard (you probably want to
at least share the CPU), and not very practical (no file sharing
between 32-bit and 64-bit apps).
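
(A rough user-space sketch of the kind of translation meant here; the
structure layouts and the "syscall" below are invented for illustration.
Real kernels do this in their compat layers, e.g. Linux's compat_*
wrappers or WOW64 on Windows.)

#include <stdio.h>
#include <stdint.h>

/* Layout as a 32-bit application would see it. */
struct stat32 {
    uint32_t size;     /* file size: a 32-bit "long" in the old ABI */
    uint32_t mtime;
};

/* Native layout inside the 64-bit kernel. */
struct stat64n {
    uint64_t size;
    uint64_t mtime;
};

/* The native 64-bit implementation. */
static int sys_getstat(struct stat64n *out)
{
    out->size  = 5368709120ULL;   /* 5 GB: does not fit in 32 bits */
    out->mtime = 1100000000ULL;
    return 0;
}

/* The compat wrapper: call the 64-bit version, then narrow the result
 * into the 32-bit layout, flagging overflow rather than truncating. */
static int compat_sys_getstat(struct stat32 *out)
{
    struct stat64n tmp;
    int ret = sys_getstat(&tmp);

    if (ret)
        return ret;
    if (tmp.size > UINT32_MAX)
        return -1;                /* value the 32-bit ABI cannot express */
    out->size  = (uint32_t)tmp.size;
    out->mtime = (uint32_t)tmp.mtime;
    return 0;
}

int main(void)
{
    struct stat32 s;
    /* The 5 GB size cannot be represented in the 32-bit layout, so the
     * compat wrapper reports an error instead of silently truncating. */
    printf("compat call: %s\n",
           compat_sys_getstat(&s) == 0 ? "ok" : "overflow");
    return 0;
}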

- anton
 