Is Itanium the first 64-bit casualty?

  • Thread starter: Yousuf Khan
  • Start date:
Nate Edel said:
Well, you can kinda-sorta get the OS on the G5, as others have noted.
And Apple, while consumer hardware, is only kinda-sorta in the
mass-market game. Even within the Apple market, are the G5s down
throughout the line yet, or are they just the top-end model?

Does AIX, which runs on POWER4 chips, work on the G5?

Yousuf Khan
 
I suppose if you write bad enough code, you need that much linear
memory for essentially parallel tasks. But typical applications
don't.

Not bad code, just LOTS of graphics. Eye candy and graphics seem to
be what sells in video games for the most part, so I expect that we'll
see the data set for games continue to expand at a rather prodigious
rate.
 
Tony said:
Not bad code, just LOTS of graphics. Eye candy and graphics seem to
be what sells in video games for the most part, so I expect that we'll
see the data set for games continue to expand at a rather prodigious
rate.

Sure, but that stuff doesn't need a linear address space. Segments
work just fine.
 
Does AIX, which runs on POWER4 chips, work on the G5?

Not yet, but IBM plans to support AIX on the BladeCenter JS20 (PPC970) in
the third quarter of 2004. Don't know if that means it will run on
Apple/non-IBM machines...



-jf
 
UT2003 has >2GB of stuff on disk it uses for rendering.
The next UT engine will use >2GB of stuff in RAM for the rendering; they
are waiting for a 64-bit platform in the meantime because Windows as it
stands sucks above 2GB (unusable).


Regards.

I suppose if you write bad enough code, you need that much linear
memory for essentially parallel tasks. But typical applications
don't.

Translation: I know buggerall about what anyone else in the computer
world does, but I have an opinion on it anyway.

Maynard
 
Sure, but that stuff doesn't need a linear address space. Segments
work just fine.

What segments? That was hashed to death here recently - segments don't
work for extending the address space in any practical fashion... unless you
mean PAE<cough><splutter>, which is more trouble than recompiling for
64-bit mode. Cheaper to just get an x86-64 CPU and recompile.

BTW there's more to x86-64 than 64-bit addressing which we want: 16 general
registers and 16 FP registers with direct look-up... i.e. lose that ****in'
stack.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
If "mass market" == Windows, sure. Linux runs AMD64 quite well. SuSE
9.1 is running on an Opteron quite happily at home.

The Debian port appears to be up and running as well. (ie, actually
usable, with a relatively simple install, rather than the install from
hell of a month or two ago). It's a pure 64-bit port -- if you want to
run 32 bit binaries then you'll need to install a 32-bit Debian in a
chroot. (Fortunately, debootstrap makes this a one-liner).

Phil
 
Warren said:
(e-mail address removed) (Yousuf Khan) wrote in
Perhaps this is the first case of a processor acting as a catalyst: The

No, quite definitely not the first. Plenty of architectures out there
that died a quiet death and were resurrected in another form for other
markets. :)
Itanium sparked the 64-bit-for-consumer trend, but isn't actually going to
take part in it ;-)

Much as I hate to say it: I think the Alpha did; NT was first ported to
Alpha/MIPS/PowerPC. IA-64 came much later. In practice I only saw
NT on Alpha actually in production, which is why I didn't say MIPS. :/

Worth noting that DEC did initially point Alpha at Embedded and low
end workstation space, and they continued their spasmodic efforts to
push it at the desktop for a long time.

Alpha appears to have had quite a large "Open Source" user base for a
long time, but that doesn't really count as consumer. However a lot of
that 64bit clean push was accomplished with Alphas, and that lowered
the barrier of entry for vendors of 64bit gear.

Cheers,
Rupert
 
In said:
Worth noting that DEC did initially point Alpha at Embedded and low
end workstation space, and they continued their spasmodic efforts to
push it at the desktop for a long time.

Care to elaborate on the difference between "low end workstation" and
"desktop"? Since 1994, all low end Alpha workstations have actually been
PCs with an Alpha processor instead of an Intel processor. What can be
more "desktop" than such a system?
Alpha appears to have had quite a large "Open Source" user base for a
long time, but that doesn't really count as consumer. However a lot of
that 64bit clean push was accomplished with Alphas, and that lowered
the barrier of entry for vendors of 64bit gear.

This is true. DEC OSF/1 exposed plenty of open source code that wasn't
64-bit clean. Plenty of proprietary code, too, which severely restricted
the number of commercial applications available for that platform, during
the first years.

Dan
 
Sure, but that stuff doesn't need a linear address space. Segments
work just fine.

Segments? Ya mean like PAE?!?! Not a chance in hell! Do this the
*RIGHT* way, i.e. a 64-bit flat linear address space, not some
ugly-as-all-hell kludge!

64-bit may not be NEEDED to get more than 2GB (3GB in some cases) of
memory space, but it's the RIGHT way to do it. All the other
solutions are way more trouble than they're worth.
 
The Debian port appears to be up and running as well. (ie, actually
usable, with a relatively simple install, rather than the install from
hell of a month or two ago). It's a pure 64-bit port -- if you want to
run 32 bit binaries then you'll need to install a 32-bit Debian in a
chroot. (Fortunately, debootstrap makes this a one-liner).

That's the last of 'em then. It looks like EVERY major Linux
distribution has managed to beat Microsoft to market with a usable
AMD64/x86-64 operating system (at least as long as you don't count
Slackware as a "major distribution", which most people don't these
days). SuSE, RedHat, Mandrake, Gentoo, Turbolinux and now Debian are
all out there now. Debian's distribution is still in the "unstable"
stream, but those who know Debian should know that Debian "unstable"
is roughly equivalent to a pre-SP1 release of Windows rather than a beta
version.

Ohh, and FreeBSD and OpenBSD also have full support for AMD64 as well.
Kind of makes you wonder just what the heck is taking MS so long?!
 
This is true. DEC OSF/1 exposed plenty of open source code that wasn't
64-bit clean.

From the "flogging a dead horse" department: More precisely, code that
was not I32LP64 clean. I am pretty sure my code and lots of other
code was ILP64 clean, and lots of the I32LP64-clean code would not
work on an ILP64 system (IIRC the Cray T3E was such a system), and
probably lots of the ILP64-clean and I32LP64-clean code will need
changes to work on an IL32LLP64 system (Win64, right?). So you should
not use "64-bit-clean" for programs that are just I32LP64-clean.
Plenty of proprietary code, too, which severely restricted
the number of commercial applications available for that platform, during
the first years.

There were tricks around that (-taso etc.). Netscape was 32-bit code
until it was cleaned up in Mozilla. Or was it? Fedora Core 1 for
AMD64 still contains a 32-bit Mozilla. :-(

Followups to comp.arch.

- anton
 
Segments? Ya mean like PAE?!?! Not a chance in hell! Do this the
*RIGHT* way, i.e. a 64-bit flat linear address space, not some
ugly-as-all-hell kludge!

64-bit may not be NEEDED to get more than 2GB (3GB in some cases) of
memory space, but it's the RIGHT way to do it. All the other
solutions are way more trouble than they're worth.

That is quite simply wrong.

Using multiple hardware segments to create a software segment that
is larger than the indexing size is, I agree, an abomination. It
has been done many times and has never worked well.

But a single flat, linear address space is almost equally ghastly,
for different reasons. It is one of the surest ways to ensure
mind-bogglingly revolting and insecure designs.

What is actually wanted is the ability to have multiple segments,
with application-specified properties, where each application
segment is inherently separate and integral. That is how some
systems (especially capability machines) have worked.


Regards,
Nick Maclaren.
 
That's a bit rough on PAE. It's not a bad trick for the
OS to use when it needs lots of RAM to run lots of processes
that each fit inside 32 bits. Transparent to applications.
Admittedly useless for single apps that need >32 bits.
But a single flat, linear address space is almost equally
ghastly, for different reasons. It is one of the surest ways
to ensure mind-bogglingly revolting and insecure designs.
What is actually wanted is the ability to have multiple
segments, with application-specified properties, where each
application segment is inherently separate and integral.
That is how some systems (especially capability machines)
have worked.

I'm not sure what you mean here. What is so wrong with flat
address spaces, virtualized through the paging mechanism?
That's what we have on all PMode OSes today. Apps usually
start at the same addresses (IP & stack) and think they
have the machine to themselves. Paging hardware (per-process
page tables) keeps processes separate.

-- Robert
 
Dan said:
Care to elaborate on the difference between "low end workstation" and
"desktop"? Since 1994, all low end Alpha workstations have actually been

Marketing and the perceptions of PHBs with the chequebooks.
PCs with an Alpha processor instead of an Intel processor. What can be
more "desktop" than such a system?

Not my call. Reminds me a little of the thread about some Intel dude
calling SPARC "proprietary".

I think we should play the Marketoids at their own game : Let's start
referring to IA-64 as "Legacy" now that we have a dual-sourced 64bit
architecture in the x86 world.


Cheers,
Rupert
 
That's a bit rough on PAE. It's not a bad trick for the
OS to use when it needs lots of RAM to run lots of processes
that each fit inside 32 bits. Transparent to applications.
Admittedly useless for single apps that need >32 bits.

Yes, precisely. And it also allows a 32-bit application efficient
access to large memory-mapped files - a seek on such things is little
more expensive than a TLB miss.
I'm not sure what you mean here. What is so wrong with flat
address spaces, virtualized through the paging mechanism?
That's what we have on all PMode OSes today. Apps usually
start at the same addresses (IP & stack) and think they
have the machine to themselves. Paging hardware (LDTs)
keeps processes separate.

Yes, they do, don't they? And many of the foulest problems are
caused by various forms of corrupting one (logical) segment by
means of a pointer to another. There are many ways of tackling
that problem, and one of the best is secure segmentation - but
in the sense of a capability model and not a page table hack.

If you don't care about robustness and security, then obviously a
single flat address space is best. But even current systems are
now trying to separate the executable segments from the writable
ones, which is moving away from a single flat address space. If
you think about it:

Why shouldn't I be able to load a module onto the stack, or use
part of an executable as a temporary stack segment? We used to be
able to do that, after all, and a genuinely integral flat address
space would allow it.

The reason is that it is generally accepted to be sacrificing too
much robustness and security for flexibility.


Regards,
Nick Maclaren.
 
In comp.sys.ibm.pc.hardware.chips Nick Maclaren said:
Why shouldn't I be able to load a module onto the stack,

You can. Trampoline code does this, and making the stack
no-exec really doesn't increase security in the face of
buffer overflows corrupting return addresses. An exploit
can be pure data, no code.
or use part of an executable as a temporary stack segment?

Not so wise. Code pages are marked read-only to permit
all sorts of nice things like sharing between processes
(think of how many httpd processes running on a webserver).
It also permits discarding codepages instead of swapping out.
Just reload from disk image. Of course, a sufficiently
sophisticated CoW policy could handle this.

-- Robert
 
You can. Trampoline code does this, and making the stack
no-exec really doesn't increase security in the face of
buffer overflows corrupting return addresses. An exploit
can be pure data, no code.


Not so wise. Code pages are marked read-only to permit
all sorts of nice things like sharing between processes
(think of how many httpd processes running on a webserver).
It also permits discarding codepages instead of swapping out.
Just reload from disk image. Of course, a sufficiently
sophisticated CoW policy could handle this.

Er, that was the use of reductio ad absurdum, and the paragraph does
not make much sense on its own. I could provide reasons why either
of those is a good or bad idea (think exception handling for the
latter), and all combinations have been done in the past.


Regards,
Nick Maclaren.
 
Maynard Handley said:
I suppose if you write bad enough code, you need that much linear
memory for essentially parallel tasks. But typical applications
don't.

Translation. I know buggerall about what anyone else in the computer
world does, but I have an opinion on it anyway.

Thanks Mr. boogerall... but 64-bit is really not that essential for the
general consumer market just yet. It's a niche right now. That's why it's
important to have a "hybrid" 64-bit CPU that still works (perfectly) with
the old 32-bit code. No kludgy or slow emulations! If 64-bit is all we
needed, then Itanium would be selling like hotcakes. Face it, there is no
killer app creating a 64-bit buzz.
 