AMD/Linux vs Intel/Microsoft


E said:
But I like your ideas better. Even with Intel, since Intel has
their own compiler, which is, as far as I know, free to use with
Linux for non-commercial use. And who would know how to create a
compiler for an Intel chip better than Intel? Although I also read
[...]

Part of the issue is that the code in the Linux kernel has a lot of
"GCC-isms" in it: quirks and ways of coding things to make them work
correctly.

In the past the Linux kernel even had code which worked around known
bugs in earlier versions of the GCC compiler. Even with the newest
2.6 kernel series I think (correct me if I'm wrong) that you still need
to use GCC 2.95.x to compile the kernel. Many distributions have one
compiler for the kernel and another for the userland.

I use gcc 3.x (whatever is the latest in Sid at the time I compile it),
and you can supposedly use icc (the Intel compiler) to compile, for
Intel CPUs only. Additionally, icc is supposed to have better
optimizations for Intel CPUs, probably at least in part because they
don't have to support so many other architectures.

 
OpenVMS has been ported successfully to the Itanium2 processor, as has
Tru64 UNIX. Whether HP keeps Tru64 UNIX is another question, though.
Whether M$ has ported XP to the Itanium2 isn't known here, nor have I
seen any ads on the M$ website about it.

WinXP was ported to IA-64 long ago. Here's the webpage for WinXP
64-bit edition for IA-64:

http://www.microsoft.com/windowsxp/64bit/default.asp

I guess this wasn't really done for the Itanium2 since it predates
that chip, but it certainly will run with no troubles at all on it.
More recently MS released Win2003 Server for IA-64.

FWIW at least in theory WinNT was designed to be very portable. The
entire system was built with a hardware abstraction layer that was
supposed to minimize the amount of architecture-specific code. Now,
I'm not sure just how successful this attempt was, but the basis is
there. At various times NT was reported to have been running on
PowerPC and MIPS in addition to the Alpha, i386 and IA-64 instruction
sets that it was officially released for. Combined with the upcoming
AMD64 port that makes for a fairly impressive array of architectures,
though several of them were stillborn.
 
chrisv said:
You can't tell me it would take all THAT much manpower for a company
like ATI to compile their drivers for the half-dozen or so leading Linux
distributions.

It does take THAT much manpower and more. ATI hasn't had a stellar
record by any stretch even while compiling for Wintel only, and one can
only expect even lower quality when compiling for 6-10 different
platforms. nVidia has had an only slightly better record. But there
are other, much smaller manufacturers of hardware which will be buried
under a load they cannot handle properly, so the users of ALL
platforms do suffer, and it will only get worse.

Competition is usually good, but the ideal at the hardware/OS level
would be a standards-based black-box approach, where, say, both Intel
and AMD would agree to optimize a black-box processor with predefined
standards for performance/price. Same for OSes. Of course nobody would
agree to that.
 
Kevin Lawton wrote:

I don't think there is much profit made out of drivers, though.

Except the hardware sales. I chose an Epson printer rather than another
Canon (I've owned several) because of the crappy Linux support with
Canon products. Not helping with Linux drivers for their products cost
Canon the sale of a $400 photo printer!
 
Del Cecchi said:
When I look at the brochure for the x450 and x455, it says "Supports
Microsoft Windows Server 2003, Enterprise Edition "
These are IA64 boxes. Is that XP?

It is XP's successor. Don't you just love vendor version numbering?
I am especially fond of "HP-UX 11i version 2" (aka HP-UX 11.23).

Cheers,
 
Stephen Sprunk said:
Were there any platforms besides i386, Alpha, and MIPS?

The RISC line that SGI was using, I think, for a while. I do know that
SGI tried to push NT under their own brand on x86 for a period, but it
brought them nothing but a bad name.
I don't see why HP would continue supporting Tru64 if they've already got a
commitment to HP-UX on IA64. Also, HP appears to be putting some level of
support into Linux and GCC -- at least on IA64. How many different unix
flavors can a single vendor realistically ship and support?

Good point and one that is good to take to the bank. I only see it being
supported under the current federal contracts... but after that I
suspect it's the axe for Tru64.
Windows Server 2003 has been ported to IA64. Whether XP was ported or
not is now moot.

Most of the old DEC line uses Apache now anyway. If there is a
mass-produced IA64 for public use, XP may only be able to compete if the
price is really low. Other than that, Windows can't compete against
OpenVMS on the IA64.
 
When I look at the brochure for the x450 and x455, it says "Supports
Microsoft Windows Server 2003, Enterprise Edition "
These are IA64 boxes. Is that XP?

Dunno. What I meant was that I should have used the present continuous,
rather than the perfect. It is an ongoing port.


Regards,
Nick Maclaren.
 
In comp.arch Peter Köhlmann said:
David Magda wrote:
Set your wayback machine to the early-mid-90's and remember that
Microsoft sold Windows NT for a 64-bit platform (Alpha) before.
Rumors have it that other RISC platforms were targets back then
[...]
Actually it ran on PowerPC and MIPS as well, if I remember
correctly. This was NT 3.5(1) and maybe 4.0. It's one of the reasons
why NT has/had a hardware abstraction layer (HAL).
It did not run under NT4.
And the Alpha version ran in 32-bit mode.

I have a Finnish-language OEM NT4 CD that contains versions for Alpha,
i386, MIPS and PPC. By contrast, the service pack 4 CD enclosed in the
same package supports only the Alpha and i386 versions.

-a
 
It does take THAT much manpower and more.

Sorry, I don't understand how. They've done the driver in source code
already. They've written instructions on how to compile the kernel
modules. How much of the end-user's time do they expect this whole
process to take? An hour? That seems to be the maximum reasonable time
to expect an end-user to take to get a dang driver installed.

Now, how much time would it take for someone who really knew what they
were doing, because they worked with these drivers for a living? I would
expect a half-hour TOPS. Now, you multiply that by a half-dozen
distributions, maybe double it again for the two most recent versions of
XFree, and you have like ONE DAY of an engineer's time.

What am I missing? And even if my time estimates are unrealistic, it's
sure a hell of a lot easier for them to do it, ONCE for each distro/XFree,
rather than asking thousands of end-users to make the individual effort.

I'm the customer, ATI. I'm the guy with the money that YOU want. Make
some effort to help me out!
 
I don't know what problems you'll have with ATI cards and Linux (up to
now, I've mainly used nVidia cards, and a Matrox card), but if you go to
http://www.ati.com/support/faq/linux.html, you'll see that ATI does
support Linux, and does provide proprietary binary drivers on
http://mirror.ati.com/support/driver.html.

These did not work. The install failed and the messages told me that I
had to compile kernel modules (using MD9.2). Googling, I found that others
were getting the same error messages that I was.
 
Tony Hill said:
FWIW at least in theory WinNT was designed to be very portable. The
entire system was built with a hardware abstraction layer that was
supposed to minimize the amount of architecture-specific code. Now,
I'm not sure just how successful this attempt was, but the basis is
there. At various times NT was reported to have been running on
PowerPC and MIPS in addition to the Alpha, i386 and IA-64 instruction
sets that it was officially released for. Combined with the upcoming
AMD64 port that makes for a fairly impressive array of architectures,
though several of them were stillborn.

My first NT box was a MIPS based DEC 5000 (or some such thing) prior to
Alpha being done (I was investigating graphics support for Alpha/NT). A DEC
group in Seattle (DECWest) did the work as far as I remember. At the first
NT developers conference, I recall that they made a lot of noise about how
NT was designed to be portable across architectures. Much later on, in
connection with some console firmware research - I noted that OpenBoot
had a "thin veneer" implementation of the "BIOS" interfaces that allowed
NT to boot on Alpha, which apparently had also been used for PowerPC
(again, IIRC from a hazy memory).

The basic problem really is that the Windows market is a shrink wrap SW
market. Despite interesting things like FX!32, other architectures just had
no real advantage unless SW vendors (including Microsoft!) would provide
native implementations of their apps.
 
chrisv said:
Sorry, I don't understand how. They've done the driver in source
code already. They've written instructions on how to compile the
kernel modules. How much of the end-user's time do they expect
this whole process to take? An hour? That seems to be the
maximum reasonable time to expect an end-user to take to get a
dang driver installed.

Now, how much time would it take for someone who really knew what
they were doing, because they worked with these drivers for a
living? I would expect a half-hour TOPS. Now, you multiply that
by a half-dozen distributions, maybe double it again for the two
most recent versions of XFree, and you have like ONE DAY of an
engineer's time.

What am I missing? And even if my time estimates are unrealistic,
it's sure a hell of a lot easier for them to do it, ONCE for each
distro/XFree, rather than asking thousands of end-users to make
the individual effort.

I'm the customer, ATI. I'm the guy with the money that YOU want.
Make some effort to help me out!

Adapting drivers to the quirks of various systems is non-trivial,
so I don't blame them for not issuing multiple versions. However
all manufacturers of anything should be publishing their complete
interface specification, timing requirements, etc. so that anyone
can build an accurate driver. This doesn't even require that they
publish the source to their own drivers, although doing so would
probably be helpful to both sales and the public, not to mention
driver quality.

For all you know a part of the driver may be required to upload a
program in goombah machine code to the device to launch it. That
may save a ROM, or ease modification, and the reluctance to
publish is because that goombah code exposes trade secrets.
 
Fred said:
The basic problem really is that the Windows market is a shrink wrap
SW market. Despite interesting things like FX!32, other
architectures just had no real advantage unless SW vendors (including
Microsoft!) would provide native implementations of their apps.

Shrink-wrap needn't be a barrier.

Two things that the average software house aren't willing to pay
for are...

Buying machines of every flavour of supported hardware.
Running a full testing exercise on all platforms.

If MS (or anyone else) had really wanted RISC editions of NT to
succeed, they could have done worse than given away zillions of
x86->RISC cross-compilers. Sure, it wouldn't have been properly
tested on the new platform, but most software isn't properly
tested on the developer's platform either. :o)

One thing the average user isn't willing to pay for is...

Buying new copies of all the apps he or she has already paid for.

MS presumably *had* fully tested and supported RISC versions of
(say) Office, but failed to include them on the distribution CD.
(It wouldn't have been hard, since most of the Office bloat is
in text and graphic resources. There's not much code in there.)
 
Adapting drivers to the quirks of various systems is non-trivial,
so I don't blame them for not issuing multiple versions. However
all manufacturers of anything should be publishing their complete
interface specification, timing requirements, etc. so that anyone
can build an accurate driver. This doesn't even require that they
publish the source to their own drivers, although doing so would
probably be helpful to both sales and the public, not to mention
driver quality.

IMHO - Not gonna happen.

Just exposing the register definitions isn't enough to write functioning
code - trust me - I've done this for a living. You need some theory of
operation to go along with it, and even better some code samples, or a
real-live engineer who can answer your questions. Nor is just
understanding the HW alone enough. Quite often it is the SW techniques
which drive the HW that are critical to getting the card to perform.
How that HW/SW interaction works may well have been part of how the
chip was designed - but it may not be obvious by looking at the HW spec.

Exposing the HW spec and theory of operation (and/or code) exposes what the
vendors view as their "crown jewels" to their competition. So, a graphics
vendor develops the HW and a NT driver using their own resources and which
is _highly_ optimized. They sell the driver as a binary, and don't expose
their IP.

They hire a contractor, and give them some minimal HW documentation and
"perhaps" a peek at the NT code and have them write a Linux driver. The
Linux guy takes the DRI/DRM/XAA/Mesa base and hacks something up that mostly
works (but probably doesn't optimally work - heck it may punt TCL). They
don't expose the interesting interfaces directly to the public, but only the
relatively uninteresting ones. They don't have to take advantage of
whatever cool new stuff that the HW provides, and the result is "good
enough" for the handful of people using Linux for 3D graphics.

They execute legal agreements with system vendors to allow the vendor to see
the "good stuff" for porting to a proprietary UNIX. And they don't do this
often or lightly. Not enough volume. *And* you have given a 3rd party
access to your crown jewels.

Windows is where the volume and the money is at, you don't risk that for low
volume Linux/UNIX markets.
 
In comp.arch CBFalconer said:
For all you know a part of the driver may be required to upload a
program in goombah machine code to the device to launch it. That
may save a ROM, or ease modification, and the reluctance to
publish is because that goombah code exposes trade secrets.

IIRC Matrox' G-series cards require microcode for the triangle setup
engines and for some reason or other the instruction set could not be
released to the public. They got around it by including some precompiled
binary chunks in the public DDK and saying "use these".

-a
 
chrisv said:
Sorry, I don't understand how. They've done the driver in source code
already. They've written instructions on how to compile the kernel
modules. How much of the end-user's time do they expect this whole
process to take? An hour? That seems to be the maximum reasonable time
to expect an end-user to take to get a dang driver installed.

The whole point of running Linux is to "roll your own". There are DOZENS
of distros out there, and there is no point in releasing a binary-level
driver for each one.

You have Linux, learn how to use it.
 

Aack. Curse you and your follow-ups. I guess I won't see anyone's
responses to this message, or to my last two messages in this thread,
because I'm in c.o.l.a., not a.c.h.

So, la de da, HAND, and *pbbbbttt* on your follow-ups.
 
| On Fri, 09 Jan 2004 03:47:06 +0000, Bogdan wrote:
|
||
||| You can't tell me it would take all THAT much manpower for a company
||| like ATI to compile their drivers for the half-dozen or so leading
||| Linux distributions.
||
|| It does take THAT much manpower and more.
|
| Sorry, I don't understand how. They've done the driver in source code
| already. They've written instructions on how to compile the kernel
| modules. How much of the end-user's time do they expect this whole
| process to take? An hour? That seems to be the maximum reasonable
| time to expect an end-user to take to get a dang driver installed.
|
| Now, how much time would it take for someone who really knew what they
| were doing, because they worked with these drivers for a living? I
| would expect a half-hour TOPS. Now, you multiply that by a half-dozen
| distributions, maybe double it again for the two most recent versions
| of XFree, and you have like ONE DAY of an engineer's time.
|
| What am I missing? And even if my time estimates are unrealistic,
| it's sure a hell of a lot easier for them to do it, ONCE for each
| distro/XFree, rather than asking thousands of end-users to make the
| individual effort.
|
| I'm the customer, ATI. I'm the guy with the money that YOU want. Make
| some effort to help me out!

It might be worth mentioning here that some card manufacturers DO
consider it worth their while developing, testing and making available
Linux (and some other OS) drivers for their products.
Avansys have done Linux drivers for their SCSI cards for ages - I know -
I've been using them since the days of RedHat 4.x.
Matrox make Linux drivers available for most of their graphics cards,
though these are not supplied on CD - you have to download them.
Creative, on the other hand, have chosen to expose enough of their sound
card proprietary 'secrets' to make third-party driver development
practicable.
At the end of the day, isn't it up to market forces to push things in
the right direction?
I run Linux (and Windoze, BeOS, etc, etc) so I build machines using
products from Avansys, Matrox, Creative, etc.
If a manufacturer chooses not to supply drivers for your chosen OS, then
what is the point in 'spitting into the wind', building a machine to run
that OS using parts lacking decent driver support?
Kevin.
 