Oh, believe me, Intel has taken the idea to heart. Their goal is to
get software written to a proprietary ISA. Then _they_ will own a
piece of the world economy forever.
Too bad for them that x86 instructions are, by and large, generated by a
code-translation process called compiling. The reason for sticking to the
x86-based architecture isn't how the software is written; it's that the
switch, assuming everyone suddenly agreed they wanted it, couldn't happen
overnight, so only a very small number of developers are even giving it any
serious thought.
For a software developer it isn't that relevant which architecture they are
developing for, as long as it is big- or little-endian, the word size is a
multiple of 8 bits, and numeric values are encoded in two's complement. When
that is true (and often even when it isn't), the source code they write in
Java, (ANSI) C, (ISO/IEC) C++, C#, VB, Delphi, OCaml, Ada and other
higher-level languages, be they functional, procedural or whatnot, contains
only a small portion that couldn't be written to be platform agnostic.
Mostly, when writing code for a specific platform using specific APIs,
lock-in to specific hardware platforms occurs due to circumstance. The
Java VM and .NET MSIL are steps in the direction of supporting more hardware
with a "unified" (if I can take the plunge and misuse terminology slightly to
make a point) compiler frontend (and, to a degree, backend).
When software is deployed using, say, C, the code has been statically
compiled for a specific platform, right? x86 is one widespread such
platform, right? Yet even this platform has wide diversity: an instance of
it (assuming at least a 32-bit implementation) can land anywhere in a wide
range of fragmentation:
- x86 32-bit base (assumed always supported)
- x86 32-bit "pro" (Pentium Pro-specific extensions)
- x87 floating-point co-processor (optional)
- mmx
- sse (includes mmx)
- sse2 (includes mmx, sse)
- sse3 (includes sse2, sse, mmx)
And that is just from Intel Corp. AMD has their own extensions like 3DNow!
and the "plus" versions of 3DNow! and MMX, and lately x86-64, a.k.a. AMD64.
Phew! That's a mouthful, and if the software developer wants to exploit the
specific strengths of these instruction sets, it means a lot of work: either
different code is implemented in different dynamic libraries, or a single
dynamic library or executable detects at run time which "codepaths" can be
taken and chooses the optimal one using some heuristic built into the
generated code, either automatically by the compiler (think Intel C++ 8.1 or
similar) or by choices the developer makes. This is no longer a trivial
amount of work.
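To make that concrete, here's a minimal sketch of such run-time dispatch in
C. Take it as an illustration of the idea only: the function names and the
trivial add-two-arrays workload are made up by me, and it leans on the
GCC/Clang __builtin_cpu_supports builtin rather than poking CPUID by hand. A
real multi-codepath binary would also compile each codepath with the
appropriate -m flags or target attributes.

/* cpu_dispatch.c - compile with e.g. gcc -O2 -msse cpu_dispatch.c
 * (on x86-64, SSE is baseline, so plain gcc -O2 works too) */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

/* Plain C fallback: runs on any 32-bit x86. */
static void add_arrays_plain(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

/* SSE codepath: four floats per instruction. */
static void add_arrays_sse(float *dst, const float *a, const float *b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(dst + i,
                      _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    for (; i < n; i++)           /* leftover tail elements */
        dst[i] = a[i] + b[i];
}

typedef void (*add_fn)(float *, const float *, const float *, int);

/* Decide once, at run time, which codepath this particular CPU can take. */
static add_fn choose_add(void)
{
    __builtin_cpu_init();        /* GCC/Clang CPU-detection builtin */
    return __builtin_cpu_supports("sse") ? add_arrays_sse : add_arrays_plain;
}

int main(void)
{
    float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, dst[8];
    add_fn add = choose_add();
    add(dst, a, b, 8);
    for (int i = 0; i < 8; i++)
        printf("%g ", dst[i]);
    printf("\n");
    return 0;
}

The point is simply that the choice between codepaths is made once, at run
time, on the user's machine, instead of at compile time on the developer's.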
If the compiler's frontend could store and transmit its intermediate-format
results, which would then be used in the later phases of compilation by the
compiler backend running on the client/host computer, it would be a step
forward.
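As a toy illustration of what "store and transmit the intermediate format,
finish the job on the client" could mean (my own sketch, not how the JVM or
CLR actually encode anything), picture the portable format as a tiny
stack-machine bytecode. The client-side "backend" below merely interprets
it, but the same instructions could just as well be translated into native
x86, AMD64 or PowerPC code on the spot.

#include <stdio.h>

/* A made-up, platform-agnostic "intermediate format": a tiny stack machine. */
enum op { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

struct insn { enum op op; int arg; };

/* The client-side "backend": here just an interpreter, but it could equally
 * well translate these instructions into native code for whatever CPU it
 * finds itself on. */
static void execute(const struct insn *code)
{
    int stack[64];
    int sp = 0;

    for (;; code++) {
        switch (code->op) {
        case OP_PUSH:  stack[sp++] = code->arg;              break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];     break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];     break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);        break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4 -- the same instructions could be shipped to any platform. */
    const struct insn program[] = {
        { OP_PUSH, 2 }, { OP_PUSH, 3 }, { OP_ADD, 0 },
        { OP_PUSH, 4 }, { OP_MUL, 0 }, { OP_PRINT, 0 }, { OP_HALT, 0 },
    };
    execute(program);
    return 0;
}

The same array of instructions could be shipped, unchanged, to any platform
that has the small backend component installed.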
The Java VM and .NET MSIL are attempts to get this right, aren't they? The
implementations aren't yet optimal, but in many cases they are efficient
enough. For instance, the Java VM nowadays uses linear-scan register
allocation, which is a tradeoff between compilation speed and the efficiency
of the generated code. Graph-coloring register allocation might produce,
say, at least 30% more efficient code, but it would be a lot more expensive
to run, so these on-client code generators are compromises and employ a bag
of tricks and tradeoffs, including the HotSpot, erm, technology in the JVM.
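For the curious, here is a deliberately tiny sketch of the linear-scan idea
(my own toy example, not HotSpot's code): live intervals, pre-sorted by their
start point, are walked once; each gets a free register if one is available,
and when none is, the interval that lives longest is the one spilled to
memory.

#include <stdio.h>

#define NUM_REGS 2           /* pretend the machine has only two registers */
#define SPILLED  (-1)

/* A live interval: the range of instructions over which a value is needed.
 * Intervals are assumed to be pre-sorted by increasing start point. */
struct interval { const char *name; int start, end; int reg; };

static void linear_scan(struct interval *iv, int n)
{
    struct interval *active[NUM_REGS] = { NULL };  /* who owns each register */

    for (int i = 0; i < n; i++) {
        int free_reg = -1;

        /* Expire intervals that ended before this one starts, freeing regs. */
        for (int r = 0; r < NUM_REGS; r++) {
            if (active[r] && active[r]->end < iv[i].start)
                active[r] = NULL;
            if (!active[r])
                free_reg = r;
        }

        if (free_reg >= 0) {                 /* a register is available */
            iv[i].reg = free_reg;
            active[free_reg] = &iv[i];
        } else {
            /* No free register: spill whichever live interval ends last. */
            int victim = 0;
            for (int r = 1; r < NUM_REGS; r++)
                if (active[r]->end > active[victim]->end)
                    victim = r;
            if (active[victim]->end > iv[i].end) {
                iv[i].reg = active[victim]->reg;   /* steal the register */
                active[victim]->reg = SPILLED;
                active[victim] = &iv[i];
            } else {
                iv[i].reg = SPILLED;               /* spill the newcomer */
            }
        }
    }
}

int main(void)
{
    struct interval iv[] = {
        { "a", 0, 6, 0 }, { "b", 1, 3, 0 }, { "c", 2, 8, 0 }, { "d", 4, 5, 0 },
    };
    linear_scan(iv, 4);
    for (int i = 0; i < 4; i++) {
        if (iv[i].reg == SPILLED)
            printf("%s -> spilled to memory\n", iv[i].name);
        else
            printf("%s -> r%d\n", iv[i].name, iv[i].reg);
    }
    return 0;
}

Graph coloring, by contrast, would build an interference graph over all
these intervals and search for a coloring, which tends to produce better
assignments at a much higher compile-time cost, which is exactly the
tradeoff described above.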
There is, however, no question about it: storing and transmitting the
intermediate format to the client and having the compiler backend as an
installable, upgradeable component in the client environment is a step
forward from statically compiling and shipping platform-specific binaries,
which is, if nothing else, a logistically expensive practice as it is. And
let's throw a wild prediction into the air: the Power architecture will gain
popularity on the desktop, AMD64 and Intel's EM64T version of it will also
gain ground, and suddenly there will be great pressure to support all of
these. What do you predict software developers will do at that point? Well,
of course, they will seek to cross the fence where it's lowest. ;-o
At any rate, it doesn't take a genius to see that x86 is already a pretty
bloated platform as far as diversity goes (just look at the number of
extensions Intel themselves have introduced, listed above!). Once the task
of supporting the latest-and-greatest while still working on older systems
is handled the pragmatic way, it isn't a big stretch to make things
completely virtualized in this regard. It's been happening for years, folks,
and you would be kidding yourselves to think the trend will suddenly reverse
itself. x86 won't up and die all of a sudden, or even slowly fade out; it
will just mean there is more competition than AMD vs. Intel vs. "the rest of
the x86 vendors". These folks, if they want to bring out x86-based systems,
will have to deliver more power more cheaply than anyone else, and that's
what they've been good at all along.
There will still be a choice to make about which OS you want to use, Windows
or Something Else. If, however, applications use something like the .NET
platform or Java, the choice of OS isn't as critical as it is now (for
example, today if you want to play some game you must check which platforms
are supported ;-) But say some application is written for .NET: you would be
able to run it on, say, Linux. Of course this assumes there is no unmanaged
code used, which isn't all that certain. Does Mono even support unmanaged
code? Quite frankly, I don't have a clue. (<- attention! IMPORTANT! This is
the chance to sum up this post! Use it! ;-)
Actually, now that I think of it, I don't care.