That explains the "x86", but I'm even more intrigued now. Was the
number 80386 chosen at random, or does it have some significance?
After all, it's not divisible by 32. I can see I shall have to have a
search of Wikipedia. Thanks for putting me on the trail.
Wikipedia may elaborate for you. I think this article is a decent
starting point:
http://en.wikipedia.org/wiki/8086
Jack's explanation isn't really complete, as the 32-bit-ness of the CPU is
irrelevant to the actual origin of x86. The "original" x86 CPU was the
8086, a 16-bit CPU. The original IBM PC used the 8088, an 8-bit version
of the 8086 (well, it had an 8-bit data bus, anyway). "x86" is used to
describe the entire line of CPUs that evolved from that beginning, and/or
the instruction set (e.g. AMD, Cyrix x86-compatible CPUs).
In other words, x86 isn't used to refer to 32-bit. It's used to refer to
an entire class of CPUs and their instruction sets. The marketing
geniuses have simply complicated the issue by using x64 to refer to the
64-bit iteration of the x86 line. This has led to a sort of retroactive
redefinition of x86 as meaning 32-bit, but that's not really what it
means.
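If you want to see how that naming split actually surfaces in code, here's a minimal C sketch (assuming an MSVC or GCC/Clang toolchain; the macro names are the compilers' own, not anything defined in this thread). The 32-bit build target is still identified as "x86"/i386 and the 64-bit one as "x64"/x86_64, which is exactly the terminology muddle described above:

#include <stdio.h>

int main(void)
{
    /* Compiler-predefined macros mirror the marketing names:
       MSVC defines _M_IX86 for 32-bit x86 and _M_X64 for 64-bit,
       while GCC/Clang define __i386__ and __x86_64__. */
#if defined(_M_X64) || defined(__x86_64__)
    printf("Built for 64-bit x86 (\"x64\" / x86-64)\n");
#elif defined(_M_IX86) || defined(__i386__)
    printf("Built for 32-bit x86 (IA-32)\n");
#else
    printf("Built for some other architecture\n");
#endif

    /* Pointer width confirms the bit-ness of the build, independent of the label. */
    printf("Pointer size: %u bits\n", (unsigned)(sizeof(void *) * 8));
    return 0;
}

Compile the same file as a 32-bit and a 64-bit target and both builds are, in the family sense, still "x86" programs; only the second one gets the "x64" label.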
Kind of dumb, IMHO, but then I find most marketing kind of dumb (including
the term ".NET").
Pete