Is Itanium the first 64-bit casualty?

  • Thread starter: Yousuf Khan
Paul Gunson said:
but won't a lot of drivers still be the responsibility of third-party
manufacturers? Or will all the new 64-bit drivers appearing on places
like planetAMD be part of the new beta...? If not, how else can MS make
any significant improvements... I mean, the OS does work quite well, but
most of the 3rd-party drivers are crap... or is it MS

Microsoft has a long history of writing drivers for popular hardware on
behalf of the vendors. Back in Sep 2003 when the public trial version was
built, this driver list was extremely limited; presumably Microsoft's goal
was to get people testing the OS itself, particularly WOW64, but not the
drivers themselves.

GPU drivers are still in pretty sad shape, though ATI and nVIDIA have made
great strides recently. Most other major vendors have shipped beta drivers,
and it's safe to assume Microsoft will enhance those, plus write their own
drivers for equipment from less-cooperative or defunct vendors, and run them
all through WHQL before the final release of XP64. IMHO that's the major
holdup in the release process...

Since beta participants are under NDA, it's hard to say what the exact
status of all this is; we'll get a better picture if/when Microsoft releases
a new public trial.
I was shocked to learn that it still required a floppy drive to install
XP-64.

That's only needed if you need additional drivers during install (e.g.
SATA); alternately, users can modify the installation ISO to include
additional drivers, but it's not for the faint of heart...

S
 
Floppy drives have been banned (or at least disabled) at many large corps
for years; USB flash drives, CD writers, etc. are headed towards the same
fate.

The scale of and scope for abuse with USB flash and CD is somewhat bigger,
though. This could be what drives TCPA, or some extension thereof,
forward and into widespread corporate acceptance.
Amusingly, the same corporations often have no problem with email
attachments.

I'm seeing more and more restrictions there too, in terms of content and
size, though it's partially driven by spam/virus considerations.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Nick said:
Can you give one SOLID reason why sparse addressing should be provided
by hardware and privileged code? Note that, as usual, I am not talking
about implementing the current methods in applications and libraries,
but about providing equivalent functionality and efficiency.

There is no explicit support for it in any of the languages I have
used, and it appears that it would be hard to map it directly onto
C, for example...

A "Might makes Right" kind of argument would be: on the systems
I've used it's always been done by the OS (i.e. privileged code +
hardware).

I guess those are not SOLID reasons, but it's the best I can think
of right now. :)

No ideas are coming to mind about how I would go about faking holes
with other protection and mapping schemes... I usually can come up
with a few good/bad ideas fairly quickly. Drawing a blank is usually
a warning sign for me.

Cheers,
Rupert
 
In comp.arch Stephen Sprunk said:
Floppy drives have been banned (or at least disabled) at many large corps
for years; USB flash drives, CD writers, etc. are headed towards the same
fate.

Amusingly, the same corporations often have no problem with email
attachments.

Yeah, and as long as you can connect to a web server outside the "perimeter",
you can upload the doc via HTTP POST anyway. Security by mandate from those
who do not understand security at all is just an exercise in pointless
futility.

You either trust the people you are letting handle the sensitive documents
and can expect them not to play loose with them, or they have to be on
non-networked computers with no removable media. And you had better be sure
they aren't walking in with a digicam or writing stuff down from memory.

It's a 'you lose, or you lose a lot of money and lose anyway' situation.
 
Sander said:
Yeah, and as long as you can connect to a web server outside the "perimeter",
you can upload the doc via HTTP POST anyway. Security by mandate from those
who do not understand security at all is just an exercise in pointless
futility.

Or you make DNS requests for [counter].[data].outsideparty.com, and the
name server logs whatever [data] is requested, while [counter] increments
to keep the data in order and to prevent caching.

This means you need to install a program to do this, but it leaks
information out pretty well.


Thomas
 
Sander Vesik said:
Yeah, and as long as you can connect to a web server outside the "perimeter",
you can upload the doc via HTTP POST anyway. Security by mandate from those
who do not understand security at all is just an exercise in pointless
futility.


The whole concept of a "perimeter" is pretty meaningless in the digital
world, and it's flawed at best in the physical world -- it assumes only good
guys are on the inside. Very few corps I've worked with do much in the way
of protecting internal data, other than what's legally required for
financial and HR stuff.

I think the problem is when you let physical security guys take over
information security; they just can't wrap their minds around the types of
threats that guys like us can come up with...
You either trust people you are letting to handle the sensitive documents
and can expect them to not play loose with them or they have to be on
non-networked computers with no removable media. And you had better
be sure they aren't walking in with a digicam or writing stuff down from
memory.

I've been to several non-classified military and defense contractor
facilities, and it's amazing the variations on the same broken theme...
Some search your stuff, some just ask what you have. Some hold floppies,
CD-RWs, flash drives, or some subset of those while you're inside, others
don't. Some take away digital cameras only, but others also take computers,
cell phones, etc. None have ever searched me on the way out, even when I'm
visibly bringing out papers or discs I didn't bring in.

A few places are draconian enough that it's more efficient for the staff to
have an off-site facility to meet with vendors, customers, etc. that isn't
subject to any security rules. Our tax dollars at work...

S
 
There is no explicit support for it in any of the languages I have
used and it appears that it would be hard to directly map it onto
C for example...

Actually, no, it's quite easy. The problem (as always with C) is that
the lack of restrictions means that it is too easy to access the data
improperly by oversight or negligence. And the better way (which needs
modern hardware to be upgraded to 1960s levels) would help with that.
A "Might makes Right" kind of argument would be: on the systems
I've used it's always been done by the OS (i.e. privileged code +
hardware).

Yes. The exceptions to that are mostly from the 1960s.
No ideas are coming to mind about how I would go about faking holes
with other protection and mapping schemes... I usually can come up
with a few good/bad ideas fairly quickly. Drawing a blank is usually
a warning sign for me.

Well, I don't have a blank, so here are two implementations:

1) Access is via macros, which are coded very like getc - i.e.
if the pointer is present, it is accessed directly, otherwise a
function is called. No operating system assistance needed, but
problems as above.

2) The hardware provides user-mode trapping of SIGSEGV, and the
logic is moved from the kernel to the language run-time system.
This is the right way to do this, in the sense that operations that
affect solely the process are handled solely by the process.


Regards,
Nick Maclaren.
 
Bob Niland said:
Does PCI Express fix this problem by mandating
64-bit compliance?

Sometimes the fix for old I/O problems is just
to junk the old standard. It was my impression
that PCI itself was in part a way to "solve"
the lack of shared-IRQ support on ISA.

I'm sure that PCI-E also fixes the voltage problem
(5v-tolerant 3.3v universal PCI cards are common,
but universal slots are uneconomical, with the result
that 66MHz and faster PCI slots are rare in retail PCs,
even though some of us could use the speed). And, without
having seen the spec, I'll bet PCI-E fixes the clocking
problem too (the max speed of shared slots is limited
to the slowest installed card).

PCI express fixes the "voltage problem" by mandating DC blocking caps.
And it fixes the "clocking problem" by only having one data rate,
although it does now have a "width problem" in that, I believe, like
InfiniBand it negotiates to the largest common width between the two
ends.
 
del cecchi said:
PCI express fixes the "voltage problem" by mandating DC blocking caps.
And it fixes the "clocking problem" by only having one data rate,

At least for now. Are you saying that we will never have a "version 2" that
supports a higher signalling rate? The evidence from all (well, at least
most) other serial interfaces is that such things will happen.
 
The scale of and scope for abuse with USB flash and CD is somewhat bigger,
though. This could be what drives TCPA, or some extension thereof,
forward and into widespread corporate acceptance.

You don't need TCPA for that - traditional access control mechanisms are
quite enough.

How do these guys handle it when the USB ports are required to connect to,
e.g., printers or other peripherals?

Jan
 
You don't need TCPA for that - traditional access control mechanisms are
quite enough.
How do these guys handle it when the USB ports are required to connect to,
e.g., printers or other peripherals?

The only feasible thing, I think, is disabling the drivers
for the specific hardware; with USB keyboards being the norm,
I'm sure someone will come up with a USB vampire tap/hub to
use if there are no available or accessible USB slots.

Casper
 
How do these guys handle it when the USB ports are required to connect to,
The only feasible thing, I think, is disabling the drivers
for the specific hardware; with USB keyboards being the norm,
I'm sure someone will come up with a USB vampire tap/hub to
use if there are no available or accessible USB slots.

No, one should have security policies on classes of devices (cf. VMS).
That would, for instance, allow you to mount a USB disk read-only if
wanted, or to deny access to that class of device entirely if so desired,
without resorting to such an error-prone procedure as disabling drivers.

Jan
 
Here, though, I have to disagree: I can't think of any type-safe language
After using Pascal & Algol on a Unisys NX (née A Series) machine, I really
have to disagree. Both make "segments" for every array dimension and every
record (structure) that is allocated.

In what way is that "making good use" of segments ?
For arrays, it can reduce the time taken to do array-bounds checks, but for
records it's a waste.

By the way, what do those "segments" look like? I.e. are the bounds
actually checked by hardware (I assume so, but IIRC the A series were fairly
unusual)? If so, where do you keep the size information, and who sets it
up (i.e. what is the cost in terms of memory management and such)?
The OS and the languages really don't have a problem with this.

Indeed, they shouldn't, but the question is: do you get any benefit
from using segments rather than using explicit software bounds checks
(which might be optimized away, of course)?
Yes, for a C compiler, because C (and the programs written in it) assume
you can do differences on pointers, that a pointer is a single "word", that
it can fit into some kind of integer, that the address space is flat, etc,
etc.
Use a language that doesn't let you assume those things (Algol or Pascal),
and a pointer is a pointer.

This part of the thread is specifically about compiling C (or a subset
thereof) safely.


Stefan
 
You don't need TCPA for that - traditional access control mechanisms are
quite enough.

For what? I'm talking about info theft/security in general. There's no
doubt that high capacity portable devices add risk for all kinds of
corporate info and corporations, even small ones, realized long ago that
people abuse privileges.
How do these guys handle it when the USB ports are required to connect to,
e.g., printers or other peripherals?

I'm not sure what was meant by "sealed off" and it hadn't actually happened
last time I umm "talked". I can think of several things: 1) driver shields
in the OS; 2) physical disconnection of all but keyboard & possibly mouse;
3) special interposer connectors. I don't see much need for physical
connection of printers and other peripherals in a corporate environment.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Stefan said:
Indeed, they shouldn't, but the question is: do you get any benefit
from using segments rather than using explicit software bounds checks
(which might be optimized away, of course)?

You can implement fine-grained security models with a smaller run-time
hit. Security is important now, and will continue to become more
important as people become more dependent on electronic widgets.


Cheers,
Rupert
 
Indeed, they shouldn't, but the question is: do you get any benefit
You can implement fine-grained security models with a smaller run-time
hit. Security is important now, and will continue to become more
important as people become more dependent on electronic widgets.

But if you can either trust the compiler or verify the compiler's output,
then you get the same benefit, right?


Stefan "who works on certifying compilation"
 
In comp.arch Jan Vorbrüggen said:
No, one should have security policies on classes of devices (cf. VMS).
That would, for instance, allow you to mount a USB disk read-only if
wanted, or to deny access to that class of device entirely if so desired,
without resorting to such an error-prone procedure as disabling drivers.

But this disables both USB-based flash drives and USB-connected CD-ROMs in
one go. What if your machine has a USB-based (and not SATA) CD-ROM in a
couple of years?
 
No, one should have security policies on classes of devices (cf. VMS).
But this disables both USB-based flash drives and USB-connected CD-ROMs in
one go. What if your machine has a USB-based (and not SATA) CD-ROM in a
couple of years?

Note the "mount...read-only" above. Alternatively, you might also want
to stop your lusers from importing malware into your intranet - and then
even CD-ROMs are a no-no.

Jan
 
I'm sure that PCI-E also fixes the voltage problem
(5v-tolerant 3.3v universal PCI cards are common,
but universal slots are uneconomical,

It's not that they're uneconomical. They simply don't exist. The PCI
standard does not define universal slots.

The intended migration path was that as PCI device chipsets advanced,
they'd go to being 3.3V with 5V tolerance. Once most cards were 3.3V
capable, hosts could switch to 3.3V signaling. There was never any
intent of offering dual voltage host slots.

What has actually happened, at least in the consumer market, is that
while a great number of the PCI cards you can buy today are in fact
"Universal", the motherboards have not followed suit. Backwards
compatibility and stagnation are the cheaper, safer option, so much so
that consumer boards never even took advantage of the real no-brainer
for PCI performance improvement: 64-bit slots.
 