32 vs 64


Jay

Is there really a big difference in performance when running 64-bit Vista vs
32-bit? I have an AMD Athlon 3200 Socket 939 64-bit processor on an Asus A8N
SLI Deluxe board (I don't use SLI), 2 GB RAM and a 250 GB HD. I find that in
Vista 64 (RTM) not all devices are supported and I can't seem to find
drivers. I'm considering running 32-bit Vista. Will there be a difference
in performance? When does one need 64-bit?
 
The real answer lies in the applications. If you are using some very
memory-intensive apps, and a 64-bit version of the app is available, then go
64-bit.

For most home users, 32-bit is what I would use, for its greater
hardware/driver support and application compatibility.
 
Jay said:
Is there really a big difference in performance when running 64-bit
Vista vs 32-bit? I have an AMD Athlon 3200 Socket 939 64-bit processor
on an Asus A8N SLI Deluxe board (I don't use SLI), 2 GB RAM and a 250
GB HD. I find that in Vista 64 (RTM) not all devices are supported
and I can't seem to find drivers. I'm considering running 32-bit
Vista. Will there be a difference in performance? When does one
need 64-bit?

Have a look at
http://www.microsoft.com/windowsxp/64bit/facts/top10.mspx
Yes, I know it is about XP, but the reasons remain the same for x64 vs
x86.
So unless you are running memory-intensive 64-bit-compiled applications,
then with only 2 GB of RAM, probably running 32-bit applications, you are
not going to see any significant benefit; and as you say, x64 requires
x64 drivers, which not all vendors have written yet.
 
I went back to 32-bit; too many issues in 64-bit, as you noted. Performance
is maybe a little slower in certain areas, but once you get used to it, no
big deal.

Ray
 
That's so far from true it's not even funny. 64-bit apps will use TWICE as
much memory.

Why? Because every standard data type is 64 bits wide instead of 32 bits
wide, twice the size. Result: let's say you're storing the number "8" 1000
times. That's 32 bits x 1000 = 4,000 bytes of memory used versus 64 bits x
1000 = 8,000 bytes of memory used.

Memory isn't the real reason (unless you've actually exhausted 4 GB of RAM,
or whatever that magic number is). The real reason for me is so I don't get
bit by the "year 2038-ish bug", when the number of seconds since January
1st, 1970 exceeds the maximum 32-bit number.

-Rob
 
Robert said:
That's so far from true it's not even funny. 64-bit apps will use TWICE
as much memory.

Why? Because every standard data type is 64 bits wide instead of
32 bits wide, twice the size. Result: let's say you're storing the
number "8" 1000 times. That's 32 bits x 1000 = 4,000 bytes of memory
used versus 64 bits x 1000 = 8,000 bytes of memory used.

Memory isn't the real reason (unless you've actually exhausted 4 GB of
RAM, or whatever that magic number is). The real reason for me is so I
don't get bit by the "year 2038-ish bug", when the number of seconds
since January 1st, 1970 exceeds the maximum 32-bit
number.

Rob:

This is wrong.

At least in C/C++, the most commonly used data types (int and double)
have the same size in Win32 and Win64 (32 and 64 bits respectively).

And well-written 32-bit apps are not subject to the 2038 problem.

The real change in Win64 is 64-bit addressing, which allows programs to
access more memory (if they need it). 64-bit pointers will themselves
use some additional memory, but the overall memory requirements will not
double as you suggest.

David Wilkinson
 
64-bit is the data size and the instruction size too, I believe. So 64-bit
programs use instructions that are 64 bits wide. How do you think you read
and store those "64-bit addresses"? Answer: with 64-bit numbers.

-Rob
 
David Wilkinson said:
This is wrong.

Well, for the price of the information, you can't complain.
At least in C/C++, the most commonly used data types (int and double) have
the same size in Win32 and Win64 (32 and 64 bits respectively).

Who uses C/C++ nowadays? I thought Microsoft pushed everyone to C#!
And well-written 32-bit apps are not subject to the 2038 problem.

Except, I believe it's an operating system problem (i.e. with RETRIEVING the
CURRENT date). Hold on a second, I have the pending POSIX refresh being
voted on (POSIX = Portable Operating System Interface); let me see what
it says about dates:

Page 372, <sys/time.h>:
Defines tv_sec as time_t. So the time is stored as a number of
seconds since _some_ start time.
How is time_t defined? Page 376 says:
"time_t and clock_t shall be integer or real-floating types."

If they're integer types and the integer is 32-bit, then time_t is bound to
roll over at _some point_; if not 2038, then when?

64-bit has less of a chance of this happening in my lifetime.
The real change in Win64 is 64-bit addressing, which allows programs to
access more memory (if they need it). 64-bit pointers will themselves use
some additional memory, but the overall memory requirements will not
double as you suggest.

What are the instruction sizes?

-rob
 
Robert:

Inline:
Who uses C/C++ nowadays? I thought microsoft pushed everyone to C#!

I don't do C# or .NET, but I would be very surprised if it were not the
same in this regard.
Except, I believe it's an operating system problem (i.e. with RETRIEVING
the CURRENT date). Hold on a second, I have the pending POSIX refresh
being voted on (POSIX = Portable Operating System Interface); let
me see what it says about dates:

Page 372, <sys/time.h>:
Defines tv_sec as time_t. So the time is stored as a number of
seconds since _some_ start time.
How is time_t defined? Page 376 says:
"time_t and clock_t shall be integer or real-floating types."

If they're integer types and the integer is 32-bit, then time_t is bound to
roll over at _some point_; if not 2038, then when?

64-bit has less of a chance of this happening in my lifetime.

Any programmer who used 32-bit time_t since the 2038 problem became
generally recognized should be shot.
What are the instruction sizes?

I thought we were talking about data. I believe a 64-bit executable is
bigger than the corresponding 32-bit one, but not twice as big.

David Wilkinson
 
David Wilkinson said:
Any programmer who used 32-bit time_t since the 2038 problem became
generally recognized should be shot.

Odd, I use the PHP programming language for many of my web-based programming
tasks, and it is still subject to the 2038 limit, "typically".

http://us2.php.net/date

See "ChangeLog", where it says:

Version  Description
5.1.0    The valid range of a timestamp is typically from Fri, 13 Dec
1901 20:45:54 GMT to Tue, 19 Jan 2038 03:14:07 GMT. (These are the dates
that correspond to the minimum and maximum values for a 32-bit signed
integer.) However, before PHP 5.1.0 this range was limited from 01-01-1970
to 19-01-2038 on some systems (e.g. Windows).


Note: it doesn't say it's "different on 64-bit systems" (yet).
I thought we were talking about data. I believe a 64-bit executable is
bigger than the corresponding 32-bit one, but not twice as big.

I'm talking about how much memory something takes up. A program is
something that is stored in memory. There may be more efficient (fewer
instructions to do the same thing) instruction sets in 64-bit; otherwise, I
believe, each instruction will take up 64 bits. "I believe" may just be a
fantasy, though; I'm schizophrenic.

-Rob
 