changing all int to int64

Abubakar

Hi,
the project we are working on uses "int" for all integral-type data (I have
also declared size_t in the places where CRT functions return size_t). Now
our application has to deal with files larger than 4 GB. The typical places
where my code handles more than 4 GB are calculating file sizes and storing
them somewhere; the figure can be 6 or 7 GB or more, for example when
breaking a 6 or 7 GB archive file into small blocks of, say, 500 KB. I was
about to make some changes in my code, replacing int with int64 *where
needed*, when somebody came up to me and said, "Why don't you just
find-and-replace all the int with int64? That way you won't have to think
about where to make changes and won't have to do any debugging."
Now I want to know: can/should I do this find-and-replace of all int with
int64? What I have in mind is: why should I change all of my code to use the
int64 datatype when only a small portion of it requires that change? Please
advise.

Regards,
..ab
 
If it's not needed in many places, and you can keep track of all the places to
change, change just what you need.

If you can do a test version, and it's a hassle to find all the right places
to change it, you could change them all and see if there are any problems
(memory, speed).

If int is mostly used for counters and such, it's probably not useful to
change. In loops, int is generally the fastest type for the processor, while
__int64 is a bit slower on a 32-bit system. I've never noticed the speed
difference to be significant, though, since speed depends more on cache hits
than on variable size.

You might want to use typedefs, so undoing changes is one line.
 
Abubakar said:
Now I want to know: can/should I do this find-and-replace of all int with
int64? [...]

The person who made the suggestion does not sound like a programmer. A global
find-and-replace will not save you from debugging and having to think. For
example, all your printf calls will be broken, you'll need to add casts
everywhere, and so on.

Instead, make a typedef for the file offset and change only the variables and
functions that deal with file size. You'll also have to change which file
APIs you call, so that they work with the larger type.
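A sketch of that approach, assuming a hypothetical project-wide typedef named fileoffset. The portable fseek/ftell pair shown here is itself limited to long offsets, which is exactly why on Windows the 64-bit CRT variants _fseeki64/_ftelli64 (or GetFileSizeEx) would replace these calls:

```cpp
#include <cstdint>
#include <cstdio>

// One-line typedef: only declarations that truly carry file sizes/offsets
// change type, and undoing the whole change means editing this single line.
typedef std::int64_t fileoffset;

// Size of an already-open file. Portable sketch only: ftell returns long, so
// on a 32-bit CRT this caps at 2 GB; MSVC code would call _ftelli64/_fseeki64.
fileoffset file_size(std::FILE* f) {
    long pos = std::ftell(f);                      // remember current position
    std::fseek(f, 0, SEEK_END);
    fileoffset size = static_cast<fileoffset>(std::ftell(f));
    std::fseek(f, pos, SEEK_SET);                  // restore position
    return size;
}
```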
 