I'm very puzzled by the code that VC++ 2005 generated for a simple
statement:
unsigned short *a, b;
b += *a;
generated this
0062CD6B mov eax,dword ptr [ebp-64h]
0062CD6E movzx ecx,byte ptr [eax]
0062CD71 movzx edx,word ptr [ebp-1Ch]
0062CD75 add ecx,edx
0062CD77 call @ILT+11960(@_RTC_Check_4_to_2@4) (560EBDh)
0062CD7C mov word ptr [ebp-1Ch],ax
What I don't understand is why the compiler widened both variables to 32
bits and did the arithmetic in 32 bits. Why not do the arithmetic in 16
bits?
This code needs to reproduce a checksum from another machine, and it is
essential that the sum be 16 bits long.