64-bit computing

Hi,

Introduction:
I am about to write a custom DB (for video storage) using unmanaged C++.
The optimal technology for such a DB would be the use of memory-mapped files,
BUT 32-bit computing limits the address space to ~4GB; to be able to manage
more than 4GB, the machine and code should be 64-bit compatible.

The Query:
Does the compiler that ships with VS.NET 2003 support the 64-bit instruction set?
Is there anything special I should keep in mind while developing such a
system (concerning 64-bit compatibility)?

Any comments would be appreciated...
 
Hi Nadav!
Does the compiler that ships with VS.NET 2003 support the 64-bit instruction set?
No. You need to use the latest PSDK:
http://www.microsoft.com/downloads/details.aspx?FamilyId=D8EECD75-1FC4-49E5-BC66-9DA2B03D9B92
Is there anything special I should keep in mind while developing such a
system (concerning 64-bit compatibility)?

If you first want to develop a 32-bit app, then you should enable
"Detect 64-Bit Portability Issues (/Wp64)".
Any comments would be appreciated...

Some general links:
See: Porting 32-Bit Code to 64-Bit Code
http://msdn.microsoft.com/library/en-us/vccore/html/vcgrfPorting32BitCodeTo64BitCode.asp

See: Overview of the compatibility considerations for 32-bit programs on
64-bit versions of Windows Server 2003 and Windows XP
http://support.microsoft.com/kb/896456/EN-US/

See: General Porting Guidelines
http://msdn.microsoft.com/library/en-us/win64/win64/general_porting_guidelines.asp

See: 64-Bit Issues (DDK)
http://msdn.microsoft.com/library/e..._f910e5d8-a732-4faa-a8d2-d4de021dc78d.xml.asp

--
Greetings
Jochen

My blog about Win32 and .NET
http://blog.kalmbachnet.de/
 
Nadav said:
Hi,

Introduction:
I am about to write a custom DB (for video storage) using unmanaged
C++. The optimal technology for such a DB would be the use of
memory-mapped files, BUT 32-bit computing limits the address space to
~4GB; to be able to manage more than 4GB, the machine and code should
be 64-bit compatible.

IMO, using a single linear file mapping is unlikely to be a good solution on
any platform due to address fragmentation and other issues. Note that SQL
Server manages just fine to address hundreds of GB of database on 32-bit Windows.
The trick to handling larger files is to move one or more smaller file-mapping
"windows" over the entire file. With that technique you're really only limited
by disk space.
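
As a rough illustration of that technique (not code from this thread), one
possible sketch with the Win32 file-mapping API; it assumes an existing file
"video.db" at least as large as the 64 MB window, and leaves out most error
handling:

#include <windows.h>

int main()
{
    const SIZE_T viewSize = 64 * 1024 * 1024;      // 64 MB window

    HANDLE hFile = CreateFileA("video.db", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    // A mapping object covering the whole file; no address space is used yet.
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (!hMap) return 1;

    // Map only a 64 MB view starting at some offset.  The offset must be a
    // multiple of the system allocation granularity (64 KB).
    ULONGLONG offset = 0;                          // move this to slide the window
    void* view = MapViewOfFile(hMap,
                               FILE_MAP_READ | FILE_MAP_WRITE,
                               (DWORD)(offset >> 32),          // high part of offset
                               (DWORD)(offset & 0xFFFFFFFF),   // low part of offset
                               viewSize);
    if (view)
    {
        // ... read/write through 'view' as ordinary memory ...
        UnmapViewOfFile(view);                     // then remap at a new offset
    }

    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}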

You don't need a 64-bit machine for what you're trying to do.

-cd
 
Hi Carl,

Thanks for your response. Using large file mappings doesn't prevent the use
of the technique you have suggested: mapping a file into memory will map the
physical pages to memory BUT will not actually load them; the pages are
loaded by the kernel as a result of a page fault caused when trying to access
the paged memory block. Hence, accessing a fixed-size memory window (part of
the already mapped DB file) will have the same fragmentation impact. Using a
large file mapping (while accessing only a small subset window of it at a
time) saves the burden of remapping a subset of the whole file each time the
'window' is to be moved, and this would simplify implementation...

Any comments would be appreciated,

Nadav.
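
For what Nadav describes, a minimal sketch (again assuming an existing
"video.db" and with only basic error handling) would map the whole file in a
single view; built as a 64-bit binary this works for files well beyond 4 GB,
while a 32-bit process runs out of address space long before that:

#include <windows.h>

int main()
{
    HANDLE hFile = CreateFileA("video.db", GENERIC_READ | GENERIC_WRITE, 0,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (!hMap) return 1;

    // dwNumberOfBytesToMap == 0 maps the entire file as one contiguous view;
    // nothing is read yet, pages are faulted in on first access.
    char* db = (char*)MapViewOfFile(hMap, FILE_MAP_READ | FILE_MAP_WRITE, 0, 0, 0);
    if (!db) return 1;

    char first = db[0];           // page fault -> first file page is loaded
    (void)first;

    UnmapViewOfFile(db);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}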
 
Thanks for your response. Using large file mappings doesn't prevent the use
of the technique you have suggested: mapping a file into memory will map the
physical pages to memory BUT will not actually load them; the pages are
loaded by the kernel as a result of a page fault caused when trying to access
the paged memory block. Hence, accessing a fixed-size memory window (part of
the already mapped DB file) will have the same fragmentation impact. Using a
large file mapping (while accessing only a small subset window of it at a
time) saves the burden of remapping a subset of the whole file each time the
'window' is to be moved, and this would simplify implementation...

Any comments would be appreciated,
I think I understand your method, but a few things come to my mind:

* How do you map this file? I think by loading the complete file into memory
first.

* I thought this is stored in a virtual file? And I always thought that
the virtual disk space was about 2 times the physical memory in that
computer. So even though a 64-bit executable has 16 exabytes of memory
addresses, only the amount of virtual memory will determine how big it can
actually get.

* It only works for a single user.

I believe your system would be perfect if you have to access random bytes of
data, like a 3D image, but for typical database applications this might be
overkill. But I do agree that the access of memory will be very fast, since
the OS does all this and you do not need to program anything.
 
I think I understand your method, but a few things come to my mind:

* How do you map this file? I think by loading the complete file into memory
first.
Well, the actual data isn't loaded into memory; rather, it uses the same
paging mechanism used with virtual memory. Hence, each physical disk page has
its VIRTUAL memory address; the actual memory allocation is done by the I/O
Manager when a page fault is generated (the page fault is generated when
trying to access a page that is not currently resident in memory). With that
in mind, files mapped to memory aren't loaded until their mapped address
space is accessed, in which case a page fault is generated; this causes the
page to be read from disk and loaded into memory.
* I thought this is stored in a virtual file? And I always thought that
the virtual disk space was about 2 times the physical memory in that
computer. So even though a 64-bit executable has 16 exabytes of memory
addresses, only the amount of virtual memory will determine how big it can
actually get.
Well, the manner in which I intend to use mapped files doesn't rely on virtual
memory; rather, a big existing DB file is mapped into memory, and all memory
pages in this range are 'mapped' to this file. When a page is paged out of
memory, its data is committed to the file. This is a very convenient way to
access a file in the same manner memory is accessed.
* It only works for a single user.
Well, I would have to check it...
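
A tiny sketch of the demand-paging and write-back behaviour described above,
assuming 'view' points into a writable view returned by MapViewOfFile over
the DB file:

#include <windows.h>

void TouchAndFlush(char* view, SIZE_T offset)
{
    // First access to an untouched page raises a page fault; the memory
    // manager transparently reads the corresponding file page into RAM.
    char c = view[offset];
    (void)c;

    // A write dirties the page; it is written back to the file when the
    // page is trimmed, when the view is unmapped, or when flushed explicitly.
    view[offset] = 0x42;
    FlushViewOfFile(view + offset, 1);   // force the write-back now
}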
 
Nadav said:
Hi Carl,

Thanks for your response. Using large file mappings doesn't prevent
the use of the technique you have suggested: mapping a file into
memory will map the physical pages to memory BUT will not actually
load them; the pages are loaded by the kernel as a result of a page
fault caused when trying to access the paged memory block. Hence,
accessing a fixed-size memory window (part of the already mapped DB
file) will have the same fragmentation impact. Using a large file
mapping (while accessing only a small subset window of it at a time)
saves the burden of remapping a subset of the whole file each time
the 'window' is to be moved, and this would simplify
implementation...

Yes, it will simplify the implementation IF (and only if) you're only
interested in dealing with video clips of limited size. Otherwise, you're
simply postponing the point at which you'll need a windowing/paging system,
and you'll end up going to the moving-window technique even on 64-bit Windows.

Unless you're doing something that requires very sophisticated random access
to large data, I wouldn't even consider using a file mapping. Simply using
unbuffered async I/O will give you greater efficiency in most cases
(after all, that's what the file mapping is using behind your back).

-cd
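
A minimal sketch of the unbuffered, overlapped read Carl refers to (the file
name "video.db" is only a placeholder; FILE_FLAG_NO_BUFFERING requires
sector-aligned buffers, offsets and lengths):

#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE hFile = CreateFileA("video.db", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED,
                               NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    // VirtualAlloc returns page-aligned memory, which satisfies the
    // sector-alignment requirement of FILE_FLAG_NO_BUFFERING.
    const DWORD chunk = 64 * 1024;
    void* buf = VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) return 1;

    OVERLAPPED ov = {0};
    ov.Offset = 0;                         // low 32 bits of the file offset
    ov.OffsetHigh = 0;                     // high 32 bits of the file offset
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    // The read is started and control returns immediately; the thread can
    // keep working while the disk request completes.
    if (!ReadFile(hFile, buf, chunk, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
        return 1;

    DWORD bytesRead = 0;
    GetOverlappedResult(hFile, &ov, &bytesRead, TRUE);   // wait for completion
    printf("read %lu bytes\n", bytesRead);

    CloseHandle(ov.hEvent);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(hFile);
    return 0;
}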
 