Multiple simultaneous file writes from different users

  • Thread starter: Stephen Corey

Stephen Corey

I'm writing an app that basically just appends text to a text file on a
Win2K Server. They fill out a form, click a button, and 1 line is
appended to the file. Multiple people will run this app at the same
time, and all will write to the same file. If I do an immediate flush()
on the file after writing the line, is there still a risk that 2
simultaneous writes will collide? If so, what's the best way to handle
this type of file write?

Thanks!
 
Stephen said:
I'm writing an app that basically just appends text to a text file on a
Win2K Server. They fill out a form, click a button, and 1 line is
appended to the file. Multiple people will run this app at the same
time, and all will write to the same file. If I do an immediate flush()
on the file after writing the line, is there still a risk that 2
simultaneous writes will collide? If so, what's the best way to handle
this type of file write?

One way is to acquire an exclusive lock on the file for writing: each
user opens the file with no sharing, retrying the open until it
succeeds, writes its line, and closes the file.

Another way is to serialize access through a named mutex. All users
open the file with FILE_SHARE_WRITE sharing and wait on the mutex. When
a user acquires ownership of the mutex, it calls SetFilePointerEx to
seek to the end of the file, writes its line, and calls ReleaseMutex.

I vote for the first approach; it's simpler. The second approach is
worth the trouble only if you need the extra performance (since each
user opens the file only once).
 
Stephen Corey wrote:
I'm writing an app that basically just appends text to a text file on a
Win2K Server. They fill out a form, click a button, and 1 line is
appended to the file. Multiple people will run this app at the same
time, and all will write to the same file. If I do an immediate flush()
on the file after writing the line, is there still a risk that 2
simultaneous writes will collide?

The only way to avoid collisions would be to use WriteFile directly and
write each line *in one call*. I *think* that WriteFile guarantees
atomicity. If you go through any higher-level buffering facility (FILE*,
ofstream, whatever...), the buffer will lose this guarantee (even if you
call flush after each write).
If so, what's the best way to handle
this type of file write?
Use a named mutex to serialize file writing.
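A sketch of that named-mutex idea, assuming Win32; the mutex name
"Global\\LogFileMutex" and the AppendLineMutex helper are illustrative
choices, and the file handle is assumed to be opened with
FILE_SHARE_WRITE sharing.

```cpp
// Sketch (untested): serialize appends through a named mutex.
// Every process that uses the same name gets the same kernel object.
#include <windows.h>

bool AppendLineMutex(HANDLE hFile, const char* line, DWORD len)
{
    // Name is illustrative; pick one all writers agree on.
    HANDLE hMutex = CreateMutexA(NULL, FALSE, "Global\\LogFileMutex");
    if (hMutex == NULL)
        return false;

    WaitForSingleObject(hMutex, INFINITE);  // block until we own the mutex

    LARGE_INTEGER zero = {};
    SetFilePointerEx(hFile, zero, NULL, FILE_END);  // seek to current end
    DWORD written = 0;
    BOOL ok = WriteFile(hFile, line, len, &written, NULL);

    ReleaseMutex(hMutex);
    CloseHandle(hMutex);
    return ok && written == len;
}
```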

Arnaud
MVP - VC
 
Arnaud said:
The only way to avoid collisions would be to use WriteFile directly and
write each line *in one call*. I *think* that WriteFile guarantees
atomicity. If you go through any higher-level buffering facility (FILE*,
ofstream, whatever...), the buffer will lose this guarantee (even if you
call flush after each write).

Unfortunately, before WriteFile you must call SetFilePointerEx to seek
to the end of the file, and without synchronization it can happen that
each process first calls SetFilePointerEx and only then WriteFile. The
outcome is that the first write is overwritten by the second, and if the
first write was longer, its "tail" will remain at the end of the file.
 
Mihajlo said:
Unfortunately, before WriteFile you must call SetFilePointerEx to seek
to the end of the file, and without synchronization it can happen that
each process first calls SetFilePointerEx and only then WriteFile. The
outcome is that the first write is overwritten by the second, and if
the first write was longer, its "tail" will remain at the end of the
file.

You're right: as soon as there is more than one API call per operation,
you need to synchronize accesses with a mutex.

Arnaud
MVP - VC
 
Arnaud Debaene said:
You're right : as soon as there is more than one API call for each
operation, you need to synchronize accesses with a mutex.

Or, use LockFileEx to lock the tail of the file (and block until the lock is
granted), write a new line with WriteFile, and then unlock the region with
UnlockFile. No need for a separate Mutex to be
created/managed/communicated.

-cd
 
Carl said:
Or, use LockFileEx to lock the tail of the file (and block until the lock is
granted), write a new line with WriteFile, and then unlock the region with
UnlockFile. No need for a separate Mutex to be
created/managed/communicated.

I've never used it, so I don't know how it works, but how can you lock
something that doesn't exist yet? I mean, we're supposed to lock the
tail of the file; is that zero bytes at the end of the file? When the
lock is granted I think we still need to SetFilePointer, but if that
file pointer is outside the locked region, what have we accomplished?
It's confusing; how about an example?
 
Mihajlo said:
I've never used it, so I don't know how it works, but how can you lock
something that doesn't exist yet? I mean, we're supposed to lock the
tail of the file; is that zero bytes at the end of the file? When the
lock is granted I think we still need to SetFilePointer, but if that
file pointer is outside the locked region, what have we accomplished?
It's confusing; how about an example?

You can lock a range of bytes that extends beyond the current end of the
file.

GetFileSize() to determine current end
LockFileEx() to lock from current end to current end + allowance
SetFilePointer() to position to the actual current end of file
WriteFile()
UnlockFile()

Any other writers will block at LockFileEx. If no thread needs to read
from the file at the same time, you can simply use [0, MaxFileSize) as
the range of bytes to be locked and not bother estimating the position
where you'll eventually write. If you lock just the tail (as outlined
above), the tail must be long enough (e.g. 1 GB) to guarantee that, by
the time you are actually able to write, the line you write still falls
within the range you locked.
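The steps above can be sketched roughly as follows (untested; assumes
the handle was opened with GENERIC_WRITE and read/write sharing, the
AppendLine helper is hypothetical, and UnlockFileEx is used as the
Ex-style counterpart of UnlockFile).

```cpp
// Sketch (untested): append under a LockFileEx byte-range lock.
#include <windows.h>

bool AppendLine(HANDLE hFile, const char* line, DWORD len)
{
    LARGE_INTEGER size;
    if (!GetFileSizeEx(hFile, &size))       // step 1: current end of file
        return false;

    // Step 2: lock a generous tail starting at the current end, so the
    // bytes we eventually write stay inside the locked range even if
    // the file grows while we wait.
    const DWORD tailLow = 0xFFFFFFFF, tailHigh = 0x0FFFFFFF;
    OVERLAPPED ov = {};                      // lock offset lives here
    ov.Offset     = size.LowPart;
    ov.OffsetHigh = (DWORD)size.HighPart;
    if (!LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0,
                    tailLow, tailHigh, &ov)) // blocks until granted
        return false;

    // Step 3: re-seek to the end; another writer may have appended
    // before our lock was granted.
    LARGE_INTEGER zero = {};
    BOOL ok = SetFilePointerEx(hFile, zero, NULL, FILE_END);
    DWORD written = 0;
    if (ok)
        ok = WriteFile(hFile, line, len, &written, NULL);  // step 4

    UnlockFileEx(hFile, 0, tailLow, tailHigh, &ov);        // step 5
    return ok && written == len;
}
```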

You can actually use a single file handle to implement many kinds of
synchronization by using the file locking APIs. The most straightforward
implementation of the readers/writers lock uses a file handle, since the
locking APIs already directly implement the readers/writers lock for files.
A file handle opened for shared access can be thought of as a range of
MaxFileSize individually lockable (and waitable) bytes. Of course, it's
likely that using the file locking API to try to simulate thousands of
mutexes would be inefficient compared to mutexes, but in theory it could be
done.

-cd
 
Carl said:
You can lock a range of bytes that extends beyond the current end of the
file.

...

Thanks for the useful info, from now on LockFileEx is my friend :-)
 