Kevin, I've got basically the same architecture, and for the most part it works
well. I am having an issue with file sizes, though. I've got the max
request limit set at 1GB - we need the ability to upload large files - but
if I upload anything larger than, say, 450MB, I get an out-of-memory error.
Does the upload control try to cache the whole file in memory before it writes
it out? Is this an issue for your system? Thanks.
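
For reference, here's roughly what I have in web.config (assuming the
standard ASP.NET httpRuntime setting - maxRequestLength is measured in
kilobytes, so the value below is 1GB):

<configuration>
  <system.web>
    <!-- maxRequestLength is in KB: 1048576 KB = 1 GB -->
    <httpRuntime maxRequestLength="1048576" />
  </system.web>
</configuration>
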
Jerry
Kevin Waite said:
The previous posts describe the basic machinery for doing an upload.
However, the typical reason for uploading a file is to make it available to
an audience. You could put it in a publicly readable folder, but this would
(obviously) allow anyone to view it, which may not be what you want. We
avoid this by channelling all file access through a page that does
authentication and access control, so the actual physical location stays
hidden.
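
As a rough sketch of that gatekeeper (illustrative rather than our exact
code - the handler name, the query-string parameter, and the lookup helpers
are all placeholders):

using System.Web;

// Gatekeeper handler: all downloads come through here so we can check
// the caller before streaming the file from its hidden location.
public class SecureFileHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Authentication/authorization check; substitute your own rules.
        if (!context.Request.IsAuthenticated)
        {
            context.Response.StatusCode = 401;
            return;
        }

        // Map the public upload ID to the hidden physical file
        // (see the GUID/database mapping described below).
        string uploadId = context.Request.QueryString["id"];
        string physicalPath = LookupPhysicalPath(uploadId); // placeholder
        string friendlyName = LookupDisplayName(uploadId);  // placeholder

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=\"" + friendlyName + "\"");
        // TransmitFile streams from disk without buffering the whole
        // file in memory.
        context.Response.TransmitFile(physicalPath);
    }

    private string LookupPhysicalPath(string uploadId)
    {
        // Database lookup goes here.
        throw new System.NotImplementedException();
    }

    private string LookupDisplayName(string uploadId)
    {
        throw new System.NotImplementedException();
    }
}
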
A second issue you might need to consider is having multiple
versions of a document. To handle this I rename all uploaded files to a
GUID (keeping the file extension) and then have a database table that maps
an upload ID to a physical file. The user thinks they are getting
'MyFile.doc', but what they actually receive is '4a452..etc.doc'. This
renaming/mapping allows me to keep arbitrary versions of 'MyFile.doc',
which is essential in situations where we need to keep the state of a file
at a given time, not just the latest. I hope this gives some pointers; a
rough sketch of the save-and-map step follows.
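
Something along these lines - the UploadMap table and its columns are
invented names for illustration, and this assumes the ASP.NET 2.0
FileUpload control and SQL Server:

using System;
using System.Data.SqlClient;
using System.IO;
using System.Web.UI.WebControls;

public static class UploadStore
{
    // Save an upload under a GUID name and record the friendly-name
    // mapping. 'UploadMap' and its columns are placeholders, not a
    // real schema.
    public static Guid SaveUpload(FileUpload upload, string storageFolder,
                                  string connectionString)
    {
        string displayName = Path.GetFileName(upload.FileName);
        string extension = Path.GetExtension(displayName);

        // The physical name is a GUID plus the original extension.
        string physicalName = Guid.NewGuid().ToString() + extension;
        upload.SaveAs(Path.Combine(storageFolder, physicalName));

        // Every upload gets its own row, so saving 'MyFile.doc' again
        // creates a new version instead of overwriting the old one.
        Guid uploadId = Guid.NewGuid();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO UploadMap " +
            "(UploadId, DisplayName, PhysicalName, UploadedAt) " +
            "VALUES (@id, @display, @physical, @at)", conn))
        {
            cmd.Parameters.AddWithValue("@id", uploadId);
            cmd.Parameters.AddWithValue("@display", displayName);
            cmd.Parameters.AddWithValue("@physical", physicalName);
            cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        return uploadId;
    }
}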