Hi Marty,

I tried wget and it didn't do what I wanted. Considering that I only played
with it for a few hours, I may have missed something. My dilemma is this:
We're getting the weather news as an XML feed from an outside site by
calling loadXML in a JavaScript function, which means hundreds of users on
our intranet are pulling the weather from the internet. At one point the
weather site went down and slowed our intranet page immensely. Our solution
was to fetch the XML feed separately and copy it to a local server as a
cache. That way users load the local copy while the server checks for a
newer version, and if no new version exists, or the weather server is slow
or down, users won't feel it.
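
For reference, what I've been running is roughly this (typed from memory;
the URL and cache path here are just placeholders):

    wget -N -q -P /var/www/cache http://weather.example.com/feed.xml

I believe the -N (timestamping) switch is what got rid of those filename.1,
.2, .3 extras, since it makes wget overwrite the file in place when the
remote copy is newer instead of saving numbered duplicates.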

However, when I used wget that way, the file kept getting replaced with an
empty HTML page containing nothing but a META tag that redirects to the
weather page after zero seconds. So the cached copy would just keep trying
to get the newer version over and over. I didn't think it was wget that was
doing that, but clicking on the link would not bring the file up in the
browser either; instead I would get an unknown error. I also tried creating
a link straight to the XML feed to see what happens if I download it
manually, and I got the "could not download" error from IE that usually
appears when the file doesn't exist or the URL is broken.
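
In case it helps, I understand the server's actual response can be dumped
with something like this (flags from the wget manual; placeholder URL
again):

    wget -S -O - http://weather.example.com/feed.xml

The -S switch prints the response headers and -O - sends the body to the
screen, so that should show whether the META redirect page is really what
the server itself is serving.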

Any suggestions on how to implement a cached copy of a file using wget?
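
Ideally I'd end up with something like the following running as a scheduled
job every few minutes, fetching to a temporary file first so a failed
download never clobbers the cached copy (all names illustrative, and I'm
not sure this is even the right approach):

    wget -q -T 30 -t 2 -O /var/www/cache/feed.xml.tmp http://weather.example.com/feed.xml \
        && mv /var/www/cache/feed.xml.tmp /var/www/cache/feed.xml
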
Thanks.
Bassam