Download URL File

  • Thread starter: Bassam Abdul-Baki

Bassam Abdul-Baki

Greetings,

Is there a DOS command that will allow users to download a file from an HTTP
URL? Thanks.

Bassam
 
Hi Marty,

I tried WGET and it didn't do what I wanted. Considering that I only played
with it for a few hours, I may have missed something. My dilemma is this:
we're getting the weather news as an XML feed from an outside site by
calling loadXML in a JavaScript function (hundreds of users on an intranet
getting the weather from the internet). All of a sudden, the weather site
went down and slowed our intranet page immensely. Our solution was to fetch
the XML feed separately and copy it to a local server (cached). That way,
users load the local copy while the server keeps checking for a newer
version; if no new version exists, or the remote server is slow or down,
users won't feel it. However, when I used WGET (after getting rid of those
filename.1, .2, .3 extras), the file kept getting replaced with an empty
HTML file containing a META refresh tag that redirected after zero seconds
to the weather page. So the cached copy would just keep trying to fetch the
newer version over and over. I didn't think it was WGET that was doing
this, but clicking on the link would not bring the file up in the browser;
instead I would get an unknown error. When I created a link to the XML feed
and tried downloading it manually, I got the "could not download" error
from IE that usually appears when the file doesn't exist or the URL is broken.

Any suggestions on how to implement a cached copy of a file using WGET?

Thanks.

Bassam
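
For reference, a minimal sketch of the caching setup described above, assuming
GNU Wget is run from a scheduled task on a Windows server; the cache path and
the temporary file name are illustrative, not taken from the thread:

  @echo off
  rem Sketch only: fetch the feed to a temporary file, then swap it into the cache.
  rem GNU Wget is assumed to be on the PATH; adjust CACHE_DIR to the real web root.
  set FEED_URL=http://www.weatherroom.com/xml/ext/22302
  set CACHE_DIR=C:\inetpub\wwwroot\weather

  rem -O forces a fixed output name, so Wget never piles up 22302.xml.1, .2, ... copies.
  wget -q -t 3 -T 30 -O "%CACHE_DIR%\22302.tmp" "%FEED_URL%"

  rem Replace the cached copy only if the download succeeded; otherwise the old copy
  rem stays in place and intranet users never notice the outage.
  if %ERRORLEVEL%==0 move /Y "%CACHE_DIR%\22302.tmp" "%CACHE_DIR%\22302.xml"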
 
Hi Torgeir,

Please take a look at my response to Clay. I've used WGET before, but not
CURL. WGET wouldn't work for me, so I'll try CURL instead. Thanks.

Bassam
 
Bassam Abdul-Baki said:
Any suggestions on how to implement a cached copy of a file using WGET?

Thanks.

Bassam


I've never had any problems with WGET, and I can't think of anything special
you would need to do for XML files.

Post the command line you are passing to WGET.
 
I was using "wget -t inf -o results.log http://www.weatherroom.com/xml/ext/22302".
The default file name in this case is 22302.xml. The link works now, so I can't
duplicate the error. I tried cURL based on Torgeir's suggestion (see the other
reply), and it worked like a charm.
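
The thread doesn't show the exact cURL command that worked, but a call along
these lines would be the usual equivalent (the options below are assumptions,
not necessarily what was actually run):

  rem Illustrative only. -sS hides the progress meter but still reports errors,
  rem --fail keeps an HTTP error page from overwriting the cache, and -o always
  rem writes to the same file name, so no numbered duplicates accumulate.
  curl -sS --fail --retry 3 -o 22302.xml http://www.weatherroom.com/xml/ext/22302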

Bassam

P.S. - I like your e-mail address.
 