Python equivalent of a given wget command
There is also a nice Python module named wget
that is pretty easy to use. Keep in mind that the package has not been updated since 2015 and has not implemented a number of important features, so it may be better to use other methods. It depends entirely on your use case. For simple downloading, this module is the ticket. If you need to do more, there are other solutions out there.
>>> import wget
>>> url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'
>>> filename = wget.download(url)
100% [................................................] 3841532 / 3841532
>>> filename
'razorback.mp3'
Enjoy.
However, if wget
doesn't work (I've had trouble with certain PDF files), try this solution.
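As one standard-library alternative (an assumption on my part, not necessarily the solution linked above), urllib.request.urlretrieve covers the same simple-download case as wget.download; the URL and output filename below are just the ones from the example above:

```python
import urllib.request

url = "http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3"
# urlretrieve saves the response body to the given filename,
# much like wget.download(url), and returns (filename, headers).
filename, headers = urllib.request.urlretrieve(url, "razorback.mp3")
```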
Edit: You can also pass the out
parameter to download into a custom output directory instead of the current working directory.
>>> output_directory = <directory_name>
>>> filename = wget.download(url, out=output_directory)
>>> filename
'razorback.mp3'
urllib.request should work. Just set it up in a while (not done) loop: check whether the local file already exists, and if it does, send a GET with a Range header specifying how far into the download you already got. Be sure to use read() to append to the local file until an error occurs.
This is also potentially a duplicate of Python urllib2 resume download doesn't work when network reconnects
import urllib.request
import urllib.error
import time

max_attempts = 80
attempts = 0
sleeptime = 10  # in seconds; no reason to continuously retry if the network is down

# while True:  # possibly dangerous -- could loop forever
while attempts < max_attempts:
    time.sleep(sleeptime)
    try:
        response = urllib.request.urlopen("http://example.com", timeout=5)
        content = response.read()
        with open("local/index.html", "wb") as f:
            f.write(content)
        break
    except urllib.error.URLError as e:
        attempts += 1
        print(type(e))
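The resume-with-Range approach described above (the retry loop only re-downloads from scratch) can be sketched like this. The function name, URL, and file path are illustrative, and it assumes the server honors Range requests (responds 206 Partial Content):

```python
import os
import urllib.request
import urllib.error


def resume_download(url, localfile, chunk_size=8192):
    """Resume an interrupted download by requesting only the missing bytes.

    If a partial local file exists, send a GET with a Range header asking
    for everything from the current file size onward, then append the
    response body to the file. Assumes the server supports Range requests.
    """
    # How many bytes we already have, if any.
    offset = os.path.getsize(localfile) if os.path.exists(localfile) else 0
    request = urllib.request.Request(url)
    if offset:
        # Ask the server for the remainder of the file only.
        request.add_header("Range", f"bytes={offset}-")
    with urllib.request.urlopen(request) as response, open(localfile, "ab") as f:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break  # read() returned empty bytes: download complete
            f.write(chunk)
```

Wrap a call to this in the retry loop above so each attempt picks up where the last one stopped instead of starting over.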