How do I save an image locally in Python when I already know its URL?
Python 2
Here is a more straightforward way if all you want to do is save it as a file:
import urllib
urllib.urlretrieve("http://www.digimouth.com/news/media/2011/09/google-logo.jpg", "local-filename.jpg")
The second argument is the local path where the file should be saved.
Python 3
As SergO suggested, the code below should work with Python 3.
import urllib.request
urllib.request.urlretrieve("http://www.digimouth.com/news/media/2011/09/google-logo.jpg", "local-filename.jpg")
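Since urlretrieve raises on fetch errors and leaves naming to the caller, it can be convenient to wrap it in a small helper. This is a minimal Python 3 sketch; the function name download_image and its error handling are my own additions, not part of the answer above:

```python
import urllib.request
import urllib.error


def download_image(url, filename=None):
    """Save the file at `url` locally; derive the name from the URL if not given."""
    if filename is None:
        # take the last path segment of the URL as the local filename
        filename = url.rsplit("/", 1)[-1] or "download"
    try:
        path, _headers = urllib.request.urlretrieve(url, filename)
    except urllib.error.URLError as exc:
        raise RuntimeError("could not fetch %s: %s" % (url, exc))
    return path


# hypothetical usage:
# download_image("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")
```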
import urllib
resource = urllib.urlopen("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")
output = open("file01.jpg", "wb")
output.write(resource.read())
output.close()
file01.jpg
will contain your image.
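The same read-and-write approach carries over to Python 3 once the import is adjusted; here is a sketch (the helper name save_url is mine) using with blocks so both handles are closed even if something fails:

```python
import urllib.request


def save_url(url, path):
    """Python 3 version of the snippet above: read all bytes, then write them out."""
    with urllib.request.urlopen(url) as resource:
        data = resource.read()
    with open(path, "wb") as output:
        output.write(data)
    return path
```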
I wrote a script that does just this, and it is available on my GitHub for your use.
I used BeautifulSoup to parse any website for images. If you will be doing much web scraping (or intend to use my tool), I suggest pip install beautifulsoup4 (the package that provides the bs4 module the code imports from). Information on BeautifulSoup is available here.
For convenience here is my code:
from bs4 import BeautifulSoup
from urllib2 import urlopen
import urllib

# use this image scraper from the location that
# you want to save scraped images to

def make_soup(url):
    html = urlopen(url).read()
    return BeautifulSoup(html)

def get_images(url):
    soup = make_soup(url)
    # this makes a list of bs4 element tags
    images = [img for img in soup.findAll('img')]
    print(str(len(images)) + " images found.")
    print('Downloading images to current working directory.')
    # compile our unicode list of image links
    image_links = [each.get('src') for each in images]
    for each in image_links:
        filename = each.split('/')[-1]
        urllib.urlretrieve(each, filename)
    return image_links

# a standard call looks like this
# get_images('http://www.wookmark.com')
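One caveat with the each.split('/')[-1] step in the scraper: src attributes can be relative to the page, and may carry query strings that end up in the filename. A Python 3 sketch (the helper name image_filename is my own) that resolves the link against the page URL and strips the query before naming the file:

```python
import os
from urllib.parse import urljoin, urlparse


def image_filename(page_url, src):
    """Resolve a (possibly relative) img src and derive a local filename."""
    absolute = urljoin(page_url, src)   # handles relative src values
    path = urlparse(absolute).path      # drops any ?query and #fragment
    name = os.path.basename(path)
    return absolute, name or "image"


# image_filename("http://www.wookmark.com", "/img/logo.png?v=2")
# -> ("http://www.wookmark.com/img/logo.png?v=2", "logo.png")
```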