Downloading files from web using Python
Requests is a versatile HTTP library in Python with various applications. One of them is downloading a file from the web using the file's URL.

Installation: First of all, you need to install the requests library. You can install it directly using pip by typing the following command:

[code]pip install requests[/code]

Or download the source and install it manually.
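To confirm that the installation worked (an optional sanity check, not part of the original instructions), you can print the installed version from Python:

[code]# optional sanity check: import requests and print its version
import requests
print(requests.__version__)[/code]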
  Downloading files

[code]# import the requests library
import requests

# URL of the image to be downloaded
image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

# send an HTTP GET request to the server and save
# the HTTP response in a response object called r
r = requests.get(image_url)

# save the received content as a PNG file in binary mode
with open("python_logo.png", 'wb') as f:

    # write the contents of the response (r.content)
    # to the new file
    f.write(r.content)[/code]

This small piece of code will download the following image from the web. Now check your local directory (the folder where this script resides), and you will find the image:
   
[Image: the downloaded Python logo, python_logo.png]
All we need is the URL of the image source. (You can get the URL of the image source by right-clicking on the image and selecting the View Image option.)
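As a slightly more defensive variant (a minimal sketch using the same image_url; the status check is an addition, not part of the original snippet), you can verify that the request succeeded before writing the file:

[code]import requests

image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

# raise_for_status() raises requests.HTTPError for 4xx/5xx responses,
# so an error page is never written to disk by mistake
r = requests.get(image_url)
r.raise_for_status()

with open("python_logo.png", 'wb') as f:
    f.write(r.content)[/code]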
Downloading large files

The HTTP response content (r.content) is a single bytes object holding the entire file data. For large files it may not be possible, or desirable, to hold all of that data in memory at once. To overcome this problem, we make some changes to our program:

Since all the file data can't be held in memory in one piece, we use the r.iter_content method to load the data in chunks, specifying the chunk size.

[code]r = requests.get(URL, stream=True)[/code]

Setting the stream parameter to True causes only the response headers to be downloaded, and the connection remains open. This avoids reading the whole content into memory at once for large responses. A fixed-size chunk is then loaded on each iteration of r.iter_content.
  Here is an example:
[code]import requests

file_url = "http://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"

# stream=True defers downloading the response body
r = requests.get(file_url, stream=True)

with open("python.pdf", "wb") as pdf:
    # write one chunk at a time to the PDF file
    for chunk in r.iter_content(chunk_size=1024):
        # filter out keep-alive chunks, which are empty
        if chunk:
            pdf.write(chunk)[/code]
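A small refinement (a sketch; the context-manager usage and the Content-Length check are additions to the original example and need a reasonably recent version of requests): using the response as a context manager releases the connection back to the pool when the block exits, and the optional Content-Length header tells you how many bytes to expect:

[code]import requests

file_url = "http://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"

# the response object works as a context manager in recent
# versions of requests; the connection is released on exit
with requests.get(file_url, stream=True) as r:
    r.raise_for_status()

    # Content-Length is optional; servers may omit it
    total = r.headers.get('Content-Length')
    if total is not None:
        print("Expecting %s bytes" % total)

    with open("python.pdf", "wb") as pdf:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:
                pdf.write(chunk)[/code]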

Downloading Videos

In this example, we are interested in downloading all the video lectures available on an archive web page (the archive_url defined in the script below). So, we first scrape the webpage to extract all the video links and then download the videos one by one.
[code]import requests
from bs4 import BeautifulSoup

'''
URL of the archive web page which provides links to
all the video lectures. It would have been tiring to
download each video manually.
In this example, we first crawl the webpage to extract
all the links and then download the videos one by one.
'''

# specify the URL of the archive here
archive_url = "http://www-personal.umich.edu/~csev/books/py4inf/media/"

def get_video_links():

    # create response object
    r = requests.get(archive_url)

    # create beautiful-soup object
    # (the html5lib parser requires: pip install html5lib)
    soup = BeautifulSoup(r.content, 'html5lib')

    # find all links on the web page
    links = soup.find_all('a')

    # filter for links ending with .mp4
    video_links = [archive_url + link['href']
                   for link in links
                   if link.get('href', '').endswith('.mp4')]

    return video_links


def download_video_series(video_links):

    '''iterate through all links in video_links
    and download them one by one'''
    for link in video_links:

        # obtain the filename by splitting the URL
        # and taking the last component
        file_name = link.split('/')[-1]

        print("Downloading file: %s" % file_name)

        # create response object
        r = requests.get(link, stream=True)

        # download started
        with open(file_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024 * 1024):
                if chunk:
                    f.write(chunk)

        print("%s downloaded!\n" % file_name)

    print("All videos downloaded!")


if __name__ == "__main__":

    # get all video links
    video_links = get_video_links()

    # download all videos
    download_video_series(video_links)[/code]

The advantages of using the requests library to download web files are:
  
       
• One can easily download whole web directories by iterating recursively through a website, as sketched after this list.
• It is a browser-independent method and much faster.
• One can simply scrape a web page to get all the file URLs on it and hence download every file in a single run (see Implementing Web Scraping in Python with BeautifulSoup).
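As a minimal sketch of the first point above (the crawl_and_download name, the depth limit, and the assumption of a simple Apache-style directory listing are all illustrative, not from the original article), recursive downloading can be built from the same requests and BeautifulSoup pieces used earlier:

[code]import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl_and_download(page_url, download_dir=".", depth=2):
    '''Recursively follow sub-directory links on a directory-listing
    page and download the files found; depth bounds the recursion.'''
    if depth < 0:
        return

    r = requests.get(page_url)
    r.raise_for_status()
    soup = BeautifulSoup(r.content, 'html.parser')

    for link in soup.find_all('a'):
        href = link.get('href', '')

        # skip empty links, sort links, anchors and parent directories
        if not href or href.startswith(('?', '#', '..')):
            continue

        url = urljoin(page_url, href)
        if href.endswith('/'):
            # a sub-directory listing: recurse into it
            crawl_and_download(url, download_dir, depth - 1)
        else:
            # a file: stream it to disk chunk by chunk
            file_name = os.path.join(download_dir, href.split('/')[-1])
            resp = requests.get(url, stream=True)
            resp.raise_for_status()
            with open(file_name, 'wb') as f:
                for chunk in resp.iter_content(chunk_size=1024 * 1024):
                    if chunk:
                        f.write(chunk)[/code]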
This blog is contributed by Nikhil Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected], and see your article appearing on the GeeksforGeeks main page where it can help other Geeks.

Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.