V2EX = way to explore
V2EX 是一个关于分享和探索的地方
现在注册
已注册用户请  登录
推荐学习书目
Learn Python the Hard Way
Python Sites
PyPI - Python Package Index
http://diveintopython.org/toc/index.html
Pocoo
值得关注的项目
PyPy
Celery
Jinja2
Read the Docs
gevent
pyenv
virtualenv
Stackless Python
Beautiful Soup
结巴中文分词
Green Unicorn
Sentry
Shovel
Pyflakes
pytest
Python 编程
pep8 Checker
Styles
PEP 8
Google Python Style Guide
Code Style from The Hitchhiker's Guide
pank
V2EX  ›  Python

关于 Pycurl 使用遇到的问题

  •  
  •   pank · 2017-05-11 16:58:53 +08:00 · 2466 次点击
    这是一个创建于 2740 天前的主题,其中的信息可能已经有所发展或是发生改变。

    小弟主要想用 pycurl 的 muti_curl 来同时下载一批网页,使用过程中发现只有最后一个链接可以下载成功(文件内容不为空),前两个文件内容都是空的。

    可以确定是 while True 和 while num_handles 部分的问题。但是查询 pycurl 文档对这块描述的很粗略。有这个经验的大神们,请帮帮我,万分感谢。。。

    代码在下面:

    import pycurl
    import uuid
    import hashlib
    import os
    
    
    def get_filename(url):
        if not url:
            return None
        return hashlib.md5(url.encode()).hexdigest()
    
    
    class Fetcher(object):
        def __init__(self, urls, path):
            self.urls = urls
            self.path = path
            self.m = pycurl.CurlMulti()
    
        def fetch(self):
            if not urls or len(urls) == 0:
                print('empty urls...')
                return
    
            for url in urls:
                fdir = './%s/%s' % (self.path, get_filename(url))
                if os.path.exists(fdir):
                    print('%s exits, skip it...' % url)
                    continue
                f = open(fdir, 'wb')
                c = pycurl.Curl()
                c.setopt(pycurl.URL, url)
                c.setopt(pycurl.WRITEDATA, f)
                self.m.add_handle(c)
    
            while True:
                ret, num_handles = self.m.perform()
                if ret != pycurl.E_CALL_MULTI_PERFORM:
                    break
    
            while num_handles:
                ret = self.m.select(3.0)
                if ret == -1:
                    continue
                while 1:
                    ret, num_handles = self.m.perform()
                    if ret != pycurl.E_CALL_MULTI_PERFORM:
                        break
    
            print('downloading complete...')
    
    
    urls = ['xa.nuomi.com/1000338', 'xa.nuomi.com/1000002', 'xa.nuomi.com/884']
    fetcher = Fetcher(urls, 'download')
    fetcher.fetch()
    
    2 条回复    2017-05-11 21:50:47 +08:00
    blackeeper
        1
    blackeeper  
       2017-05-11 18:21:26 +08:00
    加 c.close() ?
    pank
        2
    pank  
    OP
       2017-05-11 21:50:47 +08:00
    @blackeeper 谢谢回复,已经找到问题了: IMPORTANT NOTE: add_handle does not implicitly add a Python reference to the Curl object (and thus does not increase the reference count on the Curl object). 应该是引用被冲掉了。修改一下变量名不重复就好了:


    ```
    for idx, url in enumerate(urls):
    f = open('./%s/%s' % (self.path, hashlib.md5(url.encode()).hexdigest()), 'wb')
    locals()['c'+str(idx)] = pycurl.Curl()
    locals()['c'+str(idx)].setopt(pycurl.URL, url)
    locals()['c'+str(idx)].setopt(pycurl.WRITEDATA,f)
    self.m.add_handle(locals()['c'+str(idx)])
    ```
    关于   ·   帮助文档   ·   博客   ·   API   ·   FAQ   ·   实用小工具   ·   2740 人在线   最高记录 6679   ·     Select Language
    创意工作者们的社区
    World is powered by solitude
    VERSION: 3.9.8.5 · 24ms · UTC 09:25 · PVG 17:25 · LAX 01:25 · JFK 04:25
    Developed with CodeLauncher
    ♥ Do have faith in what you're doing.