Wednesday, July 6, 2016

How to keep retrying a urllib2 request via a proxy when we get any type of error (or timeout)?

I am making an HTTP request with urllib2 (inside a for loop), but sometimes I get errors or timeouts, so I have to start the for loop all over again. Could anyone show me how to modify my code so that it keeps trying until the request succeeds, instead of restarting the for loop from the beginning?

    import urllib2, re

    proxy = "*.*.*.*:8080"
    proxies = {"http": "http://%s" % proxy}
    headers = {'User-agent': 'Mozilla/5.0'}

    # rest of code here

    for num, cname in enumerate(match):
        # pull the episode id out of the "epi/<id>/" part of the link
        r = re.compile('epi/(.*?)/')
        m = r.search(cname[0])
        episodeId = m.group(1)

        url = "http://api.somesite.net/api/data/Episode/" + str(episodeId)

        # route the request through the proxy
        proxy_support = urllib2.ProxyHandler(proxies)
        opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=0))
        urllib2.install_opener(opener)
        req = urllib2.Request(url, None, headers)
        try:
            html = urllib2.urlopen(req).read()
        except urllib2.URLError, e:
            # MyException is defined elsewhere in my code
            raise MyException("There was an error: %r" % e)
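
One way to get this behavior is to wrap just the request in a small retry helper, so a failure repeats only the current request rather than restarting the for loop. Below is a minimal sketch; `fetch_with_retry` and its `tries`/`delay` parameters are names introduced here for illustration, and the 30-second timeout is an assumption:

    import socket
    import time
    import urllib2

    def fetch_with_retry(req, tries=4, delay=3):
        # Attempt the request up to `tries` times, sleeping `delay`
        # seconds between failures; the last failure is re-raised.
        for attempt in range(tries):
            try:
                # timeout=30 is an assumed value; tune it to your server
                return urllib2.urlopen(req, timeout=30).read()
            except (urllib2.URLError, socket.timeout):
                if attempt == tries - 1:
                    raise  # out of attempts, let the caller handle it
                time.sleep(delay)

Inside the loop, the try/except block above would then shrink to a single call, `html = fetch_with_retry(req)`, and only a request that fails four times in a row would stop the loop.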

I also found this retry-decorator snippet, but I am not sure how to apply it to my loop:

    @retry(urllib2.URLError, tries=4, delay=3, backoff=2)
    def urlopen_with_retry():
        return urllib2.urlopen("http://example.com")
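
The `retry` decorator used above is not part of the standard library; its call signature matches the widely shared exponential-backoff recipe. A minimal sketch of such a decorator, assuming it behaves like that recipe, might look like this:

    import time
    from functools import wraps

    def retry(ExceptionToCheck, tries=4, delay=3, backoff=2):
        # Retry decorator with exponential backoff: sleep `delay` seconds
        # after a failure, multiplying the sleep by `backoff` each time,
        # for at most `tries` total attempts.
        def deco_retry(f):
            @wraps(f)
            def f_retry(*args, **kwargs):
                mtries, mdelay = tries, delay
                while mtries > 1:
                    try:
                        return f(*args, **kwargs)
                    except ExceptionToCheck:
                        time.sleep(mdelay)
                        mtries -= 1
                        mdelay *= backoff
                return f(*args, **kwargs)  # final attempt; errors propagate
            return f_retry
        return deco_retry

With tries=4, delay=3, and backoff=2, urlopen_with_retry() would make up to four attempts, waiting 3, 6, and 12 seconds between them before giving up and raising the last urllib2.URLError.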
