Fixing the "Max retries exceeded with URL in requests" error

sooyeoon 2021. 7. 4. 13:03

# Code that fetches the list of movie titles from the Naver Movies ranking page

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'}

data = requests.get('https://movie.naver.com/movie/sdb/rank/rmovie.nhn?sel=pnt&date=20200303', headers=headers)

soup = BeautifulSoup(data.text, 'html.parser')

trs = soup.select('#old_content > table > tbody > tr')
# select() returns its matches as a list

for tr in trs:
    a_tag = tr.select_one('td.title > div > a')
    if a_tag is not None:
        # a_tag.text holds just the title string
        title = a_tag.text
        print(title)

 

While running this Python web-crawling script, no results came out and a "Max retries exceeded with url" error appeared instead, so I searched around and found this Stack Overflow question:

https://stackoverflow.com/questions/23013220/max-retries-exceeded-with-url-in-requests
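In short, the error means the server kept refusing the connection before requests gave up retrying, and the suggested fixes come down to slowing down or retrying. Besides the sleep approach described below, a common variant of this fix (not necessarily from that exact thread) is to let requests retry on its own through urllib3. A minimal sketch, with parameter values chosen purely for illustration:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry failed connections a few times, waiting a bit longer each time
session = requests.Session()
retry = Retry(total=5, backoff_factor=1)  # illustrative values
session.mount('https://', HTTPAdapter(max_retries=retry))

data = session.get('https://movie.naver.com/movie/sdb/rank/rmovie.nhn?sel=pnt&date=20200303',
                   headers={'User-Agent': 'Mozilla/5.0'})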

 


An answer there said that adding from time import sleep (and pausing between requests) would fix it, so I added it as suggested, and the error went away.
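For the record, here is roughly what the fix looks like in context. Importing sleep by itself changes nothing; the point of that answer is to pause between attempts so the server stops refusing the connection. A minimal sketch, where the retry count and delay are values I picked for illustration:

import requests
from time import sleep

headers = {'User-Agent': 'Mozilla/5.0'}  # same idea as the header above, shortened here
url = 'https://movie.naver.com/movie/sdb/rank/rmovie.nhn?sel=pnt&date=20200303'

data = None
for attempt in range(5):  # illustrative retry budget
    try:
        data = requests.get(url, headers=headers)
        break  # got a response, stop retrying
    except requests.exceptions.ConnectionError:  # "Max retries exceeded" surfaces as this
        sleep(3)  # illustrative pause before the next attempt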