<kbd id="afajh"><form id="afajh"></form></kbd>
<strong id="afajh"><dl id="afajh"></dl></strong>
    <del id="afajh"><form id="afajh"></form></del>
        1. <th id="afajh"><progress id="afajh"></progress></th>
          <b id="afajh"><abbr id="afajh"></abbr></b>
          <th id="afajh"><progress id="afajh"></progress></th>

Python Crawler Example: Scraping Baidu Baike Entries


          2021-09-06 15:27

This article walks through a working example of a Python crawler that scrapes Baidu Baike entries. It is shared here for reference; the details are as follows.

Below is an example I wrote that crawls Baidu Baike entries.

Crawler main program entry point

from crawler_test.html_downloader import UrlDownLoader
from crawler_test.html_outer import HtmlOuter
from crawler_test.html_parser import HtmlParser
from crawler_test.url_manager import UrlManager

# Crawler main program entry point
class MainCrawler():
  def __init__(self):
    # Instantiate the four processors: URL manager, downloader, parser, outputter
    self.urls = UrlManager()
    self.downloader = UrlDownLoader()
    self.parser = HtmlParser()
    self.outer = HtmlOuter()

  # Start crawling from main_url
  def start_craw(self, main_url):
    print('Crawler started...')
    count = 1
    self.urls.add_new_url(main_url)
    while self.urls.has_new_url():
      try:
        new_url = self.urls.get_new_url()
        print('Crawling %d: %s' % (count, new_url))
        html_cont = self.downloader.down_load(new_url)
        new_urls, new_data = self.parser.parse(new_url, html_cont)
        # Feed parsed URLs back into the URL manager and parsed data into the outputter
        self.urls.add_new_urls(new_urls)
        self.outer.collect_data(new_data)
        if count >= 10:  # Limit how many pages are crawled
          break
        count += 1
      except Exception:
        print('Failed to crawl one page')
    self.outer.output()
    print('Crawler finished.')

if __name__ == '__main__':
  main_url = 'https://baike.baidu.com/item/Python/407313'
  mc = MainCrawler()
  mc.start_craw(main_url)
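
The imports above assume the four classes live in a package called crawler_test, one module per class. The layout below is an assumption reconstructed from the import paths, not something stated in the original article:

crawler_test/
  __init__.py
  url_manager.py      # UrlManager
  html_downloader.py  # UrlDownLoader
  html_parser.py      # HtmlParser
  html_outer.py       # HtmlOuter
main.py               # the entry point shown above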

URL manager

# URL manager
class UrlManager():
  def __init__(self):
    self.new_urls = set()  # URLs waiting to be crawled
    self.old_urls = set()  # URLs already crawled

  # Add a single new URL
  def add_new_url(self, url):
    if url is None:
      return
    elif url not in self.new_urls and url not in self.old_urls:
      self.new_urls.add(url)

  # Add URLs in batch
  def add_new_urls(self, urls):
    if urls is None or len(urls) == 0:
      return
    else:
      for url in urls:
        self.add_new_url(url)

  # Check whether there are URLs left to crawl
  def has_new_url(self):
    return len(self.new_urls) != 0

  # Take one URL from the to-crawl set and mark it as crawled
  def get_new_url(self):
    new_url = self.new_urls.pop()
    self.old_urls.add(new_url)
    return new_url
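
A quick standalone check of the manager's de-duplication behaviour (not part of the original article) shows how a URL moves from the to-crawl set to the crawled set and is never re-added:

um = UrlManager()
um.add_new_urls(['https://baike.baidu.com/item/Python/407313',
                 'https://baike.baidu.com/item/Python/407313'])  # the duplicate is ignored
print(um.has_new_url())  # True
url = um.get_new_url()   # pops the URL and records it in old_urls
um.add_new_url(url)      # ignored: already crawled
print(um.has_new_url())  # False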

Web page downloader

from urllib import request

# Web page downloader
class UrlDownLoader():
  def down_load(self, url):
    if url is None:
      return None
    else:
      rt = request.Request(url=url, method='GET')  # Build a GET request
      with request.urlopen(rt) as rp:              # Open the page
        if rp.status != 200:
          return None
        else:
          return rp.read()                         # Read the page body
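
Baidu Baike may reject or redirect requests that carry urllib's default User-Agent, in which case down_load returns nothing useful for any page. A variant that sends a browser-like header is a common workaround; this is a sketch, and the header string is only an example, not part of the original code:

from urllib import request

# Downloader variant with a browser-like User-Agent (sketch)
class UrlDownLoader():
  def down_load(self, url):
    if url is None:
      return None
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}  # example value
    rt = request.Request(url=url, headers=headers, method='GET')
    with request.urlopen(rt) as rp:
      return rp.read() if rp.status == 200 else None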

Web page parser

import re
from urllib import parse
from bs4 import BeautifulSoup

# Web page parser, using BeautifulSoup
class HtmlParser():
  # Each entry page can contain many hyperlinks
  # main_url is the common URL prefix, e.g. "https://baike.baidu.com/"
  def _get_new_url(self, main_url, soup):
    # baike.baidu.com/
    # <a target="_blank" href="/item/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%A8%8B%E5%BA%8F%E8%AE%BE%E8%AE%A1%E8%AF%AD%E8%A8%80" rel="external nofollow" >計算機程序設計語言</a>
    new_urls = set()
    # Extract the URL part that follows main_url
    child_urls = soup.find_all('a', href=re.compile(r'/item/(\%\w{2})+'))
    for child_url in child_urls:
      new_url = child_url['href']
      # Join it back into a full URL
      full_url = parse.urljoin(main_url, new_url)
      new_urls.add(full_url)
    return new_urls

  # Each entry page has exactly one summary; parse out the data (title, content)
  def _get_new_data(self, main_url, soup):
    new_datas = {}
    new_datas['url'] = main_url
    # <dd class="lemmaWgt-lemmaTitle-title"><h1>計算機程序設計語言</h1>...
    new_datas['title'] = soup.find('dd', class_='lemmaWgt-lemmaTitle-title').find('h1').get_text()
    # class="lemma-summary" label-module="lemmaSummary"...
    new_datas['content'] = soup.find('div', attrs={'label-module': 'lemmaSummary'},
                                     class_='lemma-summary').get_text()
    return new_datas

  # Parse out new URLs and data (title, content)
  def parse(self, main_url, html_cont):
    if main_url is None or html_cont is None:
      return
    soup = BeautifulSoup(html_cont, 'lxml', from_encoding='utf-8')
    new_urls = self._get_new_url(main_url, soup)
    new_data = self._get_new_data(main_url, soup)
    return new_urls, new_data
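
The parser can be exercised against a small static snippet instead of the live site. The fragment below is made up to mimic the structure the selectors expect; Baidu Baike's real markup has changed over the years, so the class names are simply the ones the original code targets:

html_cont = b'''<html><body>
<dd class="lemmaWgt-lemmaTitle-title"><h1>Python</h1></dd>
<div class="lemma-summary" label-module="lemmaSummary">Python is a programming language.</div>
<a href="/item/%E8%AE%A1%E7%AE%97%E6%9C%BA">link</a>
</body></html>'''

parser = HtmlParser()
new_urls, new_data = parser.parse('https://baike.baidu.com/item/Python/407313', html_cont)
print(new_urls)           # {'https://baike.baidu.com/item/%E8%AE%A1%E7%AE%97%E6%9C%BA'}
print(new_data['title'])  # Python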

Output processor

# Outputter
class HtmlOuter():
  def __init__(self):
    self.datas = []

  # Collect the data first
  def collect_data(self, data):
    if data is None:
      return
    self.datas.append(data)
    return self.datas

  # Write the collected data out as an HTML table
  def output(self, file='output_html.html'):
    with open(file, 'w', encoding='utf-8') as fh:
      fh.write('<html>')
      fh.write('<head>')
      fh.write('<meta charset="utf-8">')
      fh.write('<title>Crawler results</title>')
      fh.write('</head>')
      fh.write('<body>')
      fh.write(
        '<table style="border-collapse:collapse; border:1px solid gray; width:80%; word-break:break-all; margin:20px auto;">')
      fh.write('<tr>')
      fh.write('<th style="border:1px solid black; width:35%;">URL</th>')
      fh.write('<th style="border:1px solid black; width:15%;">Entry</th>')
      fh.write('<th style="border:1px solid black; width:50%;">Content</th>')
      fh.write('</tr>')
      for data in self.datas:
        fh.write('<tr>')
        fh.write('<td style="border:1px solid black">{0}</td>'.format(data['url']))
        fh.write('<td style="border:1px solid black">{0}</td>'.format(data['title']))
        fh.write('<td style="border:1px solid black">{0}</td>'.format(data['content']))
        fh.write('</tr>')
      fh.write('</table>')
      fh.write('</body>')
      fh.write('</html>')
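
The outputter can likewise be tried on its own with dummy data; this standalone snippet (not part of the original article) writes an output_html.html file containing a single table row:

outer = HtmlOuter()
outer.collect_data({'url': 'https://baike.baidu.com/item/Python/407313',
                    'title': 'Python',
                    'content': 'Python is a programming language.'})
outer.output()  # writes output_html.html in the current directory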

Sample output (partial):

I hope this article is helpful for your Python programming.

Original article: https://www.cnblogs.com/wcwnina/p/8619084.html

Reposted from: Python編程學習圈
(Copyright belongs to the original author; contact for removal.)
