
Python Web Scraping in Practice: Scraping Taobao Listings and Analyzing the Data

About 1,370 words · a 3-minute read · 2021-10-25 01:02

Preface

Here's how it started: a while back I took on a job for a client who wanted to open a Taobao shop selling dried-fish snacks, and he wanted some analysis of the products already on that market. He could have gathered the statistics by hand, since all of this information is publicly displayed, but doing it manually is tedious, so he asked me to help out.


I. Project Requirements

The specific requirements were as follows:
1. Search Taobao for "小魚零食" (dried-fish snacks) and, for every product in the first 10 pages of results, record the sales volume and price, then count the products falling into each bracket of a price table he drew up. (The table image from the original post is not reproduced here; the brackets can be read from the DATAS dict in the code below, running <10, 10~30, 30~50, 50~70, 70~90, 90~110, 110~130, 130~150, 150~170, and 170~200 yuan.)

2. Across these 10 pages of results, where in the country are the sellers located?
3. Under the products on these 10 pages, what do users comment on the most?
4. From these search results, find the names and links of the 10 shops with the highest sales.
Looking at them, none of these requirements is hard to implement. Let's start with what the finished project looks like.

II. Preview of the Results

After collecting the data I ran the analysis and turned it into a bar chart; hovering the mouse over a bar shows the exact product count.

Products priced between 10 and 30 yuan are by far the most common, and the counts drop off toward higher prices, so most products are evidently positioned for the low end of the market.

Next, the distribution of sellers across the country:

As you can see, sellers are concentrated along the coast and around the middle and lower reaches of the Yangtze, with the coastal regions densest of all.
Now let's see what users write in the reviews under these products:

The bigger a word appears, the more often it occurs. Taste, packaging quality, portion size, and shelf life are the aspects users mention most, so a product listing can speak to exactly these points and answer the questions most buyers care about.
And finally, the 10 shops with the highest sales, with their links.

After getting the data and analyzing it, I found myself wondering: if I were the one opening this shop, what would I take away from it? Maybe price is the angle of attack, maybe location could be used to differentiate, or maybe the marketing should be built outside-in, centered on the users.

The deeper I thought about it, the more there seemed to be to it. But I'm an outsider to the dried-fish-snack business, so I'll leave it there.

III. Scraper Source Code

The source is split across several files and is fairly long, so I won't walk through it piece by piece here. If you already know web scraping, a couple of read-throughs will make it clear; if you don't, no amount of explanation will help right now, and it will all make sense once you've learned the basics.
import csv
import os
import random  # used by get_the_top_10() below; missing from the original imports
import time

import wordcloud
from selenium import webdriver
from selenium.webdriver.common.by import By

def tongji():
    # Read the prices back out of the CSV written by get_10_pages_datas()
    # and count how many products fall into each price bracket
    prices = []
    with open('前十頁銷量和金額.csv', 'r', encoding='utf-8', newline='') as f:
        fieldnames = ['價(jià)格', '銷量', '店鋪位置']
        reader = csv.DictReader(f, fieldnames=fieldnames)
        for index, i in enumerate(reader):
            if index != 0:  # skip the header row
                price = float(i['價(jià)格'].replace('¥', ''))
                prices.append(price)
    DATAS = {'<10': 0, '10~30': 0, '30~50': 0, '50~70': 0, '70~90': 0,
             '90~110': 0, '110~130': 0, '130~150': 0, '150~170': 0, '170~200': 0}
    for price in prices:
        if price < 10:
            DATAS['<10'] += 1
        elif 10 <= price < 30:
            DATAS['10~30'] += 1
        elif 30 <= price < 50:
            DATAS['30~50'] += 1
        elif 50 <= price < 70:
            DATAS['50~70'] += 1
        elif 70 <= price < 90:
            DATAS['70~90'] += 1
        elif 90 <= price < 110:
            DATAS['90~110'] += 1
        elif 110 <= price < 130:
            DATAS['110~130'] += 1
        elif 130 <= price < 150:
            DATAS['130~150'] += 1
        elif 150 <= price < 170:
            DATAS['150~170'] += 1
        elif 170 <= price < 200:
            DATAS['170~200'] += 1
    for k, v in DATAS.items():
        print(k, ':', v)
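As an aside, the long elif chain in tongji() can be collapsed considerably. Here is a minimal alternative sketch (my own, not part of the original source) using the standard library's bisect, with BOUNDS and LABELS mirroring the keys of the DATAS dict above:

import bisect

# Bracket boundaries and labels, mirroring the DATAS dict
BOUNDS = [10, 30, 50, 70, 90, 110, 130, 150, 170, 200]
LABELS = ['<10', '10~30', '30~50', '50~70', '70~90',
          '90~110', '110~130', '130~150', '150~170', '170~200']

def bracket(price):
    # bisect_right counts the boundaries at or below the price,
    # which is exactly the index of its bracket label
    i = bisect.bisect_right(BOUNDS, price)
    return LABELS[i] if i < len(LABELS) else None  # None: >= 200 yuan, uncounted above

Counting then becomes a collections.Counter over the non-None bracket() values.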

def get_the_top_10(url):
    top_ten = []
    # Get a proxy IP; zhima1() is an external helper for a proxy service
    # whose source is not included in this post
    ip = zhima1()[2][random.randint(0, 399)]
    # Run a Quicker action (safe to ignore)
    os.system('"C:\Program Files\Quicker\QuickerStarter.exe" runaction:5e3abcd2-9271-47b6-8eaf-3e7c8f4935d8')
    options = webdriver.ChromeOptions()
    # Attach to a Chrome instance that has remote debugging enabled
    options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
    options.add_argument(f'--proxy-server={ip}')
    driver = webdriver.Chrome(options=options)
    # Implicit wait
    driver.implicitly_wait(3)
    # Open the page
    driver.get(url)
    # Click the element whose link text contains '銷量' (sales volume)
    driver.find_element(By.PARTIAL_LINK_TEXT, '銷量').click()
    time.sleep(1)
    # Scroll to the bottom of the page
    driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
    time.sleep(1)
    # Locate the result list
    element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
    items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
    for index, item in enumerate(items):
        if index == 10:
            break
        # Extract the fields of one result
        price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
        paid_num_data = item.find_element(By.XPATH, './div[2]/div[1]/div[@class="deal-cnt"]').text
        store_location = item.find_element(By.XPATH, './div[2]/div[3]/div[@class="location"]').text
        store_href = item.find_element(By.XPATH, './div[2]/div[@class="row row-2 title"]/a').get_attribute('href').strip()
        # Collect the fields into a dict
        top_ten.append(
            {'價(jià)格': price,
             '銷量': paid_num_data,
             '店鋪位置': store_location,
             '店鋪鏈接': store_href})
    for i in top_ten:
        print(i)
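One thing the post doesn't spell out: the debuggerAddress option makes Selenium attach to a Chrome that is already running with remote debugging enabled, which is presumably what the author's Quicker action launches. If you want to reproduce that setup by hand, a hypothetical helper might look like this; the executable path, profile directory, and port are assumptions to adjust for your machine:

import subprocess

def start_debug_chrome(port=9222):
    # Launch Chrome with a remote-debugging endpoint that
    # webdriver.Chrome can attach to via 'debuggerAddress'
    subprocess.Popen([
        r'C:\Program Files\Google\Chrome\Application\chrome.exe',  # assumed install path
        f'--remote-debugging-port={port}',
        r'--user-data-dir=C:\chrome-debug-profile',  # any scratch profile dir works
    ])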

def get_top_10_comments(url):
    # Truncate the output file
    with open('排名前十評(píng)價(jià).txt', 'w+', encoding='utf-8') as f:
        pass
    # ip = ipidea()[1]
    os.system('"C:\Program Files\Quicker\QuickerStarter.exe" runaction:5e3abcd2-9271-47b6-8eaf-3e7c8f4935d8')
    options = webdriver.ChromeOptions()
    options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
    # options.add_argument(f'--proxy-server={ip}')
    driver = webdriver.Chrome(options=options)
    driver.implicitly_wait(3)
    driver.get(url)
    driver.find_element(By.PARTIAL_LINK_TEXT, '銷量').click()
    time.sleep(1)
    element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
    items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
    original_handle = driver.current_window_handle
    item_hrefs = []
    # First collect the links of the top ten results
    for index, item in enumerate(items):
        if index == 10:
            break
        item_hrefs.append(
            item.find_element(By.XPATH, './/div[2]/div[@class="row row-2 title"]/a').get_attribute('href').strip())
    # Then scrape the reviews of each of the ten products
    for item_href in item_hrefs:
        # Open the product in a new tab
        driver.execute_script(f'window.open("{item_href}")')
        # Switch to it
        handles = driver.window_handles
        driver.switch_to.window(handles[-1])
        # Scroll until the '評(píng)價(jià)' (reviews) tab is visible, then click it;
        # the layout varies, so fall back through several strategies
        try:
            driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').click()
        except Exception as e1:
            try:
                x = driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').location_once_scrolled_into_view
                driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').click()
            except Exception as e2:
                try:
                    # Scroll down 100px first, in case the tab is off-screen
                    driver.execute_script('var q=document.documentElement.scrollTop=100')
                    x = driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').location_once_scrolled_into_view
                except Exception as e3:
                    driver.find_element(By.XPATH, '/html/body/div[6]/div/div[3]/div[2]/div/div[2]/ul/li[2]/a').click()
        time.sleep(1)
        # Tmall and Taobao item pages use different review markup,
        # so try the Tmall structure first and fall back to Taobao's
        try:
            trs = driver.find_elements(By.XPATH, '//div[@class="rate-grid"]/table/tbody/tr')
            for index, tr in enumerate(trs):
                if index == 0:
                    comments = tr.find_element(By.XPATH, './td[1]/div[1]/div/div').text.strip()
                else:
                    try:
                        comments = tr.find_element(By.XPATH, './td[1]/div[1]/div[@class="tm-rate-fulltxt"]').text.strip()
                    except Exception as e:
                        comments = tr.find_element(By.XPATH, './td[1]/div[1]/div[@class="tm-rate-content"]/div[@class="tm-rate-fulltxt"]').text.strip()
                with open('排名前十評(píng)價(jià).txt', 'a+', encoding='utf-8') as f:
                    f.write(comments + '\n')
                print(comments)
        except Exception as e:
            lis = driver.find_elements(By.XPATH, '//div[@class="J_KgRate_MainReviews"]/div[@class="tb-revbd"]/ul/li')
            for li in lis:
                comments = li.find_element(By.XPATH, './div[2]/div/div[1]').text.strip()
                with open('排名前十評(píng)價(jià).txt', 'a+', encoding='utf-8') as f:
                    f.write(comments + '\n')
                print(comments)
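The nested try/except ladder above compensates for elements that haven't been rendered or scrolled into view yet. A sketch of an alternative approach (mine, not the author's) is Selenium's explicit waits, which keep retrying the lookup until a timeout instead of failing on the first miss:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def click_reviews_tab(driver, timeout=10):
    # Wait until the reviews tab exists, is visible, and is clickable,
    # then click it; raises TimeoutException after `timeout` seconds
    tab = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, '評(píng)價(jià)'))
    )
    tab.click()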

def get_top_10_comments_wordcloud():
    file = '排名前十評(píng)價(jià).txt'
    f = open(file, encoding='utf-8')
    txt = f.read()
    f.close()
    # Create the word-cloud object and set the output image's properties;
    # msyh.ttc (Microsoft YaHei) is needed so Chinese characters render
    w = wordcloud.WordCloud(width=1000, height=700,
                            background_color='white',
                            font_path='msyh.ttc')
    w.generate(txt)
    name = file.replace('.txt', '')
    w.to_file(name + '詞云.png')
    os.startfile(name + '詞云.png')
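One caveat worth adding (my note, not in the original): Chinese text contains no spaces, so wordcloud's default tokenizer tends to treat whole phrases as single words. Pre-segmenting the reviews with jieba usually produces a more meaningful cloud:

import jieba  # a common Chinese word-segmentation library

def segmented_text(path='排名前十評(píng)價(jià).txt'):
    # Split the review text into words and re-join with spaces
    # so wordcloud's default regex can tokenize it
    with open(path, encoding='utf-8') as f:
        return ' '.join(jieba.cut(f.read()))

Passing segmented_text() to w.generate() instead of the raw file contents is the only change needed.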

def get_10_pages_datas():
    # Write the CSV header, with a BOM so Excel opens the file as UTF-8
    with open('前十頁銷量和金額.csv', 'w+', encoding='utf-8', newline='') as f:
        f.write('\ufeff')
        fieldnames = ['價(jià)格', '銷量', '店鋪位置']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
    infos = []
    options = webdriver.ChromeOptions()
    options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
    # options.add_argument(f'--proxy-server={ip}')
    driver = webdriver.Chrome(options=options)
    driver.implicitly_wait(3)
    driver.get(url)  # url is the module-level search URL from the __main__ block
    # driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
    # Page 1
    element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
    items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
    for index, item in enumerate(items):
        price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
        paid_num_data = item.find_element(By.XPATH, './div[2]/div[1]/div[@class="deal-cnt"]').text
        store_location = item.find_element(By.XPATH, './div[2]/div[3]/div[@class="location"]').text
        infos.append(
            {'價(jià)格': price, '銷量': paid_num_data, '店鋪位置': store_location})
    # Go to the next page; scroll first if the button isn't clickable yet
    try:
        driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
    except Exception as e:
        driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
        driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
    # Pages 2 through 10
    for i in range(9):
        time.sleep(1)
        driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
        element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
        items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
        for index, item in enumerate(items):
            try:
                price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
            except Exception:
                time.sleep(1)
                driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
                price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
            paid_num_data = item.find_element(By.XPATH, './div[2]/div[1]/div[@class="deal-cnt"]').text
            store_location = item.find_element(By.XPATH, './div[2]/div[3]/div[@class="location"]').text
            infos.append(
                {'價(jià)格': price, '銷量': paid_num_data, '店鋪位置': store_location})
        try:
            driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
        except Exception as e:
            driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
        # End of one page
    for info in infos:
        print(info)
    with open('前十頁銷量和金額.csv', 'a+', encoding='utf-8', newline='') as f:
        fieldnames = ['價(jià)格', '銷量', '店鋪位置']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        for info in infos:
            writer.writerow(info)
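Note that the 銷量 field scraped above is plain text such as '4500+人付款'. For the sales statistics the client asked for, it has to be turned into a number first; a rough sketch follows, where the two formats handled ('4500+人付款' and '1.2萬+人付款') are my assumptions about what Taobao renders:

import re

def parse_sales(text):
    # Extract the leading figure and scale it by 10,000 when it is
    # given in 萬; returns 0 when no number can be found
    m = re.search(r'([\d.]+)(萬)?', text)
    if not m:
        return 0
    value = float(m.group(1))
    return int(value * 10000) if m.group(2) else int(value)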

if __name__ == '__main__':
    url = 'https://s.taobao.com/search?q=%E5%B0%8F%E9%B1%BC%E9%9B%B6%E9%A3%9F&imgfile=&commend=all&ssid=s5-e&search_type=item&sourceId=tb.index&spm=a21bo.21814703.201856-taobao-item.1&ie=utf8&initiative_id=tbindexz_20170306&bcoffset=4&ntoffset=4&p4ppushleft=2%2C48&s=0'
    # Run the steps one at a time, uncommenting as needed:
    # get_10_pages_datas()
    # tongji()
    # get_the_top_10(url)
    # get_top_10_comments(url)
    get_top_10_comments_wordcloud()
With the code above we can collect all the data we want; after that, Bar and Geo take care of rendering the bar chart and the map of seller locations, and I'll leave those two parts for you to explore.
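For reference, here is a minimal sketch of those two charts, assuming Bar and Geo refer to pyecharts (the usual source of those class names; the v1 chained API is shown). The series names and numbers are illustrative only:

from pyecharts import options as opts
from pyecharts.charts import Bar, Geo

# Price-bracket bar chart; `datas` stands in for the DATAS dict from tongji()
datas = {'<10': 3, '10~30': 40, '30~50': 21}  # illustrative numbers
bar = (
    Bar()
    .add_xaxis(list(datas.keys()))
    .add_yaxis('商品數(shù)量', list(datas.values()))
    .set_global_opts(title_opts=opts.TitleOpts(title='價(jià)格區(qū)間分布'))
)
bar.render('price_bar.html')  # hover tooltips come for free in the HTML output

# Seller-location map; place names must be ones pyecharts recognizes
locations = [('浙江', 55), ('廣東', 48), ('湖北', 20)]  # illustrative numbers
geo = (
    Geo()
    .add_schema(maptype='china')
    .add('商家數(shù)量', locations)
    .set_series_opts(label_opts=opts.LabelOpts(is_show=False))
    .set_global_opts(title_opts=opts.TitleOpts(title='商家分布'))
)
geo.render('seller_geo.html')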

Original article: blog.csdn.net/zhiguigu/article/details/120061978

Reposted from: Python編程學(xué)習(xí)圈
(Copyright belongs to the original author; the post will be taken down on request.)
