Redis: an essential tool for proxy IPs, incremental crawlers, and distributed crawlers

If you have genuinely written crawlers, you have certainly run into these problems:
- Your IP gets banned or throttled while crawling.
- The site's data is updated constantly, so you cannot re-crawl the entire site every time; you need incremental crawling.
- The data volume is enormous, and even a concurrent framework like Scrapy is a drop in the bucket.
Handling these three scenarios requires some kind of database, and Redis is one of the best suited.
This article works through a few examples of using Redis to solve the problems above:
- Use a Redis-backed proxy pool to keep your IP from getting banned.
- Use Redis to manage crawl state and implement an incremental crawler.
- Use Redis to build a distributed crawler for massive-scale crawling; the well-known distributed crawling solution scrapy-redis works on a similar principle.

Redis can store crawled data
When a crawler engineer wants to build an IP proxy pool, Redis is the obvious first choice.
Let's look at some code:
import redis
import requests
from lxml import etree

conn = redis.Redis(host='127.0.0.1', port=6379)
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.67 Safari/537.36 Edg/87.0.664.47"}

def get_https_proxy(num):
    https_proxy_url = f"http://www.nimadaili.com/https/{num}/"
    resp = requests.get(url=https_proxy_url, headers=headers).text
    # Use XPath to extract the proxy IP fields
    tree = etree.HTML(resp)
    https_ip_list = tree.xpath('/html/body/div/div[1]/div/table//tr/td[1]/text()')
    # Push the scraped proxy IPs into the database as elements of a list keyed 'https'
    for ip in https_ip_list:
        conn.lpush('https', ip)
    print('Number of HTTPS proxy IPs in the Redis database:', conn.llen('https'))

# Fetch proxy IPs from pages 1-5 of the proxy site
for n in range(1, 6):
    get_https_proxy(n)
In the Redis command-line client, enter:
lrange https 0 -1
and you will see the proxy IPs that were scraped.
Popping a proxy IP back out is just as simple:
import redis
conn = redis.Redis(host='127.0.0.1', port=6379)
proxies_ip = conn.rpop('https').decode('utf-8')
print(proxies_ip)
>> 106.14.247.221:8080
Pretty simple, right? Once you know this trick, you never have to worry about getting your IP banned again.
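As a rough sketch of how a popped proxy might actually be used (assuming the pool stores plain host:port strings, as above; fetch_with_proxy is a hypothetical helper, not from the original article):

import redis
import requests

conn = redis.Redis(host='127.0.0.1', port=6379)

def fetch_with_proxy(url):
    # Pop one proxy from the pool; None means the pool is empty
    raw = conn.rpop('https')
    if raw is None:
        raise RuntimeError('The proxy pool is empty')
    proxy = raw.decode('utf-8')  # e.g. "106.14.247.221:8080"
    # Route the request through the popped proxy
    return requests.get(url, proxies={'https': f'https://{proxy}'}, timeout=10)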
Redis can help implement an incremental crawler
When a crawler engineer needs to write an incremental crawler, they will almost certainly consider using Redis's set type to deduplicate URLs. Why?
Suppose we have the following requirement:
1. Crawl the historical prices from a vegetable market.
2. Update the prices for the current day every day.
My crawling approach is then:
1. Crawl the URLs of the daily price detail pages and store them in Redis as members of a set.
2. Crawl the price data behind every URL in that Redis set.
3. On day two (or day N), crawl the daily price detail-page URLs again and add them to the same set. If the insert returns 0, the URL already exists in the database and the detail page can be skipped; if it returns 1, the URL is new and the detail page should be crawled.

That may sound a little convoluted, so let's try it out in practice:
127.0.0.1:6379> sadd url www.baidu.com
(integer) 1
127.0.0.1:6379> sadd url www.baidu.com
(integer) 0
The first time we add www.baidu.com as a member of the url set, Redis returns (integer) 1; when we repeat the exact same operation, it returns (integer) 0, meaning the set already contains that value.
Using this property of Redis, URL deduplication becomes very easy to implement.
# An instance method from one of my crawler projects that drives the spider
def run_spider(self):
    # Iterate over the URLs of all the detail pages
    for url in self.get_link_list():
        # Try to add the URL to the Redis set
        j = self.conn.sadd('url', url)
        # sadd returns 1 for a new member and 0 for an existing one
        if j == 1:
            # New URL: crawl the detail page; otherwise skip it
            self.parse_detail(url)
    # Persist the data
    self.work_book.save('./price.xls')
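Only the method is shown above, so here is a minimal, hypothetical sketch of how such a crawler class might be wired up; the class name, get_link_list, parse_detail, and the use of xlwt for the .xls workbook are assumptions of mine, not code from the original project.

import redis
import xlwt  # assumption: an xlwt workbook, since the project saves to .xls

class PriceSpider:
    def __init__(self):
        self.conn = redis.Redis(host='127.0.0.1', port=6379)
        self.work_book = xlwt.Workbook()
        self.sheet = self.work_book.add_sheet('prices')
        self.sheet.write(0, 0, 'url')  # header row so the sheet is never empty
        self.row = 1

    def get_link_list(self):
        # Placeholder: return the day's price detail-page URLs
        return []

    def parse_detail(self, url):
        # Placeholder: fetch and parse the page, then record a row
        self.sheet.write(self.row, 0, url)
        self.row += 1

    def run_spider(self):
        for url in self.get_link_list():
            # sadd returns 1 only for URLs not seen before
            if self.conn.sadd('url', url) == 1:
                self.parse_detail(url)
        self.work_book.save('./price.xls')

if __name__ == '__main__':
    PriceSpider().run_spider()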
Redis can serve as the scheduler for a distributed crawler
You have surely heard of the famous distributed crawling framework scrapy-redis.
Distributed crawling sounds impressive, but the principle is simple: share Scrapy's scheduler on a server, and let the crawlers running on each machine fetch the URLs they need to crawl from that server.
Let's review the role of the scheduler, one of Scrapy's five core components:
Scheduler: accepts requests sent over by the engine, pushes them onto a queue, and hands them back when the engine asks again. You can think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to crawl next and removes duplicate URLs.
Below we simulate deploying a simple distributed crawler in three steps.
1. Step 1: collect the URLs whose data needs to be parsed.
import redis
import requests
from lxml import etree

conn = redis.Redis(host='127.0.0.1', port=6379)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
}

# Get the URL of every chapter of a novel; the URLs are pushed into a Redis list below
def get_catalogue():
    response = requests.get('https://www.tsxs.org/16/16814/', headers=headers)
    tree = etree.HTML(response.text)
    catalog_url_list = tree.xpath('//*[@id="chapterlist"]/li/a/@href')
    return catalog_url_list

for i in get_catalogue():
    full_link = 'https://www.tsxs.org' + i
    conn.lpush('catalogue', full_link)
This code pushes into Redis the URLs of all the novel pages that need to be crawled.
127.0.0.1:6379> lrange catalogue 0 -1
1) "https://www.tsxs.org/16/16814/13348772.html"
2) "https://www.tsxs.org/16/16814/13348771.html"
3) "https://www.tsxs.org/16/16814/13348770.html"
4) "https://www.tsxs.org/16/16814/13348769.html"
5) "https://www.tsxs.org/16/16814/13348768.html"
6) "https://www.tsxs.org/16/16814/13348767.html"
7) "https://www.tsxs.org/16/16814/13348766.html"
8) "https://www.tsxs.org/16/16814/13348765.html"
9) "https://www.tsxs.org/16/16814/13348764.html"
10) "https://www.tsxs.org/16/16814/13348763.html"
11) "https://www.tsxs.org/16/16814/13348762.html"
12) "https://www.tsxs.org/16/16814/13348761.html"
13) "https://www.tsxs.org/16/16814/13348760.html"
14) "https://www.tsxs.org/16/16814/13348759.html"
15) "https://www.tsxs.org/16/16814/13348758.html"
16) "https://www.tsxs.org/16/16814/13348757.html"
17) "https://www.tsxs.org/16/16814/13348756.html"
18) "https://www.tsxs.org/16/16814/13348755.html"
19) "https://www.tsxs.org/16/16814/13348754.html"
20) "https://www.tsxs.org/16/16814/13348753.html"
21) "https://www.tsxs.org/16/16814/13348752.html"
22) "https://www.tsxs.org/16/16814/13348751.html"
23) "https://www.tsxs.org/16/16814/13348750.html"
24) "https://www.tsxs.org/16/16814/13348749.html"
25) "https://www.tsxs.org/16/16814/13348748.html"
26) "https://www.tsxs.org/16/16814/13348747.html"
27) "https://www.tsxs.org/16/16814/13348746.html"
28) "https://www.tsxs.org/16/16814/13348745.html"
29) "https://www.tsxs.org/16/16814/13348744.html"
30) "https://www.tsxs.org/16/16814/13348743.html"
31) "https://www.tsxs.org/16/16814/13348742.html"
32) "https://www.tsxs.org/16/16814/13348741.html"
33) "https://www.tsxs.org/16/16814/13348740.html"
34) "https://www.tsxs.org/16/16814/13348739.html"
35) "https://www.tsxs.org/16/16814/13348738.html"
36) "https://www.tsxs.org/16/16814/13348737.html"
37) "https://www.tsxs.org/16/16814/13348736.html"
38) "https://www.tsxs.org/16/16814/13348735.html"
39) "https://www.tsxs.org/16/16814/13348734.html"
40) "https://www.tsxs.org/16/16814/13348733.html"
41) "https://www.tsxs.org/16/16814/13348732.html"
42) "https://www.tsxs.org/16/16814/13348731.html"
43) "https://www.tsxs.org/16/16814/13348730.html"
44) "https://www.tsxs.org/16/16814/13348729.html"
45) "https://www.tsxs.org/16/16814/13348728.html"
46) "https://www.tsxs.org/16/16814/13348727.html"
47) "https://www.tsxs.org/16/16814/13348726.html"
48) "https://www.tsxs.org/16/16814/13348725.html"
49) "https://www.tsxs.org/16/16814/13348724.html"
50) "https://www.tsxs.org/16/16814/13348723.html"
51) "https://www.tsxs.org/16/16814/13348722.html"
52) "https://www.tsxs.org/16/16814/13348721.html"
53) "https://www.tsxs.org/16/16814/13348720.html"
54) "https://www.tsxs.org/16/16814/13348719.html"
55) "https://www.tsxs.org/16/16814/13348718.html"
56) "https://www.tsxs.org/16/16814/13348717.html"
57) "https://www.tsxs.org/16/16814/13348716.html"
58) "https://www.tsxs.org/16/16814/13348715.html"
59) "https://www.tsxs.org/16/16814/13348714.html"
60) "https://www.tsxs.org/16/16814/13348713.html"
61) "https://www.tsxs.org/16/16814/13348712.html"
62) "https://www.tsxs.org/16/16814/13348711.html"
63) "https://www.tsxs.org/16/16814/13348710.html"
64) "https://www.tsxs.org/16/16814/13348709.html"
65) "https://www.tsxs.org/16/16814/13348708.html"
66) "https://www.tsxs.org/16/16814/13348707.html"
67) "https://www.tsxs.org/16/16814/13348706.html"
68) "https://www.tsxs.org/16/16814/13348705.html"
69) "https://www.tsxs.org/16/16814/13348704.html"
70) "https://www.tsxs.org/16/16814/13348703.html"
71) "https://www.tsxs.org/16/16814/13348702.html"
72) "https://www.tsxs.org/16/16814/13348701.html"
73) "https://www.tsxs.org/16/16814/13348700.html"
74) "https://www.tsxs.org/16/16814/13348699.html"
75) "https://www.tsxs.org/16/16814/13348698.html"
76) "https://www.tsxs.org/16/16814/13348697.html"
77) "https://www.tsxs.org/16/16814/13348696.html"
78) "https://www.tsxs.org/16/16814/13348695.html"
79) "https://www.tsxs.org/16/16814/13348694.html"
80) "https://www.tsxs.org/16/16814/13348693.html"
81) "https://www.tsxs.org/16/16814/13348692.html"
82) "https://www.tsxs.org/16/16814/13348691.html"
83) "https://www.tsxs.org/16/16814/13348690.html"
84) "https://www.tsxs.org/16/16814/13348689.html"
85) "https://www.tsxs.org/16/16814/13348688.html"
86) "https://www.tsxs.org/16/16814/13348687.html"
87) "https://www.tsxs.org/16/16814/13348686.html"
88) "https://www.tsxs.org/16/16814/13348685.html"
89) "https://www.tsxs.org/16/16814/13348684.html"
90) "https://www.tsxs.org/16/16814/13348683.html"
91) "https://www.tsxs.org/16/16814/13348682.html"
92) "https://www.tsxs.org/16/16814/13348681.html"
93) "https://www.tsxs.org/16/16814/13348680.html"
94) "https://www.tsxs.org/16/16814/13348679.html"
95) "https://www.tsxs.org/16/16814/13348678.html"
96) "https://www.tsxs.org/16/16814/13348677.html"
97) "https://www.tsxs.org/16/16814/13348676.html"
98) "https://www.tsxs.org/16/16814/13348675.html"
99) "https://www.tsxs.org/16/16814/13348674.html"
100) "https://www.tsxs.org/16/16814/13348673.html"
101) "https://www.tsxs.org/16/16814/13348672.html"
102) "https://www.tsxs.org/16/16814/13348671.html"
103) "https://www.tsxs.org/16/16814/13348670.html"
104) "https://www.tsxs.org/16/16814/13348669.html"
105) "https://www.tsxs.org/16/16814/13348668.html"
2. Step 2: edit the Redis configuration file.
   - The file is redis.conf on Linux/macOS and redis.windows.conf on Windows.
   - Open the file and remove (or comment out) the bind 127.0.0.1 line.
   - Disable protected mode: change protected-mode yes to protected-mode no.
   - Start the Redis server with that configuration file: redis-server <config file>.
   - Start the client: redis-cli.


3. Step 3: access the Redis service from other machines on the local network.
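A quick sanity check from a second machine (a small sketch; 192.168.125.101 stands in for the server's LAN address, matching the worker code below):

import redis

# Connect to the shared Redis server over the LAN and verify it is reachable
conn = redis.Redis(host='192.168.125.101', port=6379)
print(conn.ping())  # True means this worker can reach the shared queue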

Now start crawling. Each worker uses Redis's lpop command, which returns a URL from the database and removes it from the database at the same time.
import redis
import requests
from lxml import etree

conn = redis.Redis(host='192.168.125.101', port=6379)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
}

def extract_data(per_link):
    response = requests.get(url=per_link, headers=headers).content
    tree = etree.HTML(response.decode('gbk'))
    title = tree.xpath('//*[@id="mains"]/div[1]/h1/text()')[0]
    # Join all text nodes of the chapter body, not just the first fragment
    content = ''.join(tree.xpath('//*[@id="book_text"]//text()'))
    return title, content

def save_to_pc(title, content):
    print(title + ' download started!')
    with open(title + '.txt', 'w', encoding='utf-8') as f:
        f.write(content)
    print(title + ' download finished!')

def run_spider():
    print('Spider started!')
    # lpop returns one URL and removes it from the shared queue
    link = conn.lpop('catalogue').decode('utf-8')
    title, content = extract_data(link)
    save_to_pc(title, content)
    print('Download finished!')

run_spider()
>> Spider started!
章節目錄 第105章 同樣是君子 download started!
章節目錄 第105章 同樣是君子 download finished!
Download finished!
That's it: every crawled URL removes one entry from Redis, and the workers keep going until everything has been crawled. A sketch of a worker loop that keeps popping until the queue is empty is shown below.
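As a minimal sketch (not from the original article), a worker can wrap the same logic in a loop and stop when lpop returns nothing, reusing conn, extract_data, and save_to_pc from the code above:

def run_worker():
    # Keep popping chapter URLs from the shared queue until it is empty
    while True:
        raw = conn.lpop('catalogue')
        if raw is None:  # queue exhausted: nothing left for this worker
            print('All chapters have been crawled.')
            break
        link = raw.decode('utf-8')
        title, content = extract_data(link)
        save_to_pc(title, content)

run_worker()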

