A Step-by-Step Guide to Crawling a Zhihu Influencer's Follower List with Scrapy
Overview: Starting from one high-profile Zhihu user ("big V"), we fetch the lists of users they follow (followees) and users who follow them (followers), and retrieve detailed profile information for that user and for everyone on both lists. We then recurse: each discovered user's followee and follower lists are fetched in turn, eventually collecting information on a large number of users.

Create a new Scrapy project with scrapy startproject zhihuuser and move into the new directory with cd zhihuuser. Then generate the spider: scrapy genspider zhihu zhihu.com.
01 Define the spider.py file
Define the URLs to crawl, the crawl rules, and the parsing callbacks.
# -*- coding: utf-8 -*-
import json
from scrapy import Spider, Request
from zhihuuser.items import UserItem

class ZhihuSpider(Spider):
    name = 'zhihu'
    allowed_domains = ['zhihu.com']
    start_urls = ['http://zhihu.com/']
    # Seed user and API URL templates for the crawl
    start_user = 'excited-vczh'
    user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
    user_query = 'allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics'
    follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}'
    follows_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset={offset}&limit={limit}'
    followers_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'

    # Issue the initial requests: the seed user's profile, followees, and followers
    def start_requests(self):
        yield Request(self.user_url.format(user=self.start_user, include=self.user_query), callback=self.parseUser)
        yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, offset=0, limit=20), callback=self.parseFollows)
        yield Request(self.followers_url.format(user=self.start_user, include=self.followers_query, offset=0, limit=20), callback=self.parseFollowers)

    # Parse a user's detailed profile
    def parseUser(self, response):
        result = json.loads(response.text)
        item = UserItem()
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item
        # Recurse: also request this user's followee and follower lists
        yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, offset=0, limit=20), callback=self.parseFollows)
        yield Request(self.followers_url.format(user=result.get('url_token'), include=self.followers_query, offset=0, limit=20), callback=self.parseFollowers)

    # Parse one page of the followee list
    def parseFollows(self, response):
        results = json.loads(response.text)
        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), callback=self.parseUser)
        # Follow pagination until the API reports the last page
        if 'paging' in results.keys() and results.get('paging').get('is_end') is False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, callback=self.parseFollows)

    # Parse one page of the follower list
    def parseFollowers(self, response):
        results = json.loads(response.text)
        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), callback=self.parseUser)
        if 'paging' in results.keys() and results.get('paging').get('is_end') is False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, callback=self.parseFollowers)
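Note that the spider never hard-codes a full API URL: each request is built by filling the URL templates with str.format. A standalone sketch of that expansion (the include value here is shortened for readability, not the full query string used above):

follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}'
print(follows_url.format(user='excited-vczh', include='data[*].follower_count', offset=0, limit=20))
# -> https://www.zhihu.com/api/v4/members/excited-vczh/followees?include=data[*].follower_count&offset=0&limit=20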
02 Define the items.py file
Define the fields of the scraped data so it is stored in a consistent structure.
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
from scrapy import Field, Item

class UserItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    allow_message = Field()
    answer_count = Field()
    articles_count = Field()
    avatar_url = Field()
    avatar_url_template = Field()
    badge = Field()
    employments = Field()
    follower_count = Field()
    gender = Field()
    headline = Field()
    id = Field()
    name = Field()
    type = Field()
    url = Field()
    url_token = Field()
    user_type = Field()
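The field names in UserItem deliberately mirror the keys of the JSON returned by the user API; that is what lets parseUser copy values generically instead of assigning each field by hand. A minimal self-contained sketch of that matching loop (DemoItem and the sample dict are hypothetical stand-ins):

from scrapy import Field, Item

# Hypothetical two-field item, standing in for UserItem
class DemoItem(Item):
    name = Field()
    gender = Field()

result = {'name': 'example', 'gender': 1, 'unrelated_key': 'ignored'}
item = DemoItem()
for field in item.fields:   # same loop as in parseUser
    if field in result:
        item[field] = result[field]
print(dict(item))           # {'name': 'example', 'gender': 1}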
03 Define the pipelines.py file
Store the scraped data in MongoDB.
# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo

# Store items in MongoDB
class MongoPipeline(object):
    collection_name = 'users'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Upsert keyed on url_token so each user is stored only once (deduplication)
        self.db[self.collection_name].update_one(
            {'url_token': item['url_token']}, {'$set': dict(item)}, upsert=True)
        return item
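Because process_item upserts keyed on url_token, re-crawling a user overwrites the existing document rather than inserting a duplicate. A minimal sketch for inspecting the stored data from a Python shell, assuming MongoDB is running locally with the connection settings from the next section:

import pymongo

client = pymongo.MongoClient('localhost')
db = client['zhihu']
print(db['users'].count_documents({}))  # number of distinct users stored so far
print(db['users'].find_one({}, {'_id': 0, 'name': 1, 'follower_count': 1}))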
04 Define the settings.py file
Enable the MongoDB pipeline, define the request headers, and disable robots.txt compliance.
# -*- coding: utf-8 -*-
BOT_NAME = 'zhihuuser'
SPIDER_MODULES = ['zhihuuser.spiders']

# Obey robots.txt rules
ROBOTSTXT_OBEY = False  # do not honor robots.txt restrictions on what may be crawled

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36',
    'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20'
}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'zhihuuser.pipelines.MongoPipeline': 300,
}
MONGO_URI = 'localhost'
MONGO_DATABASE = 'zhihu'
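These settings are the minimum the tutorial needs. Because the spider fans out very quickly, it may also be worth throttling it; the lines below are an optional, illustrative addition, not part of the original configuration (DOWNLOAD_DELAY and CONCURRENT_REQUESTS are standard Scrapy settings):

DOWNLOAD_DELAY = 0.5      # seconds to wait between requests
CONCURRENT_REQUESTS = 8   # Scrapy's default is 16; lower values reduce the risk of being blocked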
Start the crawl with scrapy crawl zhihu. Part of the crawl output is shown in Figure 8-4.

▲Figure 8-4: Part of the crawl output
Part of the data stored in MongoDB is shown in Figure 8-5.

▲Figure 8-5: Part of the data stored in MongoDB
