Scrapy Crawler for Stock Data

Goal: fetch the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges

Output: saved to a file

Technical approach: Scrapy

Choosing the data sources:

Get the stock list:
Eastmoney: http://quote.eastmoney.com/stocklist.html
Get per-stock information:
Baidu Stocks: https://gupiao.baidu.com/stock/
A single stock, e.g.: https://gupiao.baidu.com/stock/sz002439.html

Steps:

  • Step 1: create the project and the Spider template
  • Step 2: write the Spider
  • Step 3: write the Item Pipelines

Step 1: create the project and the Spider template

>scrapy startproject BaiduStocks
>cd BaiduStocks
>scrapy genspider stocks baidu.com
Then edit the generated spiders/stocks.py file.
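
For reference, the file produced by genspider looks roughly like the sketch below (the exact template varies with the Scrapy version); Step 2 replaces its parse method:

# spiders/stocks.py -- typical genspider output (version-dependent sketch)
import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass

Note that the final spider in Step 2 drops allowed_domains, since keeping it at baidu.com would let the offsite middleware filter out requests to quote.eastmoney.com.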

Step 2: write the Spider

  • Configure the stocks.py file
  • Modify the handling of returned pages
  • Modify the handling of crawl requests for newly discovered URLs
# stocks.py
# -*- coding: utf-8 -*-
import re

import scrapy


class StocksSpider(scrapy.Spider):
    name = "stocks"
    start_urls = ['https://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # Walk every link on the listing page; keep only hrefs that
        # contain a stock code such as sh600000 or sz002439, and
        # request the corresponding Baidu Stocks detail page.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                continue

    def parse_stock(self, response):
        # Collect the <dt>/<dd> pairs of the .stock-bets block into a dict.
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
            except IndexError:
                val = '--'
            infoDict[key] = val

        # Assemble the stock name from the text and markup of .bets-name.
        infoDict.update(
            {'股票名称': re.findall(r'\s.*\(', name)[0].split()[0] +
                         re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict
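
The regex r"[s][hz]\d{6}" matches an sh (Shanghai) or sz (Shenzhen) prefix followed by a six-digit code. A quick interpreter check on an illustrative href:

>>> import re
>>> re.findall(r"[s][hz]\d{6}", "https://gupiao.baidu.com/stock/sz002439.html")
['sz002439']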

Step 3: write the Item Pipelines

  • Configure the pipelines.py file
  • Define a class that processes each scraped item
  • Configure the ITEM_PIPELINES option
# pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    # Default pipeline generated by Scrapy; not used in this project.
    def process_item(self, item, spider):
        return item


class BaidustocksInfoPipeline(object):
    def open_spider(self, spider):
        # Called when the spider starts: open the output file.
        self.f = open('BaiduStockInfo.txt', 'w')

    def close_spider(self, spider):
        # Called when the spider finishes: close the output file.
        self.f.close()

    def process_item(self, item, spider):
        # Serialize each item as one line of text; skip items that fail.
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item
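
str(dict(item)) writes Python-literal lines. If machine-readable output is preferred, a minimal JSON-lines variant is sketched below (the class and file names are hypothetical, not part of the original tutorial; ensure_ascii=False keeps keys like '股票名称' readable):

# A hypothetical JSON-lines alternative to BaidustocksInfoPipeline
import json


class BaidustocksJsonPipeline(object):
    def open_spider(self, spider):
        # utf-8 so the Chinese field names round-trip cleanly
        self.f = open('BaiduStockInfo.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # One JSON object per line
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item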

Configure the settings.py file

# Configure item pipelines
# See https://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
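
With the pipeline registered, run the crawler from the project directory; the scraped items end up in BaiduStockInfo.txt:

>scrapy crawl stocks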

How can the crawl speed of a Scrapy crawler be further improved?
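
Scrapy downloads pages concurrently out of the box, so throughput is mostly governed by the concurrency options in settings.py. The option names below are standard Scrapy settings; the values are illustrative assumptions to tune against the target site, not recommendations:

# settings.py -- concurrency knobs (illustrative values)
CONCURRENT_REQUESTS = 32             # max concurrent requests (default 16)
CONCURRENT_REQUESTS_PER_DOMAIN = 16  # per-domain cap (default 8)
CONCURRENT_ITEMS = 100               # items processed in parallel per response (default 100)
DOWNLOAD_DELAY = 0                   # extra delay between requests (default 0)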

