This post originally appeared on the 52pojie (吾爱破解) forum.
I'm learning Python on 52pojie and have only just started with web scraping. This is a practice crawl of Douban's Top 250 books list; please go easy on me.
from lxml import etree
import requests
import csv

# Browser-like User-Agent so Douban does not reject the requests
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}

# Open the output CSV and write the header row
df = open(r'e:\douban.csv', 'wt', newline='', encoding='utf-8-sig')
writer = csv.writer(df)
writer.writerow(('name', 'url', 'author', 'publisher', 'date', 'price', 'rate', 'comment'))

# Douban Top 250 books: 10 pages, 25 entries per page
urls = ['https://book.douban.com/top250?start={}'.format(str(i)) for i in range(0, 250, 25)]

for url in urls:
    html = requests.get(url, headers=headers)
    selector = etree.HTML(html.text)
    # Each book sits in a <tr class="item"> row ("class" must be lowercase in the XPath)
    infos = selector.xpath('//tr[@class="item"]')
    for info in infos:
        name = info.xpath('td/div/a/@title')[0]
        book_url = info.xpath('td/div/a/@href')[0]
        # The info line looks like "author / [translator /] publisher / date / price",
        # so index the trailing fields from the end to survive the optional translator
        book_info = info.xpath('td/p/text()')[0]
        parts = book_info.split('/')
        author = parts[0].strip()
        publisher = parts[-3].strip()
        date = parts[-2].strip()
        price = parts[-1].strip()
        rate = info.xpath('td/div/span[2]/text()')[0]
        # The one-line quote is missing for some books, so fall back to a placeholder
        comments = info.xpath('td/p/span/text()')
        comment = comments[0] if len(comments) != 0 else '空'
        writer.writerow((name, book_url, author, publisher, date, price, rate, comment))

df.close()
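A note on the parsing step above: the td/p/text() field mixes a variable number of '/'-separated pieces, because some books list a translator and others don't. The short sketch below (the two sample strings are made up purely for illustration) shows why publisher, date and price are indexed from the end of the split rather than by fixed positive positions:

for line in ('作者 / 译者 / 出版社 / 2003-8 / 22.00元',   # hypothetical entry with a translator
             '作者 / 出版社 / 1998-5 / 12.00元'):          # hypothetical entry without one
    parts = [p.strip() for p in line.split('/')]
    # parts[-3], parts[-2], parts[-1] are always publisher, date and price,
    # while a fixed positive index would shift when the translator field is absent
    print(parts[0], parts[-3], parts[-2], parts[-1])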