
A 起点中文网 (Qidian) Crawler

Posted: 2021-10-27 09:18:03

A Python 3 crawler for the alternate-history novels on 起点中文网 (Qidian)

An attempt to scrape some information about all of the alternate-history novels on Qidian.

The fields collected are: book URL, title, author, synopsis, rating, review count, and word count.

First, go to Qidian's ranking page for alternate-history novels: https://www.qidian.com/all?chanId=5&subCateId=22&orderId=&style=1&pageSize=20&siteid=1&pubflag=0&hiddenField=0&page=1

Step 1: parse the page and get each book's URL

On the ranking page, right-click a book title and choose Inspect; you can see the book's URL there.

We fetch the page source with requests.get(), pull out each book's id with a regular expression, and then piece the ids together into URLs.

# Get each book's id
import re
import requests

def book_list():
    url = 'https://www.qidian.com/all?chanId=5&subCateId=22&orderId=&style=1&pageSize=20&siteid=1&pubflag=0&hiddenField=0&page=4'
    # Open the url and take the returned page
    html = requests.get(url).text
    print(html)
    # The href is protocol-relative; book.qidian.com is the assumed book-page host
    ren = r' <h4><a href="//book.qidian.com/info/(.*?)"'
    ren_url = re.compile(ren)
    book_url = ren_url.findall(html)
    return book_url

book_url is the list of book ids; just assemble them into URLs following the site's format.
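For example, a minimal sketch (assuming the book detail pages live under https://book.qidian.com/info/):

for book_id in book_list():
    url = 'https://book.qidian.com/info/' + book_id   # one detail-page URL per id
    print(url)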

Step 2: get the total page count

You can see that page=1 in the URL gives the first page, and the site currently shows 1016 pages in total. Let's scrape that number!

# Get the total page count
def get_page_Count(url):
    html = requests.get(url).text
    pageCount = re.compile(r'data-page="(.*?)"').findall(html)[-1]
    return pageCount

pageCount is the total number of pages, i.e. the 1016 shown at the moment.
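Note the regex capture comes back as a string, so cast it before looping over pages (same listing URL as above):

page_url = 'https://www.qidian.com/all?chanId=5&subCateId=22'
page_Count = int(get_page_Count(page_url))   # e.g. 1016
for i in range(1, page_Count):
    ...   # handle page i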

Step 3: open each book's URL and scrape its details

Opening a book's URL brings up its detail page; the fields highlighted in red in the screenshot are the ones to scrape.

First, let's grab the book title. Same old method: requests plus a regex.

# Get the book title
import re
import requests

def book_name():
    url = 'https://book.qidian.com/info/1010136878'
    html = requests.get(url).text
    ren = r' <h1>.*?<em>(.*?)</em>'
    ren_name = re.compile(ren)
    name = ren_name.findall(html)
    name = ".".join(name)
    print("Title: " + name)
    return name

book_name()

The author's name is fetched the same way.

# Get the author
import re
import requests

def book_authorname():
    url = 'https://book.qidian.com/info/1010136878'
    html = requests.get(url).text
    ren = r"authorName: '(.*?)'"
    ren_autorname = re.compile(ren)
    authorname = ren_autorname.findall(html)
    authorname = ".".join(authorname)
    print("Author: " + authorname)
    return authorname

book_authorname()

Next, the synopsis. Same method, but note that the synopsis is full of punctuation, which the regex has to handle carefully.

# Get the synopsis
import re
import requests

def book_summary():
    url = 'https://book.qidian.com/info/1010136878'
    html = requests.get(url).text
    ren = r'<div class="book-intro">\s+<p>\s+(.*?)\s+</p>'
    ren_symmary = re.compile(ren)
    summary = ren_symmary.findall(html)
    summary = ".".join(summary)
    # Keep only Chinese characters, digits and common punctuation
    summary = re.sub('[^\u4e00-\u9fa51-9,。?!.、:;''"《》()—]', '', summary)
    print("Synopsis: " + summary)
    return summary

book_summary()

That part was simple enough. Then came the first snag: the rating is filled in by JavaScript, so it can't be seen directly in the page source.

No idea how to proceed. What to do? Go see how other people solved it~

I won't go into detail here; see this post: /ridicuturing/article/details/81123587

Here's my own code:

# Get the rating
from urllib import request
import re

bookrate = []

def book_rate(id):
    id = "".join(id)
    url = 'https://book.qidian.com/ajax/comment/index?_csrfToken=nJO0N8zar6LMkYrhA9rwSTraUEIPhtcKkxyyF4mz&bookId=' + id + '&pageSize=15'
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r'"rate":(.*?),'
    ren_symmary = re.compile(ren)
    rate = ren_symmary.findall(html)
    rate = ".".join(rate)
    print("Rating: " + rate)
    bookrate.append(rate)
    return rate

The review count works just like the rating.

# Get the review count
bookuserCount = []

def book_userCount(id):
    id = "".join(id)
    url = 'https://book.qidian.com/ajax/comment/index?_csrfToken=nJO0N8zar6LMkYrhA9rwSTraUEIPhtcKkxyyF4mz&bookId=' + id + '&pageSize=15'
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r'"userCount":(.*?),'
    ren_symmary = re.compile(ren)
    userCount = ren_symmary.findall(html)
    userCount = ".".join(userCount)
    print("Reviews: " + userCount)
    bookuserCount.append(userCount)
    return userCount
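Both functions take the id as a list, the same way the full code later extracts it from a book URL. A usage sketch:

id = re.compile(r'[0-9]+').findall('https://book.qidian.com/info/1010136878')
book_rate(id)        # prints "Rating: ..." for book 1010136878
book_userCount(id)   # prints "Reviews: ..."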

Solving that one felt great~ and then came an even harder problem...

Qidian uses font-based anti-scraping: the digits render as little boxes, so the word count is unreadable. Absolutely infuriating.

Another problem I couldn't solve?! Keep looking at how other people did it!

Finally found a solution:

1. Converting the boxes into hexadecimal Unicode code points: /qq_42336573/article/details/80698580

2. Mapping the Unicode code points to the digits in the woff font file: /qq_35741999/article/details/8049

After reading those two posts, the cause was clear: the digits on the page are rendered with a custom font the computer doesn't have, which is why they can't be displayed.

We need to scrape the boxes that stand for the digits, then use the woff file to translate them into digits the computer can recognize.

My code:

import re
import requests
from pyquery import PyQuery as pq
from fontTools.ttLib import TTFont
from io import BytesIO

numlist = []   # decimal code points of the obfuscated digits
s = ""         # string of numeric character references to convert
booknum = []   # collected word counts

# Get the name of the woff font file from the book page
def get_woff_previous(id):
    id = "".join(id)
    start_url = "https://book.qidian.com/info/" + id
    response = requests.get(start_url).text
    doc = pq(response)
    doc = doc.text()
    a = re.compile(r'(.*?).woff').findall(doc)[0]
    font = re.compile(r'\w+').findall(a)[4]
    return font

# Get the code points of the obfuscated digits
def get_code(id):
    id = "".join(id)
    start_url = "https://book.qidian.com/info/" + id
    response = requests.get(start_url).text
    doc = pq(response)
    doc = doc.text()
    num = re.compile(r'(.*?)万字').findall(doc)[0]
    for i in num:
        # numlist holds the code points; the boxes are converted one by one, hence a list
        numlist.append(ord(i))
    return numlist

# Download the font and read its character map
def get_font():
    # qidian.gtimg.com is the assumed host of the anti-spider fonts;
    # `id` is the book id list set by the caller (see the full code below)
    url = "https://qidian.gtimg.com/qd_anti_spider/" + get_woff_previous(id) + ".woff"
    response = requests.get(url)
    font = TTFont(BytesIO(response.content))
    cmap = font.getBestCmap()
    font.close()
    return cmap

# Decode glyph names back into digits
def get_encode(cmap, values):
    WORD_MAP = {'zero': '0', 'one': '1', 'two': '2', 'three': '3', 'four': '4', 'five': '5',
                'six': '6', 'seven': '7', 'eight': '8', 'nine': '9', 'period': '.'}
    word_count = ''
    for value in values:
        key = cmap[int(value)]
        word_count += WORD_MAP[key]
    return word_count

# Get the book's word count
def get_num():
    global s, numlist
    for i in get_code(id):
        s = s + "&#" + str(i) + ";"
    s = re.compile(r'[0-9]+').findall(s)
    cmap = get_font()
    word_count = get_encode(cmap, s)
    booknum.append(word_count)
    print("Word count: " + word_count + "万字")
    s = ""       # reset the globals for the next book
    numlist = []

All the pieces are written, so now we can save everything to the database.

db = pymysql.connect(host='localhost', port=3306, user='root', password='123', db='spider', charset='utf8')
cursor = db.cursor()
sql1 = "insert into bookspider(book_url,book_name,book_author,book_summary,book_id,book_rate,book_userCount,book_num) values ('%s','%s','%s','%s','%s','%s','%s','%s')" % (bookurl[0], bookname[0], bookauthor[0], booksummary[0], bookid[0], bookrate[0], bookuserCount[0], booknum[0])
cursor.execute(sql1)
db.commit()
print("Saved to the database!")
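One caveat: interpolating scraped strings straight into SQL breaks as soon as a synopsis contains a quote character. pymysql's parameterized form is safer; the same insert would look like this:

sql1 = ("insert into bookspider(book_url,book_name,book_author,book_summary,"
        "book_id,book_rate,book_userCount,book_num) "
        "values (%s,%s,%s,%s,%s,%s,%s,%s)")
# pymysql escapes each value itself when passed as a parameter tuple
cursor.execute(sql1, (bookurl[0], bookname[0], bookauthor[0], booksummary[0],
                      bookid[0], bookrate[0], bookuserCount[0], booknum[0]))
db.commit()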

Full code

from urllib import request
import re
import time
import pymysql
import requests
from pyquery import PyQuery as pq
from fontTools.ttLib import TTFont
from io import BytesIO

# These lists hold the scraped fields until they are written to the database
bookurl = []
bookname = []
bookauthor = []
booksummary = []
bookid = []
bookrate = []
bookuserCount = []
booknum = []
numlist = []
s = ""
book_url_list = []

# Get the total page count
def get_page_Count(url):
    html = requests.get(url).text
    pageCount = re.compile(r'data-page="(.*?)"').findall(html)[-1]
    return pageCount

# Get the book title
def book_name(url):
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r' <h1>.*?<em>(.*?)</em>'
    ren_name = re.compile(ren)
    name = ren_name.findall(html)
    name = ".".join(name)
    print("Title: " + name)
    bookname.append(name)
    return name

# Get the author
def book_authorname(url):
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r"authorName: '(.*?)'"
    ren_autorname = re.compile(ren)
    authorname = ren_autorname.findall(html)
    authorname = ".".join(authorname)
    print("Author: " + authorname)
    bookauthor.append(authorname)
    return authorname

# Get the synopsis
def book_summary(url):
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r'<div class="book-intro">\s+<p>\s+(.*?)\s+</p>'
    ren_symmary = re.compile(ren)
    summary = ren_symmary.findall(html)
    summary = ".".join(summary)
    # Keep only Chinese characters, digits and common punctuation
    summary = re.sub('[^\u4e00-\u9fa51-9,。?!.、:;''"《》()—]', '', summary)
    print("Synopsis: " + summary)
    booksummary.append(summary)
    return summary

# Get the rating
def book_rate(id):
    id = "".join(id)
    url = 'https://book.qidian.com/ajax/comment/index?_csrfToken=nJO0N8zar6LMkYrhA9rwSTraUEIPhtcKkxyyF4mz&bookId=' + id + '&pageSize=15'
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r'"rate":(.*?),'
    ren_symmary = re.compile(ren)
    rate = ren_symmary.findall(html)
    rate = ".".join(rate)
    print("Rating: " + rate)
    bookrate.append(rate)
    return rate

# Get the review count
def book_userCount(id):
    id = "".join(id)
    url = 'https://book.qidian.com/ajax/comment/index?_csrfToken=nJO0N8zar6LMkYrhA9rwSTraUEIPhtcKkxyyF4mz&bookId=' + id + '&pageSize=15'
    rsp = request.urlopen(url)
    html = rsp.read()
    html = html.decode()
    ren = r'"userCount":(.*?),'
    ren_symmary = re.compile(ren)
    userCount = ren_symmary.findall(html)
    userCount = ".".join(userCount)
    print("Reviews: " + userCount)
    bookuserCount.append(userCount)
    return userCount

# Get the book ids on one listing page
def book_list(i):
    url = "https://www.qidian.com/all?chanId=5&subCateId=22&orderId=&style=1&pageSize=20&siteid=1&pubflag=0&hiddenField=0&page=" + str(i)
    # Open the url and take the returned page
    rsp = request.urlopen(url)
    # Ctrl-click on urlopen to see the docs, with the parameters and usage
    html = rsp.read()
    # Decode the response
    html = html.decode()
    print(html)
    ren = r' <h4><a href="//book.qidian.com/info/(.*?)"'
    ren_url = re.compile(ren)
    book_url = ren_url.findall(html)
    return book_url

# Get the name of the woff font file
def get_woff_previous(id):
    id = "".join(id)
    start_url = "https://book.qidian.com/info/" + id
    response = requests.get(start_url).text
    doc = pq(response)
    doc = doc.text()
    a = re.compile(r'(.*?).woff').findall(doc)[0]
    font = re.compile(r'\w+').findall(a)[4]
    return font

# Get the code points of the obfuscated digits
def get_code(id):
    id = "".join(id)
    start_url = "https://book.qidian.com/info/" + id
    response = requests.get(start_url).text
    doc = pq(response)
    doc = doc.text()
    num = re.compile(r'(.*?)万字').findall(doc)[0]
    for i in num:
        numlist.append(ord(i))
    return numlist

# Download the font and read its character map
def get_font():
    # qidian.gtimg.com is the assumed host of the anti-spider fonts
    url = "https://qidian.gtimg.com/qd_anti_spider/" + get_woff_previous(id) + ".woff"
    response = requests.get(url)
    font = TTFont(BytesIO(response.content))
    cmap = font.getBestCmap()
    font.close()
    return cmap

# Decode glyph names back into digits
def get_encode(cmap, values):
    WORD_MAP = {'zero': '0', 'one': '1', 'two': '2', 'three': '3', 'four': '4', 'five': '5',
                'six': '6', 'seven': '7', 'eight': '8', 'nine': '9', 'period': '.'}
    word_count = ''
    for value in values:
        key = cmap[int(value)]
        word_count += WORD_MAP[key]
    return word_count

# Get the book's word count
def get_num():
    global s, numlist
    for i in get_code(id):
        s = s + "&#" + str(i) + ";"
    s = re.compile(r'[0-9]+').findall(s)
    cmap = get_font()
    word_count = get_encode(cmap, s)
    booknum.append(word_count)
    print("Word count: " + word_count + "万字")
    s = ""       # reset the globals for the next book
    numlist = []

# Use urllib.request to fetch each page's content and print it
if __name__ == '__main__':
    # The listing to crawl
    page_url = 'https://www.qidian.com/all?chanId=5&subCateId=22'
    page_Count = int(get_page_Count(page_url))
    for i in range(1, page_Count):
        for j in book_list(i):
            url = 'https://book.qidian.com/info/' + j
            print(url)  # print the book URL
            bookurl.append(url)
            book_name(url)
            book_authorname(url)
            book_summary(url)
            id = re.compile(r'[0-9]+').findall(url)
            print("ID: " + "".join(id))
            bookid.append("".join(id))
            book_rate(id)
            book_userCount(id)
            get_num()
            # Write the record to the database
            db = pymysql.connect(host='localhost', port=3306, user='root', password='123', db='spider', charset='utf8')
            cursor = db.cursor()
            sql1 = "insert into bookspider(book_url,book_name,book_author,book_summary,book_id,book_rate,book_userCount,book_num) values ('%s','%s','%s','%s','%s','%s','%s','%s')" % (bookurl[0], bookname[0], bookauthor[0], booksummary[0], bookid[0], bookrate[0], bookuserCount[0], booknum[0])
            cursor.execute(sql1)
            db.commit()
            print("Saved to the database!")
            print()  # blank line between books
            bookurl = []
            bookname = []
            bookauthor = []
            booksummary = []
            bookid = []
            bookrate = []
            bookuserCount = []
            booknum = []
            # time.sleep(10)  # request interval; better to turn this on

Finally, this was the first crawler I have truly completed on my own from start to finish. It took two full days, which is honestly pretty slow... but the payoff was big, especially everything I learned about anti-scraping. It was also my first time using the requests library; I had always used urllib before, which is why the code mixes requests and urllib. requests feels much more convenient. As for the font-mapping part, I still don't fully understand how it was worked out, but at least I can use it. This simple crawler still has plenty of room to improve: there is no try/except yet, no live updating, and it isn't distributed. A long way to go~
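Something like this could be a start for the try/except and the request interval, inside the page loop (just a sketch using the functions above):

import time

for j in book_list(i):
    url = 'https://book.qidian.com/info/' + j
    try:
        book_name(url)        # any of these can raise on a bad response
        book_authorname(url)
        book_summary(url)
    except Exception as e:
        print("Skipping " + url + ": " + str(e))   # log the failure and move on
    time.sleep(10)            # pause between books to be polite to the server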

If anyone reading this finds something unclear, just ask me, although I may not be able to answer it either, hahaha~

When you hit a problem you can't solve, search around online; plenty of people have stepped in the same pits before you. A solution is out there. Don't give up!
