Python Official Account API: Fetching Image-Text Messages from the WeChat Official Account Material Library with Python

Date: 2018-06-02 19:53:18


# -*- coding:utf-8 -*-

"""Download the cover and inline images of WeChat Official Account (公众号) news
articles that have been cached in MongoDB, and check each article body against a
banned-word list."""

import os
import urllib.parse
from html.parser import HTMLParser

import requests
from bs4 import BeautifulSoup
from pymongo import MongoClient

class ContentHtmlParser(HTMLParser):
    """Strip HTML tags, keeping only the text content."""

    def __init__(self):
        HTMLParser.__init__(self)
        self.text = ""

    def handle_data(self, data):
        self.text += data

    def get_text(self):
        return self.text

# MongoDB connection; replace "ip" with the actual MongoDB host
mongo_client = MongoClient("ip", 27017)
mongo_db = mongo_client["gongzhonghao"]

def get_words():
    """Load the banned-word list from words.txt: one word per line, or several
    words on one line separated by the Chinese enumeration comma '、'."""
    words = []
    with open("words.txt", encoding="utf-8") as words_file:
        for line in words_file.readlines():
            if len(line.strip()) == 0:
                continue
            if line.find("、") != -1:
                for p in line.split("、"):
                    words.append(p.replace("\n", ""))
            else:
                words.append(line.replace("\n", ""))
    return words

def get_articles(clt):
    """Read the cached material-library document from the given MongoDB
    collection and return the list of news articles it contains."""
    articles = []
    collection = mongo_db[clt]
    doc = collection.find_one()
    items = doc["items"]
    for it in items:
        content = it["content"]["news_item"][0]  # only the first article of each post
        articles.append(content)
    return articles

def download(dir, file_name, url):
    """Download url into dir/file_name, skipping files that already exist."""
    if not os.path.exists(dir):
        os.mkdir(dir)
    try:
        resp = requests.get(url)
        path = os.path.join(dir, file_name)
        if os.path.exists(path):
            return
        with open(path, "wb") as f:
            f.write(resp.content)
    except Exception:
        # Log the URL that failed to download and keep going
        print(url)

def find_images(content):
    """Collect the data-src attribute of every <img> tag in the article body."""
    imgs = []
    c = urllib.parse.unquote(content)
    img_labels = BeautifulSoup(c, "html.parser").find_all("img")
    for img in img_labels:
        src = img.get("data-src")
        if src:  # skip <img> tags that have no data-src attribute
            imgs.append(src)
    return imgs

def get_suffix(url):
    """Guess the image file extension from the wx_fmt=... query parameter of a
    WeChat image URL; fall back to .jpg."""
    try:
        suffix = url[url.rindex("=") + 1:]
        if suffix == "jpeg" or suffix == "other":
            return ".jpg"
        return "." + suffix
    except Exception:
        return ".jpg"

def filter_content(content):
    """Strip HTML tags from the article body, returning plain text."""
    parser = ContentHtmlParser()
    parser.feed(content)
    return parser.get_text()

def check_jinyongci(content):
    """Return the banned words (禁用词) that appear in the tag-stripped article body."""
    fc = filter_content(content)
    words = get_words()
    invalids = []
    for w in words:
        if fc.find(w) != -1:
            invalids.append(w)
    return invalids

def save_jinyongci(clt, title, invalids):
    """Append the matched banned words for one article to <clt>/invalid.txt."""
    if len(invalids) == 0:
        return
    file = os.path.join(clt, "invalid.txt")
    with open(file, "a+", encoding="utf-8") as f:
        f.write("Title: " + title)
        f.write("\r\nSensitive words: ")
        for iv in invalids:
            f.write(iv)
            f.write("、")
        f.write("\r\n\r\n")

if __name__ == "__main__":
    clt = "xxx"  # name of the MongoDB collection, also used as the output directory
    if not os.path.exists(clt):
        os.mkdir(clt)

    articles = get_articles(clt)
    print(clt + ": " + str(len(articles)) + " articles in total")

    for i in range(0, len(articles)):
        print("Processing article " + str(i))
        title = articles[i]["title"]
        thumb_url = articles[i]["thumb_url"]
        content = articles[i]["content"]

        # Download the cover image; strip characters that are illegal in file names
        fname = str(i) + "_" + title.replace("|", "").replace("<", "").replace(">", "")
        download(clt, fname + get_suffix(thumb_url), thumb_url)

        # Find and download the images embedded in the article body
        imgs = find_images(content)
        index = 0
        for img in imgs:
            download(clt, fname + "_" + str(index) + get_suffix(img), img)
            index = index + 1

        # Check the article body for banned words and log any hits
        invalids = check_jinyongci(content)
        print(invalids, '----', title)
        save_jinyongci(clt, title, invalids)
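
The script above assumes the material-library data has already been pulled from the WeChat Official Account API and cached in MongoDB, one document per collection, with the raw result stored under an "items" key (that is what get_articles reads back). The original post does not show that step; the following is a minimal, hypothetical sketch of what it could look like, using the documented material/batchget_material endpoint. The APP_ID and APP_SECRET placeholders, the collection name "xxx", and the single-document "items" layout are assumptions chosen to match the code above.

# -*- coding:utf-8 -*-
# Hypothetical helper: pull the image-text (news) materials from the WeChat API
# and cache them in MongoDB in the shape expected by get_articles() above.
import requests
from pymongo import MongoClient

APP_ID = "your-appid"          # assumption: the Official Account's AppID
APP_SECRET = "your-appsecret"  # assumption: the Official Account's AppSecret

def fetch_news_materials():
    # Exchange AppID/AppSecret for an access_token
    token_resp = requests.get(
        "https://api.weixin.qq.com/cgi-bin/token",
        params={"grant_type": "client_credential",
                "appid": APP_ID, "secret": APP_SECRET},
    ).json()
    access_token = token_resp["access_token"]

    # Page through the news materials, 20 per request
    items = []
    offset = 0
    while True:
        resp = requests.post(
            "https://api.weixin.qq.com/cgi-bin/material/batchget_material",
            params={"access_token": access_token},
            json={"type": "news", "offset": offset, "count": 20},
        ).json()
        batch = resp.get("item", [])
        if not batch:
            break
        items.extend(batch)
        offset += len(batch)
    return items

if __name__ == "__main__":
    client = MongoClient("ip", 27017)  # same MongoDB host as above
    db = client["gongzhonghao"]
    # One document holding everything under an "items" key,
    # which is the layout get_articles() reads back.
    db["xxx"].insert_one({"items": fetch_news_materials()})

With such a cache in place, running the main script with clt set to the same collection name downloads the covers and inline images into a directory of that name and writes any banned-word hits to invalid.txt.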
