RStudio Web Scraping: Text Segmentation and Custom Word Cloud Design
Contents
1. Environment setup: loading dependencies
2. Scraping the data with rvest
3. Word segmentation and frequency counting with jiebaR
4. Visualizing the results with wordcloud2
===============================================================================================================================================
1. Environment setup: loading dependencies
2. Data scraping
3. Data cleaning
4. Word cloud design
install.packages("wordcloud2")
install.packages("rvest")
install.packages("jiebaR")

library(wordcloud2)
library(rvest)
library(jiebaR)

# Scrape the page
url <- 'http://www.gov.cn/premier/2017-03/16/content_5177940.htm'
# Read the page, specifying the encoding
web <- read_html(url, encoding = "utf-8")
# Extract the article text from the content div
position <- web %>% html_nodes("div.pages_content") %>% html_text()

# jieba segmentation and word-frequency counting
# Initialize the segmentation engine and load the stop-word list
engine_s <- worker(stop_word = "stopwords.txt")
# Segment the text
seg <- segment(position, engine_s)
# Count word frequencies
f <- freq(seg)
# Sort by frequency, descending
f <- f[order(f$freq, decreasing = TRUE), ]

# Visualize with the wordcloud2 package
# There are over 2,000 distinct words; for a cleaner display, keep only the top 150
f2 <- f[1:150, ]
# Set the shape to a five-pointed star
wordcloud2(f2, size = 0.8, shape = 'star')
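Beyond the built-in shapes such as 'star', wordcloud2 also exposes color and background options, and the package's letterCloud() function can lay the cloud out in the shape of a letter or word. A minimal sketch of further customization, assuming the same top-150 frequency table f2 produced above:

```r
library(wordcloud2)

# Light random colors on a dark background, still star-shaped
wordcloud2(f2, size = 0.8, shape = 'star',
           color = 'random-light', backgroundColor = 'black')

# Render the cloud in the shape of the letter "R"
letterCloud(f2, word = "R", size = 1)
```

Both calls open an HTML widget in the RStudio Viewer; letterCloud() may need a larger size value for small vocabularies to fill the letter outline.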