R text file and text mining…how to load data


Like @richiemorrisroe I found this poorly documented. Here's how I get my text in for use with the tm package and make the document-term matrix:

library(tm) # load the text mining library
setwd('F:/My Documents/My texts') # set R's working directory to near where my files are
a <- Corpus(DirSource("F:/My Documents/My texts"), readerControl = list(language = "lat")) # specifies the exact folder where my text file(s) are, for analysis with tm
summary(a) # check what went in
a <- tm_map(a, removeNumbers)
a <- tm_map(a, removePunctuation)
a <- tm_map(a, stripWhitespace)
a <- tm_map(a, content_transformer(tolower)) # newer tm versions need content_transformer() around base functions such as tolower
a <- tm_map(a, removeWords, stopwords("english")) # this stopword file is at C:\Users\[username]\Documents\R\win-library\2.13\tm\stopwords
a <- tm_map(a, stemDocument, language = "english")
adtm <- DocumentTermMatrix(a)
adtm <- removeSparseTerms(adtm, 0.75)

In this case you don't need to specify the exact file name. As long as it's the only file in the directory passed to DirSource() (line 3 of the code above), it will be picked up by the tm functions. I do it this way because I have not had any success specifying the file name in that call.
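If you do want to point tm at one specific file, DirSource() also accepts a pattern argument that filters the directory listing. A minimal sketch (the file name here is just a placeholder):

a <- Corpus(DirSource("F:/My Documents/My texts", pattern = "mytext.txt"), readerControl = list(language = "lat"))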

If anyone can suggest how to get text into the lda package I'd be most grateful. I haven't been able to work that out at all.
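For what it's worth, the lda package includes a lexicalize() helper that builds the documents/vocab pair its samplers expect, so a sketch along these lines might be a starting point (the file name and the sampler parameters are just placeholders):

library(lda)
lines <- readLines("F:/My Documents/My texts/mytext.txt") # placeholder file name
lex <- lexicalize(lines, lower = TRUE) # returns a list with $documents and $vocab
fit <- lda.collapsed.gibbs.sampler(lex$documents, K = 10, vocab = lex$vocab, num.iterations = 25, alpha = 0.1, eta = 0.1)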

Can't you just use the function readPlain from the same library? Or you could just use the more common scan function.

mydoc.txt <- scan("./mydoc.txt", what = "character")
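Note that with the default separator scan() splits on whitespace, so each element above is a single word. If you want one element per line instead (e.g. to feed into VectorSource), a sketch like this should do it:

mydoc <- scan("./mydoc.txt", what = "character", sep = "\n") # one element per line
mycorpus <- Corpus(VectorSource(mydoc)) # each line becomes its own document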

I actually found this quite tricky to begin with, so here's a more comprehensive explanation.

First, you need to set up a source for your text documents. I found that the easiest way (especially if you plan on adding more documents) is to create a directory source that will read all of your files in.

source <- DirSource("yourdirectoryname/") #input path for documents
YourCorpus <- Corpus(source, readerControl=list(reader=readPlain)) #load in documents

You can then apply the stemDocument function to your Corpus, as in the sketch below. HTH.
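A minimal sketch of that last step, assuming a recent tm version (which uses the SnowballC package for the actual stemming):

YourCorpus <- tm_map(YourCorpus, stemDocument, language = "english")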

I believe what you wanted to do was read an individual file into a corpus and then have it treat the different rows in the text file as different observations.

See if this gives you what you want:

text <- read.delim("this is a test for R load.txt", sep = "/t")
text_corpus <- Corpus(VectorSource(text), readerControl = list(language = "en"))

This is assuming that the file "this is a test for R load.txt" has only one column which has the text data.

Here the "text_corpus" is the object that you are looking for.

Hope this helps.

Here's my solution for a text file with one line per observation. The latest vignette on tm (Feb 2017) gives more detail.

text <- read.delim(textFileName, header = FALSE, sep = "\n", stringsAsFactors = FALSE) # one line of the file per row
colnames(text) <- c("MyCol")
docs <- text$MyCol # character vector with one element per line
a <- VCorpus(VectorSource(docs))
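From here the usual tm pipeline applies; for example, to get back to the document-term matrix from the original question:

adtm <- DocumentTermMatrix(a)
inspect(adtm)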