I am doing a lot of analysis with the tm package. One of my biggest problems is related to stemming and stemming-like transformations.
Let's say I have several accounting-related terms (I am aware of the spelling issues).
After stemming we have:
accounts -> account
account -> account
accounting -> account
acounting -> acount
acount -> acount
acounts -> acount
accounnt -> accounnt
Result: 3 terms (account, acount, accounnt) where I would have liked 1 (account), as all of these relate to the same term.
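For reference, the mapping above can be reproduced with tm's stemDocument, which stems a character vector element-wise (it relies on the SnowballC package being installed):
library(tm)
terms <- c("accounts", "account", "accounting", "acounting",
           "acount", "acounts", "accounnt")
stemDocument(terms)
## stems: account account account acount acount acount accounnt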
1) Correcting the spelling is a possibility, but I have never attempted that in R. Is that even possible?
2) The other option is to make a reference list, i.e. account = (accounts, account, accounting, acounting, acount, acounts, accounnt), and then replace all occurrences with the master term. How would I do this in R?
Once again, any help/suggestions would be greatly appreciated.
We could set up a list of synonyms and replace those values. For example:
synonyms <- list(
    list(word = "account", syns = c("acount", "accounnt"))
)
This says we want to replace "acount" and "accounnt" with "account" (I'm assuming we're doing this after stemming). Now let's create test data:
raw<-c("accounts", "account", "accounting", "acounting",
"acount", "acounts", "accounnt")
And now let's define a transformation function that will replace the words in our list with the primary synonym.
library(tm)

replaceSynonyms <- content_transformer(function(x, syn = NULL) {
    ## for each synonym group, replace every variant (as a whole word)
    ## with its primary term
    Reduce(function(a, b) {
        gsub(paste0("\\b(", paste(b$syns, collapse = "|"), ")\\b"), b$word, a)
    }, syn, x)
})
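For the single entry above, the Reduce step boils down to one gsub call; conceptually it runs something like this (spelled out just to illustrate the pattern being built):
gsub("\\b(acount|accounnt)\\b", "account", "acount accounnt account")
## [1] "account account account"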
Here we use the content_transformer function to define a custom transformation, and basically we just do a gsub to replace each of the words. We can then use this on a corpus:
tm <- Corpus(VectorSource(raw))
tm <- tm_map(tm, stemDocument)
tm <- tm_map(tm, replaceSynonyms, synonyms)
inspect(tm)
and we can see all these values are transformed into "account" as desired. To add other synonyms, just add additional lists to the main synonyms list. Each sub-list should have the names "word" and "syns".
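For instance, to also map misspellings of "balance" (a made-up second entry, purely for illustration), the list would grow to:
synonyms <- list(
    list(word = "account", syns = c("acount", "accounnt")),
    list(word = "balance", syns = c("balanse", "ballance"))
)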
Mr. Flick has answered question #2. I will approach question #1.
Here is an approach that uses a binary search of a known-word database (DICTIONARY from qdapDictionaries). A binary lookup is slow for sure, but we can speed things up by making some assumptions about the replacements (like bounding the difference in the number of characters). So here's the basic idea:
1. Turn the Corpus into a unique bag of words using qdap's bag_o_words
2. Look those words up in a dictionary (qdapDictionaries' DICTIONARY data set) to find the words not recognized, using match
3. These misses from step #2 will be what we look up
4. Determine the number of characters for the words in the dictionary, to eliminate gross differences later, using nchar
5. Run each element of misses through a loop (sapply) and do the following:
   a. stem each element from misses using tm::stemDocument
   b. determine the number of characters and eliminate dictionary words not within that range using nchar
   c. use agrep with a max.distance to eliminate more words from the dictionary (a short agrep illustration follows this list)
   d. use a binary lookup (that reverse engineers agrep) to determine the word from the dictionary that is closest to the missed element [note this is a non-exported function from qdap called qdap:::Ldist]
6. The result is a named vector that we can use for gsubbing
7. Use tm_map with a custom tm-flavored gsub function to replace words
8. Do the stemming with tm_map and stemDocument
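To make step 5c concrete, here is a toy agrep call (candidate words invented for illustration); a fractional max.distance is interpreted relative to the pattern length, so 0.2 here allows roughly one edit:
agrep("acount", c("account", "abound", "amount"), max.distance = 0.2, value = TRUE)
## [1] "account" "amount"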
Here's the code. I made a fake Corpus using the words you provided and some random words to demonstrate how to do this from start to end. You can play with range and max.distance, which are supplied to sapply. The looser you are with these, the slower the search will be; but tightening them too much will make it more likely to make a mistake. This really isn't an answer for spelling correction in a general sense, but it works here because you were stemming anyway. There's an Aspell package, but I have never used it before.
terms <- c("accounts", "account", "accounting", "acounting", "acount", "acounts", "accounnt")
library(tm); library(qdap)
fake_text <- unlist(lapply(terms, function(x) {
paste(sample(c(x, sample(DICTIONARY[[1]], sample(1:5, 1)))), collapse=" ")
}))
fake_text
myCorp <- Corpus(VectorSource(fake_text))
terms2 <- unique(bag_o_words(as.data.frame(myCorp)[[2]]))
misses <- terms2[is.na(match(terms2, DICTIONARY[[1]]))]
chars <- nchar(DICTIONARY[[1]])
replacements <- sapply(misses, function(x, range = 3, max.distance = .2) {
x <- stemDocument(x)
wchar <- nchar(x)
dict <- DICTIONARY[[1]][chars >= (wchar - range) & chars <= (wchar + range)]
dict <- dict[agrep(x, dict, max.distance=max.distance)]
names(which.min(sapply(dict, qdap:::Ldist, x)))
})
replacer <- content_transformer(function(x) {
mgsub(names(replacements), replacements, x, ignore.case = FALSE, fixed = FALSE)
})
myCorp <- tm_map(myCorp, replacer)
inspect(myCorp <- tm_map(myCorp, stemDocument))
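As a sanity check on what replacer does, qdap's mgsub simply applies each pattern/replacement pair in turn; a standalone toy call (inputs invented for illustration):
mgsub(c("acounting", "accounnt"), c("account", "account"),
      "the acounting firm lost an accounnt")
## [1] "the account firm lost an account"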
This question inspired me to attempt to write a spell checker for the qdap package. There's an interactive version that may be useful here. It's available in qdap >= version 2.1.1, which means you'll need the dev version at the moment. Here are the steps to install:
library(devtools)
install_github("qdapDictionaries", "trinker")
install_github("qdap", "trinker")
library(tm); library(qdap)
## Recreate a Corpus like you describe.
terms <- c("accounts", "account", "accounting", "acounting", "acount", "acounts", "accounnt")
fake_text <- unlist(lapply(terms, function(x) {
    paste(sample(c(x, sample(DICTIONARY[[1]], sample(1:5, 1)))), collapse = " ")
}))
fake_text
inspect(myCorp <- Corpus(VectorSource(fake_text)))
## The interactive spell checker (check_spelling_interactive)
m <- check_spelling_interactive(as.data.frame(myCorp)[[2]])
preprocessed(m)
inspect(myCorp <- tm_map(myCorp, correct(m)))
The correct function merely grabs a closure function from the output of check_spelling_interactive and allows you to then apply the "correcting" to any new text string(s).
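In other words, correct is a function factory. A generic sketch of the same idiom (make_corrector is a hypothetical helper, not qdap's actual implementation):
make_corrector <- function(map) {
    ## return a closure that applies the stored corrections to new text
    function(x) qdap::mgsub(names(map), unname(map), x)
}
fix_text <- make_corrector(c(acount = "account", accounnt = "account"))
fix_text("the acount and the accounnt")
## [1] "the account and the account"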
Source: https://stackoverflow.com/questions/24443388/stemming-with-r-text-analysis