I'm trying to do some analysis of Twitter data. I downloaded the tweets and created a corpus from the text of the tweets using the code below:
# Creating a Corpus
wim_co
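The corpus-creation code above is cut off; presumably it looked something like the following (a sketch, assuming the `tm` package and a character vector of tweet text called `tweets` — both names here are illustrative, not from the original):

```r
library(tm)

# Build a corpus where each tweet becomes one document
wim_corpus <- Corpus(VectorSource(tweets))
```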
As Albert suggested, converting the text encoding to UTF-8 solved the problem for me. But instead of removing every tweet that contains problematic characters, you can use the `sub` argument of `iconv` to strip only the "bad" characters from a tweet and keep the rest:
tweets <- iconv(rawTweets, to = "utf-8", sub="")
This no longer produces NAs, so no further filtering step is necessary.
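To illustrate the difference (a minimal sketch with a made-up string, not real tweet data):

```r
# A made-up "tweet" containing a byte (\xff) that is invalid in UTF-8
bad_tweet <- "Great match today \xff!"

# Without sub, iconv returns NA for a string it cannot fully convert
iconv(bad_tweet, from = "UTF-8", to = "UTF-8")
# -> NA

# With sub = "", only the offending byte is dropped; the rest of the text survives
iconv(bad_tweet, from = "UTF-8", to = "UTF-8", sub = "")
# -> "Great match today !"
```

So `sub = ""` trades losing whole tweets for losing just the unconvertible bytes, which is usually the better deal for text mining.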