There was some discussion of Grok. Grok is now maintained as OpenCCG, and it will be reimplemented in OpenNLP as well.
You can find OpenCCG at http://openccg.sourceforge.net/. I would also suggest the Curran and Clark CCG parser available here: http://svn.ask.it.usyd.edu.au/trac/candc/wiki
Basically, for paraphrase, you'll need to write something that first parses the sentences of a blog post, extracts their semantic representation, searches the space of word choices that compositionally produce the same meaning, and then picks a realization that doesn't match the original sentence. This will take a long time, and the output might not make much sense. Keep in mind that to do this properly you'll also need near-perfect anaphora resolution and the ability to pick up discourse-level inferences. A rough skeleton of that pipeline is sketched below.
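Just to make the shape of that pipeline concrete, here is a very rough Python skeleton. The two helper functions are placeholders for whatever CCG parser and surface realizer you end up wrapping (OpenCCG, the C&C tools, etc.); they are not real APIs, just stand-ins to show where each stage plugs in.

```python
# A rough skeleton of the "full" paraphrase pipeline. The two stubs below are
# hypothetical: swap in real calls to your parser and realizer of choice.
from typing import List

def extract_semantics(sentence: str) -> str:
    """Placeholder: parse the sentence with a CCG parser and return its
    logical form (e.g. the hybrid-logic representation OpenCCG uses)."""
    raise NotImplementedError("wrap your parser here")

def realize(logical_form: str) -> List[str]:
    """Placeholder: run a surface realizer over the logical form and return
    every sentence the grammar can generate for it."""
    raise NotImplementedError("wrap your realizer here")

def paraphrase(sentence: str) -> List[str]:
    """Return realizations with the same semantics but a different surface form."""
    logical_form = extract_semantics(sentence)
    candidates = realize(logical_form)
    return [c for c in candidates
            if c.strip().lower() != sentence.strip().lower()]
```

Even with both stages filled in, you'd still have to handle anaphora and discourse-level inference before the logical forms for a whole post are trustworthy.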
If you're just looking to make blog posts that don't have machine-identifiable duplicate content, you can always just use topic and focus transformations plus WordNet synonym substitution. There have certainly been AdWords-monetized sites that have done exactly this before.
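For the WordNet route, a minimal sketch using NLTK (my choice of library, not something the original specifies) might look like the following. It does no POS tagging or word-sense disambiguation, so expect some odd-sounding output:

```python
# A minimal sketch of WordNet synonym substitution, assuming NLTK is installed
# and the WordNet corpus has been downloaded via nltk.download('wordnet').
import random

from nltk.corpus import wordnet

def synonym_swap(sentence):
    """Replace each word with a random WordNet synonym when one exists.
    No POS filtering or sense disambiguation, so results can read oddly."""
    out = []
    for word in sentence.split():
        # Collect every lemma from every synset the surface form appears in.
        lemmas = {
            lemma.name().replace('_', ' ')
            for synset in wordnet.synsets(word)
            for lemma in synset.lemmas()
            if lemma.name().lower() != word.lower()
        }
        out.append(random.choice(sorted(lemmas)) if lemmas else word)
    return ' '.join(out)

print(synonym_swap("The site makes money from duplicate blog posts"))
```

That's roughly the level of effort those duplicate-content sites put in; anything more faithful to the original meaning pushes you back toward the full parsing-and-realization pipeline above.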