I have many articles in a database (with title and text). I'm looking for an algorithm to find the X most similar articles, something like Stack Overflow's "Related Questions".
One common algorithm used is the self-organizing map (SOM). It is a type of neural network that will automatically categorize your articles; you can then find the location of a given article in the map, and all articles near it are related. The important part of the algorithm is how you vector-quantize your input, and there are several ways to do that with text: you can hash your document/title, you can count words and use the counts as an n-dimensional vector, etc. Hope that helps, although I may have opened up a Pandora's box for you of an endless journey in AI.
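To make that concrete, here is a minimal sketch of both steps: vector-quantizing text by hashing word counts into a fixed number of buckets, and one training pass of a small SOM. The grid size, learning rate, neighborhood width, and names like HashVector and TrainSom are illustrative choices, not part of the answer above.

#include <cmath>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Vector-quantize a tokenized article: hash each word into one of D
// buckets and count occurrences.
std::vector<double> HashVector(const std::vector<std::string>& words, size_t D) {
    std::vector<double> v(D, 0.0);
    for (const auto& w : words) {
        v[std::hash<std::string>{}(w) % D] += 1.0;
    }
    return v;
}

// One training pass of a (side x side) SOM over the article vectors.
// nodes holds one weight vector per grid cell, laid out row-major.
void TrainSom(std::vector<std::vector<double>>& nodes, size_t side,
              const std::vector<std::vector<double>>& articles,
              double learnRate, double sigma) {
    for (const auto& x : articles) {
        // Find the best-matching unit (the node closest to the article).
        size_t best = 0;
        double bestDist = 1e300;
        for (size_t i = 0; i < nodes.size(); i++) {
            double d = 0;
            for (size_t k = 0; k < x.size(); k++) {
                const double diff = nodes[i][k] - x[k];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        // Pull the BMU and its grid neighbors toward the article,
        // weighted by a Gaussian over grid distance.
        const double br = double(best / side), bc = double(best % side);
        for (size_t i = 0; i < nodes.size(); i++) {
            const double dr = double(i / side) - br, dc = double(i % side) - bc;
            const double h = std::exp(-(dr * dr + dc * dc) / (2 * sigma * sigma));
            for (size_t k = 0; k < x.size(); k++) {
                nodes[i][k] += learnRate * h * (x[k] - nodes[i][k]);
            }
        }
    }
}

After a few passes (typically with learnRate and sigma shrinking each pass, starting from randomly initialized nodes), each article maps to its closest node, and the related articles are the ones mapped to the same or neighboring cells.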
Given a sample text, this program lists the repository texts sorted by similarity: a simple implementation of bag of words in C++. The algorithm is linear in the total length of the sample text and the repository texts, and the program is multi-threaded so repository texts are processed in parallel.
Here is the core algorithm:
#include <cctype>
#include <cmath>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <unordered_set>

// Bag-of-words statistics for a single text.
class Statistics {
    std::unordered_map<std::string, int64_t> _counts; // word -> occurrences
    int64_t _totWords = 0; // total words seen, so frequencies can be computed
    void process(std::string& token);
public:
    explicit Statistics(const std::string& text);
    double Dist(const Statistics& fellow) const;
    bool IsEmpty() const { return _totWords == 0; }
};
namespace {
    // Punctuation characters stripped from the end of a token.
    const std::string gPunctStr = ".,;:!?";
    const std::unordered_set<char> gPunctSet(gPunctStr.begin(), gPunctStr.end());
}
// Split the text on whitespace, lowercasing characters as they are read.
Statistics::Statistics(const std::string& text) {
    std::string lastToken;
    for (size_t i = 0; i < text.size(); i++) {
        const int ch = static_cast<uint8_t>(text[i]);
        if (!isspace(ch)) {
            lastToken.push_back(static_cast<char>(tolower(ch)));
            continue;
        }
        process(lastToken);
    }
    process(lastToken); // flush the final token
}
// Strip at most one trailing punctuation character, then count the token
// and reset it for the next word.
void Statistics::process(std::string& token) {
    if (!token.empty() && gPunctSet.count(token.back()) != 0) {
        token.pop_back();
    }
    if (!token.empty()) {
        auto it = _counts.find(token);
        if (it == _counts.end()) {
            _counts.emplace(token, 1);
        } else {
            it->second++;
        }
        _totWords++;
        token.clear();
    }
}
// Euclidean distance between the two word-frequency vectors.
double Statistics::Dist(const Statistics& fellow) const {
    double sum = 0;
    for (const auto& wordInfo : _counts) {
        const std::string& wordText = wordInfo.first;
        const double freq = double(wordInfo.second) / _totWords;
        const auto it = fellow._counts.find(wordText);
        const double fellowFreq =
            (it == fellow._counts.end()) ? 0.0 : double(it->second) / fellow._totWords;
        const double d = freq - fellowFreq;
        sum += d * d;
    }
    // Also count words that occur only in the other text, otherwise the
    // distance would depend on which text is `this`.
    for (const auto& wordInfo : fellow._counts) {
        if (_counts.find(wordInfo.first) == _counts.end()) {
            const double fellowFreq = double(wordInfo.second) / fellow._totWords;
            sum += fellowFreq * fellowFreq;
        }
    }
    return std::sqrt(sum);
}
You can use a SQL Server full-text index to get the smart comparison. I believe SO uses an AJAX call that runs a query to return the similar questions.
What technologies are you using?
Edit distance isn't a likely candidate, as it would be spelling/word-order dependent, and much more computationally expensive than Will is leading you to believe, considering the size and number of the documents you'd actually be interested in searching.
Something like Lucene is the way to go. You index all your documents, and then when you want to find documents similar to a given document, you turn your given document into a query, and search the index. Internally Lucene will be using tf-idf and an inverted index to make the whole process take an amount of time proportional to the number of documents that could possibly match, not the total number of documents in the collection.
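To make the tf-idf idea concrete, here is a rough sketch of the weighting scheme in C++ (to match the code elsewhere in this thread). This is not Lucene's actual API or exact scoring formula, just the basic tf * log(N / df) weighting it builds on.

#include <cmath>
#include <string>
#include <unordered_map>
#include <vector>

// docs: each document as a word -> count map.
// Returns per-document tf-idf weights: tf * log(N / df).
std::vector<std::unordered_map<std::string, double>>
TfIdf(const std::vector<std::unordered_map<std::string, int>>& docs) {
    std::unordered_map<std::string, int> df; // how many docs contain each word
    for (const auto& doc : docs) {
        for (const auto& wc : doc) df[wc.first]++;
    }
    const double N = double(docs.size());
    std::vector<std::unordered_map<std::string, double>> weights(docs.size());
    for (size_t i = 0; i < docs.size(); i++) {
        for (const auto& wc : docs[i]) {
            weights[i][wc.first] = wc.second * std::log(N / df[wc.first]);
        }
    }
    return weights;
}

Similarity between the query document and an indexed one is then typically the cosine of the angle between their weight vectors; the inverted index (word -> list of documents containing it) is what lets the search only touch documents sharing at least one term with the query.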
The tutorial at this link sounds like it may be what you need. It is easy to follow and works very well.
His algorithm rewards both common substrings and a common ordering of those substrings and so should pick out similar titles quite nicely.
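As a sketch of that general idea (a guess at the technique, not necessarily the tutorial's exact algorithm), here is a similarity measure over adjacent character pairs; shared pairs reward both common substrings and their ordering.

#include <algorithm>
#include <string>
#include <vector>

// Similarity in [0, 1]: 2 * |pairs in common| / (|pairs of a| + |pairs of b|).
double PairSimilarity(const std::string& a, const std::string& b) {
    auto pairsOf = [](const std::string& s) {
        std::vector<std::string> pairs;
        for (size_t i = 0; i + 1 < s.size(); i++) {
            pairs.push_back(s.substr(i, 2));
        }
        std::sort(pairs.begin(), pairs.end());
        return pairs;
    };
    const std::vector<std::string> pa = pairsOf(a), pb = pairsOf(b);
    if (pa.empty() && pb.empty()) return 1.0;
    // Merge-count the pairs the two sorted lists share (with multiplicity).
    size_t common = 0, i = 0, j = 0;
    while (i < pa.size() && j < pb.size()) {
        if (pa[i] == pb[j]) { common++; i++; j++; }
        else if (pa[i] < pb[j]) i++;
        else j++;
    }
    return 2.0 * double(common) / double(pa.size() + pb.size());
}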
The simplest and fastest way to compare similarity among abstracts is probably to use the set concept. First convert the abstract texts into sets of words, then check how much each pair of sets overlaps. Python's set type comes in very handy for this task. You would be surprised to see how well this method compares to the "similar/related papers" options provided by GScholar, ADS, WOS or Scopus.
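The answer suggests Python's sets; here is the same idea sketched in C++ to match the rest of the thread. Normalizing the overlap by the union size (a Jaccard-style measure) is my choice of normalization, since the answer doesn't pin one down.

#include <sstream>
#include <string>
#include <unordered_set>

// Overlap of the two word sets, normalized by the size of their union.
double SetOverlap(const std::string& textA, const std::string& textB) {
    auto wordsOf = [](const std::string& text) {
        std::unordered_set<std::string> words;
        std::istringstream in(text);
        std::string w;
        while (in >> w) words.insert(w);
        return words;
    };
    const auto a = wordsOf(textA), b = wordsOf(textB);
    size_t common = 0;
    for (const auto& w : a) {
        if (b.count(w) != 0) common++;
    }
    const size_t unionSize = a.size() + b.size() - common;
    return unionSize == 0 ? 1.0 : double(common) / double(unionSize);
}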