What is generally faster, grepping through files or running a SQL LIKE %x% query through blobs?

Submitted by ♀尐吖头ヾ on 2020-01-01 08:58:04

Question


Say I'm designing a tool that would save code snippets either in a PostgreSQL/MySQL database or on the file system. I want to search through these snippets. Using a search engine like Sphinx doesn't seem practical because we need exact text matches of code when searching code.

grep and ack have always worked great, but storing things in a database makes a large collection more manageable in certain ways. I'm wondering how the performance of running grep recursively over a tree of directories compares to running a query like SQL's LIKE or MySQL's REGEXP function over an equivalent number of records with TEXT blobs.


Answer 1:


If you have 1M files to grep through, you will (as best I'm aware) go through each one with a regular expression.

For all intents and purposes, you're going to end up doing the same thing over table rows if you mass-query them using a LIKE operator or a regular expression.
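To make the "same thing" concrete: both approaches are linear scans over every document, one in the filesystem and one in the database. Here is a minimal sketch, using SQLite as a stand-in for MySQL/Postgres and a temp directory of files; the snippet contents and the "needle" pattern are purely illustrative.

```python
import os, re, sqlite3, tempfile

# Hypothetical data: a few code "snippets", stored both as files on
# disk and as TEXT rows in a database.
snippets = ["def foo(): pass", "SELECT * FROM users", "grep -r needle ."]

tmpdir = tempfile.mkdtemp()
for i, text in enumerate(snippets):
    with open(os.path.join(tmpdir, f"snippet{i}.txt"), "w") as f:
        f.write(text)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?)", [(s,) for s in snippets])

# grep-style: open every file and test the pattern against its contents
pattern = re.compile("needle")
file_hits = []
for name in sorted(os.listdir(tmpdir)):
    with open(os.path.join(tmpdir, name)) as f:
        if pattern.search(f.read()):
            file_hits.append(name)

# LIKE-style: the database likewise scans every row, since no ordinary
# index can help a leading-wildcard '%needle%' predicate
rows = conn.execute("SELECT body FROM docs WHERE body LIKE '%needle%'").fetchall()

print(len(file_hits), len(rows))  # → 1 1 (the same single match, found both ways)
```

Either way, every byte of every snippet gets inspected; the difference is mostly in I/O patterns and per-row overhead, not in algorithmic complexity.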

My own experience with grep is that I seldom look for something that doesn't contain at least one full word, however, so you might be able to take advantage of a database to reduce the set in which you're searching.

MySQL has native full-text search features, but I'd recommend against them, because using them means you're not using InnoDB.

You can read about Postgres's full-text search here:

http://www.postgresql.org/docs/current/static/textsearch.html

After creating an index on a tsvector column, you can then do your "grep" in two steps, one to immediately find rows that might vaguely qualify, followed by another on your true criteria:

select * from docs where tsvcol @@ :tsquery and (regexp at will);
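A minimal sketch of what that two-step setup might look like in Postgres (the `docs` table, `body` column, and search terms are illustrative; the generated-column syntax assumes Postgres 12 or later):

```sql
-- Illustrative schema: keep a tsvector derived from the snippet text.
ALTER TABLE docs ADD COLUMN tsvcol tsvector
    GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;

CREATE INDEX docs_tsv_idx ON docs USING GIN (tsvcol);

-- Step 1: the GIN index quickly narrows to rows containing the word.
-- Step 2: the regex applies the exact-match criteria to those rows only.
SELECT * FROM docs
WHERE tsvcol @@ to_tsquery('english', 'needle')
  AND body ~ 'needle\(\)';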

That will be significantly faster than anything grep can do.




Answer 2:


I can't compare them, but both will take a long time. My guess is grep will be faster.

But MySQL supports full-text indexing and searching, which will be faster than grep -- I guess again.

Also, I don't understand what the problem with Sphinx or Lucene is. Anyway, here's a benchmark for MySQL, Sphinx and Lucene.




Answer 3:


The internet seems to think that grep uses Boyer-Moore, which makes query time depend additively (not multiplicatively) on query length. This isn't that relevant, though.

I think grep is near-optimal for a one-time search. But in your case you can do better, since you have repeated searches whose structure you can exploit (e.g. by indexing common substrings of your queries), as bpgergo hints at.

Also, I'm not sure the regular-expression engine you're thinking of using is optimized for non-special (plain-literal) queries; you could try it and see.

You may wish to keep all the files you're searching in memory, to avoid hard-disk slowdowns. This should work unless you're searching a staggering amount of text.




Answer 4:


If you want a full-text index on code, I would recommend Russ Cox's codesearch tools: https://code.google.com/p/codesearch/

This article explains how Google Code Search worked: http://swtch.com/~rsc/regexp/regexp4.html
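The core idea in that article is a trigram index: index every 3-character substring of each document, answer a query by intersecting the posting lists for the query's trigrams, then run the real match only on the surviving candidates. Here is a toy sketch of that idea (file names and contents are made up for illustration):

```python
import re
from collections import defaultdict

def trigrams(text):
    """Every 3-character substring of text, as a set."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

# Hypothetical corpus of code snippets.
docs = {
    "a.py": "def parse_config(path): ...",
    "b.py": "def parse_args(argv): ...",
    "c.py": "print('hello world')",
}

# Build the inverted index: trigram -> set of documents containing it.
index = defaultdict(set)
for name, body in docs.items():
    for tg in trigrams(body):
        index[tg].add(name)

def search(literal):
    # Candidate set = docs containing every trigram of the query literal.
    tgs = trigrams(literal)
    candidates = set(docs) if not tgs else set.intersection(*(index[tg] for tg in tgs))
    # Verify candidates with a real match, as codesearch does with the regex.
    return sorted(n for n in candidates if re.search(re.escape(literal), docs[n]))

print(search("parse_"))  # → ['a.py', 'b.py']; c.py is never even examined
```

The real codesearch tool extracts a trigram query from an arbitrary regular expression rather than a fixed literal, but the index-then-verify shape is the same.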



Source: https://stackoverflow.com/questions/5927241/what-is-generally-faster-grepping-through-files-or-running-a-sql-like-x-query
