I have a database of strings (arbitrary length) which holds more than one million items (potentially more).
I need to compare a user-provided string against the whole database and return the entries that are similar to it.
A very extensive explanation of relevant algorithms is in the book Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology by Dan Gusfield.
You didn't mention your database system, but for PostgreSQL you could use the following contrib module: pg_trgm - Trigram matching for PostgreSQL
The pg_trgm contrib module provides functions and index classes for determining the similarity of text based on trigram matching.
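To illustrate the idea behind trigram matching, here is a small Python sketch that approximates what pg_trgm does: each string is broken into overlapping three-character sequences (padded with spaces, as pg_trgm does), and similarity is the share of trigrams two strings have in common. The exact scores of pg_trgm may differ in edge cases; this is only meant to show the principle.

```python
def trigrams(s):
    # pg_trgm-style padding: two spaces before, one after, lowercased.
    s = "  " + s.lower() + " "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a, b):
    # Shared trigrams divided by the total number of distinct trigrams.
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)
```

A string compares as identical to itself (similarity 1.0), and a one-letter typo still scores well above an unrelated string, which is what makes trigram indexes useful for fuzzy lookup.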
Compute the SOUNDEX hash (which is built into many SQL database engines) and index by it.
SOUNDEX is a hash based on the sound of the words, so spelling errors of the same word are likely to have the same SOUNDEX hash.
Then find the SOUNDEX hash of the search string, and match on it.
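For readers whose database lacks a built-in SOUNDEX, here is a minimal Python version of the classic algorithm: keep the first letter, map the remaining consonants to digit classes, drop vowels, collapse adjacent duplicates, and pad to four characters. Real implementations differ slightly in edge cases (e.g. the H/W rule), so treat this as a sketch rather than a reference implementation.

```python
def soundex(name):
    # Digit classes for consonants; vowels and y map to nothing.
    codes = {c: str(d) for d, group in enumerate(
        ("bfpv", "cgjkqsxz", "dt", "l", "mn", "r"), start=1) for c in group}
    name = name.lower()
    out, prev = name[0].upper(), codes.get(name[0], "")
    for c in name[1:]:
        if c in "hw":          # h and w are transparent
            continue
        code = codes.get(c, "")
        if code and code != prev:  # collapse adjacent duplicates
            out += code
        prev = code
    return (out + "000")[:4]   # pad/truncate to 4 characters
```

With this, misspellings that sound alike hash to the same code: "Robert" and "Rupert" both become R163, and "Smith" and "Smyth" both become S530, so an exact-match index lookup finds the phonetic neighbors.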
Since the amount of data is large, I would compute the phonetic hash when inserting a record, store it in an indexed column, and then constrain my SELECT queries (via the WHERE clause) to matching values in that column.
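That insert-time pattern can be sketched end to end with SQLite (the table name, column names, and the compact soundex() helper below are illustrative, not from the original post):

```python
import sqlite3

def soundex(name):
    # Compact SOUNDEX variant; real engines differ in edge cases.
    codes = {c: str(d) for d, group in enumerate(
        ("bfpv", "cgjkqsxz", "dt", "l", "mn", "r"), start=1) for c in group}
    name = name.lower()
    out, prev = name[0].upper(), codes.get(name[0], "")
    for c in name[1:]:
        if c in "hw":
            continue
        code = codes.get(c, "")
        if code and code != prev:
            out += code
        prev = code
    return (out + "000")[:4]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, phonetic TEXT)")
conn.execute("CREATE INDEX idx_phonetic ON items (phonetic)")

# On insert, store the precomputed phonetic key alongside the string.
for name in ("Smith", "Smyth", "Jones"):
    conn.execute("INSERT INTO items VALUES (?, ?)", (name, soundex(name)))

# The search is an indexed equality lookup, not a scan of a million rows.
rows = conn.execute("SELECT name FROM items WHERE phonetic = ?",
                    (soundex("Smithe"),)).fetchall()
```

The key point is that the expensive phonetic computation happens once per insert, while each query reduces to a cheap indexed equality comparison.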
If your database supports it, you should use full-text search. Otherwise, you can use an external indexer like Lucene or one of its various ports.
https://en.wikipedia.org/wiki/Levenshtein_distance
The Levenshtein algorithm has been implemented in some DBMSs
(e.g. PostgreSQL's fuzzystrmatch module: http://www.postgresql.org/docs/9.1/static/fuzzystrmatch.html)
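For reference, the Levenshtein distance counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A standard dynamic-programming implementation, sketched here in Python with a rolling row to keep memory at O(min-row) rather than a full matrix:

```python
def levenshtein(a, b):
    # prev[j] holds the edit distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]
```

For example, "kitten" to "sitting" takes three edits. Note that computing this against a million rows per query is O(n) scans of the table, which is exactly why the indexed approaches above (trigrams, phonetic hashes) are attractive at this scale.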