I ran a simple performance test on my local machine. Here is the Python script:
import redis
import sqlite3
import time

data = {}
N = 100000
for i in range(N):
    data['key:%d' % i] = 'value:%d' % i  # fill data with N key/value pairs
The current answers explain why Redis loses this particular benchmark (the network overhead incurred by every command sent to the server), but none of them refactor the benchmark code to let Redis perform better.
The problem with your code lies here:
for key in data:
    r.set(key, data[key])
You incur 100,000 round-trips to the Redis server, which adds up to significant I/O overhead.
This is entirely unnecessary, because Redis provides batched variants of certain commands; for SET there is MSET, so the loop above can be replaced with:
r.mset(data)
That takes you from 100,000 server round-trips down to one. You simply pass the Python dictionary as a single argument, and Redis applies the whole update atomically on the server.
This will make all the difference in your particular benchmark, you should see Redis perform at least on par with SQLite.