MySQL ResultSets are by default retrieved completely from the server before any work can be done. In cases of huge result sets this becomes unusable. I would like instead to
Did you try this version of fetchone? Or something different?
    row = cursor.fetchone()
    while row is not None:
        # process
        row = cursor.fetchone()
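For reference, here is that loop as a complete, runnable sketch. It uses the stdlib sqlite3 driver as a stand-in, since any DB-API 2.0 driver behaves the same way; only the connect call would be MySQL-specific.

```python
import sqlite3

# In-memory database standing in for the MySQL connection
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

cur.execute("SELECT n FROM t ORDER BY n")
total = 0
row = cur.fetchone()
while row is not None:      # fetchone() returns None when the set is exhausted
    total += row[0]         # "process" step: sum the values
    row = cur.fetchone()
print(total)                # -> 10
```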
Also, did you try fetchmany? Note that it returns a list, not a single row, and signals exhaustion with an empty list rather than None:

    rows = cursor.fetchmany(size=1)
    while rows:
        # process
        rows = cursor.fetchmany(size=1)
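Since fetchmany() returns a list (empty when the result set is exhausted), a truthiness test ends the loop cleanly. A runnable sketch with sqlite3 as a stand-in driver, using a batch size of 3 just for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(7)])

cur.execute("SELECT n FROM t")
batches = 0
rows = cur.fetchmany(size=3)
while rows:                 # empty list (not None) terminates the loop
    batches += 1            # process the whole batch here
    rows = cur.fetchmany(size=3)
print(batches)              # -> 3 (batches of 3, 3, and 1 rows)
```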
Not all drivers support these, so you may have gotten errors or found them too slow.
Edit.
When it hangs on execute, you're waiting for the database. That's not a row-by-row Python thing; that's a MySQL thing.
MySQL prefers to fetch all rows as part of its own cache management. In JDBC, this is turned off by setting the fetch size to Integer.MIN_VALUE (-2147483648).
The question is, what part of the Python DBAPI becomes the equivalent of the JDBC fetch_size?
I think it might be the arraysize attribute of the cursor. Try
    cursor.arraysize = -2**31
And see if that forces MySQL to stream the result set instead of caching it.
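A caveat: per DB-API 2.0 (PEP 249), arraysize is only the default batch size for fetchmany(), so a driver may reject a negative value outright. MySQLdb's actual mechanism for streaming is a different cursor class, MySQLdb.cursors.SSCursor, which uses mysql_use_result instead of buffering the whole set. Here is a sketch of what arraysize really controls, again using sqlite3 as a stand-in driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur.execute("SELECT n FROM t ORDER BY n")
cur.arraysize = 4           # default batch size for fetchmany()
batch = cur.fetchmany()     # no explicit size: arraysize is used
print(len(batch))           # -> 4
```

So tuning arraysize changes how many rows each fetchmany() call asks for, but whether the server streams or buffers the full result is a driver-level decision, not something the DB-API exposes directly.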