Fetching huge data from Oracle in Python


You should use cur.fetchmany() instead. It fetches rows in chunks whose size is controlled by cur.arraysize (e.g. 256).

Python code:

def chunks(cur):
    # fetchmany() returns up to cur.arraysize rows per call (e.g. 256)
    while True:
        rows = cur.fetchmany()
        if not rows:
            break
        yield rows

Then do your processing in a for loop:

for i, chunk in enumerate(chunks(cur)):
    for row in chunk:
        pass  # process your rows here

That is exactly how I do it in my TableHunter for Oracle.
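
For context, here is a minimal, self-contained sketch of the whole approach. It assumes cx_Oracle; the connection details, query, and output file are placeholders, not anything from the original answer:

import cx_Oracle

connection = cx_Oracle.connect('user', 'password', 'host:1521/service')  # placeholder credentials
cur = connection.cursor()
cur.arraysize = 256  # rows fetched per round trip by fetchmany()
cur.execute('SELECT id, payload FROM big_table')  # placeholder query

with open('out.txt', 'w') as f:
    for chunk in chunks(cur):
        for row in chunk:
            f.write(str(row[1]) + '\n')

cur.close()
connection.close()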

  • add print statements after each line
  • add a counter to your loop indicating progress after each N rows (see the sketch after this list)
  • look into a module like 'progressbar' for displaying a progress indicator
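
A minimal sketch of the counter idea, reusing the chunks() generator above (N and the message format are illustrative):

N = 10000  # report progress every N rows
count = 0
for chunk in chunks(cur):
    for row in chunk:
        count += 1
        if count % N == 0:
            print('Processed %d rows so far' % count)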

I think your code is asking the database for the data one row at a time, which might explain the slowness.

Try:

ctemp = connection.cursor()
ctemp.execute(sql)
results = ctemp.fetchall()  # fetches all remaining rows at once
for row in results:
    file.write(row[1])
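
One caveat: fetchall() pulls the entire result set into memory, which can be a problem for genuinely huge data. A memory-friendlier sketch relies on the fact that cx_Oracle cursors are iterable, fetching arraysize rows per round trip internally (the arraysize value here is an assumption to tune):

ctemp = connection.cursor()
ctemp.arraysize = 1000  # rows fetched per round trip while iterating
ctemp.execute(sql)
for row in ctemp:
    file.write(row[1])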