I have a simple Python script for indexing a CSV file containing 1 million rows:
import csv
from pyes import *

reader = csv.reader(open('data.csv', 'rb'))
For future visitors: elasticsearch-py supports bulk operations in a single call through its helpers module. Note that the _op_type field in each doc determines which operation is performed; it defaults to index if not present.
E.g.
import elasticsearch as ES
import elasticsearch.helpers as ESH

es = ES.Elasticsearch()

# Each doc is a plain dict; an optional _op_type key per doc selects the
# operation (index, create, update, delete); index is the default.
docs = [{'field': 'value1'}, {'field': 'value2'}, {'field': 'value3'}]

# stats_only=True returns (succeeded, failed) counts only.
n_success, n_fail = ESH.bulk(es, docs, index='test_index', doc_type='test_doc',
                             stats_only=True)
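
Applied to the CSV from the question, the same helper also accepts a generator of actions, so the full million rows never need to sit in memory at once. A minimal sketch, assuming the CSV has a header row (for csv.DictReader) and an older elasticsearch-py release that still accepts _type in actions; the index and type names are just the ones from the example above:

import csv

import elasticsearch as ES
import elasticsearch.helpers as ESH

es = ES.Elasticsearch()

def actions():
    # Stream one action per CSV row instead of building a giant list.
    with open('data.csv') as f:
        for row in csv.DictReader(f):
            # No _op_type key, so each action is a plain index operation.
            yield {'_index': 'test_index', '_type': 'test_doc', '_source': row}

n_success, n_fail = ESH.bulk(es, actions(), stats_only=True)

helpers.bulk slices the stream into chunks (500 actions by default, tunable via chunk_size) and sends one bulk request per chunk, which is what makes this workable for large files.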