Suppose I need a database file consisting of a list of dictionaries:
file:
[
    {"name":"Joe","data":[1,2,3,4,5]},
    { ...
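For completeness, here is a minimal sketch of how such a file might be created in the first place (the file name db.json is my own assumption). The append trick below relies on the file already existing and ending with the closing ]:

import json

filepath = "db.json"  # hypothetical file name, for illustration only

# The append technique shown below assumes the file already exists and ends
# with "]", so the very first record is written as a one-element JSON array.
first_record = {"name": "Joe", "data": [1, 2, 3, 4, 5]}
with open(filepath, mode="w") as file:
    file.write("[{}]".format(json.dumps(first_record)))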
If the file must remain valid JSON at all times, it can be done as follows:
import json

with open(filepath, mode="r+") as file:
    file.seek(0, 2)              # jump to the end of the file
    position = file.tell() - 1   # position of the closing "]"
    file.seek(position)
    file.write(",{}]".format(json.dumps(dictionary)))
This opens the file for both reading and writing. Then it seeks to the end of the file (zero bytes from the end) to find the end position relative to the beginning of the file, and moves back one byte, which in such a JSON file is expected to be the closing ] character. Finally, it appends a new dictionary to the structure, overwriting the last character of the file and keeping the file valid JSON. It does not read the file into memory. Tested with both ANSI and UTF-8 encoded files in Python 3.4.3, with small and huge (5 GB) dummy files.
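Packaged as a small reusable helper (the append_record name is my own), the same approach might look like this, with a usage example assuming the file created above exists:

import json

def append_record(filepath, dictionary):
    # Hypothetical wrapper around the snippet above: overwrite the final
    # "]" with ",<new record>]" so the file stays a valid JSON array.
    with open(filepath, mode="r+") as file:
        file.seek(0, 2)              # go to the end of the file
        position = file.tell() - 1   # position of the closing "]"
        file.seek(position)
        file.write(",{}]".format(json.dumps(dictionary)))

append_record("db.json", {"name": "Ann", "data": [6, 7, 8]})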
A variation, if you also have the os module imported:
import os, json

with open(filepath, mode="r+") as file:
    file.seek(os.stat(filepath).st_size - 1)   # one byte before the end, i.e. the "]"
    file.write(",{}]".format(json.dumps(dictionary)))
It uses the file's size in bytes from os.stat to seek to the position one byte before the end (as in the previous example).
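One caveat, shown here only as a sketch of my own: if the file still holds an empty array [], writing a leading comma would produce invalid JSON, so a small guard can decide whether the comma is needed. This assumes the file ends exactly with ] and has no trailing newline:

import os, json

def append_record(filepath, dictionary):
    # Sketch of the os.stat variant with an extra guard (my own addition):
    # an empty array "[]" is exactly 2 bytes, so no leading comma is written
    # for the very first record. Assumes the file ends with "]", no newline.
    size = os.stat(filepath).st_size
    with open(filepath, mode="r+") as file:
        file.seek(size - 1)                      # one byte before the end
        separator = "," if size > 2 else ""
        file.write("{}{}]".format(separator, json.dumps(dictionary)))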