Can two processes access an in-memory (:memory:) SQLite database concurrently?

梦毁少年i 2020-12-24 14:42

Is it possible to access a database in one process that was created in another? I tried:

IDLE #1

import sqlite3
conn = sqlite3.connect(':


        
2 Answers
  • 2020-12-24 14:59

    No, they can never access the same in-memory database from different processes. Instead, a new connection to :memory: always creates a new database.

    From the SQLite documentation:

    Every :memory: database is distinct from every other. So, opening two database connections each with the filename ":memory:" will create two independent in-memory databases.

    This is different from an on-disk database, where creating multiple connections with the same connection string means you are connecting to one database.

    Within one process it is possible to share an in-memory database if you use the file::memory:?cache=shared URI:

    conn = sqlite3.connect('file::memory:?cache=shared', uri=True)
    

    but this is still not accessible from another process.
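
    For illustration, here is a minimal sketch (using only the standard sqlite3 module, as in the question) of two connections inside the same process sharing one in-memory database via that URI; the database disappears once the last connection is closed:

    import sqlite3

    # Both connections point at the same shared-cache in-memory database
    # because they use the identical 'file::memory:?cache=shared' URI.
    conn1 = sqlite3.connect('file::memory:?cache=shared', uri=True)
    conn2 = sqlite3.connect('file::memory:?cache=shared', uri=True)

    conn1.execute('CREATE TABLE tab (id INTEGER)')
    conn1.execute('INSERT INTO tab VALUES (1), (2)')
    conn1.commit()

    # The second connection sees the committed rows.
    print(conn2.execute('SELECT count(*) FROM tab').fetchall())  # [(2,)]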

  • 2020-12-24 15:04

    Of course I agree with @Martijn because the documentation says so, but if you are focused on Unix-like systems, you can make use of shared memory:

    If you create a file in the /dev/shm folder, it is mapped directly to RAM, so you can use it to access the same database from two different processes.

    #!/bin/bash
    rm -f /dev/shm/test.db
    time bash -c $'
    FILE=/dev/shm/test.db
    sqlite3 $FILE "create table if not exists tab(id int);"
    sqlite3 $FILE "insert into tab values (1),(2)"
    for i in 1 2 3 4; do sqlite3 $FILE "INSERT INTO tab (id) select (a.id+b.id+c.id)*abs(random()%1e7) from tab a, tab b, tab c limit 5e5"; done; # inserts at most 2,000,000 records into the db
    sqlite3 $FILE "select count(*) from tab;"'
    

    It takes this much time:

    FILE=/dev/shm/test.db
    real    0m0.927s
    user    0m0.834s
    sys 0m0.092s
    

    for about 2 million records. Doing the same on an HDD (the same script, but with FILE=/tmp/test.db) takes:

    FILE=/tmp/test.db
    real    0m2.309s
    user    0m0.871s
    sys 0m0.138s
    

    So basically this allows you to access the same database from different processes (without losing r/w speed):

    Here is a demo demonstrating what I am talking about:

    xterm -hold -e 'sqlite3 /dev/shm/testbin "create table tab(id int); insert into tab values (42),(1337);"' &
    xterm -hold -e 'sqlite3 /dev/shm/testbin "insert into tab values (43),(1338); select * from tab;"' &
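
    For comparison, a hypothetical Python equivalent of this demo (assuming Linux, where /dev/shm is RAM-backed; the path /dev/shm/test.db is just an example): run the same script in two separate processes and both will see the same data, with SQLite's normal file locking handling concurrency:

    import sqlite3

    DB_PATH = '/dev/shm/test.db'  # example path; any file under /dev/shm lives in RAM

    conn = sqlite3.connect(DB_PATH)
    conn.execute('CREATE TABLE IF NOT EXISTS tab (id INTEGER)')
    conn.execute('INSERT INTO tab VALUES (42), (1337)')
    conn.commit()

    # A second process opening the same path reads the same table.
    print(conn.execute('SELECT count(*) FROM tab').fetchall())
    conn.close()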
    