sqlite3: timeout doesn't seem to work #130971
Comments
It doesn't even work, when using
Also:
I tried a bit more to debug this:
In the OP you start both Python scripts using
You won't be able to see the initial
Where in the docs do we explicitly recommend against using
The reason we recommend
The only problematic setting is the current default:
You can just change the 2nd shell to also use an explicit
I do so, too, but only after the timeout of 10s has passed (and I don't get it, if I release the lock by ending the transaction in the shell). In Python I get it immediately.
I think the shell one is already at the absolute minimum... and well, the Python one more or less too. The point simply is, on the shell/
Okay, you don't actively recommend against it, but you do recommend for
I profoundly disagree with this.
I did, and I got the "database is locked" error.
Immediately or after 10s? The point/bug is... in Python, the timeout is ignored, in
Since we're directly calling:
cpython/Modules/_sqlite/connection.c, lines 258 to 268 in e7980ba
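As a sanity check (a minimal sketch, not from the original exchange, assuming the same locks.db file), the busy-handler timeout that actually reached SQLite can be read back via PRAGMA busy_timeout, which reports milliseconds:

import sqlite3

# timeout is passed to connect() in seconds; SQLite stores the busy timeout in milliseconds
con = sqlite3.connect("locks.db", timeout=5.0)
cur = con.cursor()
print(cur.execute("PRAGMA busy_timeout;").fetchone())  # expected: (5000,)
con.close()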
BTW:
I tried your repro again, with added

$ time ./python.exe busy.py 1st_a 5 0 10
con tmout: 5
rc=0
1st sleep: 0
[]
2nd sleep: 10
./python.exe busy.py 1st_a 5 0 10 0.04s user 0.01s system 0% cpu 10.070 total
$ time ./python.exe busy.py 2nd_a 500 0 0
con tmout: 500
rc=0
1st sleep: 0
[]
2nd sleep: 0
./python.exe busy.py 2nd_a 500 0 0  0.03s user 0.02s system 0% cpu 10.327 total

This harmonises with your expectations from the OP, quoting:
Perhaps the issue you are seeing is in your environment?
I should note that the two shell commands in #130971 (comment) were run in parallel in separate shells.
I've also extended my test prog:

#!/usr/bin/python3
import sqlite3
import sys
import time
#def d(s):
# print(f"CAL: {s}")
print(f"{time.time()} connect timeout: " + sys.argv[2])
print(f"{time.time()} connect...")
con = sqlite3.connect("locks.db", autocommit=False, timeout=float(sys.argv[2]))
print(f"{time.time()} connected")
#con.set_trace_callback(d)
cur = con.cursor()
print(f"{time.time()} 1st sleep: " + sys.argv[3])
time.sleep(int(sys.argv[3]))
print(f"{time.time()} 1st sleep over")
#x = cur.execute(f"PRAGMA busy_timeout;")
#print(x.fetchall())
print(f"{time.time()} CREATE...")
cur.execute("CREATE TABLE IF NOT EXISTS locks (name TEXT PRIMARY KEY ON CONFLICT ROLLBACK) STRICT")
print(f"{time.time()} INSERT...")
try:
    x = cur.execute("INSERT INTO locks (name) VALUES (?);", (sys.argv[1],))
except Exception as e:
    print(f"{time.time()} exception")
    raise e
print(x.fetchall())
print(f"{time.time()} 2nd sleep: " + sys.argv[4])
time.sleep(int(sys.argv[4]))
print(f"{time.time()} 2nd sleep over")
con.commit()
print(f"{time.time()} COMMITTED")
cur.close()
con.close()
print(f"{time.time()} closed" Now when I delete
and
As you can see, the timeout does indeed work here. Next, I don't delete the DB but keep the existing one, and I only change the first arguments of both invocations so that the constraint doesn't fail because the PKEY already exists:
and
Here the timeout is not obeyed, and the 2nd invocation fails immediately. Did you try with a fresh DB every time? Or also by re-using an existing one, and just changing the lock name (i.e.
Hmm, difficult to rule out. I mean, I use the normal Python and sqlite (lib/util) packages from Debian sid, and I don't have any sqlite-specific env vars or so set... nothing that really pops into my mind. Thanks,
The timeout works again, if one comments out the line:
and re-uses the already existing DB, then with
I've checked the same with the
waiting there and at the same time doing:
With the
So if Python does(?) a
Cheers,
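One way to read the behaviour above (in line with the #124510 explanation about upgrading read transactions to write transactions): with an existing DB, the CREATE TABLE IF NOT EXISTS only reads, so the DEFERRED transaction starts as a read and the later INSERT has to upgrade it to a write, which can fail immediately with "database is locked" regardless of the busy timeout. A hedged sketch of a workaround, not what was run in this thread (file and name are placeholders, and the locks table is assumed to exist): start the transaction as a write up front, so the busy timeout applies while acquiring the lock.

import sqlite3

# autocommit=True disables the module's implicit transaction handling,
# so BEGIN/COMMIT can be issued explicitly.
con = sqlite3.connect("locks.db", autocommit=True, timeout=500)
cur = con.cursor()
cur.execute("BEGIN IMMEDIATE")  # take the write lock now; the busy timeout applies here
cur.execute("INSERT INTO locks (name) VALUES (?)", ("some_name",))
cur.execute("COMMIT")
con.close()

With this pattern, a second process that finds the database locked should wait up to the configured timeout instead of aborting right away.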
I shall add for the record why I think it might be a bug in sqlite3: It's really the
I've forwarded the potential issue upstream. From my side we can probably close the bug here.
Bug report
Bug description:
Hey there.
(please read till the end... it does work with the sqlite3 utility, but not with Python's sqlite3)

With 3.13.2 on Debian unstable, it seems that sqlite3.connect()'s timeout doesn't work as it should.

I have a small demo program:
It uses autocommit=False, which uses DEFERRED transactions, which - AFAIU - means that the transaction only starts with the first access, i.e. the execute() that would initially create the table, and thus only after the first sleep.

The arguments are: first the name value (which is a primary key and must be unique), second the timeout= of the connection, third the sleep before the transaction starts, fourth the sleep right before the commit().

My assumption would be that while the DB is locked because of a write transaction, any concurrent write transaction waits timeout= before it aborts.

Now when I start the script twice (at the same time), first e.g. with:
second with:
I'd expect the second to wait for 500s and as the first sleeps only 10s, it should succeed.
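Going by the argument order described above (name, timeout, sleep before the transaction, sleep before the commit) and the runs shown earlier in the thread, the two invocations would look roughly like this (busy.py is a placeholder name, not necessarily the actual script):

./busy.py first 5 0 10     # holds the write lock while sleeping 10s before the commit
./busy.py second 500 0 0   # started in parallel, with a 500s timeout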
However it immediately aborts with:
I've seen #124510, but the explanation there was about the case of upgrading read transactions to write transactions, so what's written here doesn't apply, and in fact it seems to just work as I expect with the sqlite3 utility:

First invocation:

Second invocation (note that sqlite3's .timeout uses milliseconds):

Doing this, the second one will block, until either the 10s have passed, or I do a COMMIT; in the first.

Any ideas why that doesn't work in Python?
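For comparison, a rough sketch of such a sqlite3 CLI session (hypothetical, assuming the locks table already exists; .timeout takes milliseconds):

First shell, which takes the write lock and keeps the transaction open:

$ sqlite3 locks.db
sqlite> BEGIN;
sqlite> INSERT INTO locks (name) VALUES ('first');

Second shell, started in parallel; its INSERT blocks for up to 10s, or until the first session runs COMMIT;:

$ sqlite3 locks.db
sqlite> .timeout 10000
sqlite> INSERT INTO locks (name) VALUES ('second');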
Thanks,
Chris.
CPython versions tested on:
3.13
Operating systems tested on:
Linux