I've asked this question before about killing a process that uses too much memory, and I've got most of a solution worked out.
However, there is one problem: calculating huge constant literals hangs the interpreter before the limit is ever applied.
TLDR: Python precomputes constant expressions in the code at compile time. If a very large number is instead calculated with at least one non-constant intermediate step, the computation happens at run time and the process can be limited by CPU time.
It took quite a bit of searching, but I have discovered evidence that Python 3 does precompute constant expressions that it finds in the code before evaluating anything. One piece of evidence is this webpage: A Peephole Optimizer for Python. I've quoted part of it below.
ConstantExpressionEvaluator
This class precomputes a number of constant expressions and stores them in the function's constants list, including obvious binary and unary operations and tuples consisting of just constants. Of particular note is the fact that complex literals are not represented by the compiler as constants but as expressions, so 2+3j appears as
LOAD_CONST n (2)
LOAD_CONST m (3j)
BINARY_ADD
This class converts those to
LOAD_CONST q (2+3j)
which can result in a fairly large performance boost for code that uses complex constants.
The fact that 2+3j is used as an example strongly suggests that not only small constants are being precomputed and cached, but any constant literal in the code. I also found this comment on another Stack Overflow question (Are constant computations cached in Python?):
Note that for Python 3, the peephole optimizer does precompute the 1/3 constant. (CPython specific, of course.) – Mark Dickinson Oct 7 at 19:40
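You can check this yourself with the standard dis module (my own quick sketch, not part of either quote): disassembling a function that contains such a literal should show a single folded LOAD_CONST rather than two loads and a BINARY_ADD.
import dis

def f():
    return 2 + 3j

# On CPython 3.x this typically prints a single
#     LOAD_CONST ((2+3j))
# instead of LOAD_CONST (2), LOAD_CONST (3j), BINARY_ADD,
# i.e. the expression was folded at compile time.
dis.dis(f)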
This is further supported by the fact that replacing
y = 10**(10**10)
with this also hangs, even though I never call the function!
def f():
    y = 10**(10**10)
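In fact (a small sketch of my own, assuming the constant folding described above), merely compiling the source triggers the computation; note that newer CPython releases limit how large a folded constant may get, so this may not hang there:
# Hangs at compile time on the Python 3 versions discussed here,
# because the optimizer evaluates 10**(10**10) before any code runs.
code = compile('y = 10**(10**10)', '<test>', 'exec')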
Luckily for me, I don't have any such giant literal constants in my code. Any computation of such constants happens later, which can be (and is) limited by the CPU time limit. I changed
y = 10**(10**10)
to this,
x = 10
print(x)
y = 10**x
print(y)
z = 10**y
print(z)
and got this output, as desired!
-1 -1
10
10000000000
ran out of time!
The moral of the story: Limiting a process by CPU time or memory consumption (or some other method) will work as long as there is no large constant literal in the code that Python tries to precompute.
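For completeness, here is a hedged sketch of the memory-limit side mentioned above (my own example, not from the original post): set an address-space limit with resource.RLIMIT_AS before the computation, and a too-large run-time result raises MemoryError instead of hanging.
import resource

# Cap the address space at roughly 256 MiB (soft and hard limits).
limit = 256 * 1024 * 1024
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

x = 10
try:
    y = x ** (x ** x)   # computed at run time, so the limit applies
except MemoryError:
    print('ran out of memory!')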
Use a function.
It does seem that Python tries to precompute constant integer expressions (I only have empirical evidence; if anyone has a source, please let me know). This would normally be a helpful optimization, since the vast majority of literals in scripts are probably small enough not to incur noticeable delays when precomputed. To get around it, you need to make your literal the result of a non-constant computation, such as a function call with parameters.
Example:
import resource
import os
import signal

# Raise SystemExit when the CPU-time soft limit is exceeded (SIGXCPU).
def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')
signal.signal(signal.SIGXCPU, timeRanOut)

soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft, hard)
# 10 seconds of CPU time (soft limit), 100 seconds hard limit.
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))

# x**(x**x) is no longer a constant expression, so nothing is
# precomputed at compile time and the limit can kick in.
f = lambda x=10: x**(x**x)
y = f()
This gives the expected result:
xubuntu@xubuntu-VirtualBox:~/Desktop$ time python3 hang.py
-1 -1
ran out of time!
real 0m10.027s
user 0m10.005s
sys 0m0.016s
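As a sanity check (my own sketch, not part of the original answer), disassembling the lambda shows that the exponentiation is built from LOAD_FAST operations on x rather than one giant folded constant, which is why it only runs, and can be interrupted, at call time:
import dis

f = lambda x=10: x**(x**x)
# Typical CPython output: LOAD_FAST x three times and two power
# operations (BINARY_POWER, or BINARY_OP ** on newer versions),
# with no huge precomputed LOAD_CONST.
dis.dis(f)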