Python, Raspberry Pi: call a task every 10 milliseconds precisely


Question


I'm currently trying to have a function called every 10ms to acquire data from a sensor.

Basically I was triggering the callback from a GPIO interrupt, but I changed my sensor and the one I'm currently using doesn't have an INT pin to drive the callback.

So my goal is to have the same behaviour, but with an internal interrupt generated by a timer.

I tried this, based on this topic:

import threading
import time

def work():
    # re-arm the timer first so the next call is scheduled before the work is done
    threading.Timer(0.25, work).start()
    print(time.time())
    print("stackoverflow")

work()

But when I run it, I can see that the timer is not really precise and drifts over time, as you can see:

1494418413.1584847
stackoverflow
1494418413.1686869
stackoverflow
1494418413.1788757
stackoverflow
1494418413.1890721
stackoverflow
1494418413.1992736
stackoverflow
1494418413.2094712
stackoverflow
1494418413.2196639
stackoverflow
1494418413.2298684
stackoverflow
1494418413.2400634
stackoverflow
1494418413.2502584
stackoverflow
1494418413.2604961
stackoverflow
1494418413.270702
stackoverflow
1494418413.2808678
stackoverflow
1494418413.2910736
stackoverflow
1494418413.301277
stackoverflow

So the timer is drifting by about 0.2 milliseconds every 10 milliseconds, which is quite a big bias after a few seconds.

I know that Python is not really made for "real-time" work, but I think there should be a way to do it.

If anyone has already had to handle time constraints with Python, I would be glad to get some advice.

Thanks.


Answer 1:


This code works on my laptop. It logs the delta between the target and the actual time; the main thing is to minimise what is done in the work() function, because e.g. printing and scrolling the screen can take a long time.

The key thing is to start the next timer based on the difference between the time when that call is made and the target time.

I slowed the interval down to 0.1 s so it is easier to see the jitter, which on my Win7 x64 can exceed 10 ms; that would cause problems by passing a negative value to the Timer() call :-o

This logs 100 samples and then prints them. If you redirect the output to a .csv file, you can load it into Excel to display graphs.

from multiprocessing import Queue
import threading
import time

# this accumulates record of the difference between the target and actual times
actualdeltas = []

INTERVAL = 0.1

def work(queue, target):
    # first thing to do is record the jitter - the difference between target and actual time
    actualdeltas.append(time.clock()-target+INTERVAL)
#    t0 = time.clock()
#    print("Current time\t" + str(time.clock()))
#    print("Target\t" + str(target))
#    print("Delay\t" + str(target - time.clock()))
#    print()
#    t0 = time.clock()
    if len(actualdeltas) > 100:
        # print the accumulated deltas then exit
        for d in actualdeltas:
            print(d)
        return
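    # delay until the next absolute target time so that errors do not accumulate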
    threading.Timer(target - time.clock(), work, [queue, target+INTERVAL]).start()

myQueue = Queue()

target = time.clock() + INTERVAL
work(myQueue, target)

Typical output (i.e. don't rely on millisecond timing on Windows in Python):

0.00947008617187
0.0029628920052
0.0121824719378
0.00582923077099
0.00131316206917
0.0105631524709
0.00437298744466
-0.000251418553351
0.00897956530515
0.0028528821332
0.0118192949105
0.00546301269675
0.0145723546788
0.00910063698529
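
Note: time.clock() was deprecated in Python 3.3 and removed in Python 3.8. Here is a minimal sketch of the same absolute-target idea on current Python 3, using time.perf_counter() (my adaptation, not part of the original answer):

import threading
import time

INTERVAL = 0.1
deltas = []  # jitter samples: actual firing time minus target time

def work(target):
    # record how late (or early) this call fired relative to its target
    deltas.append(time.perf_counter() - target)
    if len(deltas) >= 100:
        for d in deltas:
            print(d)
        return
    # schedule relative to the absolute target so errors do not accumulate;
    # clamp at 0 in case jitter already pushed us past the next target
    next_target = target + INTERVAL
    threading.Timer(max(0.0, next_target - time.perf_counter()), work, [next_target]).start()

threading.Timer(INTERVAL, work, [time.perf_counter() + INTERVAL]).start()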



Answer 2:


I tried your solution but I got strange results.

Here is my code:

from multiprocessing import Queue
import threading
import time

def work(queue, target):
    t0 = time.clock()
    print("Target\t" + str(target))
    print("Current time\t" + str(t0))
    print("Delay\t" + str(target - t0))
    print()
    threading.Timer(target - t0, work, [queue, target+0.01]).start()

myQueue = Queue()

target = time.clock() + 0.01
work(myQueue, target)

And here is the output:

Target  0.054099
Current time    0.044101
Delay   0.009998

Target  0.064099
Current time    0.045622
Delay   0.018477

Target  0.074099
Current time    0.046161
Delay   0.027937999999999998

Target  0.084099
Current time    0.0465
Delay   0.037598999999999994

Target  0.09409899999999999
Current time    0.046877
Delay   0.047221999999999986

Target  0.10409899999999998
Current time    0.047211
Delay   0.05688799999999998

Target  0.11409899999999998
Current time    0.047606
Delay   0.06649299999999997

So we can see that the target is increasing by 10 ms per call, and for the first loop the delay for the timer seems to be right.

The point is that instead of starting again at current_time + delay, it starts again at 0.045622, which represents a delay of 0.001521 instead of 0.010000.

Did I miss something? My code seems to follow your logic, doesn't it?
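
The likely explanation (an inference on my part; the thread never states it explicitly): on Unix, the pre-3.8 time.clock() returned processor time rather than wall-clock time, so it barely advances while the thread sleeps. That matches the output above, where the current time creeps forward by only a millisecond or so per 10 ms interval, and it is why the working example below switches to time.time(). A quick check, runnable on Python older than 3.8:

import time

# On Unix, time.clock() measured CPU time, not wall-clock time
# (deprecated in Python 3.3, removed in 3.8), so sleeping barely advances it.
t0_cpu = time.clock()
t0_wall = time.time()
time.sleep(1.0)                 # sleeping consumes almost no CPU time
print(time.clock() - t0_cpu)    # close to 0 on Linux
print(time.time() - t0_wall)    # roughly 1.0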


Working example for @Chupo_cro

Here is my working example:

from multiprocessing import Queue
import RPi.GPIO as GPIO
import threading
import time
import os

INTERVAL = 0.01
ledState = True

GPIO.setmode(GPIO.BCM)
GPIO.setup(2, GPIO.OUT, initial=GPIO.LOW)

def work(queue, target):
    global ledState
    try:
        # re-arm relative to the absolute target time so errors do not accumulate
        threading.Timer(target - time.time(), work, [queue, target + INTERVAL]).start()
        GPIO.output(2, ledState)
        ledState = not ledState
    except KeyboardInterrupt:
        GPIO.cleanup()

try:
    myQueue = Queue()

    target = time.time() + INTERVAL
    work(myQueue, target)
except KeyboardInterrupt:
    GPIO.cleanup()
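
One more note (a suggestion of mine, not from the original thread): threading.Timer spawns a fresh thread for every tick, which is relatively heavy at 100 Hz. The same drift-free absolute-deadline idea also works in a single loop with time.monotonic(), a clock that cannot jump backwards:

import time

INTERVAL = 0.01  # 10 ms

def work():
    # keep this as short as possible; anything slow eats into the interval
    pass

next_deadline = time.monotonic() + INTERVAL
try:
    while True:
        # sleep until the absolute deadline so per-tick errors do not accumulate
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        work()
        next_deadline += INTERVAL
except KeyboardInterrupt:
    pass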


Source: https://stackoverflow.com/questions/43892334/python-raspberry-pi-call-a-task-avery-10-milliseconds-precisely
