Question
I'm trying to send pictures from my webcam to AWS Rekognition to detect the activity of the person sitting in front of it, using Python.
To do so, I take a picture every 5 seconds and send it to AWS. But when I do, it seems to always send back information about the first frame that I sent:
import boto3
import cv2
from time import sleep

cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()

    client = boto3.client('rekognition')

    print("hello")
    ret, fileImg = cv2.imencode('.png', img)
    response = client.detect_labels(Image={'Bytes': fileImg.tobytes()})
    print('Detected labels for Camera Capture')
    for label in response['Labels']:
        print(label['Name'] + ' : ' + str(label['Confidence']))
    sleep(5)
Here is the result I get from that call:
Detected labels for Camera Capture
Human : 99.1103897095
People : 99.1103744507
Person : 99.1103897095
Face : 56.5527687073
Crypt : 51.1719360352
hello
Detected labels for Camera Capture
Human : 99.0247421265
People : 99.0247344971
Person : 99.0247421265
Face : 57.7796173096
Lighting : 51.8473701477
Crypt : 51.08152771
hello
Detected labels for Camera Capture
Human : 99.0808181763
People : 99.0808105469
Person : 99.0808181763
Face : 56.4268836975
Lighting : 54.6302490234
Crypt : 50.8622779846
hello
The image has changed a lot between calls, so it should (at least I think) give me different results.
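One way to check whether the bytes sent to Rekognition actually differ between iterations is to hash and save each captured frame. This is a minimal debugging sketch of my own (the hashing and the frame_%d.png filenames are not from the thread), assuming the same VideoCapture setup as above:

import hashlib
import time

import cv2

cap = cv2.VideoCapture(0)
try:
    for i in range(3):
        ret, img = cap.read()
        if not ret:
            break
        ret, fileImg = cv2.imencode('.png', img)
        frame_bytes = fileImg.tobytes()
        # Identical hashes across iterations would mean the exact same
        # frame bytes are being sent to Rekognition each time.
        print(i, hashlib.md5(frame_bytes).hexdigest())
        # Save each frame so it can also be inspected visually.
        cv2.imwrite('frame_%d.png' % i, img)
        time.sleep(5)
finally:
    cap.release()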
Answer 1:
Here's some code that I use to put rectangles around faces in a similar way:
import cv2
import numpy as np
import boto3

# Setup
scale_factor = .15
green = (0, 255, 0)
red = (0, 0, 255)
frame_thickness = 2
cap = cv2.VideoCapture(0)
rekognition = boto3.client('rekognition')

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    height, width, channels = frame.shape

    # Convert frame to jpg
    small = cv2.resize(frame, (int(width * scale_factor), int(height * scale_factor)))
    ret, buf = cv2.imencode('.jpg', small)

    # Detect faces in jpg
    faces = rekognition.detect_faces(Image={'Bytes': buf.tobytes()}, Attributes=['ALL'])

    # Draw rectangle around faces
    for face in faces['FaceDetails']:
        smile = face['Smile']['Value']
        cv2.rectangle(frame,
                      (int(face['BoundingBox']['Left'] * width),
                       int(face['BoundingBox']['Top'] * height)),
                      (int((face['BoundingBox']['Left'] + face['BoundingBox']['Width']) * width),
                       int((face['BoundingBox']['Top'] + face['BoundingBox']['Height']) * height)),
                      green if smile else red, frame_thickness)

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
It scales the picture down because Rekognition doesn't need the full size to detect faces.
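The same idea carries over to the label-detection call from the question. Here is a minimal sketch of my own adaptation (not part of the original answer), which resizes each frame before calling detect_labels and keeps reading frames continuously instead of sleeping:

import cv2
import boto3

scale_factor = .15
rekognition = boto3.client('rekognition')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    height, width, channels = frame.shape

    # Shrink the frame before encoding so less data is uploaded per request.
    small = cv2.resize(frame, (int(width * scale_factor), int(height * scale_factor)))
    ret, buf = cv2.imencode('.jpg', small)

    # Same detect_labels call as in the question, now on the resized frame.
    response = rekognition.detect_labels(Image={'Bytes': buf.tobytes()})
    for label in response['Labels']:
        print(label['Name'] + ' : ' + str(label['Confidence']))

    # Show the live frame; press q in the preview window to quit.
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()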
Source: https://stackoverflow.com/questions/50741636/send-multiple-frame-to-aws-rekognition