Using OpenCV Output as Webcam

忘掉有多难 2020-12-14 08:49

So, I want to write a program that makes the processed output from OpenCV appear as a webcam, so I can use it to create effects for a program like Skype. I am stuck, and Googling has not turned up a solution.

5 Answers
  • 2020-12-14 09:25

    I had the same problem: my grandmother hears poorly, so I wanted to be able to add subtitles to my Skype video feed (and some effects for laughs). I could not get Webcamoid working. The screen-capture method (mentioned in another answer) seemed too hacky, and I could not get Skype to detect ffmpeg's dummy output camera (guvcview does detect it, though). Then I ran across this:

    https://github.com/jremmons/pyfakewebcam

    It is not C++ but Python, yet it is fast enough on my non-fancy laptop. It can create multiple dummy webcams (I only need two), and it works with Python 3 as well. The steps in the README were easy to reproduce on Ubuntu 18.04; within 2-3 minutes the example code was running. At the time of this writing, the examples there do not use input from a real webcam, so here is my code, which processes the real webcam's input and outputs it to two dummy cameras:

    import cv2
    import time
    import pyfakewebcam

    IMG_W = 1280
    IMG_H = 720

    # Real webcam
    cam = cv2.VideoCapture(0)
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, IMG_W)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, IMG_H)

    # Two v4l2loopback dummy devices, created as described in the pyfakewebcam README
    fake1 = pyfakewebcam.FakeWebcam('/dev/video1', IMG_W, IMG_H)
    fake2 = pyfakewebcam.FakeWebcam('/dev/video2', IMG_W, IMG_H)

    while True:
        ret, frame = cam.read()
        if not ret:
            break

        flipped = cv2.flip(frame, 1)

        # Mirror effect: replace the right half of the frame with its reflection
        frame[0:IMG_H, IMG_W//2:IMG_W] = flipped[0:IMG_H, IMG_W//2:IMG_W]

        # pyfakewebcam expects RGB frames, while OpenCV captures BGR
        fake1.schedule_frame(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        fake2.schedule_frame(cv2.cvtColor(flipped, cv2.COLOR_BGR2RGB))

        time.sleep(1/15.0)
    
  • 2020-12-14 09:34

    Check out GStreamer. OpenCV lets you create a VideoCapture object whose source is defined as a GStreamer pipeline; the source can be a webcam or a video file. GStreamer also lets you write filters that use OpenCV or other libraries to modify the video inside the pipeline, and some examples are available.

    I don't have experience wiring this up to Skype, but it looks like it should be possible. You just need to build the right pipeline, something like: gst-launch videotestsrc ! ffmpegcolorspace ! "video/x-raw-yuv,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1.
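
    As a rough sketch of how this could be driven from OpenCV (not from the original answer): assuming OpenCV 4.x built with GStreamer support, a v4l2loopback dummy device at /dev/video1 (e.g. created with sudo modprobe v4l2loopback), and GStreamer 1.0 syntax (where ffmpegcolorspace becomes videoconvert and the caps are video/x-raw), a cv::VideoWriter can push processed frames into a pipeline ending in v4l2sink. The device path, resolution, and pipeline string below are assumptions to adapt:

    #include <opencv2/opencv.hpp>

    int main() {
        // Real webcam in, processed frames out to a v4l2loopback dummy device.
        cv::VideoCapture cam(0);
        if (!cam.isOpened()) return 1;

        const int W = 640, H = 480;

        // appsrc receives BGR frames from cv::VideoWriter; videoconvert turns them
        // into YUY2 for the virtual camera at /dev/video1 (assumed device path).
        cv::VideoWriter out(
            "appsrc ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video1",
            cv::CAP_GSTREAMER, 0, 30.0, cv::Size(W, H), true);
        if (!out.isOpened()) return 1;

        cv::Mat frame;
        while (cam.read(frame)) {
            cv::resize(frame, frame, cv::Size(W, H));  // match the writer's frame size
            cv::flip(frame, frame, 1);                 // placeholder "effect": mirror
            out.write(frame);                          // pushed into the GStreamer pipeline
        }
        return 0;
    }

    Skype (or any other v4l2 application) should then be able to pick /dev/video1 as a camera while this program is running.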

  • 2020-12-14 09:37

    Not trivial, but you could modify an open-source "virtual camera source" such as https://github.com/rdp/screen-capture-recorder-to-video-windows-free so that it takes its input from OpenCV instead of the desktop. Good luck!

  • 2020-12-14 09:38

    One way to do this is to send the Mat object directly over a socket and convert the byte array back into a Mat on the receiving side; the drawback is that you need OpenCV installed on both PCs. Alternatively, you can use an MJPEG streamer to stream the video over the network and process it on the receiving side; then you only need OpenCV on the receiving PC.

    Using a Socket

    Take Mat.data and send it directly over the socket; the data is laid out as BGR BGR BGR.... On the receiving side you need to know the size of the image you are going to receive. After receiving, just assign the received buffer (the BGR BGR ... array) to a Mat of the size you already know.

    Client:-

    // 'frame' holds the image to send (e.g. grabbed from cv::VideoCapture)
    Mat frame;
    frame = frame.reshape(0, 1);  // flatten to a single row so the data is one continuous BGR run

    int imgSize = frame.total() * frame.elemSize();

    // Send data here (clientSock is an already-connected TCP socket)
    int bytes = send(clientSock, frame.data, imgSize, 0);
    

    Server:-

    Mat img = Mat::zeros(height, width, CV_8UC3);
    int imgSize = img.total() * img.elemSize();
    std::vector<uchar> sockData(imgSize);  // receive buffer for one frame

    // Receive data here: loop until the whole frame has arrived
    int bytes = 0;
    for (int i = 0; i < imgSize; i += bytes) {
        if ((bytes = recv(connectSock, sockData.data() + i, imgSize - i, 0)) == -1) {
            quit("recv failed", 1);  // quit() is this answer's own error helper
        }
    }

    // Assign pixel values to img (the buffer is BGR BGR BGR ...)
    int ptr = 0;
    for (int i = 0; i < img.rows; i++) {
        for (int j = 0; j < img.cols; j++) {
            img.at<cv::Vec3b>(i, j) = cv::Vec3b(sockData[ptr + 0], sockData[ptr + 1], sockData[ptr + 2]);
            ptr += 3;
        }
    }
    

    For socket programming you can refer to this link.

    Using MJPEG Streamer

    Here you need to install the MJPEG streamer software on the PC where the webcam is attached; on the receiving PCs you only need OpenCV installed, and you do the processing there. You can open the web stream directly with OpenCV's VideoCapture class, like:

    Cap.open("http://192.168.1.30:8080/?dummy=param.mjpg");
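
    A minimal sketch of the receiving side (not from the original answer), assuming OpenCV was built with a backend that can open HTTP MJPEG streams (FFmpeg or GStreamer) and that the URL above points at a running MJPEG streamer instance; the grayscale conversion is just a stand-in for your own processing:

    #include <opencv2/opencv.hpp>

    int main() {
        // Open the network stream exactly like a local camera.
        cv::VideoCapture cap("http://192.168.1.30:8080/?dummy=param.mjpg");
        if (!cap.isOpened()) return 1;

        cv::Mat frame, gray;
        while (cap.read(frame)) {
            // Placeholder processing: convert to grayscale.
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::imshow("processed stream", gray);
            if (cv::waitKey(1) == 27) break;  // Esc to quit
        }
        return 0;
    }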
    
  • 2020-12-14 09:44

    So, I found a hack for this; it is not necessarily the best method, but it DEFINITELY works.

    Download a program similar to SplitCam; it can emulate a webcam feed from a video file, an IP feed, and/or a particular section of the desktop screen.

    So in essence, you write a program that processes the webcam video and displays it in an OpenCV highgui window, and you let SplitCam take that window as the input for any other application, like Skype. I tried it just now and it works perfectly!
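
    A minimal sketch of the display side of that approach (the window name and the mirror effect are placeholders I've chosen, not anything SplitCam requires); SplitCam or a similar tool is then pointed at the screen region where this window sits:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cam(0);                   // real webcam
        if (!cam.isOpened()) return 1;

        const std::string win = "opencv-output";   // arbitrary window name
        cv::namedWindow(win, cv::WINDOW_NORMAL);

        cv::Mat frame;
        while (cam.read(frame)) {
            cv::flip(frame, frame, 1);             // placeholder effect: mirror the image
            cv::imshow(win, frame);                // SplitCam captures this on-screen window
            if (cv::waitKey(1) == 27) break;       // Esc to quit
        }
        return 0;
    }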

    HTH
