
How to pipe live video frames from ffmpeg to PIL?


I assume the ultimate goal is to handle a USB camera at a high frame rate on Linux; the following answer is written with that goal in mind.

First, while a few USB cameras support H.264, the Linux driver for USB cameras (the UVC driver) currently does not support stream-based payloads, which include H.264; see the "UVC Feature" table on the driver home page. User-space tools like ffmpeg use this driver and therefore have the same limitations regarding which video format is used for the USB transfer.

The good news is that if a camera supports H.264, it almost certainly also supports MJPEG, which is supported by the UVC driver and compresses well enough to allow 1280x720 at 30 fps over USB 2.0. You can list the video formats supported by your camera with v4l2-ctl -d 0 --list-formats-ext. For a Microsoft Lifecam Cinema, for example, 1280x720 is supported at only 10 fps for YUV 4:2:2 but at 30 fps for MJPEG.
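If you end up reading the camera through OpenCV (as below), you can also request MJPEG and a specific resolution/frame rate from the capture and then check what the driver actually granted. A minimal sketch, assuming device 0 and the 1280x720 @ 30 fps figures above; note that V4L2 may silently fall back to another format or rate if the camera cannot deliver the request:

import cv2

cap = cv2.VideoCapture(0)   # /dev/video0; adjust for your camera

# Ask V4L2 for MJPEG at 1280x720, 30 fps
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

# Read back what was actually granted
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT),
      cap.get(cv2.CAP_PROP_FPS))
cap.release()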

For reading from the camera, I have had good experience with OpenCV. In one of my projects, I have 24(!) Lifecams connected to a single Ubuntu 6-core i7 machine, which does real-time tracking of fruit flies at 320x240 and 7.5 fps per camera (and also saves an MJPEG AVI for each camera as a record of the experiment). Since OpenCV uses the V4L2 APIs directly, it should be faster than a solution involving ffmpeg, GStreamer, and two pipes.

Bare bones (no error checking) code to read from the camera using OpenCV and create PIL images looks like this:

import cv2
from PIL import Image

cap = cv2.VideoCapture(0)   # /dev/video0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    ...   # do something with PIL image
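If you also want the per-camera MJPEG AVI record mentioned above, a cv2.VideoWriter can be added to the same loop. A rough sketch, assuming the 320x240 @ 7.5 fps settings from that project and a hypothetical output file cam0.avi; the frame size passed to VideoWriter must match the frames you actually write:

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

fps, size = 7.5, (320, 240)          # assumed values from the setup described above
writer = cv2.VideoWriter("cam0.avi", # hypothetical output path
                         cv2.VideoWriter_fourcc(*"MJPG"), fps, size)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    writer.write(frame)   # frame must match the (width, height) given above
    # ... tracking / PIL processing as in the loop above ...

writer.release()
cap.release()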

Final note: you likely need to build the v4l version of OpenCV to get compression (MJPEG); see this answer.
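One way to check whether MJPEG is actually in effect for your build is to read the FOURCC property back from the capture. A small sketch, again assuming device 0:

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))

# Decode the reported FOURCC into four characters, e.g. "MJPG" or "YUYV";
# if it is not "MJPG", this build is not using compression for the camera.
fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
print("".join(chr((fourcc >> 8 * i) & 0xFF) for i in range(4)))
cap.release()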