
I’m trying to record a video to a file while, at the same time, extracting frames from the video stream, on a Raspberry Pi with a Pi Cam. My goal is to have a video file recording while I also access frames from that video in “real time”, so that I can process them. The results of that processing are then served with JSServe for diagnostics viewed by users, again in “real time” (by that I mean within a second or so).

Here’s what I’ve got now:

```julia
w, h = (640, 480)     # image dimensions
bytesperframe = w * h # number of bytes in one frame
camera_cmd = `raspivid -w $w -h $h --output - --timeout 0 --nopreview` # call raspivid and pipe the stream to stdout
tee_cmd = `tee test_video.h264` # split the stream into 2: one copy gets saved to file
frame_cmd = `ffmpeg -f h264 -i pipe: -r 1 -f image2pipe -` # and one gets processed by ffmpeg so I can extract frames from it
io = open(pipeline(camera_cmd, tee_cmd, frame_cmd)) # start it all
bytes = read(io, bytesperframe) # read one frame
```

The problem with this is that whenever I `close(io)` I get broken-pipe errors, which makes sense, because how on earth could this piped process stop elegantly? The consequence is that sometimes the video file is corrupt or only partially saved. If anyone has a better idea of how to do this, please let me know.

Hi, I made some progress here, but not quite all the way.

First, here is a functioning example of how to save a video file and, at the same time, save the last frame to an image file (in these examples I crop the image to a square and resize it; you can ignore those parts):

```julia
using FFMPEG_jll

const SZ = 640              # width and height of the images
const IMGPATH = "frame.png" # the image file holding the last frame
# exe (the ffmpeg binary), file (the output video path), and FPS are
# defined elsewhere and omitted here

run(`$exe -y -hide_banner -loglevel error -f v4l2 -i /dev/video0 -filter_complex 'crop=in_h:in_h,split=2' -map '' $file -map '' -s $(SZ)x$SZ -r $FPS -update 1 $IMGPATH`, wait = false)
```

This works very well, but it writes each frame to disk, so there are a lot of unnecessary IO operations. Instead, here is an in-memory version, but there’s a “but” at the end (the following `readpngdata` was shamelessly taken from elsewhere):

```julia
using FFMPEG_jll, FileIO

const IMG = Ref{Any}() # a container for the last frame

io = open(`$exe -y -hide_banner -loglevel error -f v4l2 -s 1920x1080 -i /dev/video0 -filter_complex 'crop=in_h:in_h,split=2' -map '' $file -map '' -s $(SZ)x$SZ -r $FPS -vcodec png -f image2pipe -`)
while process_running(io)
    # repeatedly update the IMG container with the last frame
end
```

While this works, I cannot close, kill, or terminate the process gracefully, and all of my attempts to do so (calling `kill(p)` twice terminates it) result in a corrupt video file. I’ll keep stabbing at this, but if you can figure out a way to terminate this `Process`, that would be golden.
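The one-frame-at-a-time read from the pipe (`read(io, bytesperframe)` in a loop, keeping only the most recent frame) can be tried without a camera. Here is a minimal sketch; the `IOBuffer`, the `latest_frame` helper, and the toy sizes are stand-ins of mine, not part of the original setup:

```julia
# Minimal sketch: read fixed-size frames from a byte stream, keeping only the
# most recent complete frame. The IOBuffer stands in for the pipe opened from
# ffmpeg; the dimensions are made up (one byte per pixel, grayscale).
function latest_frame(stream::IO, bytesperframe::Int)
    frame = UInt8[]
    while bytesavailable(stream) >= bytesperframe
        frame = read(stream, bytesperframe)  # read exactly one frame's worth of bytes
    end
    return frame
end

w, h = (4, 3)  # toy image dimensions
n = w * h
stream = IOBuffer(vcat(fill(0x01, n), fill(0x02, n)))  # two fake "frames" back to back
println(all(==(0x02), latest_frame(stream, n)))  # → true: only the newest frame is kept
```

The same pattern drops stale frames when the consumer is slower than the camera, which is what the “last frame” container above wants.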

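On the termination problem: ffmpeg finalizes its output files when it receives a single SIGINT (the same as pressing ctrl-c once; a second SIGINT forces an immediate, unclean exit), so one thing worth trying is sending SIGINT instead of the default SIGTERM/SIGKILL. This is a sketch only, with `sleep` standing in for the ffmpeg command, not something verified against the exact pipelines above:

```julia
# Sketch: stop a long-running external process with SIGINT and wait for it to
# exit, rather than killing it outright. `sleep 100` is a stand-in here for
# the ffmpeg command started with wait = false.
p = run(`sleep 100`, wait = false)
kill(p, Base.SIGINT)         # polite interrupt instead of the default SIGTERM
wait(p)                      # block until the process has actually exited
println(process_running(p))  # → false
```

If this works for the recording process, the video file should be closed out properly before Julia moves on, instead of being truncated mid-write.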