cvEffects

Video effects with OpenCV - Final Project

Course: Image Processing, 2014

Professor: Luiz Velho

Student: Bruno Vianna

bruno (a) pobox.com



Introduction



This project aims to create a few moving image filters using computer vision algorithms described in the Image Processing course.

Computer vision “is the transformation of data from a still or video camera into either a decision or a new representation. All such transformations are done for achieving some particular goal.” (Bradski & Kaehler, 2008) In our case, the goal of the transformation is to explore the aesthetic potential of these techniques. Richard Szeliski (2010) lists several applications of computer vision, such as surveillance, motion capture, and medical imaging. Nevertheless, he makes no mention of video transformation meant purely as an artistic resource.

Many of the processes involved in obtaining meaningful data from video streams generate additional data as by-products. These intermediate data sets bear various visual relations to the image being processed. Our belief is that many compelling compositions can be created by manipulating both the by-products and the final data resulting from computer vision algorithms.



Tools, platforms and code

This project was programmed in C++, on Linux Mint 17. The OpenCV library version used is 2.4.10.

In order to facilitate working with video and graphics, I chose the openFrameworks platform, which bundles a number of libraries for UI, video, and image manipulation. Version 0.8.4 was used.

Git was used as the version control system, and the code can be downloaded from http://github.com/brunovianna/cvEffects



Interface

The interface has a menu on the left-hand side, offering the main options for the application. The first part allows the user to load a video, save the video with the selected effects to a file, save the video as a series of frames, and restart the video.

The second part lets the user choose the background, which can be either black, the running video, or a still frame. This is very useful for the effects that work better with blending.

The blend bar defines whether frames should be blended on top of each other. With certain effects, this leaves trails behind moving objects.

A radio button allows the video to be scaled down to fit the screen.

Finally, a drop-down menu allows the user to choose the effect that will be applied. Currently, five effects are available: feature lines, optical flow lines, track optical flow, background extract and surf waves. Four effects are described in detail in the next section.



Effects

The password for all the videos is vision.

Effect #1: Background Extractor

We will use the video of the construction below to illustrate this effect.

Download



This is the simplest of the effects described. It is probably more interesting due to the choice of the video than the algorithm itself. It works by extracting the background of the image, using the method described by Zivkovic (2004), implemented in OpenCV's BackgroundSubtractorMOG2 class. Then a light Gaussian blur is applied (using OpenCV's GaussianBlur function).

This effect makes the video interesting by creating a space that is defined by the movement of the workers, instead of the graphical representation of the building being demolished.

Download



Effect #2: Track optical flow

We will use the video below to demonstrate this effect.

Download

This effect uses the Lucas-Kanade method to track moving objects over time (Bradski & Kaehler, 2008, p. 323), implemented in OpenCV's calcOpticalFlowPyrLK function. First, we remove the background using the same method as in the previous effect. This assumes that the camera is in a fixed position, and there is not much movement in the background. In the case of this video, the sky provides a clean backdrop. Then, we look for interesting features to track using the detector described by Shi and Tomasi (1994), implemented in the GoodFeaturesToTrackDetector class.

The resulting list of features is used to compare the movement of objects between frames; a line is then traced between the last and current positions. By using openFrameworks' blend feature, the trails of lines across frames are drawn onto the video, resulting in the tracks below.

Download



Effect #3: Gradient blur

This effect uses the gradients of video frames to make a visual composition. We use the Sobel operator (Sobel & Feldman, 1973) to transform each video frame into an image of the gradients of the original frame. The operator is implemented in OpenCV's Sobel function.



Original video.

Download



The effect is created by applying a very light blend (less than 5%) to the frames, so the gradients are added slowly. At a certain point the effect ceases to change in any interesting way; the most appealing feature is the gradual formation of the blended image.

Download



Effect #4: Sift curves

This effect is based on the image stitching process described in the Image Processing course, where matching features in different images must be found in order to stitch these images together. The process is also described by Szeliski (2010, p. 447). We apply the SIFT algorithm (Lowe, 2004) to the original video and store the keypoints with the locations of the found features.



Original video.

Download



However, instead of matching them to another image, we use them as points to draw a curve on the video. The curve is visually appealing because it is attracted to distinctive points in the image, yet it redraws itself on every frame, as the algorithm does not always detect the same features.

Once we have a list of keypoints, we create a shape using openFrameworks' ofCurveVertex feature. This draws a continuous curve passing through the first 20 points detected.


Download







Conclusion

The exploration of the aesthetic potential of computer vision is a vast field. The combination of video processing and analysis tools with the graphical manipulation of their data can lead to beautiful imagery. I expect to continue developing new filters in the near future.



Bibliography

Bradski, G. & Kaehler, A., Learning OpenCV, O’Reilly Media, 2008. ISBN: 978-0-596-51613-0.

Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110.

Shi, J. & Tomasi, C., Good Features to Track, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, June 1994.

Sobel, I. and Feldman, G., A 3 × 3 Isotropic Gradient Operator for Image Processing, in R. Duda and P. Hart (Eds.), Pattern Classification and Scene Analysis (pp. 271–272), New York: Wiley, 1973.

Szeliski, R., Computer Vision: Algorithms and Applications, Springer, 2010, retrieved from http://szeliski.org/Book/ on January 31, 2015.

Zivkovic, Z., Improved adaptive Gaussian mixture model for background subtraction, International Conference on Pattern Recognition, UK, August 2004.