The number of computer vision algorithms is constantly growing, for two reasons. The first is the nature of computer vision as an interdisciplinary science. Computer vision scholars usually have profound expertise in mathematics, statistics, and AI, so it is no wonder that new techniques for image recognition and video tracking are constantly developed by combining findings from these related fields.
The second reason for the growing number of computer vision algorithms is the rapid evolution of computing devices and cameras. Unlike in its early stages, modern computer vision can rely on unprecedented computing capacity to tackle fresh challenges.
In other words, new computer vision algorithms are developed every day. So, to help you keep up with the latest trends in computer vision, we have put together a list of both long-established and fresh algorithms. Keep reading to learn more about the following seven computer vision algorithms (in alphabetical order):
- Efficient Region Tracking (1998);
- Efficient second-order minimization method, aka ESM (2004);
- Eigenface (1991);
- EigenTracking (1998);
- Inverse compositional algorithm (2001);
- Lucas-Kanade algorithm (1981);
- Szeliski-Shum algorithm (2000).
Spoiler alert: if you are running short of time, feel free to skip the main body of the post and go straight to the conclusions. There you will find a table with a quick overview of all the analyzed algorithms, including their authors, objects of analysis, and more.
Computer Vision Algorithms with High Practical Value
Efficient Region Tracking (1998)
This algorithm was first described in Efficient Region Tracking With Parametric Models of Geometry and Illumination by Gregory D. Hager and Peter N. Belhumeur. Since efficient region tracking applies findings from mathematics, namely geometry, it is a telling example of how interdisciplinary most modern work on computer vision is.
To examine how this algorithm can be applied to three-dimensional images, go to Tracking in 3D: Image Variability Decomposition for Recovering Object Pose and Illumination by Peter N. Belhumeur and Gregory D. Hager.
Efficient Second-Order Minimization Method, aka ESM (2004)
ESM is believed to be better adapted to complex real-time computer vision challenges than first-order methods. It is one of the computer vision algorithms that can be applied not only to still images but also to videos. Developed by Selim Benhimane and Ezio Malis in 2004, the ESM method is straightforward yet elegant: it relates the current image to a reference image through the projective transformation (homography) induced by a planar surface in the scene. First presented in Homography-based 2D Visual Tracking and Servoing in 2004, the algorithm is further used to derive and compute a new image-based control law that relies exclusively on visual data.
To learn how this algorithm can be used for image identification on deformed surfaces, read An Efficient Unified Approach to Direct Visual Tracking of Rigid and Deformable Surfaces by Ezio Malis (2007).
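To illustrate the homography relation that ESM estimates, here is a minimal sketch of mapping points from a reference image into the current one through a 3x3 homography matrix. This is not the ESM optimization itself (which iteratively refines the homography using both reference- and current-image gradients); the function name and example matrix are ours:

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D points through a 3x3 homography H.

    For a planar surface, pixels of the reference image are related to
    pixels of the current image by exactly such a projective warp.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T                              # apply the warp
    return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean coords

# A pure-translation homography: shift every point by (2, 3).
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
print(warp_points(H, np.array([[0.0, 0.0], [1.0, 1.0]])))
```

General homographies also encode rotation, scaling, and perspective, which is why a single such matrix suffices to track a planar region between frames.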
Eigenface (1991)
Face Recognition Using Eigenfaces by Matthew Turk and Alex Pentland has revolutionized the way faces are recognized by computing devices. A method of high practical value, it is widely applied by police and banks to identify people based on their facial features. The Eigenface algorithm has also influenced face recognition on social media. Neither manual intervention nor visual guidance is required to recognize faces successfully. With ever more two-dimensional face data available online, the Eigenface algorithm has reached a high level of accuracy.
To look into more advanced findings of Alex Pentland, read Honest Signals: How They Shape Our World (published in 2008 by the MIT Press).
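The core of the method is principal component analysis: each face image is flattened into a vector, projected onto a small set of "eigenfaces", and compared in that low-dimensional space. A minimal sketch with NumPy, using random vectors in place of real face images (function names are ours, not from the original paper):

```python
import numpy as np

def train_eigenfaces(faces, k):
    """Compute the mean face and top-k eigenfaces from flattened images.

    faces: (n_samples, n_pixels) array, one flattened image per row.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal components of the data: the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """Represent a face by its coordinates in eigenface space."""
    return eigenfaces @ (face - mean)

def nearest_identity(face, mean, eigenfaces, gallery_weights):
    """Return the index of the gallery face closest in eigenface space."""
    w = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(dists))

# Toy usage: 5 random "faces" of 64 pixels each stand in for a real gallery.
rng = np.random.default_rng(0)
faces = rng.normal(size=(5, 64))
mean, eigenfaces = train_eigenfaces(faces, 4)
gallery = np.array([project(f, mean, eigenfaces) for f in faces])
```

Recognition then reduces to a nearest-neighbour search among a handful of projection coefficients rather than a comparison of full images, which is what makes the method fast enough for large galleries.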
EigenTracking (1998)
First introduced in the late 1990s, the EigenTracking algorithm was developed by Michael J. Black and Allan D. Jepson. In EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation, the scholars presented a fresh approach to identifying hand gestures in motion. Based on the analysis of long image sequences capturing a moving hand, the EigenTracking algorithm makes it possible to track objects even when they are shot from different viewpoints.
To learn how to reduce the risks posed by videos with a high level of noise, consult A Rao-Blackwellized Particle Filter for EigenTracking by Zia Khan et al.
Inverse Compositional Algorithm (2001)
Introduced in Equivalence and Efficiency of Image Alignment Algorithms, the inverse compositional method is a logical step in the evolution from additive to compositional computer vision algorithms. Developed by Simon Baker and Iain Matthews, it is less computationally costly than the forward additive and forward compositional algorithms, because the most expensive quantities can be computed once on the template rather than at every iteration.
For more recent information on how to apply the inverse compositional image alignment algorithm to active appearance models (AAMs), consult Active Appearance Models Revisited by the same authors.
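The efficiency gain comes from switching the roles of image and template: gradients, steepest-descent images, and the Hessian are all precomputed on the template, and each iteration only re-samples the input image. A minimal sketch for a pure-translation warp (the function names and the bilinear sampler are ours; the original formulation covers general warps):

```python
import numpy as np

def sample(img, p):
    """Bilinearly sample img at every pixel location shifted by p = (px, py)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs = np.clip(xs + p[0], 0, w - 1)
    ys = np.clip(ys + p[1], 0, h - 1)
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    ax, ay = xs - x0, ys - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x1]
            + (1 - ax) * ay * img[y1, x0] + ax * ay * img[y1, x1])

def align_translation(template, image, n_iters=50):
    """Inverse compositional alignment for a translation-only warp."""
    # Everything expensive is precomputed on the template, outside the loop.
    Ty, Tx = np.gradient(template.astype(float))
    sd = np.stack([Tx.ravel(), Ty.ravel()], axis=1)  # steepest-descent images
    H = sd.T @ sd                                    # precomputed 2x2 Hessian
    p = np.zeros(2)
    for _ in range(n_iters):
        err = (sample(image, p) - template).ravel()  # I(x + p) - T(x)
        dp = np.linalg.solve(H, sd.T @ err)
        p -= dp  # composing with the inverted update: for translations, p <- p - dp
    return p
```

For example, aligning a smooth blob against a shifted copy of itself recovers the shift, while the per-iteration cost stays at one image resampling plus one tiny linear solve.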
Lucas-Kanade Algorithm (1981)
This algorithm was created by Bruce D. Lucas and Takeo Kanade and presented in 1981 in An Iterative Image Registration Technique with an Application to Stereo Vision. The main goal of the Lucas-Kanade algorithm is to estimate optical flow. The algorithm assumes that the optical flow is essentially constant within the immediate neighbourhood of each pixel. In practice, this means that by observing how the pixels in a small window change between frames, you can solve for the common motion of the whole window. The algorithm works best with images or videos where objects change their locations bit by bit.
For more details, consult Lucas-Kanade in a Nutshell by Raul Rojas.
Szeliski-Shum Algorithm (2000)
The authors of this algorithm, Richard Szeliski and Heung-Yeung Shum, came up with a revolutionary way to create full-view panoramic mosaics as early as 2000. In their first work on the subject, Creating Full View Panoramic Image Mosaics and Environment Maps, the scholars demonstrate the possibility of building seamless mosaics that demand no manual guidance. What is more, Szeliski and Shum claim that their algorithm is not compromised by the quality of the cameras used or by possible image distortions. On top of that, the Szeliski-Shum algorithm can be applied to geospatial services, namely to build a detailed environment map from the processed images.
For more practical tips, consult Image Alignment and Stitching: A Tutorial by Rick Szeliski.
All the computer vision algorithms discussed above differ in computing cost and running time. Their accuracy also varies depending on the type of data you need to process.
Computer Vision Algorithms: Key Elements
| # | Algorithm | Scholars Involved | Year of Publication | Objects of Analysis |
|---|-----------|-------------------|---------------------|---------------------|
| 1 | Efficient Region Tracking | Gregory D. Hager, Peter N. Belhumeur | 1998 | Images and videos |
| 2 | Efficient Second-Order Minimization | Selim Benhimane, Ezio Malis | 2004 | Images and videos |
| 3 | Eigenface | Matthew Turk, Alex Pentland | 1991 | Images (face recognition) |
| 4 | EigenTracking | Michael J. Black, Allan D. Jepson | 1998 | Video tracking (hand gestures) |
| 5 | Inverse Compositional Algorithm | Simon Baker, Iain Matthews | 2001 | Images |
| 6 | Lucas-Kanade | Bruce D. Lucas, Takeo Kanade | 1981 | Images |
| 7 | Szeliski-Shum | Richard Szeliski, Heung-Yeung Shum | 2000 | Images and environment maps |
As you can see from the table above, some computer vision algorithms have more practical value when applied to 2D objects, while others work better with 3D models or videos. So now it is your call to decide which computer vision algorithms to use for your current project!