To trace the key dates in computer vision history, one should be prepared to learn a lot about the broader boom of technologies in the 20th century. Computer vision was once considered a futuristic dream. Then it was labeled a slowly emerging technology. Today, computer vision is an established interdisciplinary field.
The history of computer vision is a telling example of how one science can impact other fields within a short period of time. Computer vision has developed hand in hand with image processing, photogrammetry, and computer graphics. On top of that, its history demonstrates how spheres only remotely related to computers get influenced too. For instance, modern computer vision algorithms help to facilitate agriculture, retail shopping, postal services, and more.
In this post, we have decided to shed light on how computer vision evolved as a science. The main question we raise is: who contributed to the growth of computer vision technology, and how? To answer it, we have prepared a short chronological overview of milestones in computer vision history. Get ready to discover why the first attempts to teach computers to “see” were doomed. We will also examine the most influential works in computer vision published in the 20th century. Feel free to join us whether you are a professional computer vision developer or just an enthusiastic admirer of computers!
First Success (the 1950s – 1960s)
Constructing computer vision systems seemed an unrealistic task for scientists in the middle of the 20th century. Neither engineers nor data analysts were equipped to extract information from images, let alone videos. As a result, the very idea of visual information being transformed into editable, systematized data seemed far-fetched. In practice, it meant that all image recognition services, including the analysis of space imagery and X-rays, demanded manual tagging.
(Source – Brain Wars: Minsky vs. Rosenblatt)
A group of scholars led by Allen Newell was also interested in investigating the connection between electronic devices and data analysis. The group included Herbert Simon, John McCarthy, Marvin Minsky, and Arthur Samuel. Together, these scientists created some of the first AI programs, such as the Logic Theorist and the General Problem Solver. These programs were used to teach computers to play checkers, process English, and solve simple mathematical tasks. Inspired by the results, the scholars grew over-enthusiastic about the potential of computers. The famous Summer Vision Project is a telling example of how unrealistic most expectations about computer vision were back in the 1960s.
(Source – Vision memo)
Another important name in the history of computer vision is Larry Roberts. In his 1963 thesis, “Machine Perception of Three-Dimensional Solids,” Roberts outlined basic ideas about how one could extract 3D information from 2D imagery. Known for his “blocks world” approach, Roberts laid the foundations for further research on computer vision technology.
The potential of computers and artificial intelligence (AI) mesmerized the scientific community in the middle of the 20th century. As a result, more computer labs got funded, mostly by the Department of Defense of the USA. However, the goals that scientists set during the 1960s were too ambitious: they included a fully developed computer vision technology and professional, mistake-free machine translation. It is no wonder that quite soon, by the mid-1970s, the boom caused by AI research in general and computer vision studies in particular started to fade. The growing criticism of AI and computer vision technology resulted in the first “AI winter.”
Computer Vision History: AI Winter (1970s)
AI winter is a term used to describe a time when AI research was criticized and underfunded. There were several reasons why computer-related studies underwent a skeptical analysis in the 1970s, namely:
- Too high and often unrealistic expectations about the potential of AI and computer vision. Some of the goals set at the time are still to be reached. Under mounting military, commercial, and philosophical pressure, research in AI and computer vision inevitably slowed down.
- The lack of scientific support on an international level. Although computer vision laboratories started to appear as early as the 1960s, most computer vision scholars worked as individuals or in isolated groups.
- Insufficient computing capacity. Since any visual data analysis implies processing huge amounts of images, characters, and videos, it can be performed only on advanced devices with relatively high computing power.
All the above-mentioned factors combined led to a decrease in the quantity and quality of computer vision studies.
Despite the sharp cutback in research, the 1970s also marked the first recognition of computer vision as a commercial field. In 1974, Kurzweil Computer Products offered its first optical character recognition (OCR) program. This program was aimed at overcoming the handicap of blindness by giving blind people free access to media printed in any font. Unsurprisingly, intelligent character recognition quickly won the attention of the public sector.
Computer Vision History: Vision by Marr (1980s)
One of the main advocates of computer vision was David Marr. Thanks to his book “Vision: A Computational Investigation into the Human Representation and Processing of Visual Information,” published posthumously in 1982, computer vision was brought to a whole new level.
(Source – Vision: Marr Lecture Index)
What Marr offered was a simple and elegant way to build 3D models from 2D images. During the first stage of the analysis, Marr applied edge detection and image segmentation techniques to the initial image to create a 2D representation, the so-called “primal sketch.” This primal sketch was then further processed with the help of binocular stereo to obtain a 2.5D sketch, which encodes depth and surface orientation relative to the viewer. The final stage of Marr’s analysis was about developing full 3D models out of the 2.5D sketches.
Although modern computer vision scientists consider Marr’s approach to be too complicated and not goal-driven, his work remains one of the biggest breakthroughs in the history of computer vision.
Mathematical Tools (the 1990s – until today)
With computers getting cheaper and quicker and the amount of data constantly increasing, computer vision history took a turn towards mathematical algorithms. Most modern studies on computer vision apply linear algebra, projective and differential geometry, as well as statistics to solve numerous tasks connected with image and video recognition and 3D modeling.
A telling example of how computer vision is influenced by mathematics these days is the eigenface approach. Using the covariance matrix and building on the findings of L. Sirovich and M. Kirby (1987), Matthew Turk and Alex Pentland created the eigenface automated face recognition system. Based on probability theory and statistics, this system can be applied not only to identify existing faces but also to generate new ones.
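The core of the eigenface idea can be sketched in a few lines of numpy. This is a minimal illustration, not Turk and Pentland’s original code: random arrays stand in for real face photos, and the image size and number of components are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a face dataset: 20 "images" of 8x8 pixels,
# each flattened into a 64-element row vector.
faces = rng.normal(size=(20, 64))

# Center the data: subtract the mean face from every image.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The eigenfaces are the eigenvectors of the covariance matrix of the
# centered images; the SVD yields them directly as right singular vectors.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]                 # keep the 5 strongest components

# Any face is now described by 5 coefficients instead of 64 pixels.
weights = centered @ eigenfaces.T   # project each face onto the eigenfaces
reconstruction = mean_face + weights @ eigenfaces

print(weights.shape)                # (20, 5) — the compact representation
```

Because the eigenfaces span the directions of greatest variation in the dataset, new face-like images can be generated simply by feeding fresh coefficient vectors through the same reconstruction step.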
The Lucas-Kanade method is one more example of how computer vision history is connected with mathematics. Developed by Bruce D. Lucas and Takeo Kanade, this method assumes that neighboring pixels move together and that their brightness stays constant between frames. This is why it makes sense to estimate motion in an image by solving the optical flow equation over small pixel neighborhoods.
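The assumption above turns motion estimation into plain least squares. Here is a toy numpy sketch for a single image patch; the function name and the two synthetic frames are my own illustration, not code from the original paper.

```python
import numpy as np

def lucas_kanade_window(frame1, frame2):
    """Estimate one (vx, vy) flow vector for a small image patch.

    Assumes brightness constancy and that all pixels in the patch share
    the same motion, so the optical-flow equation Ix*vx + Iy*vy + It = 0
    is solved by least squares over every pixel in the patch.
    """
    # Spatial gradients (central differences) and the temporal gradient.
    Iy, Ix = np.gradient(frame1.astype(float))
    It = frame2.astype(float) - frame1.astype(float)

    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # one equation per pixel
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)       # solve A v ≈ b
    return v                                        # [vx, vy]

# Toy example: a bright square shifted one pixel to the right.
f1 = np.zeros((8, 8)); f1[3:5, 2:4] = 1.0
f2 = np.zeros((8, 8)); f2[3:5, 3:5] = 1.0
print(lucas_kanade_window(f1, f2))  # ≈ [1.0, 0.0]: one pixel rightward
```

In practice the same least-squares solve is repeated over a grid of small windows, producing a dense field of motion vectors across the frame.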
John F. Canny is another scholar who made a solid contribution to the history of computer vision. He developed the Canny edge detector, an instrument that reliably identifies the vast majority of edges in an image.
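The full Canny pipeline chains Gaussian smoothing, gradient estimation, non-maximum suppression, and hysteresis thresholding. Below is a stripped-down numpy sketch of its gradient-and-threshold core only; the helper names and the toy image are illustrative, and a production system would typically rely on a library implementation such as OpenCV’s `cv2.Canny`.

```python
import numpy as np

def sobel_gradients(img):
    """Estimate horizontal/vertical gradients with 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return gx, gy

def simple_edges(img, threshold=1.0):
    """Edge map from gradient magnitude and a single threshold.

    The real Canny detector adds non-maximum suppression (to thin edges
    to one pixel) and hysteresis (two thresholds) on top of this step.
    """
    gx, gy = sobel_gradients(img)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# A vertical step edge: left half dark, right half bright.
img = np.zeros((6, 6)); img[:, 3:] = 1.0
edges = simple_edges(img)
print(edges.astype(int))  # 1s along the columns flanking the step
```

Canny’s insight was precisely that this naive thresholding produces thick, broken edges, which his suppression and hysteresis stages then clean up.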
(Source – Canny edge detector demos)
As computer vision technology becomes comparatively inexpensive, more enterprises have implemented it in their manufacturing cycles. Computers are trained to scan products to check their quality, sort mail, and tag images and videos. Almost any phase of manufacturing can be tracked with the help of industrial machine vision technology.
However, as computer vision history demonstrates, human involvement is still needed to train computers for image tagging and video tracking. To tag, sort, and identify both still images and moving objects, any computer vision system has to be given enough examples that have already been marked and categorized. In other words, the future when computers take over the world is a bit more distant than it may seem to an average computer enthusiast.
To Wrap It Up
Computer vision history started as a summer project related to AI and has turned into a fully established field of science within the last five decades. It is hard to say what lies in the cards for this science. Only one thing is certain: the history of computer vision shows that even the most ambitious plans can be made a practical reality.