Recently, the London police trialled a new facial recognition system, and it made worrying and embarrassing mistakes. At the Notting Hill Carnival, the system produced about 35 false matches between people in the crowd and known suspects, and one person was wrongly arrested.
Camera-based surveillance systems were supposed to deliver a safer society, but despite decades of development they often cannot cope with real-life situations. During the 2011 London riots, for example, facial recognition software proved of little use in identifying the 4,962 people who were eventually arrested. This failure means that visual surveillance still relies mainly on people sitting in dark rooms watching camera footage for hours, which is simply not enough to protect a city.
But recent research has shown that video analysis software can be greatly improved by borrowing advances from a completely different field: DNA sequence analysis. The software tools and techniques developed there could transform automated visual surveillance by treating video as an evolving scene, much as a genome is an evolving sequence.
Since the London police installed the first CCTV cameras in the city in 1960, as many as 6 million cameras have been deployed across the UK.
Frontline officers are now also equipped with body-worn cameras, which not only create more footage to analyse but also produce more complex data, because the camera is constantly moving.
However, automated visual monitoring is primarily limited to tasks in a relatively controlled environment.
Detecting trespassing on a particular property, counting people passing through a given doorway, or reading vehicle licence plates can all be done very accurately. But analysing a crowd, or recognising an individual on a public street, is not yet reliable, because outdoor scenes vary enormously. To improve automated video analytics, we need software that can handle this variability rather than treating it as a nuisance, and that is a fundamental change of approach.
One field with long experience of processing large amounts of highly variable data is genomics. Since the first human genome (a person's entire genetic data), some 3 billion base pairs of DNA, was sequenced in 2001, the output of genomic data has grown exponentially. The sheer volume of this data, and how much it varies, meant that substantial money and resources went into developing specialised software and computing infrastructure to handle it. Today, scientists can access genomic analysis services relatively easily, studying everything from how to fight disease and design personalised medicine to the mysteries of human history.
Genomic analysis involves studying how genes evolve by examining the mutations that occur between sequences. This is strikingly similar to the challenge of visual surveillance, which relies on interpreting how a scene evolves over time in order to detect and track moving pedestrians. We can apply genome-analysis techniques to video by processing the differences between the successive images that make up the video.
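To make the analogy concrete, here is a minimal sketch (not the actual research code) of how two video frames can be treated like two genomes: each frame is quantised into a sequence of symbols, and the positions where the sequences differ are the "mutations", i.e. the blocks where something moved. The block size, quantisation levels, and function names are all illustrative assumptions.

```python
import numpy as np

def frame_to_symbols(frame, block=8, levels=4):
    """Quantise an 8-bit grayscale frame into a 1-D sequence of symbols,
    one symbol per block, so the frame can be treated like a genome."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    means = blocks.mean(axis=(1, 3))                   # average intensity per block
    return (means * levels / 256).astype(int).ravel()  # symbols 0..levels-1

def mutations(seq_a, seq_b):
    """Return the positions where two symbol sequences differ,
    analogous to point mutations between two genomes."""
    return np.nonzero(seq_a != seq_b)[0]

# Two synthetic 64x64 frames: a bright "object" appears between them.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 40, size=(64, 64)).astype(np.uint8)
frame2 = frame1.copy()
frame2[8:24, 8:24] = 255   # the object occupies four 8x8 blocks in frame 2

changed = mutations(frame_to_symbols(frame1), frame_to_symbols(frame2))
print(f"{changed.size} of {frame_to_symbols(frame1).size} blocks mutated")
# -> 4 of 64 blocks mutated
```

In a real pipeline the symbol sequences would be far longer and the comparison would use proper sequence-alignment algorithms rather than a position-by-position check, but the principle, change as mutation, is the same.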
Early tests of this "video genomics" principle have demonstrated its potential.
My research team at Kingston University showed for the first time that video can be analysed in this way even when the camera itself is moving freely. Once camera motion is recognised as an abrupt, global change, it can be compensated for, so that the scene appears as if it had been shot by a fixed camera.
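The compensation idea can be illustrated with a toy example. This is not the Kingston team's algorithm, just a sketch under the simplifying assumption that camera motion is a pure pixel translation: we search for the global shift that best aligns two frames, then undo it, leaving a frame that looks as if the camera had not moved.

```python
import numpy as np

def estimate_global_shift(prev, curr, max_shift=5):
    """Brute-force search for the (dy, dx) translation that best aligns
    curr with prev -- a stand-in for real camera-motion estimation."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(curr, (-dy, -dx), axis=(0, 1))
            err = np.abs(shifted.astype(int) - prev.astype(int)).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic example: the whole scene shifts by (2, 3) pixels between
# frames, as if the camera panned (np.roll wraps, so alignment is exact).
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
curr = np.roll(prev, (2, 3), axis=(0, 1))   # simulated camera pan

dy, dx = estimate_global_shift(prev, curr)
stabilised = np.roll(curr, (-dy, -dx), axis=(0, 1))
print((dy, dx))                           # -> (2, 3)
print(np.array_equal(stabilised, prev))   # -> True
```

Real camera motion also involves rotation, zoom, and parallax, so production systems estimate a richer motion model, but the principle is the same: subtract the camera's motion first, then analyse what remains.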
Meanwhile, researchers at the University of Verona have demonstrated that image processing tasks can be encoded so that standard genomic tools can carry them out.
This is especially important because it greatly reduces the cost and time of software development.
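The source does not describe how the Verona encoding works, but the general idea of re-expressing image data in a format genomic software understands can be sketched as follows. Here pixel intensities are quantised to the four-letter DNA alphabet and written out as FASTA-style records; the encoding scheme and record naming are my illustrative assumptions, not the published method.

```python
import numpy as np

# Map quantised pixel intensities to the DNA alphabet so that a row of
# pixels reads like a sequence a standard genomic tool could ingest.
ALPHABET = np.array(list("ACGT"))

def row_to_dna(row):
    """Quantise an 8-bit pixel row into 4 levels and emit it as a DNA string."""
    return "".join(ALPHABET[np.asarray(row) // 64])

def image_to_fasta(image, name="frame"):
    """Serialise an image as FASTA-style records, one per pixel row
    (a hypothetical layout for illustration only)."""
    lines = []
    for i, row in enumerate(image):
        lines.append(f">{name}_row{i}")
        lines.append(row_to_dna(row))
    return "\n".join(lines)

img = np.array([[0, 70, 150, 250],
                [0, 70, 150, 250]], dtype=np.uint8)
print(image_to_fasta(img))
# -> >frame_row0
#    ACGT
#    >frame_row1
#    ACGT
```

Once video data is in such a format, mature genomic software for alignment and mutation detection can, in principle, be reused unchanged, which is where the savings in development cost and time come from.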
Combining their approach with ours could finally deliver the visual surveillance revolution promised so many years ago.
If the "video genomics" principle is adopted, far smarter cameras could appear within the next decade. In that case, we had better get used to cameras that see far more of what we do.