What is Deep Learning? A Complete Guide to AI in Security

AI, machine learning and deep learning have the potential to propel security and life-safety solutions by driving increasingly sophisticated analytics.

The security industry has experienced its share of well-documented, rapid change in recent years. The question is how soon the next wave of potential disruption will really take hold.

In this case, consider the buzzwords that are heard more and more often at industry trade shows and in market research reports but perhaps remain foreign to many as part of the everyday lexicon: artificial intelligence (AI), machine learning and deep learning.

There is much work to be done as these still-futuristic-sounding technologies make their way into the security industry. Since many sensor manufacturers lack expertise in these areas, go-to-market strategies have mostly consisted of partnering with video analytics solution providers that are taking their first steps into AI.

For the physical/electronic security industry, AI essentially refers to systems that display intelligent behavior by analyzing their environment and performing various tasks with some degree of autonomy to achieve specific goals.

That concept seems relatively straightforward to grasp, but the deep learning and machine learning aspects driving AI may need more explanation. Consider this statement from one solution provider executive: “Soon, it’s all going to be about deep learning; the cameras will be able to perform almost all the sensor functions.”

One can make a case that this statement is inaccurate on four counts, but it also delivers a point. It’s a good place to begin analyzing the technology behind these video analytics, to go a bit deeper into what goes into deep learning, and to look at how smart cities in particular, along with other security applications, will benefit from all this intelligence.

Clarifying IP Camera Insights

First of all, the simpler, single-mode video surveillance analytics in today’s IP cameras are well matched to what high-end camera pricing will bear in the 2018-2019 market.

However, with AI chipset pricing decreasing, one solution provider has already developed the multilevel architecture needed to perform more complex AI processes at the edge device and exchange outcomes and scoring with server-side processing built on deep neural networks (more on this in a bit). This is expected to be released in late 2019.
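
To make the multilevel idea concrete, here is a minimal, hypothetical sketch, not the unnamed provider’s actual design, of an edge device forwarding only its outcome and confidence score to a server, where a deeper model re-scores the event. The class, function names, camera ID and numbers are all invented for illustration.

```python
# Hypothetical sketch of edge-to-server outcome/score exchange.
# Names, fields and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class EdgeResult:
    camera_id: str
    label: str      # e.g., "person" or "vehicle" from a small on-camera model
    score: float    # edge model confidence, 0..1

def edge_inference() -> EdgeResult:
    # Placeholder for a lightweight analytic running on the camera itself.
    return EdgeResult(camera_id="cam-01", label="person", score=0.72)

def server_rescore(result: EdgeResult) -> float:
    # Placeholder for server-side deep-neural-network processing that would
    # combine the edge score with richer context (history, other sensors).
    context_boost = 0.15 if result.label == "person" else 0.0
    return min(1.0, result.score + context_boost)

edge_out = edge_inference()
print(f"{edge_out.camera_id}: {edge_out.label} final score {server_rescore(edge_out):.2f}")
```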

Second, the sensors may not be, and in most cases should not be, contained in a single edge device like an IP camera; otherwise, the server never gets to process varied data from around a given location. Instead, the sensors should resemble the Internet of Things (IoT), spread around like a sensor net and often using battery power or PoE.

Third, the statement implies that video is the primary data set processed in AI. This could not be further from the truth; in fact, it was audio processing and cognitive engines with deep learning that helped win the last U.S. presidential election by identifying candidates’ keywords and relating them to many factors like geolocation, “swing states” and population profiles.

The fourth reason is that most of today’s server-based solutions still use earlier generations of neural networks with only one to three pattern-recognition layers, where simpler single-mode layers locate faces or distant objects crossing a boundary. That is not a deep neural network.

For more backdrop to that IP camera deep learning misconception, here’s a brief but important look at neural networks.

The first perceptron models, or representations of a single neuron, were shallow, composed of one input and one output layer, and at most one hidden layer in between.

A network with more than three layers in total (counting the input and output layers) qualifies as deep learning. Perceptrons are the most basic form of a neural network.

Strictly speaking, multilayer neural networks do not use the perceptron learning procedure and should never have been called multilayer perceptrons. The perceptron convergence procedure works by ensuring that every time the weights change, they move closer to every feasible set of weights.

That type of guarantee, which holds for the perceptron, the simplest neural network, cannot be extended to more complex networks, in which the average of two good solutions may be a bad solution.
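
For readers who want to see the difference in code, here is a minimal NumPy sketch, using random illustrative weights rather than anything trained, that contrasts a single perceptron (one weighted sum and a hard threshold) with a forward pass through several stacked layers, the kind of structure the word “deep” refers to.

```python
# A minimal sketch contrasting a single perceptron with a "deep" stack of
# layers. Weights here are random and purely illustrative.
import numpy as np

def perceptron(x, w, b):
    # One neuron: weighted sum followed by a hard threshold.
    return 1 if np.dot(w, x) + b > 0 else 0

def deep_forward(x, layers):
    # A deep network: several weight matrices applied in sequence,
    # each followed by a nonlinearity (ReLU here).
    a = x
    for W, b in layers:
        a = np.maximum(0, W @ a + b)
    return a

# Toy example: a 4-dimensional input passed through three stacked layers.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
print(perceptron(x, rng.normal(size=4), 0.0))   # single yes/no decision
print(deep_forward(x, layers))                  # richer multilayer output
```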

How Do AI Solutions Learn?

So clearly there’s a learning curve to learning about AI. What security integrators should know is that AI, and particularly machine learning technologies, can learn by ingesting unrestricted data from cyber intrusion detection systems, mobile phones, navigation systems, LIDAR (light detection and ranging) sensors, radar sensors, IP video cameras and many other sensors to make predictions and build “playbooks” of learned intelligence.

AI and machine learning vary greatly, but most of the successes in recent years have been in one category: supervised learning systems, in which the machine is given lots of examples of the correct answer to a particular problem.

This process almost always involves mapping from a set of inputs, X, to a set of outputs, Y. For instance, the inputs might be pictures of various animals, and the correct outputs might be labels for those animals: dog, cat, horse. That should sound more familiar to security providers and their customers.

The inputs could also be waveforms from a sound recording and the outputs could be words: “yes,” “no,” “hello,” “goodbye.” The inputs from a cyber intrusion detection system will lead to outputs determining whether the actions represent an intrusion profile we’ve recognized before.
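
As a minimal sketch of that supervised mapping from inputs X to outputs Y, the toy example below trains a small neural network on made-up feature vectors and labels; the “intrusion features” are placeholders, not output from any real intrusion detection system.

```python
# A minimal supervised-learning sketch: the model is shown many labeled
# examples (X -> Y) and learns the mapping. The "intrusion features" are
# made-up placeholders, not real IDS output.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # e.g., summarized sensor/IDS features
Y = (X[:, 0] + X[:, 3] > 0).astype(int)  # 1 = "matches a known intrusion profile"

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, Y)                          # learn from labeled examples
print(model.predict(X[:5]))              # predicted labels for new-looking inputs
```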

Back to neural networks: they are a set of algorithms, modeled loosely on the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input.

The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated.

Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on.
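
A brief sketch of those two ideas, translating raw data into numeric vectors and then either grouping unlabeled examples by similarity or classifying labeled ones, might look like the following. The event strings and labels are invented for illustration.

```python
# Sketch of the "everything becomes a vector" idea: raw text is translated
# into numeric vectors, then either clustered (no labels) or classified
# (labels available). Strings and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

texts = ["door forced open", "badge read ok", "glass break detected",
         "badge read ok again", "window sensor tamper", "normal badge entry"]
labels = [1, 0, 1, 0, 1, 0]              # 1 = alarm-worthy, 0 = routine (labeled case)

X = TfidfVectorizer().fit_transform(texts)   # real-world data -> numeric vectors

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # unlabeled: group by similarity
clf = LogisticRegression().fit(X, labels)                                   # labeled: learn to classify
print(clusters, clf.predict(X[:2]))
```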

A familiar example is how deep learning can sort spam from significant mail in an email filter. Some of your financial institution customers are using deep learning for fraud detection; however, most still rely on a simple AI or perceptron layer, which becomes obvious when the wrong decision is made and it really is you trying to make a purchase with your credit card.

Real-World Applications

Here’s a glance at some of the ways customers are using AI and deep learning:

■ Cybersecurity intrusion detection and automated response

■ Pedestrian safety

■ Traffic flow and traffic incident management

■ Trash collection

■ Smart lighting maintenance

■ False alarm reduction for facility perimeter monitoring

■ Law enforcement robots with AI natural language support, controlled remotely

■ Autonomous security vehicles with a built-in aerial drone

Smart Cities Applications Go Deep

The closest point of intersection with the security industry, and one where deep neural network costs may already have been accommodated, is smart cities.

Places like Las Vegas and Dubai are already using deep learning sporadically to know when and where to pick up refuse, when to service street lights, or simply to control intersection signals when complex situations arise, like a pedestrian stepping into a crosswalk after a traffic signal has changed.

Extremely few cities take the extra steps to ensure this late-crossing pedestrian’s safety. Those cities will automatically stop traffic in both directions, slow incoming traffic from farther intersections to provide more stable traffic flow, and use advanced sensors like LIDAR, not IP cameras, to detect vehicle presence and recognize objects in intersections.

LIDAR can provide object detection and recognition at a fraction of the cost of an IP camera running “simple” boundary analytics. With LIDAR chipsets now widely used in the autonomous vehicle industry, prices have decreased, allowing city agencies to deploy many of these cost-effective sensors at intersections and gather data for their deep neural networks.
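
As a hypothetical sketch of that kind of intersection logic, the snippet below checks whether a LIDAR scan places enough points inside a defined crosswalk zone to call an object present. The zone coordinates, point threshold and simulated scan are invented for illustration and are not any city’s deployed algorithm.

```python
# Hypothetical crosswalk-occupancy check on a 2-D LIDAR scan.
# Zone bounds, threshold and the simulated scan are made up for illustration.
import numpy as np

CROSSWALK = {"x_min": 0.0, "x_max": 4.0, "y_min": -1.5, "y_max": 1.5}  # meters
MIN_POINTS = 25   # rough "object present" threshold

def object_in_crosswalk(points: np.ndarray) -> bool:
    in_zone = ((points[:, 0] >= CROSSWALK["x_min"]) & (points[:, 0] <= CROSSWALK["x_max"]) &
               (points[:, 1] >= CROSSWALK["y_min"]) & (points[:, 1] <= CROSSWALK["y_max"]))
    return int(in_zone.sum()) >= MIN_POINTS

# Simulated scan: a pedestrian-sized cluster inside the zone plus background returns.
rng = np.random.default_rng(2)
scan = np.vstack([rng.normal(loc=[2.0, 0.0], scale=0.2, size=(40, 2)),
                  rng.uniform(low=-20, high=20, size=(500, 2))])
if object_in_crosswalk(scan):
    print("Hold signal: object detected in crosswalk")
```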

Looking further out for real-life examples of what will be deployed globally, yet is happening first in pockets of advanced smart city tech like Dubai or Las Vegas’ Innovation District, we see numerous use cases for deep neural networks.

The Government of Dubai Media Office outlined its roadmap for implementing AI and deep learning in April, and it’s worth delving into as an example of why municipal customers may turn to security systems integrators in the future:

Q3 2018: The city of Dubai is piloting “smart” digital license plates embedded with collision-detecting sensors, GPS trackers and digital screens showing news-ticker-style updates on weather and road conditions. Through AI and deep learning, the smart plates will automate some safety processes: if a collision is detected, for example, the plate will automatically contact police and ambulance services.
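
A hedged sketch of that automated safety process might look like the event handler below: if the plate’s sensors report a collision, an alert with its GPS fix goes out automatically. The class, functions and plate values are hypothetical, not Dubai’s actual implementation.

```python
# Hypothetical smart-plate collision handler; all names and values invented.
from dataclasses import dataclass

@dataclass
class PlateEvent:
    plate_id: str
    collision: bool
    lat: float
    lon: float

def dispatch_emergency(event: PlateEvent) -> None:
    # In a real deployment this would call police/ambulance dispatch systems.
    print(f"ALERT: collision reported by plate {event.plate_id} at "
          f"({event.lat:.5f}, {event.lon:.5f}); notifying police and ambulance")

def handle_event(event: PlateEvent) -> None:
    if event.collision:
        dispatch_emergency(event)

handle_event(PlateEvent(plate_id="DXB-12345", collision=True, lat=25.20485, lon=55.27078))
```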

Dubai’s long-term technology roadmap includes using smart license plates with deep learning for real-time traffic intelligence and AI to reduce manpower needs on federal roads projects.

In 2019: Mini autonomous police cars paired with companion drones and facial recognition technology will begin patrolling the streets of Dubai to help the city identify and track down criminal suspects. A robot with AI natural language support will support these autonomous law enforcement vehicles, controlled remotely from behind a computer dashboard. The security vehicle comes with a built-in aerial drone that can surveil areas and people the robot can’t reach.

Named the O-R3 and equipped with thermal imaging and license plate readers, the patrol car can navigate on its own. The manufacturer, Singapore-based OTSAW Digital, claims the car and drone duo as the first of its kind.

“The application of artificial intelligence will have a positive impact on safety in this sector and will reduce cost in a very drastic manner, with an increase in efficiency and a reduction in labor used by 80%,” according to the Minister of State for Artificial Intelligence.

By 2021: The Minister of Infrastructure Development, along with the Minister of State for Artificial Intelligence, explored the ministry’s application of AI technology in federal roads projects during a field tour of the Kalba Ring Road development project, which is expected to contribute to an estimated reduction of 54% in project duration, 37% in fuel consumption, 80% in manpower dependence and 40% in equipment and manpower needed.

By 2030: Dubai plans for AI/deep learning robots to make up 25% of its police force.

By 2031: The UAE is adopting artificial intelligence in a fast yet strategic manner in the active sectors where it will have the most impact. “These effects are all aimed towards the benefit of the country and the happiness of its people. The Ministry of Infrastructure Development’s activities are in line with the UAE Artificial Intelligence Strategy 2031, which aims to have the UAE be a leading country in the adoption of artificial intelligence. This technology will have great economic return, such as a growth rate of 26% and economic savings of 335 billion dirhams,” the Minister of State for Artificial Intelligence says.

Best Practices Can Hasten Adoption

Industry stakeholders are just starting to realize the many ways AI, machine learning and deep learning may impact the physical and cyber security markets, and what types of customers may be best suited for such solutions.

Systems integrators will play an important role if they begin adopting strict, standards-based information and communications technology (ICT) and IT infrastructure deployment practices.

For both Dubai and Las Vegas, the designers, developers and project management have come from the cities’ respective ICT divisions. In North America, security integrators use a wide range of Ethernet cable types, often not specifically chosen for their scalability to support new IEEE and BICSI standards like next-generation 802.3bt-compliant infrastructure.

These integrators, moreover, are still specifying non-standard Ethernet media extension devices to carry video surveillance camera signals over distances longer than the 802.3bt standard allows.

Realize, however, that smart cities and transportation verticals will be some of the larger AI use cases, and these projects generally follow strict ICT standards and employ contractors with experience in both security/surveillance and infrastructure.

Unless security integrators begin adopting ICT best practices as their primary business, their role in AI/machine learning projects will diminish before it ever really gets going in full force.


Steve Surfaro is Chairman, Public Safety Working Group, for the Security Industry Association (SIA).
