How Do Security AI Applications Learn?

AI is a massive buzzword in security, but how exactly does it work? Here’s an overview of learning techniques such as deep learning and reinforcement learning to help you better understand.

Much is made of artificial intelligence (AI) these days. Surveys conducted by consultants consistently suggest that a majority of executives believe that AI has a role in improving their business. From a day-to-day perspective, we see the progress, too.

AI applications are everywhere: the software is embedded in physical security cameras, drones and autonomous cars, and it powers machine-assisted medical diagnosis.

If you look under the hood of these AI applications, many are built on a single technique known as deep learning. While there has been significant progress with this technique, it raises two questions: How advanced is it, really? And by “putting all our eggs in one basket,” are we slowing progress in this very important scientific field?

How Deep Learning Works

To better understand the root of the concern, we need to delve a little into deep learning. Deep learning is a technique that mimics the neurons and synapses of the human brain. It differs from nature, however, in that it is a highly supervised form of learning.
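
As a point of reference, a single artificial “neuron” is simply a weighted sum of its inputs passed through a nonlinear activation, with the weights playing the role of synaptic strengths. The short Python sketch below, with made-up values purely for illustration, shows the idea:

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        """One artificial neuron: a weighted sum plus bias, squashed by a sigmoid."""
        z = np.dot(weights, inputs) + bias   # weights act like synaptic strengths
        return 1.0 / (1.0 + np.exp(-z))      # the neuron's activation, between 0 and 1

    # Example with illustrative values: three inputs feeding a single neuron
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.4, 0.6, -0.1])
    print(artificial_neuron(x, w, bias=0.1))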

To understand this better, here is a look at the steps involved in building a deep learning application that processes images.

  1. First, obtain a set of image files to train the model. This dataset consists of image files, each tagged with a label describing the image (e.g. cat, dog, etc.).
  2. Then, create a neural network model that very crudely mimics the functionality of the human eye, using different layers of neurons to perform convolutions, feature extraction from the image, subsampling and, finally, classification of the image.
  3. Next, train the model by running it through the selected training dataset. Training takes multiple passes, and a technique called “back-propagation” tunes the neural network to improve the accuracy of the image classification.
  4. Once the model reaches a satisfactory accuracy score, deploy it in the real world to detect the images it has been trained to “see” (a code sketch of this workflow follows the list).
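
To make these four steps concrete, here is a minimal sketch in Python using the PyTorch library. It is illustrative only: random tensors stand in for a real labeled image dataset, and the tiny two-class network and training settings are invented for demonstration.

    import torch
    import torch.nn as nn

    # Step 1: a labeled dataset. Random tensors stand in for real image files
    # (e.g. 28x28 grayscale pictures, each tagged cat=0 or dog=1).
    images = torch.randn(256, 1, 28, 28)
    labels = torch.randint(0, 2, (256,))

    # Step 2: a small network: convolution (feature extraction), subsampling
    # (pooling) and, finally, classification.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution / feature extraction
        nn.ReLU(),
        nn.MaxPool2d(2),                            # subsampling
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 2),                  # classification into two labels
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Step 3: multiple passes over the training set; back-propagation tunes
    # the network's weights to reduce classification error.
    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                             # back-propagation
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")

    # Step 4: given satisfactory accuracy, the trained model classifies new images.
    prediction = model(torch.randn(1, 1, 28, 28)).argmax(dim=1)

Note that every training image must carry a human-supplied label; this is what makes deep learning the highly supervised form of learning described above.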

Constraints of Image Processing Training

While the above technique works well for images like those on which the model has been trained, it does not work as well for images that were not represented in the original training dataset. For example, a model trained to detect puppies is unlikely to detect fully grown dogs.

If the model has been trained to detect dogs, it is unlikely to be able to classify a cat as an animal, despite the categorical similarities — namely four legs and a tail.

While this may be acceptable for specialized image processing applications, the approach lacks the intuition and experiential learning ability of even a 2-year-old.

Moreover, the volume of training required to create a more generic image processing application can be quite daunting. Clearly, humans have an edge when it comes to the learning process and overcoming inherent constraints of deep learning.

Alternative Methods Focus on Experience

AI researchers are experimenting with other methods to develop a more experiential approach to learning similar to the way a young human being might learn.

One such method is reinforcement learning. The key components of this form of learning are an agent, a set of actions the agent can take, a simulated environment reflective of the real world and a reward system for tasks accomplished.

The agent is given a task to complete in the environment. At any point, the agent can choose from a set of actions and, at the end of each session, the agent is given a reward if it completes the task successfully.

The agent learns through repeated trial and error, compiling a history of attempted actions and the rewards they earned. The agent’s objective is to maximize the cumulative reward.
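
To illustrate, below is a sketch of one of the simplest reinforcement learning algorithms, tabular Q-learning, in Python. The toy “corridor” environment and all parameter values are invented for demonstration; real systems use far richer environments and more sophisticated algorithms.

    import random

    # A toy environment: the agent starts in cell 0 of a 5-cell corridor and
    # earns a reward of 1.0 only when it reaches the goal cell (cell 4).
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]  # the agent's two possible actions: step left or step right

    def step(state, action):
        next_state = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward, next_state == GOAL

    # The Q-table is the agent's compiled history: its estimate of each action's value
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

    for episode in range(500):  # repeated trial and error
        state, done = 0, False
        while not done:
            # Sometimes explore a random action; otherwise exploit what was learned
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = Q[state].index(max(Q[state]))
            next_state, reward, done = step(state, ACTIONS[a])
            # Nudge the value estimate toward reward plus discounted future value
            Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
            state = next_state

    print(Q)  # after training, "step right" has the highest value in every cell

After enough episodes, the agent reliably walks straight to the goal: it has learned the task purely from rewards, with no labeled examples.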

This technique is often used in gaming and underpinned AlphaGo Zero, an AI program from Google’s DeepMind that plays the complex game of Go. In learning to play, AlphaGo Zero started from scratch: it began with random moves and improved solely by playing against itself, ultimately developing strategies strong enough to surpass the versions that had defeated the world’s best players.

To Err Is Human … and Foundational?

We often invoke the old saying, “To err is human,” to excuse our failings, but AI researchers are starting to discover that this all-too-human process of trial and error may be the very foundation of learning.

It is too early to tell whether reinforcement learning will find broad, practical real-world application, but the prognosis is positive. As for deep learning, the technique is the workhorse of today’s AI applications and is likely to remain so for the foreseeable future.

About the Author

Dave Bhattacharjee is Vice President, Data Analytics, for Stanley Security.
