Here’s the Expert Lowdown for Integrators Who Want to Raise Their AI IQ

John Carter, CTO of ReconaSense, addresses numerous hot topics related to artificial intelligence and its impact on the physical security industry.

John Carter is co-founder and CTO of ReconaSense, a provider of physical security intelligence and next-gen risk-adaptive access control. A former NASA engineer, Carter has spent his security industry tenure serving as a system architect for solutions spanning federal and commercial applications.

He spent five years serving on the SIA Board of Directors and chairing the Homeland Security Advisory Group, which under his leadership became an SIA Executive Committee.

In the following interview, Carter addresses several topics related to artificial intelligence (AI) and its impact on the physical security industry.

How do you define AI for the lay person, and what should physical security integrators understand about key differences between AI and machine learning?

Let’s start with the Artificial Neural Network that enables learning. A neural network, in many ways, is very similar to the human brain. As humans, we have thousands of input sensors. We can see, hear, smell, taste and touch, and we are sensitive to light, heat and other stimuli. As those sensors become active, our internal variables are triggered. We recognize patterns based on those sensors and previous learning.

For example, if we smell smoke, we look for fire. When we find the fire, if it is small, we put it out. If it is large, we call for help. That output example is also how a neural network functions.

Backpropagation is when the neural network re-evaluates its weighting based on the outcome of the previous decision. A more sophisticated example is how we evaluate other humans. Again, like a neural net, our brains collect and characterize data over periods of time. The longer we know someone, the more we understand about them.
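
To make the backpropagation idea concrete, here is a minimal sketch in Python (an illustration only, not anything from ReconaSense) of a tiny network that compares its output to the desired outcome and pushes the error back through its weights. The sensor inputs, targets and learning rate are all assumptions chosen for the example.

```python
import numpy as np

# Illustrative inputs: [smell_smoke, see_flame, heat_detected]
# Illustrative targets: how strongly we should "call for help" (0.0 to 1.0)
X = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([[0.0], [0.3], [0.9], [1.0]])

rng = np.random.default_rng(0)
w1 = rng.normal(0, 0.5, (3, 4))   # input -> hidden weights
w2 = rng.normal(0, 0.5, (4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for _ in range(5000):
    # Forward pass: turn sensor readings into a decision
    hidden = sigmoid(X @ w1)
    output = sigmoid(hidden @ w2)

    # Backpropagation: push the output error back through the
    # network and nudge the weights toward a better decision
    error = y - output
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ w2.T) * hidden * (1 - hidden)
    w2 += learning_rate * hidden.T @ d_output
    w1 += learning_rate * X.T @ d_hidden

print(output.round(2))  # learned "call for help" confidence per input pattern
```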

Over time, we can identify someone’s patterns of behavior. We may recognize when they are happy, sad, angry, sick or hungry. We may notice when behavior is abnormal or different from what we have previously observed as the norm. We may also develop trust or no-trust relationships with other people because of their behavior.

Another simple example that illustrates how a pattern gets recognized is baking peanut butter cookies. If I walk into the kitchen and see eggs on the counter, my first thought would be that I’m going to soon have something to eat and that it may be breakfast.

When flour gets put on the table, my pattern recognition moves toward something being baked. When I also see vanilla and then peanut butter, I have enough information that I might predict peanut butter cookies with a high level of confidence. Because I have seen cookies being made before, and I recognize each new ingredient, I can reach a more logical conclusion.

Let’s take this to a neural net in security. Consider the case of an insider threat. If the neural network notices an employee suddenly behaving differently, it will start to watch more closely for additional behaviors, or “ingredients.” If they begin working late when no one is around and accessing locations they seldom access, the system pays even closer attention, and so on. Ultimately, a neural net will recognize a pattern forming that warns of a potential threat.
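
As a rough illustration of that “ingredient” accumulation (a hypothetical sketch, not any vendor’s actual scoring model), each newly observed behavior could nudge a confidence score toward an alert threshold. The signal names, weights and threshold below are invented for the example.

```python
# Hypothetical weights for observed "ingredients" of an insider threat.
SIGNAL_WEIGHTS = {
    "after_hours_entry": 0.25,
    "unusual_area_access": 0.35,
    "badge_use_spike": 0.15,
    "large_data_download": 0.40,
}
ALERT_THRESHOLD = 0.7

def threat_score(observed_signals):
    """Accumulate evidence from observed behaviors, capped at 1.0."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)
    return min(score, 1.0)

observed = ["after_hours_entry", "unusual_area_access"]
score = threat_score(observed)
if score >= ALERT_THRESHOLD:
    print(f"Potential insider threat (score={score:.2f})")
else:
    print(f"Keep watching (score={score:.2f})")

# Adding one more ingredient pushes the pattern over the threshold.
observed.append("large_data_download")
print(f"Updated score: {threat_score(observed):.2f}")
```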

A machine can aggregate, characterize and assess much more data from many more sources than humans can. It never gets bored or distracted, and it has no bias. It’s important to keep a human in the loop, however; with AI, the machine should assist rather than replace the human.

What is an example of how AI can be a critical success factor in the convergence of physical and cybersecurity?

This is an excellent question. If you compared the marketing materials from most cybersecurity providers to those of the physical security providers who utilize AI, you might think both are doing exactly the same thing. In fact, there are people from both disciplines who would argue they are doing the same things. The experience beyond data aggregation and pattern recognition is where the difference lies.

There are several advantages that can be gained using AI in a converged environment. First, consider the differences between cyber and physical security expertise.

In physical security, there is a broad base of knowledge and experience that provides the insight needed to create an application that protects life safety, improves security and protects assets. There are many physical systems, sensors, data sources and devices that must be integrated to produce a working solution. Aside from the control systems involved, you must be knowledgeable about the regulations, policies, procedures and code compliance issues surrounding what must and must not occur in regard to security.

A simple example is the ability to provide no-prior-knowledge egress that meets fire code across a wide range of door types, lock mechanisms and local alert notification devices. You must be fully up to speed on a multitude of credential technologies, readers and devices, and be able to support current federal regulations and specifications.

You must understand the value of and approach to integrating access, video, IDS, building automation, notification and IoT technology. While some may refer to this as a “simple matter of software,” knowing when and why to integrate and associate is often more important than knowing how to integrate.

This is also true when it comes to protecting information and connecting between locations. There is a myriad of networking, communications, antivirus, firewall and activity characterization solutions available and installed by IT professionals that one must be experienced with to provide a proper security solution. There are data compliance, data protection, confidentiality and reporting issues that sound similar but are, in fact, different from those associated with physical security.

This understanding is key to setting the stage for these technologies to converge to solve a wider range of security issues. Intelligent solutions that are AI-enabled can inform or query each other as “interesting” events are identified on either side of the coin.

When this approach is established, either system can alert the other and work in conjunction with it to identify threats based on abnormal, trained or pattern-observed behavior. This collaboration enables both systems to modify the security profile for access to a facility or data.
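
One way to picture that collaboration, purely as an assumed sketch rather than a description of any specific product, is two event handlers feeding a shared risk profile that either side can elevate. The event types, user names and risk levels here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class RiskProfile:
    """Shared security posture that both systems can adjust."""
    level: int = 1  # 1 = normal, 2 = elevated, 3 = restricted
    reasons: list = field(default_factory=list)

    def elevate(self, reason: str):
        self.level = min(self.level + 1, 3)
        self.reasons.append(reason)

def on_cyber_event(profile: RiskProfile, event: dict):
    # A suspicious login on the network side tightens physical access.
    if event.get("type") == "anomalous_login":
        profile.elevate(f"cyber: anomalous login for {event['user']}")

def on_physical_event(profile: RiskProfile, event: dict):
    # An odd badge pattern tightens data access for the same identity.
    if event.get("type") == "abnormal_badge_use":
        profile.elevate(f"physical: abnormal badge use by {event['user']}")

profile = RiskProfile()
on_cyber_event(profile, {"type": "anomalous_login", "user": "jdoe"})
on_physical_event(profile, {"type": "abnormal_badge_use", "user": "jdoe"})
print(profile.level, profile.reasons)  # both domains contributed to the posture
```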

In your estimation, how pervasive is the deception by some tech organizations that proclaim leveraging AI while not being clear about their products’ limits?

I am often disappointed when I see organizations defining a solution they provide as AI. Because the term is so popular and important, we see many organizations, whether by design or through a lack of understanding, making false claims.

I see it most often with video solutions and basic rules-based access control engines. When alerts occur only because a defined policy has been breached, no “thinking” is involved.
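
The distinction can be seen in a small, assumed example: a fixed rule fires only when a predefined policy is breached, while a learned baseline flags what is unusual for that particular person. The door name, hours and threshold are invented for illustration.

```python
import statistics

# A fixed rule: fires only when a predefined policy is breached.
def rule_based_alert(door, hour):
    return door == "server_room" and not (8 <= hour <= 18)

# A learned baseline: flags what is unusual compared to this person's history.
def anomaly_alert(entry_hours_history, hour, z_threshold=2.5):
    mean = statistics.mean(entry_hours_history)
    stdev = statistics.pstdev(entry_hours_history) or 1.0
    return abs(hour - mean) / stdev > z_threshold

print(rule_based_alert("server_room", 22))      # True: policy breached
print(anomaly_alert([9, 9, 10, 8, 9, 10], 22))  # True: unusual for this user
```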

What are the negative implications of this type of deception for security integrators trying to fulfill their end customer’s security needs and goals?

Customers might read an advertisement, data sheet or website and come away thinking they have found the Holy Grail, and then insist on it from their integrator. The integrator gets the blame when the system doesn’t live up to the hype. Why? Because the vendor that overpromised its capabilities can point the finger at the integrator while it tries to catch up to the claims it made.

Unfortunately, many of those companies are quite large and are simply working on an old technology base that stops them from delivering the newer intelligent solutions. They make claims, counting on the fact that the security professional takes the old approach: “Nobody ever got fired for buying X, Y, Z.”

It is bad for the user and the industry. Unfortunately, a negative experience can leave people discouraged and less likely to try the new AI technology that could truly change their whole approach to risk and security management.

Read on as Carter provides advice to security integrators on how to ferret out vendors that make dubious claims about AI-enabled solutions, how AI can be leveraged to help enterprises reduce false alarms, and much more.


About the Author


Although Bosch’s name is quite familiar to those in the security industry, his previous experience has been in daily newspaper journalism. Prior to joining SECURITY SALES & INTEGRATION in 2006, he spent 15 years with the Los Angeles Times, where he performed a wide assortment of editorial responsibilities, including feature and metro department assignments as well as content producing for latimes.com. Bosch is a graduate of California State University, Fresno with a degree in Mass Communication & Journalism. In 2007, he successfully completed the National Burglar and Fire Alarm Association’s National Training School coursework to become a Certified Level I Alarm Technician.
