The rise of artificial intelligence (AI) within security is met with both hope for its capabilities and caution about the unknown, as many, both within electronic access control and among the mainstream public, are not ready to completely hand over the keys to the kingdom to this exciting new technology revolution.
Currently, just 22% of end users say they are using AI to optimize the accuracy of threat detection and prediction in their security programs, and among those who are, 44% use it for data analytics, according to the latest findings from the HID 2024 State of the Security Industry Report. The report gathered responses from 2,600 partners, end users, and security and IT personnel worldwide, across a range of job titles and organization sizes representing more than 11 industries.
In addition to analytics, 11% said they are using AI-enabled RFID devices, 15% are using AI-enabled biometrics and 18% have AI supporting their physical security solutions. Looking to the future, 35% of end users surveyed report they will be testing or implementing some AI capability in the next three to five years.
“AI is certainly a hot topic, and it is good to see the enthusiasm and the natural questions about how it can be applied [within security] and what we should be doing,” says Rob Rowe, Ph.D., who is VP of the AI and Machine Learning (ML) Lab at HID, which has been in existence since 2018. “Looking at that 35%, though, I have to wonder why that number isn’t the remainder of the 22% – so why isn’t it 78% … those other folks?”
As Rowe points out, there are numerous use cases for security that are driving continued interest in AI and how it can be leveraged for more than just efficiencies in human tasks, like identifying anomalies or poring over data. In addition to using AI for data analytics to rapidly unearth trends, patterns and anomalies not visible to the human eye, Rowe points out that AI-powered analytics can identify low- and high-risk scenarios and can help to automate risk-based decision-making.
“Looking at it abstractly, security products, as you know, are always a tradeoff between security and convenience,” he explains. “And you can define curves that relate to the more security, the more inconvenient it becomes for the authorized user, and the more convenient for the authorized user, the less secure it is against the bad guys. What AI does is, rather than riding along that curve, or different points on that curve, it allows you to shift that curve; it allows you to get greater security and greater convenience.”
He continues, “AI to me is an enormous motivation to move the curve instead of just choosing optimum points. And then, of course, there are cost efficiencies, reductions of latency … all those sorts of things that are general across every industry, not just the security industry.”
The following interview looks at what Rowe and his team are researching at the lab, the role of AI in security within the next 5-10 years, as well as his concerns about negative stories surrounding AI – such as recent instances where inaccuracies and false information were provided by Google Gemini, for example – and possible legislation that could arise and hinder progress and research within security and elsewhere as a result of negative press or outcomes.
Locksmith Ledger: Please talk about your role at the AI/ML Lab.
Rowe: The role for me and my team is really doing advanced R&D. Most of the projects are on a three- to five-year time horizon, so we're not driven today by the market trends of the moment. We're driven more by looking at that time horizon and saying, what do we see five years from now and what do we need to do to get in a place to meet that directive?
My team works out of the CTO's office (Ramesh Songukrishnasamy is HID's CTO and SVP of Engineering), so we get involved in different business areas and in different internal functions. We get involved with activities going on at the parent company level, the ASSA ABLOY level, so we move throughout the organization at all different levels, partner on projects, all different applications including some products. Basically, if there’s a business problem at hand and lots of data, we’d love to get involved in those situations.
LL: Can you give an example of a particular product that came out of the lab, so to speak, and is now available?
Rowe: HID just introduced a facial recognition system, a multispectral facial recognition system, which my team developed the earliest versions of, and we were doing the early prototyping. I personally have been in the biometric industry for several decades, and I started a company that made multispectral fingerprint sensors, which HID acquired and which really was the foundation of the new multispectral facial recognition system.
LL: Expanding on that last question, please talk about what you are working on at the lab and how some of that work will manifest in other new products and solutions.
Rowe: Getting back to the conversation we started with security versus convenience, we are working in areas that increase convenience – something called intent detection. So being able to understand not only when a person is close to a door, but when they're close to a door and intend to go through it, as that's important to increase convenience and to avoid security issues as you can imagine.
For example, let's say you have a security system that automatically unlocks the door when an authorized person is intending to go through it, but instead of properly recognizing the intention to pass through the door, the system just looks at proximity, so every time an authorized person walks down a hallway, all the doors unlock, which is not a very secure system. So, we are focusing on ways to bring together security and convenience through sophisticated intent detection.
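To make the distinction concrete, here is a minimal, purely illustrative sketch of the difference between proximity-only triggering and intent detection (HID has not published its method; the geometry, thresholds and class names here are invented): intent can be approximated by requiring not just nearness to the door but motion directed toward it.

```python
import math
from dataclasses import dataclass


@dataclass
class Track:
    """A tracked person's position and velocity, with the door at the origin."""
    x: float   # metres from the door
    y: float
    vx: float  # velocity components, m/s
    vy: float


def intends_to_enter(track: Track, max_dist: float = 2.0,
                     max_angle_deg: float = 30.0) -> bool:
    """Crude intent check: the person must be near the door AND moving
    toward it, not merely standing nearby or walking past."""
    dist = math.hypot(track.x, track.y)
    if dist > max_dist:
        return False                      # too far away
    speed = math.hypot(track.vx, track.vy)
    if speed < 0.1:
        return False                      # essentially stationary
    # Angle between the velocity vector and the direction to the door.
    to_door = (-track.x / dist, -track.y / dist)
    cos_a = (track.vx * to_door[0] + track.vy * to_door[1]) / speed
    return cos_a > math.cos(math.radians(max_angle_deg))
```

A proximity-only system would unlock for anyone within `max_dist`; the intent check above stays locked for someone walking parallel to the door down a hallway, which is exactly the failure mode Rowe describes.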
We are also continuing the journey of the fusion of mobile devices with physical access systems, which is a very ripe area for work that my team and others are doing. That computer and sensor network that you carry around in your pocket has got a lot of valuable information to combine with physical access control systems in various ways.
Somewhat tangential to physical security, although it touches on it, is real-time location services – being able to use an RFID tag, for example, to identify a person or an asset, such as in hospital environments – identifying where doctors and nurses are and where important equipment or even patients are. That's an area where we are embarking on deploying state-of-the-art AI methods to increase position-estimation accuracy and simultaneously decrease latency.
And this touches security in a variety of ways, especially with emergency notification where you want to know with high certainty and quickly where that person is so you can get the right resources to that area as fast as possible. That's where we're seeing real gains in introducing artificial intelligence and moving away from some of the traditional methods of doing position estimation.
LL: Is that because AI/ML is able to go through all that data faster? And is AI having the most success when there is big data?
Rowe: That's part of it – that contributes to the latency reduction. But the other thing that it can do is sort through more data that might be discrepant under classical assumptions.
You know, a lot of the algorithms that are used classically assume RF (radio frequency) signals have certain characteristics. That's not the case when you get in the real world with metal girders and infrastructure that distorts the RF signals. The classic assumptions don't work as well, so AI is able to take that into account and give you much better accuracy by considering the real-world characteristics.
And yes, it is about data volume, so a variety of different kinds of larger enterprise organizations, within different verticals, can really benefit from AI because of the volume of the data they are producing, such as companies with hundreds or thousands of employees, for example.
LL: Can data mining using AI/ML be beneficial on a smaller scale?
Rowe: It depends on the specific instance that we're discussing, but certainly AI can help.
I’ll give you an example with data coming from different smaller systems with multiple formats of data that you’d like to combine. Historically, you'd have to manually figure out how these data streams can be combined in some sort of uniform way, but now you can use AI to do that sort of massaging of data and be able to create uniform data streams from disparate systems. And in that sense, it can really help with heavy lifting.
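As a small illustration of the manual mapping Rowe says AI can now automate, consider two hypothetical smaller systems logging the same badge event in different shapes (the field names, formats and values here are invented for the example). Historically, an integrator would hand-write adapters like these to produce one uniform stream:

```python
from datetime import datetime, timezone


def normalize_badge_event(raw: dict) -> dict:
    """System A emits JSON such as:
    {"card": "0042", "door": "D-12", "ts": "2024-05-01T08:15:00Z"}"""
    return {
        "user_id": raw["card"],
        "door_id": raw["door"],
        "time": datetime.fromisoformat(raw["ts"].replace("Z", "+00:00")),
    }


def normalize_log_line(line: str) -> dict:
    """System B emits text lines such as:
    '1714551300 D-12 user=0042 granted'"""
    epoch, door, user, _result = line.split()
    return {
        "user_id": user.split("=")[1],
        "door_id": door,
        "time": datetime.fromtimestamp(int(epoch), tz=timezone.utc),
    }
```

Both adapters yield records with identical keys, so downstream analytics can treat the merged stream as one source. The "heavy lifting" Rowe describes is having an AI model infer mappings like these from samples of the data, rather than a person writing each one by hand.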
LL: How long before security can start to use AI to create a more predictive and preventative approach to securing buildings, assets and people – so to start to alert us of possible events before they happen based on prior data and info? Is this already happening, or is it still the holy grail, so to speak?
Rowe: I think certainly people are doing it, not universally, but I think looking forward in time they'll be more and more. Prediction is a little bit tricky in so much as there's always the unexpected, so if you're planning for some set of scenarios, almost invariably, another scenario comes along in the real world that you hadn't planned for. We talked about anomaly detection where you're trying to capture all the things that are normal and then identifying what's not normal. Right now, I think anomaly detection, in general, is one of the most important tools in the AI arsenal.
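A minimal statistical baseline shows the idea behind anomaly detection as Rowe frames it: model what is normal, then flag what falls outside it. This z-score sketch is illustrative only (production systems use far richer models; the threshold and data here are invented):

```python
from statistics import mean, stdev


def is_anomalous(history: list[float], value: float,
                 threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations from the
    historical mean -- the simplest 'learn normal, flag abnormal' rule."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold
```

With, say, a door's hourly badge-in counts of roughly 50 as the "normal" history, a sudden count of 120 would be flagged while 51 would not. The point Rowe makes still holds: this only catches deviations from the normal it has seen, not scenarios nobody planned for.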
LL: And does that still involve training on a machine learning level?
Rowe: Yes, machine learning is absolutely critical, especially given the use cases and the data volumes involved. Somebody has to do the training, but today, with so-called foundation models or frontier models, that training is already being done by big tech, which makes the models available for others to adapt. That training is onerous; people throw around numbers like $100 million to train a language model, which most companies can't do. But by taking an existing trained model and adapting it to specific purposes, we can apply it.
The other thing that's going on, particularly in the open-source community, is that these foundation models are getting better and smaller; they occupy less memory and fewer computational resources, so adapting and training them doesn't take nearly as much data. As these foundation models, cloud services and open-source models become smaller, the training requirements shrink.
The other thing that's happening is a move to the edge and being able to have sophisticated computations occur at the edge device rather than going back to some cloud service somewhere. We're seeing more and more of the confluence of smaller, more powerful local models with more capable edge-device computational units, the neural processing units. Being able to combine all that together allows us to bring more capabilities to people, and doing it at the edge has a variety of different benefits.
LL: What are your thoughts on some of the bad press Google was getting for its Gemini release and giving bad information? Do negative stories such as this, or with ChatGPT, Claude, etc., impact people’s trust of AI and using it?
Rowe: Right now, I would say with ChatGPT, Gemini, and Claude, I think there is a growing awareness for sure, and with that comes a growing concern. And the most concrete manifestation of that concern is the regulatory environment, where we're following regulations that different regions are adopting, which aren't always well aligned. There are different regulations in different places, touching on different aspects of AI systems, so not only are they evolving in time, but they're different per region. That makes for a complex environment in which to introduce products. We're thinking about that even in the early stages, so how do we meet privacy requirements? How do we meet informed consent requirements? How do we meet all these regulatory standards that are coming into view?
LL: Where do you see AI going in the next 5-10 years, especially as it relates to security and access control? Any big predictions, or cautions/concerns?
Rowe: I think routine tasks, tedious tasks in access control, such as some poor person sitting in a room monitoring multiple video feeds … there's no reason why that shouldn't go away, and the technology, if not there today, will very soon make that a routine, automated task. So with routine monitoring, we'll just see more and more AI coming in, freeing up people to respond better to the alerts AI is generating.
One area that is under-appreciated is the impact that large language models have on the user interface. I think language models really are defining that next user interface; it's that kind of new epoch that we're entering, and we're just at the earliest point in time in that.
My concern would be, as you alluded to, bad press. If somebody in, say, the security industry implements something poorly and it somehow shines a negative light on that technology area, then other companies that implemented it properly are adversely affected by it.