AI Is Transforming Security Decision-Making: What Lies Ahead
Artificial intelligence is not just another technology shift; it is changing how security decisions are made across the industry. For security leaders, the change is tangible. Decisions are happening faster, sometimes with less direct human oversight, as systems analyze, decide, and act in ways that compress timelines and reshape how risk appears.
So, what does this look like in practice? And how should organizations think about what comes next?
At The Security Foundation, we believe conversations and information sharing around topics like these are critical to building a more connected and resilient security community. Our role is to bring together perspectives from across the public and private sectors and help surface what practitioners are seeing in real time. To help guide that conversation, we spoke with Erik Antons, Managing Vice President and Chief Safety and Security Officer at Marriott International, and TSF board member.
What Sets AI Apart
Previous waves of technology still relied heavily on human input. AI is different. It introduces a level of autonomy that changes how decisions are made and how risk is managed.
As Antons explained, “AI introduces autonomy, systems making decisions at machine speed. This reduces human friction and human oversight, which is a fundamentally different risk paradigm for security leaders.”
That shift reduces friction, but it also reduces visibility. In many cases, organizations are using tools they are just beginning to understand. Resources like IBM’s overview of artificial intelligence are a useful place to start.
Where AI Adds Value and Creates Risk
There is clear value. AI is helping automate repetitive work, improve decision-making, and allow teams to move faster while focusing on higher-value issues. In physical security, the shift is especially noticeable. Teams are moving from reviewing past incidents to identifying potential issues earlier, sometimes before a human would have recognized them.
At the same time, the risks are evolving. AI systems can be manipulated. Data can be biased or compromised. There is also a growing tendency to trust outputs without fully validating them. As adoption increases, so does the likelihood of failure. That risk is not limited to cyber environments. It extends into physical operations as well.
An Underestimated Risk: Trusting Too Much
One of the most significant risks is also one of the easiest to overlook. AI can feel precise and objective, which can create a sense of confidence.
That confidence can be misplaced.
Over-reliance on AI systems, especially without proper validation, may introduce new vulnerabilities. As these systems become part of core operations, whether in building systems, logistics, or access control, the consequences become more tangible. This reinforces the importance of fundamentals such as layered defenses, oversight, and clear accountability.
How Security Roles Are Evolving
AI is not just transforming systems. It is changing the role of security teams.
Teams are spending less time monitoring and more time validating. The role is shifting from operator to evaluator. This means asking better questions, interpreting outputs, and understanding where systems may be wrong, not just where they appear to be right.
There is also opportunity in that shift. Instead of focusing on screens, teams can focus on decisions, advising the business, recognizing patterns, and anticipating risk. Done well, this elevates the role of security within the organization.
What Leaders Need to Consider in the Future
Looking ahead, a few themes stand out. The first is data governance. The effectiveness of any AI system depends on the quality of the data behind it. Without discipline in this area, everything else becomes less reliable.
The second is AI literacy. This does not mean deep technical expertise. It means having enough understanding to ask the right questions and challenge what you are seeing. For many leaders, this may feel unfamiliar. At the same time, experience and judgment become more important.
As Antons put it, “You do not need to learn a programming language. You need to ask better questions.”
As AI becomes more accessible, the differentiator shifts toward context, curiosity, and sound judgment. In many ways, this creates a more human advantage. For a broader perspective on this shift, this article offers a helpful and reassuring look at how experience continues to matter in the age of AI.
Advancing the Conversation
AI is already influencing the security landscape. The question is how organizations choose to respond. At The Security Foundation, we believe progress comes from collaboration and shared understanding. When practitioners, policymakers, and industry leaders come together, the conversation becomes more practical and more grounded.
That is the purpose of the upcoming Topical Forum on Artificial Intelligence in Security: Risks, Realities, and Readiness, hosted by The Security Foundation, OSAC, and DSAC. The goal is straightforward: bring the right people together, share what is working, identify gaps, and move the conversation forward.
Insights in this article are based on a conversation with Erik Antons, Managing Vice President and Chief Safety and Security Officer at Marriott International.
