For organizations of all sizes, the role of the Security Operations Center (SOC) is evolving faster than ever before. At NuHarbor Security, we’re on the front lines of this shift—advising clients on how to strike the right balance between people, process, and AI-driven tools.
One of the most critical questions emerging from this shift: What happens to SOC talent development when Level 1 analyst roles become automated?
As a NuHarbor vCISO, I work with organizations firsthand to help them enable automated functions within the SOC. We are already seeing early signs that AI can handle many of the functions traditionally assigned to Level 1 SOC analysts. Through supervised and unsupervised learning models, AI systems can watch how human analysts handle alerts, ingest historical ticketing data, and begin to build decision trees for automated response.
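To make that concrete, here is a minimal sketch of what learning from historical ticketing data can look like: train a shallow, explainable classifier to predict the disposition a human analyst would have chosen. The ticket export, field names, and disposition labels below are assumptions for illustration, not a reference to any specific SOC platform.

```python
# Hypothetical sketch: fit a triage classifier on historical, analyst-closed tickets.
# The CSV file, field names, and disposition labels are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

tickets = pd.read_csv("closed_tickets.csv")  # historical alerts plus the analyst's final call

features = tickets[["alert_source", "rule_name", "severity", "asset_criticality"]]
labels = tickets["analyst_disposition"]      # e.g. "close_benign", "escalate_tier2"

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

# One-hot encode the categorical alert attributes, then fit a shallow tree so the
# resulting triage logic stays explainable to the analysts supervising it.
model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    DecisionTreeClassifier(max_depth=5),
)
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2%}")
```

A shallow tree is a deliberate choice in this sketch: the decision logic can be reviewed, audited, and overridden by the humans it is meant to assist.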
I believe many SOC Level 1 functions could be handled by AI after roughly nine months of direct and indirect supervision. This idea of "Level 1 closure" is critical: if AI can resolve 60% of alerts without human involvement, organizations can dramatically reduce response times and operating costs.
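To illustrate why that closure rate matters, here is a hypothetical back-of-the-envelope calculation. The alert volume and handling time are assumptions, not benchmarks.

```python
# Hypothetical estimate of time freed by Level 1 closure; all inputs are assumptions.
alerts_per_day = 1_000      # assumed daily alert volume
minutes_per_alert = 15      # assumed average Level 1 handling time
auto_closure_rate = 0.60    # share of alerts resolved without human involvement

hours_saved = alerts_per_day * minutes_per_alert * auto_closure_rate / 60
print(f"Roughly {hours_saved:.0f} analyst-hours per day freed for higher-tier work")
# -> Roughly 150 analyst-hours per day freed for higher-tier work
```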
Looking toward the future, an even more compelling possibility is the concept of nested AI levels within the SOC. Imagine an AI trained for Level 1 responsibilities, triaging alerts and passing more complex incidents to a Level 2 AI agent. Only the most nuanced, high-risk threats would reach a human analyst at Level 3 or beyond. The pyramid becomes slimmer and more focused, with elite human talent operating at the top.
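One way to picture those nested levels is as an escalation chain: each tier either resolves the alert with sufficient confidence or hands it upward. The sketch below is conceptual; the confidence thresholds and the toy agents are assumptions, not a reference architecture.

```python
# Conceptual sketch of nested AI tiers; thresholds and agents are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    action: str        # e.g. "close_benign", "contain_host", "escalate"
    confidence: float  # the tier's confidence in its own verdict

def triage(alert: dict, tiers: list[tuple[str, Callable[[dict], Verdict], float]]) -> str:
    """Walk the alert up the pyramid until some tier is confident enough to act."""
    for name, agent, threshold in tiers:
        verdict = agent(alert)
        if verdict.action != "escalate" and verdict.confidence >= threshold:
            return f"{name} resolved alert: {verdict.action}"
    return "Escalated to a human Level 3 analyst"

# Level 1 handles routine noise; Level 2 must clear a higher bar before acting.
tiers = [
    ("Level 1 AI", lambda a: Verdict("close_benign", 0.95 if a["severity"] == "low" else 0.30), 0.90),
    ("Level 2 AI", lambda a: Verdict("contain_host", 0.80 if a["severity"] == "medium" else 0.40), 0.75),
]

print(triage({"severity": "low"}, tiers))       # resolved at Level 1
print(triage({"severity": "critical"}, tiers))  # falls through to the human tier
```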
But there’s a catch.
"If AI displaces entry-level roles, where will tomorrow's cybersecurity leaders come from?" This came from a recent client conversation, where they rightly questioned how organizations can cultivate Level 3 talent if there are no Level 1 jobs to build foundational experience. This is more than a workforce issue—it's a strategic risk.
"How do you build your talent pipeline to go beyond Level 1, if the Level 1s are no longer there?" the client asked.
Organizations must rethink what career development looks like in this new era. This is an opportunity to evolve training programs to focus on AI-human collaboration from day one. Security practitioners won’t just be alert responders; they’ll become AI supervisors, model validators, and incident escalation specialists.
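What a model validator’s day-to-day work could look like: periodically sample the alerts the AI closed on its own, have a human re-adjudicate them, and track how often the human disagrees. The sketch below is illustrative; the review data and the 5% override threshold are assumptions rather than recommendations.

```python
# Illustrative model-validation check; the review data and the 5% threshold are assumptions.
def override_rate(reviews: list[dict]) -> float:
    """Share of sampled AI-closed alerts where the human reviewer disagreed."""
    if not reviews:
        return 0.0
    overrides = sum(1 for r in reviews if r["human_disposition"] != r["ai_disposition"])
    return overrides / len(reviews)

# A random sample of alerts the AI closed on its own, re-adjudicated by a human.
reviews = [
    {"ai_disposition": "close_benign", "human_disposition": "close_benign"},
    {"ai_disposition": "close_benign", "human_disposition": "escalate"},
    {"ai_disposition": "close_benign", "human_disposition": "close_benign"},
    {"ai_disposition": "close_benign", "human_disposition": "close_benign"},
]

rate = override_rate(reviews)
print(f"Override rate: {rate:.0%}")
if rate > 0.05:
    print("Above threshold: route this alert class back to human triage and retrain.")
```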
This shift isn’t meant to replace humans; it can be used to increase retention and job satisfaction. With Level 1 alerts automated through AI, alert fatigue is reduced, and Level 2 and Level 3 analysts can focus on training, education, and the critical thinking needed to triage higher-priority, more complex alerts.
While AI may enhance internal security operations, it’s also supercharging the capabilities of adversaries. Deepfake technology, voice synthesis, and hyper-targeted phishing campaigns are already blurring the lines between real and fake—not just online, but in person.
Threat actors are mining social media, training their deepfake tools to recreate a person’s voice... After ingesting 10-15 videos, they can place a phone call and you wouldn’t know the difference.
Attackers are taking advantage of the trust we used to rely on. Verification isn’t just a courtesy anymore; it’s a control. For some of the most sensitive conversations, organizations may need to reconsider how and where they happen. The zero trust model isn’t just an IT architecture anymore. It’s a survival strategy for interpersonal communication.
Despite the hype, AI is not a silver bullet. The most important factor in whether AI can support SOC functions is context, and the best way to give AI that context is visibility into historical data.
Effective AI needs more than just alert logs. It needs the historical ticket data behind those alerts, the dispositions human analysts chose, and the organizational context that explains why those decisions were made.
This underscores the need for organizations to invest in data hygiene, structured processes, and annotated feedback loops. If you feed your AI poor data, it will make poor decisions.
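As a small illustration of what data hygiene can mean in practice, the sketch below flags alert records that lack the context fields a model would depend on before they ever reach a training set. The required fields are assumptions chosen for the example, not a prescribed schema.

```python
# Illustrative data-hygiene gate; the required fields are assumptions, not a prescribed schema.
REQUIRED_FIELDS = {"alert_source", "rule_name", "asset_owner", "analyst_disposition", "closure_notes"}

def is_training_ready(record: dict) -> bool:
    """A record is usable for training only if every required context field is present and non-empty."""
    return all(record.get(field) for field in REQUIRED_FIELDS)

records = [
    {"alert_source": "EDR", "rule_name": "LSASS access", "asset_owner": "finance",
     "analyst_disposition": "escalate_tier2", "closure_notes": "suspected credential dumping"},
    {"alert_source": "SIEM", "rule_name": "Impossible travel", "analyst_disposition": "close_benign"},
]

clean = [r for r in records if is_training_ready(r)]
print(f"{len(clean)} of {len(records)} records are complete enough to train on")
```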
One exciting (and often underutilized) application of AI is in phishing simulations and red teaming. AI can simulate real-world conditions in phishing exercises, incorporating company-specific events like leadership changes or M&A activity.
This is a more advanced and realistic approach to readiness testing—one that keeps pace with adversaries who are already using AI to enhance their lures.
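Here is a minimal, hedged sketch of how such an exercise might be seeded with company-specific context. It assumes an OpenAI-compatible API; the model name, company details, and prompt are illustrative, and anything like this belongs strictly inside an authorized simulation program.

```python
# Sketch of seeding a sanctioned phishing simulation with company-specific context.
# Assumes an OpenAI-compatible API; model name and scenario details are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = {
    "company": "Example Corp",                                   # hypothetical organization
    "recent_event": "announced the acquisition of a regional MSP",
    "impersonated_role": "Chief Financial Officer",
}

prompt = (
    "Write a short internal email for a sanctioned phishing simulation. "
    f"The sender appears to be the {scenario['impersonated_role']} of {scenario['company']}, "
    f"referencing that the company recently {scenario['recent_event']}. "
    "Ask the recipient to review an 'integration timeline' via the placeholder link [SIM-LINK]."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice the generated lure would flow into an existing phishing-simulation platform rather than live mail, and the results feed the readiness metrics the exercise is meant to test.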
One growing trend NuHarbor sees in red team engagements: AI-generated deepfake videos and phone calls designed to impersonate executives. This goes far beyond the old "gift card scam" model. These are full-fledged impersonation attacks asking for login credentials or confidential financial information.
As we look ahead, the takeaway is not fear. It's preparedness.
Organizations must rethink how they develop talent in an AI-augmented SOC, invest in the data hygiene and feedback loops that effective AI depends on, pressure-test their people with realistic AI-driven simulations, and strengthen verification controls around their most sensitive communications.
Most importantly, we must recognize that trust is now a vulnerability. Verification is no longer a courtesy; it’s a control.
At NuHarbor, we help organizations navigate this AI-driven security landscape with confidence. From advising on SOC automation to designing responsible AI governance frameworks, we’re helping clients prepare for the threats of tomorrow without losing sight of the human expertise needed to fight them.
Jorge Llano is an Executive Cybersecurity Strategic Advisor at NuHarbor Security. In his role, Jorge helps clients that want to enhance their cybersecurity program by offering objective cybersecurity knowledge, approaches, and tools. Jorge has worked as a cybersecurity executive for two decades, holding positions in both the public and private sectors. His primary responsibilities are creating and executing the organization's security strategy and presenting it to the board of directors, employees, and other executive management colleagues. Jorge holds a doctorate in information assurance from the University of Fairfax and a master's degree in cybersecurity from Penn State University.