
AI in Cybersecurity Through the Lens of a CTO, Rajan Koo

This series explores how AI is reshaping cybersecurity and offers insights to help leaders build resilient, security-first organizations.

Artificial Intelligence is rapidly reshaping the cybersecurity landscape, offering both immense promise and profound challenges. On one side, AI empowers security teams with predictive analytics, intelligent automation, and real-time threat detection, enabling faster and more accurate responses than ever before.

Yet, this same intelligence also fuels a new generation of cyber threats. Adversaries are leveraging AI to create adaptive malware, conduct deepfake-based social engineering, and exploit system vulnerabilities at unprecedented scale and speed. The result is a high-stakes race where both defenders and attackers are evolving through intelligence and automation.

For CTOs and technology leaders, understanding this dynamic is crucial. The strategic decisions they make today around AI adoption, governance, data security, and ethical deployment will determine whether AI becomes their organization’s strongest ally or its most unpredictable risk factor.

In this interview, Rajan Koo, CTO of DTEX Systems, shares his insights on how AI is transforming enterprise security. Koo explains why AI can act as a double-edged sword, particularly in managing insider threats, and sheds light on the importance of regular incident response testing and staying ahead of regulatory shifts.

His perspectives offer valuable lessons for technology leaders seeking to embed cybersecurity and resilience into the foundation of their organizations from day one.

As a CTO, how critical is your role in shaping a secure digital ecosystem?

Rajan: As a CTO, my role isn’t just to select tools or set standards — it’s about embedding security deeply within our culture. Technology leaders have a responsibility to ensure that new capabilities don’t outpace the guardrails that keep them safe.

The most dangerous risks don’t come from obvious attacks but from blind spots such as unmonitored workflows, shadow AI projects, or integrations that unintentionally broaden access. Building a secure digital ecosystem requires a proactive approach, anticipating how systems will evolve, rather than just focusing on how they operate today. This means designing for adaptability, transparency, and resilience — principles that simultaneously support innovation and security.

How do you ensure that third-party vendors or software integrations don’t compromise security?

Rajan: Third parties are often overlooked because organizations tend to over-trust integrations.

To safeguard against potential threats, businesses should treat external partners like internal users, extending the same core security principles to third-party providers: validating behavior, monitoring access, and enforcing least privilege.

Static questionnaires and compliance checklists are not enough.

Too many breaches have stemmed from assumptions — that a vendor has adequate controls, or that software behaves only as documented. To prevent compromises, companies should design for visibility: understand what data flows, how it flows, who touches it, and how those patterns change over time. By grounding trust in observed behavior, we shift from a posture of blind confidence to one of continuous assurance. That mindset is what keeps the ecosystem secure, even when boundaries blur.
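
To make the idea of grounding trust in observed behavior concrete, here is a minimal Python sketch of checking a vendor's access against a learned baseline. The vendor name, endpoints, and thresholds are hypothetical, and a real deployment would learn baselines from telemetry rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class VendorBaseline:
    """What this integration has been observed to do, not what its contract says."""
    allowed_endpoints: set[str]
    max_records_per_hour: int

@dataclass
class AccessEvent:
    vendor: str
    endpoint: str
    records_pulled: int

def review_event(event: AccessEvent, baselines: dict[str, VendorBaseline]) -> list[str]:
    """Compare one observed access against the vendor's own baseline."""
    baseline = baselines.get(event.vendor)
    if baseline is None:
        return [f"{event.vendor}: no baseline on file, unvetted integration"]
    findings = []
    if event.endpoint not in baseline.allowed_endpoints:
        findings.append(f"{event.vendor}: touched unexpected endpoint {event.endpoint}")
    if event.records_pulled > baseline.max_records_per_hour:
        findings.append(
            f"{event.vendor}: pulled {event.records_pulled} records, "
            f"baseline is {baseline.max_records_per_hour}/hour"
        )
    return findings

# Example: a billing vendor that normally reads invoices suddenly pulls user PII.
baselines = {"billing-saas": VendorBaseline({"/invoices"}, max_records_per_hour=500)}
for finding in review_event(AccessEvent("billing-saas", "/users/pii", 12000), baselines):
    print(finding)
```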

At DTEX, how is AI leveraged to enhance your cybersecurity defenses?

Rajan: With DTEX’s Risk-Adaptive DLP, we’ve moved beyond static rule sets to a model where behavior itself drives protection. Instead of waiting for a known signature or content trigger, our AI continuously learns from how people work — their file patterns, application usage, and device interactions — and infers document sensitivity in real time. This allows us to dynamically adapt controls.

For example, when an employee’s behavior drifts into higher risk, we don’t apply blanket blocks — we calibrate interventions based on intent, role, and context. For generative AI or unknown file formats, our system governs AI use through oversight at the browser-agnostic layer, preventing data from being uploaded unintentionally.
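As an illustration of what calibrated, context-aware intervention can look like, the sketch below maps a behavioral risk score plus context to a graduated response instead of a blanket block. The tiers, thresholds, and labels are assumptions made for illustration, not the actual DTEX policy engine.

```python
from enum import Enum

class Intervention(Enum):
    ALLOW = "allow"
    COACH = "show a just-in-time warning"
    REQUIRE_JUSTIFICATION = "prompt for business justification"
    BLOCK = "block the action"

def calibrate(risk_score: float, sensitivity: str, destination: str) -> Intervention:
    """Choose an intervention from behavioral risk plus context."""
    if sensitivity == "public":
        return Intervention.ALLOW
    if destination == "generative-ai" and sensitivity == "restricted":
        return Intervention.BLOCK  # never let restricted data into a prompt
    if risk_score > 0.8:
        return Intervention.BLOCK
    if risk_score > 0.5:
        return Intervention.REQUIRE_JUSTIFICATION
    if destination == "generative-ai":
        return Intervention.COACH  # low risk: nudge toward safe AI use
    return Intervention.ALLOW

# A mid-risk user pasting internal data into a generative AI tool:
print(calibrate(0.6, "internal", "generative-ai").value)
```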

However, the real challenge is that proper AI defense only works if the models are explainable, auditable, and grounded in behavioral signals, rather than relying on black-box heuristics. Without that, you’ve just built another blind spot that attackers will exploit. To address this challenge, our approach enables security teams to ask “why” for every action, not just “what.”

Why is AI a double-edged sword for insider risk?

Rajan: AI is both an accelerant and a blind spot. On one hand, it gives defenders the ability to analyze behavior at a scale and speed we’ve never had before.

On the other hand, it empowers insiders — and outsiders masquerading as insiders — with tools to obfuscate their intent, automate misuse, and bypass controls more quickly than humans can react. I often say that AI has created “non-human insiders,” which are autonomous agents and workflows that can act with the same level of access and influence as a privileged employee. The challenge is that most organizations still rely on rules-based controls, which were never designed for this dual reality. To manage insider risk in the AI era, we need adaptive, behavior-driven security that protects against both human and machine misuse.
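
One way to read "adaptive, behavior-driven security" is that the same baseline-and-deviation logic should apply to a service account or AI agent as to a person. The identities and statistics below are invented for illustration; real baselines would be learned continuously from activity data.

```python
# Hypothetical rolling baselines (files touched per hour) per identity.
baselines = {
    "alice@corp":       {"mean": 40.0, "std": 12.0},  # human analyst
    "svc-report-agent": {"mean": 15.0, "std": 4.0},   # autonomous AI agent
}

def deviation_score(identity: str, files_touched: int) -> float:
    """Z-score of current activity against that identity's own baseline."""
    b = baselines[identity]
    return abs(files_touched - b["mean"]) / b["std"]

# The same threshold flags a misbehaving agent just as it would a risky insider.
for identity, observed in [("alice@corp", 55), ("svc-report-agent", 400)]:
    score = deviation_score(identity, observed)
    print(f"{identity}: z={score:.1f} -> {'ESCALATE' if score > 3.0 else 'ok'}")
```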

Furthermore, employees are forming strong emotional and professional connections with AI tools because those tools help them succeed, and they are building workflows and identities around these capabilities. Unlike shadow IT, which was largely about convenience, shadow AI is about empowerment. Blocking unauthorized AI use is therefore a losing strategy: CISOs who attempt it risk alienating top talent and stifling innovation, and worse, pushing AI adoption further underground and escalating risk.

The modern CTO’s role is to provide a more balanced approach, emphasizing that AI governance is not about prohibition, but about enabling safe and intentional adoption. Organizations that view AI as a threat to be blocked will fall behind those that manage and optimize it as a capability.

How often do you test your incident response and recovery plans?

Rajan: In my experience, effective teams test continuously, not just once a year during a compliance drill. That doesn’t always mean a full red-team engagement.

Even brief, scenario-based walk-throughs reveal gaps in assumptions, communication, and escalation procedures. With AI now embedded in workflows, we also need to test for machine-driven scenarios: what happens if an AI agent misclassifies data, or is manipulated into exfiltrating information? Recovery isn’t just about restoring systems; it’s about restoring trust — with employees, customers, and regulators.

The only way to build that trust is to prove, through regular practice, that when a breach occurs, your team knows how to contain, explain, and adapt in real time.
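
A machine-driven scenario can be rehearsed in code as well as on a whiteboard. The sketch below frames one such drill as a pair of assertion-style tests; the policy function and actor names are hypothetical stand-ins for whatever guardrail an organization actually runs.

```python
def guardrail_allows(actor: str, action: str, destination: str) -> bool:
    """Hypothetical control: autonomous agents may never send data externally."""
    if actor.startswith("agent:") and destination == "external":
        return False
    return True

def test_manipulated_agent_cannot_exfiltrate():
    # Drill: a prompt-injected agent tries to push data to an external host.
    assert not guardrail_allows("agent:summarizer", "upload", "external")

def test_normal_agent_workflow_still_works():
    # Recovery must not break legitimate work.
    assert guardrail_allows("agent:summarizer", "upload", "internal")

if __name__ == "__main__":
    test_manipulated_agent_cannot_exfiltrate()
    test_normal_agent_workflow_still_works()
    print("machine-driven IR drills passed")
```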

What’s your view on regulatory developments shaping the cybersecurity landscape today?

Rajan: We’re at a moment where regulation is racing to catch up with technological reality. Whether it’s CNSSD 504 modernization for national security systems or new AI governance frameworks, the intent is clear: transparency, accountability, and adaptability must become baseline requirements.

I welcome this shift. In the past, compliance often meant checking boxes that didn’t reflect real-world risk. Today, regulators are starting to focus on outcomes. Can you detect insider misuse, explain how your AI models make decisions, and prove resilience? For security leaders, this is an opportunity to align business trust with regulatory compliance, rather than treating them as separate mandates.

Companies that proactively embrace this approach will not only reduce risk but also gain a competitive advantage.

How can companies prepare for the next wave of AI-driven cyber threats?

Rajan: Preparation starts with acknowledging that the attack surface has changed. AI doesn’t just make phishing emails more convincing — it enables adversaries to scale reconnaissance, automate privilege escalation, and exploit vulnerabilities in ways humans can’t keep pace with.

Companies must be more proactive, moving toward adaptive controls that understand context: monitoring not only for malicious activity but also for deviations in how systems and data are used, whether by employees or AI agents. Another key step is governance, ensuring that every AI integration has clear lineage, accountability, and established guardrails.

Without this, organizations risk deploying “shadow AI” that attackers can exploit. Today’s threats are targeting trust itself — and the best preparation is building defenses that are transparent, explainable, and continuously adaptive.
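
A lightweight way to picture that governance step is a registry in which every AI integration carries lineage, an accountable owner, and explicit guardrails, audited before go-live. The schema and example entry below are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class AIIntegration:
    name: str
    model_source: str        # lineage: where the model comes from
    owner: str               # an accountable human
    data_allowed: set[str]   # data classes the integration may touch
    guardrails: list[str]    # controls that must be active

def audit(entry: AIIntegration) -> list[str]:
    """Flag integrations missing the governance basics."""
    gaps = []
    if not entry.owner:
        gaps.append("no accountable owner")
    if "restricted" in entry.data_allowed:
        gaps.append("restricted data allowed without an explicit exception")
    if not entry.guardrails:
        gaps.append("no guardrails defined")
    return gaps

registry = [
    AIIntegration(
        name="support-ticket-summarizer",
        model_source="vendor-hosted LLM, approved provider list",
        owner="jane.doe@corp",
        data_allowed={"public", "internal"},
        guardrails=["PII redaction before prompting", "output logging", "human review"],
    ),
]

for entry in registry:
    print(entry.name, "->", audit(entry) or "passes baseline governance checks")
```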

What advice would you give new leaders trying to embed cybersecurity from day one?

Rajan: Start with culture before tools. Technology can only enforce what people believe in and practice. New leaders should embed security as a design principle, not an afterthought.

That means asking tough questions early: How will data be used? Who will need access? What happens if those assumptions change? It also means modeling transparency — showing employees that security is about enabling trust, not policing behavior. From a technical standpoint, it is critical to invest in visibility before control, since you can’t protect what you don’t understand. Some of the strongest companies are those where security evolves at the same pace as innovation.

By focusing first on how data moves, how users behave, and how systems interact, you lay a foundation for adaptive security that grows with the business.

Key takeaways for CTOs and business leaders

  • Adopt AI responsibly: Embrace AI-driven cybersecurity tools. However, ensure they operate within strong governance frameworks to prevent unintended risks or misuse.
  • Balance automation with human intelligence: AI can detect patterns, but human context is crucial for accurately interpreting intent and effectively mitigating insider threats.
  • Prioritize third-party and integration security: Every external connection can introduce vulnerabilities, so implement rigorous vetting and continuous monitoring of vendor systems and integrations.
  • Test and evolve incident response plans: Regular simulations and team exercises can help organizations stay agile and ready for AI-powered attacks.
  • Stay ahead of regulatory shifts: Proactive compliance and alignment with emerging cybersecurity laws can protect reputation and reduce long-term risk exposure.
  • Build a culture of cyber awareness: Security must extend beyond tools and policies; empower teams with awareness and accountability to strengthen overall resilience.

About the Speaker: Rajan Koo is the CTO and head of DTEX’s Insider Investigations & Intelligence team. He is responsible for developing, implementing, and operating technologies to prevent insider risks from becoming insider threats. Rajan has played a pivotal role in establishing DTEX’s privacy-first approach to insider risk management. He has also led several high-profile insider threat investigations that have resulted in successful prosecutions and exonerations. As a Chartered Professional Engineer with over 20 years of cybersecurity and insider risk experience, Rajan has been awarded patents for his work in R&D (including DTEX’s unique pseudonymization features) and has led technical reviews for multi-billion-dollar industrial automation projects.

Gizel Gomes

Gizel Gomes is a professional technical writer with a bachelor's degree in computer science. With a unique blend of technical acumen, industry insights, and writing prowess, she produces informative and engaging content for the B2B leadership tech domain.