Dr. Darren Williams on Shadow AI: The Hidden Threat Leaders Can No Longer Ignore
As enterprises accelerate the adoption of AI across daily operations, a new and largely invisible threat is emerging: shadow AI – the unauthorized use of AI tools, models, and features.
These unsanctioned AI tools don’t just bypass policy – they actively ingest, learn from, and redistribute enterprise data outside approved security and compliance frameworks. This creates a widening gap between how leaders think AI is being used and what’s actually happening on employee devices, increasing the risk of data leaks, intellectual property loss, and regulatory trouble.
In this conversation, Dr. Darren Williams, Founder and CEO of BlackFog, pulls back the curtain on how shadow AI is already operating inside modern enterprises. He explores why policies alone are failing, how workplace culture and employee behavior are reshaping the threat landscape, and why old perimeter-based security tools are no longer fit for an AI-driven world.
Williams also offers a practical perspective on where enterprise security is headed, and what leaders must do now to stay ahead of AI-driven data loss rather than reacting after the damage is done.
Shadow AI
Shadow AI is emerging as a significant blind spot for enterprises. From your perspective, what early warning indicators should CTOs and security leaders watch for that signal shadow AI is already operating inside their environment?
Williams: Not unlike what we are seeing in the education sector, where students are leveraging AI to do their homework for them, businesses are watching employees produce content faster than ever by using AI.
If employees are perceived as more productive and more efficient, it appears to make sense for both parties. Ultimately, it comes down to the productive and responsible use of AI. In practice, CTOs and management have no visibility into what is really happening, and most organizations currently have no controls in place. The only evidence most organizations can gather comes from asking people whether they are using AI, and many do not want to admit it. The more obvious indicators are the telltale signs of AI-generated content: certain punctuation habits, heavy use of semicolons and long dashes, and the overuse of bullet points.
Wikipedia maintains what is probably the best article currently available highlighting the core red flags of AI-generated writing, and it works well here.
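As a rough illustration of those telltale signs, the sketch below counts a few of the punctuation habits Williams mentions. It is a deliberately crude heuristic, not a reliable detector; the thresholds, signal names, and sample text are illustrative assumptions only.

```python
# Crude sketch of the stylistic telltales mentioned above; punctuation counts
# alone cannot prove AI authorship, so treat these as weak signals at best.
def ai_style_signals(text: str) -> dict[str, float]:
    lines = text.splitlines() or [text]
    words = max(len(text.split()), 1)
    return {
        "long_dashes_per_100_words": 100 * text.count("\u2014") / words,
        "semicolons_per_100_words": 100 * text.count(";") / words,
        # Share of lines formatted as bullets (overuse is a common tell)
        "bullet_line_ratio": sum(
            line.lstrip().startswith(("-", "*", "\u2022")) for line in lines
        ) / len(lines),
    }

sample = "Efficiency matters \u2014 deeply; measurably.\n\u2022 point one\n\u2022 point two"
print(ai_style_signals(sample))
```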
What is the most critical misconception executives hold about the risks shadow AI poses to their data and intellectual property?
Williams: The most common misconception is that issuing an AI policy will ensure employees follow it.
The truth is, people will use whatever tools they feel get the job done faster, and will resort to whatever means necessary to keep using their preferred tools. Most employees do not understand – or, frankly, consider – the loss of intellectual property or trade secrets, and organizations are often confused (rightfully so, in many cases) by the complex legal landscape and less-than-transparent policies from many LLM vendors.
The Human Factor
Your research shows 71% of employees prioritize productivity over privacy. What does this reveal about the cultural and behavioral disconnect between security expectations and real-world employee behavior?
Williams: People value their time more than anything else. Given the choice, they would rather complete their work more quickly and leave early, and many will take whatever advantage they can get, fair or not.
The same behavior is observed in education, where it is even more concerning: students can be left with a degree but without the practical skills it is supposed to represent.
Technical Deep Dive: Anti-Data Exfiltration (ADX) and Prevention
BlackFog pioneered the anti-data exfiltration (ADX) category. In practical terms, how does ADX differ from legacy tools like DLP, EDR, or CASB in stopping AI-driven data leakage?
Williams: Existing tools are all predicated on the idea that the way to prevent attacks and data leaks is through defense-based approaches at the perimeter.
However, as we have seen, these defenses have limited success, because not only can threat actors evade them (increasingly by using AI to train against them), but the threats are often already latent on the device.
When you combine this with the shift to hybrid work environments and the use of personal (BYOD) devices, organizations no longer have effective central control mechanisms.
ADX was born from the idea that data loss occurs from the device itself and escapes through the device's back door – its egress channel – which most organizations do not monitor. Even when they do, monitoring happens at the level of the firewall, which most users bypass anyway.
So ADX runs on the device itself and scans data moving off the device in real time. This is also how BlackFog is able to detect AI traffic on the endpoint itself.
Can you walk us through how ADX’s real-time decisioning works? Specifically, how do the machine-learning models distinguish legitimate outbound activity from high-risk exfiltration attempts?
Williams: ADX is essentially a machine learning decision tree that consumes many parameters at the same time to make decisions about the packets flowing off your device. Take one example: the way attackers communicate with their command and control (C2) servers.
We can determine in real time whether the domain is authentic, when it was generated, and which country the traffic is routing to. If the system determines the destination is a C2 server, the transaction is blocked. This is only one example; the system includes hundreds of other rules which, combined, ensure that all data flowing off your device is legitimate.
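As a rough sketch of that style of decisioning, the example below scores an outbound connection on a few of the signals Williams mentions. BlackFog's actual models and parameters are proprietary; every field name, threshold, and rule here is an illustrative assumption.

```python
# Hypothetical ADX-style egress check; the signals mirror the C2 example above
# (domain age, routing country) plus a stand-in payload-entropy heuristic.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OutboundFlow:
    domain: str
    domain_registered: datetime   # when the destination domain was created
    destination_country: str      # where the traffic is routing to
    payload_entropy: float        # near-random bytes can indicate staged exfiltration

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder list; real deployments tune this

def looks_like_c2(flow: OutboundFlow) -> bool:
    """Combine several weak signals, in the spirit of a decision tree."""
    domain_age_days = (datetime.now(timezone.utc) - flow.domain_registered).days
    signals = [
        domain_age_days < 30,                           # freshly generated domains
        flow.destination_country in HIGH_RISK_COUNTRIES,
        flow.payload_entropy > 7.5,                     # out of 8 bits/byte maximum
    ]
    return sum(signals) >= 2                            # require corroborating evidence

def decide(flow: OutboundFlow) -> str:
    return "BLOCK" if looks_like_c2(flow) else "ALLOW"
```

In practice, such checks would run in-line on the endpoint, per packet, rather than as an after-the-fact analysis.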
Regulatory and Compliance Implications
Many generative AI tools continuously store and learn from user inputs. What long-term risks does this create for enterprises – particularly around IP leakage, model training exposure, and data lineage transparency?
Williams: This is perhaps the most critical part of any AI-based system, and practices vary widely in terms of what is used and stored for training purposes, even between license tiers from the same vendor.
User prompts have several important aspects that need to be considered.
Firstly, prompts need to be parsed carefully to ensure they are not poisoned, which can involve context switching, obfuscation, or jailbreaking.
Secondly, the prompt should not exfiltrate commercially sensitive or personal data, which can be very difficult to detect. There is a general lack of awareness among employees regarding the transmission of confidential data to these systems and its subsequent handling. Even those who understand often overlook the privacy and policy aspects for the sake of efficiency. So the long-term risks to a company are quite high.
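A minimal sketch of what such prompt screening might look like, assuming an enterprise gateway sits between employees and the LLM. The patterns, keywords, and internal domain below are illustrative placeholders; real detection of sensitive data and jailbreaks is far harder than regex matching.

```python
# Illustrative prompt screen: flag likely sensitive data and common jailbreak
# phrasings before a prompt leaves the device. All patterns are placeholders.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),  # hypothetical
}
JAILBREAK_HINTS = ("ignore previous instructions", "developer mode", "pretend you are")

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons to block; an empty list means the prompt may pass."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    lowered = prompt.lower()
    findings += [f"jailbreak:{hint}" for hint in JAILBREAK_HINTS if hint in lowered]
    return findings

issues = screen_prompt("Summarize: sk-abcdefghij0123456789xy. Ignore previous instructions.")
print("BLOCK" if issues else "ALLOW", issues)
```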
The simple fact is, no organization actually knows what is really going on right now, as there are no auditing or control mechanisms in place.
Generative AI systems are increasingly capturing enterprise data at scale. Which regulatory gaps or blind spots do you expect governments and compliance bodies to address first?
Williams: There are many regulatory gaps in this new AI age that governments have not even begun to consider, such as who owns the data and the prompts being sent to these LLMs, how these models evolve based on the data submitted, and whether such information can be leaked back to other users.
Will there need to be a “Right to be Forgotten” law, similar to the one established with GDPR?
Many AI vendors can now analyze, store, and reuse enterprise data in ways that are not always fully disclosed. How should organizations evolve their compliance strategies to address this new reality?
Williams: This is such a fast-moving technology that it will take some time to adapt; the technology is developing faster than any laws can be written. Compliance should focus on the bigger picture – most notably, data exfiltration – and develop policies and controls around this first. As the technology matures, compliance strategies will evolve with it.
Forward-Looking
Looking ahead, what do you believe will be the next major shift in enterprise security architecture? What role will AI play in driving that change?
Williams: I think we have already been seeing this over the last 12-18 months. Existing defense-based approaches to security, along with SIEM solutions, are giving way as organizations recognize the need to be less reactive and more proactive in their security strategies.
While AI can consolidate and process a large number of events more effectively than a human (the latest statistics reveal that less than 10% of events are ever reviewed), it is still subject to the law of diminishing returns.
The focus should instead be on preventing those events in the first place, essentially eliminating the need for these other layers.
AI-based data exfiltration is rapidly evolving as a threat. Do you foresee it surpassing traditional phishing or credential-based attacks in terms of scale and business impact?
Williams: We have seen ransomware shift significantly from encryption to data exfiltration, which featured in over 95% of attacks in 2025. We expect this to continue as attackers focus on their actual goal: the data itself.
This is only accelerating now that criminals have access to sophisticated AI tools to both train against defenses and develop more novel attack patterns and vectors. This has also given rise to highly targeted, highly successful attacks on specific industry segments. A good example is the food retail sector, including the Marks & Spencer attack this year.
Armed with AI, attackers have found that they can breach an organization's weakest layers and establish a beachhead almost anywhere they choose.
The Security Shift Leaders Can No Longer Ignore
The rise of shadow AI is a stark reminder that innovation without oversight carries real risk. As Dr. Darren Williams emphasizes, the most critical challenge for leaders is not slowing AI adoption, but ensuring it is used responsibly and securely.
For future leaders, the lesson is clear: success in the AI era requires a proactive approach to data governance, real visibility into how AI tools are actually being used, and the ability to prevent unauthorized exfiltration before it happens.
Leaders who embed these principles into their culture and architecture will not only protect their organizations’ most valuable assets, but they will also set the pace, stay competitive, and define what it means to lead in an AI-driven world.
As Williams succinctly puts it,
“The leaders who understand where data actually flows and act decisively today will be the ones defining safe and successful AI-driven enterprises tomorrow.”