AI Deregulation Is Reshaping How AI Is Built: Cris Kinross Explains Why
The move toward federal preemption in AI regulation promises to simplify compliance for enterprise tech leaders who’ve been navigating a patchwork of state-level rules. In practice, however, deregulation often creates more ambiguity.
As state-level rules give way to broader federal guidance, the explicit instructions become fewer – but the responsibility grows. CTOs and business leaders must now balance innovation speed with risk management, particularly when it comes to AI systems that make business-critical decisions. The stakes are especially high for AI tools that generate advice, analysis, or recommendations.
In this context, AI hallucination isn’t just an error; it’s a compliance liability. When AI outcomes influence customer interactions, financial planning, or long-term strategy, trust in those systems becomes a business necessity rather than a technical preference.
Federal preemption and the new regulatory reality
Cristine Kinross, Creator of Levr and Founder of Zenly, explains why deregulation is pushing companies to rethink how they build their AI systems. Drawing on her experience at Danaher and her work building Levr, she argues that trustworthy, controlled AI infrastructure is now a competitive imperative – not merely a regulatory safeguard.
Q. Federal preemption is being positioned as a way to simplify AI regulation. However, many CTOs feel it actually brings more uncertainty. From your vantage point, where does deregulation genuinely reduce friction, and where does it quietly increase enterprise risk?
Kinross: Many companies are creating AI strategies just to check compliance boxes, but that approach actually increases enterprise risk. If you build your roadmap based on shifting regulations, you are constantly reacting to external forces rather than securing your business. The smarter play is to focus on building robust, secure systems from the ground up.
If you prioritize fundamentals, like restricting AI to your own verifiable data rather than general web content, you will likely find yourself aligned with most global regulations by default.
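Kinross does not describe Levr's internals, but the principle of restricting a model to verifiable internal data can be sketched in a few lines. Everything below – the VETTED_DOCS store, the keyword retriever, and the call_model stub – is a hypothetical placeholder for whatever document store and model client an enterprise actually runs.

```python
"""Minimal sketch: answer questions only from vetted internal documents.

All names here (VETTED_DOCS, retrieve, call_model) are illustrative
placeholders, not part of any real product or API.
"""

# Stand-in for an internal, verified knowledge base.
VETTED_DOCS = [
    {"id": "policy-001", "text": "Refunds are issued within 14 days of an approved return."},
    {"id": "finance-007", "text": "Q3 revenue guidance was revised to 4.2M USD in September."},
]


def retrieve(question: str, docs: list[dict], top_k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval; a real system would use a vector index."""
    q_terms = set(question.lower().split())
    scored = [(len(q_terms & set(d["text"].lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]


def call_model(prompt: str) -> str:
    """Placeholder for whatever model endpoint the enterprise runs internally."""
    raise NotImplementedError("wire this to your internal model client")


def grounded_answer(question: str) -> str:
    context = retrieve(question, VETTED_DOCS)
    if not context:
        # Refuse rather than let the model improvise from open-web training data.
        return "No vetted source covers this question."
    sources = "\n".join(f"[{d['id']}] {d['text']}" for d in context)
    prompt = (
        "Answer using ONLY the sources below and cite the source id.\n"
        f"{sources}\n\nQuestion: {question}"
    )
    return call_model(prompt)
```

The refusal branch carries the point: when no vetted source covers a question, the system says so instead of letting the model improvise from whatever it absorbed on the open web.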
Q. As state-level rules give way to broader federal frameworks, what signals should CTOs look for to future-proof their AI systems? Especially when the regulatory end state is still unclear?
Kinross: The rollout of GDPR is the clearest model to follow. While there was plenty of noise about the burden of compliance, the reality is that it forced enterprises to finally audit the health of their systems. Many organizations realized that fragmented customer data could not support meaningful analysis. By upgrading their data infrastructure to meet legal standards, they unlocked the ability to personalize marketing and forecast revenue accurately.
The strategic signal here is simple: do not view upcoming regulation as a constraint, but as a forcing function to modernize your data architecture for actual business growth.
Q. Many enterprises believe deregulation gives them more freedom. In reality, does it place more responsibility on CTOs to self-govern AI usage? And if so, what capabilities must technology leaders now build internally?
Kinross: The US might be talking about deregulation, but that creates a dangerous blind spot. Europe and California are moving in opposite directions, and a global strategy cannot ignore that. More importantly, AI is now critical corporate infrastructure; successful companies are moving aggressively to bring it in-house rather than renting intelligence.
The immediate threat isn’t a regulator; it’s shadow usage. Too many sales and marketing teams are quietly pasting sensitive customer data into public tools like ChatGPT or Gemini to hit their targets.
Technology leaders need to start auditing workflows immediately by asking, ‘How exactly did you generate this report?’ The moment you realize your IP is feeding public models, it becomes obvious that you need to build secure, internal environments to regain control.
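Kinross frames the fix organizationally, but one common first technical step toward regaining control is routing traffic bound for public AI tools through an internal gateway that logs every request and blocks prompts containing sensitive data. The sketch below is purely illustrative; the patterns, the in-memory audit_log, and the commented-out forward_to_public_tool call are assumptions, not a description of Levr or any named product.

```python
import re
from datetime import datetime, timezone

# Illustrative patterns only; a real deployment would use a proper DLP ruleset.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

audit_log: list[dict] = []  # in practice, a durable audit store


def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to an external AI tool."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    audit_log.append({
        "user": user,
        "time": datetime.now(timezone.utc).isoformat(),
        "blocked": bool(hits),
        "reasons": hits,
    })
    return not hits


# Example: a report request that quietly contains customer data gets stopped.
request = "Summarize churn risk for jane.doe@example.com, card 4111 1111 1111 1111"
if screen_prompt("analyst-42", request):
    pass  # forward_to_public_tool(request) would be the hypothetical downstream call
else:
    print("Blocked: route this request through the internal environment instead.")
```

Even this naive version gives a technology leader an audit trail with which to answer the question 'How exactly did you generate this report?' with data rather than guesswork.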
AI risk, hallucinations, and compliance exposure
Q. You’ve said that in a deregulated environment, AI hallucinations shift from being a technical flaw to a compliance liability. How should CTOs rethink their risk models when AI-generated advice or analysis directly informs business decisions?
Kinross: It is easy to laugh at memes about AI failure, but it stops being funny when your employees use those hallucinations to make high-stakes business decisions.
We are entering a ‘circular trap’ where generic chatbots are training on web content that they themselves generated, effectively validating their own errors. In early 2025, I realized I could no longer trust open-web models for serious guidance, so I built a proprietary solution.
We created Levr to interface with a closed, vetted database of strategies from high-growth companies. Leaders need to understand the difference between ‘web content’ and ‘business guidance’: one is a commodity that degrades over time, and the other is a verified asset you can actually bank on.
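The interview does not reveal how Levr vets its database, but the commodity-versus-verified-asset distinction can be made concrete by attaching provenance metadata to every record and filtering on it at retrieval time. The field names used here (origin, verified_by, reviewed_at) are invented for illustration.

```python
from datetime import date

# Hypothetical provenance metadata attached to each knowledge-base record.
records = [
    {"text": "Expand to EMEA only after ARR passes 10M USD.", "origin": "human",
     "verified_by": "strategy-team", "reviewed_at": date(2025, 11, 3)},
    {"text": "Most SaaS firms triple headcount in year two.", "origin": "model_generated",
     "verified_by": None, "reviewed_at": None},
]

MAX_AGE_DAYS = 365  # guidance older than this goes back for review before use


def is_bankable(record: dict, today: date) -> bool:
    """Keep only human-originated, reviewed, and reasonably fresh records."""
    return (
        record["origin"] == "human"
        and record["verified_by"] is not None
        and record["reviewed_at"] is not None
        and (today - record["reviewed_at"]).days <= MAX_AGE_DAYS
    )


usable = [r for r in records if is_bankable(r, date(2026, 1, 15))]
# Only the human-verified record survives; excluding model-generated text from
# the retrieval pool is what breaks the circular trap described above.
```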
Q. In your view, what types of AI use cases carry the highest regulatory and reputational risk today? Are those generating recommendations, insights, or strategic advice the most exposed?
Kinross: The three areas carrying the highest risk in early 2026 are customer interactions, financial analysis, and portfolio planning. We are already seeing brand equity evaporate when customers get trapped in loops with incompetent AI agents – people notice when you replace humans with cheap bots. But the silent killer is financial analysis. AI models are making basic math errors daily, leading to mis-stocked inventory and flawed revenue reports.
I predict 2026 will be the year of the ‘AI Financial Scandal’ – not because of fraud, but because a model hallucinated a number in an SEC filing or skipped a spreadsheet column.
Finally, the strategic risk will hit later in the year when companies realize their 5-year plans were built on generic, hallucinated market data. The only organizations safe from these headlines are the ones that refused to outsource their intelligence to open web models.
Trust as architecture, not policy
Q. You said you built Levr around the core concept that AI outputs must be grounded in vetted, factual data. Why do you see controlled data environments becoming the standard for enterprise AI, and not the exception?
Kinross: The cost of public failure is becoming unsustainable. We are rapidly approaching a split in the market: companies that enforce controlled, accurate data environments will thrive, while those rolling the dice on open-ended sources will flounder. This isn’t just a technical issue; it is a career-ending liability.
I predict we will see a ‘revolving door’ of C-suite exits in the coming years, driven entirely by executives who are held personally accountable when their AI strategy results in a public embarrassment rather than a competitive advantage.
Defensible AI systems
Q. For CTOs assessing generative AI vendors, what are the non-obvious questions they should be asking? How can they decide whether trust, traceability, and factual grounding have genuinely been engineered into the product?
Kinross: A good starting list might include:
- From where do you source your data?
- Show me your policy on who can access the data.
- Do you customize the APIs of the underlying AI platforms, or do you rely exclusively on one specific model?
- Explain how you test your models for accuracy.
- How do you decide when you need to update your AI tool?
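The last two questions on that list are the ones vendors most often answer vaguely. One way a CTO can pin them down is to maintain an internally curated golden set and re-score the vendor's model against it on every update. The harness below is a minimal sketch: the ask_vendor_model stub, the CSV format, and the accuracy floor are all assumptions, not any vendor's actual interface.

```python
import csv

ACCURACY_FLOOR = 0.95  # example threshold; set it from your own risk tolerance


def ask_vendor_model(question: str) -> str:
    """Placeholder for the vendor's API client; wire in the real call here."""
    raise NotImplementedError


def evaluate(golden_path: str) -> float:
    """Score the vendor's answers against an internally verified golden set.

    The CSV is assumed to have 'question' and 'expected' columns curated by
    your own domain experts, not supplied by the vendor.
    """
    total, correct = 0, 0
    with open(golden_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            answer = ask_vendor_model(row["question"])
            if row["expected"].strip().lower() in answer.strip().lower():
                correct += 1
    return correct / total if total else 0.0


# Re-run on every vendor model update; a drop below the floor is the concrete
# trigger the last question on the list asks the vendor to define.
# accuracy = evaluate("golden_set.csv")
# assert accuracy >= ACCURACY_FLOOR, f"accuracy regressed to {accuracy:.2%}"
```

The point is not the threshold itself but that the update trigger becomes a number your own team controls rather than a line in the vendor's release notes.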
Innovation speed vs. governance discipline
Q. Speed to innovation has long been the north star for AI adoption. In the present environment, how should tech leaders recalibrate the trade-off between delivering faster and building auditable, regulation-ready systems?
Kinross: It sounds counterintuitive, but rigorous systems actually ship faster. Speed is an illusion if your team spends 50% of their time fact-checking AI hallucinations or fixing errors caused by ‘wild assumptions.’ This is precisely why enterprises that rushed to deploy generic chatbots are struggling to show ROI. They remain trapped in constant remediation loops rather than innovation loops. Real acceleration comes from accuracy.
By using Levr’s vetted business guidance instead of open web content, I slashed my own product development cycle from 14 months down to two months. You move infinitely faster when you don’t have to audit every output for basic competence.
The evolving role of the CTO
Q. As AI moves closer to board-level scrutiny, how is the role of the CTO changing? How does this shift play out in an environment where regulation lags behind technological capability?
Kinross: We are witnessing a fundamental shift in the corporate hierarchy. The CTO role is becoming so central that functions like Product Development, Marketing, and Sales could soon report directly to it. In fact, the CTO is rapidly becoming the logical successor to the CEO.
But this power comes with a new mandate: CTOs must act as external ambassadors to regulators and internal owners of enterprise risk tolerance.
They can no longer just speak ‘tech’; they must be able to clearly articulate business priorities that align Legal, HR, and Sales to the same strategic goal.
Q. Looking ahead three to five years, what will separate organizations that successfully navigated this regulatory transition? What will distinguish them from those who treated AI compliance as a box-checking exercise?
Kinross: The separation will be brutal but clear. Enterprises that embrace privacy and data protection as core product features will dominate because they earn the one currency that matters: fierce customer loyalty. Meanwhile, the companies that treated compliance as a gamble, hoping the regulatory storm would blow over, are going to find themselves structurally unable to compete.
In three to five years, we won’t be talking about ‘compliance costs’; we will be talking about how trust became the defining competitive advantage.
Q. If you had to leave CTO Magazine readers with one strategic takeaway, what would it be?
Kinross: Ask yourself one question: what business goal actually drives your AI foundation? Achieving that goal requires three non-negotiables: customer loyalty, regulator trust, and a brand that people love. The single most important decision you can make today is to prioritize AI technologies that strengthen your reputation rather than just expanding your tech stack. If your AI strategy builds code but burns trust, you have missed the point entirely.
In essence
The key message from this conversation is clear: In an era of AI deregulation, responsible AI is no longer enforced by regulators – it must be engineered by leaders.
As AI systems move deeper into strategic decision-making, tech leaders can no longer rely solely on speed, scale, or vendor promises. The enterprises that will win are those that treat AI as core infrastructure, built on controlled data, measurable accuracy, and clear accountability. In the absence of strict rules, trust becomes the ultimate differentiator for staying relevant in the market.