
AI Roles in Enterprise, Localization and the Truth About Hype: Insights from Kirit Goyal

AI and Tech Leadership

This interview series is grounded in lived experience. It explores how technology leaders move AI from experimentation into day-to-day operations, where decisions carry real consequences for teams, customers, and the business. Through conversations with practitioners who have led transformations at scale, the series examines how AI reshapes execution, accountability, and outcomes.

At a time when artificial intelligence is moving from centralized systems to distributed, real-world deployments, conversations with practitioners often reveal where the technology is actually heading, beyond the hype cycles.

As part of CTO Magazine’s on-the-ground coverage at the AI Visionaries Summit, an exclusive forum by Magnivel International Group where executives share insights, explore ideas, and learn from real-world experiences, we had the opportunity to engage with industry leaders shaping the future of AI.

Kirit Goyal, Director and CEO of Gazelle Information Technologies, participated as an esteemed speaker and invited panelist at the summit. With over 16 years of experience across IT, ERP, and business process consulting, he is widely recognized for his expertise in AI governance, data strategy, and delivering innovative supply chain solutions to global clients.

On the sidelines of the summit, CTO Magazine sat down with him for an in-person conversation to explore how AI is evolving from early experimentation to enterprise-scale deployment.

In this candid and wide-ranging discussion, Kirit Goyal traces his journey from early machine learning experimentation in supply chains to building AI-driven systems that integrate sensors, cameras, and edge intelligence.

The conversation also dives into AI localization, enterprise challenges like feature creep, and why global AI governance may be more complex than it sounds.

Let’s start simple: tell us a bit about your journey. How did Gazelle Information Technologies come into being, and where does AI fit into your story today?

Kirit Goyal: We started as a supply chain services company, focusing on the entire value chain, from sales forecasting and production planning to warehouse management and procurement.

Interestingly, our association with AI, or what we called machine learning back then, goes much further back than the current wave. Even in the early 2000s, we were working on stochastic forecasting models, trying to predict when equipment might fail. That, in essence, was machine learning, even if we didn’t label it as AI at the time.
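The stochastic failure forecasting Goyal describes can be illustrated with a minimal sketch. This is not Gazelle’s actual model; it assumes a simple exponential lifetime distribution fitted to hypothetical historical times-to-failure, just to show the shape of the idea: learn a rate from past failures, then estimate the probability of failure within a planning horizon.

```python
import math

# Hypothetical historical times-to-failure (in hours) for one equipment class.
failure_times = [1200.0, 950.0, 1400.0, 1100.0, 1300.0]

# Maximum-likelihood estimate of the exponential failure rate:
# rate = number of observed failures / total observed lifetime.
rate = len(failure_times) / sum(failure_times)

def failure_prob(horizon_h: float) -> float:
    """Probability the unit fails within the next `horizon_h` hours,
    under the exponential (memoryless) lifetime assumption."""
    return 1.0 - math.exp(-rate * horizon_h)

# Example: risk of failure over the next 100 operating hours.
p = failure_prob(100.0)
```

A real deployment would use richer lifetime models (e.g. Weibull, with age-dependent hazard) and condition on sensor readings, but the core step is the same: turn historical data into a probability that drives maintenance decisions.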

Over the years, what has changed is the input layer. Today, we’re not just relying on structured data; we’re integrating inputs from cameras, sensors, and edge devices. That has significantly improved the accuracy and scope of our predictions.

So yes, we started with supply chain planning and execution, but today, AI allows us to go far beyond that.


You mentioned something really interesting earlier: AI localization. It’s a buzzword now, but what does it actually mean in practice?

Let me explain that through a simple use case: sales forecasting.

Forecasting is never universal. It always depends on localized parameters: regional demand patterns, language, cultural behaviors, even environmental conditions. If your model isn’t trained on these localized inputs, it simply won’t give you accurate results.

AI doesn’t have inherent intelligence; it works purely on the data you feed it. It can evaluate probabilities at scale, but it doesn’t understand context unless you explicitly provide it.

So for organizations, the key question is: what local inputs are critical for your process? That could be language, operational nuances, or even behavioral patterns.

Now, does this mean building a new foundation model every time? Not necessarily.

A more practical approach is to build preprocessing layers or wrappers that condition the data before sending it to the core model. That’s how you make AI outputs truly relevant to local contexts.
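The wrapper approach described above can be sketched in a few lines. This is an illustrative pattern, not Gazelle’s implementation: a hypothetical `localize` preprocessing step enriches each record with local signals (region, language, a seasonal demand factor) before it reaches an unchanged core model, here represented by a trivial stand-in.

```python
from dataclasses import dataclass

@dataclass
class LocalContext:
    region: str
    language: str
    seasonal_factor: float  # e.g. festival or monsoon demand multiplier

def localize(record: dict, ctx: LocalContext) -> dict:
    """Preprocessing wrapper: condition raw data with local context
    before handing it to the shared core model."""
    enriched = dict(record)
    enriched["region"] = ctx.region
    enriched["language"] = ctx.language
    enriched["demand"] = record["demand"] * ctx.seasonal_factor
    return enriched

def core_model(record: dict) -> float:
    # Stand-in for the unchanged foundation/forecasting model.
    return record["demand"] * 1.05  # naive next-period forecast

def forecast(record: dict, ctx: LocalContext) -> float:
    return core_model(localize(record, ctx))

result = forecast({"demand": 100.0}, LocalContext("IN-North", "hi", 1.3))
```

The design point is that localization lives entirely in the wrapper: the core model is shared across regions, and each locale supplies only its own conditioning layer.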

AI is evolving incredibly fast, with major shifts arriving every few months. Looking ahead, what trends do you think will define 2026?

One major shift is happening on the hardware side: the move to edge computing.

Earlier, data would be collected, sent to centralized servers, processed, and then returned. Now, with more powerful chips available, processing is happening directly at the edge.

Even devices like CCTV cameras now come with built-in AI chips capable of running inference locally. That’s a big shift: it reduces latency and enables real-time decision-making.

The second trend is around models themselves. With continuous training and user interaction, models are improving rapidly. The quality of outputs will only get better.

However, one area where we’re still lagging is policy and governance. The frameworks around AI usage are still evolving, and there’s a lot of ground to cover there.

That brings us to a big question: do we need a global AI governance framework, especially given its use in geopolitics and warfare?

In theory, it sounds ideal. In practice, it’s extremely difficult.

Even if such a framework existed, enforcing it globally would be a challenge. AI, like any powerful technology, will always have both constructive and destructive applications.

So instead of relying solely on global frameworks, responsibility has to be shared.

  • Consumers need to be more critical and avoid taking AI outputs at face value.
  • Providers need to build guardrails into their systems to ensure outputs remain within legal and ethical boundaries.

Take deepfakes as an example. Completely preventing misuse may not be possible, but providers can implement constraints to reduce harmful applications.

Ultimately, it’s a shared responsibility ecosystem.

Let’s shift to enterprise AI. One issue that keeps coming up is feature creep: AI getting added everywhere, often without clear ROI. How do organizations deal with that?

Feature creep isn’t new; it existed even in traditional ERP implementations.

You start with a defined scope, but as the project evolves, new requirements keep getting added. That’s natural.

Earlier, strong project management could control this to an extent. But today, the pace of change is so rapid that what you design today might become obsolete in a couple of months. So, the conversation around scope creep has evolved. Now it’s less about controlling features and more about evaluating business value. If a new feature aligns with changing business processes and delivers tangible benefits, it may actually be necessary rather than excessive.

The key is to continuously reassess:

  • Does this addition improve outcomes?
  • Does it align with current business realities?

If yes, then it isn’t really creep; it’s adaptation.

In brief

What stands out from this conversation is a grounded, practitioner-led view of AI, one that cuts through the noise.

From early machine learning roots to today’s edge-driven intelligence, the journey reflects a broader industry shift: AI is no longer just about models; it’s about context, integration, and real-world usability.

And perhaps most importantly, as Goyal points out, the future of AI won’t just be defined by technological capability, but by how responsibly and thoughtfully it is applied.

Rajashree Goswami is a professional writer with extensive experience in the B2B SaaS industry. Over the years, she has honed her expertise in technical writing and research, blending precision with insightful analysis. With over a decade of hands-on experience, she brings knowledge of the SaaS ecosystem, including cloud infrastructure, cybersecurity, AI and ML integrations, and enterprise software. Her work is often enriched by in-depth interviews with technology leaders and subject matter experts.