Why Data Governance Frameworks Are No Longer Optional: Ananya Sundar on What’s Changed

AI and Tech Leadership: This interview series is grounded in lived experience. It explores how technology leaders move AI from experimentation into day-to-day operations—where decisions carry real consequences for teams, customers, and the business. Through conversations with practitioners who have led transformations at scale, the series examines how AI reshapes execution, accountability, and outcomes.

For years, enterprises believed that more data would naturally lead to better decisions. But as organizations now sit on unprecedented volumes of information, the gap between insight and action has only widened.

Ananya Sundar, Associate Director, Client Services at Neutrino Advisory, has watched this gap form from multiple vantage points: research, social listening, AI-driven food technology, and now enterprise advisory. With over 14 years of experience, she works closely with Fortune 100 and 500 companies to translate AI ambition into operational reality.
What follows is a candid conversation with Ananya Sundar on ethics, time-to-insight, and why human validation still matters in an AI-first world.

Your career spans research, social listening, food tech, and now AI advisory. When you look back, what connects these seemingly different chapters, and how did they shape the way you think about AI today?

Sundar:
I started my career with a research-based organization in the telecom domain, where everything revolved around structure and rigor. From there, I moved into a company that was building Asia’s largest social listening platform, listening to conversations across Twitter, Instagram, and other channels and converting them into large-scale data.

That phase taught me how insights are formed, not just from data, but from patterns and context. Later, when I moved into an AI-based food tech company, I truly understood what AI actually is and how it can be used well. Of course, I had some exposure in my previous organization, but that was where it came together.

Currently, I work for a company called Neutrino Advisory, where I am an Associate Director for Client Services. This means anything and everything related to client relationships: bringing in new customers and making sure existing customers stay happy. I’ve been here for about a year and a half, and I’m based out of Bangalore.

When I look at my journey, from research to listening, then insights, and now decisioning, the common thread is how we move closer to meaningful outcomes using AI.

Many organizations still rely heavily on dashboards, charts, and reports. But you’ve said that’s no longer enough. What do you believe is missing from the way enterprises think about insights today?

Sundar:
We have moved from a phase in which we used to go to libraries, look at books, research extensively, and gather responses, to a phase in which machines are giving us almost everything on a platter.

However, insights without meaning are just noise. We are very good at collecting data; there is no scarcity of data these days. But we are also entering an era of data overload. The real question now is how intelligently we handle this data: identify what is important, observe the patterns, and extract what is actually essential.

The industry is clearly marching toward a future that is far more focused on insights, noise-free insights, and ultimately toward a decisioning era, where clarity and imagination can be turned into action.

That idea of moving from data to decisions sounds powerful, but also risky. Many leaders say they have the data yet still struggle to trust AI-driven outcomes. From your experience, where do organizations usually get stuck?

Sundar:
There are definitely gaps. One very important concept is “garbage in, garbage out.” If the data fed into the system is inaccurate or unreasonable, the output will not be something you can rely on.

That’s why the most important term in this entire conversation is human in the loop. AI can do wonders, but it can do even more wonders when a human ensures that what is fed into the system is correct and noise-free, and that the outputs are legitimate.

Systems are still learning and are in a relatively nascent stage. It will take some time for AI systems to give instant insights and context-aware recommendations. Until then, human validation at every stage is extremely important.

Speed is often treated as the ultimate goal in AI. But in reality, insights don’t always come instantly. When delays happen, what kind of framework should organizations adopt instead of rushing to automate everything?

Sundar:
There will definitely be delays until the system learns and understands the ecosystem. I completely agree with that. But if you really want accurate insights, you have to give the system that time to mature.

Human-in-the-loop becomes very important here. Sometimes bias may come into play, but what truly matters is validating whether what you are seeing actually makes sense. If I query a system, get a response, and post it without even reading or understanding it, it’s not going to help anybody.

When I read it, apply my intelligence, and validate it, I can at least confirm that the system is moving in the right direction. My suggestion would be not to hurry in the initial stages. Once you are confident that the system is making sense and working correctly, you can gradually reduce dependency. But in the early phases, from research to decisions, human validation is critical.

Yes, it may delay the process to some extent, but it ensures that you land on something right, for yourself and for your customers.

Bias and data gaps are often discussed in abstract terms. For tech leaders dealing with them in real systems, what practical steps actually make a difference?

Sundar:
Bias and data gaps are very real. Leaders need to be extremely aware of what is being fed into the system. Human oversight is essential to ensure that the data and outputs are contextually accurate.

Until systems are mature enough to self-correct, human validation is the safest and most responsible approach.

Data privacy and misuse have become global concerns. Do you believe regulation can keep up with AI’s pace, or does responsibility ultimately sit elsewhere?

Sundar:
I believe both play a role. Even though we talk about data overload, sometimes the data you actually need is not available, especially in specialized domains like biotechnology. Governments and institutions can play a larger role in enabling access to the right datasets.

At the same time, the power of AI is now largely in the hands of users. If it is not used wisely, it can cause harm. With open-source models, voice AI, and generative media becoming more common, strong guardrails around privacy and security are essential.

When organizations talk about ROI from AI, the conversation often stays vague. From your perspective, which metrics truly reflect value?

Sundar:
Time to insight is one of the most important metrics: how long it takes for the system to provide meaningful insights.

The second is decision accuracy. Together, time to insight and decision accuracy are the key ROI metrics organizations should focus on.

Data literacy is another challenge, especially at scale. What should organizations realistically focus on?

Sundar:
Training is extremely important. Organizations need to encourage employees to upskill themselves continuously. At Neutrino Advisory, learning is part of our DNA—we provide support, learning allowances, and access to global platforms so employees can keep upgrading their skills.

Organizations should also encourage participation in conferences and events to gain practical exposure. Data literacy begins within the organization itself.

We’re also seeing a generational shift in the workforce. Gen Z and millennial founders often approach data infrastructure very differently compared to Gen X leaders. From your perspective, if this younger workforce is building data infrastructure or data products today, what are the non-negotiable pillars they must adhere to, no matter how fast they want to move?

Sundar:
First and foremost, I would say the ethics of the data should come first, come what may. Even though there are going to be multiple pressures to get things out quickly, if you are not treating the data ethically, you are not going to end up with anything useful—and you may enter a loop of consequences later on.

So, I would say the first pillar for anybody should be data ethics. And then, of course, don’t be in a hurry to make money. Everyone is moving very fast today. Let’s not chase speed at the cost of ethics. That should be your core pillar. There’s a proverb that says: if wealth is lost, nothing is lost; if health is lost, something is lost; but if character is lost, everything is lost. I want to apply the same analogy to AI.

Second, build with a purpose. Start with a problem-solving attitude. If you’re not building with a purpose, it will fail very soon.

And third, time to insight is very important. You need to get insights efficiently, but human-in-the-loop also plays a major role. Nobody can replace human intelligence. So as much as possible, take validation, take suggestions, take feedback, and keep improving your models as you progress.

About the Speaker: Ananya Sundar, Associate Director, Client Services at Neutrino Advisory, brings over 14 years of experience spanning client servicing, SaaS, innovation, and technology. She plays a pivotal role in building trust, engaging clients, and executing AI strategy, design, new product development, and digital marketing projects for Fortune 100 and 500 companies. Before Neutrino, Ananya anchored key AI-based engagements at AiPalette, a SaaS firm that uses artificial intelligence and machine learning to help CPG companies develop consumer-winning products. She also worked at Radarr (now acquired by Genesys), a social media listening firm, where she helped clients identify market trends, conduct research, manage crises, and optimize digital strategies. A high-achieving professional, Ananya is a recipient of the prestigious Influential Femmes 2024 award.

Rajashree Goswami

Rajashree Goswami is a professional writer with extensive experience in the B2B SaaS industry. Over the years, she has honed her expertise in technical writing and research, blending precision with insightful analysis. With over a decade of hands-on experience, she brings knowledge of the SaaS ecosystem, including cloud infrastructure, cybersecurity, AI and ML integrations, and enterprise software. Her work is often enriched by in-depth interviews with technology leaders and subject matter experts.