
Open-Source AI and Enterprise Tech’s Evolution: A Conversation with Praveen Akkiraju
We’re in the midst of one of the most profound technology upheavals since the dawn of the internet — and artificial intelligence is at its very core. The noise around open-source AI is deafening, but for enterprise tech leaders, the real challenge isn’t hype. It’s cutting through the clutter to find what truly matters.
For Praveen Akkiraju, this moment feels familiar. “Growing up, I used to bike to the edge of an airfield to watch planes take off,” he says. “I always imagined jumping on one and figuring out how things worked.” That curiosity has powered a career of bold leaps across deep tech, edge computing, and venture capital.

Praveen has helped build some of the world’s most critical infrastructure. At Cisco, he worked on the foundational protocols that underpin the internet. Later, he became CEO of VCE, where he helped create a new cloud infrastructure category and led the company to $1 billion in revenue. At Viptela, he scaled an enterprise SaaS platform to over $100 million in run-rate revenues before its acquisition.
Now, as Managing Director at Insight Partners, Praveen works with the next generation of enterprise AI founders, bringing a rare combination of product instinct, operational experience, and strategic clarity to the boardroom. His focus spans automation, DevOps, security, data infrastructure, and the AI-native stack reshaping them all.
But above all, he’s in it for the people. “Great companies are built by individuals who dare to take a chance,” he says. “Investing is a journey with founders—I bring my hands-on experience and Insight’s resources to help build something enduring.”
We sat down with Praveen to explore where this AI wave is really headed, what CTOs need to prioritize now, and why open-source AI is both a powerful equalizer and a trap if you’re not ready for it.
Let’s get into it.
Praveen, you’ve worn quite a few hats over the years—engineer, CEO, investor. Before we dive deep into AI, let’s rewind a bit. How did your journey take you from leading engineering teams at Cisco to building a startup in edge computing, and now, into venture capital at Insight Partners?
Akkiraju: Yeah, it’s been a bit of a winding road, but with a common thread—deep tech. I come from an operating background.
I spent much of my career at Cisco, running engineering teams and later serving as GM for the enterprise business. After that, I led a cloud infrastructure company and later built a startup in edge computing. Once we were acquired, I transitioned to investing and joined Insight about five years ago.
Today, I’m a Managing Director based in the San Francisco Bay Area, focusing primarily on deep tech infrastructure. That includes the developer ecosystem, cybersecurity, the data and AI stack, and especially enterprise automation. I’ve been particularly drawn to how AI, especially generative AI, is finally delivering on the decades-old promise of automation.
At Insight, I lead many of our AI investments and thought leadership in this space.
That’s a front-row seat to a lot of evolution. And still, many people say AI is in a “proving phase.” From your vantage point—both as an operator and now as an investor, what feels different about this moment? What should enterprise leaders be paying attention to?
Akkiraju: It’s true, AI has been around for decades. Even back in grad school, I remember doing AI projects, though most of it was purely theoretical. Things really began to shift with deep learning in the mid-2010s. The first breakthrough we could all see was in autonomous vehicles—AI being used to make real-world predictions, at scale.
But the current wave, driven by large language models and generative AI, is a completely different beast. What changed is this: AI can now recreate, not just recognize. Earlier models were great at tagging photos or predicting maintenance issues. Now, they can write content, generate code, and summarize legal documents. That leap—from recognizing patterns to generating original output—is what makes this moment so pivotal.
And that’s what makes Gen AI such a game-changer for enterprise software. Most enterprise workflows are essentially: user inputs data, software manipulates it, and something gets recorded in a database. With generative AI, you can start embedding those workflows directly into the application. The AI isn’t just interpreting data—it’s completing tasks. That’s where real automation and real productivity gains start to materialize.
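To make that concrete, here is a minimal sketch of the pattern he describes: the generative step completes the task inside the workflow, and the result is recorded in a database. Every name here is hypothetical, and `summarize` is a stand-in for a real model call (hosted or open source).

```python
import sqlite3

def summarize(text: str) -> str:
    """Placeholder for an actual LLM call; here it just truncates."""
    return text[:60] + "..." if len(text) > 60 else text

def process_ticket(db: sqlite3.Connection, ticket_text: str) -> int:
    """Classic enterprise workflow: input -> AI completes the task -> record."""
    summary = summarize(ticket_text)  # the AI step does the work itself
    cur = db.execute(
        "INSERT INTO tickets (body, summary) VALUES (?, ?)",
        (ticket_text, summary),
    )
    db.commit()
    return cur.lastrowid

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, body TEXT, summary TEXT)")
row_id = process_ticket(
    db, "Customer reports login failures after password reset on mobile app version 3.2"
)
print(db.execute("SELECT summary FROM tickets WHERE id = ?", (row_id,)).fetchone()[0])
```

The point of the sketch is the shape of the pipeline, not the stub: swapping `summarize` for a real model call is what turns a data-entry workflow into an automated one.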
Let’s zoom out a little. So, with your long view of the industry, what are the key trends you’ve noticed in enterprise AI?
Akkiraju: You can almost map out the evolution in chapters. We’ve had multiple “AI moments,” but until now, we never had the full stack: compute power, models, data pipelines, and user interfaces all coming together at once.
That changed around 2017–2018, with deep learning going mainstream. Autonomous vehicles were the first high-profile use case people could understand. Now, companies like Waymo handle a significant chunk of ride-hailing in San Francisco. That kind of adoption took nearly a decade to mature.
With generative AI, things are moving even faster. And here’s the shift: instead of just helping people interpret data, AI can now do the task itself. That has massive architectural implications. It changes how enterprise software is built, from data layers and APIs to how users interact with an app.
We’re not just embedding AI into products; we’re rebuilding the products around AI.
A lot of that rebuilding seems to be happening around open-source models—Mistral, LLaMA, DeepSeek. Some CTOs are going all-in; others are hesitant. What’s your take? Is open source the future or just a side route?
Akkiraju: It’s a fascinating moment. In some ways, all AI is built on open source. The original Transformer paper—arguably the foundation for today’s LLMs—was published by Google in 2017. Since then, models like LLaMA, Mistral, and DeepSeek have pushed the open-source frontier forward.
So yes, open source is core to AI’s evolution. But when it comes to adoption, I’d frame it not as open source vs. proprietary; it’s more of a hybrid future. Think of it like operating systems: Windows, Mac, and Linux all coexist. Enterprises will choose what fits their needs.
For some, using OpenAI or Google’s vertically integrated models is ideal. They don’t want to deal with model deployment, tuning, or integration. For others—say, developers or companies with the right AI ops teams—open-source models can offer cost efficiency and control.
What’s interesting is that open source is driving down the cost of inference dramatically, forcing the proprietary players to respond. So even if a company doesn’t use open source directly, it still benefits from the competitive pressure it creates.
You mentioned DeepSeek earlier; it’s been getting attention for doing much with less. How do you see models like that stacking up against giants like OpenAI? Could they eventually become the go-to for enterprise teams?
Akkiraju: DeepSeek definitely made waves with its early releases, especially because of how much performance it packed into a smaller model footprint. That’s the real innovation—how efficient can you make a model without sacrificing accuracy?
That said, we have to be clear: we’re still in the early stages of adoption. A lot of teams are testing these models, but few have fully productionized them. Enterprises aren’t just chasing model benchmarks—they care about outcomes: improved customer experience, faster time-to-insight, better decision-making.
Open-source AI helps democratize that experimentation. But these models aren’t plug-and-play. You need the right infrastructure to run and maintain them. That’s where proprietary platforms still have an edge—ease of use, integrated UX, security, and compliance.
So yes, DeepSeek and others have a big future. But for now, most companies are mixing and matching—sometimes in surprising ways.
That makes sense, but as AI tools expand rapidly, do you think there’s a real risk of “feature creep,” especially with open-source tools? How should CTOs manage that?
Akkiraju: That’s a great question. When we talk about “feature creep” in AI, it’s really about model bloat—adding more and more capabilities, often at the expense of stability or clarity.
From a CTO’s point of view, you’re building an application, not just deploying a model. So what matters most isn’t which model has more features. It’s: Does this make my product better? Is it more reliable? Does it scale affordably?
Right now, model providers are focused on three things:
- Accuracy and consistency – Reducing hallucinations and making model output dependable.
- Efficiency – Getting smaller models to perform on par with larger ones. This is what DeepSeek nailed.
- Multimodality – Text, audio, image, video… if AI is going to be a real co-worker, it has to interact like humans do.
So CTOs should build their AI systems with abstraction layers. Don’t hardwire your application to a single model. Think of it like designing a car whose engine can be swapped out every year for a better one.
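That abstraction-layer advice can be sketched in a few lines of Python. All class and model names here are hypothetical; a real router would wrap provider SDKs (OpenAI, Anthropic, a self-hosted LLaMA endpoint) behind the same interface rather than stubs.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only interface application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    """Stand-in for any provider SDK; replace with a real client."""
    def __init__(self, name: str):
        self.name = name
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"

class ModelRouter:
    """Thin abstraction layer: callers never touch a vendor SDK directly,
    so the underlying model can be swapped without changing application code."""
    def __init__(self, default: ChatModel):
        self._models: dict[str, ChatModel] = {"default": default}
    def register(self, task: str, model: ChatModel) -> None:
        self._models[task] = model
    def complete(self, prompt: str, task: str = "default") -> str:
        model = self._models.get(task, self._models["default"])
        return model.complete(prompt)

router = ModelRouter(default=StubModel("open-source-llm"))
router.register("coding", StubModel("proprietary-llm"))
print(router.complete("Summarize this contract."))           # routed to default
print(router.complete("Write a unit test.", task="coding"))  # routed per task
```

Swapping the engine then means changing one `register` call, not every call site, which is exactly the flexibility the car metaphor argues for.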
That’s a great metaphor. But it’s easy to get swept up in the excitement around open source and forget the operational cost. What are the blind spots CTOs tend to miss when going that route?
Akkiraju: Yeah, that’s a real issue. There’s often a gap between experimentation and production. Just because a model has millions of downloads doesn’t mean it’s powering real products.
Open-source models do offer a big win on cost—especially inference cost. And they’ve pushed even the proprietary players to be more efficient. But you’re also taking on more responsibility. You need a team that can fine-tune, deploy, secure, and monitor these models. It’s not a weekend project.
So CTOs need to ask themselves: What’s the business case?
If the use case is mission-critical and reliability matters, you might want the support of a commercial provider. But if you have a strong engineering bench and want flexibility and control, open-source AI can be a great bet. You just have to go in with your eyes wide open.
You’ve said this AI wave isn’t just about tech, it’s about culture. How are smart companies introducing AI without creating confusion or fear? Any roadblocks you’ve seen?
Akkiraju: Culture is huge. Across our portfolio at Insight, we’re seeing companies mandate AI literacy. They’re buying licenses to tools like ChatGPT, Cursor, and GitHub Copilot—then investing in training, not just access.
You can’t just say “Use AI.” You need to teach your teams how to use it responsibly and effectively.
And there’s still fear. People worry about job replacement. But the truth is—AI is here to enhance productivity, not replace workers. Klarna’s example is a great one. They replaced 200 customer support reps with AI chatbots, then had to rehire them because the experience didn’t hold up.
We’re still in the augmentation phase. Smart companies position AI as a tool that helps employees spend more time on creative, high-value work, and less on repetitive tasks.
Alright, last one: what’s the one piece of advice you’d give to a Gen Z or millennial CTO stepping into the AI world today? No buzzwords, no hype—just the kind of guidance that actually sticks.
Akkiraju: Stay curious. Break things. Try models, test code, fail fast. This space is moving incredibly quickly, and no one has all the answers.
Don’t marry one model; design with abstraction. Use the right tool for the right task. Claude might be great for coding, ChatGPT for research, and Gemini for video. Let flexibility be part of your architecture.
And always remember: AI is a means to an end. Build something that solves a real problem, not just something cool with AI. If you can give users a better experience, more speed, more insight, more ease, then you’re doing your job. That’s what will endure.