
Engineering Discipline for AI-Driven Software Development: Insights from Divyesh Patel
The pressure on enterprise leaders to create AI-driven software has never been higher. Boards expect it. Investors signal it. Competitors announce it. But for the engineers and architects who must actually deliver these systems (at scale, under real-world conditions, with meaningful accountability), the challenge is far more nuanced than the conversation around it tends to suggest.
Divyesh Patel, CEO of Radixweb, a global software engineering firm with a 25-year track record across industries including healthcare, fintech, and legal services, has watched multiple waves of transformative technology arrive, overpromise, and eventually settle into something durable and useful. He believes AI will follow a similar arc, but only for organizations willing to build the discipline to use it well.
In this conversation, Patel shares his perspective on where AI is genuinely changing software development, what separates productive adoption from performative urgency, and how he expects the role of engineering partners to evolve as AI matures.
The relationship between a business problem and a technology solution has always been complicated. Has AI made that relationship more or less clear for the organizations you work with?
Patel: Honestly, more complicated… at least in the short term. Five years ago, a client would come in and describe a problem: “Here is what’s broken, here is what we need to fix.” That’s actually the best possible starting point, because it gives you something real to work with before you start talking about approaches.
Now a significant number of clients walk in and say “AI” three or four times within the first five minutes, before they’ve even properly described what they’re dealing with.
I understand the pressure they’re under. Leadership is being asked to demonstrate action on this front, and that’s a reasonable expectation. But when AI is the starting point rather than the outcome of a proper diagnostic, you often end up building something that looks like progress without actually changing anything that matters. So we invest more time now at the beginning of an engagement just slowing things down — making sure we understand what we’re actually solving before we talk about how.
How do you help a client get from ‘we want AI’ to a problem worth solving?
Patel: We ask them to describe the specific decision or process they’re trying to change, and to be very concrete about it. The quality of that answer tells you almost everything about whether a project is going to go well. If someone can say, “right now our support team manually triages five hundred tickets a day and we want to automate the categorization so they can focus on resolution” — that’s a conversation we can do something genuinely useful with. But if the answer is “we want to make our product smarter” or “we want to leverage AI to improve the user experience,” then we have to do a lot more digging before we can be helpful. The specificity of the problem determines the quality of the solution. That’s not new, but it matters more with AI because the surface area for ambiguity is so much larger.
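Patel’s ticket-triage example is concrete enough to sketch. As a purely illustrative baseline (not Radixweb’s implementation; the categories and sample tickets below are invented), here is what “automate the categorization” could translate to as an engineering task, using scikit-learn’s TF-IDF features and a linear classifier:

```python
# Illustrative sketch only: ticket categorization as supervised text
# classification. Categories and example tickets are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few labeled historical tickets (in practice, thousands drawn from
# the support queue's own history).
tickets = [
    "Cannot log in after password reset",
    "Invoice shows the wrong billing amount",
    "App crashes when exporting a report to PDF",
    "How do I add a new user to my account?",
]
labels = ["auth", "billing", "bug", "how-to"]

# TF-IDF features feeding a logistic regression: a deliberately simple,
# inspectable baseline rather than anything exotic.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tickets, labels)

# New tickets get a predicted category; in production, low-confidence
# predictions (checked via predict_proba) would still route to a human,
# so the team focuses on resolution rather than triage.
print(model.predict(["My invoice shows the wrong amount"]))  # -> ['billing']
```

The point of the sketch is the framing rather than the model choice: a problem stated this specifically gives an engineering team a measurable target, which is exactly the specificity Patel is describing.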
There’s an enormous amount of noise around AI-driven software. What is genuinely real and where does the hype take over?
Patel: The productivity impact is real. When experienced developers work with these tools thoughtfully, the speed at which they can move on certain kinds of problems is legitimately different from what was possible even two or three years ago. I have seen that firsthand in our own teams. So that’s not hype, that’s a genuine shift in what a well-equipped engineering team can accomplish.
Where the hype takes over is in the idea that AI can somehow replace the judgment and experience that good software engineering requires. It can accelerate a lot of things. But it doesn’t understand context the way a seasoned engineer does. It doesn’t know the history of a system, it doesn’t understand why certain architectural decisions were made, and it doesn’t understand the downstream consequences of getting something wrong in a specific environment. There’s a category of work that requires that kind of deep situational knowledge, and AI tools are not close to handling it independently.
How are you integrating AI into your own engineering workflows, and where have you drawn deliberate boundaries?
Patel: We use AI-driven software tools across code assistance, automated testing, documentation, and requirement analysis. These are everyday parts of how our teams work now. But we’ve been quite deliberate about where we draw the line.
Architectural decisions still come from experienced engineers. How a system is structured, how components interact, where the failure points are… all of that is not something we hand to a tool. We’ve also invested heavily in training our people not just to use these tools, but to question what comes out of them. A large portion of our developers now hold AI certifications from IBM and Microsoft Azure, and that came from something we noticed early on: AI output can sound completely right and still be wrong for your specific situation. A developer without sufficient experience won’t catch that. The work for us has been less about adopting AI and more about building the right discipline around it.
Industries like healthcare, fintech, and legal operate with zero tolerance for unexplained error. How does your approach to AI-driven development change when the stakes are that high?
Patel: We are considerably more conservative in those environments, and I think that’s the right posture. The way we think about it is that AI in regulated contexts should augment human judgment, not replace it. It can help a clinician surface relevant information faster, it can help a compliance team flag anomalies they might have missed, and it can help a legal team work through documents more efficiently, but the human has to own the decision.
What we also spend significant time on in regulated industries is explainability. It is genuinely not sufficient for an AI system to produce a correct answer if it cannot show how it arrived at that answer, because the people who are accountable for those outcomes (the doctors, the compliance officers, the lawyers) need to be able to follow that reasoning and stand behind it. A large portion of the AI solutions currently being marketed into these industries don’t meet that bar. They perform well in controlled settings and fall apart when you introduce the complexity of a real operating environment. That’s one of the reasons we tend to move carefully in these sectors.
What does good AI governance actually look like inside a software engineering organization?
Patel: It means having clear accountability for every AI-assisted output before it reaches production. It means not treating AI certification as a one-time credential but as an ongoing discipline — the tools are changing fast enough that what someone knew twelve months ago may not reflect what they need to know today. It means documenting the reasoning behind architectural decisions, particularly the boundaries you’re setting around where AI is and isn’t making consequential choices. And it means having the courage to slow things down when the pressure to ship is running ahead of confidence in what you’re shipping. Governance sounds bureaucratic, but in practice it’s just the discipline that prevents preventable failures.
How would you advise a CTO today who is trying to separate genuine strategic opportunity from noise?
Patel: I’d tell them to start with the problems their organization actually has, not the solutions being marketed at them. The organizations getting the most real value from AI-driven software development right now are the ones that identified high-friction, well-understood processes and applied AI thoughtfully to reduce that friction, not the ones that announced an AI strategy and then went looking for use cases. The technology is genuinely capable of delivering meaningful impact. But it requires the same rigor that any serious engineering challenge requires: understand the problem deeply, design the solution deliberately, and build in the mechanisms to know when something is going wrong before it causes damage.