
Building an AI Operating Model That Delivers: Inside Thomas Squeo’s Playbook

Enterprise AI transformation: Learn the principles of building an AI operating model that connects technology, people, and strategy for sustainable business outcomes.

Enterprise AI adoption is accelerating at an unprecedented pace, yet many organizations are struggling to translate momentum into meaningful business outcomes. The challenge runs deeper than tools or models – it lies in how enterprises modernize their systems, structure their teams, and align technology with tangible business outcomes.

To explore this shift, we spoke with Thomas Squeo, CTO of Thoughtworks Americas. In this conversation, he offers a clear, experience-backed perspective on building an AI operating model. Squeo emphasizes that scaling AI is not just about adding more tools – it requires alignment with culture, processes, and leadership to support AI-driven innovation.

He highlights how leading enterprises are moving from isolated AI experiments to fully embedded AI capabilities, demonstrating that the most successful organizations treat modernization, engineering, and AI as a connected system.


AI and modernization strategy

Your recent research with IDC suggests that many enterprise AI initiatives struggle not because of the technology itself, but because the underlying approach is outdated. The report also notes that only 12% of organizations believe their modernization efforts actually prove value. Where does the misalignment usually occur between modernization investment and what AI actually requires? How does this affect achieving measurable business outcomes?

Squeo: At a high level, the misalignment comes from treating modernization, AI, and business outcomes as separate initiatives rather than a connected system.

Many organizations still approach modernization as a technology upgrade, focused on cloud migration, tooling, or cost reduction. At the same time, AI is pursued through isolated pilots or use cases, often disconnected from core business workflows. The result is a fragmented landscape. This is where the underlying architecture, data foundations, and operating model are not aligned to support AI at scale or deliver meaningful outcomes.

More specifically, the breakdown tends to occur across three layers of the stack and value chain.

  1. At the foundation layer, data is often siloed, poorly governed, or not structured for real-time use, limiting the effectiveness of AI models.
  2. In the engineering and platform layer, organizations lack the product-centric and platform-based capabilities needed to operationalize AI reliably, including MLOps, observability, and continuous delivery.
  3. On the business layer, there is often no clear linkage between AI initiatives and measurable value drivers such as revenue growth, cost optimization, or risk reduction. Without that alignment, modernization becomes activity without impact, and AI becomes experimentation without scale.

The organizations that are breaking through align all three layers. They treat modernization as the foundational prerequisite for AI and tie both directly to business outcomes from the start.

From projects to platforms

What are the first technical or organizational signals that a company is ready to move from intermittent modernization to a continuous model?

Squeo: At a macro level, the shift from intermittent modernization to a continuous model shows up when modernization stops being treated as a program and starts operating as a built-in capability of the organization. This is a key step in building an AI operating model that can reliably scale across teams and systems.

The earliest signals are less about technology adoption and more about consistency in how decisions are made and work gets done.

You begin to see alignment around product-oriented structures, clear ownership of outcomes, and a bias toward incremental change rather than large, disruptive initiatives.


Modernization becomes part of the normal flow of delivery, not something that is periodically funded and executed as a separate effort.

From a more concrete perspective, there are a few clear signals across the stack and operating model.

At the engineering layer, teams are deploying frequently with mature CI/CD pipelines, strong automated testing, and observability that informs real-time decisions. In the platform layer, there is an internal developer platform or shared capabilities that reduce cognitive load and standardize how systems are built and operated. On the data and AI layer, data is increasingly treated as a product, with governance and accessibility designed for reuse and real-time consumption.
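One of the engineering-layer signals above, frequent deployment, can be checked with a simple metric. The sketch below is illustrative: the function name and the sample data are assumptions, not something from the article, and the "right" frequency threshold will vary by organization.

```python
# Hypothetical sketch: average deployment frequency as one readiness signal.
# Sample dates and any threshold you compare against are illustrative.
from datetime import date

def deploys_per_week(deploy_dates: list[date]) -> float:
    """Average deployments per week over the observed window."""
    if len(deploy_dates) < 2:
        return float(len(deploy_dates))
    span_days = (max(deploy_dates) - min(deploy_dates)).days
    weeks = max(span_days / 7, 1)  # avoid dividing by a sub-week window
    return len(deploy_dates) / weeks

# Example: eight deployments over a 23-day window
dates = [date(2024, 1, d) for d in (1, 3, 8, 10, 15, 17, 22, 24)]
rate = deploys_per_week(dates)  # roughly 2.4 deploys per week
```

Tracked over time rather than as a one-off number, a rising rate is one indication that delivery is becoming continuous rather than episodic.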

Organizationally, funding shifts toward persistent product teams, while leadership focuses on measuring outcomes instead of outputs. Clear feedback loops connect business performance directly to engineering decisions, ensuring continuous alignment and impact.

When these elements are in place, modernization is no longer episodic. It becomes continuous, compounding, and directly tied to business value.

Technology shifts often require cultural shifts. What changes in leadership mindset are necessary to move organizations toward continuous modernization?

Squeo: At a broader level, the most important shift is moving from a control-oriented mindset to a systems-oriented one.

In traditional models, leaders focus on managing scope, timelines, and individual initiatives.

In a continuous modernization model, by contrast, the focus shifts to shaping the environment in which teams operate. That means prioritizing flow over milestones, outcomes over outputs, and adaptability over certainty.

Leaders must embrace incremental progress, foster continuous learning, and confidently make decisions even when information is incomplete. The role evolves from directing work to enabling high-performing systems that can sustain change over time.

More specifically, this shows up in a few critical behavioral changes.

Leaders move from funding projects to funding long-lived product teams with clear accountability for business outcomes. They shift from measuring activity to measuring impact, tying engineering efforts directly to value creation.

There is also a shift in how risk is managed, from avoiding change to reducing its cost through automation, testing, and platform capabilities. At the organizational level, this requires even more: trust in teams, investment in engineering excellence, and a willingness to standardize where it accelerates and differentiate where it matters.

The leaders who succeed are those who see modernization not as a transformation to complete, but as a core organizational capability that continuously evolves with the business.

Engineering and architecture in building an AI operating model

The study notes that improved development speed and agility are key benefits organizations are pursuing. How closely tied are these engineering improvements to successful AI adoption?

Squeo: At a conceptual level, engineering speed and agility are not just adjacent to AI adoption – they are foundational to it.

AI systems are inherently iterative. Models need to be trained, evaluated, deployed, monitored, and continuously improved based on real-world feedback. If an organization cannot move code, data, and models through that lifecycle quickly and reliably, AI remains stuck in experimentation.

What many organizations underestimate is that AI amplifies the need for mature engineering practices. It increases the frequency of change, the complexity of dependencies, and the importance of feedback loops. Without strong delivery fundamentals, AI initiatives struggle to scale or sustain value.

More concretely, the connection shows up across multiple layers of the stack.

At the application lifecycle level, CI/CD pipelines evolve into CI/CD/CT (Continuous Training) pipelines that include model training and validation. In the platform layer, capabilities like MLOps, feature stores, and observability become critical to operationalizing AI. On the data layer, real-time access, data quality, and governance directly impact model performance. And at the business layer, faster engineering cycles enable tighter alignment between AI outputs and business outcomes through rapid experimentation.
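The CI/CD/CT idea above can be made concrete with a promotion gate: a retrained model only replaces the production model if it clears a quality bar and does not regress. This is a minimal sketch under assumed metrics and thresholds; the function names and the numbers are illustrative, not part of Squeo's description.

```python
# Hypothetical sketch of a continuous-training (CT) promotion gate.
# Metric values and thresholds are illustrative assumptions.

def ct_gate(candidate_metric: float, production_metric: float,
            min_improvement: float = 0.0, min_absolute: float = 0.8) -> bool:
    """Promote a retrained model only if it meets an absolute quality bar
    and does not regress relative to the model currently in production."""
    meets_bar = candidate_metric >= min_absolute
    no_regression = candidate_metric >= production_metric + min_improvement
    return meets_bar and no_regression

# Example: a retrained model scoring 0.86 against a production model at 0.84
promote = ct_gate(candidate_metric=0.86, production_metric=0.84)
```

In a real pipeline this check would run as a stage after training and validation, with the metrics coming from an evaluation suite and the decision feeding deployment automation.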

Organizations that excel here treat AI as an extension of their engineering system, not a separate capability.

The ones that struggle tend to bolt AI onto slow, fragmented delivery environments. In these cases, the lack of speed and agility becomes the primary bottleneck to realizing value.

What are some practical steps technology leaders can take to reduce technical debt while still delivering new AI capabilities quickly?

Squeo: The key is to stop treating technical debt reduction and AI delivery as competing priorities. They need to be managed as a single system.

AI initiatives quickly expose weaknesses in architecture, data quality, and delivery pipelines. The most effective approach is to treat AI-driven work as a forcing function to address the right technical debt.

Instead of broad, unfocused remediation programs, leaders should target debt that directly impacts the speed of change, data usability, and system reliability. This creates a compounding effect where every improvement accelerates current delivery and enables future AI capabilities.

In practice, this comes down to a few focused actions across the stack.

At the application lifecycle layer, embed refactoring into the delivery process by allocating a fixed percentage of every sprint to addressing high-friction areas, particularly around test coverage, modularity, and deployment pipelines. On the platform layer, invest in an internal developer platform that standardizes environments, CI/CD, and observability, reducing variability and preventing new debt from accumulating. In the data layer, prioritize data quality, lineage, and accessibility, treating data as a product so AI models have a reliable foundation.
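The "fixed percentage of every sprint" practice described above can be expressed as a simple capacity split. The 20% share and the field names below are assumptions for illustration, not figures from the interview.

```python
# Hypothetical sketch: reserving a fixed share of sprint capacity for
# technical-debt remediation. The 20% default is an assumed example.

def plan_sprint(capacity_points: int, debt_share: float = 0.20) -> dict:
    """Split a sprint's capacity between feature work and debt reduction,
    reserving a fixed share for debt every sprint."""
    debt_points = round(capacity_points * debt_share)
    return {
        "debt": debt_points,                       # refactoring, tests, pipelines
        "features": capacity_points - debt_points  # new AI capabilities
    }

plan = plan_sprint(50)  # {'debt': 10, 'features': 40}
```

The point of fixing the share is that debt work happens by default in every sprint, rather than being renegotiated against feature pressure each time.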

Moreover, shift funding toward persistent product teams with clear ownership of both delivery and system health. And make technical debt visible by tying it to business impact, such as cycle time, incident rates, or model performance degradation.

The leaders who do this well create a continuous loop where delivering AI capabilities actively improves the system rather than degrading it.

If you had to identify one structural change that would most improve AI success rates in large organizations, what would it be?

Squeo: In my view, the single most impactful structural change is shifting from project-based delivery to persistent, product-aligned teams that own outcomes end to end. This shift is at the heart of building an AI operating model.

Most AI initiatives fail not because of models or tools, but because they are executed as temporary efforts layered onto fragmented organizations.

When teams are disbanded after delivery, ownership of models, data pipelines, and ongoing optimization is lost. AI, by its nature, requires continuous iteration, monitoring, and refinement. Without stable teams that own both the system and the business outcome, organizations cannot sustain or scale value.

In practice, this means organizing around products or value streams with long-lived, cross-functional teams that include engineering, data, and domain expertise. These teams should own the full lifecycle, from data ingestion and model development to deployment, monitoring, and business integration. This spans multiple layers of the stack, including data platforms, ML and application layers, and the business workflows they support. It also requires a shift in funding and governance, moving from milestone-based investment to outcome-based accountability.

Organizations that make this change create the conditions for AI to compound over time, where each iteration improves both the system and the business result, rather than resetting progress with each new initiative.

Leadership journey

Over the course of your career, you’ve worked closely with organizations navigating large-scale technology transformation. Which experiences have most shaped your perspective and leadership approach as a Chief Technology Officer?

Squeo: My leadership perspective has been shaped by three defining experiences prior to Thoughtworks. Each of these experiences reinforced that transformation is fundamentally about people, systems, and discipline working together.

In the Navy, I learned what it meant to be well-led in environments where continuous improvement was not optional. It was embedded at the operational, organizational, and individual levels. Leadership was treated as a skill that required constant investment and embodiment. At the same time, we were building and operating systems at what today would be considered internet scale, in mission-critical contexts where failure was not an option. That experience grounded me in the importance of accountability, clarity, and resilience under pressure.

I applied those principles at enterprise scale, first at Measured Progress and later at Intrado.

At Measured Progress, I led a large-scale agile transformation across engineering, operations, and product. The focus was shifting the organization toward more adaptive, outcome-oriented ways of working.

At Intrado, the focus expanded to a full enterprise engineering transformation, introducing platform engineering, evolving from project-based to product-led models, and directly impacting business valuation and talent attraction.

Across all three experiences, the consistent lesson was servant leadership: the role of leadership is to create the conditions where good teams become great teams, where engineering excellence aligns with business outcomes, and where transformation becomes a sustained capability rather than a one-time event.

As CTO for the Americas at Thoughtworks, you operate at the intersection of technology strategy and enterprise change. What does your role look like in practice? Also, what are some of the key priorities that occupy your day-to-day work?

Squeo: In practice, my role sits at the intersection of strategy, practice, and partnerships, with a primary focus on advisory across the Americas.

I work closely with senior executives to shape business technology strategy, with a particular emphasis on AI-driven engineering transformation and AI-assisted modernization in large enterprises. These engagements are rarely constrained by technology alone.

The real challenge, and where I spend most of my time, is at the socio-technical intersection of operating model, culture, and technical strategy. This includes defining target architectures, evolving organizations toward product and platform-based models, and identifying where AI can be embedded to drive meaningful business outcomes rather than isolated use cases.

Beyond client work, I focus on strengthening Thoughtworks’ market position by advancing our practices and deepening our partner ecosystem. This includes incubating new approaches in areas like agentic AI, platform engineering, and modern software delivery, all while ensuring our capabilities are both leading-edge and grounded in execution.

I also work closely with delivery and leadership teams to translate strategy into action, ensuring that what we design is practical, scalable, and outcome-driven.

Day to day, the role balances advisory, capability building, and ecosystem alignment. The consistent priority is helping clients navigate complex transformation in a way that is both ambitious and achievable.

Could you share what currently excites you most about the work happening at Thoughtworks? Also, are there any significant transformations you’re seeing across the industry?

Squeo: What excites me most right now is the convergence between modern engineering practices and AI, particularly the emergence of agents and more agentic systems.

For years, we have been helping organizations move toward product-centric operating models, platform engineering, and cloud-native architectures. Those foundations are now proving to be critical enablers for a new phase of transformation.

Today, AI is no longer an add-on capability. It is reshaping how software is built, how systems operate, and how teams engage with complex problem spaces.

At Thoughtworks, we are seeing firsthand how AI, combined with strong engineering discipline, enables teams to go beyond incremental improvement. It opens the door to entirely new ways of working and delivering value.

Across the industry, the most compelling shifts are happening where organizations use AI and agentic approaches to fundamentally change the economics and speed of delivery.

We are seeing leading firms free engineering teams from routine toil, allowing them to focus more deeply on higher-order problems and innovation. With that support, work that was traditionally measured in quarters is now being delivered in weeks, and sometimes even faster. While efficiency is part of it, the real change is in what teams feel empowered to pursue: they are experimenting more, learning faster, and pushing further into problem spaces that were previously out of reach.

The organizations that are getting this right are not just adopting AI. They are integrating it into their operating model, engineering systems, and culture, which is where the real transformation is happening.

Looking ahead

As enterprises mature in their AI journeys, how do you see the strategies evolving over the next five years?

Squeo: I would challenge the premise of a five-year horizon in this space. The velocity of change we are seeing in AI, particularly around foundation models, agentic systems, and developer tooling, makes that kind of forecast window increasingly unreliable.

What used to evolve over multi-year cycles is now compressing into quarters. Strategies that look differentiated today can become table stakes within 12 to 18 months. So rather than trying to predict a fixed end state, the more useful lens is how organizations build the capability to continuously adapt as the landscape shifts.

What we are seeing, even in the near term, is a clear directional evolution. Organizations are moving from isolated use cases to AI embedded across value streams, from copilots to more autonomous agentic systems, and from experimentation to operationalization at scale. This spans the full stack, from data foundations and AI platforms to application architectures and business workflows.

The differentiator will not be who picks the right model or tool, but who can integrate AI into their operating model, engineering system, and decision-making processes.

The next phase is less about adopting AI and more about becoming an AI-native enterprise, where continuous learning, rapid iteration, and tight alignment to business outcomes are built into how the organization runs.

Advice and mentorship

What advice would you give to future CTOs who want to build an AI strategy that truly supports long-term innovation and growth?

Squeo: The most important advice is to treat AI strategy as a business strategy enabled by technology, not a technology strategy in isolation. Many organizations start with models, tools, or use cases, but the real leverage comes from aligning AI to how the business creates value. That means being explicit about where AI can drive differentiation, where it can improve efficiency, and how it connects to measurable outcomes. It also requires laying the foundations for sustainable AI, including data quality, engineering discipline, and platform capabilities. Without that alignment, AI efforts tend to fragment into isolated experiments with little lasting impact.

In practice, future CTOs should focus on building adaptive systems rather than fixed roadmaps. This spans multiple layers of the stack and value chain. At the data layer, treat data as a product with clear ownership, governance, and accessibility. At the platform layer, invest in capabilities such as MLOps, observability, and internal developer platforms to enable rapid iteration. In the application layer, design systems that can incorporate AI dynamically rather than as a bolt-on.

Likewise, shift to product-aligned teams that own outcomes end to end, with feedback loops tying AI performance directly to business impact. The real winners are those who continuously learn and evolve. Because in AI, lasting advantage comes from compounding progress, not a single breakthrough.

In brief

Building enterprise-grade AI isn’t just about deploying more models – it’s about creating the right foundations to sustain and scale them effectively.

Organizations that treat modernization, engineering, and AI as a connected system – rather than separate initiatives – are the ones turning ambition into real impact. As Thomas Squeo emphasizes, the future belongs to enterprises that embed continuous modernization into their DNA, align technology closely with business outcomes, and create the conditions for innovation to compound over time.

About the Speaker: Thomas is a results-oriented technology executive focused on the business of software. Capable and forward-looking, Thomas leads within mission-driven, agile organizations that differentiate themselves through excellence in product innovation and customer-centered solution delivery. He has a proven record in product development and in program and project execution focused on the application of mobile, social, data, and cloud technologies. Thomas has led in organizations ranging from start-up to Fortune 500; product, service, and consultative; public and private sectors; for-profit and non-profit, in roles from operational to revenue-generating, with a focus on lowering barriers to entry through simple, human-centered solutions.

Gizel Gomes is a professional technical writer with a bachelor's degree in computer science. With a unique blend of technical acumen, industry insights, and writing prowess, she produces informative and engaging content for the B2B leadership tech domain.