Dr. Oliver Bastert on Digital Twin Technology Beyond the AI Hype
In today’s enterprise environment, decision-making has become a high-stakes exercise in managing complexity. Business leaders are expected to optimize across competing priorities: cost, resilience, sustainability, and speed, all while navigating volatile markets, fragmented data, and constant disruption. Traditional analytics can explain what happened. What they often fail to answer is what should happen next.
That gap is where decision intelligence is quietly becoming mission-critical.
For more than a decade, Gurobi Optimization has been at the forefront of this shift, helping enterprises turn complexity into a competitive advantage through mathematical optimization.
At the center of Gurobi’s next phase of innovation is Dr. Oliver Bastert, the company’s Chief Technology Officer. A leader in decision intelligence technology, Dr. Bastert has a background spanning mathematics, computer science, product management, and enterprise AI, with more than two decades of experience working at the intersection of theory and real-world impact.

Dr. Oliver Bastert, CTO, Gurobi Optimization
His mandate is clear: advance Gurobi’s industry-leading solvers while responsibly integrating generative AI into decision-critical systems. In this conversation, Dr. Bastert cuts through the hype surrounding AI and digital twin technology, describes how GenAI can augment rigorous models, and explains what technology leaders must rethink about data, governance, and innovation culture in the years ahead.
You’ve moved fluidly between academia, deep engineering, and executive leadership. When you look back at that journey to the CTO role, what experiences most shaped how you think about technology today? What are you focused on now at Gurobi?
Dr. Bastert:
I’m currently the CTO at Gurobi Optimization. We’re a company focused on optimization algorithms. I lead the Research and Development organization, where we develop the algorithms as well as the platform and deployment solutions around them. Most recently, we’ve been combining generative AI with optimization.
Prior to this, I had a classical education in mathematics and computer science. After completing my PhD, I became an optimization developer. I then moved to another firm, where my product was acquired and I transitioned into product management. So, I bring both product development and product leadership experience to this role.
We’re in a moment where nearly every product roadmap claims to be disruptive, often driven by AI. From your vantage point across research and product development, what changes do you see that are structurally disruptive rather than cyclical or hype-driven?
Dr. Bastert:
To answer that, I’ll probably take a second to describe how our products are used. With optimization, you can determine the best course of action for many different business problems. That could involve designing the most efficient or resilient supply chain, planning robust flight schedules or crew rotations, or ensuring reliable power grid operations. Those are just a few examples; there are many more.
If you want to solve an optimization challenge like that, the first step is to build a rigorous mathematical formulation. You can think of it as a digital twin of the business question you want to resolve. Once you’ve computed the optimal solution, the next step is to inspect it, interact with it, and really understand it.
Traditionally, both of these steps have been performed by experts: building the models is often a PhD-level task, and experts then create concrete solutions around the models so that business users can interact with them.
The disruption, unsurprisingly, comes from generative AI. It enables us to ease both steps: building the initial model, the digital twin, as well as allowing business users to interact with the solution, receive explanations, and run what-if scenarios. All of this helps organizations better understand how to operate their businesses effectively, and it’s an area where we’re investing heavily.
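To make that first step concrete, here is a minimal sketch of what such a formulation can look like in Gurobi’s Python API, gurobipy. The product-mix numbers are purely illustrative placeholders; a real digital twin would be far larger and fed from enterprise data sources.

```python
import gurobipy as gp
from gurobipy import GRB

# Toy product-mix model; all coefficients are illustrative placeholders.
m = gp.Model("product_mix")
x = m.addVar(name="units_A")  # units of product A to produce
y = m.addVar(name="units_B")  # units of product B to produce

m.addConstr(2 * x + 1 * y <= 100, name="machine_hours")  # shared machine capacity
m.addConstr(1 * x + 3 * y <= 90, name="labor_hours")     # shared labor capacity
m.setObjective(30 * x + 40 * y, GRB.MAXIMIZE)            # maximize contribution

m.optimize()               # step one: compute the provably optimal plan
print(x.X, y.X, m.ObjVal)  # step two: inspect and interrogate the result
```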
You mentioned digital twin technology, and this is something we frequently hear from various tech teams, often with very different interpretations. From your perspective, what are the key processes organizations need to adopt when developing effective digital twins?
Dr. Bastert:
Let me use supply chains as an example, as they are a very tangible concept. A typical project starts by capturing data: demand and supply forecasts, constraints, and operational assumptions.

You then build a digital twin of the entire supply chain at a suitable level of abstraction. Once data flows through that model, it generates recommendations on where to build a distribution center, which ones to close, or how to reconfigure logistics.
But that’s rarely the final answer. What follows is a dialogue with the digital twin. You ask: ‘Why didn’t you choose this location? What if demand changes? What’s the best course of action now?’ That interactive loop is how organizations derive real value.
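As a rough illustration of that loop, the sketch below builds a toy facility-location twin in gurobipy and then re-solves it under a changed demand assumption. Every site, cost, capacity, and demand figure here is hypothetical.

```python
import gurobipy as gp
from gurobipy import GRB

# Hypothetical candidate sites, customers, costs, and demands.
sites, custs = ["A", "B", "C"], ["c1", "c2"]
fixed = {"A": 100, "B": 80, "C": 120}                     # cost to open each site
ship = {("A", "c1"): 4, ("A", "c2"): 6, ("B", "c1"): 5,
        ("B", "c2"): 3, ("C", "c1"): 7, ("C", "c2"): 2}   # unit shipping costs
demand = {"c1": 30, "c2": 50}
cap = {s: 60 for s in sites}

m = gp.Model("toy_supply_chain")
open_ = m.addVars(sites, vtype=GRB.BINARY, name="open")   # open site s or not
flow = m.addVars(sites, custs, name="flow")               # units shipped s -> c

dem = m.addConstrs((flow.sum("*", c) == demand[c] for c in custs), name="demand")
m.addConstrs((flow.sum(s, "*") <= cap[s] * open_[s] for s in sites), name="capacity")
m.setObjective(gp.quicksum(fixed[s] * open_[s] for s in sites)
               + flow.prod(ship), GRB.MINIMIZE)

m.optimize()
print("open:", [s for s in sites if open_[s].X > 0.5], "cost:", m.ObjVal)

# "What if demand for c2 jumps to 80?" -- change the data and re-solve.
dem["c2"].RHS = 80
m.optimize()
print("open:", [s for s in sites if open_[s].X > 0.5], "cost:", m.ObjVal)
```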
Do you believe this approach, combining optimization with digital twins, will become a genuine differentiator in enterprise environments as we progress?
Dr. Bastert:
Absolutely. In many industries, this is already an established practice. You couldn’t operate airplanes without optimization. You couldn’t manage production facilities, supply chains, or energy networks without it. These are all heavy users of optimization; it’s part of day-to-day business operations.
That said, there are still some departments and industries that are clearly behind. One reason is the historically high barrier to entry, although that barrier is coming down, and we’re actively working to reduce it further.
The other reason is the notion that you need to have your data perfectly in order before you can apply advanced analytics. While I agree with that to an extent, businesses need to be careful not to get stuck in endless data preparation and BI alone. They should move toward predictive and eventually prescriptive analytics, like optimization.
You do need good data. However, understanding what advanced analytics methods can actually do, and moving forward, is what ultimately drives business success.
The data reality check: AI, simulation, and the limits of approximation
Data readiness is often the prerequisite and the bottleneck for advanced analytics. From what you’ve seen in the field, where do organizations struggle most, and how can they move forward without waiting for “perfect” data?
Dr. Bastert:
We’re a vendor providing optimization technology, so we mostly advise customers rather than manage their data directly. Internally, our data operations are relatively simple.
What I consistently see is that organizations have massive amounts of data but don’t fully use it for decision-making. They stop at reporting and business intelligence. When they move beyond that—toward optimization—they can directly influence core KPIs and even improve metrics like environmental impact.
Today, enterprises are experimenting with predictive AI, AI simulators, and optimization engines, sometimes interchangeably. How do you see AI reshaping digital twin technology, and where do you think the boundaries should be clearly drawn?
Dr. Bastert:
That’s a fantastic question. There’s this notion that optimization is also an AI technology, like machine learning or LLMs, so we’re playing in that same field.
There’s one big difference. Mathematical optimization is a very rigid, deterministic, and precise methodology. As you mentioned in your question, we now see machine learning or generative AI providing solutions for digital twins.
These are often awe-inspiring solutions, but they don’t follow the same rigor: they may not be optimal, and they can’t prove optimality. I see that as a double-edged sword.
It’s great because the hype drives adoption. On the other hand, it’s a considerable risk: with imprecise solutions, or solutions that can’t actually go into production because they don’t satisfy all of your constraints, you can run into serious problems and potentially damage the reputation of advanced analytics models.
That’s something we really need to watch out for, and it’s where we need to educate our customers and future users about the differences, advantages, and challenges of these approaches.
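For readers who want to see what ‘proving optimality’ means in practice, here is a small self-contained sketch in gurobipy with made-up numbers: after solving, the solver reports not just a solution but a certificate of its quality.

```python
import gurobipy as gp
from gurobipy import GRB

# Tiny illustrative knapsack: pick items to maximize value within a weight budget.
m = gp.Model("knapsack")
pick = m.addVars(3, vtype=GRB.BINARY, name="pick")
m.addConstr(4 * pick[0] + 3 * pick[1] + 2 * pick[2] <= 5, name="weight")
m.setObjective(10 * pick[0] + 7 * pick[1] + 4 * pick[2], GRB.MAXIMIZE)
m.optimize()

# Unlike a heuristic or a generative model, the solver returns a proof of
# solution quality alongside the solution itself:
print(m.ObjVal)    # value of the best solution found
print(m.ObjBound)  # proven bound: no feasible solution can do better
print(m.MIPGap)    # relative gap between the two; 0.0 means proven optimal
```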
As digital twins move closer to operational decision-making, what risks do you think technology leaders tend to underestimate?
Dr. Bastert:
The key risk is forgetting that a digital twin is not a real twin. It’s always an abstraction. Choosing the right level of abstraction still requires experience, some would even call it an art.
On the one hand, the model must accurately represent the business question you’re trying to answer. On the other hand, it also needs to be solvable. If your digital twin is too close to reality, if it’s a “true” twin, you haven’t actually simplified the problem, and you’re essentially operating in the real world again.

Finding that right abstraction layer, one that captures what matters while remaining computationally tractable, is the real challenge at the heart of the digital twin concept.
Data governance and legacy thinking in an agentic world
You’ve also touched on data. Data governance has shifted from a back-office concern to a board-level issue, particularly with autonomous systems gaining agency. With data governance becoming a central topic across enterprises today, what’s your perspective on its growing importance?
Dr. Bastert:
Data governance is becoming increasingly important, particularly as we transition to agentic systems that necessitate broader and more autonomous access to data. From that perspective, the data governance challenge is rapidly expanding in both scope and urgency.
That said, I’m looking at this primarily from an outside or advisory standpoint. At Gurobi, our focus is on algorithms rather than directly solving our customers’ data challenges. But from what I see across the industry, the importance of getting data governance right is only accelerating.
If you were advising an organization starting fresh today, which legacy practices, technical or cultural, would you argue most strongly against keeping?
Dr. Bastert:
On the data side, it’s very clear. Across technology, we’re moving toward modular architectures, we’ve gone through microservices, and now we’re talking about agents. That’s a well-established trend.
However, in the data world, many organizations continue to operate with monolithic systems. They’re difficult to maintain, and when you start talking about accessibility, governance, and control, everything becomes incredibly hard on those legacy platforms.
Breaking those systems apart into more modular, domain-centric data architectures is the right direction. It’s a significant undertaking, it’s not a trivial problem, but it’s the best path forward.
That said, I want to come back to an important point: we shouldn’t get stuck endlessly reengineering our data. There’s already a lot of value that can be derived from data in its current form. So I strongly advocate doing both—cleaning up data architecture over time, while also ensuring you’re driving real value from your data today.
There’s often a search for “new” data sources. From your perspective, is the bigger gap really about missing data or about how organizations translate data into decisions?
Dr. Bastert:
It’s less about specific data sources and more about how data is used. Organizations are good at understanding and forecasting from data, but they struggle to turn that insight into structured decision-making.
Moving toward prescriptive analytics, using data to determine the best possible actions, is where the most significant opportunity lies.
Separating signal from AI hype, and leadership philosophies
Every technology cycle creates its own hype curve. In the current AI-driven moment, where do you think skepticism is not only healthy but necessary?
Dr. Bastert:
Whenever you’re in a hype cycle (let’s take generative AI as an example), you get the impression that it will be able to solve every problem. That’s probably not true.
I strongly believe in the potential of LLMs and many AI technologies. Where I’m skeptical is in how broadly people apply them without understanding which use cases they actually deliver value for. In optimization, for instance, I’m thoroughly convinced of the power of combining LLMs, generative AI, and natural language processing with optimization.
However, if someone says, “I can just ask ChatGPT to optimize my supply chain because it can do everything,” that will fail. And it’s not due to a lack of intelligence in the LLM; it won’t be able to do that in ten years either. It’s simply the wrong technology for that problem.
That’s why this kind of skeptical thinking is important: understanding what technologies can and cannot do, and applying them to the proper use cases.
As a CTO, staying with execution and leadership, how do you manage the pressure to innovate when you’re also facing quarterly expectations, ROI scrutiny, and constant demands to move faster? How do you create space for deep innovation without losing business momentum?
Dr. Bastert:
There’s always pressure to move faster and do better. One thing we’ve intentionally allowed ourselves, particularly for our core algorithmic development, is to operate on an annual release cycle.
That means we’re not driven by quarterly pressure, and we can dedicate real time to pure research and experimentation. Even with a very slick release process and continuous delivery, there’s always overhead. What this cadence provides is a substantial block of time during which high-level research and development can mature properly before being brought to market.
It’s a slightly slower cadence than usual, but it allows us to innovate at a much higher level.
Leadership philosophies tend to evolve with experience. Is there a belief you once held strongly that you’ve since had to recalibrate?
Dr. Bastert:
Early in my career, I firmly believed in extreme ownership: the idea that you’re in charge of everything, accountable for everything, and responsible for getting everything right. I still believe in that mindset when you’re trying to move something challenging forward; owning it and delivering can be incredibly effective.

But I’ve also learned that it’s not how you scale, and it’s not sustainable in the long term. It’s simply too exhausting, and it’s not the kind of culture I want to build or work in.
Today, I focus on having my team be truly dedicated and responsible for their own goals and tasks. My role has shifted more toward orchestration and mentoring. That said, failures still ultimately sit with me; I’m the accountable person, but the way we get there is very different.
With multiple generations now working side by side in tech teams, what cultural behaviors are absolutely non-negotiable if you want to sustain speed and build a high-velocity tech team?
Dr. Bastert:
For me, it’s trust. I want honest people who tell the truth, who are open-minded, and who are willing to learn from their mistakes. If you’re moving fast and trying to innovate, things will go wrong, and we have to accept that.
So how do we get there? First, the organization or team must genuinely trust its people. At the same time, those people also need to be willing and able to trust the team and the organization.
When you have that kind of mutual trust in place, the behaviors I just described naturally begin to emerge.
For emerging CTOs, especially those early in their leadership journey, what guidance would you offer that often gets overlooked?
Dr. Bastert:
Love your technology. Love your product. Share that passion with your team, and co-create value for your customers. It sounds simple, but it works.
When you look ahead for a few years, which areas of today’s tech stack do you expect to become far less manual, or at least far less central?
Dr. Bastert:
Whenever you make projections like this, you’ll almost certainly be wrong, and I’m very sure I’ll be wrong on this one as well. That said, if I look at areas like data processing and ETL on one side, or DevOps on the other, they are still extremely heavyweight and very script-heavy today.
What I can see is that this is an area where LLMs or generative AI can genuinely help make us more agile. That does introduce new risks, and we shouldn’t ignore those. But I think the value of adopting AI in these areas will be tremendous.
I don’t believe these technologies will disappear entirely. After all, we still see tech from 20 years ago being happily used today. But I do expect a significant shift toward them being far less manual, and likely far less used in their current form.