Perspectives episodes
Olivier Pomel (Datadog): Reimagining observability for the AI-powered enterprise

Co-founder and CEO of Datadog, Olivier Pomel built one of the most important cloud infrastructure companies of the decade. In this discussion he explains how observability and modern infrastructure make high-scale AI systems possible.

Summary

Key takeaways

  • Teams now spend less time building applications and more time running them – and understanding system behavior in production is the only way to ensure they work as expected.
  • AI's real value is eliminating reaction mode. The biggest productivity gains come from automating firefighting – incident response, false alarms, system maintenance – so teams can focus on thinking and building instead.
  • Trust requires accuracy well above 90%. While humans can be effective at 70% accuracy, AI systems need much higher standards to earn adoption. Security agents need consistency; incident response agents need speed.
  • Short feedback loops prevent self-deception. Datadog's early decision to sell monthly contracts rather than multi-year deals provided immediate visibility into what was working.
  • Simplicity requires deliberate counterweights. The bottom half of Datadog's customer base represents 1% of revenue but forces product discipline by surfacing when features become too complicated for everyday use.

Few companies have shaped modern engineering culture as profoundly as Datadog, the platform that helps engineering, product, and security teams understand how their systems behave in the real world.

As organizations have adopted cloud-native architectures, microservices, and now AI-infused applications, Datadog has become central to the way teams monitor reliability, performance, and security across environments that grow more complex every year.

At the helm is Olivier Pomel, Datadog’s co-founder and CEO. Since launching the company in 2010, Olivier has led Datadog from its early startup days to its current status as a publicly traded company valued at more than $55 billion. Along the way, he has guided it through multiple waves of technological change, from the rise of cloud computing through to AI transformation.

That vantage point gives Olivier a uniquely clear view into how software development is shifting. In this conversation, he explains how AI accelerates system complexity, why troubleshooting increasingly happens in production, and what it will take for enterprises to trust AI-driven automation inside mission-critical systems.

As leaders navigate questions about operational resilience, data quality, and AI-assisted decision making, Olivier offers hard-won clarity and practical guidance.

Why observability matters in an AI-powered enterprise

Ask Olivier how observability is changing, and he first goes back to the principles. “Observability is basically understanding how your applications are working,” he says. That means understanding both how the system functions technically for engineers and how it performs for the business and its customers. In practice, observability spans everything from the central processing unit (CPU) and network metrics to the revenue an application generates.
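In code, that breadth means emitting technical and business signals through the same pipeline. The sketch below is illustrative, not Datadog's client API: `emit` is a hypothetical helper that formats a datapoint in a StatsD-like wire format, and the metric names are invented.

```python
def emit(metric: str, value: float, tags: dict) -> str:
    """Hypothetical metric emitter: formats one datapoint as a single
    line in a StatsD-like wire format. A real agent would ship this
    to a metrics backend instead of returning it."""
    tag_str = ",".join(f"{k}:{v}" for k, v in sorted(tags.items()))
    return f"{metric}:{value}|g|#{tag_str}"

# Technical signal: how the system behaves, for engineers.
cpu = emit("system.cpu.utilization", 0.62, {"host": "web-1"})

# Business signal: what the application delivers, for the business.
rev = emit("checkout.revenue", 49.90, {"currency": "usd"})

print(cpu)
print(rev)
```

The point is less the format than the symmetry: the same instrumentation path carries a CPU gauge and a revenue figure, so both ends of the observability spectrum land in one place.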

Teams once spent years building software that ran on a single machine. Now, with AI, something created in minutes can run on thousands of machines and hundreds of services.

“There has been an explosion of complexity for applications in general. We are spending a lot less time building and a lot more time running applications.”

Olivier Pomel, Co-Founder and CEO, Datadog

Observability becomes the only reliable way to tell whether applications are working as expected, whether they are secure, and whether they are delivering the outcomes the business needs. Without strong signals and clear feedback loops, teams are left reacting to issues they cannot fully explain.

This is the core problem many companies are facing now. As the amount of data flowing through their systems grows, so does the need to make sense of it all.

Freeing teams from reaction mode

When Olivier talks about the role AI should play inside modern engineering teams, the first theme he touches on is recouping the hours lost to what he calls “reaction mode.” That includes wrestling with issues that pop up unexpectedly, making last-minute changes, and handling benign alerts that still require investigation.

“If you ask engineers what they spend their day on, they will tell you there is so much of their time they spend firefighting,” he says. “If we can get rid of that and have the AI do a lot of that, we free a lot more time to think, to build, to understand what matters.” 

Rather than replacing human judgment, Olivier believes that AI agents clear space for it. Instead of spending most of the day chasing false alarms, teams can concentrate on improving their overall posture and planning what comes next.

The second theme is data. AI is only as good as the information flowing into it, which is why Datadog puts so much emphasis on improving data quality. “If we can get better data, faster data, cleaner data, we can make better decisions,” he says. That starts at the source. The application itself is often the most reliable place to capture the signals that reflect how the business actually runs, whether it is an ecommerce platform or a customer-facing service.

But data rarely stays in one place. “Data typically follows a lot of different steps until it is fed into dashboards or AI models,” Olivier explains. Datadog has had to build for that entire journey. That reality led the company to create new capabilities in data quality and data observability so customers can trust the information downstream. Ultimately, the aim is to spend less time reacting and more time thinking, with better data feeding every decision.
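A minimal sketch of what such downstream checks might look like, assuming a batch of rows on its way to a dashboard or model. The function, its freshness SLO, and the field names are all hypothetical, not Datadog's data-observability API; the idea is simply to catch stale or incomplete data before it shapes a decision.

```python
from datetime import datetime, timedelta, timezone

def check_batch(rows: list, now: datetime,
                max_age: timedelta, required: set) -> list:
    """Two illustrative data-quality checks: freshness (is the newest
    row recent enough?) and completeness (does every row carry the
    required fields?). Returns a list of issue descriptions."""
    if not rows:
        return ["empty batch"]
    issues = []
    newest = max(r["ts"] for r in rows)
    if now - newest > max_age:
        issues.append("stale: newest row older than freshness SLO")
    for i, r in enumerate(rows):
        missing = required - r.keys()
        if missing:
            issues.append(f"row {i} missing {sorted(missing)}")
    return issues

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
rows = [
    {"ts": now - timedelta(minutes=5), "order_id": "a1", "amount": 20.0},
    {"ts": now - timedelta(minutes=2), "order_id": "a2"},  # missing amount
]
print(check_batch(rows, now, timedelta(minutes=10), {"ts", "order_id", "amount"}))
```

Running checks like these at each hop of the pipeline is one way to make the "entire journey" of the data observable, rather than trusting whatever arrives at the end.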

The end goal is systems that fix themselves

The idea of software that repairs itself has floated around the industry for years, but Olivier is clear about where things stand today. “We are not there yet,” he says.

Fixing an application that fails in the middle of the night is still real work, usually handled by highly skilled engineers who understand complex systems and can form and test hypotheses under pressure. Even as Datadog builds tooling to automate parts of this process, “the applications themselves keep getting more complicated,” which means teams are still playing catch-up.

What has changed is that AI can give those skilled engineers a leg up. “What used to be science fiction a few years ago is possible now,” Olivier says. Large models can now read documentation, interpret user interactions, and process signals that would have taken hours to stitch together manually before.

That shift is starting to show up in real workflows. Datadog’s AI SRE agent can now investigate incidents on its own to help engineers understand what broke. “In a large number of cases,” says Olivier, “by the time customers wake up, there is already a hypothesis of what is wrong.” Instead of launching a Zoom call with 20 people and spending three hours diagnosing an issue, teams can often narrow down the problem in minutes.

We have not yet reached the world Olivier wants, where everything is fixed by morning and nobody is waking up to alarms. But it is a meaningful step. Teams can spend less time hunting for the cause of failures and more time ensuring their systems work the way they should. And each improvement brings the industry a little closer to the long-standing vision of software that can handle more of its own complexity.

What should and shouldn’t AI decide?

As AI takes on more operational work, the question becomes where automation should stop.

Olivier’s view is practical and grounded in experience. “All the bigger decisions are not going to be made by AI for quite some time,” he says. Choices about resource allocation, long-term priorities, and product direction remain firmly human. What AI can handle today is the smaller, real-time actions that drain time and attention.

That line is not the same across every domain. With Datadog’s AI SRE, teams can safely automate steps like restarting systems or rolling back code. With the security agent, automation can go even further. The reason, Olivier explains, comes down to the cost of overreacting versus underreacting. If a security system overreacts, the downside is usually a short period of reduced availability. If it underreacts, the downside can be a full security incident, which is far more serious. As a result, customers are more comfortable letting AI take quick action in security than in operations.
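Olivier's overreact-versus-underreact framing can be written as an expected-cost comparison: let the AI act autonomously only when the probability of a real incident, weighted by the damage of missing it, outweighs the cost of a false-positive action. The numbers below are invented purely to illustrate why the threshold lands differently for security and operations.

```python
def should_act(p_incident: float, cost_underreact: float,
               cost_overreact: float) -> bool:
    """Act autonomously when the expected cost of doing nothing
    (probability of a real incident times its damage) exceeds the
    cost of a false-positive action. Illustrative model only."""
    return p_incident * cost_underreact > cost_overreact

# Security: a missed breach is catastrophic and a blocked session is
# cheap, so even a low-confidence signal can justify acting.
print(should_act(p_incident=0.05, cost_underreact=1_000_000, cost_overreact=5_000))  # True

# Operations: a needless rollback is disruptive relative to a transient
# blip, so the same confidence keeps a human in the loop.
print(should_act(p_incident=0.05, cost_underreact=50_000, cost_overreact=5_000))  # False
```

The asymmetry in the cost terms, not the model's confidence, is what moves the automation boundary between the two domains.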

“Users prefer to be in the loop today” regarding operations, says Olivier. This may shift over time, but the threshold for trust looks different depending on the risk profile of the decision. AI will keep expanding into operational tasks, but the boundaries will continue to reflect how organizations weigh risk, culture, and the cost of being wrong.

Calibrating trust in AI systems

Humans are wrong all the time. Engineers form hypotheses, share them with coworkers, and later discover they were off.

“Humans being right 70% of the time is actually great,” Olivier notes, but the same tolerance does not apply to machines. “If an AI is only right 60% or 70% of the time, it is not going to work.” People expect more from automated systems and have far less patience for being led in the wrong direction by an AI model.

That difference shapes how Datadog evaluates its agents. Before anything goes to customers, the team needs to know the model is “well north of 90%” accurate for the use cases it supports.

The standard varies depending on the domain. Security requires stability and repeatability, so the model needs to produce the same answer if asked twice about the same issue. Incident response demands speed, so the model needs to be able to run in minutes, even if the exact result differs each time because the underlying data is changing.
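One way to sketch that pre-release gate is a harness that scores an agent against known cases and, for security-style use cases, also checks that repeated runs of the same question give the same answer. Everything here is an assumption for illustration, including the toy triage agent; it is not Datadog's evaluation tooling.

```python
def evaluate(agent, cases: list, runs: int = 3,
             min_accuracy: float = 0.9) -> dict:
    """Hypothetical pre-release gate: the agent must clear an accuracy
    bar AND answer identically across repeated runs of each input."""
    correct = 0
    consistent = True
    for prompt, expected in cases:
        answers = [agent(prompt) for _ in range(runs)]
        if answers[0] == expected:
            correct += 1
        if len(set(answers)) > 1:
            consistent = False  # same question, different answers
    accuracy = correct / len(cases)
    return {"accuracy": accuracy,
            "consistent": consistent,
            "passes": accuracy >= min_accuracy and consistent}

# A deterministic toy "agent" standing in for a real model.
triage = {"disk full on db-1": "expand volume",
          "oom in api pod": "raise memory limit"}
agent = lambda q: triage.get(q, "unknown")
cases = [("disk full on db-1", "expand volume"),
         ("oom in api pod", "raise memory limit")]
print(evaluate(agent, cases))
```

An incident-response variant of this harness would relax the consistency check and instead time each run, reflecting the speed-over-repeatability trade-off described above.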

Different teams need different guardrails, but the principle is consistent. AI systems must be accurate enough, stable enough, and fast enough to earn their trust. Without that, adoption stalls and automation becomes more of a burden than a benefit.

How Datadog organizes for AI adoption

Olivier is clear that AI adoption cannot succeed through a single, inflexible approach. Some initiatives require top-down direction, but many depend on the insight that comes from teams working closest to the problems. Teams encounter specific bottlenecks and opportunities long before leadership can see them, which makes bottom-up experimentation essential.

For every new initiative, Datadog starts with a simple question: “What does it look like if it works?” Teams define the criteria upfront and revisit them over time, often using a six-month checkpoint to decide whether to continue or stop. The goal is to stay honest about what is delivering value rather than letting excitement or momentum carry a project forward.

A significant portion of management focus now goes toward product development for AI. Datadog has created new data operations functions and built an AI research team to keep pace with capabilities that evolve quickly. On the engineering side, adoption of modern coding agents has been rapid. The tools clearly improve day-to-day workflows, even if their long-term business impact is still being measured.

Outside of product, Datadog relies on existing performance metrics to understand where AI is making a meaningful difference. Support response time is one early indicator Olivier highlights, since the company already has stable baselines for what “good” looks like. These measurable areas help Datadog see where AI is creating real, repeatable value.

Datadog’s New York City origins shaped its outlook

Datadog’s early years as a startup running out of NYC did not follow the usual Silicon Valley arc. The company struggled to fundraise and, with better-resourced competitors on the West Coast, was never treated as the obvious winner. That shaped the internal culture. “Nobody was telling us we were geniuses, so we were not tempted to believe it,” Olivier says.

Instead of building for a small circle of tech-forward startups (which often happens in Silicon Valley), Datadog spent its earliest months speaking with the broader set of enterprises that actually needed the product. Those conversations helped the team focus on real, persistent problems rather than industry trends.

Location also influenced talent and continuity. Retention in New York was stronger than in the Bay Area’s intense hiring market, giving Datadog a more stable and grounded team during its formative years.

But the tech landscape is changing again today. AI has pulled talent and energy back to the Bay Area, and breakthrough research now turns into products faster than ever before. Olivier believes companies have to pursue ideas earlier, even when the problems are not fully clear, and accept that they will “be wrong more often” in fields that overlap with AI.

Datadog’s core philosophy remains unchanged: stay close to customers, avoid chasing hype, and keep the product simple enough for both startups and global enterprises. Building outside the center of gravity helped Datadog develop that discipline, and it continues to guide how the company operates in the AI era.

Maintaining simplicity with a scaling customer base

As Datadog has grown from a startup tool to a platform used by some of the world’s largest enterprises, Olivier has kept a firm view on simplicity. A product becomes a reflection of the customers you spend time with – and, without balance, complexity can creep in.

Many enterprise software companies begin their journey selling to startups, then shift their attention to big accounts as those contracts grow. Over time, the product bends toward sophisticated edge cases and loses clarity. Olivier calls this outcome “the enterprise abomination,” where software becomes so complex that most people no longer want to touch it.

Datadog works deliberately to avoid that trap. Large enterprises influence the roadmap, but the counterweight comes from tens of thousands of small teams using the product every day. Datadog’s free tier and broad startup adoption give the company constant, grounding feedback. When something becomes too complicated, these users surface it quickly.

Olivier points out that the bottom half of Datadog’s customer base represents only about 1% of revenue, yet they are essential. They force simplicity, prevent the product from drifting into enterprise-only territory, and ensure Datadog remains usable without layers of consulting or customization.

It is one of the quiet disciplines behind the company’s trajectory: serve the largest companies without abandoning the smallest, and let simplicity guide the product even as complexity grows around it.

So, where is the future of AI heading?

When Olivier talks about the future, he focuses less on AI making big strategic decisions and more on the everyday work it can remove.

Most of the opportunity, he says, is in eliminating the operational drag that weighs teams down.

“Most of the boring work, most of the reactive work, most of the caring and feeding for all the various systems we have around us… If all of that could go away, I think it would be fantastic.”

Olivier Pomel, Co-Founder and CEO, Datadog

Freeing teams from that constant churn could unlock what he sees as a tenfold gain in productivity.

Olivier also believes better data will reshape the way organizations operate. With more accurate and timely signals, companies can take faster and more informed actions. Combine that with the time teams win back from automation, and he sees the potential for a step change. “If we can be 10 times as productive, maybe we can build all that amazing technology we’ve seen in science fiction,” he says. Not because AI will build it on its own, but because humans will finally have the leverage to do it.

The biggest uncertainty, in Olivier’s view, is the form AI will take inside enterprises. “I’m not quite sure that the chatbot is the UI that will work in the end,” he says. Humans want predictable interfaces, and they trust systems they can inspect and understand. Whether AI shows up as an agent, a set of embedded actions, or something else entirely, “there’s still a lot that needs to be understood” about how people will interact with these systems.

Another risk is explainability. Leaders need visibility into how AI systems think, what signals they rely on, and where they intersect with human workflows. Without that clarity, adoption breaks down.

Olivier believes a growing share of future work will involve supervising and understanding these systems – “interacting with the animals,” as he puts it – a form of observability applied not just to software, but to the AI running more of it.

The opportunity is enormous, but so is the responsibility. Teams must be able to see what AI systems are doing, measure their behavior, and trust the signals they produce. Only then can businesses use AI confidently inside the mission-critical parts of their operations.

How to lead when the ground keeps shifting

Asked what mistake he hopes other founders and senior leaders will avoid, Olivier doesn’t hesitate. “Seeing the reality for what it is is the toughest part of running and scaling a business,” he explains. Without short feedback loops, teams can convince themselves something is working long after the data says otherwise.

One way that philosophy shows up is in Datadog’s revenue model. Early on, the company made an unconventional choice: selling monthly contracts rather than the multi-year deals typical in enterprise software.

“If we sell for three years, we won’t have the bad news early enough. And we will be able to lie to ourselves by thinking that, yeah, okay, the customer is not really using the product, but we’re gonna fix it.”

Olivier Pomel, Co-Founder and CEO, Datadog

Monthly revenue made reality impossible to ignore. When customers left, the team knew immediately. When adoption stalled, the numbers showed it. “When you have revenue that disappears very quickly, you can’t lie to yourself. You have to stay at it and you have to fix it.” That forcing function continues to shape how Datadog evaluates new initiatives today.

Olivier believes organizations must be willing to surface bad news without fear. As companies grow, people want to look good, get promoted, and celebrate success. But he argues that cultures need room for candor. Leaders should reward teams that surface problems early, even when it’s uncomfortable: “It shouldn’t be career-limiting to say, actually, we’re doing that and it’s not working.”

When asked for a decision he’s proud not to have made, Olivier points to Datadog’s product philosophy. Instead of building in secret and revealing fully finished products, the company has always worked openly with customers from the earliest stages. “Every time we thought, should we keep this under wraps?… I’m very happy I decided no. Let’s open it up.” Even if early versions were imperfect or tipped off competitors, the feedback loop mattered more.

Finally, Olivier offers one piece of advice for leaders trying to stay on top of AI. His recommendation is simple: read widely, and stay curious. “There are a ton of newsletters,” he says. He subscribes to several technical ones to understand the underlying models and capabilities. Staying informed takes time, but Olivier sees it as essential. “It’s the biggest transformation I’ve seen since I started working,” he says. “It’s important we all understand what’s happening and where it’s going.”

Conclusion

Across the conversation, Olivier draws a clear line through Datadog’s philosophy. As software grows more complex and AI accelerates what teams can build, the real challenge becomes understanding what those systems are doing. Observability, high-quality data, fast feedback loops, and grounded product discipline are no longer optional. They form the foundation for running successful software in an AI-powered world.

What stands out most is Olivier’s pragmatism. He is optimistic about AI’s ability to remove reactive work and free up teams to think more deeply. He is candid about the risks, from false positives to opaque decision making, and firm that trust must be earned rather than assumed. Above all, he is clear that simplicity, customer closeness, and honesty about what is and is not working remain the only reliable guardrails as companies navigate this shift.

His message for leaders building their own AI roadmap is straightforward. Stay anchored in reality. Invest in visibility. And keep your organization honest about the value you are creating. AI will reshape the way teams operate, but the principles behind good product building and good leadership will remain the same.
