AI Won’t Replace Developers: Qodo’s Take on the Future of AI-Powered Engineering


In this episode of Founded & Funded, Madrona Investor Rolanda Fu is joined by Dedy Kredo, the co-founder and chief product officer of Qodo. Formerly known as CodiumAI, Qodo is a 2024 IA40 winner and one of the most exciting AI companies shaping the future of software development. Dedy and his co-founder, Itamar, are serial entrepreneurs who have spent their careers building for developers, and with Qodo, they’re tackling one of the most frustrating problems in software engineering — testing and verifying code.

As AI generates more code, the challenge shifts to ensuring quality, maintaining standards, and managing complexity across the entire software development lifecycle. In this conversation, Dedy and Rolanda talk about how Qodo’s agentic architecture and deep codebase understanding are helping enterprises leverage AI speed while ensuring code integrity and governance.

They get into what it takes to build enterprise-ready AI platforms, the strategy behind scaling from a developer-first approach to major enterprise partnerships, and how AI agents might reshape software engineering teams altogether.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Rolanda: Well, before we dive into Qodo, Dedy, could you just share a little bit more about your journey? You’ve navigated diverse roles across the tech landscape, and you and your co-founder are both serial entrepreneurs. What pain points did you experience in the developer workflow that made you say, “We have to solve this, and AI is finally ready”?

Dedy: So I have a diverse background. I’ve been in engineering roles, data science roles, and product roles across both small startups and larger organizations. And throughout all of this, software quality has always been near and dear to my heart.

It’s always a challenge to strike the balance between wanting to move fast and providing high-quality software. As a startup, you’ve got to be moving really fast. And with AI, it’s becoming even more important, now that the market is changing really, really fast. It doesn’t matter which field you’re in: if you’re impacted by AI, you have to be moving really fast. But you have to strike the balance with also providing high-quality software, and that’s always been a challenge.

So Itamar and I have known each other for many, many years. And basically, we realized that as AI starts to generate more and more of our code, the challenge shifts to: how do I make sure that my code is well tested, well reviewed, secure, and aligned with the company’s best practices, especially in very, very large enterprise organizations?

That was a challenge that we felt was going to become the next frontier, and we realized that pretty early on. If you look at our seed investment deck from 2022, we’ve essentially been making the same pitch for a while now, and I think now it’s actually all coming together, and we’re really well-positioned for this. So yeah, it’s exciting times.

Rolanda: So you and Itamar are both serial entrepreneurs and have known each other for a long time. How did you decide you were the right founders to build this, in this new category of intelligent coding platforms? How did you think about the idea, and how did you get the confidence to build in this space?

Dedy: Yeah. Well, on one hand, for both of us, software quality has always been very near and dear to our hearts. I have a lot of experience working with US companies and with large enterprises; I spent a lot of time working with financial institutions, for example. And Itamar had been a CTO several times before and was ready to step up to the CEO role, and we share a lot of the same values.

We grew up in the same small town halfway between Tel Aviv and Jerusalem. And then, yeah, we just knew that these are very interesting, very exciting times, and that software engineering is being reinvented and transformed in a massive, massive way. And we believe that the right way to penetrate this market is by enabling organizations to embrace AI for software engineering in a responsible way. We’ve had a similar pitch since day one.

Rolanda: Yeah, that’s awesome. And maybe with that, let’s dive into the Qodo platform a little bit more. What’s been your guys’ North Star the whole time since those early seed days, and how do you think about what you have versus the broader space of the plethora of AI coding tools out there these days?

Dedy: Basically, there’s a lot of excitement, and I would say some hype, around AI code generation: a lot of talk about vibe coding and how AI is going to write everything. And we believe that for enterprises to really embrace gen AI, and to have gen AI impact their organizations in a way that significantly increases productivity, they’ll have to find a way to balance this with quality, making sure that code is aligned with best practices, well tested, and well reviewed.

And in order to do that, the foundation for everything has to be a very deep understanding of the enterprise code base. So this is something we’ve been investing in a lot. The foundation of our product is something called Qodo Aware. It’s a layer for understanding and indexing code bases, and for understanding how different components relate to each other in very large code bases. So that’s one major area of focus. And then on top of that, we have two major product areas. One is around code review and code verification.

So this is our Qodo Merge product that integrates with different Git providers and basically helps take code review to the next level. Because if you think about code review, it hasn’t changed in decades. And basically, developers open a pull request and they start reviewing diff by diff and trying to figure out if there are issues or bugs or anything like that. And with Qodo Merge, we make pull requests a lot less painful. We help developers understand the actual changes, and we create a detailed logical walkthrough.

And then we also try to catch bugs, and we automatically generate best practices for each team, each repo, and the organization as a whole. So that’s the code review side. And basically, we believe that as more and more code gets generated by AI, the bottleneck shifts to: how do I review this at scale? How do I maybe auto-approve smaller changes that don’t have any major issues, and how do I help developers review code at scale and catch issues fast?
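To make the auto-approval idea concrete, here is a minimal sketch of the kind of policy gate Dedy describes: small, clean, low-risk changes skip the human queue, and everything else goes to a reviewer. Every field and threshold below is invented for illustration; this is not Qodo Merge’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class ReviewResult:
    """Output of an automated review pass (fields invented for illustration)."""
    lines_changed: int
    issues_found: int            # bugs or best-practice violations the agent flagged
    touches_critical_path: bool  # e.g., auth, payments, infra

def can_auto_approve(result: ReviewResult, max_lines: int = 25) -> bool:
    """Only small, issue-free changes outside critical paths skip human review."""
    return (
        result.lines_changed <= max_lines
        and result.issues_found == 0
        and not result.touches_critical_path
    )

# An 8-line doc fix sails through; a 300-line change touching payments does not.
print(can_auto_approve(ReviewResult(8, 0, False)))   # True
print(can_auto_approve(ReviewResult(300, 0, True)))  # False
```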

And then we have the code generation side, where we have plugins for various IDEs. Our approach is not to force developers to switch to a different IDE, so we integrate with existing heterogeneous environments, both JetBrains IDEs and VS Code, for example. And we’re just about to launch our CLI. So essentially, the same coding agent in the background drives the IDE plugins and the CLI, as well as agents that run in the background.

And all of that, the coding agent, has the company knowledge and best practices in the back of its head, you can think about it like that, and that’s what unifies it all. So basically, we look at the SDLC in a holistic way. And then one last thing I would add is that we have a strong belief that developers and enterprise organizations will need to customize AI agents for their specific needs.

So we don’t believe in one agent that would rule them all. We believe more in an agent-swarm type of approach, where different teams will configure agents a bit differently, give them different tools, maybe different permissions, and control the input, the output, and the triggering of the agents. And we built a system to enable them to do that.
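As a rough illustration of that agent-swarm idea, here is a hypothetical sketch of per-team agent configuration. All names, fields, and values are assumptions for illustration, not Qodo’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Hypothetical per-team agent configuration (illustrative, not Qodo's API)."""
    name: str
    tools: list[str] = field(default_factory=list)        # capabilities the agent may call
    permissions: list[str] = field(default_factory=list)  # what it may read or modify
    trigger: str = "manual"                               # when it runs
    best_practices: list[str] = field(default_factory=list)  # team rules fed in as context

# Two teams configure the same underlying coding agent differently.
payments_reviewer = AgentConfig(
    name="payments-review-agent",
    tools=["static_analysis", "run_tests"],
    permissions=["read_repo"],  # review-only: it can never push code
    trigger="on_pull_request",
    best_practices=["Money amounts use Decimal, never float"],
)

platform_test_writer = AgentConfig(
    name="platform-test-agent",
    tools=["run_tests", "write_files"],
    permissions=["read_repo", "write_branch"],  # may open branches with generated tests
    trigger="nightly",
    best_practices=["Every public function gets a unit test"],
)
```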

Rolanda: I think that’s one thing that I really love about your approach: that end-to-end development lifecycle coverage, whereas a lot of tools out there tend to pick one area to focus on. So I think that’s really clever on your end.

I am a little bit more curious, too, to dive into the platform. I mean, it seems like you guys have built a lot. Can you talk a little bit more about that decision to do a lot of that development versus leveraging existing models out there? How do you make those trade-offs?

Dedy: Generally, we do leverage the large models, and companies that use our product have the ability to choose among Anthropic, OpenAI, and Google models. We also have very flexible deployment options: you can use our SaaS, and we also support single-tenant environments and self-hosted. For self-hosted environments, we provide our own model that is essentially built on top of an open-source model.

So we don’t train a foundation model from scratch, but we did invest quite a bit in training embedding models for code, because, as I mentioned earlier, we believe the foundation for everything is deep codebase understanding, and we saw that there was a gap in the market in that area. So we trained a state-of-the-art embedding model for code that comes as a built-in part of our platform.
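To illustrate how embedding-based code retrieval works in general (index each code chunk once, then rank chunks by similarity to a natural-language query), here is a self-contained sketch. The toy trigram `embed` function is a stand-in for a trained code-embedding model like the one Dedy describes; everything here is generic retrieval, not Qodo’s implementation.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a trained code-embedding model: hashes character
    trigrams into a fixed-size vector just so the demo runs end to end."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Index phase: embed every chunk of the code base once and store the vectors.
chunks = [
    "def charge_card(customer, amount): ...",
    "def render_invoice(order): ...",
    "class RetryPolicy: ...",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Query phase: embed the question and return the most similar chunk.
query = embed("where do we charge the customer's card?")
best_chunk, _ = max(index, key=lambda pair: cosine(query, pair[1]))
print(best_chunk)  # expected: the charge_card snippet
```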

Rolanda: I love how dynamic you’ve made the platform, and I think that’s really critical for scaling any kind of solution these days. And maybe just to pivot a little bit: a topic that’s really on everyone’s mind these days is this term vibe coding. So I’d be curious to get your thoughts. How does your platform enable the vibe coders of this generation, and how does that trend impact what you have created?

Dedy: With vibe coding, I think when Andrej Karpathy coined that term, he was really referring to pet projects, where you don’t care about how the actual code is being built. You’re more focused on the functionality and just seeing that the functionality actually works. But that’s not sustainable for enterprise production code: you’re generating a lot of tech debt, you may be overlooking issues, and you’re not focused on testing. So, in order to make this process of AI generating the code work for these complex code bases, you’ve got to put the right processes and tools in place that allow you to check the code and to set the right frameworks and best practices.

So first of all, you try to get the code, right as it gets generated, to already take into account your rules and best practices for a given code base and a given team. We do that with Qodo Gen, on our generation side.

But then, once you need to review the code, that’s the checkpoint; that’s the point where you’ve got to really make sure that it’s aligned with the best practices, that it’s well tested, well reviewed. And we believe in having these two sides work hand in hand, we call it the blue team and the red team, and that’s what makes it actually work in an enterprise environment.

Rolanda: I think that’s a really good description of both sides; the red team and blue team have to play a little bit of both. And it’s something we’ve talked a lot about internally, too: people talk about code generation a lot, but not enough about the other sides, around testing and review. And those seem even more critical in this current environment, especially with something like vibe coding.

So I’m curious, you mentioned enterprises. Can you talk a little bit more about how you balance getting developer love versus selling to enterprises? Is it one or the other, or is it a little bit of both? Have there been any pitfalls when trying to focus on one or the other, or has it always been a smooth ride?

Dedy: It’s definitely a challenge. Generally, as a startup, you try to focus, and we call our strategy middle-out. Our focus really resonates with team leads, with architects, with platform teams, and with developer experience teams, which, by the way, are the kinds of teams you’re now seeing gain a lot of power at large organizations, or at least the ability to influence the tooling. So we are helping these teams really grow.

So on one hand, our pitch really resonates with higher-level managers, architects, and team leads, but on the other hand, as a dev tool company, you have to have this bottom-up approach, and developers need to love using your product. We’re always trying to balance that. So we go both top-down and bottom-up. That means we have a self-serve approach and a freemium tier, and you have the ability to swipe a credit card and go to our Teams offering.

But typically, when you get to the enterprise side and you want to index a very large code base and you want to do it in a single-tenant secure environment, that’s where you do a more controlled proof of value and you engage in the conversation with the enterprise stakeholders. So a lot of our, I would say, larger customers, they tried us out self-serve, they just came and experimented a little bit with our product, but then they contacted us to do kind of a larger trial or a larger pilot. So this is how things have worked for us generally.

Rolanda: That makes a ton of sense. I think the balance of both is super critical, both at the individual and the enterprise level. I’m curious, are there any stories of enterprises that were hard to win that you’re most proud of converting, or any horror stories of trying to sell in this space, or even advice for people who are trying to sell in this space?

Dedy: I can give an example of a Fortune 10 retailer that is really one of our largest customers. Their challenge was, “How do I make sure that the code that gets generated by AI is well tested and well reviewed?” Their focus was very much on the code review bottleneck. They approached us and started small, with a small pilot, and what they saw is that the product just started expanding in the organization. People were hearing about it and wanting to turn it on in their repos.

And the challenge that we had was really around supporting the growth they were seeing inside their organization. All of a sudden, they had thousands of developers knocking on the door, and this is an air-gapped environment. So you have to take into account things like load on GPUs, making sure that response times are good, that the quality of the results is good, and that it’s aligned with the best practices of the different teams.

So we worked with them very closely to be able to support them, and they expanded, and now they’ve standardized on Qodo across the entire organization for their entire pull request and code review process. It was a journey: the pilot was a few months, and then they expanded, and it took time until it rolled out across the whole company. But with these companies, you’ve got to really support them. You have to not just give them the feeling that you’re there for them, but really work very closely with them, listen to their pains, and be willing to go the extra mile for them.

Rolanda: That’s kind of the dream land-and-expand scenario with a customer, right? Hopefully, they’ll be your customers for a long time to come. I’m curious, given you have spent so much time with these developers and these enterprises, how do you see these developer teams changing? Are you already seeing how Qodo is impacting how these teams are structured? Where do you see the future of all this going? Are engineers all going to be replaced? I think that’s what everyone’s scared of, right?

Dedy: The way I think about it is that the roles of developers are just changing. Especially for very large, complex code bases, I don’t imagine a world where a product manager can, at the click of a button, make a change that impacts the entire code base and redo the entire onboarding experience for a new customer at one of these very large organizations, maybe a large bank or any kind of large enterprise. So I think developers are going to become orchestrators of agents. Each one of them will have the ability to launch multiple agents, customize each agent for specific use cases and specific triggers, and then review the work of these agents at scale.

And then, yeah, most of the code they’re not going to hand-write, I would say. But at least the way I see it, for the foreseeable future, for complex code bases, you’re still going to need technical people, developers with the experience to orchestrate this work and make the dev teams a lot more productive, but also to make sure you don’t have, I hope it’s okay that I say it, a CrowdStrike moment, where the world grinds to a halt because something was overlooked. So the way I see our goal as a company is to enable these organizations to be so much more productive, but without these CrowdStrike moments.

Rolanda: And is this something that you see playing out over the next five to 10 years? You talked a little bit about the near term, and the developer as orchestrator makes a lot of sense. How do you see this playing out even further out? Is there even going to be an entry-level developer role, and how do you see Qodo being the partner that really catalyzes this change?

Dedy: First of all, it’s very hard to make very long-term predictions. But the way I see it, the role is going to continue to evolve. There may be some kind of curve in the demand for developers, where you see it maybe going down and then going back up, because you’re going to need these people who are very, very technical, who are able to orchestrate and manage these agents that are writing code.

And if fewer people end up going into computer science and things like that, then you’re going to have a situation where you don’t have enough of these people. So I think we’ll see some very interesting dynamics. Also, I think there’s going to be an explosion of software in general.

Think about all the ideas for software in people’s minds that are not becoming companies today. I think there’s still so much potential for a lot more software to be created, and you’re going to need engineers for that. So yeah, I do believe that engineers are going to be mostly orchestrating agents in the next 2, 5, 10 years. But you’re still going to need the engineering team for the foreseeable future. That’s how I see it.

Rolanda: I think that’s great assurance for any engineers listening in on the podcast. And it’s something that we’re excited about as investors and something that we believe in: a lot of these forces will continue to multiply, and more software just means more things for people to manage. I think it’s more about the roles shifting. So totally aligned there.

So maybe just to pivot and switch gears a little bit, I think one thing that’s impressive is just around how fast you guys are growing. So I’d love to hear a little bit more about how you think about go-to-market. How do you make sure that you’re targeting the right customers and training your reps?

Dedy: It’s funny, we’re just now doing an onboarding bootcamp because we significantly grew the team. We’re going to be 80 soon in the company.

Rolanda: Wow.

Dedy: Yeah, I think maybe a year ago we were 30 or so, something like that. So we’re actually now experiencing this growth, and how do you do that? First of all, you need to spend a lot of time on this as founders, and we were like 10 people in the founding team, something like that, when we just started [inaudible 00:23:10] company. You have to have the founders and the founding team really working closely with the go-to-market team, helping them, supporting them, joining them on calls, and making sure that you’re constantly enabling them.

You’re constantly over-communicating. Also, as the market shifts and there are changes in the market, you make product decisions, and you’ve got to make sure that people understand where your product is heading. Do a lot of product roadmap sessions, with the go-to-market people but also with your customers. I think it’s just spending the time and making sure you do that.

Rolanda: Yeah, yeah. I think your job’s only going to get more exciting, and harder, as you scale out the team and transition from that founder-led sales motion. But I’m sure you’ll do a great job there. Maybe going off that, I’m curious about your founding journey so far.

What kind of advice do you have for other people building in the developer space and in AI in general? Are there any hard-earned lessons that you have come across in the first couple of years that you would like to share with some other people that are starting to embark on this journey?

Dedy: The challenge with this AI space, and with AI in software engineering in particular, is that there’s so much noise, so much going on. You have to have an insight, you have to stick with what you believe, and you have to find the right balance in building for the future.

So, you build for where you believe the models will be in X amount of time from now, but you can’t build too far into the future, so you have to strike that balance, right? On one hand, I would say the balance is probably building for a few months out, where you believe the model capabilities will be, and then just sticking with your insight. And yeah, you either win big or you fail big. I don’t think there is an in-between at the moment.

Rolanda: Yeah, that’s great advice. And I’m curious, how do you maintain your own long-term focus and integrate customer feedback, right? I think a lot of founders struggle between, there’s a lot of noise in the market that you hear from competitors, from probably your investors, from different kinds of customers. How do you maintain focus between you and Itamar to make sure that you continue to build for that right three-month direction?

Dedy: On one hand, you have to stay up to date. You can’t ignore the competition. You have to strike a balance — you do need to react to things that are happening. So if all of a sudden there’s a new model that comes out that allows you to do things that you couldn’t do before, you do need to respond to it, but you can’t just be reactive.

So you need to have a roadmap and stick with that roadmap, but then you also need to build the organization in a way that people embrace change. I think the people best suited for fast-growing startups in the AI space are those who, on one hand, have the kind of grit to stick with things. And there are hardships. There are moments where, all of a sudden, we worked on something and some competitor released it a week before we were about to release it, and that situation kind of sucks.

So you need to have people who can deal with that kind of situation. But on the other hand, you have to be very, very adaptable. You do need to see what’s going on in the market, and you need to be determined. I mentioned earlier that one core company value is no fear of good conflict.

We always debate things, between us as founders, with the founding team, and with the broader team, but we also move fast with confidence. Once you decide on something, you have to move on it fast, and you have to do it with confidence; you have to make decisions. The biggest issues happen, I feel like, when you get pulled in different directions and end up not making a decision.

Rolanda: That makes a ton of sense. I mean, this has been so incredibly insightful, Dedy. I just have a few rapid-fire questions for you to wrap this all up. I think we’ve agreed on a lot of things so far. So maybe just to spice it up a little bit, the first question is, what do you believe about the future of AI and software development that many people might not fully appreciate yet, or that people might even disagree with?

Dedy: I think one area that comes to mind is how the big labs are launching their products in this space. You have OpenAI Codex, and you have Claude Code from Anthropic. And I think the way they think about it is to let the model do most of the work and keep the system layer very, very lean: the model capability is getting better and better, and the model will solve everything eventually. Context windows are expanding, so you’ll just shove everything into the context window, and the model will do it.

We have a significantly different point of view on that. We believe that for complex enterprise code bases, you’re not going to just shove the entire code base into the model’s context window on every inference. You actually need a system that preprocesses the code base, maps the relationships, and derives the insights. And you also want to give developers the ability to control the agent, define the tools for different use cases, and create workflows that are customized and configured for their specific use cases.
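As a rough sketch of the kind of preprocessing Dedy is describing (walking a repository once and recording how modules relate, so an agent can be fed only the relevant slice of a large code base), here is a generic import-graph builder using only the Python standard library. It is illustrative, not Qodo Aware’s implementation.

```python
import ast
from pathlib import Path

def build_import_graph(repo_root: str) -> dict[str, set[str]]:
    """Map each Python module in a repo to the modules it imports.
    A production indexer would also track call sites, types, and ownership."""
    graph: dict[str, set[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        module = path.relative_to(repo_root).as_posix().removesuffix(".py").replace("/", ".")
        try:
            tree = ast.parse(path.read_text(errors="ignore"))
        except SyntaxError:
            continue  # skip files that don't parse
        deps: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        graph[module] = deps
    return graph

# An agent asked about one module can then be given just that module plus its
# direct dependencies, rather than the entire code base, on every inference.
```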

So we believe in a more controlled agentic environment where you have, as I mentioned earlier, a bit of a swarm of agents, where each agent is more tailored, with different permissions and maybe different tools, and the entire thing is controlled by developers.

And this is why we also believe that developers are not going away: they’re going to manage these agents, and they’re going to configure them, build them, track them, and monitor them. So yeah, I think the majority of the market probably thinks about this a bit differently, but that’s how we think about it.

Rolanda: Yeah, that’s a great insight. And thinking outside of Qodo, and maybe even outside the development lifecycle for a second, what’s a company or AI trend that you’re really excited about?

Dedy: Outside of coding, I’m very excited about the impact that AI can have in biology, for example, and potentially finding cures for diseases. I think the next couple of decades will be very, very exciting in this space. The big labs are going to scale reinforcement learning in verifiable domains like coding; that’s not even a question anymore. It’s obvious. And I think in the rest of 2025 and 2026 we’ll see very significant, rapid improvement in model capabilities because of that scaling of reinforcement learning. And if they’re able to solve this for other fields like biology and figure out how to close the reinforcement learning loop, then we’re going to see rapid advancements in those fields.

And I’m very excited about the possibility. There are still unknowns, a lot of unknowns there, but I’m hopeful. Obviously, it’s not my area of expertise, but I’m hopeful that they’re going to be able to figure this out and make significant advancements there.

Rolanda: I think that’s really powerful. Obviously, it’s great to impact people’s work, and that’s a lot of people’s lives, but obviously, there’s the actual life part of it as well. So I think that’s a great insight there. We’ve talked about advice that you would give others building in this space. What’s one piece of advice that you would give your own past self? If you were to rewind and think about when you were starting this company, what’s a piece of advice for what you would do differently?

Dedy: I think it’s a great question. I think to always remember that this is a marathon, not a sprint, and that’s in terms of the balance you need to strike as a founder. For example, for me, I used to be very big into rock climbing, and for the first two years of the company, I basically gave up on that because I couldn’t find the time … I couldn’t strike that balance. I started realizing that this is going to be a 10-20-year journey (who knows) doing this, so you’ve got to strike a balance. So, recently, I started getting back into climbing. And for me, it really affects me in a very, very positive way — and even makes me feel more productive at work. So it’s like you’ve got to strike that balance and realize that you can’t give up on things that are really important for you just because you’re a founder.

Rolanda: Yeah, I think that’s a really powerful message for people. And at least over here in the US, a lot of developers like to rock climb on the side too. So you never know, you might find some of your future customers there. It can work out both ways. And maybe just a fun question to wrap up with: you changed your name from Codium to Qodo. I’d love to learn what Qodo means.

Dedy: So Qodo is Quality of Development — it’s code with a Q. As for the trigger to change from Codium: you’re probably aware there were two Codium companies. We both started around the same time, and there was just a lot of confusion, because obviously there’s an overlap. We’re more focused on quality, verification, and testing in enterprise organizations, so there was always differentiation, but there was still confusion. That triggered the change, and I think it worked out quite well.

Rolanda: I love that Q for quality. I’ll remember that. Well, Dedy, thank you so much for the insights today, and thanks for joining us on the Founded & Funded podcast. I know I’ve learned a lot through our conversation, and I think it’s such a great story of your guys’ vision and journey, so I really appreciate you sharing that.

Dedy: Thanks, Rolanda. This was a lot of fun.

Breaking into Enterprise: How Anagram Landed Disney with Cold Outbound


This week, Madrona Partner Vivek Ramaswami hosts Anagram Founder & CEO Harley Sugarman. Harley’s founder journey is a fascinating one — from magician to music video producer to engineer at Bloomberg and then investor at Bloomberg Beta before eventually becoming an entrepreneur. He launched a company in 2023 with a bold mission: to fundamentally rethink how we protect the human side of cybersecurity.

Originally founded to help security teams upskill through immersive training, the company has since evolved into a next-generation human security platform that’s tackling one of the biggest unsolved challenges in enterprise security: employee behavior.

In this episode, Harley and Vivek unpack how one pivot, a flood of cold outreach, and relentless focus on behavior change transformed a niche tool into an enterprise platform serving companies like Disney and Pfizer. From landing enterprise logos off of nothing but wireframes to outmaneuvering the 800-lb gorilla in a legacy industry — Harley’s tactics are a masterclass for any founder trying to stand out in a crowded market.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Vivek: What motivated you to launch this company? How did your background help you in the early shaping of the company itself?

Harley: Yeah, so I have a bit of a funny background. I actually went into school as an English major, was a semi-professional magician prior to doing that. So I really did not think I was going to be in the tech world for a long time. But after going to school in Palo Alto, sort of being very much immersed in that world, the idea of starting a company took hold, but I didn’t know what I wanted to go and do. So right out of college, I moved to New York and was working at Bloomberg, mostly there doing engineering for sort of infrastructure tools that Bloomberg was building, and sort of realized that I knew a lot about the product side of startups, but didn’t know much about all the other stuff, about the fundraising, about the hiring, all those other facets that go into creating a company.

So I found Bloomberg Beta, which is this early-stage fund that exists within Bloomberg and is very focused on the future of work, which is something I was also very interested in. So I joined for two years, essentially with the ultimatum that after two years, I wanted to be fired and forced to go and start a company. And the team I worked with was very supportive. I worked with a woman, Karen, out of New York who held me to that promise. And for me, security was always a very natural fit. There’s a lot of engineering within security that I find really fascinating; that was my specialist focus at university.

And really for me, there was always this interesting intersection of the technical side of security and the human side of security. And that human side was this sort of much fuzzier problem than the technical side. And it was a problem that a lot of people had tried to solve, but hadn’t really been able to solve. And so, that was what gave me the trigger to say, “You know what? Let’s go into this space. Let’s look and see where there’s opportunity.” And that led to the initial vision for the product, which was focused on security teams and eventually led us to what we’re now building at Anagram.

Vivek: Yeah, and I think obviously you had a lot of interesting developments leading you to even starting the company in security in the first place. But then, the original vision of the company was very different, or certainly different from what you’re doing today, which was around upskilling cybersecurity employees and teams with this kind of capture the flag style approach. But then, you pivoted the company in 2024 and saw some rapid success from there. What sort of led you to this pivot? Walk us through what your thought process around that was.

Challenges and Market Realities

Harley: So the original product that we built, which I stand by as being an awesome product, was this way of evaluating security talent and training security talent through this idea of puzzles, right? In security, there’s this culture of capture the flag, which is we’ll give you a piece of broken software or vulnerable software, and your job is to figure out how to exploit that vulnerability. And in doing that, you very quickly understand, “Oh, okay. This is how I would defend against that in the future.” And it’s a really cool, engaging way to teach people to evaluate what somebody knows. And so, we started off building software very much focused on solving that problem for security teams, which I can talk about at length.

But I think one of the issues with it is that security is a very gate-kept industry. There are a lot of certifications and a lot of feeling that you need to have gone to a certain kind of school, or have a certain degree, to get into this field, and I didn’t buy that at all. So that was the first product that we launched. We got a little bit of traction with it. And candidly, we realized quite quickly that the market wasn’t there for it. We launched in the end-of-2022 timeframe, into a market that was very much contracting. Companies that had big security teams (financial institutions, tech companies, et cetera) were on hiring freezes. They were doing layoffs. They were downsizing. So we quite quickly learned that there was a cap to how big this business could be.

And then, the other challenge with it was the needs and the requirements for security training at these different orgs looked fairly different, given the risk profiles, given the compliance frameworks you needed to worry about, given the kind of software that you are developing. So we decided to make a pivot, but we knew that we had built something special because the feedback from users was really, really positive. They loved the puzzles. They loved this idea of critical thinking and keeping things short, but engaging. And so, we basically started talking to the customers that we had, and we said, “Okay. Where are you feeling this pain, where this kind of solution could be interesting?”

And the thing that came up more and more frequently was this idea of training users who weren’t necessarily on the security team: training the general population. And it’s a very known issue. Human risk tends to be the biggest hole for most enterprise companies, meaning someone clicking on a phishing email, or somebody sharing data with their personal email or with someone they shouldn’t. Now, increasingly, there’s a lot of AI risk around what information gets put into models and what that company does with that information. And there hasn’t really been a good solution to that problem. There are a lot of companies out in that space. It’s a very popular space. But the approach most of these companies have taken is very cut and dried, right?

Pretty much everyone watching or listening to this will have had to go through some kind of security awareness training in the past. That’s usually 45 minutes of videos talking about what a phishing email is, followed by a kind of reading comprehension quiz that looks like it came out of the SAT, and that is not a good way to train people. And what we learned was that there was real appetite to take this more engaging way of educating and teaching and apply it to this space, which is a much broader space, because every company above a certain size needs to do this employee training. And so, that was the seed that led to Anagram.

To be honest, I was quite hesitant to do this initially. I thought we were going into a space that was very commoditized. I felt it was a little bit of a race to the bottom. So I sort of felt like, “Okay, if we do this, we have to do it right and we have to do it differently.” And so, that was the initial sort of hump that I had to get over was my own kind of internal bias like, “Does this actually make sense? Is this a good idea? Is this not a good idea?” But we got really, really good feedback, and so we then started doubling down. And it kind of didn’t even really feel like a pivot at all.

Executing a Startup Pivot

Vivek: Take us through that actual pivot. You talked about why, externally, you decided to pivot. You saw one direction, and you said, “Okay, you know what? Actually, if I take the company and the product in a different direction, I’m seeing a lot of immediate traction there.”

But internally, what actually happened? If you just think about the tactics of this: did you wake up and tell the team, “Folks, we’re going to change the direction of the company, and this is the product we’re going to go after”? Did you have to change the team? What happened on the internal side after you made that pivot? Because that’s something so many founders have to go through.

Harley: So the way that we thought about it or the way that I thought about it was every step kind of felt like a natural progression of the step before. So I actually don’t think I remember waking up one day and saying, “Hey, you know what? We’re now a security awareness company.” That didn’t happen. It actually felt very organic. I make decisions well when I see data. That’s kind of the framing through which I view the world. And when we started selling this new product, we started sort of ideating on, “Hey, okay, security training for security teams. That’s probably going to hit a wall at some point. Where can we expand?”

We started just running a ton of little experiments around messaging and outbound, outside of our existing customers, and we said, “Okay. My hypothesis — this is probably not going to work.” Right? There are a million companies out there. CISOs kind of think this is an unsolvable problem, because the framing of human risk is that you can’t fix the human, which I think is a really terrible framing. And so I said, “All right. Well, we’ve got these customers who are willing to give us some money. Let’s focus some effort on building that product as cheaply and as quickly as we can.”

And then in the meantime, let’s just start running some tests. Let’s start reaching out via LinkedIn, via email to CISOs, and see if we can get any interest like, “Hey, we’re going to try this a little differently.” And we got a lot of bites that way. Of our first, I think, 20 customers, 80% of them came through cold outbound. It wasn’t my network. It wasn’t people doing us favors, right? We had a couple of existing customers who converted, but the vast, vast majority came from us just reaching out. And I think that was the thing above everything else that made me say, “Actually, this is cool. There’s a there there. There’s an opportunity here.”

And people are sick of the status quo, and they feel like there is a chance for us to build something here that is differentiated, and feels unique, and feels innovative. And so, we just slowly started spending more and more of our time on it. And there was a moment, probably early to the middle of last year, where we decided as a team that this was going to be our focus now, and we started spending 80% of our time on this product. We closed more revenue in the first six months than we did in the past 18 months building the original product. And so at that point, it was fairly obvious: “All right, we’re going to do this.” And for the team, a few of them came and said to me, “It kind of didn’t feel like a pivot.”

Someone said something to me once that I thought was kind of interesting: there are two genres of pivot, a market pivot and a buyer pivot. It’s hard to do both, but you can do one. We made a buyer pivot, right? We are still selling to security. The process is not a million miles away; it’s this top-down enterprise sale. But the end user, the experience, the problem that we’re solving is fairly different. And we were kind of lucky that there was a lot of DNA from product one that could apply to product two.

But yeah, it felt I think very natural and we didn’t need to make any team replacements or anything like that.

Vivek: I think that’s where the word pivot is misleading. It just sounds like you’re making a hard pivot, where so many things are changing, like you got rid of one team and brought in another. In this case, it’s almost more of a transition, right? There’s a transition of the product, but there’s also a transition on the buyer side. The buyer decided, “Okay, you know what? This first thing, I’m not going to spend that much money on. But this second thing you’re doing, actually, there’s budget for this.” And now you can tap into that budget.

And I think the thing you said that was really interesting, and that is differentiated, is that so much of your early traction came from you reaching out cold on LinkedIn and email. And these aren’t just small SMB customers. This is Disney, and Pfizer, and big blue-chip customers. Maybe talk through that a little bit. What was different in what you were doing that allowed it to work?

Security Awareness Training That Works

Harley: A few things that we did, I think, worked for us and would work for other people. We always started from a place of feedback. In the early days, we didn’t have a ton of functionality in the product. We had a lot of wireframes, we had some basic things that we could demo, and we were very upfront about that. We weren’t saying, “Hey, we are going to come in and solve all of your problems.” We came in and said, “Hey, security awareness training sucks. You probably hate what you’re doing. We’ve taken this approach that is a little bit different. We’d love to show it to you. We’d love to hear if the approach resonates with you. Do you have 15 minutes?” And I will die on the hill that a cold email is the highest reward-to-risk-ratio proposition that you will ever encounter as a founder.

It costs you literally as close to nothing as it can cost you, right? Some time and some thoughtfulness around who you reach out to and how you reach out. And the worst that can happen is they ignore you. And maybe if you reach out to them a year later, they respond, but they’ll have forgotten about the first one. So there’s zero reputational risk really if you do a thoughtful one, but the potential upside is so high. So we really leaned into that. Just try it. But as I say, I think coming at it from a place of, “We want your feedback. We are not telling you that this is going to solve all of your problems.” Coming at it from a place of the open secret that in this space, there are a bunch of issues. And hey, we are thinking about how to solve this, and we’re being innovative. I also think, tactically, if you frame yourself as earlier than you are, that can help.

Even now, we send stuff from my university alumni email address, and we try things where we say, “Hey, we’re a team of… We recently founded a company,” even though the company’s been around a year. I just think having that framing of being early gives you two things, especially with C-levels at big companies. One is it puts them a little bit more at ease. It feels like, “Hey, I’m not just going to get pitch-slapped by this dumb company; I get 400 of these every day.” And the second thing is a lot of these big C-levels love the idea of paying it forward and helping startups. Some of them might become investors; some of ours did become investors. Some of them might want to help connect you with VCs that they’re connected with. So there is this idea of a rising tide lifting all boats that I think we were able to capitalize on as well.

Vivek: And I think you’re also approaching this with a level of humility that is really important, right? It’s not this idea of, “Hey, buy us, and we’re going to solve all your human risk problems,” or, “We are the greatest security training tool.” It’s, “Hey, we have a different approach. Why don’t you try us out? Or at least get on the phone and talk about it. We’ve had this success, and you can build on top of that.” And I think that’s great. By the way, this is probably the first time I’ve heard “pitch slapped.”

Harley: I didn’t come up with that.

Vivek: I was going to say it’s very good. And I’m not sure if we’re going to be able to use it or not. Hopefully, we can. But I think one thing you noted there is really important, which is that this is a space that’s been around for a long time. Right? As you say, at a certain size, you need to have some level of security awareness training products within your organization. Now, the vast majority of them today are very much check-the-box, and it’s not a great experience. And I’m sure 90% of the people who are listening to or watching this podcast have gone through that. Now, I’d love to hear how Anagram thinks about standing out in this space.

We’ve talked about this. There are almost two sets of competitors. One is all these new-age, AI-forward companies, many of which are getting venture funded, and a lot of them have been out there for the last few years. And then, there’s this one giant behemoth in KnowBe4, which has been around probably the longest, the first one in this space, and is sort of the 800-pound gorilla in this market. For a product and a company that’s only been around for a couple of years, how do you think about how Anagram competes in this space? Do you think about all of these? Do you think about one side more? Walk us through the competitive set and how you compete and find success.

Harley: So I think for what we do, it is a fairly commoditized space. There are a lot of startups. There are a lot of incumbents. There is really one massive incumbent, which is KnowBe4, and they’ve built a machine. I’ve got to give them credit, because they have managed to dominate the industry. When we go into these enterprise companies, nine times out of 10 we’re competing with KnowBe4, or with stuff they’ve built internally. But it’s unusual, I think, to see a space within security where so few of the big customers are using a startup as a solution. When we go after these customers, we’re very rarely competing against the startups. We’re usually competing against KnowBe4. Sometimes, a company like Proofpoint, an email security platform, has some training built into it.

And then there are the startups doing the AI solutions. We run into them a little bit more at the mid-market, maybe 500 to 1,000 or 1,500 employees. We learned quite quickly that our bread and butter is the big enterprise. We found that we can serve 1,000-to-2,000-person companies pretty well, but those processes tend to be very competitive. You’re going against all these other AI-driven companies, and there’s a lot more shininess. You’re also selling to a persona who’s got a lot more going on, in the sense that there is typically one person. Maybe it’s a CISO. Often, it’s not even a CISO. It might be a director of IT who’s wearing 17,000 hats, and security awareness is one of those hats.

And so they just need to get something in, check the box, and get it done as quickly and as cheaply as possible. And that’s not really the company that we’re going after. What we’ve realized is that these big enterprises have the biggest risk surface, because they’re dealing with 10,000, 20,000, or, in our bigger customer cases, 400,000 to 500,000 employees. They have to think about this with a bit more sophistication. They have to think about: how do we target training to different parts of the org? How do we customize what people get? We deal with a lot of companies that have manufacturing facilities, and those workers are hourly. And so for them, training that is 45 minutes long versus 15 minutes long, all of a sudden…

Vivek: An hour of pay.

Harley: Right. Exactly. Yeah. If you take 10,000 manufacturing workers and multiply that by 30 minutes, that’s 300,000 minutes. I think I did the math right.

Vivek: It’s a lot of time.

Harley: Yeah, it’s a lot of time. But it’s interesting, because what we learned quite quickly is that that is our bread and butter. Those are the programs where we can go in and show ROI really, really quickly. And so, that’s where we focused. In terms of how we’ve looked at it from a product perspective: I think if you look at the incumbents, we’re all familiar with that product. As I say, it checks a box, but it doesn’t lead to behavior change.

And I think the big problem in this space, and the reason you’re seeing a lot of startups in this space, is that CISOs have realized this training doesn’t really do anything. It is a thing that we have to do because our compliance framework says we have to do it, but that’s kind of as far as it goes. And the analogy I’ve started to use is if we taught kids, if we taught school-age kids the way that we expect adults to learn security awareness, within two weeks, there would be protests in the streets.

Vivek: Yes.

Harley: We’re not going to let our kids just sit in front of a screen and then take a comprehension quiz. That doesn’t actually teach you anything, right? There’s been so much research on how behavior change works at scale and how education can work at scale, not from security companies, but from companies like Duolingo and TikTok.

Vivek: Right.

Harley: And you need to incorporate those kinds of technologies and techniques into awareness training. The big incumbents don’t do that. And then, even the smaller startups that are very AI-forward say, “Hey, we’re going to take the phishing simulations that you do and let them be AI-generated,” or, “We’re going to have AI-generated videos.” Again, there’s some value there, but to me it’s kind of the lowest-hanging answer to the question of how to incorporate AI into this training.

And I just feel like if you put someone on the spot and give them three minutes and say, “Hey, how do I incorporate AI into awareness training?”, that’s what they would come up with. So what we’re trying to do is go one level deeper and look at the workflows that our users are engaging with day-to-day, and figure out how we insert nudges, behavior change, and training into those workflows. And I think that’s something that no one’s really doing right now.

Vivek: At the end of the day, for most of these customers, they would like to drive behavior change. Right? It’s very hard to do.

Harley: Yeah.

Vivek: Because we’re used to the way we do things at work, right? And so, being able to show, especially at these big enterprises, that you can drive behavior change, and what leads to the behavior change that makes your environment more secure, is so important. And I would say that one of the things that attracted us to Anagram was taking this enterprise-first approach, right?

A lot of companies will go say, “Hey, you know, let me start with a small customer and kind of work my way up.” Versus just going, “No, Disney has a complex environment, and that complexity is what’s going to make our product shine.” What advice do you have for other founders who are trying to break into the enterprise? Because enterprise is not easy, right? And these large companies are not easy. What sorts of things have you found successful for Anagram that you think might be relatable to other founders?

Enterprise Sales Strategy

Harley: So I think within the enterprise, you really have to put in the time understanding how their business works. It becomes much more of a consultative sale than a prescriptive sale, because every enterprise is its own beast. There are power dynamics at play that you will not get a sense for until you’ve spent real time with them. And what we’ve started to do that is showing a good amount of promise is to look for a specific problem that we can solve. Let’s take J&J, or Kenvue, or Disney as an example.

They have such a huge number of challenges that if you try to say, “Hey, we’re going to solve everything day one,” first of all, they’re not going to believe you if you’re a company at our stage. Second of all, the number of people who would have to buy in for you to go and solve all of those problems is probably in the dozens (not always, but oftentimes), and there are different departments that might need to buy in. I’m not even talking about budgeting yet or dollar amounts; just the sheer complexity of the sale goes up so quickly. So what we’ve tried to do is really focus on specific areas where we can improve. It might be training people who click on phishing emails a lot. It might be doing a little bit more targeted training for certain parts of the organization. But this land-and-expand motion, I think, works really, really well within those enterprises, because you get the chance to build some of that trust.

And you get in the door a little bit quicker. Your contract size won’t be that big initially, but our mission is to build trust and get them to enjoy working with us, and that unlocks volumes, I would say, in terms of the ability to expand. Also, social proof, right? Once you’re in one of them, you can kind of name-drop that one, and then all of a sudden you get a little bit of FOMO among the enterprises, which is always nice.

Vivek: Yeah. Well, you should be less humble about it. It’s not easy to break into these companies in the first place, and then expand. But as you say, if you can get your foot in the door, you have a good starting point that can give you a lot of leverage in moving across the organization.

Choosing the Right Investors

Vivek: Let’s switch gears here a little bit to fundraising. Tell us a little bit about the fundraising journey. And I guess I’ll put you on the spot a little bit: you weren’t really fundraising when we talked, so why did you even take the call? Give us a little bit about that experience.

Harley: Yeah, we weren’t fundraising when I first met you. I think we had planned to fundraise around now, like March, April, May; I guess we’re in June of 2025. At the end of 2024, when we got introduced, I was really just in the early stages of saying, “Okay, I probably need to get my reps in fundraising.” And I remember this from my time as a VC: we used to give this coaching to our founders. Fundraising is like a muscle, and you just have to remember how to talk about your business.

What are the common questions that come up? What are some of the big things that you’re going to have to answer? What’s your competitive set? How have they fundraised? There’s a lot of prep work that I think goes into a good fundraising process. And so, this was going to be the very, very early stages of that process for me. We got connected through my little brother, of all people.

Vivek: You owe him a nice Christmas present.

Harley: Yeah, exactly. And he was like, “Oh, I met these guys at Madrona. They seem really smart. Do you want to have a chat?” I said, “Yeah, I’ll get some practice swings in.” And yeah, I think on our side, we were in a fairly good position because we had a lot of momentum. We had closed a lot of contracts. We were and are growing pretty well. But I was just really surprised. I loved the conversation that we had, and you asked really good questions. I also fundamentally believe that fundraising… And I hate the cliche, but fundraising is kind of like a marriage. This is a long-term partnership, right?

This is not, “Oh, they’re going to write a check, and then disappear.” This is — you are going to be seeing these people or speaking to these people multiple times a month, or once a quarter in a board meeting, or whatever it is. And so, you need to get along on a professional level, obviously, and have a shared vision for the company. But also, you need to just want to spend time with them, and trust that you can be honest with them, and have these conversations. Life’s too short to have friction like that. And yeah, as I say, I just… You know? Not to big you up too much on your own podcast.

Vivek: We’ll take it. We’ll take it.

Harley: But I just thought it was a really great conversation.

Vivek: The feeling is mutual. And I think, for us, we had looked at a number of companies in this space. But I feel like it was the approach Anagram was taking, but also your authenticity, right? Outside of just having a very interesting approach in this space, it’s: is this the kind of founder, the kind of person, who is going to be able to make changes when the market throws a million things at you? It takes time to suss that out. I remember we spent half a day in New York and went for dinner. Those are the kinds of things that get us really excited. Great traction and customers are important, but ultimately, so much of the company is defined by the founder in the early stages. And I think getting to those things is really important.

And I think the other underappreciated part is that when you already have a great group of investors, like in your case, where you already had Bloomberg Beta and GC, you want someone who can also fit into that board dynamic while challenging you in certain ways. And I think you’ve been very lucky to have a great set of investors, and I’m not just talking about ourselves. It’s been fun to be part of that dynamic in that group. If you were to give advice to founders about fundraising and choosing a partner for the long term, what are one or two things you’d share, maybe that they don’t hear as much, after having gone through this process?

Advice for Founders

Harley: I would say the biggest thing is choosing someone who you get along with. To me, that is far and away the most important piece. And this isn’t necessarily the case for every CEO, but I really value collaboration, and I really enjoy hearing other people’s ideas. I think of myself as a sponge. I might take in all the information, and then just wring it out and ignore all of it, but I love absorbing it. And so, I need someone who I can listen to and whose opinion I’m going to care about.

And I think if you are someone who always thinks that you’re the smartest person in the room, if you’re someone who has to get the last word in, you are just going to… I just know myself, and I know that I’m going to start dismissing what that person says or taking it with a grain of salt. And so for me, I was really solving for that. Who is this person who is going to be a thought partner? Because ultimately, you also have to, as a founder, know where to take the advice and where not to take it. VCs are great, and we have been very lucky. We have a really good batch, but they are backseat builders, right? They are not there in the trenches. They don’t see what’s happening day-to-day. So you, as the founder, have to make the call ultimately, but you need someone who is going to have that little bit of humility on the VC side and say, “Look, I know that I am not necessarily the best person to answer this question. You’re the best person to answer this question, and I trust you with that vision.” Being a VC, I saw this happen in a positive, constructive way.

And I saw this happen in very negative ways, where board meetings were just this thing that the founder, and even some of the VCs, dreaded because they knew there were going to be conflicting egos. It’s needlessly destructive because it doesn’t contribute to building the business. It doesn’t contribute to a good relationship between these people who are spending a ton of time together and are really invested in each other’s success. So yeah, as I say, for me, it’s a soft answer, but I think that interpersonal connection is massively important.

Vivek: That totally makes sense. And let’s end with a couple of rapid-fire questions here. So, outside of fundraising, if you could provide founders with one piece of advice, what would it be?

Harley: Send more cold emails than you think you should.

Vivek: Love that. Okay. Send more cold emails. Yeah, I think you probably learned some of this skill as a VC too, right? This is what we do all day. Now, let’s talk about hiring for a second. What lessons have you learned around hiring, especially when it comes to the kind of talent you want to get at the stage that Anagram’s at?

Harley: This is a longer answer for a quick-fire question, but it’s difficult to get talent that can scale at any stage of a company. What is a good fit for someone when you are 2, 3, 4, 5 people might not be a good fit when you are 10, 11, 12, 13 people, which might not be a good fit when you’re 25, 30, or 50, or a hundred. And so, always being cognizant of, is this the right blend of talent that I need for my team at this stage of this company? And then, a lesson that I have learned and a mistake that I’ve made that I’m trying to internalize and continue to get better at is not letting bad hires or mistaken hires drag out. So, making those decisions quickly.

And I say that not in a “let’s just be cruel and callous about it, and just hire and fire everyone” way, because that’s really bad for the culture. But there genuinely is, for most people, a stage of company where they will shine and a stage of company where they will not. And we’ve hired, a couple of times, people who I think would be really great fits for companies where there was a little bit more infrastructure. Maybe if the company were 50, or a hundred, or a thousand people, they would be a really, really strong A-level player. But when you’re five people, when you’re 10 people, there’s just so much ambiguity, and so much comfort that you need to be able to do without, that it just wasn’t the right fit for them or for us, right?

It’s not really fair for them to be wasting their talent at a place like us. And it’s not really fair for us to be bringing them in where they’re not contributing what they could be, right? So I think, make decisions quickly. Also, a smaller point — hire more junior than you think, especially at the earlier stages. Someone who is earlier in their career, maybe, or not as experienced, but really hungry and learns really quickly? I will take that 10 out of 10 times over someone with a bit more experience who’s maybe coasting.

Vivek: Yeah. In fact, it feels like we’ve actually had more success with that type of hiring, watching those people grow and do all the tactical, get-in-the-mud work, versus bringing in someone senior. There’s a time for that. But I think, as you say, for seed and Series A, early-stage companies, that kind of hire makes a lot of sense. Last question for you, Harley. Looking out five years, where do you see this market and where do you see Anagram?

Future of Security Awareness Training

Harley: I think that the current incarnation of security training disappears. I think it has to. One of the nice things about the way the current compliance frameworks are written is that they’re actually very vague. They just say, “Well, you have to train your employees annually on security relevant to their jobs.” That’s pretty vague. And what that has meant so far is we’ve done the lowest-hanging fruit: “Okay. Well, we’re going to give employees annual training about security,” and it doesn’t work.

And I think that as AI attacks become more sophisticated, as the emails that phishing farms can create become more personalized, higher quality, and more relevant, the email security platforms are going to struggle to detect them as effectively and struggle to prevent them from landing in employees’ inboxes. I think we’ll see it with AI language models and these tools that we’re using, too. Even companies that ban them outright are still getting a ton of data leaked into them, because employees are just putting it on their phone or taking a screenshot of the code and loading it into ChatGPT.

These are all stories that I’ve heard. And AI is just this massive tailwind toward humans needing to become better at detecting and preventing these security breaches, right? Already, humans account for 70 to 80% of the breaches that big enterprises face. I think that number is just going to get higher and higher, because the tools that attackers use get more and more sophisticated.

And so, the only way we can solve that is to actually create behavior change and actually impact the way that users think about security. And for us, as I say, annual training is not the way to do that. So that’s where we are really focused on innovating: the simulations and tests you can use to train your employees, the format of the training itself, and then ultimately the workflows, getting into those workflows and pointing people in the right direction.

Vivek: It’s awesome. Well, Harley, this has been great. We’re so excited to be on this journey with you. It’s a pleasure to have hosted you here, and I’m excited for everything you do in the future. So thank you again, Harley.

Harley: Awesome. Thank you for having me.

The End of Biz Apps? AI, Agility, and The Agent-Native Enterprise from Microsoft CVP Charles Lamanna

 

In this Founded & Funded episode, Madrona Managing Director Soma Somasegar sits down with Charles Lamanna, corporate vice president at Microsoft, to unpack his journey from startup founder to corporate leader. They dive into what it takes to build a successful AI-transformed organization, how the nature of business applications is evolving, and why AI agents will fundamentally reshape teams, tools, and workflows. This conversation offers a tactical AI adoption playbook—from “customer obsession” to “extreme ownership”—as Charles delivers insight after insight for startup founders and enterprise leaders navigating the age of AI.

They dive into:

  • Why business apps as we know them are dead
  • How AI agents and open standards like MCP and A2A are reshaping software
  • The shift toward generalist teams powered by AI
  • What startups are doing today that enterprises will follow in 3–5 years
  • How to focus deeply on a few high-impact AI projects instead of chasing 100 pilots

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Soma: Now, Charles, you came back to Microsoft when Microsoft decided to acquire your company, MetricsHub. How was that entrepreneurial experience, and then the transition back to a large company? Any learnings you want to share that other founders or entrepreneurs might find interesting or valuable?

Charles: Absolutely. From my time outside of Microsoft, there were two big things that I learned and internalized in a profound way. The first is true customer obsession. That means even if you’re the engineer writing code, really understanding exactly how your customers use your product, what problems they’re trying to solve, and what they’re looking to get out of it. That obsession has followed me since then, and I’ve really tried to bring it back and inject it deeply into Microsoft.

We do things like customer advisory boards. We had one a couple of weeks ago where a few hundred customers came to town, and we have great quantitative analysis of how people use our products, where they get stuck, and what our retention and funnel look like. Those are things that you sometimes forget or lose in a big company like Microsoft, because you have this amazing go-to-market arm, and you can make money even if you aren’t delighting your customers. That has been a huge change. And of course, no startup is going to be successful if they don’t really understand their customer and the pain points they’re going through.

The second thing is this idea of complete ownership. When you have this sense of complete ownership, it doesn’t matter who’s responsible for doing something; if it’s necessary for the product or the business to be successful, you do it. And that is the biggest separation between a big company and a startup, because in a startup, you don’t look around and say, “This is somebody else’s job.” It’s your job. If you’re a founder, everything is your job. Whether that’s responding to a customer support request, figuring out how to set up payroll, or doing financing. All of that is part of the job, and you never second-guess it. You never think for a second, “I need to hire someone to do this,” or “this is somebody else’s problem.”

That’s another thing: as you go into a big company like Microsoft, sometimes it’s easy, because there’s such a robust support framework around you, to say, “Oh no, I don’t do marketing, I don’t do finance, I don’t do this selling process.” Bringing back that extreme ownership has made it so much easier to create these successful businesses inside of Microsoft over the last 10 years: things like Power Apps and Power Automate, and really expanding Dynamics 365. It’s that sense of total ownership.

Soma: I love those two things, customer obsession and complete ownership. Thanks. As I was just mentioning, Charles, we’ve heard Satya and Microsoft publicly say on many occasions, “Hey, business applications as we have known them are dead.” We know that with AI, there is a tremendous amount of re-imagination happening around what business applications could mean or could look like, including at Microsoft. How do you think about this?

Charles: As the guy at Microsoft who works on business applications, sometimes the truth hurts, but business apps as we know them are indeed dead. I think that’s just the truth of it, and the analogy I always make is that it’s going to be like mainframes. I’m not saying tomorrow there will be $0 spent on CRM and ERP and HCM inside the enterprise. People will probably spend the same amount of money they did before, maybe a little bit less. But they’re not going to do any innovation or any future-looking investment in that space, because a system of record designed for humans to do data entry is not what transformation is going to look like in the world of AI agents and AI automation.

Instead, what will probably happen is you’ll see this ossification of the classic biz apps and the emergence of a new AI layer, which is very focused on automation and completing tasks in a way that extends the team of humans with AI agents that go and do work. And if I break down what’s in a biz app, I’ve always thought there are basically three things. It’s a basic form-driven GUI application on mobile and the web: a list of things, and you can drill in and edit the individual things, whether it’s orders, or leads, or sales items. It’s a set of statically defined workflows, which codify how a lead goes to an opportunity, or how you close a purchase order. Very fragile, not dynamic.

Then it’s some relational database to store your data. That’s what a biz app was. Those aren’t the three elements of what a business application is going to look like in the future. Instead, it’s going to be closer to business agents. You’re going to have a generative UI, which AI dynamically authors and renders on the fly to exactly match what the person’s trying to do. You’re going to replace workflows with AI agents, which can take a goal and an outcome and find the best way to accomplish it. And you’re going to move from static relational databases to things like vector databases, search indexes, and relevance systems, which are a whole new class of technology. When we fast-forward 10 years from now, you’ll look at those two things and they’ll be so clearly different, but right now they’re just beginning to separate. The gist of it is yes, indeed, the age of biz apps is over.
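To make the contrast concrete, here is a minimal sketch in Python of the difference between a statically defined biz-app workflow and a goal-driven agent loop. All names are hypothetical, and the planner is a toy stub standing in for a real model; this illustrates the idea Charles describes, not Microsoft’s implementation.

```python
# 1) Classic biz app: a statically defined workflow. Every step and
#    transition is hard-coded, which is why these flows are fragile.
def lead_to_opportunity(lead: dict) -> str:
    if lead["score"] < 50:                      # hand-written business rule
        return "disqualified"
    print(f"creating opportunity for {lead['name']}")
    print(f"notifying owner {lead['owner']}")   # fixed sequence of steps
    return "opportunity"

# 2) Agent style: state a goal and a set of tools; a planner (an LLM in
#    practice, a stub here) picks the next step until it decides it's done.
def run_agent(goal: str, tools: dict, plan) -> str:
    history = [goal]
    for _ in range(10):                         # cap iterations for safety
        name, args = plan(history, tools)
        if name == "done":
            return history[-1]
        history.append(tools[name](**args))
    return history[-1]

def toy_plan(history, tools):
    # A real system would ask a model to choose the next tool call.
    if len(history) == 1:
        return "qualify", {"lead": {"name": "Acme", "score": 80}}
    return "done", {}

print(lead_to_opportunity({"name": "Acme", "score": 80, "owner": "dana"}))
result = run_agent(
    "Turn the Acme lead into an opportunity",
    {"qualify": lambda lead: f"qualified {lead['name']}"},
    toy_plan,
)
print(result)  # -> qualified Acme
```

The difference in shape is the point: in the second version, no sequence of steps is written down anywhere, so swapping the stub for a real model turns a fixed workflow into the kind of goal-seeking agent Charles is describing.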

Soma: When you mentioned forms-based UI, workflow, and database, you literally transported me back to my VB days.

Charles: Yes, yes.

Soma: Those were the things we were thinking about to help democratize application development. The fact that 20 years later we are still there, at least in today’s deployed application world, tells me it is time for some disruption and some innovation.

Charles: Yes, exactly. I always joke, if you go and you look at a biz app that ran on a mainframe, it looks remarkably similar to a web-based biz app of today. That’s not going to be true in 10 years.

Soma: Whether it was the internet wave or the mobile platform wave, it always takes several years, many years, before you would find what I call canonical applications that define what the platform is capable of. In the AI world, I sometimes wonder whether that is still ahead of us as opposed to behind us. For all the hoopla and excitement that the world has seen around ChatGPT, that’s one sort of AI app that has gotten to what I call some critical mass in terms of adoption and usage.

Now in the startup world, there are a bunch of others like Perplexity, Glean, Cursor, Runway, Typeface, and a whole host of other companies that are getting to some level of critical mass. Some of these applications are targeted at consumers, some of them are targeted at enterprises, and some of them have aspirations to go both directions. What do you think is going to be the time when we can look and say, this is what a modern business application is going to look like, and throw away all the mental models you have about what that could be? Do you think it’s around the corner? Do you think it’s a few years away? What do you think?

Charles: I think we’ll see what the shape starts to look like very clearly in the next 6 to 18 months. I think because you already have glimmers of it, and then I think it’ll take longer to be mainstream. The refresh cycle of biz apps and core business processes takes a little bit longer, but in my mind, by 2030, this will be the prevalent pattern for business applications and business solutions. And in the next 6 to 18 months, you’ll really have it codified.

We can look to some of the places that have moved faster. I’ll use Cursor as a great example. Cursor is an AI-powered application tailored to provide an entirely AI-forward environment for a coder or developer. If you think about it, the same type of work happens in sales, or customer service, or core finance, whether it’s budget analysis or reconciliation, or in core supply chain. You’re going to see things like Cursor or GitHub Copilot show up for each of those disciplines, extremely tuned to take what people used to do and reimagine it with AI.

Just like how you have things like vibe coding, you’ll have vibe selling, vibe marketing, and vibe legal work. Those things will all show up. There are great companies out there. Harvey is a great company on the law side. There are a lot of companies that are emerging that are starting to do that. And of course, I’m biased. I think we have a lot of great stuff at Microsoft. We have very broad adoption of our Copilot offerings, but I think we’re going to see that fill out by industry, by business process, and by function.

The last thing I would say, which I think is probably one of the more interesting elements of all of this, is that right now we’re taking the way organizations are structured and just mapping it to this AI world, right? Oh, you have a sales team, so they need AI for sales. You have a customer support team, so you need AI for customer support. I don’t know if that will be what the world looks like at the end of the decade. You’ll have new disciplines and new roles. Maybe you don’t have sales and customer support as two divisions. Maybe it’s one. Maybe sales, marketing, and customer support all become one role, and one person does all three. I think we’re going to reason through that, and that element is what will probably take the longest. We’ll probably have a wave of great technology for the old ways of working, then new ways of working will emerge, and then a second wave of great technology for those. All I know is it’s definitely going to be an exciting couple of years.

Soma: Your last point particularly made me think about this, Charles. Instead of AI for sales, and AI for finance, and AI for this and that, do you think people are starting to think about what people in a company actually need to do to get their work done, and about workflows that may or may not stay within a particular function or discipline but cross disciplines? Do you think there’s enough of that push happening already, or is it coming in the future?

Charles: It’s very early. What’s amazing is that startups are already doing this, because in a startup, where you have extreme ownership and have to do whatever it takes to succeed, you don’t feel constrained by disciplines and boundaries. If you want to see where the enterprise world or mid-size companies are going to go in three to five years, look at what startups are doing right now: different structures, different ways of working. And there are two things that I think are going to drive a lot of this transformation.

The first is that these AI tools bring experts to your fingertips. As a result, you can be a generalist with a team of expert AIs supporting you. That’s how I feel every day. I have an agent that helps me with sales research. I’m not a salesperson, I’m an engineer, but I don’t have to go talk to a salesperson to get ready for a customer meeting. I have a researcher agent, which helps me prepare and reason over hard challenges. I have a document editing and proofreading agent, which makes me a better writer. I have all these tools, which make me more of a generalist overseeing this set of AIs. What that translates to is probably de-specialization in the enterprise, de-specialization in companies, where you have fewer distinct roles and disciplines and more generalists powered by AI. That’s item one.

The second thing is: what makes a team? We always think a team is a group of people. The big change is that a team becomes a group of people and AI agents. That’s really how we need to start thinking about how we organize companies, and how we even go out and do hiring. If you think about who you work with, increasingly I think of it as: here are the people I work with, and here are the AI agents I work with to get a job done. That means you have meetings, you have calls, you have documents you work on together. Those two things will help drive that transformation. It’s not like a startup sits down and says, “How should we structure ourselves for the future?” They tackle this problem, that problem, the next problem in the best and most efficient way, and it happens to look like that. So those are, I think, probably a lot of the changes that we’ll start to see.

Soma: You talked about this notion of, a team as not just a bunch of people, but a bunch of people plus a bunch of AI agents. Can you take it one step further and say, hey, every information worker or knowledge worker is really a human being plus a bunch of AI agents at their disposal? Is that a good way to think about it?

Charles: Absolutely. The way we approach it is that every individual contributor, everybody who individually does work, will increasingly become a manager of AI agents who do the work. We have a thing we talk about internally at Microsoft: in the past, we built software for knowledge workers to do knowledge work. In the future, probably most knowledge work and most information work will be done by AI agents, and a knowledge worker’s main responsibility will be the management and upkeep of those agents.

Soma: To orchestrate and manage.

Charles: Exactly. That’s where you get this idea that you can be much more of a generalist and an expert, and this is how you get a huge productivity gain. You’re not talking about being 10% or 15% more productive. We are all going to have entire teams of AI agents working for us. We can be 5 or 10 times more productive if we get that right, and that’s what gets me excited, because that’s what starts to change the shape of the economy and really create an abundance of doctors, lawyers, software, and all of those things.

Soma: People fondly refer to 2025 as the year of agentic AI. First of all, do you agree with that? How do you see the role of agentic AI or AI agents as far as the next generation of business applications go?

Charles: It definitely is the year of agents. Everyone I talk to, from the smallest to biggest company, understands what agents are and they want to get started with deploying agents in their enterprise. You can see, you have Google with Agentspace, you have Salesforce with Agentforce. We have plenty of agents at Microsoft in and around Copilot. OpenAI is talking about agents, Cursor is talking about agents, everybody’s talking about agents.

It very much is beginning to diffuse — kind of like how 2023 was probably the main year of chat AI experiences on the back of ChatGPT and Copilot’s launch, that’s what 2025 will be, but for agents. Business applications, in particular, are going to be the ones most changed as a result, and I think you’re starting to see it. Every company I work with tells me, “I’m excited by business applications with AI, that’s great, but I really care about business agents. Tell me how I can get agents deployed in my back office and my front office. How can I grow revenue and cut costs using agents?” That is a new conversation, which to me means it’s the era of agents.

Soma: We’ve gone through a major platform shift almost every decade or so, and sometimes during a platform shift, every major player goes off in their own direction, trying to figure out what it means for them and what they can do with it. If you go back to the internet platform wave, you could argue that HTTP was something that came in pretty early on, and everybody adopted it and said, “We are going to be behind this.”

Similarly today, when I think about this agentic world, I look at a protocol like MCP, or a protocol like A2A, and see a tremendous amount of industry consolidation. In fact, the thing that surprised me is that Anthropic came out with MCP, and within a few months, pretty much anybody that mattered talked about how they were all in on supporting MCP and came out with their own offerings. That level of industry consolidation around something is both exciting and fantastic. How do you see that?

Charles: It’s probably been 30 years since we’ve had such an industry-wide convergence on an open standard, going back to the original open web: HTML, HTTP, and JavaScript. It’s incredible, because open standards mean more opportunity for startups; there’s really no strong incumbency advantage. It’s also great for customers: I can buy 10 solutions, 10 different AI agents, and have confidence that they’ll work together. Even at Microsoft, we support A2A; we announced that a couple of weeks ago. We’ve had MCP support for a couple of months, and we’ve even contributed changes back to MCP, worked on with a bunch of other companies, that have been accepted and merged to make authentication work well with MCP.

This is going to be great because a typical company has so many SaaS applications and databases today. In the future, they’re going to have a ton of these different agents and tools for agents. That’s what the future is going to look like. If you think about what it’s like to be in an IT department with 300 different SaaS apps, it’s so painful to integrate them. I don’t think it’ll be as painful in this world of MCP and A2A, and that’s a huge opportunity for lots of these startups, which can be so fast and agile using these AI tools and can interoperate with the big footprints that exist in a typical user’s day, whether consumer or commercial.
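For readers who have not looked at MCP, the interoperability Charles describes comes from a shared wire format: MCP is a JSON-RPC 2.0 exchange in which any client can ask any server what tools it offers and then invoke them by name. Below is a minimal sketch of the two core messages; the tool name is made up for illustration, and the initialization handshake is omitted (see the spec at modelcontextprotocol.io for the full protocol).

```python
import json

# Step 1: the client asks the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the client invokes a tool by name with structured arguments.
# "search_contracts" is a hypothetical tool, not part of the spec.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_contracts",
        "arguments": {"query": "renewals due in Q3"},
    },
}

print(json.dumps(call_tool, indent=2))
```

Because every vendor speaks these same messages, the 300-SaaS-app integration problem Charles mentions collapses into one protocol rather than 300 bespoke APIs.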

Soma: I want to go back to one of the earlier things you talked about, which is customer obsession. You mentioned that you had a customer advisory board with a couple of hundred customers coming through. When you talk to enterprise customers, where do you think they are in the journey of adopting AI, whether it’s in the form of next-generation business applications, or Copilots, or what have you? Do you think they’re in the early stages, mid-stages, or later stages, and what are you hearing from them?

Charles: It’s a big spread out there right now. Some companies are almost like tech companies in terms of how aggressive and ambitious they are with the AI transformation; usually that comes from a very top-down investment focus from the CEO and the board, plus having business, IT, and tech resources equally engaged. A lot of companies are very early, and they’re looking for that first big win. Maybe they have a few POCs, a few prototypes, a few experiments, but they don’t have that big win that moves the top line or the bottom line.

What’s interesting is that if you went back a couple of years, it was all about building things yourself. Everybody had dev teams calling APIs and using models. We’re coming out of that, because people realize how hard it is to assemble these things and get business outcomes. It’s the era of finished AI solutions, whether that’s an agent or a new type of AI application like Cursor. That is starting to be the main place companies are looking to get value quickly. If I take a step back and pattern-match what we’re seeing from the enterprises that are being most successful, there are three main things when it comes to the AI transformation.

First, they’re very focused on driving real resource constraints into the organization to push productivity improvement. If your budget grows every year, you don’t feel a lot of pressure to improve your unit performance inside the organization. That’s a hard thing to do, particularly if a company is growing. The second thing is having a big focus on democratizing access to AI. The companies that are struggling are the ones that don’t have AI in everybody’s hands every day.

If you want to become an AI-transformed company, the only way to do it is for all of your users, no matter where they are, technical or non-technical, to be picking up and using these tools each and every day. If you don’t have that, people will either have dreams of the magic AI can do that aren’t grounded in reality, or they’ll be unnecessary skeptics of future projects. Get AI in the hands of everybody. The third and last bit is: don’t spread yourself a mile wide and an inch deep. Companies that are successful don’t do 100 projects; they do 5 projects very well, with a lot of force and with continuous improvement in mind. That’s what I see showing up at the most successful enterprise organizations.

Soma: That’s great. Did you hear the memo the Shopify CEO put out a few weeks ago about how everybody should be thinking about AI?

Charles: Yes.

Soma: That dovetails with what you’re saying: hey, make sure that everybody has access to AI tools?

Charles: Exactly. I go out and tell my team, “If you’re an engineer, you won’t be promoted this year unless you use AI tools, because how can you really say you’re on the cutting edge of AI software development if you yourself are not using AI?”

Soma: That’s great. Charles, earlier on, you talked about customer obsession and complete ownership, some of the learnings you had from being a startup founder and coming back to Microsoft. Going hand in hand with that, how do you think about agility? One of the things I worry about, and I was part of Microsoft, so I can say I’ve been there, is that as a company gets larger, you sometimes wonder whether the agility is what it needs to be, whether the level of urgency is what it should be. How do you encourage your teams at Microsoft to operate with the same level of urgency and agility that a startup does?

Charles: There are three big things that we’ve done to help instill that. The first is that it’s mission-oriented, and this has been the most intense period since I’ve been back at Microsoft. Everybody understands what the mission is: all of our software, all of our technology, all of our products are going to be completely disrupted by AI. Do we want to be the people who watch that happen, or do we want to be the people who do it to ourselves? The energy is off the charts. I’ve not seen folks at Microsoft working as hard, pushing the limits and boundaries, and innovating in the 10 years I’ve been back as they have over the last couple of years. That’s item number one.

Number two is that when you’re in a big company, there’s always this incredible inertia, these incredible layers of bureaucracy and process and decision makers and consensus building that slow everything down. That’s where extreme ownership and the desire to grind through anything are really critical, because for anything you want to do, if you want to innovate, there’ll be 100 reasons why you cannot do it. You have to find the one reason why you can, and how you can. It takes that extreme-ownership grit to push through all these barriers and be successful.

And the third piece is really encouraging experimentation and being willing to reward failure if it produces learnings. We have these interesting forums at Microsoft where folks will come in and say, “Here is a product experiment we’ve done,” or “Here’s an AI model experiment we’ve done.” We have these every week, and they share the good and the bad: here’s what we tried, and it didn’t work for these reasons; here’s what we tried, and it did work for these reasons. It’s almost like the cloud post-mortem culture that we had to develop, with repair items and a blameless post-mortem.

It’s this continuous experimentation-and-innovation feedback loop around models and AI products, and doing both of those, because they are equally important, is how we’re starting to drive this culture. It’s not “build a plan for six months and run the plan no matter what.” It’s “build an experiment, run it in a day, learn; run it another day, learn,” because that’s what all the good AI companies are doing. Those are just a few of the things. If you look at the pace of innovation, Microsoft is definitely moving faster than we’ve ever moved before.

Soma: That’s a super helpful framework as teams and organizations think about how to operate with the level of urgency and agility that today’s age requires. It’s not a nice-to-have or a “someday I’ll do it.” If you want to survive and stay ahead of the curve, you need to do it today. Now, coming to the personal side a little bit, Charles, I’m sure AI is impacting your life in a positive way, whether at work or outside work. Are there one or two tools that you use on a daily basis? Can you talk a little bit about what those tools are and how they change what you’re doing?

Charles: I will exclude all the Microsoft tools that I use all the time, in the interest of being a little different, because I use a bunch of those. One of my favorite features that has been released lately is the deep research functionality. Between o3 and Deep Research, you can get some amazing insights. A big thing I like to do is keep a good view of the market to find blind spots: what startups are out there being successful, and how are the big competitors doing in their earnings, announcements, or conferences?

What I can do with Deep Research is ask a very specific question, and I run this basically every week. I’ll give an example: help me understand the financial performance of business application companies, who is accelerating versus decelerating, and what interesting facts and figures around usage they’ve announced. I can describe this nice big healthy prompt, send it off, come back 10 minutes later, and get a beautiful little view. This is how I stay on top of what’s happening in the market every week. In the past, I would do this by reading various places: Hacker News, X, and stuff like that. But this gives me a really in-depth report, almost as if I truly had a competitive researcher working for me full-time.

That has been game-changing, and my poor team is probably tired of me sending screenshots of these reports, because I use that for a lot of public information. The second thing is, I’m a big user of image-generation tools. I subscribe to Midjourney. That’s just so much fun, because I was never a great artist, but I can create lots of fun images and pictures, and I share them with family and friends. That’s kind of a relaxing thing for me to do. And I don’t have Photoshop, and I would never have opened it up and drawn freeform, but I can have that feeling of creation and creativity in a way I wouldn’t have before. It’s interesting. It’s a new kind of hobby, a new accessibility. Again, back to the generalist-specialist thing: I’m definitely not a specialist artist, but I can use AI.

Soma: It’s a good outlet for your creativity.

Charles: Exactly, exactly.

Soma: That’s fantastic.

Charles: I cannot wait for companies like Runway to mature their capabilities beyond just images into video. I can’t make a film or a movie today, but I bet in the next 10 years I’ll be able to make a 60-minute film, really. So that’ll be fun.

Soma: That is great. On that note, Charles, thank you so much for taking the time to be here with us today. I really enjoyed the conversation, and we took it in multiple directions, and it was fun to be able to hear your views, your perspectives, and your experiences. Thank you so much.

Charles: Thank you for having me.

Building AI That Sells: Scaling Smarter with Luminance CEO Eleanor Lightbody

 

Entrepreneurs often ask: When do I know it’s time to scale? Or, how do I lead when I wasn’t the original founder?

In this week’s Founded & Funded, Madrona Partner Chris Picardo sits down with Eleanor Lightbody, CEO of Luminance, who shares how she took a promising AI product and built a company culture, go-to-market motion, and product strategy that’s scaled 6x in just two years. Eleanor’s candor and tactical insights on hiring, selling, and navigating founder dynamics make this a must-listen.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Chris Picardo: To kick things off, I think it would be great to just talk a little bit about your career journey. How do you go from a cybersecurity account manager at Darktrace to the CEO of Luminance?

Eleanor Lightbody: I think to understand that, it’s probably worth going way back. I was reflecting on this the other day. I grew up in a household where my mother was an entrepreneur; she started a small business that has grown massively, and I saw that when I was growing up. And my father’s a lawyer. Looking back, I’m like, “Oh, that matches: kind of mirroring entrepreneurship and working in the legal space.” It feels like a given now.

I started at Darktrace when I was fresh out of a post-grad, and I chose Darktrace for a few reasons. The first was that it had some seasoned investors who had built and grown quite established companies before. I was going to be one of a handful of people in the London office. It felt like an opportunity to work for a company that had the potential to scale very fast, because what they were offering was applicable to every single company in the world.

I’m pretty glad that I did. I had a few offers on the table, but I’m very glad that I did because I joined when the company was super small, probably 50 of us globally, if that. I left just before the IPO, and recently the company was bought by Thoma Bravo for over $5 billion. I set up the African operations. I was the first one on the ground to open up that market. Then I went on to run a global division that looked at securing national critical infrastructure, and the investors of Darktrace were similar to those at Luminance.

I got a phone call over five years ago, and it was like, “Hey, our portfolio company’s doing something really interesting. Would you like to join them?” And I thought, “What do you mean, join them? I’m only just meeting the company.” I had to clarify that with them. I’d actually known the founders at Luminance from afar for a while. After speaking to them, even though I wasn’t thinking about leaving Darktrace, it was actually a very quick yes and an easy yes, because the opportunity that Luminance had was absolutely massive. I thought, “You’d be crazy not to say yes to this.”

Chris Picardo: It’s so interesting. It’s got to be a great phone call to get to be like, “Hey, do you want to go do this really fun thing at another exciting company?” One quick question I wanted to ask that you mentioned, which I’m curious about, is you said when you joined Darktrace, it felt like it was ready to grow really fast. What does a company feel like when it’s ready to grow really fast? I know that sometimes you get that feeling, sometimes you don’t. What resonated with you when you were like, “Okay, this company is ready to go.”

How to Know When an AI Startup Is Ready to Scale

Eleanor Lightbody: That’s exactly why I joined Luminance: I was like, “They’ve got these foundations, and they have this ability to scale.” I think it’s a combination of things. One, how big a problem are they addressing? Are they thinking about the problem in a different way? This is when I’m talking to founders. Is there deep expertise in the technical team, and do they have a real sense of what they’re trying to deliver? Those are the key elements that I was always looking at.

But fundamentally, ideas change, they permeate. I think any successful company experiments a lot. It’s not necessarily about the product today; it’s about the team today and the way that they think about things. Then alongside that, it’s: do they have that energy? You know when you walk into a room and you’re like, “These people want to build, they are competitive, and I feel like there is no Plan B but success.” Those are all things that both Darktrace and Luminance have shared.

Chris Picardo: It’s sort of like that drive: you are going to build this product because you want to win, you want to see it out in the world, and you want to fulfill your vision, and that is what’s motivating you to be part of the company.

Eleanor Lightbody: 100% and nothing’s going to stop me.

Chris Picardo: Obviously, you’ve had that feeling twice and both stops have been quite successful, so it must be a great feeling to have. When we talk about Luminance, you’ve done something obviously a little bit different than a lot of people’s journeys, which is stepping into the CEO job. Talk a little bit about what that experience was like. The founders are still there. There have been a couple of other CEOs. How did you think about that? How did you navigate the dynamic? What was that broader experience like?

Navigating Founder Dynamics as a Non-Founder CEO

Eleanor Lightbody: When I first joined, it was a combination of excitement and a tiny bit of naivety, which I think is actually really good. As we grow up in businesses and get more and more experience, one of the key things is to try to keep a bit of naivety, because it can help you take on these hugely daunting tasks that, in time, we get a bit wary of. I was excited, and I was daunted.

I look back on it and it’s quite funny. One piece of advice I was given by one of my mentors was: A, it’s going to be really hard. I didn’t know how hard it was going to be, but I was like, “Okay, cool.” The second was, “Look, you’re going to join a company, and within the first few weeks you’re probably going to want to change so much. Get a piece of paper, write down everything that you want to change, put it in a cupboard, and don’t look at it for four weeks. Don’t change anything for four weeks, and then revisit it. You’ll start to understand why some of the things are a certain way and might not need to change.”

I didn’t listen to that very well, and I think for good reason. Regardless of the circumstances under which they’re bringing you in, the most important thing to set the scene is to have very frank conversations upfront and say, “These are the things I can bring.” For me, when I joined Luminance, there was so much that could be done to mold the commercial teams and to change the way the company was thinking about how to go to market.

It felt like I could sit with the founders and say, “This is what you guys are doing, and this is what you should be doing. Let’s do it today.” And they were like, “Oh, wow. Okay.” Then, having a few metrics that show early signs of success can help you bring the founders on that journey with you. From the outset, my piece of advice to anyone going in is: know where your strengths are, and don’t try to boil the ocean from day one. There’s still stuff on that list I wrote that I haven’t changed. For the first few months, figure out where your finer strengths are. Be very upfront in your conversations, be very transparent, and then you’ll start to build a lot of bridges.

Chris Picardo: It seems like when you bring in a new CEO, obviously, one of the reasons you might do it is because you want a change agent. You have to be nuanced and thoughtful about how you want to go about getting buy-in and team-wide enthusiasm about the change that you’re bringing.

Eleanor Lightbody: Yeah, exactly. Being thoughtful and mindful are really, really key things.

Chris Picardo: One thing I hear a lot from CEOs, people who’ve been CEOs, and people who’ve worked with a lot of CEOs, is that it’s a job with no job description, and there’s no great training path to becoming a CEO, because it’s a very different type of role than anything else in the company. Is that your experience as you transitioned into the seat, that maybe your expectations around what a CEO does, or what people externally think a CEO does, are different from what they actually do?

Eleanor Lightbody: I think everyone thinks that a CEO does something different. If you were to take a general survey of the market, I’m not sure you would get consensus at all. I don’t think I knew what I was getting myself into, and I mean that in a very positive way. At the rate that we’re growing, a CEO’s role changes and morphs. I used to say every month felt slightly different.

With the rate of innovation and the speed of scale, each week is different. The keys for successful CEOs are a few things. One is being able to understand the vision, the 10,000-foot view, but also to be able to parachute in, understand the problems, and think about things from a first-principles basis to help fix them. The second thing, and the most important (this is my opinion, and it’s very biased), is focus.

As a CEO, you can be pulled in so many different directions. All the teams can benefit from having you in their conversations. So: what are the most important things that day, that week, that month, that year, the next few years? Try to be very regimented about that. I also think part of my role is being a bit of the hype person for the business, you know? Going in every day, leading by example. Other CEOs might not necessarily agree with that. It really depends on the company and what’s required at a given time, and adapting to it.

Chris Picardo: Being able to figure out what the company needs at any given time is one of the key things you have to be able to do. It’s probably a good time to talk about that in practice at Luminance. You stepped in and were able to massively change the growth trajectory of the company. I’m very curious how you did that, but it’s also a good time to spend 30 seconds on Luminance: what it is, and why you were so excited about being able to put the company on such an incredible growth trajectory.

Scaling Go-to-Market: From Law Firms to Global Enterprises

Eleanor Lightbody: If you think about it, as we sit here right now, there are teams in every corner of the world receiving legal contracts. They’ll be reviewing them, processing them, and deciding what to do with them. As it stands, that’s very time-consuming, expensive, and prone to human error. What we do at Luminance is automate that process end-to-end.

Our customers are pretty much any company of any size, whether that’s someone like AMD or DHL or Hitachi, across all different industries, using us for every single interaction they have with their legal contracts. When I joined the company four years ago, Luminance was only selling to law firms, and they had done a really good job: they’d sold to a quarter of the top hundred law firms.

When I came in, I was like, “This platform’s so powerful.” The addressable market is so much bigger than just law firms. One of the things we did very quickly was take the underlying AI models and technology and build a whole new workflow and platform directed at in-house legal teams. That was one of the bets that really paid off, because sales cycles were much faster and the time to value was much faster. That product has grown its ARR 6x over the last two years.

And, again, going back to what I did very quickly when I joined Luminance: I’ve got a sales background. I came in and asked the teams, “How many first meetings have you got? How many POVs? How many free trials are you running? How many cold calls are you making?” (We do free trials.) All this stuff. There was a bit of a disconnect between what was expected from the sales teams and what sales teams should be delivering.

The sales teams had been ex-lawyers. I was like, “Wait a second. We need a lot of lawyers in the company to help build the product, understand the use cases, and be subject matter experts, but the one area where we do not need lawyers is selling, because selling is slightly different. Give me a graduate who’s 21 and I will train them how to sell, as long as they understand what we’re doing.”

We changed the whole hiring process, the whole structure. I very quickly promoted two very young (at the time) account executives into leadership and mentor positions. I remember some people saying, “They’re very young. They don’t necessarily have enough experience.” And I was like, “Give me someone who’s young and hungry, and I will help train them into what they need to be.” They’ve become two of the most successful people in the business. So that was one of the moves. Adapting the product (not really changing it, adapting it) and totally changing our go-to-market were the two things that were really important to do first.

Chris Picardo: You just gave a very condensed master class in positioning your product and then figuring out how to sell it. One of the things we were joking about in our prep call, I guess a couple of weeks ago now, was that people love to say, “If you’ve got a great product in AI, it’s just going to sell itself.” I think we were both laughing because that is definitely not true. Certainly, some companies have been able to figure that out, but for the most part, you have to pair a great go-to-market strategy with a great product strategy, right? Those two work together.

Eleanor Lightbody: I totally agree. By the way, any company that has been able to do that, hats off to you. Kudos to you. High-fives to you. I want to talk to you. Great, amazing. But that, as a founder, as a CEO, is not necessarily what you should be solving for. If it comes organically, brilliant, amazing. But it’s not necessarily something that will last until the end of time.

So it comes back to understanding your product and understanding distribution. Of course, you’ve got to have innovation at the heart of it, and a really solid understanding of the problems you’re trying to address and how technology can help. But you also have to know how to take it to market and how to take customer feedback. And the key thing, taking the right customer feedback and prioritizing it, is, I think, so important when you’re scaling a business.

Chris Picardo: That’s an interesting point: you want a lot of customer feedback, but you also need to prioritize it around what moves the needle, versus what is useful and interesting information but maybe not as high on your priority list.

Eleanor Lightbody: Exactly. Everyone talks about customer feedback. Customer feedback is crucial, don’t get me wrong, especially depending on where you are in your product development cycle. If you’re early on as a company, what we found to be super successful was choosing design partners carefully. We were very lucky that two very big companies, early on, were like, “We’ll become design partners with you.”

We said yes to them, but no to some others, because we felt the use cases those two were trying to address with our technology, and what they were trying to build with it, could relate to companies around the world of all different sizes. So we were like, “We’re going to go with you guys.” I think finding some early design partners can be super useful.

Then obviously, as you get more and more customers, the key is two things. The first is prioritizing customer feedback. I have friends in other areas and other businesses who, as time goes on, remove themselves from the customer. For us, it’s so important to stay even more ingrained with the customer, because you get so much valuable information there. But, again, you need a system where the feedback loop is really fast, and you can see if the same feedback is coming across from all of your customers. Those are the things you need to really focus on, versus a customer saying, “Can I put my logo on it?” or “Can it be yellow rather than blue?”

Another mentor of mine told me early on about the power of no when you’re talking to customers, and I didn’t really get it, because in sales you’re like, “Yeah, yeah, we can do this and this.” But when you’re running a business, there’s power in saying, “No, we’re not going to do this, and here are the reasons why.” More often than not, customers say, “Okay, cool, that’s fine. It was just an idea.” Whereas if you overpromise, you find yourself on a hamster wheel of always trying to catch up, and that’s not a place you want to be.

Chris Picardo: Then you’re customizing for every customer, and it’s a problem, right?

Eleanor Lightbody: Exactly, and the overhead every time you're upgrading is not somewhere you necessarily want to be. The second thing is, we have a team that focuses on customer feedback and product, but we also have our blue-sky thinking team, because customers can identify lots of things that they want you to help them with, but they might not necessarily see around the corner; they might not see the potholes. So we've got a team very much focused on what we can't do today that we might be able to do in six months' time, in a year's time.

The capabilities of these models are getting better and better and better. That’s no secret to anyone. They’re getting cheaper and cheaper and cheaper. How are we building for the future? Metaphorically, they’re carved out into a separate room because I don’t want them to get too distracted by the noise of the daily operations. I want them always to be thinking about innovative, cool, different end-use cases.

Chris Picardo: I was curious to talk a little bit more about this concept that you have. I think you call your innovation team the jet pilot team, and then you've got the core team. A lot of founders and CEOs try to think about how to balance innovation versus core product delivery and what the ways to do it are. You've been successful at doing that. How do you think about managing that on a day-to-day basis and having those two different groups working in concert with each other?

Inside Luminance: Running Dual-Track Product & Innovation Teams

Eleanor Lightbody: It’s always a bit of a balancing act. For us, one of the pillars that a whole company is based off of is innovation and speed. I think that to be competitive in this market, you have to build very, very fast. You can only do that if you’ve got a team that really understands the impact of what it means to build slowly, actually. There are areas that you need to build slowly, don’t get me wrong, but there are new value-adds, features, use cases, and modules that you want to get out there. It’s seen as a classic like 80/20 rule. We’ll get it out there and see what the feedback is, choose a few customers to go live with, and then you can iterate on it and fly.

The way that we've got our teams to buy into this is that they're the ones talking to the customers. The developers will build something, sit in on those conversations, and see the impact. Historically they were siloed away, and they'd say, "But why do we need to build fast? Why does this matter? Let's make sure that we're getting it totally right before we push it out." My argument to them, and they only started believing this when they saw it, was: you don't know what's going to really work. You actually have no idea.

We might think in isolation that something is going to land really nicely, but often the things that we didn't expect to land are the ones that people really, really like. So just get it out there. A terrible analogy, but it's like throwing mud at the wall and seeing what sticks. Throw as much mud as you can, see what sticks, and then you can iterate really quickly on that.

Chris Picardo: Are you able to earn your ability to do that because you’re so close to your customers and able to gain that trust? I think you said something interesting, which is the 80/20 is okay to launch some of these things and be iterative and to get new product in front of customers quickly. Did you earn your way into doing that? Is that something you did? Then customers said, “Hey, this is great, right? Keep it coming.” What’s the nuance there? There are a lot of people who are like, “Hey, if I do launch something and it’s terrible, then have I blown up all of my trust?” Or do people now think it’s not a great product? I think clearly you’re saying, “Yeah, you might have a couple of those things that just don’t land.” But if you launch and you put stuff in front of customers, that’s the cycle to get the best product out there.

Eleanor Lightbody: It’s choosing the right customers. You don’t do this across all of your customer base. You say, right here, X customer. AMD is a great example. They came to us and they were like, “We love your AI for legal negotiation, but at the moment, I still have to review clause by clause.” The AI tells me whether I should agree to it or not. It gives me language that I can use instead. Still the human has to go through and say, “Yes, I agree.” Or, “No, I don’t agree.” They were like, “It’d be amazing if I could just click a button and it rewrote the whole contract for me.” We we’re like, “Okay, let’s do that.”

The first few times that we showed it to them, it didn't get it right. We could have easily lost their trust, but it's all about the framing and the positioning. If you go to these development partners saying this is a fully baked solution, then if it doesn't work or doesn't quite hit the mark, of course you're going to lose trust very quickly. Instead it's, "No, this is something that we're working on with you, and we're going to iterate. This is the first version of it. Let's continue this dialogue." Then if you pick the right partners, they're going to love being able to roll up their sleeves and help you with that.

Chris Picardo: You have a set of customers that are basically perpetual design partners that are happy to work with you on iteration knowing that some of it is going to be experimental and you guys will co-work on it together to land the final version of the product.

Eleanor Lightbody: The good thing is that with most of this stuff, we eat our own dog food; that's the best way to describe it. We test it out ourselves first as a legal team, and if we think, "Yeah, this is going to have legs," then we go to a few select customers. Sometimes it doesn't even get past our own legal team, and we're like, "This is a cool idea, but actually this is never going to work in reality." That's how we come to that.

Chris Picardo: It’s funny because, obviously, you’re so focused on go-to-market, but you have tons of product insight on how to map the two pieces of go-to-market and the product roadmap together. That’s a hard thing sometimes for purely technical or purely product-focused founders and executive teams to be able to see how those two pieces work very closely together.

Shifting from Sales to Product Leadership as a CEO

Eleanor Lightbody: It goes back to the earlier point, which is that I think I was brought in very much for distribution and go-to-market. Now I spend much less time thinking about that. We've got a really good repeatable machine, and I dip in and out of it. Now my role is very much about product and thinking, "Okay, what are the next use cases?" I absolutely love that.

I was talking to one of the founders the other day, and I was like, "I'm not an AI expert, and yet I keep coming up with these ideas." He was like, "Eleanor, they're not your ideas, really. But at a high level, sure, you've got great ideas." Building that respect and that trust has been so great, and it's why we've seen so much success.

Chris Picardo: I wanted to circle back to that question, which is how you think about working with technical founders. I'd imagine you still might not consider yourself to be ultra deeply technical, even though you clearly are pretty technical now on a lot of this. I think Snowflake is the example people often use of very technical founders with a very commercially minded CEO, but it seems like it's a great working relationship that you have. Are there ways that you thought about that, or culturally think about that, to make it so effective?

Eleanor Lightbody: I don’t think either Adam or Graham, who are our founders will mind me saying this. The beginning part that was definitely, we had to work through a few things. I didn’t know what I didn’t know, and they didn’t know what they didn’t know. The key thing is, again, I’ve done this. It’s understanding where your lane is when you start off. You’re in a room, all of you, because you are super strong at something you would hope.

We always talk about leaving egos at the door when we get together as a management team. We're pretty blunt with each other. We give constant feedback, and no one's off the hook. I get bombarded, and the founders get bombarded. When I first started, I was a bit like, "Oh, wow." Then you realize, "No, this is exactly what success looks like": we're constantly holding ourselves to higher standards, and we have really productive, sometimes heated (most of the time not), conversations. That's so key.

It’s starting where your strengths are, showing a bit of success, showing that you appreciate each other’s art. The other day I was reminiscing. When I came in, the founders didn’t really know what ARR was. They weren’t plugged into the monthly sales numbers and they didn’t quite understand that whole world. Now, the first people to text me on a monthly management call, end of the month, would be the founders. “How are our numbers this month?” They’re like, “How are the sales? What are the growth? Why don’t we win?”

I’ll be the ones asking them about new AI models. And I’ve got a bit of a competition where I’ll be like whenever this new AI model that comes out, I’ll text them, I’ll be like, “Am I the first one to know about this?” And they’ll “No, you’re still not the first one. There’s this appreciation now of each other’s worlds, and I love that.

Chris Picardo: Many of the things you've been saying come back to aligning the culture: a company that's going in the same direction, doing things the same way, with guardrails up around what the culture is like. It almost goes back to the initial thing we were talking about, feeling that energy and alignment at a company that's just, "Hey, we are scaling, we are going, this is awesome." It does seem like the explanation underneath a lot of these topics is the cultural alignment that you've been able to drive, alongside the founders and the team, to set up this version of Luminance where it is.

It’s really fun for me to listen to stories like that, especially because I think it’s unique to do it as the non-founder. To come in and say, “Hey, maybe at this point you’re effectively a quasi-founder or a true right founder of this version of Luminance.” It’s very unique to do it that way and at that time versus, “Hey, I’m going to try to start this from the beginning because I started the company from scratch.”

Scaling People and Culture Alongside Product

Eleanor Lightbody: It comes with a lot of challenges, but it also comes with so much potential. The fact that you can join, you can change things, and you can hopefully clearly see what needs to be improved, and also what has already happened, that's been amazing, and so is reminding people of the amazing things that they've done. Sometimes they get lost in that. Then you slightly change things up to make sure you're on the right path to the next stage of growth. I love it.

Chris Picardo: You can tell. It's fun talking about it, obviously. We've talked a little bit about it, but I want to ask you: is there anything that's been particularly hard? I'm sure there are lots of things. Is there anything that surprised you, like, "Wow, that is hard. We've got to go figure out how to solve it," that maybe wasn't on your list of things you thought were going to be hard as you scale the company?

Eleanor Lightbody: I mean, it's all hard. I don't know what to say if anyone tells you otherwise, and I'm very honest about that. But it's so rewarding and fun, and, as I said, you're so immersed in it. One piece of advice I would give to anyone thinking about stepping into this role: there's no such thing as work-life balance. Leave that at the door.

If you think you're going to join a company that you want to scale, and you want to be successful, it's not just you that's going to be immersed in it. It's your spouse or partner, your friends, your family. You are all in it, whether they like it or not; you are all in it together. I think that is important. What have I found hard? Maybe this is a bit controversial for a podcast.

I’ll be very open, which is, you really hope that the team that you have with you will scale with you and that they will learn and they will grow. Sometimes, some people don’t. As you have to, as you go through that journey, manage that. That is probably one of the hardest things that I have found in the past. So that was quite a wide awake. I don’t think you know what it’s going to feel like until you have to go through that.

What is always interesting is making sure that you're focused, because you can spend a week feeling like you're super productive and so busy, but have you really moved the needle, or have you been focused on the things that are the most important? The third thing that I found, I don't want to say hard, but I just hadn't expected it. We hear so much and read so much about how hard it is, how exciting it is, and how important the teams are to innovation. What you don't hear, or what I had heard less about, is how important having the story is, how important having a narrative is. Not just for you as a company, so that everyone's aligned to it, but also for the outside world. In the U.S., marketing is amazing, and I don't think I had appreciated the power of it until I joined Luminance.

Chris Picardo: Sometimes the English-major part of my brain comes out. In another part of my life, I work with a bunch of scientists here at Madrona, and one of the things we talk about a lot is telling your story in simple language that people can understand. It doesn't matter what you do, or even if you're talking to people who are extraordinarily deep experts in your field. The ability to do that, and to modulate how you tell your story, really sets apart a lot of very, very successful companies, founders, and executives from people who sometimes struggle with it.

I do want to thank you for making the point around the difficulty of knowing that some people on your team won't scale, because you only hear people talk about that from time to time. It is one of the conversations, certainly from our investor-CEO relationship perspective, and I'm sure with your board members, that happens more than people generally acknowledge, and it's an important and hard thing for everybody involved. It's nice that you mention for the broader audience that it's a normal part of scaling a company.

Eleanor Lightbody: Exactly. More often than not, people do massively surprise you for the positive; people step up and people grow. The counter to that is that I really believe in giving people opportunities. I took on Luminance when I was 29 years old, and, I mean, that's maybe really old for Silicon Valley, but globally quite young. It's so powerful to give opportunities to talent that might not necessarily be quite ready for it or have the traditional experience for it.

We’ve seen some of our best employees rise up the ranks and be given tasks and responsibilities that might be slightly, at the time, felt like maybe too much. They have not only risen to the challenge, but they have absolutely accelerated in it. I have to sometimes remind myself that it’s like some of the best people that I’ve worked with are given opportunities young and to keep on doing that rather than to be like, “Oh, no, we need to make sure that they’ve got 20 years of experience.”

Chris Picardo: It’s also so important that you say that. I think about that from time to time sitting here in Seattle, we can all name some famous Amazon, Microsoft, et cetera, executives. Almost all of them were given opportunities at those companies really young and were able to scale into those jobs. They didn’t just emerge out of nowhere. I think you giving people those opportunities to be able to do that is, I’m sure, massively beneficial to the company, but also just amazing for the people. It’s such a great cultural piece.

Eleanor Lightbody: It does so much for you as a leader, too. You get to watch people mature, grow, and believe more in themselves. That is, I think, one of the most fun parts of my role: seeing someone who was a 22-year-old, now 26, and just so capable, even more than I thought they were going to be.

Chris Picardo: Those people are in your network forever. I mean, we've met so many CEOs where you're like, "Wow, there are hundreds of people who have worked for you, been in your organization, been given opportunities, and have then gone on to do incredible things." A lot of them can point that path all the way back to those opportunities they got, or to being at Luminance at just the right time to get an opportunity. That's got to be an incredible feeling.

Eleanor Lightbody: Yeah.

Chris Picardo: This has been such a fascinating discussion, and there are so many other topics we could have talked about; we probably could have spent an hour on design partners alone, which I find to be endlessly fascinating. I want to end with a couple of quick questions. The first one is: where are we in the hype cycle? What's your take on AI? Is it a good thing? Is it a bad thing? What's real? What's noise?

Where AI Is Headed — and Why 10x Value Matters

Eleanor Lightbody: It depends on the day you're asking me; my mind might change. I'm kidding. Fundamentally, there is so much that AI can do for good, and there is so much impact, and I think we have only scratched the surface. But where we are is, historically, up until today, people have seen 2x, 3x productivity gains. To change behaviors, you need to see a 10x productivity change. We're about to start seeing that, and that's going to be absolutely massive.

So, where are we in the hype cycle? Everyone's got a different opinion. Do I believe that we are only starting to see the potential, and that we're going to see more and more of it? Absolutely. But the conversation has moved on. It used to be, "Okay, cool, this is really useful. This could have a huge amount of impact." Now people are asking, "Okay, what's the return on investment? How am I driving adoption? Why am I actually using this?" Those are the really interesting places to be.

Chris Picardo: What’s a belief you had a year ago or last week, I don’t know, about AI that you think turned out to be totally wrong?

Eleanor Lightbody: It goes back almost to the point I just made. I always thought that if you could save 50% of someone's time, or save X amount of cost, or give your customer a 2x productivity gain in some way, shape, or form, that would be enough for adoption to become totally ingrained.

But to change human habits, it always has to be much more than 2x. A few years ago, I realized, "Oh wow, this is not just about being more efficient or more productive. What is the real intrinsic value?" If you don't have that value, then you might be a one-hit wonder, essentially. People use you but then get tired of you and move on to the next thing.

Chris Picardo: If you look out into the future, what has you the most excited or what do you think we should all broadly be really excited about?

Eleanor Lightbody: I think today the human is still in the driving seat with AI. The human's still putting the inputs in and, more often than not, checking the outputs. That's going to invert: the AI is going to be massively in the driving seat, and the human's going to be there to slightly tweak levers and to put some guardrails up. I'm so excited for that.

From the humanist point of view, with AI negotiating against AI, I think we're going to live in a world where the first, second, and third passes of most legal contracts are done by AI on either side. And beyond legal, in drug discovery, the impact that we can have on personalized medicine, the impact that we can have on curing some of the diseases that we've been trying to cure for years. Again, we've only scratched the surface there, but it's going to be so, so positively beneficial for society as a whole.

Chris Picardo: Yeah, that’s the other part of my world. I, 100% agree with you. Eleanor. This has been so fun and we could have talked so much more about so many of these topics, but I really appreciate you joining us on the podcast.

Eleanor Lightbody: Thank you so much for having me.

Tune in to the next episode on May 28 to hear from Microsoft CVP Charles Lamanna.

Customer Obsession & Agentic AI Power Ravenna’s Reinvention of Internal Ops

 

Most startups bolt AI onto old products. Ravenna reimagined the entire workflow.

When we first met Ravenna Founders Kevin Coleman and Taylor Halliday, it was clear they weren’t just chasing the hype cycle. They were pairing AI-native architecture with deep founder-market fit, and rebuilding how internal ops work — from first principles.

Their new company is going after a market dominated by legacy players. But instead of being intimidated by incumbents, they got focused, making some smart moves that more early-stage teams should consider:

  1. Speak with 30+ customers before writing a line of code
  2. Define a clear ICP and pain points
  3. Build natively for Slack — where support actually happens
  4. Prioritize automation, iteration, and real workflow transformation
  5. Stay radically transparent with investors and early customers

In this episode of Founded & Funded, these two sit down with Madrona Managing Director Tim Porter and talk through their journey, what they did differently this second time around as co-founders, and how they’re building a durable, agentic platform for internal support.

If you’re a founder building in AI, SaaS, or ops — this conversation is full of lessons worth hearing.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Tim: So I mentioned in the intro that you've done a company together before, and this is the second one. We're super excited to have been able to invest in the company, an announcement that just came out recently. But let's go back. Tell us about the moment you decided to start Ravenna. What problems did you see where you said, "Hey, we've got to go solve this for customers"?

Kevin: I was at Amazon for four years, and I think the whole time I was there, I was looking around trying to figure out what was going to be the next company that we go and do. It took a while to find it, but about halfway through my tenure there, I realized one day that I was spending a lot of time in an internal piece of software Amazon has that serves as the help desk across a lot of different teams. It was the tool where I would go to request onboarding for new employees, to request new dashboards to get built from our BI team. My teams would use it for other teams to request work from us. I realized I was spending so much time in this tool, it wasn’t a great product experience. The way I always described it to folks is it was like the grease in the enterprise gears, if you will. It was the way that things got done internally.

And so I got obsessed with what is this product category? It’s so foundational to how Amazon as a business operates and I started doing a bunch of research in this space. I found out it’s called enterprise service management, which is the category. ServiceNow is the leader. I finally understood what ServiceNow as a business did and why they’re such a valuable business and how large this market is. I started thinking, what does a next-generation, amazing version of this product look like if a very innovative startup built it that cared about design and user experience and cared about automation as well? So really, what does the next generation ESM platform look like?

Tim: I love that, because ESM is a category, it's a big market, and ServiceNow is the leader. But I also think, like a lot of things, Amazon did it in an innovative, scrappier way. You actually used it for more things. This was the way you requested and got things done across different groups, as opposed to, "Well, we've got to log it into this system of record so somebody has a record of it." No, this was actually the way work got done.

Kevin: Yeah, absolutely. And so I came up with this concept and when a concept gets lodged in your mind, you can’t get rid of it. I went and ran to Taylor, who obviously was my co-founder previously and the guy I wanted to start the next company with, and I said, “Hey, I’ve got this awesome idea. We’re going to build a next-generation enterprise service management platform.”

Taylor: At first blush, it was tickets and queues. I was looking at this like, "Is this Jira? I don't quite understand what's going on here." But it came at a good time. Rewind the clock: ChatGPT hit the scene, and Zapier, just like probably every other company on the planet, had a little mini freak-out: "What do we do with this? What does this mean for us?" Product strategy, what have you. At the time, I was lucky enough to pair up with some of the founders to basically do a listening tour. We went around to mid-market-sized companies and talked to C-suite directors, executives, what have you. Obviously, Zapier is known for its automation game; we wanted to figure out what would be a great solution in the world of AI/LLMs to bring a new level of value to them.

We asked an open question. Where would you like to have us poke and prod? We did 20 to 30 of these calls. It became pretty clear, resoundingly — we kept hearing about internal operations, over and over and over.

I had a problem picking through that in my own head, and I even had a blunt conversation with one of the CEOs: "I've heard this so many times. What's the deal? What's going on here? Why do I keep on hearing about internal operations?" I think there were a couple of answers. One, executives can wrap their heads a lot better around internal efficiencies, or the lack thereof, at their company. And there was this visibility gap: "I don't have good visibility into what folks are actually doing. I can't drive efficiency top-down at my company. As much as I say, 'Hey, look, I want you to be more efficient, do these different things,' I don't know very well what the marketer or the engineer is doing." They all saw AI as an opportunity to drive some of this efficiency bottoms-up, without it being a top-down thing.

I think that was a lot of the interest. And so when Kevin ran to me, it was like, "I have this idea circling around, this internal management tool, and there's an opportunity perhaps in this larger market that's old, where 30-year-plus incumbents are all over the place." That's what got me interested and sparked a lot of the collaboration in the early days.

Tim: We think a lot about founder-market fit when we're looking at new investments, and I remember the first meeting Matt McIlwain and I had together with you guys; we both left like, "Oh, my gosh. We have to find a way to fund this." It was this unbelievable founder-market fit: you had lived the tool, using it at Amazon, and you literally witnessed it across all of these customers at Zapier who were using it to get these automations in place, but it wasn't a full product. So, awesome to see you both come together with those insights. I'm going to abstract away a bit. We'll come back to Ravenna and the specifics of enterprise service management, but you guys have done a company before.

Kevin: We have.

Tim: You’ve been through YC, and you decided to do it again. That’s a testament to your working style. But were there things like, “Okay, we’ve got the idea again.” And other things like, “Hey, we’re going to do something different. We’re going to do it the same”? How is it different for you guys going at it a second time around?

Taylor: It’s funny, Kevin and I, you mentioned a past company, I always joke, it depends on how you count a company. Kevin and I have been working together for quite a long time. Whether that be in the early days just finding a coffee shop or bar at night working on the smallest app. Then to San Francisco, we ganged up together and tried to start a CDP of sorts, went through YC with that, and molded that into several different things. But regardless, we were kind of taking stock of that history, if you will.

To your point: what went wrong, and what went right? An interesting way of characterizing it, and I feel like a lot of entrepreneurs do this in the early innings, is chasing the new, new: "What's the thing that doesn't exist out there yet?" If you take a retrospective look at the stuff that we put a lot of time and effort into, it circles around that, which, frankly speaking, is some of the fun and excitement when you're with some buddies saying, "Hey, this doesn't exist yet. What if we made this?"

This time we were thinking, "Hey, look, what if we flipped it on its head? Instead of that approach, let's find a market that's super well-defined and focus in on opportunities to bring a better experience." Especially in the age of AI, it seemed like the perfect time to target this one in particular.

Kevin: Taylor hit the nail on the head there. For as long as we've been building software together, we have always tried to identify something that doesn't exist and to shy away from competition, and this time we're taking it head on. We're super excited about that. Big markets are exciting. We don't have to go find a small group of people who need what we're building; we know there's a ton of people out there who need it. That's really exciting to us.

Taylor: As part of that analysis of, “What do we want to work on and where do we want to press?” I remember talking to you, Kevin, thinking about taking a step back, “What kind of risks do we want to bring on?” We kind of framed it like that. Going back to my earlier point, I would characterize a lot of the early endeavors as pretty high in market risk. We’re trying to figure out, “Hey, let’s try to not optimize for that this time. Let’s try to optimize for something else.”

To compliment ourselves a bit, I think we're pretty good at building a lot of product, and doing it pretty fast, too, and at getting together a lot of good folks to work with. So, call it human capital risk: I didn't see that on the table. Picking a larger market took market risk off the table. We tried to optimize more for what we thought of as go-to-market risk.

Kevin: The other thing I'd say we're doing better on, I don't know if we're doing great at it, but we're doing better this time around, is understanding who our customer is and being super clear about what we're building and for whom. So ICP, ideal customer profile, if you will. Taylor mentioned the last company; the first product that we built was a customer data platform. At his startup and at my startup, we had effectively the same problems with our customer data. It was sprawling all over the place. Folks who were non-technical were always asking us to integrate various tools so they could get customer data where it needed to go. We would go around to potential customers and say, "Hey, you probably have problems with your customer data. Can we help you?" And they're like, "Yes, of course we do."

The problem was that the problems were all over the place. There wasn't a product we could identify that would cut across a bunch of companies. Part of that was that we were early entrepreneurs and didn't know what we were doing. This time around, we spent months talking to customers, understanding the space, and understanding what pain they had before we started writing a line of code. We wanted to be super clear about our ICP, what they needed, and what their problem was, and then we backed into the product from that. So, a hundred percent, this time around we're doing a lot better on that front than we were last time, and we think it's definitely the right way to go.

Tim: Well, this is definitely a super big market, and another thing that came through from the beginning, as we have been engaging and working together, is customer-driven, customer-driven. It's the maniacal customer focus that is maybe the core attribute of successful startups. So that's been awesome.

Let’s talk a little bit more about what the product does and bring it more to life. I’ll lead you into that by talking about some of the investment themes that Madrona has that we thought Ravenna embodies. A big part of that is AI and part of the why now for Ravenna. Probably our biggest theme is around how AI can reimagine and disrupt different enterprise apps. You’re using what I would call or many in the industry would call an agentic approach where you can actually create various agents that don’t just surface insights but can automate and finish tasks. This world and this product area is really ripe for that, and you’ve done some interesting things there.

And then new UIs. On user experience, you've embraced Slack as a place where work is getting done and made the product extremely Slack native, fully integrated into people's existing workflows, with an ethos of clean, simple design. Taylor, you and I talked about this the very first time we discussed the product, but maybe give a better description. Okay, great: service management, tickets, people have something in mind. But say more about the key features, and then maybe tie that back to when you were out talking to these initial prospects. What did you hear about what was missing, and what could you deliver in your product to make this experience such a big leap forward?

Taylor: Going back to why we picked this: there's a well-known UX product pattern you see in this market, and we weren't very impressed by what we saw. In the age of AI/LLMs, the popular thing, I would argue, would be to come at this with an intelligence layer. We definitely considered that, but we made a conscious decision to go after what we think is where the longer-term value is, though perhaps the tougher path: we're not building just the intelligence layer for this market. We have a lot of confidence, conviction if you will, that there's room for a new, rethought platform. For those familiar with the space, a help desk is probably the most down-to-earth way to describe what that means in practice.

Tickets and queues: it's a very similar pattern to what you'd expect from customer service software, except that this software is geared toward solving your colleagues' problems. The canonical example is the IT help desk. You ask for a password reset, new equipment, what have you, and that creates a case, a ticket. That's the typical way of going about it. So we're not talking purely about the intelligence layer and the agents, which we are super excited about and where I think we have a lot of fun stuff, but also very much about building and rethinking what the larger brick-and-mortar ticketing platform looks like.

Kevin: Yeah, 100%. So enterprise service management is the category. That’s a very broad term. Most people don’t know what enterprise service management is. The easiest way to think about it is it’s an internal employee support platform, internal customer support platform if you will. So, you have functions across an organization, whether it’s rev ops, HR ops, sometimes called people ops, facilities, legal, etc. They all offer internal services. What I mean by that is they offer services that other colleagues can leverage.

So in legal, a service might be, “Hey, can you review this contract?” In facilities, it might be, “Hey, my desk is broken, can somebody come and fix it?” And so this pattern exists across companies, and what people need is a tool that allows them to intake these requests, assign those requests, resolve the requests, and then get reporting and analytics. Increasingly, with AI and automation, classic workflow automation, they want to automate a lot of this work as well.

What we’re building is a platform that allows any team within a company to offer a best-in-breed service, best-in-breed help desk and provide amazing service to their colleagues and then also automate a lot of their work with our AI. That’s a pretty straightforward way of describing it.

Tim: You recently were part of a launch that Slack did for partner companies. Pretty cool. You're Slack native and yet a new company; an interesting series of events led to that. What's the background, and what has it been like trying to partner closely with Slack?

Kevin: I’ll say upfront that when you start a company, weird, cool, fun stuff just happens. It’s kind of like Murphy’s Law, right? Anything that can happen will happen. It feels like that is embodied in a startup to a certain extent. So yeah, we were a launch partner for the launch of the AI and assistance app category in the Slack marketplace. You can find Ravenna in the Slack marketplace, which is super cool.

The way it happened is very fun. Matt McIlwain, who is obviously your partner here at Madrona, said during our recent fundraise, "Hey, there's a local entrepreneur you should go and talk to." He made the introduction, and this local entrepreneur went on a walk with Taylor, heard what we were talking about and building, and said, "Hey, a certain high-level executive at a large CRM company in the Bay Area," which happens to be Slack's parent company, "should learn about this." We were like, "Of course, any executive at these companies should learn about us."

They ended up forwarding along our deck. That got forwarded over to the executive team at Slack, and they got in touch with us and said, “Hey, what you guys are doing is super interesting, we should talk.” We had a conversation, and we got a Slack channel open with a couple of those folks, as you do when you’re working with folks at Slack. Then we noticed that this new app category is coming out. So because we had that introduction there, we reached out and said, “Hey, we think Ravenna fits really nicely into this new app category. What’s going on here? How can we get involved?”

It was, fortunately, really good timing. We got connected with the partnership folks over there, and they said, “We’re launching this category in two months. If you guys can get your stuff ready, we’re happy to feature you as a launch partner.” Funny how these things work out.

Tim: You all have been great about using advisers but also using your own networks to get feedback. You never know where it’s going to go.

Kevin: You never know, you never know.

Tim: This is another example of putting yourself out there, and getting the feedback. Sometimes it takes you right through to the CEO’s desk.

Taylor: As Kevin mentioned, it's one of the funner parts, to be frank with you. If you have the humility to understand that there's so much out there to learn, especially going into a category where you're trying to make some hay and do a different thing, it's valuable to get a lot of perspectives. The more of that you do, the more tangible, tactical learnings you pick up along the way, but there are also the funner, random doors that get opened, such as that one.

Tim: One thing I think is cool too, and part of it is using Slack, and part of it is how you can pull data in from other places, is that questions get asked that people don't realize have been answered already. How do you create an instant knowledge base from what's already scattered all over Slack, or from an existing knowledge base that's there but that people don't go look at? It's easier to fire off a Slack message like, "Hey, Taylor, can you tell me the answer to X?" By capturing that, you can create an automation where the question gets answered, the task gets finished, and you didn't have to do anything, right? That's a big unlock here.

Kevin: You’ve mentioned Slack a couple times, and we should revisit that really quickly. Slack is the interface for the end customer of the platform. That’s super critical and was a learning during our listening tour at the beginning of last year. The traditional help desk, there’s basically a customer portal where you go, you fill out some form and then your request goes into the ether and you don’t know what happens to it until somebody pings you back a couple of days later is like, “Hey, we resolved your issue.”

What basically every customer across the board told us is that employee support happens in Slack now. "If you guys are going to build this platform, everything needs to be Slack native. That's where our employees work. We don't want to take them out of there. That's super key to us." If you go to our website, it's very clear that we're deeply integrated with Slack. So we started building into Slack, and then, to your point about knowledge, we went to customers and said, "Hey, you get a lot of repeat questions. A lot of those questions pertain to documents or knowledge bases that you've written. If you give us access to those, we can ingest them and use AI to basically automate answers to those questions so you don't have to answer them over and over again. Just to save you time."

Some people were like, “That’s amazing, let’s definitely do that.” Other people basically said, “Yeah, it’s not going to work for us.” And so we were like, “Okay, why not?” They were like, “We don’t have good knowledge. We don’t have time to maintain it, it gets out of date really quickly and, frankly, it just doesn’t make the priority list.” And so we asked the next question, which is, “Okay, if you hire somebody, how do they get up to speed? How do they learn how to answer these questions if you’re answering them in Slack?” And they were like, “We literally point them to Slack channels and say, ‘Go read up on how we answer these questions and that’s how you should answer going forward.'”

That was the light bulb moment: there is a treasure trove of corporate information, real knowledge, sitting in Slack, or any team chat application, so Teams as well, and companies don't derive a ton of value from it. A lot of what we're trying to build is not only giving operators of these help desks tools to turn Slack conversations into knowledge-base articles, but really building a system that can learn autonomously over time.

You should assume that when you’re using Ravenna, your answers are going to get better over time. The system’s going to get better at resolving your internal employees’ queries over time because we’re listening to that data and evolving the way that we respond and take action based on how your employees are answering their colleagues’ questions.
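To make the learning loop Kevin describes concrete, here is a minimal sketch: resolved threads are ingested as question-and-answer pairs, new questions are matched against them, and anything below a confidence threshold is escalated to a human whose reply then feeds back into the knowledge base. This is purely illustrative; a production system would use semantic embeddings and an LLM rather than the token-overlap similarity assumed here, and none of these names are from Ravenna's codebase.

```python
# Hypothetical sketch of a help desk that learns from resolved threads.
from dataclasses import dataclass, field

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def similarity(a: str, b: str) -> float:
    # Jaccard overlap between token sets: a crude stand-in for semantic search.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

@dataclass
class LearningDesk:
    knowledge: list[tuple[str, str]] = field(default_factory=list)  # (question, answer)
    threshold: float = 0.5  # below this, escalate to a human agent

    def learn(self, question: str, answer: str) -> None:
        # Ingest a resolved thread so future askers get an instant answer.
        self.knowledge.append((question, answer))

    def answer(self, question: str) -> str | None:
        # Return the best known answer, or None to escalate to a human,
        # whose eventual reply should be fed back in via learn().
        best = max(self.knowledge, key=lambda qa: similarity(question, qa[0]), default=None)
        if best and similarity(question, best[0]) >= self.threshold:
            return best[1]
        return None

desk = LearningDesk()
desk.learn("how do I reset my vpn password", "Use the self-service portal at go/vpn-reset.")
print(desk.answer("how do I reset my vpn password please"))  # answered instantly
print(desk.answer("my desk is broken"))                      # None -> human, then learn()
```

The key property, echoing Kevin's point, is that every human-resolved thread makes the next automated answer better.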

Tim: One of the things that is super exciting here is that I see this as how work gets done inside businesses, and it’s really broadly applicable. On the other hand, a truism about successful startups is that they stay focused, and there is this IT scenario initially where IT is used to using tickets, people are used to asking IT for things. Those things tend to be maybe more automatable, I don’t know. But how do you balance that? Staying focused on, let’s just go nail IT service management, ITSM, versus we have this broader vision around better automation for how enterprises get work done. How do you get that balance right? What are you learning from customers and where are they drawn to initially as you start talking to them and start working together through an initial set of users and customers?

Taylor: I’m going to tie this back to some of the questions you asked around, what’d you get excited about working on this? Rewind the clocks, Kevin runs over, “I got this great idea. The market’s called ITSM.” I’m like, “What? I haven’t heard of this thing.” “No, it’s a huge market.” “Really? I’ve never heard of this acronym before.” ITSM is the name of the larger market and it’s been traditionally known as, okay. Half that acronym’s IT.

Today, if you say, "Look, who's the ICP? Who do you want us to introduce you to at a company?" we're going to say, "Look, it's the IT manager." And it's because they know what it is. Again, it's a longstanding industry; they know what to call it, they know that funny acronym, they know the pain points very, very well, and they understand how to wade through the software. So that is typically the beachhead, if you will, for our approach.

Tim: That’s the initial wedge. That’s a great place to enter.

Taylor: Correct. Where this gets more interesting, in my opinion: I remember noodling on this. I was looking at Zapier's help desk channel and thinking, "Huh, this is not the most inspiring stuff, password resets, what have you. Is this really this massive market that Kevin's super excited about?" No shade if anyone from Zapier's listening in; the channel's great, by the way. But what lit the light bulb was looking around the rest of the company: it was the same interaction pattern. The same user pattern that you see in what was traditionally known as the help desk channel is present in HR ops. It's the same thing you see in marketing ops. It's the same thing you see in engineering ops.

It was interesting, though, because I was being very coy when interviewing a lot of folks back then. IT knows what to call it. They know what the class of software is, right? But among the folks in charge of marketing ops or engineering ops, I couldn't find many who knew the acronym ITSM, so I stopped using it pretty early. They know the pain, though. I started to realize, circling around to it: "Look, if you are in, call it, an ops position, marketing, engineering, pick your flavor of department, and your job is to provide great service to your colleagues, you are operating a help desk, whether or not you know it." That's the fact of the matter. So again, to your question about who we start with: we start with IT. It's the most well-known commodity in that space.

The excitement for me was that maybe it's broader than IT, maybe there's more to it than that. And that's proven true so far in these early innings: other folks see a better class of software being introduced by IT, and it's this interesting, infectious thing, like, "Wait, what is that? Where'd that come from?"

So, in terms of precedence, IT is the number one persona, the one where we're going to charge ahead the most on the bespoke workflows they have and the ones we have to help automate better. Nonetheless, HR ops seems to be the one where we've seen organic pull; it's second in position, and after that is revenue ops.

Kevin: I’ll give you a very concrete example. This morning I had a demo call with a large email marketing tool in Europe. They got out these four IT guys on the call like, “Hey, we’re looking for a new tool. We need a new help desk tool, we need AI, etc.” We’re like, halfway through, they’re going through all the requirements. They’re like, “Oh, by the way, it’s not just us. It’s facilities, it’s HR,” and I think they said product is the other team. That happens all the time.

We are always talking to IT people, and it always comes up on our calls: "It's not just for us. Other people who offer internal services need this as well." So it's exciting for us, because IT is the entry point, but then you've got this really nice glide path into the rest of the organization. I don't know if it's a secret or whatnot, but it's one of our core learnings going through this journey: there are a lot of teams across these companies who need this type of tool. That's exciting for us.

Tim: Yeah, it’s an interesting form of land and expand.

Kevin: Yeah, exactly.

Tim: IT has budget, they get it, they need it, but everybody is asking them for something, so you can get a sort of viral spread, and there's no difference in the product functionality between using it for sales ops and using it for IT ops.

We've referenced ServiceNow a couple of times: one of the most valuable application software companies in the world, with a $175 billion market cap. VCs like to use shorthand to describe investments; one of our best investments ever, Rover, was "Airbnb for dogs." I've shorthanded Ravenna as an AI-native ServiceNow for mid-market and growing companies; ServiceNow is obviously upper enterprise. Do you like that moniker? Should I keep using it, or do I need to disabuse myself of that type of VC-speak?

Taylor: I think that’s a good one. It ties back for me at least to the distinction I made earlier around the platform versus the intelligence layer, kind of like, well what are you guys doing? I always like to joke, for better or for worse, we’re doing both. I say for better or for worse, again because it’s a lot of software, but that’s where the conviction is. ServiceNow is what we view as someone who’s taken a very similar bet a long time ago in terms of, “Look, we want to actually own the data layer. We want to actually be the thing that is close to basically all the customer data and the employee data at a company.” We view that as a more durable, longer-term play rather than just the intelligence layer. And so, I like the moniker.

Kevin: Definitely like the moniker.

Tim: All right, I'll stick with it. It's been fun in this conversation watching you ping-pong back and forth, Taylor talking about go-to-market things, Kevin talking about product things. Taylor, your background is traditional engineering leadership. Kevin, you most recently were doing go-to-market at Amazon, but you're an engineer by background. How do you divide up the responsibilities inside the company? That's always an interesting thing that founders sometimes struggle with: you're both full stack, but you still have to have some roles and responsibilities.

Taylor: For Kevin and me, given how long we've worked together, it's probably blurrier than most, but I think that's also one of the benefits of working with him. I know him so well that I can trust him with a wide range of things. That all said, we do try to divide up the product and how we go about it. I've tried to focus more on the AI and automation side of the fence, and Kevin's very much more on, call it, the broader platform side. So that's roughly the split from a product angle.

From a go-to-market angle, I’d say it’s messy at this point. We’re a young startup, it’s kind of like hit the network, hit all your networks.

Tim: And both of you are on customer and prospect calls all the time.

Kevin: Of course.

Taylor: I mean, roles and responsibilities only matter so much; if you have people in your network who might want to buy this kind of stuff, you've got to chase that. It's good to have some delineation between roles, but at the earliest stages it's just messy, and embracing that is part of the deal.

Tim: Another way you run the business that was super nice for us in the process leading up to investing is that you're radically transparent. All of the prospect calls and customer calls, all those videos you record, they were all on Slack. You just gave us access to all of them: "Here, go watch them, see what we're learning, and help us along the way." That was super nice. But that must also permeate through your organization, and maybe it gets to the culture a bit. Speak to the culture some, and what you're trying to be intentional about in terms of building culture in these relatively early days of Ravenna.

Kevin: I think this was, for me at least, a core learning from the first business. We didn't do a good job of talking about what we were doing or telling people about it. Part of it was, I don't know, I didn't think the business we had was the most exciting thing in the world, so there was a bit of not wanting to broadcast it that much. I would hang out with friends, and they wouldn't know what my business was, and I would be kind of frustrated internally, like, "How do you not know? We don't have a lot of friends who started businesses; you should know." But the fact of the matter is, they shouldn't have had to know. I should have been a lot more vocal about our business.

This time around, I think there are two things. We want as many people as possible to know about what we're doing, because we think it's pretty cool, and hopefully other people, and customers, will think it's pretty cool. The other thing is that we want as many sharp minds as possible helping us, on the product and the business. We think the way to accomplish both of those goals is being radically transparent, with our team and with our customers. When we talk about the roadmap, or the stage of the business, what we have and what we don't, it's all an open book, and we're very transparent about where we're at and where we're going.

With investors as well, we shared a ton of stuff with you guys, and it wasn’t an angle to get you guys excited about what we were doing. It was more that we really liked you guys. We thought you were really sharp, and if we share a lot of stuff and you guys see what’s going on, hopefully you’ll get excited about the business. But then also hopefully you’ll, I don’t know, see something that we’re doing and be able to give us feedback on how we can sell better, how we can build better, pattern match across different portfolio companies that you’ve seen and help us. We want everybody to know what we’re doing, and we want as many smart people helping us and being transparent helps us accomplish this.

Tim: Super effective. We should say the other investors, who were investors even before us, Khosla and Founders' Co-op, have also been, I'd say, best practice in creating a collaborative style where we're always up to speed and can try to add value.

It probably has impacted recruiting too. It’s a hard recruiting market, especially for good technical talent and AI talent. You’ve done an amazing job of building the initial engineering team, including great AI background. Without giving away any hiring secrets, talk a little bit about how you’ve been able to do that. It’s never easy, but you’ve made it look relatively easy in these early days. What’s it been like in this hiring market, especially when you’re competing for AI talent?

Taylor: I don’t have any deeply held secrets.

Tim: At least that you’re going to share.

Taylor: If I did, I wouldn't give them away on a podcast anyway. But really, we're super excited about the team we have, and equally proud of the culture, which we've been much more intentional about building this time around. We've tried to hold a high bar with the folks we're interviewing. That was more of a self-serving thing originally, but I like to think it comes through, frankly speaking, for a lot of the folks we're speaking with.

It’s not just about the mission per se, it’s also about knowing that we basically have built quite a bit of software in our past lives and have a lot of perspective and a lot of conviction. Not just the market we’ve talked a lot about, but also how to go about building this and how we’re thinking about taking a different approach. I think that in itself has helped basically attract a lot of folks that, frankly speaking, we’re honored to be working with at this point as well.

Kevin: Totally. My playbook, I’m happy to share it because it’s pretty simple. I reach out to a lot of people and I tell them that Khosla and Madrona put some money into a company to help go after ServiceNow’s market, and people get excited about it. Yeah, it’s just trying to find good people and trying to get them to have a conversation with you and then explain the vision of what we’re doing and why we think not only the opportunity is really big, but we want to build the next great Northwest software company, if not West Coast software company.

We want to be intentional about building an amazing engineering culture, an amazing product culture, an amazing culture that works backward from customers. Amazon likes to say they're the most customer-centric company; hopefully, we'll be the most customer-centric company over time. We're very much striving to do that right now, and to build a great place where people want to come work.

Tim: What's an example of an assumption you had coming into this company that, a year later, turned out to be wrong, and that you had to quickly work through? Not necessarily a 180-degree change in direction, but constant course correcting.

Taylor: It goes back to what I said about picking a large market, and being conscious about that. Nothing in life is free. You get into it and you quickly realize a couple of different things. If you pick a large existing market, sure, people know it, and you can assign a market cap to it. It probably makes the investor conversation a little easier in terms of figuring out the TAM. But once you actually start building, you quickly realize that a well-known market has a lot of well-known features, a lot of well-known capabilities, a lot of well-known expectations from the buyer. Which on some level is good. It clears things up.

The trade-off we found is that it translates into a lot of software. So again, for better or for worse, that fits well with some of our strengths and some of the recruiting that we've done. We've been moving extremely fast because we have to. Another quick tenet about how Kevin and I think about building companies: the whole stealth thing is orthogonal to us. I'm not going to go so far as to bash the folks who want to do that type of thing.

One of the learnings from our journey is that there's nothing more true, harsh, and real than the market. Every bit of time that you spend not interfacing with that market, with what you're building, is a gap that you keep accumulating. One thing we always talk about at Ravenna is making contact with reality as fast as possible.

Tim: I agree. I think the value of asking for feedback, shipping, and getting the feedback from actually shipping so outweighs any risk of, "Gosh, somebody else took my idea," or "We should have stayed in stealth longer." It's not even close. You guys have lived that. We keep talking about this big market. We alluded to this: a way to think about Ravenna is an AI-native ServiceNow for the mid-market. So ServiceNow just did a big acquisition.

Kevin: Yeah, it did.

Tim: They bought this company called Moveworks, you know, the biggest acquisition in the history of ServiceNow. It's kind of an AI ITSM. How do you think about that move? Is it relevant for Ravenna? How is Moveworks similar to or different from the product you're building and the market you're going after?

Kevin: In terms of whether it's relevant? Sure, it's relevant in the sense that it's definitely in the market we're playing in. We got really excited when we saw it. Clearly, we're not the only smart people in the world who know there's a lot of opportunity in this space, but it's exciting to see the activity, and obviously a big acquisition, so it's cool to see.

Moveworks is a previous-generation AI intelligence layer on top of existing help desks. It was brought up a lot by investors when we were going through our initial fundraising: "Are you guys trying to be Moveworks? Are you trying to be ServiceNow? How do you think about it?" Because there's the AI, but there's also the platform. Our approach is distinct in the sense that Moveworks sits on top of existing platforms like ServiceNow, whereas we're trying to build the foundational platform plus the intelligence layer on top.

At the end of the day, customers will get similar AI capabilities from Ravenna — current, next-gen capabilities — because we're LLM-native. I think they're built on previous-generation NLP technologies.

Tim: Which has a huge impact on accuracy and whether it works.

Kevin: We think so. Yeah, exactly. I mean, no shade to the Moveworks folks. They've clearly built an awesome business and had an amazing outcome, and congratulations to the team, because that's fantastic. That's what every entrepreneur strives for. We just believe that, in the fullness of time, the majority of the value accrues to the platform, if you can become the system of record. We honestly felt this was the time to take a shot at building a new system of record in this space. That's one of the fundamental differences between us.

Now, in terms of near-term impacts on the market, I'm not sure what ServiceNow's plans are for Moveworks, but there is a large, call it mid-market-to-enterprise, segment of customers who need these AI capabilities. Whether Moveworks continues to play there, or ServiceNow brings it more upmarket into large enterprises, which is where they like to play, there's just a lot of opportunity for us in this space.

Tim: Yeah, that's a great point, because of the things we talked about: Slack, easy to get up and running, a beautiful UI. But another thing is price point.

Kevin: Yeah, very true.

Tim: You get a lot of functionality at the enterprise level, but you're making this accessible, at a price point that works for faster-growing companies, and letting them grow with you.

Kevin: 100%.

Tim: We've talked about how AI is an integral part of the product, and you also built AI systems at Zapier, Taylor. One question we think about a lot from an investment standpoint is: what's durable? Is there a moat from the AI itself? What's your take on that? Do you feel like the technology itself is a place where you can build competitive advantage? You're building an agent-based system here. What does that mean to you, and is that part of what you think will provide customers with durable competitive advantage over time?

Taylor: This goes back to the things that got me excited about this originally. It might be useful to first break down what we mean when we say AI and automation here. As a big generalization, 50% of the automation falls into the category we talked about earlier: "Hey, there's information somewhere. It's in Slack, it's in a KB, it's in these other interesting places. Can we answer that in a more automated way?" That's one side of it.

The other side of it is actions. When I say that, for lack of a better example: instead of asking, "Hey, what's the procedure to reset my password?" it's more interesting to say, "Hey, can you reset my password," right? Actions. The first side, I think we covered decently well. One of the things Kevin touched on is creating knowledge. That's a very interesting thing here. Whether or not you want to call it us building a KB, we haven't gone so far as to put that stake in the ground as a product feature yet.

Nonetheless, one of the things that gets me excited about the idea is that Ravenna grows with you. This knowledge is in all these disparate places, and we have the ability to go through, home in on where people work, and make Ravenna better.
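To make Taylor's two automation buckets concrete, here is a purely illustrative Python sketch of a request router that either performs an action or falls back to answering from knowledge. The function names, the single hard-coded action, and the keyword matching are all hypothetical stand-ins, not Ravenna's implementation.

```python
# Toy router for the two buckets Taylor describes: (1) answering from
# existing knowledge, and (2) taking an action. Hypothetical sketch only.

def search_knowledge(text: str) -> str:
    # Stand-in for retrieval over Slack threads, KB articles, and other sources.
    return f"Here's what I found about: {text!r}"

def reset_password() -> str:
    # Stand-in for an actual identity-provider or provisioning call.
    return "Password reset link sent."

# Map recognizable action intents to callables; a real system would use an
# LLM or intent classifier rather than substring matching.
ACTIONS = {"reset my password": reset_password}

def handle_request(text: str) -> str:
    for phrase, action in ACTIONS.items():
        if phrase in text.lower():
            return action()           # action path: do the thing
    return search_knowledge(text)     # answer path: retrieve and respond

print(handle_request("Can you reset my password?"))  # action path
print(handle_request("What's our VPN setup?"))       # answer path
```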

Tim: Awesome. So exciting. So much to go build, so much opportunity. Last question. Any advice for other aspiring founders out there thinking about getting something started in this market right now with AI?

Kevin: The thing I would encourage everybody to do, if you're thinking about building a product, is to go talk to a lot of customers before doing it. The biggest mistake we've made, many times throughout our careers, is, "Oh, this seems cool. Let's go spend a month, three months, six months." As engineers who know how to code, the bias is to just go build, because it's easy. Building is way easier than finding 20 customers who will give you a half hour of their time to validate your idea. But you're going to save yourself so much time: you'll disprove ideas, or you'll validate your idea and have a lot more conviction about going off and doing it. The biggest piece of advice I can give to folks who want to start companies is to go talk to 20 or 30 companies or customers before you write a single line of code.

Tim: You think you’ve done enough customer validation? Go do more, double down.

Kevin: You can never have enough. Even now, every customer call we're on at our stage — and we're not a super old company, we're eight months old — we treat as a discovery call. We spend most of the time asking questions and trying to learn as much as possible about the pain they're trying to solve, because that influences what we're going to build next week, next month, et cetera. We spend a little bit of time talking about Ravenna as well, but the learnings are still critical for us, and I think they always will be.

Tim: Bring us home, Taylor.

Taylor: I'm always reticent to give advice, because having done this for a decent amount of time, I've found everyone's experience is so bespoke to them. I do love advice. I do love hearing other people's journeys. But that's the way I think about it.

One of the things from my journey that I try to hold true, and that I'm reminded of even in conversations like this: we've talked about ServiceNow so much, and the incumbents out there, but at the end of the day, the only thing that matters is the customer. That's the only thing that matters. I try to hold a, call it, competitor-aware but customer-obsessed point of view.

That's critical because I've seen the playbook run the other way around, and I've seen not a lot of success. Whereas I've been lucky enough to work with folks who had, even to my surprise, a maniacal focus on the customer. Despite the fact that we were circled by crazy incumbents and everything on the wall said we were going to lose, it was that maniacal focus on the customer and the problem that pulled us through at the end of the day. So I'll try to pull that together here too.

Tim: Customers, it’s all about the customers.

Kevin: A hundred percent.

Tim: Thank you both so much. It’s a real privilege and a ton of fun to be working together. Looking forward to the future.

Reinventing Recruiting: SeekOut’s Anoop Gupta on the Rise of Agentic AI

 

This week, Madrona Managing Director Soma Somasegar sat down with Anoop Gupta, the co-founder and CEO of SeekOut — a company at the forefront of agentic AI in recruiting, redefining how organizations discover, hire, and manage talent.

In this conversation, Soma and Anoop explore how SeekOut has evolved its platform to include SeekOut Spot, an agentic AI solution that reduces the time it takes to move from job description to qualified candidates — from 45 days down to just three. Together, these two long-time friends unpack lessons on building an AI-native company, navigating changing market dynamics, and what it takes to deliver real outcomes in a sea of AI hype.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Soma: Anoop, you have truly an eclectic background: initially academia, then a startup, then a large company, and now back to a startup. Why don't you introduce yourself a little bit to the audience, and also talk about what you do at SeekOut and what SeekOut does?

Anoop: I'm Anoop, I'm a geek and an entrepreneur. I got my PhD from Carnegie Mellon in computer science. I was on the faculty of computer science at Stanford, and then did my first startup in '95. We were one of the first companies doing streaming media, when modems were still 56K modems, and it was a wonderful time at Stanford. That company was acquired by Microsoft in '97. SeekOut is a talent business. We focus on helping companies build winning teams and fill their talent gaps. We look at talent very holistically: external talent, internal talent, how to retain, grow, and recruit people. We are used by some of the top brands. We have over 700 customers, a who's who in tech, in defense, in pharma. We feel really privileged that they use us as their recruiting and talent solution.

The Role of Agentic AI in Recruiting

Soma: Now, to remind people, as much as we think AI has been around for the last 100 years, it's really only been five years since the transformer revolution, so to speak, happened and the large language models came into existence. Even before all that, you were thinking about SeekOut as an AI-first company. You're going from, "Hey, how can AI help you," to, "Hey, how can AI do it for you," kind of thing, right? Some people refer to it as agentic AI or what have you. Tell us a little bit about what got you started on AI from day one, and how you evolved with the changes in technology and the innovation that's happening at a pretty fast pace.

Anoop: If you look at talent, there's a lot of data. You've got to understand the data, and the data is noisy. Even something as simple as where someone went to school: if you went to the University of Pennsylvania, do you say Wharton or Pennsylvania? The same applies if you're building diversity classifiers. The early AI was in data cleansing, data combining, classifiers — everything that goes into building the most amazing search engine in the world. That is where we started. As time went along, the second thing was, "Oh, LLMs are here, and you can just give one a job description and it can build searches for you." But it has fundamentally changed with agentic AI.

The thing is that recruiting, and actually any talent job, whether you're thinking about succession planning or anything else, is a very predictable thing. You look at a job description, you talk to the hiring manager to understand the needs, you search a large landscape, you evaluate hundreds of thousands of people, you reach out in hyper-personalized ways. At that, agentic AI is very good. Vertical AI is very good. That's how we can bring the time from a job description to the initial candidates you're interviewing from 30 or 45 days down to three days. That is the magic. It's a transformational jump.

Soma: I've heard you consistently tell me this, Anoop, for a while now: companies that don't take a step back and reimagine how they go after recruiting and managing talent are going to be left behind. Why do you think now is the time for companies to embrace agentic AI solutions to reimagine how they go about talent acquisition, and what is the urgency here?

Anoop: Yeah. So, Soma, the world is changing. Every industry is changing, every business is changing. People are refining their strategies and saying, "We've got to adopt AI, or we've got to do this differently." Now, alongside all of this evolving business strategy, you've got to have a talent strategy. You've got to ask, "Do I have the right people in the seats?" now that the speed at which companies need to change has increased. In this world where things are changing, it becomes urgent for talent organizations to say, "What am I going to do differently to deliver high-quality talent quickly, so the right people are in the right seats?" One more angle I would quickly add: there is a lot of pressure on all organizations. AI is coming; how are you becoming more effective, more efficient? That is another pressure people are feeling, to do more with less.

Soma: You guys have recently launched SeekOut Spot. I'm super excited about that. In fact, I'm proud to say in this forum that we were probably one of the earliest customers of SeekOut Spot, and we are happy customers. Tell me a little bit about that. Tell me how you came up with it, because there is a growing trend here where people are talking about not just software as a service, but service as software, with AI playing a key role. Tell me about the genesis of SeekOut Spot and what it does for companies, organizations, and talent leaders.

Business Model and Flexibility

Anoop: When we start with the business leader, what they care about is the right hires, with speed and quality. The magic of AI agents is that the outcome is the thing being delivered, not, "Here is a tool that your people have to use." The fundamental thing, from a business-model standpoint, is the focus on outcomes. Now, there is a lot of flexibility, because it's a combination of people and AI agents. We have supported a lot of different models. We can deliver you a hire, with pricing associated with that. We can augment the sourcers that you have; maybe you need fewer sourcers, or when demand is changing, we can come and help you.

There's a lot of flexibility in the business models, but they're all outcome-based. To dig in a little: what does the recruiting task look like? How do you engage with the talent? That is interesting. Our recruiters tell us that by the time they've written the 10th message, their eyeballs dry out, fall off, and roll across the table. It is crazy, the amount of hard work you have to do as a recruiter. With SeekOut Spot, the recruiters focus on the tasks they love — talking to the hiring managers, talking to the humans, selling candidates on the role — and Spot takes over everything in the middle, delivering results faster and with higher quality.

Soma: That's awesome. It sounds truly magical, but help us walk through the shift here. SeekOut Spot, in my mind, is a classic example of service as software. Tell me, what are the business model changes here, and why is this the right thing for your customers?

Anoop: The business model change is that we deliver hires, or we deliver you strong candidates. That is the outcome. Ultimately, what the talent leader and the business leader are looking for is, "Did you get me a hire? Are they great? Are they the right fit?" They don't want it taking six weeks. The average time for a technical hire, per Ashby, is 83 days, and for non-technical it's around 63. That's a long time, and that's just the median; many searches take longer. So this can be so much faster and better.

Soma: As you know, Madrona wanted to hire a data scientist a few months ago, and whenever we think about hiring for a position, my mental model is, "Hey, I'll be happy if we can hire somebody in the next 90 days." Maybe 180 days, but with 90 days I'd be thrilled. This hire, the data scientist, took less than four weeks from start to finish, I think.

I was amazed at the speed and the quality of the candidates we saw through the process. It all happened amazingly well for us. Thanks to you and to SeekOut Spot, we made that hire, and that person has been on board for the last few months, and we are thrilled with him so far. So tell me — you mentioned earlier this goes from 30 or 45 days to three days, right? We've seen at least one example in our own environment where we were able to hire a high-quality data scientist in about three and a half weeks from start to finish.

Anoop: Basically, what we can do with this technology is go from the kickoff meeting with the hiring manager to the initial candidates you're interviewing by the fourth day. That is the magic. Now, the hiring, making the offer, takes a little bit of time. We have Discord, which is getting amazing results. We have a startup named Shaped.ai, which is getting hires within three weeks with the initial… And they're amazed at it. If you look at the quotes on our website, we have Discord, 1Password, HP, Shaped, and Madrona. Even though it's early, we are seeing real proof that there's magic in here.

Soma: The other thing I've heard you say, Anoop, when I was over at your place for the SKO: you mentioned, "Hey, with SeekOut Spot, we can deliver a 5 to 10X productivity gain for recruiters," or talent acquisition people generally. Talk to me a little bit about that.

Recruiting Process and Efficiency

Anoop: Basically, here is how the recruiting process works. The recruiter talks to the hiring manager, understands the role, builds a mental model of what they need to do, does some search, comes back, does more search, sends some messages, and that cycle goes on. In SeekOut, on the first day, after you talk to the hiring manager, you have a success rubric. You have automatically explored thousands of candidates, you've evaluated each one of them — we give you a spider graph of how they're doing across each of the rubric elements — and you have sent out messages.

That thing — that's where I'm saying 5 to 10X — takes a long time for a recruiter, and that time is being compressed, to the benefit of the recruiter and the customer. I'll tell you, we have specialized in this service. Of course there's technology, but we also have recruiters who operate this technology, because there are some tasks that humans do best. They're the happiest, most energized recruiters, because, "I'm doing what I love and I'm delivering results quickly." It is pretty amazing how everyone is happier: the business leader, the talent leader, the recruiter. So it's exciting to us.
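As a rough illustration of the rubric idea Anoop describes, here is a hypothetical Python sketch that scores candidates against weighted rubric elements; the per-element scores are the kind of thing a spider graph would visualize. The elements, weights, and scores are invented for the example and are not SeekOut's actual system.

```python
# Hypothetical rubric scoring, not SeekOut's implementation. Each candidate
# gets a 0-5 score per rubric element agreed on with the hiring manager;
# the per-element scores feed a spider graph, and a weighted sum ranks them.
RUBRIC_WEIGHTS = {
    "ml_experience": 0.3,
    "python": 0.2,
    "communication": 0.2,
    "domain_fit": 0.3,
}

def overall_score(scores: dict) -> float:
    """Weighted average of per-element scores (each 0-5)."""
    return sum(w * scores.get(k, 0) for k, w in RUBRIC_WEIGHTS.items())

candidates = {
    "candidate_1": {"ml_experience": 5, "python": 4, "communication": 3, "domain_fit": 4},
    "candidate_2": {"ml_experience": 3, "python": 5, "communication": 5, "domain_fit": 2},
}

ranked = sorted(candidates, key=lambda c: overall_score(candidates[c]), reverse=True)
for name in ranked:
    print(name, round(overall_score(candidates[name]), 2))
```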

Soma: That is great. We've been talking about human-in-the-loop for a while now, and with something like SeekOut Spot, what you're really telling organizations is, "Hey, recruiters are highly valuable. Let them focus on the things they need to focus on, and I'm going to give you an agentic AI solution, AI agents, that will work in conjunction with your talent people to get things done better, faster, all that fun stuff." This notion of a hybrid model, where you have AI agents working with human beings, seems like a great model to drive forward as it relates to recruiting and talent acquisition, right?

Anoop: Yes.

Soma: If I'm a recruiter or talent acquisition professional today, and I see the world of agentic AI, you could argue it's going to disrupt my world, or you could say it's going to reimagine what is possible and let me do what I need to do much, much better — 10X better, whatever it is. What should I take away from this as a recruiter or talent acquisition person, and how should I prepare for this wave of innovation?

Anoop: So here's the way I'd put it: do what is human, what only humans can do, and become the best at it. One part is, when you talk to the hiring manager, how can you be an advisor? Ask hard questions. Do you really mean this? Do you really want this? What is this person going to do? Feeling confident, expert, and good at that is one side of it. The second is when you're talking to a candidate. How do you sell? How do you say what is inspiring? How do you explain what they're going to do, and why this is a great company for them, or not? Those are the skills you have to become very good at. A lot of the rest, the messy middle, which is critical and important, AI agents will do a great job on for you.

Soma: That's cool. That's a good way to frame and think about it. I always tell this, Anoop, to every founder and every startup: in the history of this world, there isn't a single company that has had a completely smooth journey. There are good days, amazing days, okay days, lousy days, and everything in between.

In your journey over the last, say, seven years, you've gone through some amazing highs, some not-so-great lows, and everything in between. How did you and your co-founder, Aravind, navigate through this, and are there any takeaways, learnings, or ahas you'd like to share that would be valuable for other founders? Because every founder goes through this.

Navigating Challenges and Product-Market Fit

Anoop: We had an early phase where we were in hypergrowth, exponential growth, and then came the economic malaise. The market changed, we went through some flat portions, and now we are on a path to hypergrowth again. What are the things you need to do? I think the most important thing is continuously watching product-market fit. When the market changed, the environment changed, and what was needed changed, it took us a little while to ask, "What do we do?" Because the market always wins. You can have a great team, but if the market isn't there, you're not going to succeed. You can have a lousy team, but if you're aligned with the market, you're going to win. So one key message is: watch for market fit. Just because you had market fit once doesn't mean you're maintaining it.

The other is to have a sense of confidence, to always be iterating and experimenting, and to keep calm. Your organization comes along with you; a bad stretch shouldn't ruin it, though you have to be very conscious about managing your expenses and how you are doing. I want to point to one piece of advice, which was maybe a thing for the times. When we were raising our Series B, or Series C, we had not spent the money we had already raised. And you said, "Go ahead and raise anyway. It's good times." And that helped us too. We've never had to be in a desperate situation — not that we don't want to be scrappy, or conscious about spending. It has given us comfort and a cushion, and that was very wise advice.

Soma: Thank you. There are two elements to that, Anoop. One is that you want to raise money when you don't need to raise money; that's always the best time to do it. The second is, as you said, one of the reasons I was excited about us raising that money was that I've seen how scrappy you and Aravind have been. I wasn't worried that having a little more money than you need today would let bad spending behavior set in. It doesn't matter how much money is or isn't in the bank. As entrepreneurs, as founders, as what I call efficient stewards of capital, you have to always be thoughtful about that. As I say, any increase in investment should warrant a return on that investment. If you're confident about that, go do it. If not, you have to be really thoughtful.

I'm so glad you raised that round; it has helped tide you through the last year or so and put you back in hypergrowth mode. That's fantastic. Going hand in hand with raising is also more thoughtful deployment of capital.

Anoop: I totally agree. We always feel like it’s our own money, we have a responsibility.

Soma: Are there any lessons from your own journey? I truly believe that you guys were AI-first from day one, as I said, well before the Transformers and large language models came into existence. Any guidance or advice that you would give to founders today when people are thinking about like, “Hey, I want to ride this AI wave. I want to truly be an AI-first company,” what should they do or what should they not do?

Founders: Focus on Outcomes, not on Hype

Anoop: My advice to founders, and actually to our customers and prospects, is to focus on outcomes, not on the hype. Everybody has put AI in their marketing materials, so I say: look for the outcomes. What are the outcomes you're delivering? Get to the success stories and shout those from the rooftops; that focus on outcomes is really important. In the recruiting space, we have a lot of companies that talk about, "We are AI. We are agentic AI," and all they have done is maybe add an LLM, so you give a natural language query and something comes out. That is not the end solution. Part of vertical AI is looking at the whole workflow and process that results in the outcomes. What I would say is: don't use AI as a buzzword; genuinely create value for your customers. That is the thing to do. There is a lot of power in what's coming, but focus on the customer's problems and outcomes.

Soma: When people are thinking about AI — and I agree with you completely, focus on the outcome and not the activity alone — how important is what I call a data moat? Do I necessarily need proprietary data, or a data moat, if I'm an AI-first company, or do I not need that?

Anoop: I think there are different kinds of data. A lot of data is available; everybody has data. One moat is the experience moat. As you work with clients, you get proprietary data from the customer, and how you integrate it, and how easily you can do that, matters — so it's not just your own data moat plus some base data that you have. One example: in recruiting, it's not just external data. How do you integrate with the applicant tracking systems and the data that customers have, or their internal employee data? How do you integrate with specialized resources and partners, whether it's healthcare and nursing data? I think the data moat comes from delivering outcomes and the learnings, and those learnings and that data also become a moat.

Soma: If I take you at face value, you're saying from the rooftop that every investor should be talking to their portfolio companies about, "What is your talent acquisition strategy? How are you reimagining it in this world of agentic AI?" What would you want to tell investors?

Don’t build a recruiting org too early

Anoop: I think building a recruiting org too early is not good, because your demand is going to fluctuate, and the quality and the people you need will fluctuate. These are specialized roles. What a startup should probably have is one recruiting manager and a recruiting coordinator, who interface with the hiring managers and handle scheduling interviews and calendaring, and then work with somebody like SeekOut. Because in startups, the right hires are really important. The cost of a bad hire is so much more than the direct cost alone. If you can get high-quality hires in two or three weeks, that makes a difference to your business outcomes. I would invite you — it'll sound a little bit selfish as I'm saying it — come and check us out, talk to us. I think it can make a big difference.

Soma: I want to underscore one point that you made. This is true for every organization, every company, but it is even more true for a startup, because you have finite resources. Every right hire can be a true force multiplier.

I truly believe it's extremely important for startups, particularly in the early stages of the company, to ensure that. Everybody goes through this — nobody bats a thousand; there are always hiring mistakes — but you want to minimize them and truly understand that every great hire is going to be a true force multiplier for your company.

Anoop: Yes, it is so important.

Soma: From your vantage point, particularly as a hiring manager, what do you think are the biggest hiring mistakes that companies are making today?

Anoop: One is that company strategies change. Hiring strong, fungible engineers and marketers who can change as your strategy changes is really important. That is the thing you need to do. The second thing we have found is that attitude is really important: people who can handle ambiguity, who can take the punches, roll with the punches, adapt and adjust, and get shit done. Those things are also super critical in the hires that you make.

Soma: Now, the other side of the audience is usually founders: existing founders, new founders, or people thinking that in the next 6 to 12 months they want to become founders. What is your message to them?

Anoop: My message to founders is, one, before you hire, try to do the job yourself in some cases. I did a lot of sales — I had never done sales before — and I became an expert because otherwise I didn't know how to hire a salesperson. That was one thing on the talent side that I did. Then there were many cases where I had no expertise; let's say a sales leader or CRO. I leveraged people at Madrona and said, "Would you interview this person for me?" You want to leverage connections and contacts who are experts in that area, so you can get a good sense of what you need.

My recommendation to people who are thinking of becoming founders is that the initial team is super critical. Familiarize yourself, at a level of detail, before you jump into hiring all those people, so that you know what the right thing is that you need. The salesperson you need for one startup versus another varies a lot. You have to ask, "What is your selling motion? Who do you need?" and understand that deeply. Second, leverage your friends to help with that, and then leverage the right people who can feed you that talent.

Soma: How can I get in touch with you, Anoop, if I’m a founder or investor and want to learn more?

Anoop: Okay, it's simple. My email is [email protected]. You can find me on LinkedIn; connect with me, and we'd love to talk to you and show you. Because seeing is believing. Everybody talks so much, and I'm such a passionate believer that seeing is believing: come and see, come and experience, and we would love to partner with you.

Soma: As we come to the end of this episode, Anoop, I want to congratulate you on pushing the boundaries and the envelope of what AI can do for talent acquisition, for organizations of all sizes and in all industries. Is there a final word you'd like to leave with people, whether they're in a smaller environment like a startup or a bigger environment like an enterprise, about what they should do about talent management and talent acquisition as they look ahead?

Agentic AI for recruiting is here

Anoop: Agentic AI for recruiting is here and now. I would say: experiment with it. This is the time. Be early, before the change is thrust upon you. Be the lean-forward leader, experimenting, adapting, and flowing with the transformation, versus being hit by it when somebody comes and says you're too late. The world is changing, and it is changing in amazing, wonderful ways. Don't get stuck in the old world, to the extent you can avoid it, and look broadly at what needs to be done. Especially for the large enterprises, the transformation is going to be huge, and even for small companies. So, my final word — this will sound very selfish — contact us. We'll show you what we can do for you as you explore all of the different options out there, so that you're getting the right hires and knocking it out of the park.

Soma: First of all, even before I wrap this up, thank you for allowing me to partner with you and Aravind for the last seven years or so. It's been a fabulous journey, with so many learnings and so much success. For all the progress we've made, I think we are still in the early stages, and there is so much more we can do. I'm looking forward to that. Thank you so much for joining us here today, and thanks to everybody for listening. We'll see you again soon.

Anoop: Thank you, Soma. It has been wonderful to have you as a partner in our journey.

 

Unscripted: What Happened After the Mic Went Off with Douwe Kiela

 

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Full Transcript below.


Sometimes, the best insights can come after an interview ends.

That’s exactly what happened when Madrona Partner Jon Turow wrapped the official recording of our recent Founded & Funded episode with Douwe Kiela, co-founder and CEO of Contextual AI. The full conversation dove deep into the evolution of RAG, the rise of RAG agents, and how to evaluate real GenAI systems in production.

But after we hit “cut,” Douwe and Jon kept talking — and this bonus conversation produced some of the most candid moments of the day.

In this 10-minute follow-up, Douwe and Jon cover:

  1. Why vertical storytelling matters more than ever in GenAI
  2. The tension between being platform vs. application
  3. How “doing things that don’t scale” builds conviction early on
  4. The archetypes of great founders — and how imagination is often the rarest (but most valuable) trait
  5. Douwe’s early work on multimodal hate speech detection at Meta and why the subtle problems are often the hardest to solve
  6. Why now is the moment to show what’s possible with your platform — not just sell the vision
It's a fast exchange full of unfiltered insight on what it really takes to build ambitious AI systems — and companies.

And if you missed the full episode, start there.


This transcript was automatically generated and edited for clarity.

Jon: One thing I'm learning: I talk to a lot of enterprise CTOs, as I'm sure you do, and a lot of founders, as I'm sure you do. Even when this kind of technology is horizontal, we say you go to market vertically, or by segment, or whatever, but I don't think that's quite right. I think the storytelling is the thing that becomes vertical or segmented. When you speak to the CTO of a bank versus the CTO of a pharma company, or the head partner of a law firm, or whoever it would be, their eyes will glaze over when we start to talk about chunking. But if we can talk about SEC filings and the tolerances in there, and a couple of really impactful stories in the language of those segments, that seems to go so far. I've seen it myself, and astute customers will realize it's the same thing underneath. So storytelling, at a time like this, where there's opportunity in every direction you look, feels like a thing that can be a superpower for you.

Douwe Kiela: It’s not easy, because it’s like, how vertical do you want to go? We don’t want to be Hebbia or even Harvey; we want Hebbia and Harvey to be built on Contextual, but the only way to do that is to maybe show that you can build a Hebbia and Harvey on our platform.

Jon: I'll tell you about when I've done it right and when I've done it wrong. When I did it right was in the early days of DynamoDB, the managed NoSQL data store, and we said, "Dynamo is really useful for ad tech, for gaming, and for finance, probably." That's because there were key use cases in each of those domains that took advantage of the capabilities of NoSQL and were not too bothered by its limitations, like only having certain kinds of lookups. Astute customers could realize you could use Dynamo for whatever you wanted, but we never said that. All of our marketing was customer references and reference implementations, and that helped us plant our feet really well. When I've done it badly, it also shows the power of this technique. I remember I did a presentation about Edge AI, this was around 2016, at AWS re:Invent. We had shipped the first Edge AI product ever at Amazon.

We showed how we were using it with Rio Tinto, a giant mining company doing autonomous mining vehicles. We chose that because it's fun and sparks the imagination, and we thought it would spark the imagination across a lot of domains. This was at re:Invent, so it was on a Wednesday or a Thursday, I want to say, that I did that presentation. On Friday morning, before I was going to fly out, I got an urgent phone call from the CTO of the only other major mining company of that scale, saying, "I have exactly that problem. Can you do the same thing for me?" I thought, "Well, gee, I aimed wrong," because I picked a market of two, and I already had one. But it shows that people don't necessarily use imagination; if you put it in terms that recognizable, they can see themselves in it.

Douwe Kiela: Yeah. So I heard that, maybe it was Swami or someone senior at AWS, who said, "The big problem in the market right now is not the technology, it's people's lack of imagination around AI."

Jon: That sounds like a Swami.

Douwe: Swami or maybe Andy. Yeah, I don’t know.

Jon: It could be. I would also say that's a major role for founders on this spectrum. I'd put you in a group with Sergey and Larry, right? So there are the Douwes, Sergeys, and Larrys; there are the Mark Zuckerbergs, who are only PHP coders; and there are the domain experts who are visionaries. They're missionaries about solving a real problem, and they understand the problem better than other people do. They're not necessarily nuanced in what is possible, but they can hack it together and get it to work well enough to reach a point where they can build a team around them.

Douwe: Who’s the archetype there?

Jon: I would think about, this is not a perfect example, but I would think about pure sales CEOs.

Douwe: Benioff or something?

Jon: Yeah, or the guys who started Flatiron Health and Invite Media. They were not oncology experts; they understood their customers really well. Jeff Bezos was not a publishing expert, nor did he write code at Amazon; I'm not sure he ever checked one line of code into production. But he had deep customer empathy and conviction around that. The story with Jeff is that the first book ordered on Amazon.com by a non-Amazonian was not a book they had in stock. The team told Jeff, "Sorry, we've got to cancel this order." And Jeff said, "Like hell we do." And he got in his car and went to every bookstore in the city.

Douwe Kiela: Barnes & Noble, somewhere.

Jon: Yeah, and he found it, and then he drove to the post office and mailed it himself. He was trying to make a point, but he was also saying, "Look, we're in the books business now, and we promised our whole catalog. On the first order, you better believe we're going to honor it." So that's what I think about. And you do things that don't scale, and the rest.

Douwe: Doing all the crazy stuff. All the VCs are saying, "Just do SaaS, no services. Focus on one thing, do it well." And all of that is true, but if you want to be the next Amazon, then you also have to not follow that.

Jon: Do things that don't scale, and you figure out, you know and I know, how to get things to scale eventually. One of the reasons, and you would know this so much better than I do, one of the reasons Meta invested as early as it did in AI was content moderation. You would like a social media business to scale with compute, but it was starting to get bottlenecked by how many content moderators it needed, and that's a lot slower and more expensive. How quickly and effectively can you leverage that up?

Douwe: That’s why they needed AI content moderation.

Jon: That’s why they needed AI.

Douwe: We did all the multimodal content moderation. That was powered by our code base.

Jon: Wow. And what year?

Douwe: It was around 2018. We did hateful memes. I don't know if you've heard of this, the Hateful Memes Project; that was my thing. Where that came from was that content moderation was pretty good on images and pretty good on text, like if there was some Hitler image, or whatever, or some obvious hate speech.

Jon: That’s kind of an easy one.

Douwe Kiela: Exactly. The most interesting ones, and people had figured this out, are multimodal. I have a meme, and on the surface, to the individual classifiers, it looks fine, but if you put the modalities together, it's super racist, or they're trying to sell a gun, or deal drugs, or things like that. Everybody at the time was trying to circumvent these hate speech classifiers by being multimodal. So I got on it, and we solved it.

Jon: How did you solve it?

Douwe Kiela: By building better multimodal models. We had a better multimodal classifier that actually looked at both modalities at the same time, in the same model. We built a framework, we built the data set, and we built the models, and then most of the work was done by product teams.
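As a minimal sketch of the early-fusion idea Douwe describes, here is a toy PyTorch classifier whose joint head sees image and text features together, so it can flag combinations that look benign to each modality alone. The dimensions and architecture are assumptions for illustration; this is not the Hateful Memes codebase, and a real system would feed in features from pretrained vision and text encoders.

```python
import torch
import torch.nn as nn

class FusedMemeClassifier(nn.Module):
    """Single classifier over concatenated image+text features (early fusion)."""

    def __init__(self, img_dim: int = 512, txt_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Because the head sees both modalities at once, it can learn
        # interactions that unimodal classifiers miss (benign image plus
        # benign text combining into a hateful meme).
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # logits: benign vs. hateful
        )

    def forward(self, img_feats: torch.Tensor, txt_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([img_feats, txt_feats], dim=-1)
        return self.head(fused)

# Dummy features standing in for encoder outputs:
model = FusedMemeClassifier()
img = torch.randn(4, 512)   # batch of image embeddings
txt = torch.randn(4, 512)   # batch of text embeddings
logits = model(img, txt)    # shape: (4, 2)
print(logits.shape)
```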

AI, Ambition, and a $3 Trillion Vision: Satya Nadella on Microsoft’s Bold Bet

 

TLDR: Microsoft Chairman & CEO Satya Nadella shared candid insights on leadership, AI, and Microsoft’s transformation into a $3 trillion powerhouse during Madrona’s Annual Meeting on March 18, 2025. He reflected on the cultural shifts that fueled the company’s resurgence, Microsoft’s AI strategy and pivotal AI partnership with OpenAI, and why AI’s success should be measured in global economic growth. His key messages? Mission and culture define strategy. AI is still in its early days. And “The world will need more compute.”

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated.

Soma: Satya, it's fantastic to have you here today. I don't know if you remember this: we had you at our annual meeting five years ago, to celebrate our 25th anniversary. But it so happened that once we agreed we were going to do this, two weeks before the event, we had to go on a massive scramble. The world changed from everything being in person to everything being virtual. You were a good sport, we did it virtually, and it ended up being a great conversation. Thank you for doing that.
But I’m so, so excited to have you in person here today.

Satya Nadella: Likewise, I’m glad it’s in person.

Soma: This year we are celebrating a couple of different milestones, okay? First and foremost, obviously, Microsoft is celebrating its 50th anniversary. In fact, two and a half weeks from now (April 5th) is the 50th anniversary. So that's a fantastic milestone. I spent 27 of those 50 years at Microsoft, some of them working closely with you, so for me personally, it's a lot of joy and satisfaction to see how far Microsoft has come under your leadership these last 11 years. Coincidentally, we're also celebrating Madrona's 30th anniversary this year. Back in 1995, the four co-founders of Madrona started Madrona (I see Paul there; he was one of the four co-founders). The thesis and the bet for Madrona was very simple: "Hey, we are going to take a bet on the technology ecosystem, on the startup ecosystem, in Seattle."

And 30 years later, we are so glad that they took the bet and we all joined the journey. But for all the progress we've seen in Seattle, I think we are still scratching the surface. There's so much more ahead of us in the next 20, 30, 50 years, and we are excited to see where the world is going and how we can play a part in helping shape it, so to speak.

Eleven years ago, when you became the CEO of Microsoft, I actually don't know how many people in this audience, or in the world, imagined there would be a day, not in the too-distant future, when we would have two companies in Seattle that collectively have a market cap of over $5 trillion: Microsoft being one and Amazon being the other. But just look at what you've been able to accomplish at Microsoft. When you took over as CEO, Microsoft's market cap was around $300 billion. Today it's around $3 trillion. It's phenomenal progress, and one that I definitely did not imagine, and I continue to think about how this happened and what caused it.

Satya Nadella on Microsoft’s AI Strategy, Leadership Culture, and the Future of Computing

Satya Nadella: But did you hold?

Soma: A lot.

Satya Nadella: That’s great.

Soma: In addition to everything else, I’m a shareholder of Microsoft. I’m excited about that. Okay. But Satya, congratulations on a great, great run at Microsoft so far, and I know there’s still a lot more to go there.
I do know that everybody here in the audience is really interested in hearing from you, so I should stop my ramble and dive into the conversation.

Satya Nadella: Sure.

Soma: I want to take you back 11 years, to when you decided, "Hey, I'm going to take on the mantle of CEO of Microsoft." What were some of the things in your mind, in terms of your expectations and what you thought might happen? And then talk about some of the key inflection points in the last decade of your tenure as the CEO of Microsoft.

Satya Nadella: Yes. First of all, thank you so much for the opportunity to be here. It's great to be celebrating, I guess, your 30th year. And as you said, of late I've been thinking a lot about our upcoming 50th, which is unbelievable to think about. I was thinking about it yesterday: I was seven years old, I guess, when Microsoft was formed. And a lot has happened.
In 2014, when I became CEO, Soma, quite honestly, my frame was very simple. I knew I was walking in as the first non-founder. Technically Steve was not a founder, but he had founder status at the company. The company I grew up in was built by Bill and Steve. And so I felt one of my jobs as a non-founder was to make first class again what founders do. What founders do is have a real sense of purpose and mission that gives them moral authority and telegraphs what the company was created for. And I felt like we needed to reground ourselves.

In fact, back then, one of the things I felt was: wow, in 1975, when Paul and Bill started Microsoft, they somehow thought of software as a … In fact, the software industry didn't even exist, but they conceived that we should create software so that others can create more software, and a software industry would be born. That was the original idea of Microsoft. And if that was relevant in '75, it was more relevant in 2014, and it's more relevant today in 2025.

And so I went back to that origin story, took inspiration from it, and re-articulated it as the mission we now talk about, which is empowering every person and every organization on the planet to achieve more. So that was one part. The other piece I felt, again as a non-founder, was to make culture a very first-class thing. Because in companies that have founders, culture is implicit; it's a little bit of the cult of the founder. A founder can get away with a lot, whereas a mere mortal CEO like me can't.

And so you needed to build more of that cultural base. I must say I was lucky enough to pick the meme of growth mindset from Carol Dweck's work, and it's done wonders. Quite frankly, it's done wonders because it wasn't seen as new dogma from a new CEO; it spoke more intrinsically to us as humans, both in life and at work. So, both of those things: making mission a first-class, explicit thing, and culture. Of course, they're necessary but not sufficient, because then you've got to get your strategy and execution right, and you've got to adapt, because everything in life is path-dependent.

But you don't even get shots on goal if you don't have your mission and culture set right. That's at least what I attribute a lot of, at least our … And we have stayed consistent on that frame, I would say, for the last 11 years.

Soma: You took over in February, I think, and then in May that year, 2014, your first external announcement was, "Hey, we are going to take Office cross-platform." And that, I thought, was visceral. People who knew Microsoft until then, or who had been part of the Microsoft ecosystem in one way, shape, or form, knew how big a statement that was. Was it a conscious decision on your part to say, "Hey, I need to signal not just to the external world, but to my own organization, what this means"?

Satya Nadella: Yeah. The Microsoft that you worked at and that I worked at — you've got to remember, we launched Office on the Mac before there was even Windows. So in some sense, obviously, we achieved a lot of success in the '90s, and therefore we went back to Windows as the only thing that was needed, the air we breathe, and what have you. But that was really not the company's core gene pool. Our core gene pool was: we create software, and we want to make sure our software is there everywhere.

And obviously it's not like I came in February and said, "Let's build the software." Obviously, Steve had approved that. But it worked well because it helped unlock, to your point, Microsoft's true value prop in the cloud era. One of the things, when I look back at it: if God had come to me and said, "There's mobile and cloud, pick one," I would've picked cloud. Not that mobile wasn't the biggest thing, but if you had told me to pick one, I'd pick the thing that may even outlast the client device.

And so that was the real strategy: we knew where our position on mobile was at that time. We were struggling at third. Having seen what happens to number-three players in an ecosystem, I felt that train had left the station. So it was very important for us to become a strong number two in cloud at that time, and in fact more comprehensive than even our friends across the lake, because of what we were doing with Office 365 and Azure.

And so we just doubled down. And when you double down on such a strategy, you've got to make sure your software is available and your endpoints are available everywhere. That's what that event was all about.

Soma: Great. You just referenced culture, cultural transformation, and growth mindset in that context. By the way, if any of you haven't read that book, I'm a huge believer in it. I think it's one of the best books written on culture; please get a copy and read it. It's a fantastic book and something I try hard to practice every day. And I can tell you, I'm still learning.

But I've also heard you talk a lot about changing the culture from a know-it-all culture to a learn-it-all culture. Like anything else, though, when you took on the mantle, it was already a 100,000-person organization, steeped in a particular set of ways of doing and thinking about things. How easy or hard was it to drive that cultural transformation?

Satya Nadella: Yeah. I think the beauty of the growth mindset framework, if you will, is not about claiming growth mindset, but about confronting one's own fixed mindset. At the end of the day, the day you say you have a growth mindset is the day you don't have a growth mindset. That's the nice recursion in it. And it's hard, and it has to start with setting the tone.

Let's face it: in large organizations like ours, or anywhere I guess, it's easy to talk about risk, because you want the other person to take the risk. Or it's easy to say, "Let's change"; it's the other person who should change. So in some sense, the hard part of organizational change is the inward change that has to come. This framework pushes you on that. It gives you at least a way to live it. And by living up to that high standard of confronting your own fixed mindset, you have hope of making that large-scale change happen. And like all things, Soma, it's always top-down and bottom-up. You can never do anything in only one direction. It has to happen from both sides, all the time.

The other thing I must say is you have to have patience. You can't come in in the morning and say, "Hey, we need to have a growth mindset by evening." You have to let leaders bring their own personal passion and personal stories to it, and give it some room to breathe. And somehow or other, not because we really thought it all through, it took on some organic life. People felt like this was a meme that made them better leaders and better human beings.

And so I think that's what really helped. And we were patient with it. For example, the classic thing at Microsoft would have been to metric it, score everyone green, red, or yellow, and start doing bad things to all the reds, and it would've been gamed in a second. We didn't do that, and I think that helped a lot. And like all things, it can also be taken to the extreme. There are times when I'm in meetings where people will look around the room and say, "Here are all the people who don't have a growth mindset," versus saying, "Look, the entire idea is to be able to talk about your own fixed mindset." And by the way, the best feature of this cultural thing is that it's never done. You can never claim the job is done. Right now, oh my God, talk about it: we're in the middle of saying, "Wow, we've got to relearn everything, because there's a complete new game in town again."

Soma: Before we talk about AI, I thought we'd talk a little bit about something personal to you, and hopefully on a lighter note. You were a cricket player in high school and college, and it's been fun working with you these last many years trying to bring cricket to the US through Major League Cricket. You've mentioned many times, Satya, how that sport has shaped your thinking and your leadership style, and in fact had a positive impact on your life. Share with us a little bit about that.

Satya Nadella: Yeah, Caitlin, who works with me, is here. Every time I post on cricket, I get all these likes from India, and she says, “God, why don’t they do the same when you post on Microsoft products?” It’s like a billion and a half people who are crazy about cricket can do that for you.

Look, I think all team sports shape us a lot. I think it’s one of those cultural things where, when I see leaders, you can easily trace back to the team sports they played and how it impacts how they think. There are three things that I’ve written a lot about and think about daily. I remember there was this one time. It’s interesting, there’s this guy that you know, Harsha Bhogle, who actually went to the same high school as me, and recently I was talking to him and he was telling me about our … we call them physical directors. Think of them as a coach, I think, is the best translation.

But anyway, we were playing some league match, and there was this guy from Australia who happened to be in Hyderabad of all places and playing for the opposition. And he was such an unbelievable player. I was fielding at forward short leg, watching in awe of him. Then I hear this guy yell, “Compete, don’t admire.” When you’re in the field, that zeal, the competitive spirit, and giving it your all, I think it’s just such an important thing that sport teaches you. That ability to get the energy to go play the game is one.

The other one I’ll say, talking about teams, I’ll never forget. There was this unbelievably important match of ours, and there was this unbelievable player who was pissed off at our captain for whatever reason, I think because he changed him early or what have you. And the guy drops a catch on purpose. Think about the entire eleven. All our faces dropped. We were all so pissed off, but even more let down, because your star player somehow feels like he wants to teach us a lesson and thereby causes us to lose.

And then the last thing I would say, which has probably had the most profound impact on me, is the leadership lesson. There was a captain of mine who went on to play a lot of first-class cricket later. One day I was bowling some trashy off spin. So this guy takes the ball, he changes me, he bowls, he gets a wicket, but he gives the ball back to me the next over, and in that match I got some four or five wickets. And then I asked him, “Why the heck did you do that?” And he comes to me and he says, “You know what? I needed you for a season, not for a match. I wanted to make sure that your confidence was not broken.” I said, “Man, for a high school captain to have that level of enlightened leadership skills …”

That’s the idea, which is leadership is about having a team and then getting the team to perform for a season. And I think team sport and what it means to all of us culturally and what it means in terms of teaching us the hard lessons in those fields is something that I think a lot about.

Soma: That’s great.

Satya Nadella: And of course, I think a lot about MLC too.

Soma: Season three starts June 12.

Satya Nadella: The sports market is not sufficiently penetrated in the United States. Talk about having to make your money somewhere else.

Soma: Let’s talk about AI now. You mentioned this: if you look at the history of Microsoft, we are in the beginning or the middle of the fourth platform wave. The first one was client-server, then the internet and mobile, then the cloud, and now it’s AI.
As much as we’ve talked about AI these past few years, Microsoft has had investments in AI for decades. Tell me a little bit about how you decided, hey, in addition to everything that we are doing ourselves, how do we think about partnering with OpenAI?

Satya Nadella: I love the way you say ourselves. That’s good.

Soma: How does Microsoft think about partnering with somebody like OpenAI? And then more importantly, how has that partnership evolved to today, and what do you think the future of that partnership is going to be?

Satya Nadella: Yeah, it’s a good point. I think 1995 is when we had our first ML research team, in MSR speech. That was the first place we went to. And obviously we had lots of MSR work. Here’s the interesting thing: even on the OpenAI side, we had two phases of it. In fact, the first time we partnered with them was in the context of when they were doing Dota 2 and RL. And then they went off on that, and I was interested, but RL by itself, at that time at least, we were not that deep in. When they said, “We want to go tackle natural language with transformers,” that’s when we said, “Let’s go bet.”

Quite frankly, that was the thing that OpenAI got right, which is that they were willing to go all in on scaling laws. In fact, the first paper I read was, interestingly, written by Ilya and Dario on the scaling laws, saying, “Hey, we can throw compute at transformers on natural language and see scaling laws work.” If you think about Microsoft’s history, for those of you who’ve been tracking us, Bill has been obsessed with natural language. And of course the way he has been obsessed about it is by schematizing the world. To him, it is all about people, places, things: beautifully organize it into a database and then do a SQL query, and that’s all the world needs.

That was the Microsoft that we dreamt of. And then of course, our thought on AI was, oh, add some semantics on top of it. That’s how we came in. In hindsight, when we were taking that bet, it was unclear to us, quite frankly. But to me, when I first saw code completions in a Codex model, which was a precursor to GPT-3.5, that is when I think we started building enough conviction that you can actually build useful products. And in software engineering, the team that you ran, even the engineers are skeptical people. No one thought that AI would go and make coding easy. But man, that was the moment when I felt like there’s something afoot. It definitely cemented my belief in scaling laws and the fact that you could build something useful. And so then the rest is history. We just doubled down on it. And even today, when I look at GitHub Copilot, it’s unbelievable to see how far it has come in, whatever, three years or so from code completions.

And by the way, all of these things are happening in parallel. Code completions are getting better; we, in fact, just launched a new model for code completion. And then chat, of course, is right there. You have multi-file edits. You have agents that work across the full repo, and then we have a SWE-agent, where you’re going from, I’ll say, pair programmer to peer programmer. So it’s all a full system being built off of effectively what is one regime.

Soma: I remember now, this was before GitHub Copilot had launched in beta or whatever it was to the world. You and I were having dinner, and you literally spent probably 20, 30 minutes talking about this new thing that the GitHub guys were doing called Copilot. I remember walking out of that meeting thinking I need to go talk to my buddies in DevDiv to understand what is happening here, because I hadn’t seen you that animated and excited about something. And this was well before it finally became what I’d call a product.
But in those early days, how did you decide to take a bet on that inside the company? Because I would assume that in any organization there’s going to be some level of resistance to something new that is fundamentally paradigm-changing.

Satya Nadella: Yeah. There were two phases to that as well. GitHub Copilot was the first product, and then ChatGPT happened. And ChatGPT, quite frankly, you should ask the OpenAI folks, but nobody thought that was a product. It was supposed to be, at best, maybe some data collection thing. And then the rest is history. But I must say that was the thing that really helped. The beauty of Microsoft’s position was, one, the partnership with OpenAI. Second, we were already building products like GitHub Copilot. And thankfully ChatGPT happened, because then there was no … We were ready, so once ChatGPT happened, and we had built a product and we had built the stack, it was easy to copy-paste, so to speak, across all of what we were doing.

But a lot of these waves are like that. If I look back at it, even in the four waves, you could say Windows, we had one, two, and three, but I joined really post three. And that was what we did. Once Windows 3 hit, it was like we knew what to do after. That’s where I think the path … And we were ahead. In some of the others, we were behind, but that’s fine. But this one we were ahead, and so we executed pretty well, quite frankly, across the length and breadth of the Microsoft opportunity. But as you rightfully point out, it’s still very early. I think backstage, you and I were talking about it.

I think I feel it’s a little more like the GUI wave pre-Office or the web wave pre-search. I think we’re still trying to figure out where the enterprise value truly accrues. Is it in the model? Is it in the infrastructure? Is it in one app category? And I think all that’s still to be litigated.

Soma: We have a point of view on that, but let me turn around and ask you. If you look at the AI stack today, you’ve got AI infrastructure, you’ve got models, and you’ve got applications, what we call intelligent applications. We have historically always believed the application layer is where you’re going to have the most, what I call, value creation over a period of time, whether it’s horizontal or vertical or some combination thereof. Do you see that trend also following through here in AI, or do you think differently?

Satya Nadella: It’s a great question. If I look back through all these tech shifts, I think all enterprise value accrues to two things. One is some organizing layer around user experience, and the other is some, I’ll call it, change in efficiency at the infrastructure layer. You can say GUI on the client and client-server. That was one. Or you could say search. Although we thought it was the browser for the longest time, it turns out search was the organizing layer of the web. And then SaaS applications and the infrastructure and databases and what have you. And the same thing with cloud.

In this case, I think it’s hyperscale. When I look at our business, if you ask the question five years from now, even in a fully agentic world, what is needed? Lots more compute. In fact, it’s interesting. Take Deep Research or what-have-you. Remember, Deep Research needs a VM or a container. In fact, it’s the best workload to drive more compute.

And in fact, look at the ratio. Even take ChatGPT. It’s a big Cosmos DB customer; all its state is in databases. In fact, the way they procure compute is they have a ratio between the AI accelerator, storage, and compute. And so hyperscale, being one of the hyperscalers, is a good place to be, and to be able to build the infrastructure, you’ve got to be best-in-class in terms of scale and cost and what-have-you.

Then I think after that, it gets a little muddy, because what happens to models, what happens to app categories? That’s where I think time will tell, but I go back and say each category will be different. In consumer, there’ll be some winner-take-all network effect. In the enterprise space, it’ll be different. That, I think, is where we are still in the early stages of figuring things out, but the stable thing that at least I can say with confidence is the world will need more compute.

Soma: I have a lot more things to talk to Satya about, but I know we are running short of time here, so I’m going to ask one more question. You have a very unique vantage point in terms of who you talk to day in and day out, whether it’s Fortune 100 CEOs or heads of government or what-have-you. You recently mentioned that one way to think about AI success is its ability to boost the GDP of a country or the world. That’s a fascinating way to think about what AI’s impact will be over a period of time. Can you elaborate a little bit on that?

Satya Nadella: Yeah. I think I said that in response to all these benchmarks on AGI and so on. First of all, all the evals are saturated, so it’s becoming slightly meaningless. But if you set that aside, just take the simple math. Let’s say you spend $100 billion in CapEx, and then you’ve got to make a return on it; let’s just say roughly you are to make $100 billion a year on it. In order for you to make $100 billion a year on it, what’s the value you have to create?

And it’s multiples of that. And so that ain’t going to happen unless and until there is widespread economic growth in the world. So that’s why I look at it and say my formula for when we can say AGI has arrived is when, say, the developed world is growing at 10%, which may have been the peak of the Industrial Revolution or what have you. That’s a good benchmark for me, if you ask me what the benchmark is. This is intelligence abundance, and it’s going to drive productivity. That’s what I think we should peg ourselves to. In fact, the social permission for companies to invest what they’re investing, both from the markets as well as the broader society, will come, I believe, from our ability to have broad sectoral productivity gains that are evidenced in economic growth.

And by the way, the one other thing that I’m excited about this time around: it won’t be like the Industrial Revolution in the sense that it’s not going to be about the developed world, or the Global North versus the Global South. It’s going to be about the entire globe, because guess what? Diffusion is so good that everybody is going to get it at the same time. So that’ll be the other exciting part of it.
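
To make the back-of-the-envelope CapEx math above concrete, here is a minimal sketch in Python. The $100 billion CapEx and $100 billion-a-year return figures come from the conversation itself; the value multiples are illustrative assumptions only, not anything Satya stated.

```python
# Back-of-the-envelope version of the argument above. The CapEx and
# revenue figures come from the conversation; the value multiples are
# illustrative assumptions.
capex = 100e9                  # $100B of capital expenditure
target_annual_return = 100e9   # roughly $100B/year of revenue on it

# Buyers only keep paying if the technology creates some multiple of
# what it costs them, so the value created must be "multiples of that":
for value_multiple in (3, 5):
    value_needed = target_annual_return * value_multiple
    print(f"At {value_multiple}x value capture: "
          f"${value_needed / 1e9:.0f}B/year of value creation needed")
# -> $300B-$500B per year of new value, which is why he pegs the
#    arrival of AGI to GDP-scale economic growth rather than to evals.
```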

Soma: Great. Thank you, Satya. Thank you for being here, and congratulations again.

RAG Inventor Talks Agents, Grounded AI, and Enterprise Impact

Listen on Spotify, Apple, and Amazon | Watch on YouTube

From the invention of RAG to how it evolved beyond its early use cases, and why the future lies in RAG 2.0, RAG agents, and system-level AI design, this week’s episode of Founded & Funded is a must-listen for anyone building in AI. Madrona Partner Jon Turow sits down with Douwe Kiela, the co-creator of Retrieval Augmented Generation and co-founder of Contextual AI, to unpack:

  • Why RAG was never meant to be a silver bullet — and why it still gets misunderstood
  • The false dichotomies of RAG vs. fine-tuning and long-context
  • How enterprises should evaluate and scale GenAI in production
  • What makes a problem a “RAG problem” (and what doesn’t)
  • How to build enterprise-ready AI infrastructure that actually works
  • Why hallucinations aren’t always bad (and how to evaluate them)
  • And why he believes now is the moment for RAG agents

Whether you’re a builder, an investor, or an AI practitioner, this is a conversation that will challenge how you think about the future of enterprise AI.


This transcript was automatically generated and edited for clarity.

Jon: So Douwe, take us back to the beginning of RAG. What was the problem that you were trying to solve when you came up with that?

Douwe: The history of the RAG project: we were at Facebook AI Research, FAIR, and I had been doing a lot of work on grounding already for my PhD thesis. Grounding, at the time, really meant understanding language with respect to something else. If you want to know the meaning of the word cat (the word embedding of the word cat; this was before we had sentence embeddings), then ideally, you would also know what cats look like, because then you understand the meaning of cat better. So that type of perceptual grounding was something that a lot of people were looking at at the time. Then I was talking with one of my PhD students, Ethan Perez, about, “Can we ground it in something else? Maybe we can ground in other text instead of in images.” The obvious source at the time to ground in was Wikipedia.

We would say, “This is true, or sort of true,” and then you can understand language with respect to that ground truth. That was the origin of RAG. Ethan and I were looking at that, and then we found that some folks in London had been working on open-domain question answering, mostly Sebastian Riedel and Patrick Lewis. They had amazing first models in that space, and it was a very interesting problem: how can I make a generative model work on any type of data and then answer questions on top of it? We joined forces there. We happened to get very lucky at the time, because the people at Facebook had FAISS (Facebook AI Similarity Search, I think, is what it stands for), basically the first vector database, and it was just there. And so we were like, we have to take the output from the vector database and give it to a generative model. This was before we called them language models. Then the language model can generate answers grounded on the things you retrieve. And that became RAG.

We always joke with the folks who were on the original paper that we should have come up with a much better name than that, but somehow, it stuck. This was by no means the only project doing this; there were people at Google working on very similar things, and REALM is an amazing paper from around the same time. Why RAG stuck, I think, was because the whole field was moving toward gen AI, and the G in RAG stands for generative. We were really the first ones to show that you could make this combination of a vector database and a generative model actually work.
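
For readers who want the mechanics, here is a minimal sketch of the retrieve-then-generate loop Douwe describes. The toy corpus, the hash-based embedding, and the placeholder generation step are illustrative stand-ins, not the original system: FAIR used FAISS for similarity search and a trained generator, where this sketch uses brute-force cosine search and a prompt string.

```python
import numpy as np

# Toy corpus standing in for Wikipedia in the original RAG setup.
corpus = [
    "The cat is a small domesticated carnivorous mammal.",
    "RAG combines a retriever with a generative model.",
    "FAISS is a library for efficient similarity search.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hash words into a fixed-size vector.
    A real system would use a trained dense retriever."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

doc_vectors = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Brute-force cosine search; FAISS replaces this at scale."""
    scores = doc_vectors @ embed(query)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def rag_answer(query: str) -> str:
    """Retrieve, then hand the retrieved passages to a generator."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # placeholder for a generative model call

print(rag_answer("What is RAG?"))
```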

Jon: There’s an insight in here that RAG, from its very inception, was multimodal. You were starting with image grounding, and things like that, and it’s been heavily language-centric in the way people have applied it. But from that very beginning place, were you imagining that you were going to come back and apply it with images?

Douwe: We had some papers from around that time. There’s a paper we did with more applied folks in Facebook where we were looking at, I think it was called Extra, and it was basically RAG but then on top of images. That feels like a long time ago now, but that was always very much the idea, is you can have arbitrary data that is not captured by the parameters of the generative model, and you can do retrieval over that arbitrary data to augment the generative model so that it can do its job. It’s all about the context that you give it.

Jon: Well, this takes me back to another common critique of these early generative models: for the amazing Q&A they were capable of, the knowledge cutoff was really striking. You had models in 2020 and 2021 that were not aware of COVID-19, which obviously was so important to society. Was that part of the motivation? Was that part of the solve, that you can make these things fresher?

Douwe: Yeah, it was part of the original motivation. That is what grounding is; it’s the vision behind the original RAG project. We did a lot of work after that on that question as well: can I have a very lightweight language model that basically has no knowledge, that is very good at reasoning and speaking English or any language, but knows nothing? It has to rely completely on this other model, the retriever, which does a lot of the heavy lifting to ensure that the language model has the right context, so that they really have separate responsibilities. Getting that to work turned out to be quite difficult.

Jon: Now, we have RAG, and we still have this constellation of other techniques: we have training, we have tuning, and we have in-context learning. That was, I’m sure, very hard to navigate for research labs, let alone enterprises. In the conception of RAG, in the early implementations of it, what was in your head about how RAG was going to fit into that constellation? Was it meant to be standalone?

Douwe: It’s interesting because the concept of in-context learning didn’t really exist at the time; that really became a thing with GPT-3, and that’s an amazing paper and proof point that it actually works. I think that unlocked a lot of possibilities. In the original RAG paper, we have a baseline, what we call the frozen baseline, where we don’t do any training and we just give the retrieved passages as context (that’s in table six), and we showed that it doesn’t really work, or at least that you can do a lot better if you optimize the parameters. In-context learning is great, but you can probably always beat it through machine learning if you are able to do that. Having access to the parameters is obviously not the case with a lot of these black-box frontier language models, but if you do have access and you can optimize them for the data you’re working on or the problem you’re solving, then at least theoretically, you should always be able to do better.

I see a lot of false dichotomies around RAG. The one I often hear is that it’s either RAG or fine-tuning. That’s wrong; you can fine-tune a RAG system, and then it would be even better. The other dichotomy I often hear is RAG versus long-context. Those address the same problem: you have more information than you can put in the context. One solution is to try to grow the context, which doesn’t really work yet, even though people like to pretend that it does. The other is to use information retrieval, which is pretty well established as a computer science research field, and leverage all of that to make sure the language model can do its job. I think things get oversimplified, when really you should be doing all of those things: you should be doing RAG, you should have as long a context window as you can get, and you should fine-tune that thing. That’s how you get the best performance.

Jon: What has happened since then, and we’ll talk about how this is all getting combined in more sophisticated ways today, is that in the past 18, 24, 36 months, RAG has caught fire and even become misunderstood as a single silver bullet. Why do you think it’s been so seductive?

Douwe: It’s seductive because it’s easy. Honestly, I think long-context is even more seductive if you’re lazy, because then you don’t even have to worry about the retrieval anymore; you put all the data there, and you pay a heavy price for having all of it in the context. Every single time you’re answering a question about Harry Potter, you have to read the whole book in order to answer the question, which is not great. So RAG is seductive, I think, because you need a way to get these language models to work on top of your data. In the old paradigm of machine learning, we would probably do that in a much more sophisticated way, but because these frontier models are behind black-box APIs and we have no access to what they’re actually doing, the only way to really make them do the job on your data is to use retrieval to augment them. It’s a function of what the ecosystem has looked like over the past two years since ChatGPT.

Jon: We’ll get to the part where we’re talking about how you need to move beyond a cool demo, but I think the power of a cool demo should not be underestimated, and RAG enables that. What are some of the aha moments that you see with enterprise executives?

Douwe: There are lots of aha moments; I think that’s part of the joy of my job, where you get to show what this can do, and it’s amazing what these models can do. The basic aha moment for us is that accuracy is almost table stakes at this point. If you have some data, like one document, you can probably answer lots of questions about that document pretty well. It becomes much harder when you have a million documents or tens of millions of documents, and they’re all very complicated or have very specific things in them. We’ve worked with Qualcomm, and there are circuit design diagrams inside those documents; it’s much harder to make sense of that type of information. The initial wow factor, at least for people using our platform, is that you can stand this up in a minute. I can build a state-of-the-art RAG agent in three clicks, basically.

That time to value used to be very difficult to achieve, because your developers had to think about the optimal chunking strategy for the documents, and things that you really don’t want your developers thinking about, but they had to because the technology was so immature. The next generation of these systems and platforms for building RAG agents is going to enable developers to think much more about business value and differentiation, essentially, “How can I be better than my competitors because I’ve solved this problem so much better?” Your chunking strategy should not be important for solving that problem.

Jon: Also, if I connect what we were just talking about to what you said now, the seduction of long-context and RAG is that they’re straightforward and easy, and they plug into my existing architecture. As a CTO, if I have finite resources to implement new pieces of technology, let alone dig into concepts like chunking strategies, and how the vector similarity for non-dairy will look similar to the vector similarity for milk, things like this, is it fair to say that CTOs want something coherent, something that works out of the box?

Douwe: You would think so, and that’s probably true for CTOs, and CIOs, and CAIOs, and CDOs, and the folks who are thinking about it from that level. But what we often find is that we talk to these people, and they talk to their architects and their developers, and those developers love thinking about chunking strategies, because that’s what it means, in the modern era, to be an AI engineer: to be very good at prompt engineering and evaluation and optimizing all the different parts of the RAG stack. It’s very important to have the flexibility to play with these different strategies, but you need to have very, very good defaults so that people don’t have to do that unless they really want to squeeze out the final percent, and then they can.

That’s what we are trying to offer: you don’t have to worry about all this basic stuff; you should be thinking about how to really use the AI to deliver value. It’s really a journey. The maturity curve is very wide and flat. Some companies are still figuring out, “What use case should I look at?” And others have a full-blown RAG platform that they built themselves based on completely wrong assumptions about where the field is going, and now they’re stuck in that paradigm. It’s all over the place, which means it’s still very early in the market.

Jon: Take me through some of the milestones on that maturity curve, from the cool demo all the way through to the ninja level results.

Douwe: The timeline is: 2023 was the year of the demo. ChatGPT had just happened, everybody was playing with it, and there was a lot of experimental budget. Last year was about trying to productionize it, and you could probably get promoted in a large enterprise if you were the first one to ship genAI into production. There’s been a lot of kneecapping of those solutions in order to be the first one to get into production.

Jon: First-past-the-post.

Douwe: First-past-the-post, but in a limited way, because it is very hard to get the real thing past the post. This year, people are really under a lot of pressure to deliver return on investment for all of those AI investments and all of the experimentation that has been happening. It turns out that getting that ROI is a very different question. That’s where you need a lot of deep expertise around the problem, but you also need better components than what exists out there in an easy open-source framework. You can cobble together a Frankenstein RAG solution that’s great for the demo, but that doesn’t scale.

Jon: When your customers think about the ROI, how do they measure and perceive it?

Douwe: It really depends on the customer. Some are very sophisticated, trying to think through the metrics: “How do I measure it? How do I prioritize it?” I think a lot of consulting firms are trying to be helpful there as well, thinking through, “Okay, this use case is interesting, but it touches 10 people, who are very highly specialized. We have this other use case that touches 10,000 people, who are maybe slightly less specialized, but there’s much more impact there.” It’s a trade-off. My general stance on use case adoption is that I see a lot of people aiming too low, where it’s like, “Oh, we have AI running in production.” “Oh, what do you have?” “Well, we have something that can tell us who our 401(k) provider is and how many vacation days I get.”

And that’s nice, but is that where you get the ROI of AI from? Obviously not. You need to move up in terms of complexity. If you think of the org chart of the company, you want to go for the specialized roles with really hard problems, and if you can make those people 10, 20% more effective at that problem, you can save the company tens or hundreds of millions of dollars by making them better at their job.

Jon: There’s an equation you’re getting at, which is the complexity and sophistication of the work being done times the number of employees it impacts.

Douwe: There are roughly two categories for gen AI deployment. One is cost savings: I have lots of people doing one thing, and if I make all of them slightly more effective, then I can save myself a lot of money. The other is more around business transformation and generating new revenue. That second one is obviously much harder to measure, and you need to think through the metrics: “What am I optimizing for here?” As a result, I think you see a lot more production deployments in the former category, where it’s about cost saving.

Jon: What are some big misunderstandings that you see around what the technology is or is not capable of?

Douwe: I see some confusion around the gap between demo and production. A lot of people think, “Oh, yeah, it’s great, I can easily do this myself.” Then it turns out that everything breaks down after a hundred documents, and they have a million. That is the most common one we see. There are other misconceptions, maybe, around what RAG is good for and what it’s not. What is a RAG problem, and what is not a RAG problem? People don’t have the same mental model that AI researchers like myself have. If I give them access to a RAG agent, often the first question they ask is, “What’s in the data?” That is not a RAG problem, or it’s a RAG problem on the metadata, not on the data itself. A RAG question would be, what was, I don’t know, Meta’s R&D expense in Q4 of 2024, and how did it compare to the previous year? Something like that.

It’s a specific question where you can extract the information and then reason over it and synthesize the different pieces of information. A lot of questions that people like to ask are not RAG problems. Summarize the document is another one. Summarization is not a RAG problem; ideally, you want to put the whole document in context and then summarize it. There are different strategies that work well for different questions, and why ChatGPT is such a great product is that they’ve abstracted away some of those decisions, but that’s still very much happening under the surface. I think people need to understand better what type of use case they have. If I’m a Qualcomm customer engineer and I need very specific answers to very specific questions, that’s very clearly a RAG problem. If I need to summarize a document, put it in the context of a long-context model.
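
Here is a toy sketch of the routing decision Douwe describes: deciding whether a request is a RAG problem, a long-context problem, or a metadata query. The keyword heuristic is purely illustrative; a real product (he notes ChatGPT does something like this under the surface) would route with a model, not string matching.

```python
def route(query: str, doc_count: int) -> str:
    """Toy router: pick a strategy for the request.
    The keyword checks are illustrative stand-ins for a model-based router."""
    q = query.lower()
    if "summarize" in q and doc_count == 1:
        return "long-context"   # put the whole document in context
    if q.startswith("what's in") or "metadata" in q:
        return "metadata-query" # a RAG problem on metadata, not content
    return "rag"                # specific, extractive question -> retrieve

print(route("Summarize the document", doc_count=1))              # long-context
print(route("What was Meta's R&D expense in Q4 2024?", 10_000))  # rag
```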

Jon: Now, we have Contextual, which is an amalgamation of multiple techniques: you have what you call RAG 2.0, you have fine-tuning, and there are a lot of things that happen under the covers that customers ideally don’t have to worry about until they choose to. I expect that changes radically the conversation you have with an enterprise executive. How do you describe the kinds of problems that they should go find, apply, and prioritize?

Douwe: We often help people with use case discovery: thinking through, okay, what are the RAG problems, and what are maybe not RAG problems? Then for the RAG problems, how do you prioritize them? How do you define success? How do you come up with a proper test set so that you can evaluate whether it actually works? What is the process, after that, for doing what we call UAT, user acceptance testing, putting it in front of real people? That’s really the thing that matters, right? Sometimes we see production deployments, and they’re in production, and then I ask, “How many people use this?” And the answer is zero. During the initial UAT, everything was great and everybody was saying, “Oh, yeah, this is so great.” Then when your boss asks you the question and your job is on the line, you do it yourself; you don’t ask AI in that particular use case. It’s a transformation that a lot of these companies still have to go through.

Jon: Do companies want support through that journey today, either directly from Contextual or from a solution partner, to get such things implemented?

Douwe: It’s very tempting to pretend that AI products are mature enough to be fully self-serve and standalone. You get something decent if you do that, but in order to get it to be great, you need to put in the work. We do that for our customers, or we can also work through systems integrators who do that for us.

Jon: I want to talk about two sides of the organization that you’ve had to build in order to bring all this to customers. One is scaling up the research and engineering function to keep pushing the envelope. There are a couple of very special things that Contextual has: something you call RAG 2.0, something you call active versus passive retrieval. Can you talk about some of those innovations inside Contextual and why they’re important?

Douwe: We really want to be a frontier company, but we don’t want to train foundation models. Obviously, that’s a very, very capital-intensive business, and I think language models are going to get commoditized. The really interesting problems are around how you build systems around these models that solve the real problem. Most of the business problems we encounter need to be solved by a system. Then there are a ton of super exciting research problems around how you get that system to work well together. That’s what RAG 2.0 is in our case: how do you jointly optimize these components so that they work well together? There are also other things, like making sure that your generations are very grounded. It’s not a general language model; it’s a language model that has been trained specifically for RAG and RAG only. It’s not doing creative writing; it can only talk about what’s in the context.

Similarly, when you build these production systems, you need a state-of-the-art re-ranker. Ideally, that re-ranker can also follow instructions; it’s a smarter model. There’s a lot of innovative stuff we’re doing around building the RAG pipeline better, and around how you incorporate feedback into that pipeline as well. We’ve done work on KTO, and APO, and things like that: different ways to incorporate human preferences into entire systems and not just models. That takes a very special team, which we have, and I’m very proud of it.

Jon: Can you talk about active versus passive retrieval?

Douwe: Passive retrieval is basically old-school RAG: I get a query, I always retrieve, and then I take the results of that retrieval, give them to the language model, and it generates. That doesn’t really work. Very often, you need the language model to think, first of all, about where it is going to retrieve from and how it is going to retrieve. Are there maybe better ways to search for the thing I’m looking for than copy-pasting the query? Modern production RAG pipelines are already way more sophisticated than a vector database plus a language model. One of the interesting things you can do in the new paradigm of agentic systems and test-time reasoning is decide for yourself whether you want to retrieve something. That’s active retrieval. If you give me a query like, “Hi, how are you?” I don’t have to retrieve in order to answer that. I can just say, “I’m doing well, how can I help you?”

Then you ask me a question, and now I decide that I need to go and retrieve. Maybe I make a mistake with my initial retrieval, so then I need to think, “Oh, actually, maybe I should have gone here instead.” That’s active retrieval, and that’s all getting unlocked now. This is what we call RAG agents, and this really is the future, I think, because agents are great, but we need a way to get them to work on your data, and that’s where RAG comes in.
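
A minimal sketch of the passive-versus-active distinction follows. All the helper functions (needs_retrieval, plan_retrieval, is_grounded, and so on) are hypothetical stubs so the control flow runs end to end; none of this is Contextual’s actual implementation, and a real system would back each helper with a model call.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    rewritten_query: str
    source: str

# Hypothetical helpers, stubbed so the example runs end to end.
def needs_retrieval(q: str) -> bool:
    return not q.lower().startswith(("hi", "hello", "hey"))

def plan_retrieval(q: str) -> Plan:
    return Plan(rewritten_query=q, source="docs")

def retrieve(q: str, source: str) -> list[str]:
    return [f"[{source}] passage matching: {q}"]

def generate(q: str, context) -> str:
    return f"answer to {q!r} grounded on {context}"

def is_grounded(answer: str, context: list[str]) -> bool:
    return True  # placeholder self-check

def refine(q: str, answer: str) -> str:
    return q  # placeholder query rewrite

def passive_rag(query: str) -> str:
    """Old-school RAG: always retrieve, then generate."""
    return generate(query, retrieve(query, source="docs"))

def active_rag(query: str, max_attempts: int = 3) -> str:
    """The agent first decides whether to retrieve, then checks its work."""
    if not needs_retrieval(query):           # "Hi, how are you?" -> just answer
        return generate(query, context=None)
    answer = ""
    for _ in range(max_attempts):
        plan = plan_retrieval(query)         # where and how to search
        context = retrieve(plan.rewritten_query, plan.source)
        answer = generate(query, context)
        if is_grounded(answer, context):     # retry if retrieval missed
            return answer
        query = refine(query, answer)        # "maybe I should have gone here instead"
    return answer

print(active_rag("Hi, how are you?"))
print(active_rag("What was the Q4 R&D expense?"))
```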

Jon: This implies two relationships of Contextual and RAG to the agent. There is the supplying of information to the agent so that it can be performant, but if I probe into what you said, active retrieval implies a certain kind of reasoning, maybe even longer reasoning, about, “Okay, what is the best source of the information that I’ve been asked to provide?”

Douwe: Exactly. I enjoy saying everything is Contextual, and that’s very true for an enterprise. The context that the data exists in really matters for the reasoning the agent does in terms of finding the right information, and that all comes together in these RAG agents.

Jon: What is a really thorny problem that you’d like your team and the industry to try and attack in the coming years?

Douwe: The most interesting problems that I see everywhere in enterprises are at the intersection of structured and unstructured data. We have great companies working on unstructured data, and there are great companies working on structured data, but once you have the capability, which we’re starting to have now, to reason over both of these very different data modalities using the same model, that unlocks so many cool use cases. That’s going to happen this year or next year: thinking through the different data modalities and how you can reason on top of all of them with these agents.

Jon: Will that happen under the covers with one common piece of infrastructure or will it be a coherent single pane of glass across many different Lego bricks?

Douwe: I’d like to think that it would be one solution, and that is our platform, which can do all of that.

Jon: Let’s imagine that, but behind the covers, will you be accomplishing that with many different components, each handling structured versus unstructured?

Douwe: They are different components, despite what some people maybe like to pretend. I can always train up a better text-to-SQL model by specializing it for text-to-SQL than by taking a generic off-the-shelf language model and telling it, “Generate some SQL query.” Specialization is always going to be better than generalization for specific problems, if you know what problem you’re solving. The real question is much more around whether it’s worth actually investing the money to do that. It costs money to specialize, and it sometimes hampers economies of scale that you might want to have.

Jon: If I look at the other side of your organization: you’ve had to build a very sophisticated research function, but Contextual is not a research lab, it’s a company. What are the other kinds of disciplines and capabilities you’ve had to build up at Contextual that complement all the research that’s happening here?

Douwe: First of all, I think our researchers are really special in that we’re not focused on publishing papers or being too far out on the frontier. As a company, I don’t think you can afford that until you’re much bigger, unless you’re like Zuck and you can afford to have FAIR. The stuff I was working on at FAIR at the time, I was doing Wittgensteinian language games and all kinds of crazy stuff that I would never let people do here, honestly. But there’s a place for that, and it’s not a startup. The way we do research is we look very closely at the customer problems we think we can solve better than anybody else, and then, thinking from the system’s perspective about all of those problems, we ask how we can make sure we have the best system and then make that system jointly optimized and really specialized, or specializable, for different use cases. That’s what we can do.

That means there’s a very fluid boundary between pure research and applied research; basically, all of our research is applied. In AI right now, I think there’s a very fine line between product and research, where the research basically is the product, and that’s not just true for us; I think it’s true for OpenAI, Anthropic, everybody. The field is moving so quickly that you have to productize research almost immediately. As soon as it’s ready, you don’t even have time to write a paper about it anymore; you have to ship it into the product very quickly, because it is such a fast-moving space.

Jon: How do you allocate your research attention? Is there some element of play, even 5%, 10%?

Douwe: The team would probably say not enough.

Jon: But not zero?

Douwe: As a researcher, you always want to play more but you have limited time. So yeah, it’s a trade-off, I don’t think we’re officially committing. We don’t have a 20% rule or something like Google would have, it’s more like we’re trying to solve cool problems as quickly as we can, and hopefully, have some impact on the world. Not work in isolation, but try to focus on things that matter.

Jon: I think I’m hearing you say that it’s not zero, even in an environment with finite resources and moving fast?

Douwe: Every environment has finite resources. It’s more that if you want to do special things, you need to try new stuff. That’s, I think, very different for AI companies, or AI-native companies like us. If you compare this generation of companies with SaaS companies, there it was like, okay, the LAMP stack and everything was already there; you had to basically go and implement it. That’s not the case here. We’re very much figuring out what we’re doing, flying the airplane as we’re building it, sort of thing, which is exciting, I think.

Jon: What is it like to now take this research that you’re doing and go out into the world and have that make contact with enterprises? What has that been like for you personally, and what has that been like for the company to transform from research-led to a product company?

Douwe: That’s my personal journey as well. I started off doing a PhD; I was very much a pure research person and slowly transitioned to where I am now, where the key observation is that the research is the product. This is a special point in time; it’s not always going to be like that. That’s been a lot of fun, honestly. I was on a podcast a while back, and they asked me, “What other job would you think is interesting?” And I said, “Maybe being the head of AI at JP Morgan.” And they were like, “Really?”

And I was like, “Well, I think actually, right now, at this particular point in time, that is a very interesting job.” Because you have to think about how you are going to change this giant company to use this latest piece of technology that is frankly going to change everything; it’s going to change our entire society. For me, it gives me a lot of joy talking to people like that and thinking about what the future of the world is going to look like.

Jon: I think there are going to be people problems, and organizational problems, and regulatory and domain constraints that fall outside the paper.

Douwe: Honestly, I would argue that those are the main problems still to overcome. I don’t care about AGI and all of those discussions; the core technology is already here for huge economic disruption. All the building blocks are here. The questions are more around how we get lawyers to understand that. How do we get the MRM (model risk management) people to figure out what is an acceptable risk? One thing that we are very big on is not thinking about the accuracy, but thinking about the inaccuracy. If you have 98% accuracy, what do you do with the remaining 2% to make sure you can mitigate that risk? A lot of this is happening right now. There’s a lot of change management that we’re going to need to do in these organizations. All of that is outside of the research questions. We have all the pieces to completely disrupt the global economy right now; it’s a question of executing on it, which is scary and exciting at the same time.

Jon: Douwe, you and I have had a conversation many times about different archetypes of founders and their capabilities. There’s one lens that stuck with me that has three click stops on it. A, there is the domain expert, who has expertise in, say, revenue cycle management, but may not be that technical at all. B, there is somebody who is technical and able to write code but is not a PhD researcher, and Mark Zuckerberg is a really famous example of that. Then there’s the research founder, who has deep technical capabilities and advanced vision into the frontier. What do you see as the role for each of those types of founders in the next wave of companies that needs to get built?

Douwe: That’s a very interesting question. I would ask: how many PhDs does Zuck have working for him? That’s a lot, right?

Jon: That’s a lot.

Douwe: I don’t think it matters how deep your expertise in a specific domain is; as long as you are a good leader and a good visionary, you can recruit the PhDs to go and work for you. At the same time, obviously, it gives you an advantage if you are very deep in one field and that field happens to take off, which is what happened to me. I got very lucky, with a lot of timing there as well. Overall, one underlying question you’re asking is around AI wrapper companies, for example: to what extent should companies go horizontal or vertical using this technology?

There’s been a lot of disdain for these wrapper companies: “Oh, that’s just a wrapper for OpenAI.” Well, it turns out you can make an amazing business just from that, right? I think Cursor is Anthropic’s biggest customer right now. It’s fine to be a wrapper company as long as you have an amazing business. People should have a lot more respect for companies building on top of fundamental new technology, discovering whole new business problems that we didn’t really know existed, and then solving them much better than anything else.

Jon: Well, I’m really thinking also about the comment you made, that we have technology capable of a lot of economic impact even today, without the new breakthroughs that, yes, we’ll also get. Does that change the types of companies that should be founded in the coming year?

Douwe: I think so. I am also learning a lot of this myself, about how to be a good founder, basically. It’s always good to plan for what’s going to come and not for what is here right now; that’s how you get to ride the wave in the right way. What’s going to come is that a lot of this stuff is going to become much more mature. One of the big problems we had even two years ago was that AI infrastructure was very immature. Everything would break down all the time. There were bugs in the attention mechanism implementations of the frameworks we were using, really basic stuff. All of that has been solved now. With that maturity also comes the ability to scale much better and to think much more rigorously around cost-quality trade-offs and things like that. There’s a lot of business value right there.

Jon: What do new founders ask you? What kind of advice do they ask for?

Douwe: They ask me a lot about this wrapper company thing, and moats, and differentiation. There’s some fear that incumbents are going to eat everything, because they obviously have amazing distribution. But there are massive opportunities for companies to be AI native and to think from day one as an AI company. If you do that right, you have a massive opportunity to be the next Google, or Facebook, or whatever, if you play your cards right.

Jon: What is some advice that you’ve gotten? I’ll ask you to break it into two: what is advice that you’ve gotten that you disagree with, and what do you think about that? And what is advice that you’ve gotten that you take a lot from?

Douwe: Maybe we can start with the advice I really like, which is one observation around why Facebook is so successful: be fluid like water. Whatever the market is telling you or your users are telling you, fit into that. Don’t be too rigid about what is right and wrong; be humble, look at what the data tells you, and then try to optimize for that. That is advice that, when I got it, I didn’t fully appreciate, and I’m starting to appreciate it much more right now. Honestly, it took me too long to understand. In terms of advice that I’ve gotten that I disagree with: it’s very easy for people to say, “You should do one thing and you should do it well.” Sure, maybe, but I’d like to be more ambitious than that. We could have been one small part of a RAG stack, and we probably would’ve been the best in the world at that particular thing, but then we’re slotting into an ecosystem where we’re a small piece, and I want the whole pie, ideally.

That’s why we’ve invested so much time in building this platform, making sure that all the individual components are state-of-the-art and that they’ve been made to work together so that you can solve this much bigger problem. But that is also a lot harder to do. Not everyone would advise me to go and solve that hard problem, but I think over time, as a company, that is where your moat comes from: doing something that everybody else thinks is kind of crazy. So that would be my advice to founders: go and do something that everybody else thinks is crazy.

Jon: You’re probably going to tell me that that reflects in the team that comes to join you?

Douwe: Yeah, the company is the team, especially the early team. We’ve been very fortunate with the people who joined us early on, and that is what the company is. It’s the people.

Jon: If I piggyback a little bit and we get back into the technology for a minute, there’s a common question, maybe even misunderstanding, that I hear about RAG: “Oh, this is the thing that’s going to solve hallucinations.” You and I have spoken about this many times. Where is your head right now on what hallucinations are and what they are not? Does RAG solve them? What’s the outlook there?

Douwe: I think hallucination is not a very technical term. We used to have a pretty good word for it: accuracy. If you were inaccurate, if you were wrong, then I guess the way to explain that, or to anthropomorphize it, would be to say, “Oh, the model hallucinated.” It’s a very ill-defined term, honestly. If I had to turn it into a technical definition, I would say: the generation of the language model is not grounded in the context it is given, where it is told that that context is true. Basically, hallucination is about groundedness. If you have a model that adheres to its context, it will hallucinate less. Hallucination itself is arguably a feature for a general-purpose language model, not a bug. If you have a creative writing or marketing use case, like content generation, hallucination is great for that, as long as you have a way to fix it; you probably have a human somewhere double-checking and rewriting some stuff.

So hallucination itself is not even necessarily a bad thing. It is a bad thing if you have a RAG problem and you cannot afford to make a mistake. That’s why we have a grounded language model that has been trained specifically not to hallucinate, or to hallucinate less. One other misconception I sometimes see is that people think these probabilistic systems can have 100% accuracy, and that is a pipe dream. It’s the same with people. If you look at a big bank, there are people in these banks, and people make mistakes too.

Jon: SEC filings have mistakes.

Douwe: Exactly. The whole reason we have the SEC and a regulated market is so that we have mechanisms built into the market so that if a person makes a mistake, at least we made reasonable efforts to mitigate the risk around it. It’s the same with AI deployments. That’s why I talk about how to mitigate the risk of inaccuracies. We’re not going to get to 100%, so you need to think about the 2, 3, 5, or 10 percent, depending on how hard the use case is, where you might still not be perfect. How do you deal with that?

Jon: What are some of the things that you might’ve believed a year ago about AI adoption or AI capabilities that you think very differently about today?

Douwe: Many things. The main thing I thought that turned out not to be true was that I thought this would be easy.

Jon: What is this?

Douwe: Building the company and solving real problems with AI. We were very naive, especially in the beginning of the company. We were like, “Oh, yeah, we just get a research cluster, get a bunch of GPUs in there, we train some models, it’s going to be great.” Then it turned out that getting a working GPU cluster was very hard. And then it turned out that training something on that GPU cluster in a way that actually works was hard too; if you’re using other people’s code, maybe that code is not that great yet. You have to build your own framework for a lot of the stuff that you’re doing if you want to make sure that it’s really, really good. We had to do a lot of plumbing that we did not expect to have to do. Now, I’m very happy that we did all that work, but at the time, it was very frustrating.

Jon: What are we, either you and I, or we, the industry, not talking about nearly enough that we should be?

Douwe: Evaluation. I’ve done a lot of work on evaluation in my research career, things like Dynabench, where the question was how we hopefully, maybe, get rid of benchmarks altogether and have a more dynamic way to measure model performance. Evaluation is very boring. People don’t seem to care about it. I care deeply about it, so that always surprises me. We did this amazing launch, I thought, around LMUnit, which is natural language unit testing. You have a response from a language model, and now you want to check very specific things about that response. Did it contain this? Did it not make this mistake? Ideally, you can write unit tests, as a person, for what a good response looks like. You can do that with our approach. We have a model that is by far state-of-the-art at verifying whether these unit tests pass or fail.

I think this is awesome. I love talking about this, but people don’t seem to really care. It’s like, “Oh, yeah, evaluation. Yeah, we have a spreadsheet somewhere with 10 examples.” How is that possible? It’s such an important problem. When you deploy AI, you need to know if it works or not, you need to know where it falls short, you need to have trust in your deployment, and you need to think about the things that might go wrong. It’s been very surprising to me just how immature a lot of companies are when it comes to evaluation, and this includes huge companies.
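
To illustrate the idea of natural-language unit testing, here is a hedged sketch: write criteria in plain English, then have a verifier judge each one against a response. The function names and the keyword-based judge below are illustrative stand-ins, not Contextual’s LMUnit API; a real system would use a trained verifier model in place of the toy checks.

```python
# Illustrative sketch of natural-language unit testing for LLM outputs.
# NOT Contextual's LMUnit API: the judge is a toy keyword check standing
# in for a trained verifier model that returns pass/fail per criterion.
unit_tests = [
    "The response cites its source document.",
    "The response states the reporting quarter explicitly.",
    "The response does not hedge with 'I think' or 'maybe'.",
]

def judge(response: str, test: str) -> bool:
    """Stand-in verifier. A real system would ask a specialized model
    'does this response satisfy this criterion?'"""
    checks = {
        unit_tests[0]: lambda r: "10-K" in r,
        unit_tests[1]: lambda r: "Q4" in r,
        unit_tests[2]: lambda r: "i think" not in r.lower()
                                 and "maybe" not in r.lower(),
    }
    return checks[test](response)

response = "Per the 10-K, Meta's Q4 2024 R&D expense was $X billion."
for test in unit_tests:
    print("PASS" if judge(response, test) else "FAIL", "-", test)
```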

Jon: Garry Tan posted on social media not too long ago that evaluation is the secret weapon of the strongest AI application companies.

Douwe: Also AI research companies, by the way. Part of why OpenAI and Anthropic are so great is that they’re amazing at evaluation too. They know exactly what good looks like. That’s also why we do all of that in-house; we’re not outsourcing evaluation to somebody else. If you are an AI company and AI is your product, then you can only assess the quality of your product through evaluation. It’s core to all of these companies.

Jon: Whoever is lucky enough to get that cool JP Morgan head of AI job that you would be doing in another life: is what the evals really need to look like the intellectual property of JP Morgan, or is this something that they can ultimately ask Contextual to cover for them?

Douwe: No. I think they can use us for the tooling for evaluation, but the actual expertise that goes into that evaluation, the unit tests, they should write themselves. We talked about how a company is its people, but in the limit, that might not even be true, because it might be mostly AI and maybe only a few people. What makes a company a company is its data, the expertise around that data, and the institutional knowledge. That is what defines a company. That should be captured in how you evaluate the systems that you deploy in your company.

Jon: I think we can leave it there. Douwe Kiela, thank you so much. This was a lot of fun.

Douwe: Thank you.

Dropzone’s Edward Wu on Solving Security’s Biggest Bottleneck

Listen on Spotify, Apple, and Amazon | Watch on YouTube

This week, Partner Vivek Ramaswami hosts Edward Wu, the founder of 2024 IA40 winner Dropzone, which is building a next-generation AI security operations center. Edward decided to take the leap and start his own company after spending eight years at ExtraHop, where he rose to the role of senior principal scientist, leading AI/ML and detection. Now at Dropzone, he's tackling some of the most pressing challenges at the intersection of AI and cybersecurity.

On this episode, they explore Edward’s decision to leave ExtraHop to build Dropzone, his thoughts on why generative AI is uniquely suited to addressing alerts and investigation in cybersecurity, and how Dropzone is redefining the role of AI in the security operations center. They unpack Edward’s decision to leap into entrepreneurship, how he landed key customers like UiPath, and why transparency is vital in a category often skeptical of AI. He also shares his perspectives on how AI unlocks new opportunities in cybersecurity, along with lessons he learned as a first-time solo founder.


This transcript was automatically generated and edited for clarity.

Edward: My pleasure.

Vivek: Let's kick off with having you share a little bit about your journey into security. What sparked your interest in the space?

Edward: I would say, quite similar to a lot of security practitioners, I grew up playing with computers, playing games, and cracking games, and I think that's what got me started with security, because a lot of the skills and tools you use to crack games or cheat in games overlap with reverse engineering and malware analysis. Then, after I got into my undergrad program at UC Berkeley, I made the decision to eventually pursue a PhD in cybersecurity, and I spent three years in my undergrad doing cybersecurity-related research: automated malware analysis, binary analysis, and reverse engineering Android apps.

Vivek: Yeah, that's great. So, even back then, you were thinking about security and cybersecurity, and obviously there were a lot of attacks and things like that, even back then. You spent eight years at ExtraHop, which is a Madrona portfolio company, eventually became the senior principal scientist, and led AI/ML and detection there. Tell us a little bit about that journey, and then you can tell us a little bit about why you decided to leave and launch your own company in Dropzone.

Edward: ExtraHop was definitely a very fun ride for me. I joined when I decided to quit my PhD, due to a variety of reasons. Part of it was that cybersecurity academic research, frankly, is not as interesting as the real thing in the industry. When I decided to quit my program, I applied and interviewed at practically every cybersecurity company, of any and every stage, that I could find. I remember one of them was Iceberg. I was offered the chance to be employee number four, and Iceberg was a Madrona portfolio company as well. While I was looking around, ExtraHop really struck me, because back then, ExtraHop wasn't in cybersecurity at all. It was in network performance analytics.

When I saw the demo of ExtraHop's product, I saw so much potential, because what ExtraHop had in terms of potential is very similar to what police departments and state agencies discovered about traffic cameras. You initially have a lot of traffic cameras for monitoring traffic, but after a while everybody discovered how much more valuable information you can get out of traffic cameras, whether for tracking fugitives or helping to identify other sorts of suspicious activities. I really saw that opportunity and ended up joining ExtraHop, essentially helping ExtraHop build and pivot from a network performance company to a network security company. Along the way, I built ExtraHop's AI/ML and detection product from scratch and really spent a lot of time working with ExtraHop customers to understand how security teams actually work.

Vivek: How did you think about even joining a startup or a scaling startup back then? Obviously, with your interest in security, you probably could have looked at Palo Alto Networks, Fortinet, or a much larger platform. What attracted you to a startup at the time?

Edward: While I was in college, I came across a couple of blogs talking about the founding journeys of different security startups, and I think those really struck me and got me excited and interested to eventually start my own company. While I was looking for my first job out of college, the number one criterion was the opportunity to learn how to build a startup someday in the future for myself. When I interviewed with ExtraHop, and I met ExtraHop co-founder and CEO at the time Jesse Rothstein, I told him, "Hey, the reason I'm looking at startups is I want to start my own company someday," which was great foreshadowing for when I told him I was going to resign and start my own thing eight years later.

Vivek: So, he couldn't act shocked, because he'd known for eight years.

Edward: Correct, correct. Back then I was looking for the opportunity to learn how to build a product from scratch, and that's where, between the choices of ExtraHop and Iceberg, I picked ExtraHop, because it was a little bit more mature. I could learn from the existing lessons and the potholes ExtraHop fell into, and then dug itself out of.

Vivek: It sounds like you had that kernel of idea in your head, from early on, that you wanted to start your own company. Before we get into the aha moment that led you to founding Dropzone, would you suggest to other founders that it’s helpful to spend time at a company? Even if you had that idea early on in academia, thinking about starting a company, would you suggest it’s good for founders to go and spend a number of years at another startup to learn, or how would you think about that journey that founders have to go on before they start their own business?

Edward: At least in my experience, I believe that if you're going to start a B2B company, it's vitally important to work somewhere first, because you'll have exposure to how B2B actually works. There are a number of processes, or structures, that all B2B companies have to go through, and working at an established organization teaches you what good engineering looks like, what good customer success looks like, what good marketing looks like, and what good sales looks like. All of these will become tremendously important when you do start your own B2B company.

Vivek: So, now you've been at ExtraHop for eight years, you've learned good marketing and good sales, you've seen this journey, and you've obviously had this idea in your head for eight years that you want to go found your own company. What was the aha moment? Walk us through the idea you had in your head. Where did you see the opportunity that led you to actually go out, leave ExtraHop, and found Dropzone?

Edward: The biggest thing was, while I was at ExtraHop, I had been keeping track of industry movements and trends, because I know the only way I could found my own company someday was by looking for the next big thing. During my time at ExtraHop, I had done a lot of analysis and paid attention to every single RSAC Innovation Sandbox, as well as other movements within cybersecurity to see, “Okay. What are other people building?” And if I were to be an investor, would I invest my money or time, right? Because as a founder, to some extent, you’re also an investor.
You're investing the most precious resource you have, which is your time. I had been doing a lot of that for years. Then, when genAI came around, that got me excited, because for the first time I saw an idea where we could tackle one of the holy-grail unsolvable problems within cybersecurity by leveraging this new technical catalyst. That combination of a very concrete, universal pain point and a new technical catalyst, which essentially means there was no way to solve this problem previously, makes starting a new company a lot easier, because you don't have tons of incumbents to deal with. All of those factors combined were the reasoning behind my departure.

Vivek: You bring up a good point. Many of the founders who listen to this podcast and whom we work with, over the last few years, after ChatGPT came out or after Transformers really became a big thing, also said, "Hey, there's an opportunity in AI. I want to go found a business." You mentioned that, if it wasn't for AI or the current versions we have of AI, some of these problems likely couldn't have been solved in security. Maybe just take us through that. What, specifically, were you seeing in this intersection of AI and security that said, "Hey, there's a technical change. Something is different now that's going to unlock problems that we couldn't unlock before"? And then maybe you can tell us a little bit about how that led you to your core focus at Dropzone today.

Edward: For people who are not familiar with security, one of the biggest challenges within cybersecurity today is the ability to process all the security alerts. To some extent, it's actually a very similar problem to what modern-day police departments face: they have all sorts of crime reports, but not enough detectives to follow up on every single report. Historically, it has been a very difficult problem to solve, because the act of investigating security reports and alerts requires tons of human intelligence.
You cannot hard-code your way through an investigation process, because when a security analyst is looking at security reports and alerts, what they're going through in their head is a very detective-like recursive reasoning process, so that has been one of the biggest bottlenecks within cybersecurity. There are a couple of workforce reports out there estimating that the world needs around 12 million cyber defenders today, and there are 12 million job postings out there, but the actual workforce is only around 7 million. So there's a shortage of 5 million cybersecurity analysts or defenders that the world needs to truly protect itself, and unless somebody invents cloning or some sort of mind transfer, some sort of software-based automation seems to be the only other solution.
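As a concrete illustration of that detective-like, recursive process, here is a hypothetical sketch of how an LLM-driven investigation loop could work: the model either reaches a verdict or asks a tool for more evidence, and the loop repeats. This is not Dropzone's implementation; every function name, prompt, and tool below is invented for the sketch, and `llm` stands in for any prompt-in, text-out model call.

```python
# Hypothetical recursive-investigation loop: the model gathers evidence
# step by step until it can reach a verdict, mimicking how an analyst
# works an alert. Not Dropzone's code; all names here are placeholders.
def investigate(alert: str, llm, tools: dict, max_steps: int = 10) -> str:
    evidence = [f"Alert: {alert}"]
    for _ in range(max_steps):
        plan = llm(
            "You are a SOC analyst. Given the evidence so far, either "
            "conclude with 'VERDICT: <benign|malicious> <reason>' or "
            "request more data with 'QUERY: <tool> <argument>'.\n\n"
            + "\n".join(evidence)
        ).strip()
        if plan.startswith("VERDICT:"):
            return plan  # enough evidence to conclude
        # e.g. plan == "QUERY: whois 203.0.113.7"
        _, tool, arg = plan.split(maxsplit=2)
        evidence.append(f"{tool}({arg}) -> {tools[tool](arg)}")
    return "VERDICT: inconclusive, escalate to a human analyst"

# Stub tools; a real system would query threat intel, logs, IAM, etc.
tools = {
    "whois": lambda ip: f"ownership record for {ip}",
    "user_history": lambda user: f"recent logins and devices for {user}",
}
```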

Vivek: As you say, there is a shortage in the number of security practitioners who can do these kinds of things. It's interesting, because I feel like in this first wave of AI, we saw a lot of companies going after, "Hey, there's this intersection of AI and security. Let's just go secure the models, or let's think about the models themselves." It seems like what you were thinking about is that there's an existing workflow today that is understaffed, and that's where we see AI actually helping. Had you worked with these practitioners before, in your time at ExtraHop? Had you seen these problems of alerting and alert fatigue, and how do we actually get AI to solve problems where we don't have enough people to scale?

Edward: To some extent, what I did at ExtraHop was probably one of the reasons why security practitioners are overwhelmed by alerts, because what I built at ExtraHop is a detection engine: it looks at network telemetry and identifies suspicious activities. User A uploaded five gigabytes of data to Dropbox. User B established a persistent connection with an external website for 48 hours. User C SSH'd into the database. All of these security alerts take time to investigate, and those are exactly the types of alerts that historically have overwhelmed security practitioners.

So, to some extent, my work in the past eight years has contributed to, or maybe partially caused, some of the alert fatigue and overload, so I'm definitely intimately familiar with this particular problem. As for your point that when genAI came along a lot of people had this idea of "Oh, let's just secure the models," my train of thought is very similar to a post I saw on Twitter, which said that one way you can think of genAI is that we as humans are discovering a new island where there are a hundred billion people with college-level education and intelligence, willing to work for free. We just talked about this huge staff shortage in cybersecurity, so why don't we take those hundred billion people with college-level intelligence, willing to work for free, and have them look at all the security alerts and help improve the overall cybersecurity posture?
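For readers outside security, the alerts Edward describes come from rules along the lines of the following deliberately simplified sketch. The field names, thresholds, and domains are invented for illustration and are not ExtraHop's actual engine; the point is that rules like this are cheap to write and fire constantly, while each alert they produce still needs a human-grade investigation.

```python
# Illustrative detection rule of the kind described above: flag users
# who move a lot of data to cloud storage. All fields and thresholds
# are invented for this sketch, not ExtraHop's detection engine.
def detect_large_uploads(flows, threshold_bytes=5 * 2**30):
    """Yield an alert for any user whose outbound bytes to cloud
    storage exceed the threshold within the observed window."""
    totals = {}
    for flow in flows:  # flow: {"user", "dest_domain", "bytes_out"}
        if flow["dest_domain"].endswith(("dropbox.com", "drive.google.com")):
            user = flow["user"]
            totals[user] = totals.get(user, 0) + flow["bytes_out"]
    for user, sent in totals.items():
        if sent > threshold_bytes:
            yield {"rule": "large-upload-to-cloud-storage",
                   "user": user, "bytes_out": sent}
```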

Vivek: You have this great term that you were describing to us, of Dropzone as having a number of interns, or having a whole new set of staff. How do you describe it?

Edward: If we were to zoom out, we view Dropzone as essentially a software-based staff augmentation agency for cybersecurity teams. What we're building is, essentially, AI agents or AI digital workers that work alongside the human cybersecurity analysts and engineers to allow security teams to do 5X to 10X more than what they're capable of doing today, but without 5X or 10X the budget or headcount.

Vivek: You're primarily selling to CISOs, Chief Information Security Officers, but the actual practitioners using Dropzone tend to be folks in the security operations center, right? Who are usually the people using Dropzone on a day-to-day basis or interacting with it?

Edward: The primary users of our product are essentially security analysts who work in SOCs, or security operations centers, and are responsible for responding to security alerts and confirmed breaches.

Vivek: Going back to one thing you were saying before: the nice thing about building when there's a new tech change, like what we have with AI, is that you don't have these incumbents, right? Or the incumbents tend to be a little bit slower to move, or they're more reactive. In this case, you can build a net new business, and you can help create a category. One thing you and I have talked about is that this is such an obvious problem, in the sense that every large company or mid-market enterprise company has an understaffed security operations center.

A number of startups have popped up and started to build what they call AI SOCs, or agents for the SOC. So, if we zoom out, how do you view this landscape, this category? On one hand, it's total validation of the market, saying that something like this needs to occur, because people clearly want this product. On the other hand, it's like, "Okay. Well, how am I supposed to distinguish and decide between 10 or 12 competitors that all maybe look the same on the surface?"

Edward: If you zoom out, the market Dropzone operates in, the AI SOC analyst market or autonomous SOC platform market, is probably the single most competitive market within cybersecurity today. Like you said, one reason is that the intersection of cybersecurity and AI is tremendously interesting. The alert investigation use case, to some extent, is an obvious use case a lot of people can see. The way we think about competition is actually not that different from all previous generations of startups: having a lot of competitors is great validation for the market, but the reality is most startups or most players are not going to be successful, for a variety of different reasons.

So, to some extent, it's not a competition of who gets the highest grades. It's actually a competition of who finishes the marathon. From our perspective, when we think about competition, a lot of it has to do with how we can do better. How can we ensure that we're delivering real-world, concrete value to our end users? Because we know we're solving a very large problem with a lot of need and a very large market, we don't need to worry too much about our competitors right now; frankly, most of them are still pre-product at this moment. Our focus is solely on, can we sign up 1, can we sign up 5, can we sign up 10, 20, 50 paying customers who are getting real-world value out of our technology? As long as we can do that, success will come, regardless of what our competitors do.

Vivek: So, focus. You just have to focus, focus on your customers, and make sure that you’re delivering a product and experience that they really like.

Edward: Yeah.

Vivek: You could say this about other areas of security in the past too, right? I mean, endpoint security 10 years ago was a very hot category, and it created several multi-billion-dollar companies: CrowdStrike, SentinelOne, and others. As you say, the reason there are so many competitors is that people clearly see there's a lot of value in this market. But as you think about the ecosystem, there are many existing security tools already, and if you go to RSAC, you'll see 1,000 booths and everyone has a booth. So, outside of even the AI SOC space, in security in general, as an early-stage startup that's not as much on the map as some of these incumbents, what do you find is valuable for having customers recognize you and think about you? What are some of the tips you have for other founders in a crowded market, and how do you stand out?

Edward: The biggest learning we've had so far, on the marketing front, is making sure you are very precise in how you describe yourselves. Cybersecurity is so fragmented that if you say, "Hey, we are using AI to solve all the problems with cybersecurity," that's not going to work, because there are too many vendors out there. Instead, you need to be very focused in your messaging and positioning, so the prospects or security buyers can immediately tell where you fit in the larger security ecosystem. There are no security teams that only use a single product.

Most security teams have 5, 10, 15, 20 products. It's very important to be precise so people don't conflate you with other products, and they can immediately understand what you're trying to do. That's where you mentioned RSAC. I always love RSAC, and I love walking through the expo floors, because I find that to be a really good opportunity to level up product marketing. When you walk through the expo halls and see 1,000 vendors, you can really quickly tell who has good product marketing, because every time you walk past a booth, you might have like five seconds before you start looking at the next fancy, shiny booth.

Within those five seconds, you can either immediately tell what they're doing, or you're confused, like, "What is this thing?" I think that's a great exercise. I have been doing this myself, and I've encouraged a lot of folks in my company to do it as well, to really make sure our positioning and messaging is very clear, so people can immediately tell what we're trying to do, versus some panacea of AI magic.

Vivek: Well, there are a lot of those. Now that we're a few years into this post-ChatGPT wave, we've seen so many of these vendors that say they do AI security. If you went to the last two RSA conferences, all you would hear is AI, AI, AI, but then what are you delivering to customers, right? And so, in that way, I think it's really helpful to hear from you, Edward, about how you landed UiPath as a customer, which is really impressive; they're obviously a very discerning and sophisticated business themselves. Take us through that journey. How did you land UiPath? What went into that? Are they finding value from Dropzone today?

Edward: One of UiPath's security engineers reached out to me personally on LinkedIn saying, "Hey, I saw Dropzone somewhere. It seems you guys are doing interesting stuff. Can I get a demo?" Then we kicked off the POC, where the end goal was to evaluate how much time savings we could create for their security team, because UiPath is growing very quickly, and unsurprisingly their security budget is not growing linearly with the overall headcount. During the POC, we worked with UiPath very closely, not only to make sure our product was automating tasks that allow their security engineers to get higher leverage, but also to align with them on the future roadmap of the product.

They're not only buying us for what the product can do today, but also for what the product can be three months, six months down the road. That's very interesting, because most of the time it's a founder reaching out to 1,000 people, pleading, begging for a demo, not the other way around. A very large chunk of our customers and active prospects come from organic inbound. I think part of that is because, echoing my previous point, by having really good positioning and messaging, and also very transparent product marketing, you allow security buyers to find you, versus you trying to push a rope and force the product down people's throats.

This is where we made a very conscious effort and a strategic decision to be very transparent. For example, our entire product documentation is public on the internet. We have over 30 interactive recorded product demos, as well as an un-gated test drive and fully transparent pricing. We are able to let interested early adopters within the security community complete, essentially, 80% of the buyer journey without talking to us, and that really allows us to get high-quality hand-raisers who have already, to some extent, self-qualified themselves and know they want to try this technology.

Vivek: I love the point you made about being very transparent and being open, and that's not common in security, right? There's a lot of closed-door selling, and you never really know how deals are done. I'm sure there's some new generation of buyers that wants that transparency. What led you to stray from the path of what we would call normal in security and be more transparent than the norm?

Edward: A lot of it came from my time at ExtraHop. While I was at ExtraHop, I really advocated for an interactive online demo. Back then, ExtraHop was probably the only security vendor in the entire detection and response space where you could access an un-gated interactive demo, the actual product, not a recorded video. I saw how much additional credibility that marketing tactic created, so I decided to bring that approach to Dropzone and keep it.

Vivek: Well, last point on this. As I'm sure you've noticed, CISOs are sold a lot of bad products. We have a CISO advisory council here at Madrona, and the one thing they'll say is that they're just inundated with products and a lot of inbound. With this transparent marketing, being able to show the demo and show the value, is there another step that needs to happen for you to bridge that gap and have them come and say, "Hey, take a look at our products"? Is that an evolution? How do you think about the push-versus-pull nature of what you're selling and how CISOs are typically sold to?

Edward: I think it's definitely a combination of the two. Over time, generally, what I've seen within cybersecurity is that initially most startups are in a push market, because there's no category awareness. Most security startups solve a problem that's more or less obscure to the general public, so they need to do a ton of evangelization. I would say, for us, it's a little bit easier, because the problem we solve, again, is one of the most universal, concrete, and well-understood problems within cybersecurity. It's just that nobody has been able to come up with a technical solution to solve it. That definitely makes our lives a lot easier, because to some extent we don't really need to evangelize the problem we solve; it's already been there for 20 years, and every single team experiences it every single day.

Getting security teams to raise their hands also has to do with the overall macro environment. For example, people have heard of the Stargate project, $500 billion of investment, as well as DeepSeek and all sorts of interesting reactions from different vendors when they really start to see competition, as well as genAI becoming real. I would say that played a big part in our marketing tailwind, because now it's very common. I mean, I'm sure you guys have been saying the same thing to your portfolio companies, right? Which is, regardless of what kind of business you're in, I want to know why you are not using genAI in every single business function. That's a question every single board has been asking its executives. When that trickles down to security teams, alert investigation and software-based automation for the SOC are generally among the first places people look.

Vivek: To your point, we're seeing with our own companies and the customers of the companies we work with, everyone is saying we're using AI, but they don't want to use AI foolishly. They want to be smart about how they use AI. And to your point, in the security space, it's hard to just put AI in and say, "Hey, let's walk away," right? Security is security. It's a very important piece of both the application and the infrastructure side of businesses. So being able to already have that pull from the SOC team, saying, "We're already drowning in alerts. We need help. Whatever way you can help us is going to be important," and being able to come in and execute against that, I think, is really interesting.

Edward: Absolutely. I think ChatGPT is probably the biggest marketing gift OpenAI has given to all these genAI startups, because it enlightened everybody, whether technical or non-technical, on the potential and capabilities of genAI. I remember getting calls from my parents, asking, "Hey, Edward. You have been doing AI stuff for eight years. This genAI thing looks very cool. Why don't you go build a stock trading thing using this technology?" Because of that, I think a lot of security practitioners started to play with this technology themselves.

We have seen a good number of open-source projects, and a good subset of the prospects we run into will say, "Hey, Dropzone seems very cool, and by the way, we have been internally playing with GPTs and trying to build our own open-source AI agents that automate small stuff within cybersecurity, so we know the technology can get there. But at the same time, as a security team, we're not a hundred percent developers. This is not our specialization." They've already built confidence in the technology. All they need to find is a reputable, trustworthy, actual technology solution provider. That, again, makes it a little bit more of a pull-based marketing motion, versus trying to push a rope.

Vivek: Yes. Well, you can tell your parents that, “Hey, you may not be building a stock trading app, but stock trading apps can use Dropzone,” which is really cool.

Edward: Correct, yeah.

Vivek: I’m going to transition into some rapid-fire questions we have for you. Edward, you’ve been a founder for a couple of years now. You’re both a solo founder and a first-time founder, so what are the hardest-learned lessons that you’ve had so far? What is something that you wish you knew or wish you did better on this early journey of yours?

Edward: Probably the biggest thing, surprisingly for a solo, first-time founder with an engineering background, is that I wish I had learned more about sales before I started. One common misconception technical founders have is that as long as we build the best product on the planet, people will magically come to us. That's definitely not the reality; you could argue it couldn't be further from the truth. So, sales is actually very important.

To be frank, while I was at ExtraHop, I obviously had a number of engagements with customers, but one thing I always wanted to do at ExtraHop, and wasn't able to, was work part-time as a sales engineer for like six months. I never got a chance to do that, even though I always had the idea in the back of my mind. After founding Dropzone, I was forced to learn how to be a sales engineer and how to be an account executive. Those skills are tremendously important, because if a technical founder cannot sell a technology or a product with all the vision, enthusiasm, and in-depth product understanding they have, then nobody else can. Sales capability, knowing how to use different techniques, how to qualify customers, and how to run a good sales demo: these are the key skills I wish I had before I got started.

Vivek: Great point. Sales is so important. It doesn't matter what your product or business is; sales is very important. What is something you believe about the AI market that others may not?

Edward: One thing I believe about the AI market is that distribution is going to be a very important factor, and I think most people probably underestimate the power of human trust and how much of a role it plays within the overall business ecosystem. I've seen a number of startups trying to build technologies that completely substitute for certain roles and responsibilities. From my perspective, there are roles where the technical deliverable is maybe a fraction of the value proposition, but the other fraction is this human trust, human responsibility, and accountability.

AI startups are looking at different industries and verticals, trying to identify insertion points for AI agents. I do believe we should be very respectful of that fundamental human trust; the case for automation by itself is not completely obvious. That's one of the reasons why I suspect software engineers will see more automation than, for example, account executives: nobody is really going to build a relationship with an AI agent posing as an account executive. This human relationship, this human trust-building channel, is something I think is a lot more difficult for AI to substitute.

Vivek: Well, we see this when you're driving down the 101 and you see billboards for multiple AI SDRs. Which do I go with, right? Who do I have a better relationship with? I'm not sure right now. But outside of Dropzone, or even outside of security, what company or trend are you most excited about?

Edward: Probably robotics. Part of it is that I love watching anime, and there are a number of anime series about future societies with all sorts of cyborgs, robots, and humanoid robots. I think those are all very cool. But part of it is also a little bit self-fulfilling, because obviously, as a cybersecurity vendor, I think the more robots there are around us, the more important cybersecurity will become as well.

Vivek: Last question. This will be an easy one for you. There’s a 90s movie with Wesley Snipes called Dropzone. Is the company named after that movie, or what was the basis for calling the company Dropzone?

Edward: I actually have never heard of that movie, so maybe I should check it out, or maybe ask ChatGPT about it. We named the company Dropzone because we envision a future when we have the resources and the need to sponsor a Super Bowl ad. We want the ad to involve a scene where cyber defenders are surrounded at the hilltop, overwhelmed by attackers, and then the cyber defenders deploy Dropzone, which, in my mind, is some sort of portal, a Stargate or warp-gate kind of construct. They'll deploy this portal, and through it they can summon additional reinforcements to help them push back the attackers. So we named the company Dropzone because we view it as a portal for, you can say, software-based staff augmentation for cybersecurity teams.

Vivek: Love that. Well, thank you so much, Edward. We really appreciate it.

Edward: Great to be here.

AI+Data in the Enterprise: Lessons from Mosaic to Databricks

Listen on Spotify, Apple, and Amazon | Watch on YouTube

How do AI founders actually turn cutting-edge research into real products and scale them? In this week’s episode of Founded & Funded, Madrona Partner Jon Turow sits down with Jonathan Frankle, Chief AI Scientist at Databricks, to talk about AI+Data in the enterprise, the shift from AI hype to real adoption, and what founders need to know.

Jonathan joined Databricks, a 4-time IA40 winner, as part of that company's $1.3 billion acquisition of MosaicML, a company he co-founded. Jonathan is a central operator at the intersection of data and AI. He leads the AI research team at Databricks, where they deploy their work as commercial products and also publish research, open-source repositories, and open-source models like DBRX and MPT. Jonathan shares his insights on the initial vision behind MosaicML, the transition to Databricks, and how production-ready AI is reshaping the industry. He and Jon explore how enterprises are moving beyond prototypes to large-scale deployments, the shifting skill sets AI founders need to succeed, and Jonathan's take on exciting developments like test-time compute. Whether you're a founder, builder, or curious technologist, this episode is packed with actionable advice on thriving in the fast-changing AI ecosystem.


This transcript was automatically generated and edited for clarity.

Jonathan: Thank you so much for having me. I can’t wait to take our private conversations and share them with everybody.

Jon: We always learn so much from those conversations. And so, let’s dive in. You’ve been supporting builders with AI infrastructure for years, first at Mosaic and now as part of Databricks. I’d like to go back to the beginning. Let’s start there. What was the core thesis of MosaicML, and how did you serve customers then?

Jonathan: The core thesis quite simply was making machine learning efficient for everyone. The idea is that this is not a technology that should be defined by a small number of people, that should be built to be one-size-fits-all in general, but that should be customized by everybody for their own needs based on their data. In the same way that we don’t need to rely on a handful of companies if we want to build an app or write code, we can just go and do it. Everybody has a website. Everybody can define how they want to present themselves and what they want to do with that technology. We really firmly believed in the same thing for machine learning and AI, especially as things started to get exciting in deep learning. And then, of course, LLMs became a big thing halfway through our Mosaic journey.

I think that mission matters even more today to be honest. We’re in a world where we bounce back and forth between huge fear over the fact that only a very small number of companies can participate in building these models, and huge excitement whenever a new open-source model comes out that can be customized really easily, and all the incredible things people can do with it. I firmly believe that this technology should be in everyone’s hands to define as they like for the purposes they see fit on their data in their own way.

Jon: It’s a really good point, and you and I have spoken publicly and privately about the democratizing effect of all this infrastructure. I would observe that the aperture of functionality that Mosaic offered, which was especially about hyper-efficient training of really large models, putting it in the hands of a lot more companies, that aperture is now wider. Now that you’re at Databricks, you can democratize more pieces of the AI life cycle. Can you talk about how the mission has expanded?

AI+Data in the Enterprise: The Expanding Mission at Databricks

Jonathan: Yeah. I mean, it was really interesting. I was looking at the notes Matei, our CTO, had for a research team meeting we had last week, and he had casually written, "Our mission has always been to democratize data and AI for everyone." I was like, "Well, wait a minute. That sounds very familiar." I think we may chat at some point about this acquisition and why we chose to work together. It's the same mission. We're on the same journey. Databricks obviously was much further along than Mosaic was, and wildly successful, but it's great to be along for the ride.

The aperture has widened for two reasons. One is simply that you don't need to pre-train anymore. There are awesome open-source base models that you can build off of and customize. Pre-training was the one part that wasn't quite for everyone, but it's not necessary anymore. You can get straight to the fun part and customize these models through prompting or RAG or fine-tuning or RLHF these days.
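To make that customization step concrete, here is a minimal supervised fine-tuning sketch using the Hugging Face transformers library. The base model, data file name, and hyperparameters are placeholder assumptions for illustration, not a Databricks or Mosaic recipe; the same customization could equally start with prompting or RAG.

```python
# A minimal sketch of customizing an open-source base model by
# fine-tuning it on your own text with Hugging Face transformers.
# The model name, data file, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # small stand-in; in practice an open model such as MPT
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# "company_docs.jsonl" is a hypothetical file of in-domain records like
# {"text": "..."} capturing the data unique to your business.
data = load_dataset("json", data_files="company_docs.jsonl")["train"]
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    # mlm=False makes the collator copy inputs to labels for causal LM
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```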

The aperture has also widened because now we're at the world's best company for data and data analytics, and the world's best data platform. What is AI without data, and what is data without AI? We can now think much more broadly about a company's entire process, from start to finish, for a problem they're trying to solve. What data do they have? What is unique about that data and about their company? Then from there, how can AI help them, or how can they use AI to solve problems? This is a concept we call data intelligence.

The idea is meant to stand in contrast to general intelligence. General intelligence is the idea that there will be one model or one system that can generally solve every problem, or make significant progress on every problem, with minimal customization. At Databricks, we espouse the idea of data intelligence: every company has unique data, unique processes, and a unique view of the world that is captured within their data, how they work, and their people. AI should be shaped around that. The AI should represent the identity of the business, and the identity of that business is captured in its data. Obviously, it's very polemical to say data intelligence versus general intelligence; the answer will be somewhere in between. To me, honestly, every day at work feels like I'm doing the same thing I've been doing since the day Mosaic started, just now at a much bigger place with a much bigger ability to make an impact in the world.

Jon: There’s something very special about the advantage that you have that you’re seeing this parade of customers who have been on a journey from prototype to production for years now, and the most sophisticated among them are now in production. And so for that, I have two questions for you. Number one, what do you think it was that has finally unblocked and made that possible? And number two, what are those customers learning who are at the leading edge? What are they finding out that the rest of the customers are about to discover?

AI+Data in the Enterprise: Scaling from Prototype to Production

Jonathan: So, I'm going to, I guess, reveal how much less of a scientist I am these days and how much more of a business person I've become. I'm going to use the hype cycle to describe this, and it breaks my heart and makes me sound like an MBA to do it. Among enterprises, there are always the bleeding-edge, early-adopter, tech-first companies; the companies that catch on pretty quickly; and the companies that are more careful and conservative. What I'm seeing is that those companies are all in different places in the hype cycle right now. The companies that are early adopters and tech-forward hit the peak of inflated expectations two years ago, around the time ChatGPT first came out. They hit the trough of disillusionment last year, when it was really hard to get these systems to work reliably, and they are now getting productive and getting things shipped in production. They've learned a lot of things along the way.

They've learned to set their expectations properly, to be honest, and learned which problems make sense and which don't. This technology is not perfect by any stretch, and I think the more important part is that we're still learning how to harness it and how to use it. It's the same way that punch cards back in the 1950s or '60s were still Turing complete, a little bit slower, but just as capable as our computing systems today from a theoretical perspective. Fifty years of software engineering later, it's much easier to build and architect a system that will be reliable, thanks to all the principles we've learned. I think those companies are furthest along in that journey, but it's going to be a very long journey to come. We know how big of a system we can build at this point without it keeling over, where the AI is going to be unreliable and where we need to kick up to a human, which tasks make sense and which tasks don't.

A lot of them I’ve seen have whittled it down into very bite-sized tasks. The way that I typically frame it for people is you should use AI either in cases where it’s open-ended and there’s no right answer, or where a task is hard to perform, but simple to check, and you can have a human check. I think GitHub Copilot is a great example of this where you could imagine a situation where you ask AI to write a ton of code. Now, a human has to check all that code and understand it. Honestly, it may be as difficult as writing the code from the beginning or pretty close to it. Or you can have the AI suggest very small amounts of code that a human can almost mechanically accept or reject, and you’re getting huge productivity improvements. This is a scenario where the AI is doing something that is somewhat more laborious for the human, but the human can check it very easily.

Finding those sorts of sweet spots is what the companies who have been at this the longest have done. They've also been willing to take the risk and invest in the technology. They've been willing to try things, and they've been willing to fail, to be honest. They're willing to take that risk, be okay if the technology doesn't work the first or second time, and keep whatever team they have doing this going and trying again. A bunch of companies are in the trough of disillusionment right now, companies that are a little less on the bleeding edge. Then a bunch of companies are still at that peak of inflated expectations, where they think AI will solve every problem for them. Those companies are going to be very disappointed in a year and very productive in two years.

Jon: Naturally, a lot of founders who are going to be listening are asking, how do they get in these conversations? How do they identify the customers that are about to exit the trough, and how do they focus for them? What would you say to those founders?

AI+Data in the Enterprise: Landing Customers from Startups to Fortune 500s

Jonathan: I have two contradictory lessons from my time at Mosaic. The first is that VCs love enterprise customers, because enterprise customers are evidence, at least if you're doing B2B, that you're going to be able to scale your business, and that you have some traction with companies that are going to be around for a while, that have big budgets, and that invest for the long run when they invest in a technology. On the flip side, the best customers are often other startups, because there's no year-long procurement process. They're willing to dive right in, they understand where you're coming from, and they understand the level of service you'll be able to provide, because they're used to it. You can get a lot more feedback much faster, but that is taken as less valuable validation. Even when I'm evaluating companies, enterprise customers are worth more to me, but startup customers are more useful for building the product and moving quickly. So, the answer is: strive for enterprise customers, but don't block on enterprise customers.

Jon: I think that's fair, and optimizing for learning is really smart. There's another thread I would pull on, something I think you and I have both seen in the businesses we've built, which is the storytelling. I won't even say GTM: the storytelling around a product can be segmented even if the product is horizontal, as so many infrastructure products are. Mosaic was a horizontal product. Databricks is a horizontal family of products, but there are stories that explain why Databricks and Mosaic are useful in financial services, really useful in healthcare, and there's going to be a mini adoption flywheel in each of these segments, where you do want to find, first, the fast customers and then the big customers as you dial that story in. There may be product implications, or there may not.

Jonathan: That's a great point, and there are stories, I think, along multiple axes. These days, in a social media world where everybody's paying attention to AI, there are horizontal stories you can tell that will get everyone's attention. One of the big lessons I took away from Mosaic was to talk frequently about the work you're doing and to have some big moments where you buckle down and do something big. Don't disappear while you're doing it. For us, that was releasing the MPT models, which sounds so quaint only a year and a half later. It really was only a year and a half ago that we trained a 7-billion-parameter model on 1 trillion tokens. It was the first open-source, commercially viable replication of the Llama 1 models, which sounds hilarious now that we have a 680-billion-parameter mixture-of-experts model that just came out. The most recent Meta model was a 405-billion-parameter model trained on 15 trillion tokens.

It sounds quaint, but that moment was completely game-changing for Mosaic. It got attention up and down the stack, across all verticals and all sizes of companies, and led to a ton of business. Later moments, like DBRX more recently, were the same experience. Storytelling through these important moments, especially in an area where people are paying close attention, actually does resonate universally. At the same time, I totally hear you on the fact that for each vertical, and for each size of company, there is a different story to tell. My biggest lesson learned there is that getting that first customer in any industry or at any company size is incredibly hard. Somebody has to really take a risk on you before you have much evidence that you're going to be successful in their domain.

Having that one story that you can tell leads to a ton more stories. Once you work with one bank, a bunch of other banks will be willing to talk to you. Getting that first bank to sign a deal with you and actually do something, even for the phenomenal go-to-market team we had at Mosaic, was a real battle. They had to really fight and convince someone that they should even give us a shot, that it was worth a conversation.

Jon: Can you take me back to an early win at Mosaic where you didn’t have a lot of credentials to fall back on?

Jonathan: It was a collaboration we did with a company called Replit. Before we had even released the MPT models, we were chatting with Replit about the idea that we could train an LLM together, that we'd be able to support their needs there. They trained MPT before we trained MPT. They were willing to take a risk on our infrastructure, and we delayed MPT because we only had a small number of GPUs, and we let Replit take the first go at it. I basically didn't sleep that week because I was monitoring the cluster constantly. We didn't know whether the run was going to converge. We didn't know what was going to happen. It was all still internal code at that point in time, but Replit was willing to take a risk on us, and it paid off in a huge way. It gave us our first real customer that had trained an LLM with us, been successful, and deployed in production. That led to probably a dozen other folks signing on right then and there. The MPT model came out after that.

Jon: How did you put yourself in a position for that lucky thing to happen?

Jonathan: We wrote a lot of blogs. We shared what we were working on. We worked in the open source, we talked about our science, and we built a reputation as the people who really cared about efficiency and cost, the people who might actually be able to do this. We talked very frequently about what we were up to. That was a lesson we learned early on, when I don't think we talked frequently enough, so we wrote lots of blogs. When we were working on a project, we would write part one of the blog as soon as we hit a milestone; we wouldn't wait for the project to be done. Then we'd do part two and part three. Those MPT models were actually, I think, part four of a nine-month blog series on training LLMs from scratch. That got Replit's attention much earlier and started the conversation.

One way of looking at it, if you want to be cynical, is selling ahead of what your product is, but I look at it the other way: you show people what you're doing and convince them that they can believe you're going to take that next step. They want to be there right at the beginning when you first take that next step, because they want to be on the bleeding edge. I think that's what got the conversation started with Replit and put us in that position. We were going to events all the time, talking to people, trying to find anyone in an enterprise who had a team thinking about this. There were a bunch of folks we were chatting with, and we had already started contracting deals with some of them, but Replit was able to basically move right then and there. They were a startup. They could just say, "We're going to do this," and write the check and do it.

Jon: So being loud about what it is that you stood for and what it is that you believed.

Jonathan: And being good at it. I think we worked really hard to be good at one thing, and that was training efficiently. You can't fake it until you make it on that. We did the work, and it was hard, and we struggled a lot, but we kept pushing, at the strong encouragement of Naveen and Hanlin, our co-founders. They kicked my butt to keep pushing even when it was really hard and really scary and we were burning a lot of money, but we got really good at it. I think people recognized that, and it led to customers, and it led to the Databricks acquisition. I'm now seeing this among other small startups that I'm talking to, in the context of collaboration, in the context of acquisition, anything like that.

The startups I’m talking to are the ones that are really good at something. It’s clear they’re good at something. It’s been clear through their work, I can check their work, they’ve done their homework and they show their work. Those are the folks that are getting the closest look because they’re genuinely just really good at it, and you believe in them and you know the story they’re telling is legitimate.

Jon: There's one more point on this, which I think complements and extends what you said: that you folks believed in something. This is not just about a story, and it's not about results either. You believed training could be and should be made more efficient. A lot of the work you were doing anticipated things like Chinchilla, which later quantified how it could be done.

Jonathan: Oh, we didn't anticipate it. We followed in the footsteps of Chinchilla. Chinchilla was early, visionary work, and I can say this because Eric Olson, who worked on Chinchilla, is now one of my colleagues on the Databricks research team. I mean, there are a few moments, if I really want to look for the pioneers of truly visionary work that was quite early, that when I look back are just tent-pole work for LLMs.

Now, Chinchilla is one of those things. The other is EleutherAI putting together The Pile dataset, which was done in late 2020, two years before anyone was really thinking about LLMs. They put together what was still the best LLM training dataset into 2022. We did genuinely believe in it, I think, to your point. We believed in it, and we believed in science; we believed that it was possible to do this through really, really rigorous research. We were very principled and had scientific frameworks we believed in, our way of working. We had a philosophy on how to do science and how to make progress on these problems. OpenAI believes in scale, and now everybody believes in scale. We believed in rigor, that doing our homework and measuring carefully would allow us to make consistent, methodical progress. That remains true and remains the way we work. It's sometimes not the fastest way of working, but at the end of the day, at least it is consistent progress.

Jon: So here we are in 2025, and amazing innovation is happening, and there's even more opportunity than there has been, it seems to me. Even more excitement, even more excited people. How do you think the profile and the mix of skills on a new team should be the same, and how should they be different, compared to when you formed Mosaic?

AI+Data in the Enterprise: How Research Shapes Business AI

Jonathan: It depends on what you're trying to do. We hire phenomenal researchers who are rigorous scientists, who care about this problem and are aligned with our goals, who share our values, who are relentless, and honestly, who are great to work with. I think the importance of culture cannot be overstated, and conviction is the most important quality. If you don't believe that it is possible to solve a scientific problem, you will lose all your motivation and creativity to solve it, because you're going to fail a lot. At the first failure, you're going to give up. Beyond that, I think this is data science in its truest form. I never really understood what it meant to be a data scientist, but this feels like data science. You have to pose hypotheses about which combinations of approaches will allow you to solve a problem, measure carefully, and develop good benchmarks to understand whether you've solved that problem.

I don't think that's a skill confined to people with Ph.D.s; far from it. The fact that Databricks was founded by a Ph.D. super team now means that more than 10,000 enterprises don't need a Ph.D. super team when it comes to their data. I look at our Mosaic story, through to our Databricks story now, in the same way. We built a training platform and a bunch of technologies around that, and now we're building a wide variety of products to make it possible for anyone to build great AI systems. It's the same way that when you get a computer and want to build a company, you don't have to write an operating system, build a cloud, or invent a virtual machine. I mean, abstraction is the most important concept in computer science. Databricks has had a Ph.D. super team to build the low-level infrastructure that required one: Spark and Delta and Unity Catalog and everything on top of that.

And now it's the same thing for AI. The future of AI isn't in the hands of people like me. It's in the hands of people who have problems and can imagine a solution to those problems. In the same way that, I'm sure, Tim Berners-Lee, who pioneered the web, did not exactly imagine, I don't know, TikTok. That was not what he had in mind when he was building the World Wide Web. The startups I'm most thrilled about engaging with today are companies that are using AI to make it easier to get more out of your health insurance, to solve your everyday problems, to just get a doctor's appointment or for a doctor to help you, or for us to spot medical challenges earlier. Those are the people who are empowered, because they don't have to go and build an LLM from scratch to do all that. That layer has now been created.

The future is in the hands of people who have problems and care about something. For a Ph.D. super team these days, there’s still tons and tons of work to do in making AI reliable and usable, building the tools that these folks need, building a way for anyone to build an evaluation set in an afternoon so that they can measure their system really quickly and get back to work on their problem. There’s a ton of really hard, complex, fuzzy machine learning work to do, but I think the interesting part is in the hands of the people with problems.
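
Jonathan’s aside about building an evaluation set in an afternoon is concrete enough to sketch. Below is a minimal, hypothetical harness in Python; the `run_system` stub and the tiny example set are illustrative stand-ins (not any Databricks tooling) for whatever prompted model, RAG chain, or fine-tuned model is under test.

```python
# A minimal, hypothetical eval harness: a hand-labeled evaluation set
# plus an exact-match scoring loop. None of these names are a real API.
from dataclasses import dataclass

@dataclass
class Example:
    query: str
    expected: str  # the answer a domain expert would accept

# An "afternoon" eval set: in practice, a few dozen cases from real usage.
EVAL_SET = [
    Example("What is our refund window?", "30 days"),
    Example("Which plan includes SSO?", "Enterprise"),
]

def run_system(query: str) -> str:
    """Stub for the system under test (prompted model, RAG chain, etc.)."""
    return ""  # replace with a call to your model or pipeline

def accuracy(eval_set: list[Example]) -> float:
    """Exact-match accuracy; swap in fuzzier matching as the task demands."""
    hits = sum(run_system(ex.query).strip().lower() == ex.expected.lower()
               for ex in eval_set)
    return hits / len(eval_set)

print(f"accuracy: {accuracy(EVAL_SET):.2f}")  # 0.00 until run_system is real
```

Once a score like this exists, every change to the system gets an immediate, repeatable verdict, which is the “measure carefully” discipline described above.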

Jon: How is your role changing as you adopt these AI technologies inside Databricks? And you try to be, I’m sure, as sophisticated as you can be about it.

Jonathan: I’m still a scientist, but I haven’t necessarily written a ton of code lately. I spend a lot of time these days connecting the dots between research and product, research and customer, and research and business. I come back to the research team and say, “I think we really need to do this. How can RL help us do that?” And then I go to the research team and say, “You’ve got this great idea about this cool new thing we can do with RL. Let me go back to product and try to blow their minds with this thing they didn’t even think about, because they didn’t know it was possible.”

Show up with something brand new and convince them we should build a product for it, because we can and because we think people will need it. In many ways, I’m a bit of a PM these days, but I’m also a bit of a salesperson. I’m also a manager, and I’m trying to continue to grow the incredible skills of this research team, both the people who have been with me for four years and the people who have just arrived out of their Ph.D.s, and make them into the next generation of successful Databricks talent that stays here for a while, and maybe goes on to found more companies like a lot of my former colleagues at Mosaic have.

It’s a little bit of everything, but I have had to make a choice about whether I’m going to go really, really deep as a scientist, write code all day, and get really, really good at getting the GPUs to do my bidding; or get good at leadership, running a team, inspiring people, getting them excited, and growing them; or get good at thinking about product and customers, and decide what combination I want to have there. That combination has naturally led me away from being the world’s expert on one specific scientific topic and toward something I think is more important for our customers, which is understanding how to use science to actually solve problems.

Jon: There’s an imaginative leap that you have to make from the technology to the persona of your customer, and an empathy with that, which I imagine involves being in a lot of customer conversations. But it’s an inversion of your thinking. It’s not, “Here’s a hard problem that we’ve solved, what can we do with it?” It’s keeping an index of important problems in your head and spotting possible solutions to them, maybe?

Jonathan: I think it’s the same skill as any good researcher. No good researcher should be saying, “I did a cool thing. Let me find a reason that I should have done it.” Very occasionally, that leads to big scientific breakthroughs, but for the most part, I think a good, productive, everyday researcher should be taking a problem and saying, “How can I make a dent in this?” Or finding the right questions to ask, asking them, and coming up with a very basic solution. All of these sound like product scenarios to me: figuring out a question that hasn’t been asked before that you think is important, building an MVP, and then trying to figure out whether there’s product-market fit, or the other way around, finding a problem and then trying to build a solution to it.

I don’t think much research should really involve just saying, “I did this thing because I could.” That is very high risk, and it’s hard to make a career out of doing that all the time, because you’re generally not going to come up with anything. I’m going out and trying to figure out what the important questions are to be asking, both asking new questions and then checking with my PM to see if that was the right question to ask, and talking to my customers. It’s just that now, instead of my audience being the research community and a bunch of Ph.D. students who are reviewers, and convincing them to accept my work, my audience is customers, and I’m convincing them to pay us money. I think that is a much more rigorous, much higher standard than getting a paper accepted. I had dinner with a customer earlier this week, and they’re doing some really cool stuff.

They have some interesting problems. I’m going to get on a plane in two weeks and go down to their office for the day and meet with their team all day to learn more about this problem, because I want to understand it and bring it back to my team as a question worth asking. It’s not 100% of my time, but I think you should be willing to jump on a plane and go chat with an insurance company and spend a day with their machine learning team, learning from them and what they’ve done, hearing their problems, and seeing if we can do something creative to help them. That’s good research. If you ever sent me back to academia, that’s probably still exactly what I’d do.

Jon: One of my favorite things that you and I spoke about in New York some weeks ago was the existence of a high school track at NeurIPS, the academic AI conference. I wonder if you could share a little bit about that, what you saw, and what it tells you about the next wave of thinking in AI.

Jonathan: The high school track at NeurIPS was really cool, and also controversial, for a number of reasons. Is this another way for students who are incredibly well off, with access to knowledge and resources and a parent who works for a tech company, to get further ahead? Or is it an opportunity for some extraordinary people to show how extraordinary they are, and for people to learn about research much earlier than I certainly did and try out doing science? There are generational changes in the way that people interact with computing. This is something that my colleague Hanlin, who was one of the co-founders of Mosaic, has observed, and I’m totally stealing it from him, so thank you, Hanlin. We’re seeing companies founded by people who clearly came of age in an era where your interface to a computer was typing in natural language, whether it’s to Siri or, especially now, to ChatGPT.

That is the way they think about a user interface. You want to build a system? Well, just tell the AI what you want. On the back end, we’ll pick it apart, figure out what the actual process is in an AI-driven way, build the system for you, and hand it back to you. That’s a very different way of interacting with computing, but it’s the way a lot of people who have grown up in tech over the past several years think, a lot of people who are graduating from college now or graduated in the past couple of years, and especially people who are in high school now. ChatGPT is their iPhone, their personal computer. It’s not buttons and dropdowns and dashboards and checkboxes and apps. It’s tell the computer what you want. It doesn’t work amazingly well right now. Someday it probably will, and that day may not be very far away, but that’s a very different approach and one that is worth bearing in mind.

Jon: I want to switch gears a little bit and get to a technical debate that we’ve had over the years as well, which is about the mix of techniques enterprises and app developers are going to use to apply AI to their data. Of course, RAG and in-context learning have been exciting developments for years, because it’s so easy and appealing to put data in the prompt and reason about it with the best model you can find. There has been a renewed wave of excitement, I’d say, around complementary approaches like fine-tuning and test-time compute, reinforcement fine-tuning from OpenAI, and lots more. I wonder if now is the moment for that from a customer perspective, or if you think we’re out over our skis. What’s the right time and mix of these techniques that enterprises and app developers are going to want to use?

Jonathan: My thinking has really evolved on this, and you’ve watched that happen. We’ve reached the point where the customer shouldn’t even know or care. I want an AI system that is good at my task, I want to define my task crisply, and I want to get an AI system out the other end. Whether you prompt, whether you do few-shot, whether you do an RL-based approach and fine-tune, whether you do LoRA or full fine-tuning, or whether you use DSPy and do some prompt optimization, that doesn’t even matter to me. Just give me a system, get me something up and running, and then improve that system. Surface some examples that may not match what I told you my intention was, and let me clarify how I want you to handle those examples as a way of improving the specification for my system and making my intention clearer to you. And now, do it again and improve my system.

Let’s have some users interact with the system and gather a lot of data. Then let’s use that data to make the system better and a better fit for this particular task. Who cares whether it’s RAG, who cares whether it’s fine-tuning? The only thing that matters is: did you solve my problem, and did you solve it at a cost I can live with? Can you make it cheaper and better at this over time? From a scientific perspective, that is my research agenda right now at Databricks, but you shouldn’t care how the system was built. You care about what it does and how much it costs, and you should be able to specify, “This is what I want the system to do,” in all sorts of ways: natural language, examples, critiques, human feedback, natural feedback, explicit feedback, everything. The system should just improve and become better at your task the more feedback you collect. Your goal should be to get a system out in production, even if it’s a prototype, as quickly as possible, so you start getting that data and the system starts getting better.

The more it gets used, the better it should get. The rest, whether it’s long context or very short context, whether it’s RAG with a custom embedding model and a re-ranker or whether it’s fine-tuning, at that point, you don’t really care. The answer should be a bit of all of the above. Most of the successful systems I’ve seen have had a little bit of everything, or have evolved into having a little bit of everything after a few iterations.
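
To make that technique-agnostic loop concrete, here is an editorial sketch in Python. Everything in it is an assumption for illustration, not Databricks code: the candidate builders for prompting, RAG, and fine-tuning are placeholders, and “feedback” is simply a list of corrected examples folded into the next round.

```python
# Hypothetical sketch of "declare the task, search over techniques,
# improve with feedback." Every name here is illustrative.

def evaluate(system, eval_set) -> float:
    """Score a candidate system on the task's own evaluation set."""
    return sum(system(x) == y for x, y in eval_set) / len(eval_set)

def build_candidates(task_spec: str, feedback: list):
    """Each builder turns the spec plus accumulated feedback into a system.
    In practice: a prompted model, a RAG pipeline, a LoRA fine-tune."""
    return [
        lambda x: "",  # placeholder: few-shot prompting
        lambda x: "",  # placeholder: retrieval + generation
        lambda x: "",  # placeholder: fine-tuned model
    ]

def improve(task_spec: str, eval_set: list, rounds: int = 3):
    """Keep the best candidate each round; feedback grows between rounds."""
    feedback, best, best_score = [], None, -1.0
    for _ in range(rounds):
        for candidate in build_candidates(task_spec, feedback):
            score = evaluate(candidate, eval_set)
            if score > best_score:
                best, best_score = candidate, score
        # In deployment, user corrections gathered from the live system
        # would be appended to `feedback` here before the next round.
    return best, best_score
```

The user-visible contract is only the task spec, the evaluation, and the feedback; which candidate wins is exactly the detail the customer shouldn’t have to care about.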

Jon: In previous versions of this conversation, you’ve said, “Dude, RAG is it. That’s what people really want. There are other things you can do to extend it, but so much is possible with RAG that we don’t need to look past that horizon yet.” I hear you saying something very different now. I hear you saying customers don’t care, but you care, and it sounds like you’re building a mix of things.

Jonathan: Yeah, what I’m seeing, the more experience I get, is that there is no one-size-fits-all solution. RAG works phenomenally well in some use cases and absolutely keels over in others. It’s hard for me to tell you where it’s going to succeed and where it’s not. My best advice to customers right now is: try it and find out. There should be a product that can do that for you, or help you go through that scientific process in a guided way so you don’t have to invent your own progression. For me, it’s now about how I can meet our customers where they are. Whatever you bring to the table, tell me what you want the system to do, and we’ll go and build that for you and figure it out together with your team.

We can automate a lot of this and make it really simple for people to simply bring what they have, declare what they want, and get pretty close to what a good solution, or at least the best possible solution, will look like. It’s also part of my recognition that this isn’t a one-time deal where you just go and solve your problem. It’s a repeated engagement where you should iterate quickly, get something out there, and get some interactions with the system. Learn whether it’s behaving the way you want, learn from those examples, and go back and build it again and again, repeatedly, until you get what you want. A lot of that can be automated too. At least, that’s my research thesis: that we can automate, or at least provide a very easy guided way through, this process, to the point where anybody can get the AI system they want if they’re willing to come to the table and describe what they want it to do.

Jon: What’s the implication for this sphere of opportunity of new model paradigms such as test-time compute, now, even open-source with DeepSeek?

Jonathan: I would consider those to be two separate categories. I was playing this game with someone on my team earlier today where he was telling me, “Yeah, DeepSeek has changed everything.” I was like, “Didn’t you say that about Falcon and Llama 2 and Mistral and Mixtral and DBRX and so on and so on?” We’re living in an age where the starting point we have keeps getting better. We get to be more ambitious because we’re starting further down the journey. This is like when our friends at AWS or Azure come out with a new instance type that’s more efficient or cheaper. I don’t look at that and go, “Everything has changed.” I look at it and go, “Those people are really good at what they do, and they just made life better for me and my customers.”

We get to work on cooler problems, and a lot more problems have ROI, because some new instance type came out that’s faster and cheaper. It’s the same thing with models. For new approaches, it could be something like DPO or it could be something like test-time compute. Those are probably not comparable with each other, but they are more things to try, more points in the trade-off space. I think about everything in life as a Pareto frontier on the trade-off between cost and quality. Test-time compute gives you a very interesting new trade-off, possibly between the cost of creating a system, the cost of using that system, and the overall quality you can get. Every time another one of these ideas comes out, the design space gets a little bigger, more points on the trade-off curve become available, or the curve moves further up and to the left or up and to the right, depending on how you define it.

Life gets a little better, and we get to have a little more fun. For the products and systems we’re all building at Databricks, things get a little more interesting, and we can do a little more for our customers. So, I don’t think there’s any one thing that changes everything, but it’s constantly getting easier, faster, and more fun to build products and solve problems. And I love that. A couple of years ago, I had to sit down and build the foundation model myself if I wanted to work with one. Now, I already start way ahead.
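
The Pareto-frontier framing is easy to operationalize. Here is a small illustrative sketch with made-up (cost, quality) numbers; a new technique such as test-time compute simply contributes more candidate points, and the frontier is recomputed.

```python
# Editorial sketch of the cost/quality Pareto frontier. The design points
# and their numbers (cost per 1K requests in dollars, quality score) are
# invented for illustration.

def pareto_frontier(points):
    """Keep points no other point dominates: a dominator is at most as
    expensive, at least as good, and not the identical point."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

designs = [
    ("small model, few-shot",        0.20, 0.71),
    ("RAG over docs",                0.45, 0.78),
    ("fine-tuned small model",       0.25, 0.80),
    ("big model, test-time compute", 2.10, 0.90),
]
frontier = pareto_frontier([(cost, q) for _, cost, q in designs])
# "RAG over docs" drops off the frontier here: the fine-tuned model is
# both cheaper and higher quality, so it dominates that point.
```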

Jon: I love that. Jonathan, I’ve got some rapid-fire questions that I’d like to use to bring us home.

Jonathan: Bring it on.

Jon: Let’s do it. What’s a hard lesson you’ve learned throughout your journey? Maybe something you wish you did better, or maybe the best advice that you received that other founders would like to hear today?

Jonathan: I’ll give you an answer for both. I mean, the hardest lesson I’ve learned has honestly been the people aspects: how to interact productively with everyone, how to be a good manager. I don’t think I was an amazing manager four years ago, fresh out of my Ph.D., and the team members who were with me then will surely tell you that. I like to hope the team members who are still with me think I’m a much better manager now. The managers who have managed me that entire time, who have trained me and coached me, think I’m a much better manager now. Learning how to interact with colleagues in other disciplines or other parts of the company, learning how to handle tension or conflict in a productive way, learning how to disagree in a productive way and focus on what’s good for the company.

Learning how to interact with customers in a productive and healthy way, even when sometimes you’re not having the easiest time working with the customer and they’re not having the easiest time working with you. Those have been incredibly hard-won lessons. That’s been the hardest part of the entire journey, the part where I’ve grown the most, but also the part that has been the most challenging. The best advice I’ve received probably came from my co-founders, Naveen and Hanlin.

One piece of advice from Hanlin that sticks in my mind is that he kept telling me, over and over again, that a startup is a series of hypotheses that you’re testing. That kept us very disciplined in the early days of Mosaic: stating what our hypothesis was, trying to test it systematically, finding out if we were right or wrong. That hypothesis could have been scientific, it could have been product, it could have been about customers and what they’ll want. Turning the startup into a systematic scientific endeavor made it a lot easier for me to understand how to make progress when things were really hard, and they were really hard for a long time. I know that wasn’t a rapid-fire answer to a rapid-fire question, but it’s a question I feel very strongly about.

Jon: Aside from your own, what data and AI infrastructure are you most excited about and why?

Jonathan: There are two things I’m really excited about. Number one, products that help you create evaluations for your LLMs. I think these are fundamental infrastructure at this point. There are a million startups doing this, and I think all of them are actually pretty phenomenal. I could probably give you a laundry list of at least a dozen off the top of my head right here, and I bet you could give me a dozen more that I didn’t name, because we’re all seeing great pitches for this. I have a couple that I really like, a couple that I’ve personally invested in, but I think this is a problem we have to crack. It’s a hard problem, and it’s a great piece of infrastructure that is critical. The other thing that I’m excited about personally is data annotation. I think data annotation continues to be the critical infrastructure of the AI world.

No matter how good our models get and how good we get at synthetic data, there’s always still a need for more data annotation of some kind, and revenue keeps going up for the companies doing it. The problem changes, what you need changes. I don’t know, I think it’s a fascinating space; in many ways, it’s a product. In many ways, my customers these days, the data scientists at whatever companies I’m working with, are also doing data annotation or trying to get data annotation out of their teams. Building an eval is data annotation. I mentioned two things, and these are both my second favorites, but I think they’re the same at the end of the day. One is about going and buying the data you need. The other is about tools that make it easy enough to build the data you need that you don’t have to go and buy it.

I have a feeling both kinds of companies have made a lot of progress on AI augmentation of this process. When I did the math on the original Llama 3.0 models, and that’s the last time I sat down and did the math, my best guess was $50 million worth of compute and $250 million worth of data annotation. That’s the exciting secret of how we’re building these amazing models today. That’s only going to become more true with these sorts of reasoning models. I don’t know that reasoning itself is going to generalize, but it does seem like you don’t need that many examples of reasoning in your domain to get a model to start doing decent reasoning in your domain. That’s going to put even more weight on figuring out how to get the humans in your organization, or humans somewhere, to help you create some data for your task so you can start to bootstrap models that reason on your task.

Jon: Beyond your core technical focus area, what are the technical or non-technical trends that you are most excited about?

Jonathan: There are two, one as a layperson and one as a specialist. As a layperson, I’m watching robotics very closely. Beyond all of the interesting data tasks we have in the world, there are a lot of physical tasks that it would be amazing if a robot could perform. Thank goodness for my dishwasher, thank goodness for my washing machine; I can’t imagine what my life would look like if I had to scrub every dish and every piece of clothing to keep it clean. Robotics is in many ways already in our lives; these are just very specific single-purpose robots. I don’t know if we’ll make a dent in the general problem this decade or in three decades. Like VR, robotics is a problem we keep feeling like we’re on the cusp of solving, and then we don’t quite get there, but we get some innovation.

I love my robot vacuum. That is the best investment I’ve ever made. I got my girlfriend a robot litter box for her cats a few weeks ago. I get texts every day going, “Oh my God, this is the best thing ever.” And this is just scratching the surface of the daily tasks we might not have to do. I would love something that could help people who, for whatever reason, can’t get around very easily on their own to get around more easily, even in environments that aren’t necessarily built for that.

I have a colleague who I heard say this recently, so I’m not going to take credit for it, but there’s an idea of things that make absolutely no logistical or physical sense in the world today that you could do if you had robots. In Bryant Park right now, right below our Databricks office in New York, there’s a wonderful ice skating rink all winter. If you were willing to have a bunch of robots do a bunch of work, you could literally take down the ice skating rink every night, set up a beer garden, and then swap them every day if you wanted to. Things that make no logistical sense today because they’re so labor-intensive and resource-intensive suddenly make a lot of sense. That gets me really excited.

Jon: From data intelligence to physical intelligence?

Jonathan: Well, somebody’s already coined the physical intelligence term, but yeah, I don’t see why not. Honestly, we’re dealing with a lot of physical intelligence situations at Databricks right now. I think data intelligence is already bringing us to physical intelligence, but there’s so much more one can do, and we’re scratching the surface of that. It cost Google, what, $30 billion to build extraordinary autonomous vehicles. The whole narrative in the past year has completely shifted from “autonomous vehicles are dead, and that was wasted money” to “Oh my gosh, Waymo might take over the world.” So, I’m excited about that future. I wish I knew whether it was going to be next year or in 30 years. I spend a lot of time in the policy world, and I think that’s maybe even a good place to wrap up.

Before I was an AI technologist, I was an AI policy practitioner. That’s why I got into this field in the first place, and why I decided to go back and do my Ph.D. I spend a lot of time these days chatting with people in the policy world, chatting with various offices, chatting with journalists, and working with NGOs, trying to make sense of this technology and how we as a society should govern it. It’s something I do in my spare time; I don’t do it officially on behalf of Databricks or anything like that. I think it’s important that we, as the people who know the most about the technology, try to be of service. I don’t like to come with an agenda. I think people who come from a company highly motivated to make sure a particular policy takes place are conflicted like crazy, will always come with motivated reasoning, and can never really be trusted.

I think coming as a technologist and asking, “How can I be of service? What questions can I answer? Can I help you think this through and figure out whether this makes sense?” is different. It’s a very fine line, and you need to be careful about it. If you come in with your heart set on being of service to the people whose job it is to think and speak on behalf of society, you can make a real difference. You have to be careful not to come with your own agenda to push something. A lot of people have highly motivated reasoning about why we shouldn’t allow other people to possess these AI systems or work with them, so you’ve got to be careful. You’ve got to build a reputation and build trust over many years. The flip side is you can do a lot of good for the world.

Jon: That is definitely a good place to leave it. So, Jonathan Frankle, Chief AI Scientist of Databricks, thank you so much for joining. This was a lot of fun.

Jonathan: Thank you for having me.

Scaling, AI, and Leadership after Big Tech: Lessons from Highspot’s Bhrighu Sareen

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Thinking about going from Big Tech to startup?

In this episode of Founded & Funded, Madrona Managing Director Tim Porter sits down with Bhrighu Sareen, who took the leap from Microsoft to Highspot and has been leading AI-driven product innovation ever since. They talk about the realities of transitioning from a massive company to a high-growth startup, scaling AI-driven teams, and how Bhrighu’s helping transform sales enablement through automation and intelligence.

His insights on navigating change, working with founders, and executing at speed are a must-listen for any leader considering the leap from stability to startup chaos.

Tune in now for practical strategies and an inside look at how AI is reshaping go-to-market execution.


This transcript was automatically generated and edited for clarity.

Bhrighu Sareen: Hi, Tim. Good to be here, and thanks for having me. It’s always a pleasure to talk to you. I’ll say, if you’re thinking about making a switch, Tim’s a really good person to speak to. I remember the first time we met. I had spoken to Robert and the co-founders a few times and was on the fence, not sure, and then Tim met me at this coffee shop in Kirkland, Washington.

Tim Porter: Zoka.

Bhrighu Sareen: Exactly, yeah. He was only, I think, 20 minutes late, maybe more, and gave me some excuse about it. He was trying to raise a lot of money for the next fund or something. Not that I could verify it, but we’ll believe him. So, memorable, not just for that. Thank you for always being there in terms of providing guidance, how to think about things, and specific examples from your past.

Tim Porter: Thanks, Bhrighu. That is a true story. My only defense, I don’t remember why I was late, but at least I came to you. I drove across the lake and to a coffee shop in your neighborhood.

All right, Bhrighu, why don’t we start with your journey from Microsoft to Highspot? What were you doing at Microsoft? Say a little bit more about building Teams and what inspired you to make the switch from Big Tech to startups. This is definitely a very common thing: people who have had successful careers at Big Tech companies think, “Hey, I want to do something earlier stage. How do I do it? Why do I do it?” You’ve done it super successfully. Tell us about that journey.

Bhrighu Sareen: Microsoft Teams was the last project I worked on, but prior to that, I had worked in Bing, MSN, and Windows. Microsoft is a phenomenal place because you can have multiple careers without leaving the company. I got an opportunity to learn a lot, started my career there and just worked up through the levels. Numerous challenges, great managers, great mentors, and a lot of very, very smart people that allowed me to grow and challenge myself.

Teams was a phenomenal experience. It was this small product. I joined before we even went to beta, and no one knew where it would end up, but the concept of redefining how information workers actually work, reducing the friction of having to use multiple tools to get the job done, was appealing. It was a phenomenal challenge, and it gave me a chance to learn. When I joined, like I mentioned, we hadn’t even shipped the product; it wasn’t even in beta. We went from there and took it from zero monthly active users to 300 million monthly active users when I left, so phenomenal growth.

I’d been there six and a half, seven-ish years, done a lot of different roles, and taken on different aspects of the product: PM and engineering together, ecosystem, partnerships, customer growth. I was looking for my next challenge. I think every product person has this thing in the back of their mind, like you mentioned: “Huh, should I do a startup? Is the grass greener on the other side?” I had been in Big Tech for 17-ish years with Microsoft, so in priority order, I wanted a challenge and a place where I could learn. Could I have taken another job at Microsoft and learned? Absolutely. I would’ve taken on a new challenge, a new dimension, but the depth of learning this change allowed was drastic. I think the greater the challenge, the greater the learning.

The other end of the spectrum was going from a big company to a smaller company. I report directly to the CEO, and that provides a peer group of the CFO, the CHRO, and so on, so I learn about the issues impacting the entire company. You’re not just focused on your product area and your scope. There’s also learning in terms of speed and agility. As a startup, you don’t have a billion dollars in cash sitting in the bank. You don’t have 25,000 developers you could pivot into whatever area you want to go after. It’s just speed and agility.

Tim Porter: Teams, zero to 300 million, it’s insane to think about that kind of growth.

Bhrighu Sareen: It is. It is.

Tim Porter: I remember our first conversation, and I was struck by a few things about you. One, on this point about agility, was that you seemed very much about making decisions and cutting through broader group dynamics.

Bhrighu Sareen: Exactly.

Tim Porter: Sometimes part of the art of being a good executive at a big company is how you embrace big group dynamics, and you were more about cutting through them, and you had lived through this hyper-growth, so that seemed like a great fit.

Even across our portfolio, there was more than one company recruiting you. Why did you pick Highspot? I mentioned a little bit about the product and what Highspot did. You wanted the right scenario with founders and challenge, but you ultimately had different product areas to pick from. Why build in this area?

Bhrighu Sareen: Another dimension of learning, in addition, was that of all the teams I had worked in at Microsoft, I’d never worked in Dynamics or in anything to do with sales or MarTech (marketing technology). It was another dimension where I could grow: a new technology, a new space, and one that was evolving very rapidly. Then, coming back specifically to Highspot, there are a few reasons why. One was the people. When you’re making such a drastic change, if it doesn’t feel right on the people front, it can become very, very messy.

You hear these stories about people making the move, and, oh man, it didn’t work out; in three to five months, they’re looking for their next opportunity. So the people were one very important part. Highspot is super lucky. The three founders, Robert, Oliver, and David, are good human beings to begin with, and they care.

Second, the space was new. Third, Highspot, when I joined, was known for content and guidance; that was the key thing. How do you equip salespeople with the right content at the right time? They had all the right ingredients. A lot of companies never make the switch from being a single-product company to a two-product company to a multi-product company. With our release in October 2024, Highspot has made that transition from a single-product company to a multi-product company.

When I saw these different ingredients, I remember after the first meeting with Robert … I got a cool story. Should I digress a little?

Tim Porter: Let’s hear it.

Bhrighu Sareen: I remember I wasn’t actively saying, “Hey, I want to leave Microsoft,” because I really enjoyed working there. Somebody mutually connected us, and it was a meeting from 4:00 to 5:00 P.M. on a Thursday. Our offices are in downtown Seattle by Pike Place Market. So, I drive up and go meet Robert. One thing leads to another, and when I got back into my car, it was 7:15 P.M. I called the person who had connected us, and this person’s like, “How’d it go?” I said, “I think it went well, but I think Robert was just being nice, because we were supposed to be done at 5:00 and we finished at 7:15.” And he’s like, “No, no, no, no. Robert’s met other people. Usually, it doesn’t go this long.”

You feel that connection, and so when I went back the next time to meet Robert, I was like, “Hey, you’ve built an LMS. You have a CMS. You’ve got all these different pieces. If we stitch this together, Highspot’s TAM can be significantly greater than what it is right now based on the product offering you have.” So, when you take those three or four things together, it felt like a good place to be.

Tim Porter: Fantastic. I want to come back to how you’ve worked with these founders. When you joined Highspot, there were, and there still are, three founders who are super active in the company: Robert, Oliver, and David. We did a podcast, Oliver and I, five years ago about Highspot. That’s one of my favorites ever. You and Oliver work super closely together now on product?

Bhrighu Sareen: Yes.

Tim Porter: All three are really involved, which has been such an amazing part of this company, but also, in theory, not easy, coming in and having this big role. You’re running half the company, product and engineering, yet there are these three founders who are all very opinionated about product and engineering. Yet it’s worked out. What has it been like coming into your role and working effectively with founders?

Bhrighu Sareen: I’m super grateful to Robert, Oliver, and David. One, for giving me the opportunity and two, how they welcomed and included me into Highspot. A bunch of friends of mine gave me advice that this was going to be a really bad idea because they’re actively, actively involved. Like you said, they have opinions. They have more contacts than I can ever have because they’ve been doing this for a decade.

Tim Porter: Absolutely.

Bhrighu Sareen: They have more knowledge. They have more connections. They have more foresight, because I had never worked in this space. David was running the show before I joined, and I’ve said this to David: I would not have been even half as successful or delivered even one-tenth the impact for Highspot if it weren’t for David. David has been this voice in my ear all the time, selflessly giving me advice and guidance, whether it was people, product, process, or strategy. “Hey, Bhrighu, here are the pitfalls. Watch out for this. Watch out for that.” He does it in a super humble way. He’s not being arrogant. He’s not trying to put me down. They could have said, “Okay, hey, great job. We got someone, good luck. If you have questions, let me know.” But they were proactive about helping me and helping us move forward, so I’m super grateful to David for that.

Tim Porter: I framed that question around how you navigated it, but there’s a great message to founders here around how you onboard a new exec, empower them, get them ready, and work together with them, and I think that’s super critical. I was going to ask how you got comfortable that that was going to be the case. Unfortunately, I’ve seen some cases where, “Yeah, I talked it through with the founders. They said they wanted me to be there, and then it turned out they didn’t.” On some level, it’s a human thing, and you build trust. Was there anything that gave you confidence they were serious, that they weren’t just saying they were ready to bring somebody in, and that they’d do all the things you mentioned David has done to help make you successful?

Bhrighu Sareen: Two things. One is reminding yourself that we’re all going to win together; it isn’t as if David or Oliver are going to have a different outcome than I am. So, I tell myself, “Okay, are they coming from the right place? And wherever we’re going, whatever decision we’re making, is it going to take Highspot to the next level?” If you think Highspot first, is it the right decision, then it becomes easy. Then it’s not about ego, whether your idea is right or my idea is right. It doesn’t matter. Is it the right idea for Highspot?

Tim Porter: Clearly, these folks wanted to win.

Bhrighu Sareen: Exactly, and so you come in with an attitude to win, with an insane bias for action, and you’re humble enough to say, “Okay, we learned,” because not all decisions will be positive, “we learned from that, and now we’re going to go back and fix it.” Even the human nature part is hard, I think, because founders in general have a different mindset, but these three things should align with every founder out there.

Tim Porter: So, having a great partner on the engineering and product side in David and Oliver.

Bhrighu Sareen: Yes.

Tim Porter: Robert is probably the best strategic product vision exec I’ve ever worked with. I remember seeing that way back at Microsoft, and now over the last 12 years at Highspot, but it can also be hard for you. He’s always the head of product in some ways, and that’s made the company so successful. How has that dynamic been?

Bhrighu Sareen: One other suggestion I’d make to the folks listening is that communication is key, and building relationships is key. Yes, all three founders are from Microsoft, and so am I, but our paths never crossed. I had never spoken to these folks. A lot of people say, “Oh, yeah, this was easy. You picked Highspot because they were Microsoft. You probably interacted with each other, and then they pulled you over.” I’m like, “I did not know these individuals by first name or last name, or even know they existed.” We had never crossed paths, whether in social circles or at work. When you’re starting new, commit to frequent communication and building trust.

One of the things we put in place was that every Tuesday, Robert and I have lunch. The second thing: the three founders and I would meet every Tuesday as well. Initially, they committed to sharing context and helping me grow. We said, “Hey, we’ll do this for three months-ish. I’ll be ramped up. Four months, you’ll have enough context, and we’ll cancel the meeting. We don’t need it.” Fast forward two-plus years, and we still meet every Tuesday, all three founders and myself.

Tim Porter: It’s often simple, smart mechanisms that are persistently applied.

Bhrighu Sareen: Yeah, there’s no shortcuts.

Tim Porter: There’s no substitute.

Bhrighu Sareen: Exactly. There’s no substitute, absolutely. There’s no shortcut in building trust. You have to put in the time. I think animosity builds up when you are making stories up in your mind. He said this, so he actually means that, or she said that, so this is what’s going to happen. Why don’t you ask them what they mean? But you’re so busy in your day-to-day life that you don’t. So, my lunch with Robert every Tuesday forces us to discuss a whole bunch of things.

Tim Porter: Maybe mention some of the initiatives or structural things that you’ve done at Highspot. As someone on the board, I’ve been struck overall by how the company’s gotten more efficient while also shipping even more; the velocity is amazing. You’ve done some things around team structure and offshore, but talk about some of the things you put in place, building on the great foundation the founders and others had built, and how that’s gone. Has it all been easy? Has it been challenging? What’s it been like?

Bhrighu Sareen: A super complicated question. I’m not sure where to go, because you’ve thrown in “putting structural things in place, how do you increase product velocity, how did you get costs under control.”

Tim Porter: We can throw you a softball. You can take it in any direction. Specifically, how would you describe the major initiatives that you had to put in place when you got to Highspot from a product and engineering standpoint?

Bhrighu Sareen: I’ve lived it for the last two-plus years now, and I have to start by saying that when I joined, everyone’s heart was in the right place. The clarity of where we were going was always there, and that’s super important, because if your founders, the exec team, the VPs, senior directors, directors, and so on aren’t clear on where you’re going, that’s a big problem. At least those two parts, clarity and everyone’s heart being in the right place, were there.

So, coming back to your question with that context: we realized that the company had been in growth-at-all-costs mode. Before I joined, the previous 18 months to two years had been phenomenal, and we’d been hiring people all over North America. Now that you have clarity on where you want to go, how do you make sure you maintain velocity by removing roadblocks for the crews? We identified that our smallest unit of impact is actually a crew, and one of the things we put in place was something we called an edge meeting, because your ICs are the edge.

Tim Porter: How big are crews, roughly?

Bhrighu Sareen: Eight to 10, with a product manager, an engineering manager, a bunch of IC engineers, and a designer. So, the crew is your unit of work, and they’re the ones that get stuck, whether it’s a cross-team dependency, the designer being on some other project, unclear architecture, or a decision that has to be reworked three months later. The product leadership team decided that every two weeks, we’re going to meet with every single crew. One week we do crews one through, let’s say, 12, and then 13 through 24 the next week.

We were spending about 15 hours a week on it, and initially, the teams were like, “Whoa, whoa, whoa. This is micromanagement. It’s a bad idea, or it’s a waste of our time.” And I said, “One second. Except for the product leadership team, who will be in all the meetings, a crew is only spending one hour every two weeks.” Looking back at what it allowed the crews to do, everyone’s like, “Oh, this was great.” The uber concept was to enable every discipline to have a voice and raise any issue in an open, transparent, and predictable manner.

It isn’t that some PM and engineer had a meeting with our VP of engineering and took a decision, so the design team was like, “Hey, you left us out.” It wasn’t as if anyone was doing this out of bad intent. It was just the speed at which we wanted to move: you met someone, got on a Zoom call, took a decision, and moved on. It wasn’t that anyone was purposely leaving the engineer or the PM out. Whoever got to the decision makers took a decision, got a roadblock removed, and moved forward.

The edge meeting was initially hard for people to see the worth of, but now, when you look back, engineers bring in architecture documents, PMs bring in specs, we look at Figmas, and we’ve been able to remove so many blocks.

Tim Porter: Making sure everyone has a voice, smaller teams where everyone can be heard, and then also the frequency increased. So, instead of every other week, how often do you have this now?

Bhrighu Sareen: No, it is still every two weeks.

Tim Porter: Okay, got it. They get much more out of those times together.

Bhrighu Sareen: Correct, but if they need additional time, they can always ask for it. But it’s predictable, as in everybody in the company knows, “Oh, this particular crew is going to have a review at this particular time.”

Tim Porter: That’s fantastic.

Bhrighu Sareen: Then one more thing I did, since you asked about structural things, was around cross-team dependencies. Usually, for companies of a certain size, whether you’re mid-size or larger, even Big Tech, one crew can never ship an entire thing. They depend on some infrastructure pieces, a cross-team, or UX, or whatever it might be. One interesting thing happened six months after we put this in place. A particular crew comes up and says, “Hey, we are unable to ship this.” We’re like, “Why is that the case?” “Oh, because we have a dependency on this other crew, and they’re unable to do it.”

So, we evolved the edge meeting so that any crew can summon any other crew, and any crew can join any other crew, because it’s transparent. Everyone knows the schedule and which crew is presenting when, and they can come in. It did a phenomenal thing: 90% of cross-team dependencies get resolved before they ever show up to the PLT. You have a crew. I have a crew. We’re PMs. One crew is like, “Hey, Tim, I really need you to show up to the product leadership team, our edge meeting. We want you in there because there’s a dependency.” Tim’s like, “Oh.” And you’re like, “Hey, can we just resolve this right here?”

Tim Porter: Absolutely. Create an environment where teams can work it out amongst themselves and not have to bubble everything up, at a much faster pace. That’s fantastic. Maybe talk a little bit about how you’ve organized around offshore and a few centers of excellence. I know a big topic of conversation for lots of companies is back to office and being together, but there’s also a need to have engineering groups in other geographies for various reasons. I think you’ve done a nice job of finding a great way to do that. Maybe describe how you’ve done it.

Bhrighu Sareen: I started in September 2022, and if there are folks listening to this who are considering this move, who have always been in product and never in other kinds of roles, I would recommend, whether you’re thinking of joining or once you join, sitting down with the CFO or VP of finance to understand how the numbers work. Chris Larson, our CFO, was super gracious with his time and, after I started in September ’22, showed me the numbers. ’22 was this interesting year where, in the second half, things had started slowing down, and people weren’t sure what was going to happen to the economy. Were we going to enter a recession? Would it be a soft landing? It could go any which way.

I looked at the numbers, and I was like, “Hmm, if our goal is to hit profitability, it doesn’t feel like we’re going to; this glide path isn’t moving in the right direction.” After getting educated, I realized we had to make a few changes. One of the things we looked at was that we wanted access to a lot more talent, and we asked: can we do it, where can we do it, and how would it work out? The first thing we did, in November of ’22 with my direct reports, through a connection at the Canadian Consulate in Seattle, was actually go up and meet the government of British Columbia. From Seattle to Vancouver, it’s only a two-and-a-half to three-hour drive, depending on how fast you drive, Tim, or a three-hour train ride, and it’s super convenient. You could do a day trip if you needed to, meet the team, and spend time.

We got a lot of good support from the government of BC, and we decided to open an office in Vancouver, Canada. When we started, we said, “Hey, over a two-year period, we’ll put about 50 people in Vancouver.” Actually, in 18 months, we had already hit 50 people, and that gave us our first distributed center beyond remote. We were already doing remote before I joined Highspot, but this gave us a center of excellence and access to a lot of talent, and it allowed our engineers and leaders to see whether we could run a remote center, and to practice before we did something big.

Then, after we saw that working, we decided to start a development center in India. Now, there was a lot of debate internally. A lot. You can imagine all the different reasons why we should not do it and maybe a couple of reasons why we should, but we moved forward. Again, this is about agility and speed, let’s try things out, and India offered Highspot access to an insane amount of talent. Over the last two and a half decades, talent in India has evolved. You could always get really good college hires, but so many companies have opened offices there that there is now real middle management, and having the right managers helps, because they can coach, mentor, and grow early-in-career talent. That matters. Then, at the senior level, there’s actually a decent population of leaders who could run your development center in India.

Tim Porter: That’s been key for you. You have a great leader on the ground.

Bhrighu Sareen: Gurpreet.

Tim Porter: He’s been an awesome partner in making this work and growing it.

Bhrighu Sareen: That brings me to the other point I was going to make, which is the Indian population outside of India, people who have lived here and worked for these companies; a lot of them are actually moving back. Gurpreet spent, I think, 30-plus years in the United States, and then, for family reasons, he wanted to move back. So, he moves back with all of that knowledge: how to work with an American company, how to work cross-geo, how to work cross-time-zone, and how to have conversations with customers. When you’re building product with end-to-end ownership, there will be moments when the team talks to your customers, and you want to feel comfortable that they can handle that situation.

Gurpreet is a great example of a leader who can take an office from zero. Real stats: July of ’23 is when he signed his employment offer. Fast forward to September of ’23, and we had four employees: Gurpreet, two engineers, and a designer. Fast forward to right now, and we have 125 people there and lots of open positions.

Tim Porter: Fantastic. We’ve been talking about things that have gone well, and coming to an earlier-stage company from a big company, I’m sure it’s not always that way. There have been challenges, and we’ve managed through them really well. Just for the listeners, what was a surprise? You had a good sense of this team and everything, but there are still surprises. Was there something that surprised you in the transition from a big company with lots of resources to something smaller, and how did you deal with it?

Bhrighu Sareen: If I were to prioritize all the challenges, I think the number one was how drastically the economy changed.

Tim Porter: Yeah, there was a big externality that came about that wasn’t really expected, to the extent that it hit.

Bhrighu Sareen: Exactly. I’m changing jobs, changing scale, changing employer, changing manager, changing technology, changing all these different dimensions, and then the last thing you expect is a downturn coming out of the middle of nowhere, and you’re like, “Oh, my gosh, what’s going to happen now?” But if you ask me how I think about it, I actually look at it from another perspective, which is, like I said earlier, that my number one goal was learning along a number of different dimensions. Guess what? That learning just got accelerated, and when you come out the other end, you’re going to say, “Huh, I learned all of the things I was trying to learn, but did it in an environment that might not show up again.”

It’s cyclical, so it will show up at some point, but maybe not for another four, five, or 10 years. Who knows what the exact timeline is? But that, I think, was the biggest challenge. You’re on our board, so those board meetings were interesting: okay, your burn rate is crazy, but hey, can we just raise money like we have every single round before? Somehow the money dries up, the goals the market expects you to hit change, and the company had never done layoffs before. Brutal. It’s just super hard.

Tim Porter: It was a hard period. The biggest thing is that a company like Highspot had been growing meteorically, in large part because existing accounts were hiring lots more go-to-market people, seat-based, so renewal rates just kept going up and up and up. All of a sudden, our customers’ budgets didn’t just get frozen; they got slashed.

Bhrighu Sareen: Correct.

Tim Porter: But we managed to keep innovating through that, finding more things that add value for them while also getting efficient ourselves through some of the things we’ve been talking about.

Bhrighu Sareen: Exactly. The second thing I’d say on that topic, the previous question you asked, is that you touched on a really good point. We’ve all heard stories of companies that, during the dot-com bust, decided not to do product innovation and just ride it out, or through the financial crisis, to just ride through it, save money, and conserve cash. But thanks to the folks on the board and the leadership team, we decided to say, “No, we’re going to make this transition,” and now we’re seeing the fruits of that investment decision start to pay off.

Tim Porter: That’s perfect. Let’s talk about that. Highspot’s always been an ML-based company. I think of the very early days: how do you find all the sales content? Well, if you get the right signals and you put those into an ML system, you can find the right things more effectively, but it wasn’t AI the way it is now. Talk about some of those. Give some examples. What are the things that you’re shipping? How do you use AI today? We can dig into that a little bit. Every company’s becoming an AI company, but Highspot is really there and has it in production with customers.

Bhrighu Sareen: You’re absolutely right. There are patents with our co-founders’ names on them, and other folks’ too, around ML technology and things like that. One of the interesting things, given the stage of the hype cycle we’re in right now (the trough of disillusionment, I think), is that a number of our customers asked us, “Hey, is any of this real, or is it all just hype?” What I had to land with them is that if you’re Big Tech and you’re trying to reason over the significant amount of data you have for an individual, across their emails, meetings, calendar, and everything else, there are three dimensions. One is how much compute you want to put against it. Two is latency, because if you give the large language models or your algorithm enough time, they will actually give you the right answer. Three is cost: how many dollars do you want to throw at this problem, or at every query that’s sent?

I was telling some of our customers that, in relation to that, the amount of data Highspot has to reason over for a particular individual, or across an entire domain or an entire customer, is tiny. It’s super tiny in comparison to what the technology allows us to do today. Two, there are a number of dimensions we can put together offline, processing once a night, so the latency part gets taken care of. And there are real scenarios with real value.
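
As an illustration of that offline-processing point, a nightly batch job can absorb the compute and latency cost so that daytime queries become cheap lookups. The sketch below is hypothetical; the function names and file layout are assumptions, not Highspot’s implementation.

```python
# Hypothetical nightly batch job: spend compute and latency offline,
# so daytime queries are cache reads instead of live LLM calls.
import json
from pathlib import Path

def summarize(doc_text: str) -> str:
    """Stand-in for an expensive LLM call; replace with a real model call."""
    return doc_text[:200]  # placeholder "summary"

def nightly_precompute(doc_dir: str, out_path: str) -> None:
    """Run once a night over all documents, writing a summary cache."""
    cache = {doc.stem: summarize(doc.read_text())
             for doc in Path(doc_dir).glob("*.txt")}
    Path(out_path).write_text(json.dumps(cache))

def lookup_summary(doc_id: str, out_path: str) -> str:
    """Daytime path: a constant-time read with no model in the loop."""
    return json.loads(Path(out_path).read_text()).get(doc_id, "")
```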

As an example, most companies do a sales kickoff once a year. You bring your salespeople together, you have a conversation, and you share the new products you’re launching. In some cases, they’ll be asked to deliver a golden pitch deck, the pitch deck marketing has created. Let’s say we’re colleagues sitting around a table at the sales kickoff, or you’re my manager. What I’ll do is read up, learn about it, and then pitch it to you. You’ve got a rubric, you’ll score me on those things, and then I’ll do the same back for you. That’s expensive, because you have to take people offline, out of their day jobs, to do this, or you have salespeople record it, and then the managers have to take time out and score.

A lot of sales managers are like, “I need to hit quota this month. I don’t have time for this.” So, we built a feature as part of our training and coaching product where the person creating the learning and development content has the training and a rubric: the verbal and nonverbal skills they’re looking for, saying the right words, how the pitch should come out, for the salesperson to be successful. A fintech company used this, sent it around, and got 800 responses back. Then the manager gets to see the video recording and the rubric. The AI uses the rubric that was provided to grade all 800 of these.

What we saw was that in 55% of the cases, the manager left the AI-graded feedback as is. In another 36% of cases (91% minus 55%), they either added or deleted one sentence. In only 9% of cases did managers actually delete what the AI recommended as feedback and rewrite it.

Tim Porter: Wow.

Bhrighu Sareen: This financial services company, not fintech, a huge financial services company, was super impressed, and they’re actually now rolling it out. A few hundred people, 800 or so, had access to it. Now, they’re talking about thousands of licenses for this one feature because the value and the time they’re getting back are significant.

The focus is on features that save time for salespeople, marketing people, support services, and the learning and development people creating the content, and that show the right ROI. I can go on, feature after feature after feature that’s been resonating.

Tim Porter: That’s awesome. This is AI coaching that’s really working on both sides: the feedback is accurate enough that the manager doesn’t have to rewrite it, and then you see the impact with the end users.

There are a bunch of features. You talk about the Highspot AI Copilot. There are the ways you do scorecarding, content summarization, automatic document generation. I think people who maybe don’t know Highspot are interested. Yes, they can go read the website, but maybe just rattle off a couple of these other new features and how you use AI for customers.

Bhrighu Sareen: Yeah, perfect. One other one that I’ll talk about: a lot of organizations have a system of record for their customers, the CRM. They have a system of record for finance, let’s say the ERP. But today, they don’t have a system of record for their go-to-market initiatives. It’s super interesting because you have so many different disciplines that have to come together to actually execute an initiative.

One of the initiatives, as an example, and I’ll use it to outline the capabilities we’re talking about: during 2022 and 2023, the CFO or the VP of finance inserted themselves into the buying process. We saw more and more sales cycles getting elongated. It was no longer just procurement, where once the solution owner says, “I want to buy this particular product,” procurement does the paperwork, negotiates, and gets the deal done. Finance was like, “Whoa, whoa, whoa. We need to make sure this spend is correct. Is it worth it?”
A lot of our customers wanted to roll out initiatives around this: how do we train our salespeople to talk to the finance people? Now, if one of your initiatives is to increase expansion on product line ABC, there are so many pieces that have to come together for that to be successful. One is content: what content will get used, and is it being used? Is it created? Is it effective? How do we provide training? For example, one training could be on financials, like how to talk to people in finance, and then product training on whatever product line you’re talking about. What’s the right sales play to use? Here’s a digital room. All of these are capabilities that Highspot has, but you include them in your initiative.

Then, you want the ability to say, “Okay, all this training that I’m providing, the keywords I want used, are they being used?” So, how do you check that? The really cool thing is that today, a lot of meetings are getting recorded, and Highspot’s conversational intelligence capabilities, again using AI among other things, are not only able to draw out who said what but also understand the intent behind it. So now you can have a single scorecard, a system of record, for all the initiatives you care about, and the cohorts, because it could be a mid-market initiative, or one targeting enterprise customers or commercial customers.

Then, in a single view, you can have a conversation with your CRO and say, “All right, this is the content being used. Here’s how it’s being used.” Your CMO can look at how it’s performing and decide if they need to make changes. You can see the impact of all the training in real time and look at your salespeople. Are they saying the right things? How are customers responding? Again, it shows up in your initiative, and you can look at individuals, like in the previous example, where you can see: hey, this rep, on these 12 metrics, you’re doing really well on nine of them, but on these three you could actually do a little better.

Highspot is that one unified platform where you go from content, to guidance, to understanding what’s happening with the salespeople and what skills and competencies they need to get better at, and then to recommending the training. A lot of platforms out there today can tell you, “Hey, this meeting went well. Here’s the agenda. Here are the topics that were discussed. Here are insights about the meeting.” That aspect is now a commodity. But when you have to take action on it? “The meeting is over now; send a digital room with this aspect.” Highspot can do that. You want to take another action: “Here’s a set of skills this rep could do better in.” Highspot can do that. “Now, recommend a training based on the skills they need to improve.” Highspot can do that.

You want to follow a sales play, because we’ve combined our data with the CRM, to then say, “Okay, this is a pre-sales opportunity in the financial services business. Marketing has created this template. Let’s automatically generate the document, the presentation that will go out for this particular deal, and bring in pricing if you choose to do that.”

Tim Porter: It’s so cool and illustrative. It’s such an integrated story from the customer’s perspective, and the customer doesn’t necessarily care about the technology; they want outcomes. I see this across so many of the companies, even the very early startups: all this time and effort on we’re going to do this initiative, we’re going to run this campaign, we’re going to launch this thing, so we have to get everybody ready. We have to train them. Then, at the end, or as it’s going, you see results, like whether revenue is going up or down, and whether individual reps are hitting quota or not. But the why within that is so hard to tease apart.

Bhrighu Sareen: Exactly.

Tim Porter: So, you can go all the way from how we train folks, to what the actions were, to what ultimately is working, and that way drive more revenue, more efficiently, the two golden things. Every company is asking, “How do we get more revenue? How do we get it more efficiently?” So, not only is it an integrated experience for the customer, but you’re using AI in so many different ways.

Bhrighu Sareen: Exactly.

Tim Porter: Call recording and insights, automatically generating content, analyzing it, so that’s pretty neat, too.

Bhrighu Sareen: Recommending content.

Tim Porter: Recommending content.

Bhrighu Sareen: A lot of different aspects.

Tim Porter: Putting it all together, what does that mean for new startups? We invest in new startups. There’s lots of innovation happening in AI, and one of the exciting things about it is that you can build things really fast that have an impact. There’s a lot in your space. There are a lot of cool things happening around the AI SDR and all the different pieces across sales. How are you thinking about all of those startups and the opportunity for them to innovate versus what Highspot’s doing?

Bhrighu Sareen: Tim, I’m going to go back to something you said to me in ’22.

Tim Porter: Uh-oh.

Bhrighu Sareen: You said something interesting: there was a realization in the startup world that for a lot of companies that got funding prior to ’22, what they had built was really a feature, even if it could grow up to become a standalone company. I don’t remember your exact words, but my takeaway was that a number of startups could have very, very good outcomes, but as opposed to being standalone, their outcome would be as a capability that’s part of a bigger product, something like that.

Tim Porter: Yep.

Bhrighu Sareen: That stuck with me, because when I look at a startup, especially early stage: if you have a great idea, that’s awesome. Keep going, but know when it’s time to be part of a bigger thing. You think it’s a really cool idea, it’s growing well, you’ve found product market fit. Either you have to start expanding into verticals or product spaces that are adjacent, or you have to figure out when’s the right time to say, “Okay, I’ve got to be part of a bigger product.”

Tim Porter: Also, to connect back to something you referenced earlier, it often comes back to data too, doesn’t it?

Bhrighu Sareen: Oh, yeah.

Tim Porter: Access to data.

Bhrighu Sareen: Yeah, that’s another good point. Super good point, yes.

Tim Porter: You mentioned the big Co’s, like your former employer, they just have so much data across everything.

Bhrighu Sareen: Insane.

Tim Porter: Yes, it’s almost always better to have more data than less data, but it also creates challenges; it’s just super expensive to process all of that. Then you have new companies, where the question is: do you have enough of a data set? In your case, you said you don’t have that much data relative to someone like Microsoft. And yet, thinking through some of your biggest customers, some of the biggest technology companies, logistics companies, financial services companies, medical device companies, you have their whole corpus of sales and marketing content: docs, decks, white papers, et cetera. Is there something about having the right amount of data for these needs? Say more about that.

Bhrighu Sareen: You’re absolutely right. People always say, “Hey, get all the data. Get all the data. Get all the data,” but you’ve got to figure out the right sweet spot for the scenario you’re trying to deliver. That’s something we’ve been very disciplined about internally: getting the data, cleaning the data, attaching it, and getting the right insights from the data. Then, just as important, switching tracks since I agree with the point you made, is the user experience. As a startup, you have to decide, “I want to be the single pane of glass where everybody shows up.” But guess what? I think Teams, Slack, Zoom, Outlook, and Gmail are the horizontal applications where, regardless of whether you’re in sales, marketing, finance, procurement, engineering, PM, or design, you’re going to spend your time. Those are the tools you’re spending your time in.

Then you have role-specific applications. If you’re a designer, it could be Figma or Adobe. If you’re a finance person, you have your own application. If you’re a salesperson, it’s the CRM. So, there’s a set of horizontal applications that people spend their time in. The UX part, I think, is another decision a lot of companies need to make: are you going to say that your data, your insights, and the AI experiences you’ve generated should stay only inside your owned-and-operated properties, or is it okay to show up inside Zoom, Slack, Teams, Outlook, and Gmail, where users are today?

Then there’s another question you’ve got to ask yourself: with the new user experiences around these agents and copilots, do you surface your data and your insights there or not? Then, once you figure out that stack of data, insights, and user experience, what do you do about the business model? Let’s say I’m a customer, and you’re here to sell to me. And I say, “Hey, I’ve built my own internal copilot; that’s our company-wide thing. I love this insight around content recommendation,” which is a great example, “and our salesperson is sending out an email. Our copilot sits inside our email platform, whether it’s Outlook or Gmail, and we want you to plug into that. So, we don’t really need to buy this whole other thing, because we’re not using the scenario inside your product. We just want API access, so we’ll just pay a data transfer fee, and that’s good enough.”

But you’re like, “Wait a second. I ran all the analysis, and I’m delivering you an insight. I’m not just delivering you access to data over an API.” These are interesting conversations that we’re going to have to have.

Tim Porter: Let’s put our future hats on here. So much continues to happen in this space with AI. Agents are a big topic of conversation; Salesforce is certainly talking about them a lot, as are lots of new startups. If you think a year or two ahead, maybe pick a year ahead, what are some of the things in AI you’re most excited about that Highspot might be building toward in that timeframe?

Bhrighu Sareen: If you had asked this question last year, you would not have used the word agent; you would’ve used the word copilot if we had recorded it then. This year, you’re calling it an agent. Two years from now, I have no idea what it’s going to be called, but the one thing I am confident about is that, month over month, the speed of innovation is amazing. It’s such an amazing time to be in tech. I think every decade we say this is an amazing time to be in tech.

Tim Porter: It keeps being right.

Bhrighu Sareen: Yeah. It continues to be an amazing time to be in tech, because every month, it doesn’t matter where you’re working, whether it’s a big company, a small company, or a medium-sized company, or whether you’re thinking of starting something new, the question we ask ourselves is: how do we take advantage of the technology we have access to in order to provide value to that user and the tasks in their day? So, regardless of what it’s called, if we focus on that and on a list of capabilities with measurable return for our customers, plus delight, I think those are the two things.

A lot of times it’s, “Oh, the ROI is huge,” but can you also deliver delight? A lot of companies focus on ROI. Very few focus on delight. And we have this unique opportunity. Say I’m a salesperson with a 200-slide deck, which is actually a library, that I have to customize before I present to the customer. I’d spend, I don’t know, an hour or two doing that: putting in the right logo, the logo is not the right size, making it smaller, making it bigger, all kinds of things. The delight of being able to complete that by answering four questions and getting it done in six minutes is mind-blowing. The look on the reps’ faces is priceless.
So, Tim, I’m not sure what we’re going to call it, but I would say focusing on return and delighting the customers. I think that will be the focus.

Tim Porter: What I hear you saying is that there are going to be ongoing, radical productivity gains: being able to do tasks so much faster, and maybe more accurately. Just so you’re not the only one putting yourself out there, I think there is a huge thing to this agent notion, and yes, it’s a newer name, but people will be able to interact with Highspot through natural language and through voice, and the system will complete tasks for them. So, to your point about reducing steps, it might be the same outcomes, but you get there faster and the system does more of it autonomously. My guess is we’ll be talking about that at your user conference in the year or two to come.

Well, Bhrighu, thank you so much. Thanks for all you’re doing at Highspot, and thanks for this great advice on innovating in AI and on making the jump from a big company to an earlier-stage company. I’m super excited to see what you’re going to build in the years to come.

Bhrighu Sareen: Perfect. Thank you very much.

Tim Porter: Thanks so much for being here.

Bhrighu Sareen: Tim, thank you for having me. Thank you very much.

Serial Entrepreneur Mohit Aron on Founding, Scaling, and Leading Great Companies

Listen on Spotify, Apple, and Amazon | Watch on YouTube

What does it take to go from engineer to founder, from startup CEO to scale CEO? In this special live episode of Founded & Funded, Madrona Managing Director Karan Mehandru sits down with Mohit Aron, founder of Nutanix and Cohesity, to uncover the lessons behind his success. Mohit shares his unique hiring strategy, how to identify product-market fit, and the power of balancing vision with execution. Whether you’re navigating your first company or scaling your third, Mohit’s advice is a masterclass in resilience, grit, and building a legacy.


This transcript was automatically generated and edited for clarity.

Karan: I have a lot of questions for you, so I’m going to pull up my notes here to make sure I don’t miss anything. If you do it once, you’re lucky. If you do it twice, I feel like there’s a lot of skill. So the first question I have for you is, did you always know that you were built to be a founder? And what do you think the best founders have as far as capabilities, characteristics, traits? What does it take to be one that scales these companies to multiple tens of billions of dollars versus ones that fizzle out? Where do you begin?

Mohit Aron: All right. I’ll say for the first part, did I always know that I am meant to be a founder? Heck, no. All I knew was that I liked to put myself into uncomfortable situations. And that’s what it takes, number one, to be a founder because if you’re just sitting in a comfortable position, you’re not going to be a founder. So, trying to be a founder was just one more attempt to put myself in an uncomfortable position. That’s how I began.

Karan: Maybe they have cryogenic chambers for that. You can run a marathon, but you ended up starting a company.

Mohit Aron: It’s the one thing where I thought, “Let me put myself into yet more ways of being uncomfortable.” What it means to me to be a founder is: are you passionate enough to solve a certain problem? I’ve seen people build companies for the wrong reasons. A lot of people build companies for the sake of building a company, or just to be a founder. That’s the wrong reason to build a company. Another reason people build companies is, “Oh, I want to make a lot of money.” That’s also the wrong reason. If you build a company to make a lot of money, you’ll probably make some, but not a lot. I promise you that. Because building a great company involves a lot of ups and downs. The minute you have a down, you’re probably going to get cold feet and say, “Okay, let me go do something else.” Maybe start another company, right? And now, you’ve left a lot of potential on the table.

So if you are really passionate about some problem, then, and only then, do you have an opportunity to really build a great company. As part of building a company, there are going to be lots of ups and downs, and your passion for solving that problem means that you’re going to persist. Persistence and grit are among the key things you need, because you’re going to fall down a lot of times, no matter how many times you’ve done this before. For me, that’s what it means to be a founder.

Karan: That’s great. One of the things that I’ve repeated multiple times is that the probability you build a $10 billion company is inversely proportional to the number of times you state that as your goal. And it’s very true in your case, having known you for almost a decade and a half now.

You made three transitions in your career that are really hard to make. You went from a non-founding engineer to a founder, then you went from a founding CTO to a startup CEO, and then you went from a startup CEO to a scale CEO. Each one of these has its own set of challenges. Walk us through what the challenges were as you made each one of these transitions. What were things you had to leave behind? What were the new traits that you had to pick up?

Mohit Aron: Sometimes employees ask this question: how come the founder has such outsized equity? I used to ask that too. The answer is that, eventually, the buck stops with the founder. Everything that goes wrong is, eventually, the founder’s problem. The employees are each just doing their one thing.

Karan: It may not be your fault, but it is your problem.

Mohit Aron: That is right. You get blamed for everything, and that’s the first big jump I had to make. That as a founder, whether or not it was because of me, I owned it.

Karan: It’s the owner mentality.

Mohit Aron: It’s the owner mentality. That also means that you’re doing things you’re not an expert at. That was the first big thing I had to become comfortable with. The second thing was hiring. As a non-founder, you mostly hire people in your areas of expertise. As a founder, you’re now hiring people who are very much not in your area of expertise. You have to learn, and you have to keep them happy, because guess what? If you hire great people, they also have lots of other opportunities. Even if they decide to join you right now, if you can’t keep them happy, there are plenty of companies. Plenty of great companies that VCs fund, right? It doesn’t take long for them to jump ship.

So, becoming a people person while you keep doing what you do best is very important. That’s another transition you have to make. Companies are, after all, all about people. The best people. How do you hire people? How do you learn to hire people, especially outside of your expertise? If you’re a technical founder like I was, you have to learn how to hire people in, let’s say, sales and marketing.

These are some of the things I sometimes had to learn the hard way. If you’re a non-founder, somebody else set the vision for the company. Somebody else decided what the company was going to do. Now, you’re the one to blame. If it doesn’t go well, it’s on you. Learning how to set that long-term roadmap for what the company is going to do is another thing I had to learn. I had to build a strategy for doing that. That’s the strategy behind my companies.

Karan: Let’s talk about hiring, because that’s a big topic for all of us here. We heard a lot of folks here had to let go of people in the last two years, and now we’re starting to hire again. You mentioned something that has always stuck with me: you’re a technical founder, a product architect, yet you were able to hire some amazing go-to-market talent on the Nutanix and Cohesity journeys. I remember pulling aside some of your lieutenants and saying, “Okay, use some words to describe Mohit.” And they’d say, “Tough but fair. He’s a hard boss.” The way I interpret that is that you have never tolerated, much less celebrated, mediocrity in your organizations.

What are the things that you look for when you’re hiring somebody, for example, a CRO, when you’ve never been a CRO yourself? What are you looking for as you hire somebody into that role?

Mohit Aron: I’ll lay it out as a generalization. First of all, great companies are built by great people. I’ll say that again. If you have an underperformer in a job, either you have to do the work for that person, because the buck stops with you, and then you cannot scale, or the work doesn’t get done. By the way, I will talk about repeatability again and again. It’s all about repeatability. We’ll get to that.

To hire, I literally came up with a hiring strategy. For every role, I come up with what I call a list of competencies you need in the role. For instance, if you’re hiring, let’s say, a sales leader, maybe you need the person to have done it before. Maybe the person should have been a VP of sales at a prior company, maybe for a number of years, and maybe the person should come from an enterprise background. Whatever those competencies are, I split them into what I call scorecards. Three scorecards. The first one is what I call a pre-interview scorecard. These are competencies you can figure out by looking at the person’s resume or maybe by doing a phone screen, because I don’t want to waste time. A lot of time goes into interviewing people, and I don’t want to waste it if the person doesn’t even meet the basic competencies. So, that’s the pre-interview scorecard.

Then the next one is what I call an interview scorecard. These are the competencies I’m going to test for when the person is interviewed. I don’t test all of them; maybe I’ll test three or four, and I’ll have other interviewers test the others. Collectively, we form a very data-driven, full picture of the person.
The last one is what I call a reference-check scorecard. These are the things you ask when you do reference checks. Now, in the absence of this, here are some of the mistakes people make. The first mistake: people interview someone and really like the person, the way the person speaks, the way the person moves. Basically, it’s a chemistry match. But please understand that a chemistry match is only one of the competencies you need in a role. You may need some 10 other competencies. Ten other big rocks, if you will. You need to look for those. Unless you have them written down and explicitly test for them, you’re going to make mistakes. Then there’s another mistake people make when they do reference checks. Everyone knows reference checks are important, but here’s the big one: they’ll call up the reference and ask, “Hey, is this a great person?”

The person says, “Yeah, this is a great person. Hire the person.” And that’s the absolute worst reference check you can do. Again, you need to lay out what you’ll ask across a bunch of competencies. Maybe the first question is, “On a scale of 1 to 10, would you hire this person again?” Because a 6 is a failing mark. People hesitate to give a negative answer, so if you push them to put a number on it, they’ll say 6 or 7. That’s really a fail. Unless it’s an 8 or 9, I would not hire the person. Similarly: did the good performers in your company value this person? Then, if you want to validate some strengths: “Does the person have a good methodology in their day-to-day execution?” on a scale of 1 to 10. Everything is on a scale of 1 to 10. That’s when you start getting the real answers. For instance, if they say 8, you ask them, “So, why not a 9?”
Then they’ll say, “Well, there was this one occasion when they didn’t do well.” Then, you can poke into that. But if you just ask, “Is this a good guy?”

“Yeah.” That’s the absolute wrong reference check to do. This is the hiring strategy I use. It has significantly increased my probability of hiring good people, but I would say it’s not 100%. No hiring algorithm is 100%. And even if you hire a good person, they might be good at the time you hired them; a couple of years down the line, especially if your company is growing at 100% a year, so in three years it’s 8X, the person may no longer be a fit. People feel really uncomfortable with this, but you have to performance-manage.

Karan: That’s great.

Mohit Aron: When you hear people saying that I’m tough, it’s when that person is not working out. I don’t want a hire-and-fire culture, so there’s a period of time when I’m trying to uplevel the person. And guess what? The person is in pain. My goal during that time is to uplevel the person; it’s up or out for me. Either the person elevates himself or herself, or they’re going to be out. As simple as that. And there’s a time threshold I have for that. That’s it.

Otherwise, I will end up doing the work for the person. Or worse, nobody does the work and the company is not doing well.

Karan: One of the things I’ve always appreciated about the way you manage and lead is the combination of autonomy and accountability that you held very tightly, and I think that’s worked really well. By the way, there’s a survey on all of your phones right now. On a scale of 1 to 10, would you take money from Madrona again? So, we’re going to follow up on that part later.

Let’s switch gears a little bit. Some of the companies here are well past product market fit and, at this point, are thinking about product-market-pricing fit after Madhavan’s talk. And some of them haven’t reached product market fit. I’ve asked this question of many entrepreneurs, and I’ve thought about it myself. What does it actually take to say that we have product market fit? Is it a feeling? Is it a scientifically calculable metric? How do you think about product market fit? When did you know you had it at Cohesity?

Mohit Aron: I’m a B2B person, building enterprise companies, and I have a very crisp definition of product market fit. Again, it goes back to repeatability, but here’s my definition. If an average salesperson, and average is the key, not an elite salesperson, can sell to an average customer, again, not an elite customer, without involving people in headquarters, without involving me, without involving my C-level staff, then you have product market fit.

If you think about it: you cannot hire all A-players as salespeople, so an average person needs to be able to sell your product. And you cannot have only elite customers who understand your product; the rank-and-file customers are average. The average sales guy has to be able to sell to an average customer without involving headquarters, because if I’m getting involved in every deal, it’s not repeatable, it’s not scalable. Once you can do that, you suddenly have repeatability. It means that now I can hire tons of salespeople and they can sell to tons of customers without putting a load on headquarters, and then you have product market fit.
That’s my definition of product market fit. When big deals start coming without any involvement, without me even stepping into the customer’s headquarters, and suddenly the gong goes off and some big deal happens, I know there’s product market fit.

Karan: That’s great. By the way, if anybody has questions, just raise your hand. This doesn’t have to wait until the very end. I’m sure folks have questions for Mohit, and I’ll pause. Just make sure somebody tells me if a hand is raised.

All right. One of the things I observed is that you navigated something really well: when you pitch for money, or pitch your idea to employees and other prospects to hire them, there’s this constant battle. You have to be focused enough to get the right wedge in the market, but you also have to have a vision big enough to excite people and make them understand this is a $10 billion company. It’s hard for founders to navigate because you’re constantly getting the feedback that it’s too niche-y, or the feedback that it’s too unfocused.

How did you navigate that? Cohesity, in some ways, has replaced multiple companies as a platform, but you didn’t pitch all of that. Well, you did, but you didn’t raise money on all of it at once, and you unraveled those layers of the onion as time went on. Walk us through how you navigated that challenge.

Mohit Aron: Absolutely. First of all, if you want to build a great company, the vision has to be big. If you have a small vision, even if you build the best product, it can be copied, and very soon your product will be a commodity. But at the same time, when you have a big vision, you can’t wait tons of years to build it. Customers are not going to pay for the vision; they’re going to pay for the actual product. So it’s very important, in my mind, to have two components.

One is the vision. The second is what I hesitate to call an MVP. I don’t like “minimum viable product”; I think Madhavan was using “minimum valuable product.” I prefer a minimum lovable product. You need a very clearly defined minimum lovable product that you can actually sell to customers.

The bigger vision is useful, first, for hiring great employees, because after all, they’re joining you on a mission. They don’t want to do some small thing and then have nothing more to do. Second, the bigger vision protects you against competitors: by the time they try to copy your minimum lovable product, you’ve already built further toward the vision and moved ahead. And third, it’s also important for the customers. They want to solve a problem right now, but they want to solve it in a way that lets them keep your product for years while you add further value.

Having these two components is key to building a sustainable business. There are plenty of companies that were a flash in the pan: they did well for a few years and then became a commodity. We can go on and on. They eventually had to shut down or got bought for a small price. It’s basically the problem of not having that bigger vision. Or conversely, some companies have only a big vision but no well-defined minimum lovable product. Maybe you spoke about feature creep when Madhavan was here. Some of that is lots of vision but no lovable product.

Karan: By the way, one of my favorite stories about Mohit, and he’s too humble to admit it, but he’s one of the best engineers in the Valley. Period. When I invested in Mohit in the Series B, it was pre-product, and I remember him pitching that he was going to launch the GA product. This GA product was pretty big, but it was still that minimum lovable product of an even bigger vision. He said the product was going to be GA’d four months later, in October. We were somewhere in June, July. He was like, “It’s going to be October 14th.”

I was like, “All right. Well, I’ll just add another three months to it,” because no product goes GA when the founder tells you it’s going to go GA. And I remember calling him about a month in, in the afternoon, and telling his EA, “I want to talk to Mohit.” The answer was, “Oh, no. He doesn’t take calls between 1:30 and 4:30.” And I’m like, “What the hell? I just invested $20 million in this company.” It turns out that from 1:30 to 4:30 p.m., Mohit would put on headphones, and he would sit there and code. He wasn’t taking any calls from investors or customers or anything. October 14th was the planned launch day, and you launched the GA product on October 15th. And we sold, what, a million bucks?

Mohit Aron: Yeah, in the first quarter itself. The day we GA’d the product, one of the customers we’d been doing alpha testing with surprised me. He stood on the stage and said, “I’m going to buy it for 300K.” That blew our numbers away right there, the target right there. More orders came in that very first quarter. So in the very first quarter, we actually surpassed a million dollars.

Karan: Now, you’ve found product market fit. Your average salesperson is selling to an average customer. What do you do next? What is the next thing you do as a CEO? You feel it, you see it happening. How do you change the operational cadence of the company right after you’ve got product market fit?

Mohit Aron: Repeatability. I mean, repeatability is a necessary condition for product market fit. The company needs to operate one way pre-repeatability, pre-product market fit, and very differently post-product market fit.

Karan: You stopped coding for three hours with your headphones right after that.

Mohit Aron: So pre-product market fit, your job is to get to repeatability. Your job as a founder, along with the rest of your C-level suite, is to get involved in anything that’s important and get it finished. My job was to finish that code pre-product market fit and get it solid. If there’s no repeatability, there’s no big business. Post-product market fit, the equation changes. By the way, pre-product market fit, your job is also to conserve capital. You’re trying to do more with fewer resources because you want to extend the runway.

Post-product market fit, it’s actually a mistake to be conservative, because now that you have product market fit, a lot of competitors are watching you. They want to copy you, that sort of stuff. You want to press on the gas. Nobody should be able to surpass you or catch up to you. That’s also where delegation comes in. Use as much leverage as possible. You hire for the right roles, and you delegate responsibilities.

Pre-product market fit, my job is to finish anything that needs my attention. Post-product market fit, it’s to delegate as much as possible. Look, you’re always going to have people who may not be able to do the job they’re assigned. Your job is to be like a symphony conductor, not an orchestra player. You oversee a lot of things. You take on a breadth role rather than a depth role, and your job is to parachute down into areas that may not be doing well, with the goal of coming out quickly. That means either you replace the person who’s not able to do the job or you train them. Or you bring the area to the point where it’s operating at 80% efficiency, and the team can take it onward. You’re not there to finish the thing yourself, because you’ve got to watch out for a bunch of other things.

It’s a very different mindset, and you have to make that change. Once you attain product market fit, if you don’t change that mindset, you’re going to kill the company.

Karan: There’s a really good quote from Aaron Levie, from maybe 8 or 9 years ago now. He said, “The job of a startup CEO is to do as many jobs as possible so the company can survive, and the job of a scale CEO is to do as few jobs as possible so the company can survive.”

Mohit Aron: Yep, that’s right.

Karan: I think that articulates what you were saying really well.

Audience Question: I love this theme, so I’m curious. A corollary to that is that a lot of times, you’re the hub with spokes, and all the spokes come to you to solve the problem. What did you learn about finding ways for the spokes to solve problems themselves, so that everything didn’t have to go through you, the hub? That’s just not scalable, especially as you get bigger. Am I even thinking about it the right way, in terms of how you upleveled yourself and tried to work with a very capable executive team without having to solve every problem?

Mohit Aron: That’s a great point. Rule number one: when you bring a problem to me, I also want you to bring a solution, even if the solution is wrong. I will not give you the solution otherwise. It’s too easy for people to just bank on you. If you present the solution and keep doing that, they become more and more dependent on you.
Even if the solution you bring is wrong, that’s okay, but you need to bring a solution. If you do that, very soon you’ll see that they start solving problems themselves.

Karan: That’s excellent. I have to use that. We have to use that internally, too. Let’s switch a little bit to generative AI. You can’t go through any conference, any session, any meeting without talking about it. We’re seeing a lot of investments in the infrastructure layer and the model layer. What’s your view of where we are in AI today? What do you think the world believes about AI that you don’t think is true?

Mohit Aron: I have a balanced view on AI. On the positive side, I think it’s a mistake for any company to be ignoring AI. If you’re running a company and you don’t have AI as a component within it, that’s probably a mistake. Every company needs to take AI seriously. AI is here not just to stay, but to change most businesses in a fundamental way. I don’t think I’m saying anything outlandish there. But on the other side, I also believe there’s a lot of AI-washing going on. Companies attach .ai to their domains and project themselves as AI companies when they really don’t have any deep AI in them. A lot of people I know are doing nothing more than, after the popularity of ChatGPT, attaching some sort of chatbot to their product and calling themselves an AI company. Or they have some twist on RAG, retrieval-augmented generation, extracting information in some way, and they call it an AI product. So, there’s a lot of that going on.

Look, the fundamentals of building companies have not changed. You need depth in your product. You need a significant competitive differentiator to build a sustainable business. Even with AI, you need all of that. If you don’t have these things and you’re just slapping on AI in one of the ways I described, it’s not a real AI company, in my mind. There’s a lot of that going on. People know of companies that raised funding at big valuations calling themselves AI companies, but then they couldn’t prove any of it, and their valuations came crashing down. That’s also happening, and I think it’s where a lot of the negative returns you see come from: in the last two years, a lot of companies got funded at big valuations and then couldn’t live up to the promise.
So, it’s a balanced view. I think AI is a huge tool we now have at hand to really push technology forward. But it’s irresponsible to just throw the name AI out there and pretend that you’re an AI company when you’re not. The fundamentals of doing business have not changed.

Karan: Any more questions? Otherwise, I’ll keep going. Somebody there.

Audience Question: Thank you. When you talk about vision and the minimum lovable product, that’s something I sometimes struggle with. Some visions, like Disney’s to make the world a happier place or whatever it is, are ambiguous enough to play in pretty much any field. Then I hear other visions that are almost like the product on steroids. You know what I mean? It’s like, “Our vision is to make the best banking software in the world,” which doesn’t sound that awesome.

I’m curious how you set the vision at the right altitude: not so broad and nebulous that no one really believes it, but also not so narrow that it’s just a supersized version of what your product already is.

Mohit Aron: Yeah, great question. I think the answer comes from asking what additional problems you can solve. Otherwise, it’s a big abstract thing, the best thing since sliced bread or whatever, and it means nothing. Your minimum lovable product is solving some problem for the customer. As you build toward your vision, what additional problems are you solving for the customer? What additional things are you enabling? If that keeps growing, you have a bigger vision; otherwise, your vision statement probably doesn’t make any sense. You need to keep adding incremental value for customers, and then you have a bigger and bigger vision.

Said another way: you’d better have a roadmap beyond your minimum lovable product. What’s the next thing you want to deliver? By the way, when you talk about that with customers and they see the additional value you’re going to bring, that’s when they buy your vision. Otherwise, all these buzzwords mean very little to customers.

Karan: Mohit, you’ve obviously managed to build great companies, but you’ve also raised money from Tier 1 investors along the way. You’ve got Sequoia, Accel, ourselves, Battery, a whole bunch of venture investors who know you and want to invest in you. As you think about picking investors and building a board, what would you advise the founders here as they go on to raise their follow-on rounds, and sometimes their first round? What should they be optimizing for?

Mohit Aron: The first thing I would say is: be very clear on what you need from your investors and your board members. I’ll tell you what I look for. Number one, I look for investors and board members who will stick with the company through not just the highs. Everyone sticks through the highs. But what about the lows? Even in the highs, sometimes when things are heading up, they’re like, “Oh, let’s make an exit. This is the time to get the money back.” When it’s a low, it’s, “Oh my God, let’s sell the company before it gets too low.” So, do they have the stomach for the lows? That’s number one for me.
Number two, you’re hopefully building a company that will become much bigger later on, and this is not the only round of funding you’ll raise. So, will this person be able to stand up to his or her partners in future rounds, thump a fist, and say, “I believe in this company”? There are always going to be naysayers who push back: “Hey, let’s not fund this company at this bigger valuation.”

Karan: Nonbelievers, yeah.

Mohit Aron: Yeah. So, can this person actually stand up and say, “No, I want to fund it again”? That’s a huge benefit. If one of your existing investors stands up in future rounds and says, “I believe in this company. I will put money in again,” that’s a huge incentive for people who are not investors yet to come write checks for you, because your existing investors believe in you so much.
In the absence of that conviction, I’m left with existing investors who won’t stand up and say they want to invest. Why would a new one invest, right? The key to raising big rounds, and I’ve raised some pretty big ones, is that your existing investors have the cojones to stand behind you.

Karan: You had the quality problem of saying no to people more than most people.

Mohit Aron: Trust me, when you raise big rounds, everyone is jittery, right? That’s when the rubber meets the road. I thank Karan. Karan’s always been very good at that. He always fought his partners: “I’m going to put money behind this.”

Karan: I took a lot of flak for that in the early days, but I think it worked out for everybody. There was a question there.

Audience Question: This intersection of vision and product market fit, like how you set the vision, you said something that I wanted to pick on, which is that you solve additional problems, especially as there’s fundamental technology shifts happening always but nowadays with AI. How do you think about existing workflows and tackling existing workflows versus new workflows as you’ve built multiple companies?

Mohit Aron: Yeah, great question. The first thing you need to do, and this is one of the things I always do when I start companies, is build a hypothesis of what it would take for the company to succeed. Haven’t you run into people running failed companies or failed products, where the company or the product has already failed, and you’ve said, “What were they thinking? I could have told them this was not going to succeed”?

It’s actually amazingly true. So I thought, why not go back to the time when you’re conceiving the company and do that exercise then? Literally, I have this framework where I write a hypothesis. It consists of four parts. The first section is the elevator pitch. If I literally run into my prospective customer in an elevator, how am I going to convince that person in five minutes? Number one, it needs to be a real problem that the customer cares about. Number two, I need to have a solution that solves the problem in a good way. Number three, the product needs to be differentiated. These are the key components of an elevator pitch. That’s section one.

Number two is that minimum lovable product. What does it look like? The Fire Phone turned out the way it did because they didn’t have a crisp definition of a minimum lovable product; they kept throwing in features. You need a very crisp definition of a minimum lovable product. Number three is why the company would succeed. What are the trends helping you? Number four is why it would not succeed. What would a naysayer say? What technology shifts might happen in the future that could disrupt this? You write a rebuttal against each of them. And then, you have your hypothesis. I literally do this for every one of my companies. Because I might be drinking my own Kool-Aid, I share it with people who are objective and knowledgeable. If they say, “Okay, this is a solid hypothesis,” then I have a company.
What I’m trying to tell you is that upfront, I’ve thought about these shifts. I remember Doug Leone from Sequoia asked me, after Nutanix achieved product market fit, “What has changed since your initial vision?”

I’m like, “Zero. Nothing.” At Cohesity, I shared the initial deck I used to raise my Series A funding with some of my employees. They were surprised to see that 10 years later, we were basically building on the vision I had laid out 10 years earlier.

Karan: That’s great, yep.

Mohit Aron: These things are thought through upfront. Now, there are some shifts that you can’t foresee, and that’s where you put your best people together. You collaborate with them and align: “Okay, this is how the vision needs to shift to accommodate these technological shifts.” If you do that, you not only have a hypothesis at the beginning, you also keep building and improving that hypothesis as the company goes along. Then I think you have a fair chance of foreseeing the technological shifts that might come and disrupt what you’re doing, and of navigating around them.

Karan: We’re almost out of time. I have one last question for you. I’m a fan of Patrick O’Shaughnessy. I don’t know if you’ve listened to his podcast Invest Like the Best. I know Matt does, and I do. He ends all his podcasts with a question that I love, so I’m going to ask you that, which is what is the kindest thing that anyone’s ever done for you?

Mohit Aron: Look, I’m blessed. You don’t get here without kind things being done for you. If I may pick a few: the kindest things people have done for me came when I was in a tough spot and nobody believed; the few kind words that were said meant a lot at that time. And of course, when the going gets tough and I’m having a hard time raising funding, having backers like Karan who stood behind me when things were tough, that’s very kind.

So look, you make mistakes, you fall down, you get up again, you run again. When you fall down, who’s there to actually show you kindness is what matters.

Karan: Thank you for that. Thank you for the kind words. Thank you for your partnership. Thank you for your leadership. Most importantly, thank you for your friendship over 15 years. Thanks for being here today.

Mohit Aron: Thank you for having me here.

Terray’s Jacob Berlin on The AI-Powered Future of Medicine

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Terray Therapeutics CEO Jacob Berlin returns to Founded & Funded after three years to share how the company has scaled from an ambitious startup to an industry leader in AI-driven biotech. Learn how Terray’s proprietary hardware, combined with the world’s largest chemistry data set, is powering new discoveries in small molecule drug development.

Jacob also discusses how the company’s $120M Series B fundraise will prepare their internal programs for clinical trials and further enhance their AI platform. He shares insights on where he sees the future of AI and drug design and dives into how founders can balance internal innovation and high-profile partnerships.

Don’t miss this deep dive into the intersection of AI, biotech, and innovation!


This transcript was automatically generated and edited for clarity.

Jacob: Thanks, Chris. Super fun to be back here. It’s pretty wild that it’s been three years and everything that’s gone on, and I’m super excited to be back and talk about Terray some more. One of my favorite topics.

Chris: So, since it’s been a while, can you give me the brief overview and maybe even the elevator pitch of what Terray does and what’s happened since then?

Jacob: Absolutely. Terray is a biotech company focused on autoimmune disorders and immunology down in Los Angeles. We bring our unique proprietary hardware and experimentation to enable AI-driven small molecule drug discovery in a way that’s impossible without it. We’re deploying it for that internal pipeline on autoimmune disorders and also for our partners across a range of indications.

Chris: That was a good elevator pitch. It’s succinct. Before we get into everything, you mentioned small molecules there a couple of times. Could you explain why small molecules and why that’s an important thing to be working on?

Jacob: Small molecules are the medicines you’re probably all most familiar with: the pills in your bottle that you take by mouth and can carry around when you travel, and probably some of the oldest types of remedies available to humans. At this point, there have been incredible advances, such that there are other classes. We now call these small molecules because there are large molecules, which are typically antibody-type or protein-type therapies, so large molecules are typically made by biology or analogous to biology. Then there are, of course, also cellular therapies, genetic therapies, and others on the scene. We’re exclusively focused on small molecule therapies, which remain the world’s most abundant, most impactful medicines. There are still a lot of opportunities to develop new and better medicines in that area.

Chris: I think about it this way: in many cases, not all, if you could develop a small molecule therapy that worked equally well, it’s better for patients and very impactful to have your medicine delivered in that form factor.

Jacob: 100%. All of the different forms of therapy that bring relief are incredible, but they have really different levels of complexity in terms of manufacturing, distribution, and investment. You can see it today, for example, in genetic medicines, which are really amazing, lifelong cures to previously incurable diseases, but they take many months per patient and millions of dollars in cost. Although I don’t know if you can make a small molecule analog to that one in particular, you can see that a pill you can carry in your pocket and take for your disease while you travel the world and go out with friends is obviously an advantaged modality, provided it’s safe and effective. I do think small molecules remain the medicine of choice when you can make them.

Chris: There’s a funny story that I wasn’t planning on sharing, but I’m going to. It’s something you shared when you last came and spoke to Madrona to give an update, which I think was close to a year ago now, when we were working through the pitch deck and talking about how the Series B was going to go. I remember this special appendix you brought. It was basically: here’s where we were, and here’s where we are. The “where we were” page had one chart with a couple of dots, and that was it. The “where we are now” page had, I don’t know, as many dots as you could fit while still keeping it legible, and that was just a sub-sample. That says a lot about the scale you’ve built into this company since then.

Jacob: Yes, it’s really incredible. By the way, when we came and chatted three years ago, we counted zero measurements the way we count today, because the first three and a half years of the company were about taking that experimental innovation, the proprietary hardware, from an academic invention, where my co-founder, Kathleen Elison, was pushing buttons on the syringe pump. We were doing one microarray a week, it was all artisanal, and we’d measure maybe 32 million measurements that week, which was a lot. It was huge. It’s way more than I’d ever done in my career, but nothing like what we do today. When I last came in, we were at the moment where we had industrialized it, and we had this incredible array of automated systems both making and using those microarrays, measuring those data sets, making follow-up molecules to put into downstream assays, and following them through the drug development pipeline.

We were, at that moment, though, at the beginning of our pipeline journey and the beginning of our AI journey, because we didn't yet have the data set to drive those two. I say zero because we had made, of course, many, many measurements before that day, but that was the day that we locked to a consistent format. There's a bunch of nitty-gritty technical details, which nobody needs to know about and I won't go into, on what we decided to do with certain elements of the science on the chip. But once we locked it, in the last three years we've measured over 150 billion raw measurements on the interaction side, which map to 5 billion unique measurements, because every data point that we use in our modeling, we measure about 30 times and replicate to make sure it's a high-quality, precise data point.

In the intervening three years, we've made 5 billion unique measurements, which then led us to build unique generative AI tools to design small molecules. Most importantly, that let us really move our pipeline and our partnership work. We've realized a whole number of significant milestones between here and there. Looking back is always, I don't know, shocking, exciting, terrifying for a founder. You look back at the old deck, and you're like, “I'm so glad somebody funded that. We're doing a lot better today.” And that one's true. Looking back at what you all backed in the beginning, it was very much the vision and the core of what we do, but the realization's come in the last few years. It's been exciting.

Chris: It’s a fun one for me to think back on because the first time we met in person was a couple of weeks before the COVID shutdown. And since then, even dealing with that, it’s just a different company. It’s fun to be able to have this conversation when you’re now much more of a scaled-up founder leader of a company. You’ve learned a lot of these lessons, and so I want to jump into some of those. Since you mentioned, and we’ve talked about it a couple of times, one of the major milestones that you recently hit over the summer and announced in the early fall was this large $120 million series B fundraise. We’re all super excited about that. I think it’s a pivotal moment for the company, but I’d love for you to share a quick overview of what that is going to get for you. That’s a lot of money. People think biotech companies raise a lot of money. You’ve got big plans for it. I’m curious just what exactly will that enable?

Jacob: It’s really incredible. I think all the founders out there know the saying, “The first dollar you raise is the hardest.” I think that is still true, but the markets have certainly been, as I think everyone involved with biotech knows, a little bit bumpy. Maybe all times are interesting to start a company, but we started it right before COVID, ran into all their operational challenges like that you mentioned, and then maybe one of the greater boom markets for biotech funding and progress and then one of the greater devastatingly bear markets that followed on the heels of that. We’ve stayed really focused on execution from day one through all of that and into today. We’re really excited because my life’s mission has been to cure somebody for decades now, and we’re finally coming up on that. Owing to the nature of the market, we’re probably not going to cure one person. We’ll probably cure many, many people hopefully.

This money is so important to us because we'll be bringing our first programs into the clinic from the first wave of targets we worked on. One of the unique aspects of the scale of our integration of experimentation and computation is that we can work on far, far more targets than your average biotech company of our size, and then pick out the best opportunities and move those forward. We have the first couple of those headed into the clinic out of what we call our first wave of targets. We have a second wave of targets behind that that we're also super excited about, and we're continuing to invest in the platform as well, both for our own pipeline progression, like those programs (I should note everything internal is in autoimmune disorders and immunology), but also delivering for our partners.

When you think about the milestones from three years ago to today, not only did we industrialize the technology, generate the data, and build these AI tools, but we moved those first programs of our own toward the clinic, and now we'll move into the clinic. We started to build the second wave and all the programs behind it, and we scaled partnerships with Calico and BMS. Now we have a co-development deal with Odyssey, which is another exciting opportunity for us to use our technology advantage to really deliver medicines that matter. And then, as you know, just recently, we signed a deal with Gilead to bring the same approach forward and solve really challenging problems for them. We're really excited to become the AI-driven small molecule provider of choice for large pharma, and it's been incredibly gratifying to see that. We're going to continue to invest in the platform and be best in class at the intersection of large-scale, precise, iterative experimentation and AI. But the primary piece is moving that pipeline into the clinic.

Chris: It’s the partnership velocity since you started to sign these partners or go after them has been pretty incredible. We’ll get back to that because that is a hot topic for companies across the board. Something you mentioned I think is also a hot topic, which is this platform versus product debate that seems to rage on a cyclical basis and biotech investing, whether it’s coming from the investors or the companies and Terray is very much building a platform. I mean, we certainly have lots of products, but there is a big platform vision there. I think it would be great for you to talk for a second about what being a platform means in biotech, why you have conviction in that approach, and why you think for Terray that’s the right path to take.

Jacob: It is probably one of the existential and repetitive questions in our industry. Are you platform? Are you asset? And the market certainly moves back and forth with its own opinions about which one is in and out, but I think for us, we've always been drawn to trying to solve the problems that are unsolvable and really transforming the cost, the speed, but most importantly by far the success rate of small molecules in development. I think probably almost everybody out there knows drug discovery is really hard; with all of the incredible expertise and all of the tools available today, the vast majority of molecules fail even after reaching the clinic. That doesn't count needing to go from the idea to the clinic. Overall, it's clearly a very, very hard problem. There is always an urgent need for better approaches that give you a transformative opportunity to bend the whole curve and transform what can really be done out there.

For us, we came at it from that macro, good for the world, good for value, good for our science approach, and have been a platform company since day one. The core innovation was transforming how you measure chemistry, which then let us transform what you can do on the AI compute side. We face the same tensions, though, because our product is not our microarray chips, and it's not our AI model; it is the molecules. The back end of it is, of course, assets, the molecules themselves as they move through. And as you know, and probably many listeners know, the market has moved toward the asset world, toward the clinic, but that's why we do the partnership work. It's why we have a diversified internal pipeline. We feel very strongly that the right way to monetize, realize value, and deliver maximum impact from a platform is to translate it into as many assets as possible, leveraging both private capital and partner capital and partner resources to move multiple programs across different opportunities.

There’s room for both. There’s a lot of patience and need that you can address either by working off of a singular item and finding a clever way to do it. But also I think there’s a lot of room for transformative new approaches. I think you see that right now. Obviously, AI-driven small molecules and large molecules have been a huge topic of interest because they offer the opportunity to really transform success rates, which would be worth millions of lives and billions and trillions of dollars. Goodness knows if you really can change the whole thing.

Chris: One thing you mentioned in there, which is part of this platform strategy, and you've mentioned to me really since day one, is creating a long-term company versus something where, “Oh, you can build an asset maybe for three to five years, and that's going to look really great for a pharma to go acquire.” Either I made this up or you said it to me, but I remember asking you, “Hey, what are you going to be doing 10 or 15 years from now?” Your answer was running Terray. I think that's a great answer, and it says a lot about how you've thought about the vision, what you're building, and where this can go from a true long-term perspective.

Jacob: I guess this comes from the quintessential “entrepreneur too naive to know you're wrong” type of plan. I'm eyes wide open that the number of new biotech companies that transition to full-fledged, commercial-scale pharma companies is, I don't know, one a decade, one every couple of decades, but I really think Terray can be that one. We've always been focused on returning, and now I sound like a broken record, maximal impact to patients and of course the maximum value that comes with that. That's always seemed to me to mean realizing the inherent advantage of the platform at scale and bringing those medicines all the way through, which means building the whole thing.

Obviously, in our industry, sometimes people show up along the way and make offers that everyone says yes to, but I think you've got to plan for the stuff you can control, plan for the strategy that you can execute by yourself, and plan for the strategy that you think is overall most valuable and most successful. For us, literally since day zero, that's been: we're going to make and sell our own medicines one day, a whole bunch of them, and we're going to change the way the world does this. We're part of the way along that journey now, which is really exciting, but we still have a long way to go. As ever in our industry, the timelines, capital costs, and scientific risk in discovery and development remain large, but we've put ourselves in a position to execute it now.

Chris: I’ll say for me, it’s super fun to be able to work with a team like Terray and you and Eli, your co-founder, because of that true long-term view. It’s really differentiating. We’ll get back to a couple of your thoughts on the business-building side of this, but I think it’s a good time to take a nice detour or deep dive into the AI and the science that’s going on here. Given I think it’s the hottest topic in biotech right now, maybe besides the GLP-1 obesity drugs, we’ve got to talk a little bit about the AI that you’ve built. You said this before, but I think it’s really interesting. The AI came a little bit after the data generation came, but since then you’ve built a ton of it. I’m curious how you think the small molecule AI world is different than the protein design world or the antibody world and what you’ve done internally to build out this AI infrastructure.

Jacob: Now you’ve wandered into my favorite topics, although I love talking about everything. I can’t resist my origin story, anything science, risk, sending me down the rabbit hole for the rest of the podcast. Come for the AI discussion, stay for the enantiomer discussion that follows in the organic chemistry section. In all seriousness, it really follows the data. I say this a lot, but I think about the world as AI is transformative when it rests on top of the right type of data, which I think are those three pillars, large, precise, iterative. In every case where that data comes about and is transformative, it rests on top of hardware innovation that compresses the cycle time, transforms the cost exponentially and allows you to realize it. The one out there in the world that’s easy to pattern match to is digital photography.

You go from old photography, where you probably never have enough images to build DALL-E or Sora or any of these tools or facial recognition, to digital photography. Now there are millions and billions of images, and you can train the models and retrain them and refine them. As people have probably heard on other podcasts, you teach it what a cat is, and you teach it what a dog is, and you need all those images to train it on which one's a cat and which one's a dog before you can go ask it for, “I'd love a picture of my kids cuddled up with a bunch of cats.” Now it knows, and it makes you a picture. The same problem exists in our space. That's why I did my postdoctoral work. It's why I ran the lab. It's why I started Terray: chemistry data is hard to get.

And traditionally it was me and people like me making molecules and putting them in a flask or on a 96-well plate or a 1536-well plate (yes, those are different well formats) and measuring them, and it just is slow. It takes a lot of time to make those molecules. There are some interesting automation chemistry approaches to it, but mostly that problem has remained very stubborn. Making molecules at scale and putting them into assays is still pretty slow and still pretty expensive. Where AI has come into our world, it's come in where there have been curated, high-quality data sets: AlphaFold, of course, where the government fortunately curated a large crystallography database, but also the enormous sequencing database that came about thanks to next-generation sequencing and the plunging price and time of sequencing.

That’s done transformative things for AI around protein design, obviously, protein folding large molecule design, and I think that’s why you’ve seen AI be most successful in biotech, first in large molecules. The question we’re tackling is, “Great, now I want to put a small molecule in there.” That data set has been smaller. The entirety of public data, there is maybe a hundred million measurements spread across a variety of different assays. We’re really convinced that the unlock for AI there is the data, the measurement of small molecules interacting with proteins at a large enough set and across enough targets and enough molecules to build generalized models that can solve these problems quickly and go work for humans that couldn’t before.

That’s what we’ve been after and that’s why the sequence we always knew our data would fit with, back then we called it ML, but now we call it ML AI. We always knew it would fit with these large computational approaches because we generate too much data. We generated 5 billion data points in the last three years. What human is going to flip through that and do anything? But what to build? We needed to get the data in first, and now we’ve been able to build really transformative tools, the first of which was COATI, which actually doesn’t depend on the data. It’s the large language model of chemistry that we built such that we can work with our data in a computational way and smoothly traverse chemical space to optimize molecules.

Chris: Can you explain exactly what COATI unlocks, maybe what it is, and then what it unlocks for doing AI in this world?

Jacob: Oh yeah, that’s an easy one because COATI is a South American raccoon, so I think that pretty much wraps it up. But no, in all seriousness, in addition to being a South American raccoon, it is our large language model of chemistry. For any of these AI applications, you need basically a mathematical space within which the optimization is taking place, but you need to take the real thing that you want at the end and convert it into math, if you will. That’s what COATI does for chemical structures. Chemical structures can be represented in a variety of ways. One is as a three-dimensional object, which is probably the closest to what’s really going on out there in the body in the world. That’s a series of atoms and bonds that make a three-dimensional shape, but they can also be written down in abbreviated notation like a word.

You can write them down as both, and people use them interchangeably in different applications in our industry, but neither of those is a math representation. What COATI did is a contrastive optimization where you train on those two representations to build a common math language that can translate back and forth between either of them. I think of it as a chemistry map: it's basically mapping how similar or different molecules are in a math space so that if you optimize within that space and move close by, the molecule looks similar, and if you go far away, it looks dissimilar.

Getting that right took a lot of work, and the team did an incredible job. It was published recently, it was on the cover of JCIM, and we open-sourced the first version for people to work with. It's done tremendous things for how you can translate structures back and forth into math and then move around to optimize. That's just the first building block. If the data's the foundation, the COATI large language model is the next piece that allows you to traverse, but then you need the AI module, if you will, that combines those two and moves around and solves problem after problem.
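
To make the contrastive idea concrete, here is a minimal sketch of the kind of two-view contrastive training Jacob describes. It is illustrative only, not Terray's actual COATI code; the random feature vectors and dimensions are assumptions standing in for real string-based and 3D-derived molecule representations.

```python
# Illustrative sketch only (not Terray's COATI implementation): two encoders
# map two views of the same molecule -- a string notation and a 3D-derived
# representation -- into one shared embedding space, trained contrastively so
# matched pairs land close together and mismatched pairs land far apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy encoder: projects fixed-size molecule features into the shared space."""
    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit vectors: dot product = cosine similarity

def contrastive_loss(z_a, z_b, temperature=0.07):
    """InfoNCE: the i-th molecule's two views should match each other, not other molecules."""
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Stand-in features for a batch of 32 molecules (hypothetical dimensions).
string_feats, shape_feats = torch.randn(32, 64), torch.randn(32, 96)
enc_string, enc_shape = Encoder(64), Encoder(96)
loss = contrastive_loss(enc_string(string_feats), enc_shape(shape_feats))
loss.backward()  # optimizing this is what makes the shared "chemistry map" emerge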

Chris: What’s interesting to me is you haven’t been able to take off-the-shelf machine learning or AI tools. There’s some of them in the workflows and say, “Hey, go to work on our data. You’re going to get great molecules out of this.” You’ve built this whole AI infrastructure, including the data infrastructure from scratch alongside some partners like Snowflake and NVIDIA who have been part of this conversation. I’m curious how you think about the reasoning for doing that and why we’ve had to build all of the models internally and what that does for our scientists.

Jacob: It’s been an incredible journey and one that, I don’t know if when we started, we knew how much of each of them we would do. This part’s always stressful. There’s so many people that have gone into making that possible. Narbe, who’s our CTO has been a driver, John and the entire ML team, Kevin, the whole data team. Because as you mentioned, you have to first be able to get at the data. Our workflow is very custom. We obviously have invented proprietary hardware. The way we read it is with imaging, and so we generate over 50 terabytes of images a day that we need to convert into the numerical values that we’re going to use to drive the models. That was a whole process that we built from scratch because nobody else made exactly what we made, and nobody processed it like we needed to process it.

Obviously, we stand on the shoulders of giants like all scientists, and there was stuff we borrowed from, but we built our own because we needed to be able to do that really quickly and efficiently. We work with AWS and Snowflake because we generated a data set the world hadn’t seen before in the early years of Terray. We want to not invent stuff. We want to use stuff off the shelf that’s cheap and works and does what we need and move on to other hard problems. But when we showed up to vendors and we’re like, “Hey, we have 5 billion measurements coming up soon, can we put them into your stuff?” They said, “Chemistry measurements?” And we’re like, “Yeah.” They’re like, “Ooh, no, that’s a lot.”

Instead, we worked with the likes of Snowflake, which is obviously a service built for data sets that large. We saw the same thing with the foundation model of chemistry. We tried every model that was available out there, and when we applied them and used the power of our unique data set to ask whether these models really connect molecules the way we want them connected for optimization, we got some suggestions we didn't think were that reasonable. I think we'll come to this, but it's one of the real keys of having expert humans in the loop when you build and use these models, because they were answers that our medicinal chemistry teams were immediately like, no way, this is off its rocker, it's way off.

We had to go build something that constrained it and gave answers that made sense and really allowed us to optimize molecules. The same has happened with the generative side of the AI problem. The team’s done incredible work building all the way from the ground up, from the data processing through the foundation model of chemistry to the generative and predictive models that go into designing molecules to solve the problems. We built it all because we couldn’t find what we wanted out there.

Chris: It’s interesting. I joke and we’ve joked before, biotech companies are obviously not software companies in many senses, but on the other hand, you’ve pretty much built an entire software company from the bare metal infrastructure up within a biotech company, and it’s about equal to size of the science that is going on. It’s a fascinating change in how companies are built.

Jacob: We use as our slogan “everything small molecule discovery should be,” and we picked it intentionally because we feel really strongly that you can't be all one thing anymore; you're at a huge disadvantage if you're only compute or only traditional discovery. Our intersection is of compute (AI, ML, software), which is a huge piece of the business, but also the experimental side. We have a huge investment in robotics and automation: large-scale data, with precision, iterated quickly. Like I said, I repeat myself a bunch, but it all goes in the service of the pipeline and the preclinical development.

We still have the teams that you would recognize anywhere else: med chem, your biological assays, cell assays, and everything else that goes into that. I think you need all of the teams working really closely together. The last piece is that we also essentially have a little mini manufacturing business, in that we make our proprietary microarray technology by assembling a variety of different things and building our custom libraries in-house. We have four businesses under the hood at Terray, but they all go together to drive the one singular value driver, which is the outcomes. I don't think you can do just one of them and be successful in the way that we are.

Chris: I tell people at Madrona all the time, and other people too, that if they find themselves in East LA, they should visit Terray because it's just visually so striking, the amount of automation, hardware innovation, and robotics that's just there and required. Every time I go and take a peek, it blows me away. We saw that when the New York Times visited, for example, and with other investors: you have to see it to believe it.

Jacob: It is really different. As my brother and co-founder Eli advertises it, it's not just a lab tour, although it is just a lab tour, but it's an awesome lab, and it's one of the other milestones. Since three years ago, we've really lived the startup physical footprint journey. It's been incredible in that same look back: we started the company in a local incubator at a shared bench, with a shared closet, essentially, that we did our imaging in. Last time we did the podcast, we had moved from there and matured into a step-up space, and we were working in a couple of suites in a shared building. But we've been really fortunate since then: we moved into a 50,000-square-foot headquarters in Northeast LA (Monrovia, for those in the know, great spot). And we've really been able to build our workflows the way we wanted into the physical footprint of the building.

If you come to Terray, as our partners or the New York Times or others have, you see this whole first floor where the automated imaging and liquid handling systems are running, using these little microarray chips to make millions and billions of measurements, a whole field of them. It is strikingly different. The interesting thing is that upstairs then looks, in many ways, like a canonical biotech drug discovery company, although with a lot more robots in the hoods than average. You can see and almost feel how the pieces fit together and work together, except perhaps, as we talked about, for the AI piece, where you just see really smart people working at computers. But you see the impact as you move upstairs and downstairs and see the molecules that are being made and tested. It's pretty incredible watching it all come together. I encourage anyone who's interested to reach out and let me know. We'd love to show people what we're doing at Terray.

Chris: It’s a pretty great tour. I’m lucky I get to go all the time, but it’s pretty fun. I want to get into a couple of your business theses and lessons you have to share. But before that, circling back to the AI side, I think one of the things that Terray is really good at doing is predicting completely de novo structures. And so when I say that, I contrast that toward a bunch of other AI platforms, which are very good at predicting things, especially binding molecules, but they look 99% similar to known binding molecules. That’s impressive in itself, but it’s very different than how you’ve approached the problem and how you think about this pure de novo or unrelated structural prediction. Talk a little bit about why that’s hard and why you think that’s also the way forward.

Jacob: It’s interesting. In my mind, this one connects back to the platform versus asset question in the same way that there’s value to a company being wholly invested in one medicine and bringing it through and being successful. There’s value to patients and to the ecosystems, to taking previously known molecules that either do work or almost work for something and making them better. There are innumerable examples of that, including the statins. Everyone knows and we’re taking, it wasn’t the very first one that became the most ubiquitous. There was refineman and the most ubiquitous one was an optimized version thereof. Those are exciting problems and problems that we can tackle. But I think the most exciting and the biggest benefit for both human health as well as value is solving the problems that just can’t be solved out there. That’s like, as you mentioned, would be what we call de novo where nobody knows where the molecule is or what it looks like.

It’s out there in COATI’s chemical space mapping somewhere, but goodness knows. The key then is to be able to do your own measurement to get a starting toehold where there wasn’t data before. As I talked about, AI always needs data. I don’t think it’s any surprise that AI’s first impact on small molecule design has been predominantly working in areas where there was already data, working around known molecules, patents, things that were out there and making better versions of those, which again, are very valuable, have impact and are also honestly often much quicker to bring to the clinic because of the path that’s been trod before you. We work on a different approach, which is to bring us your hardest thing. I think this is why you see the partnerships with large pharma because they’re bringing us of course, the things that they can’t do themselves otherwise they do them.

We’re out there working on very hard things where often there is no known starting point. We do this for our internal programs as well. For us, that’s why we use our sequential iterative process where we use our platform to measure very, very, very broadly across chemical space, 75 million plus molecules, but we’re obviously chemical space, infinite. We’re very sparsely sampling, looking for a starting point, where can we possibly get going on this? And that gets you going on the de novo, but it doesn’t possibly give you enough data for the model to be impactful. We followed that with a design and test cycle where we then build a new library of millions of molecules around that area of interest such that we massively enrich the models with a lot of local knowledge around the area where we do know now that there’s an answer in there.

That sequential build lets the model both broadly understand chemical space (mostly what doesn't work) and then enrich into what does work, and become essentially an AI co-pilot for the med chem team, where they're able to ask it questions as they go about their work and think, “Hey, I need this molecule with these improved properties. Where should we go?” I'm super excited about it. As you can tell, there's nothing more exciting than finding a totally new answer to an intractable science and health problem. I think our approach really gets it done.
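
As a rough illustration of that measure-model-design loop, here is a schematic sketch under toy assumptions (random vectors standing in for molecules, ridge regression standing in for the model); none of these functions are Terray's actual pipeline.

```python
# Schematic sketch of a broad-screen-then-enrich loop (toy stand-ins only).
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=16)  # hidden structure-activity relationship to discover

def measure_binding(mols):
    """Pretend assay: noisy readout of the unknown binding function."""
    return mols @ true_w + rng.normal(0, 0.1, size=len(mols))

def train_model(X, y):
    """Pretend model: ridge regression over molecular features."""
    return np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y)

# Round 0: sparse, very broad screen across "chemical space" to find a toehold.
X = rng.normal(size=(500, 16))
y = measure_binding(X)
w = train_model(X, y)

for _ in range(3):  # design-and-test cycles around the area of interest
    seed = X[np.argmax(y)]                           # best region found so far
    local = seed + 0.2 * rng.normal(size=(200, 16))  # focused library of nearby analogs
    X = np.vstack([X, local])
    y = np.concatenate([y, measure_binding(local)])
    w = train_model(X, y)  # model is now enriched with local knowledge where it matters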

Chris: I know that you’ve now found many of those molecules because I get to see the outputs of that not in real time, but on a regular basis. It’s impressive how that’s been able to occur with all the work that you’ve done. I have two more questions for you, both are more on the business side and the business philosophy. Now that it’s three years in, I would say you’re an experienced founder.

Jacob: Three years since the last podcast.

Chris: That’s true.

Jacob: Six plus total. I’m a deep expert now.

Chris: That’s right. You’re a very experienced founder. I mean, you’ve scaled the company a bunch since the last time we talked. I want to hit two things. On the business side, Terray’s always been about the partnerships as well as the pipeline. I am curious how you think about this strategy because there are a lot of companies that will only focus on their internal pipeline and that’s not how you approach the business strategy.

Jacob: This also goes to the platform build question. I was influenced by an article I read a long time ago looking at expected returns across numbers of assets. Back then, I think the conclusion of that particular analysis was: if you have 20 programs in the clinic that are appropriately sized to their market and whatnot, you'll return positively over them. Of course, you see the real-world version of this: large pharma is a successful, profitable industry that makes many bets, but in part it also does that through acquisition and letting the bets play out outside of its ecosystem. If you can resource enough thoughtful bets, you're likely to be overall successful. The inverse of that, as we certainly know, is that one singular bet is actually odds-on to lose.
As you know, I’m a baseball fan and so I talk about this a lot is sequence luck. One team, all their hits come together, they score a bunch of runs. The other team only gets one every inning, they lose. And then one, I’m a boss, I just like to point out one team sometimes also drops the ball for an entire inning and blows the World Series, but that happens. Coming back to what we’re talking about, we work with partners because I mean, it would be great if you guys would give us a few billion dollars and then we would resource all of our own programs, but that just hasn’t worked out yet.

Chris: Someday.

Jacob: Partners give us both. They give us the opportunity to resource more programs than we otherwise could through their capital commitment, not only in what they pay us to do the partnership, but in the fact that they're going to carry the back-end development of those molecules through the clinic and out to patients. We have an opportunity to realize value where we otherwise wouldn't reach patients. The other piece is that it also brings expertise. Internally, we're fully on immunology and autoimmune disorders, but with our partners, we touch a variety of other therapeutic areas that would've been a whole other build for the company to move into.

It’s a way to realize the promise and value of the platform while you’re still a smaller company and be capital-efficient as you build and grow, tilt the odds of overall success in your favor from a singular coin flip, if you will, although the coins from very negatively weighted in biotech, to an ensemble approach that starts to give you a leg up on sequence luck. If you do seven programs and the first two fail and the last five succeed, that’d be incredible. That’d be the biggest home run ever. You might not get to do the last five if you only have the first two bets. This is a way to do them all at the same time, do them with really expert, wonderful partners who are just well-resourced and well experienced to be successful at the programs we do with them. So yeah, it’s always been both for us.

Chris: That’s really well put. Finally, I want to ask you about one of my favorite parts of Terray, which is the very unique and extremely high-performance culture you’ve built, and you’ve set an incredibly high bar just to get a job at Terray. I can think of one time, maybe in the last four years since I’ve been deeply involved, it is actually, I guess, closer to five now, since I’ve been deeply involved in the company when we’ve lost anybody, or we’ve lost someone, we’re even like, “Oh man, that was really terrible that we lost that person.” How have you done that?

Jacob: It’s really remarkable to me because as opposed to some of my friends and colleagues in other markets, it’s not one of the things I worry about when I go to work very much. It’s like, oh, we’re unexpectedly going to have a large churn in the company. We’ve been really fortunate to work with wonderful people including yourselves and the rest of in the investing ecosystem too. It’s really remarkable to me how mission-aligned everybody involved with Terray is. I’ve always been, as you probably can feel through this, super mission-driven, I’m here to make the world a better place and this is how I want to do it. I think that shines through as we hire people and build the team. It’s been one of the most incredible parts of this last journey because the other piece of the milestones is last time we talked, the company must have been four times smaller, and we’ve been through that growth and maintained, as you noted, the quality people we want, intensity.

It’ll sound cliche, but it’s because we hire for the person and the culture and the way we work together, not necessarily just for the skillset, which does make our searches take forever. We talk about this all the time. The trade is always time, because you can find the person who not only does what you want, but also does it how you want to do it. It’s going to take if you hold to both bars. There are times when that’s really tough, and it’s like, we really need somebody, but overall, we’re always happier and more successful when we get both. We’ve built an interview for that since the beginning. As we’ve talked about, I’m not a huge fan of just canonical words for values like, “Oh, we’re about excellence.” Of course, we are. So is everybody else. I hope. Otherwise, I don’t know what you’re doing. We’re really focused on how we work with each other and the operating principles, how we communicate with each other, how we make decisions, how we treat each other.

It’s been just a real joy to watch that cascade down through the teams. I have a little rotating lunch I do across the company, like three or four people every week just to say hi. It’s explicitly non-work, they just get to hear my awesome baseball jokes and thoughts about movies and TV and whatnot. One of the new employees was there, and I was like, “Oh, how’d you find Terray?” They’re like, “Oh, well, my friend who used to work here. She left for a school opportunity.” That was awesome for her. Was like, “You got to work at Terray. It’s awesome.” And nothing makes me happier. The science part obviously motivates me. I love science still. I’ll go back and tell you more about organic chemistry if you’d like, but the building and the people side is every bit and maybe even more gratifying to see such a wonderful teamwork together. I don’t know what the secret is except not making compromises on that aspect. There’s never anybody who’s good enough that you’re willing to compromise how you want to do it.

Chris: Well, I can’t think of a better place to end the discussion on that note about amazing people. You are one of them. It’s been really fun to work together and I really appreciate you joining me three years later for this discussion.

Jacob: Well, I appreciate it, Chris. Not only the awesome conversation today, but, as you know, you guys have been conviction-driven supporters of our work from the beginning, and it's not that easy to find people who want to take the big, big bet and go for the whole journey. It means a lot to me, and as the conversation we just had shows, you guys have been mission-aligned, and aligned in how we want to work together, from the beginning. So I appreciate it. So excited to be back, and thank you so much.

Chris: Thank you.

Curt Medeiros on Revolutionizing Precision Medicine and Scaling Ovation

Listen on Spotify, Apple, and Amazon | Watch on YouTube.

Imagine a future where healthcare isn’t just reactive but deeply personalized — where every patient gets the right treatment at the right time based on a precise understanding of their biology. This is the future Ovation is building, and on this week’s Founded & Funded, Madrona Partner Chris Picardo dives into it all with Ovation CEO Curt Medeiros.

From transforming underutilized clinical data into rich multiomics datasets to forging industry-leading partnerships with companies like Illumina, Curt shares how Ovation is shaping the future of precision medicine. He also opens up about the challenges of building a scalable, privacy-first platform, the lessons he’s brought from leading large healthcare businesses, and why collaboration and diversity are key to solving complex problems in healthcare.

This transcript was automatically generated and edited for clarity.

Curt: Thanks for having me on, Chris.

Chris: Yeah, thanks for being on. This is really fun. I think it would be great for everybody if you could just do a little bit of a reintroduction to Ovation. Why was the company created? What's its role in precision medicine and data innovation? And how do you think about Ovation in this emerging data world?

Curt: Absolutely.

So, the founders put together Ovation with a simple concept. There’s a ton of clinical data and samples flowing through labs across the entire United States, in fact, the world, that are being used for clinical care, but the data and samples are not really being used for research. And so, how can we tap into that?

And that was the founding vision for Ovation. As we've evolved, we set up a software-enabled platform that works across clinical laboratories to help them with their workflows but also to understand which high-value patients for research are flowing through those systems — identifying them, de-identifying their data and samples, and then being able to bring them to the market as multiomics data. What we see as a really important role in precision medicine is enabling large-scale multiomic data sets. Right now, that data doesn't really exist for diseases outside of oncology and rare diseases. Folks have been building data sets in oncology for almost 20 years, and they've seen the benefit of that.

As the landscape gets more competitive with precision medicines in oncology, and there's still a lot of room to grow there, we see pharma, biotech, and other researchers in academia really starting to ask how they can leverage data and these tools to create precision medicines outside of oncology and rare disease. That's where we see our critical role.

Chris: What is the relation of data to precision medicine, and how important is it? Broadly speaking, people have heard the term precision medicine, but I think it's not always precisely defined. Certainly, in Ovation's view of the world, the data side of this and the precision medicine side are pretty linked. So, I'm curious if you could expand on that a little bit and how you see those two fitting together.

Curt: I mean, if you go all the way back to the beginnings of the industry, people were extracting different chemicals from plants. That evolved into understanding animal models and how they can represent different human systems. But, obviously, animals are not really a close link. They provide valuable data, and they provide a roadmap to how people will react in the clinic, but by no means is it perfect. We're focused on human genomics and multiomics. It's the closest you can get to understanding what's going on at the biological level across an entire human, across their different organs and systems. And having that data from the genomic side, so whole genome, through RNA expression of how that is actually expressed in different tissue, to the proteins that are produced, and ultimately to glycomics and metabolomics, really helps paint a broad picture of the cascade that's going on at the biological level. And that's what's necessary, because what people are looking for is the target protein or biomarker that's going to help them understand which patients will respond to a particular treatment and which patients won't.

And that way, you can get them on the right treatment at the right time.

Chris: Yeah, that makes a ton of sense. It’s basically like you need this giant cross-sectional database of individual patients and their associated data to really figure out what are the best treatments that we can build going forward.

Curt: Absolutely. And it’s really there to power models for future discovery. So, we’re getting close to the point where there will be enough data in the next five or ten years where you can start to model again all the way from your germline genomics all the way through what’s happening with the proteins at the cellular level. And that’s what’s going to enable a much more precise approach to these targets and biomarkers, much higher success rates in the clinic, and ultimately much higher success for patients, which is the ultimate goal.

Chris: As we continue to accelerate in this world of AI models for everything, a popular one has been applying AI to human health on the protein-folding side. It has been really interesting and very compelling, but it’s less built out in the rest of the healthcare world. Would you say that’s largely due to the data challenge or people not having resources like Ovation before in order to train and iterate on these models?

Curt: Well, it’s both an availability challenge as well as an economic challenge, right? So, let’s start with the economic challenge. To sequence a whole genome of an individual not so long ago was tens of thousands of dollars. We’re now entering the area where it’s hundreds, and soon it’ll be only a couple hundred.

And that’s really exciting because a big part of the challenge was — it was so expensive to create this type of data in the first place. Now that’s being solved. The second part — as people start to allocate budgets to doing that, as the prices come down — are really the availability of high-quality samples that can be de-identified and linked to clinical records.

Because the genomics and the multiomics are really important, but they have to be correlated with highly, highly curated clinical data on the individuals so that way you understand not just what diseases they have and what medications they might be taking but also what’s their journey. Are they getting worse? Are they getting better? Are they having side effects? It’s important to understand all this clinical context to really understand what are the right targets and biomarkers to go after.

Chris: Yeah, so it’s basically, the single point type data is fascinating, but without the deeply annotated clinical record, the data on all of the sort of related conditions around that individual patient and their biomarkers, it’s hard to piece together the insight that you need.

Curt: Absolutely. And scale is another challenge. There are some really good data out there. The UK Biobank has done a great job putting together a tremendous asset. But it’s a really good start. Most of the clients that we’re talking to are talking about millions and millions of patients worth of data.

And so, we’re looking to build upon the success of places like the UK Biobank. And one other thing that’s unique about our model is it’s not something that we’ve collected over 20 years, like other people that have assembled data. We’ve put together our biobank of over 1.6 million samples on over 600,000 patients, about a third of which is tissue, which is really hard to get.

But also importantly, it’s very representative right now of the United States and, hopefully, over time, the rest of the world, where we have a very diverse patient population. One of the challenges with the existing data sets is that it’s greater than 80%, sometimes greater than 90 percent, Caucasians of European descent. It’s hard to find diverse answers when you don’t have a diverse population. And so that’s part of what we’re building.

Chris: I’m curious just as you give people the context on why everything you’ve done at Ovation is so unique. Why has this been hard? I mean, there is data out there, right? Like we know people, like you said, UK biobank is out there. I’m sure that people have heard about 23andMe and some of those approaches, but clearly, it’s harder than that. And so, I’d love to give you a little bit of a window to say, this is a really hard problem that we’ve solved.

Curt: It is a hard problem. And it goes back to what we were talking about earlier: the economic equation versus the scale equation. When you're talking about consumer or clinical tests, they have to be affordable, whether someone's paying out of pocket or it's going through their health insurance. And what ends up happening is the amount of information that actually gets sequenced is a small fraction of someone's actual genome. A very small fraction. It wasn't until we started to see the whole genome come down to the hundreds of dollars that this started to become scalable.

What we’ve been able to do on our side is build a platform that not only creates the scale that we talked about, but we continue to add the potential in the next year is hundreds of thousands of samples per month.

And so, what that allows us to do is find, through our software and our data, the most interesting patients to study.

And that way, we can focus on sequencing those first. When you go into some of the other data sets, part of the challenge is that they're big at the top level. But then you get into individual diseases, and you want to segment that population into mild, moderate, and severe, to take the most basic segmentation. And then you add a segmentation on top of that around what drugs they're on.

You start to get to really small numbers. And so having not only an existing biobank that has been scaled but the ability to continue to add new patients, add scale, add diversity, and then capture patients as new drugs are launched in the future is really important to power this type of research.
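
To see how fast those cohorts shrink, here is the arithmetic with assumed numbers; all four inputs below are hypothetical, not Ovation's figures.

```python
# Illustrative only: a "big at the top level" cohort after three segmentations.
patients        = 500_000  # total sequenced cohort (assumed)
disease_rate    = 0.02     # prevalence of the disease of interest (assumed)
severe_fraction = 0.25     # share of those patients who are severe (assumed)
on_drug         = 0.15     # share of severe patients on the drug of interest (assumed)

cohort = patients * disease_rate * severe_fraction * on_drug
print(int(cohort))  # -> 375 patients left to study
```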

Chris: That’s a really important point — the Ovation approach there is to continue to build out both the depth and the detail in real time, so you have cohorts that are representative of what’s happening right now with patients and a diverse set of patients.

I think one question that people ask when they hear all this talk about patients is the privacy aspect and how you think about that. And I think the corollary to that is, when you talk about larger pharma companies, say, using this data for modeling, what are they trying to do with it? Is it building amazing new therapies? Is it building great diagnostics? How do those pieces fit together?

Curt: Let me first address the privacy question because that's obviously of critical importance. We work with common technology in the industry to tokenize, which means we remove all of the patient-specific information, and the software basically translates it into almost a complicated serial number. And the way the software does it, there's no way to go backward. So once that patient information is removed, it's completely de-identified. Then we add in the clinical information similarly, with the matching token, so we're not exchanging any information on the patient at all. With that matching token, we then have to construct what the data set will look like and get it certified so that it will not be able to be re-identified in the future. There are complicated statistics around what data is included and excluded, and at what level, that go into that. But we are 100 percent following the same type of processes that I did in my prior life to make sure we ensure the privacy of the folks who are contributing their data. Absolutely.
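
As a minimal sketch of the one-way tokenization idea (illustrative only, not Ovation's actual system or vendor), a keyed hash gives a deterministic, irreversible "serial number" that lets two data sources join records without exchanging identities:

```python
import hashlib
import hmac

# Hypothetical key for illustration; a real system would manage this secret
# carefully and follow a certified de-identification process.
SECRET_KEY = b"example-key-do-not-use"

def tokenize(identity: str) -> str:
    """One-way, deterministic token: same patient -> same token,
    but the token cannot be reversed to recover the identity."""
    return hmac.new(SECRET_KEY, identity.encode("utf-8"), hashlib.sha256).hexdigest()

# A lab record and a clinical record each compute the token independently,
# so the two data sets can be joined on it without sharing patient details.
print(tokenize("JANE DOE|1980-01-01|example-identifier")[:16])
```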

In terms of what happens on the pharmaceutical company side, at the end of the day, they're looking to make better medications for as broad a population as possible. But in the precision medicine world, the way they're doing that is by identifying biomarkers that help them understand what treatment is going to be right for a patient at a specific moment in their care journey, and then understanding how they build a portfolio of treatments that help those patients both across diseases and along that care journey. The pharmaceutical and biotech companies take the privacy piece just as seriously as we do in the rest of the healthcare industry. They understand that having the trust of the patient and of all the different stakeholders is of paramount importance.

And so, we work tirelessly with them to ensure that we're all using and transmitting the data in the right way, because that end objective is getting to those precision medicines. You know, I was thinking about “Anchorman” and Ron Burgundy. I forget what he was talking about, but he said something like, this works 100 percent of the time, 50 percent of the time. And when you think about broad-based medications, they work 100 percent of the time, 30 percent of the time, right? And so, when you're talking about precision medicines and some of what we've seen in oncology, we've seen complete patient populations get very close to 100 percent endpoints in recent trials. And on average, you're talking about 70-plus percent response rates, as long as they have the correct biomarker identified. That's tremendous. It's not just tremendous for patient health, which is obviously the first priority. The second is making it more affordable, because you're not spending money on treatments that aren't going to work.

You don’t have to go through 2, 3, or 4 treatments that aren’t going to work before you find the right one. And that’s the ultimate objective is first on patient health and then on how we can bend the cost curve in the healthcare system at the same time.

Chris: It’s amazing to look at some of the trials of drugs that have failed simply because they gave them to the wrong set of patients. And yet they would have worked incredibly well if they gave them to the right set of patients. And I, And I think that this sort of data approach that Ovation is taking to say, “Hey, go find those right set of patients. There are better therapies that are already out there for you — if we can just figure out what subgroup you are in? Who’s the right set of people to give this to?” And there’s going to be new medicines that will be created using that data that are going to be even better.

Curt: 100%. When you look at the first wave of precision medicines that came on, it often was through the route that you said: either the trial failed, and then they did a reanalysis of subpopulations, found the biomarkers, and redid the trial, or they had enough data to submit from the original trial with that subpopulation. Or sometimes it was even after they were launched on the market, right? The early medicines were launched without biomarkers, and then the biomarkers came later as they learned from the market and what their success rates were. We're enabling people to do that from the beginning, on purpose. That's the key thing: why go through that trial and error? And the whole system has learned, right? They've learned these lessons tremendously with oncology and rare diseases. That's why this is a great time for Ovation, because as we're getting this data out to the market, people have already figured out how to do this, and now, as they change priorities and move toward other disease areas, they're ripe for this type of data to follow the same playbook they used in those other diseases.

Chris: Yeah. It’s incredibly valuable, and I think that it’s endlessly needed by the pharma companies. And that actually brings me to the next great point, which is that on the Ovation side, you’ve really been on a roll for the last year, really landing some of these partnerships with larger companies. I’d love to have you talk about that a little bit — and why now is the time and why it’s been such an exciting time for Ovation.

Curt: I will answer that, but before we go on, I wanted to go back to the research point. We've talked a lot about pharma and biotech, and they're obviously the most active, and they spend the most money on R&D. But our objective is not simply to serve those customers. It's really broader than that, and we're exploring academic partnerships and partnerships with health systems that do research.

We want to enable research with this type of data broadly and eventually get it into the hands of the payers. Being able to have a source of truth in the data on what's happening from the research stage all the way through the market, again, is going to help not only get better medicines and have a better effect on patients' health, but ultimately enable folks to make better decisions.

And not just clinically, but on coverage. So, this also starts to bend the cost curve. So our vision isn’t just pharma and biotech. That’s where we need to start, but we want to enable research and decision-making in healthcare broadly with this type of data because that’s the type of change that’s needed.

Chris: I think that’s a great clarification that this data is exceptionally valuable and useful across the entire healthcare and care paradigm. And that whether it’s from the payers or the doctors or the academic researchers who are working on the science, this is the type of thing they need to accelerate their work.

Curt: Yeah, so on the traction front, there has been so much going on. We created our first pilot data set at the very beginning of 2024 and launched it. One of the things that we learned, as you can imagine, is that whether you're selling just genomics and multiomics data or making available just the clinical data, either one is a deeply scientific sale. You have to be on top of every single detail: not only what can you do with it, but what is it? Where did it come from? How's the data model set up? And so on.

A big part of our learning was that we needed to show people we could do this. For a long time, we were a startup with a presentation and a big biobank. And although people got very excited, you're asking people to invest in creating these data sets, because even though the cost of sequencing is coming down, it's still not an inexpensive endeavor.

Chris: Sure.

Curt: So, we created the pilot data set and then asked, how can we get this into the hands of folks so they can see the quality of what we've been able to produce? We had to go where the researchers were. And we did that in a couple of different ways. We partnered with DNA Nexus, really the leader in managing and analyzing this type of data across the globe, and worked with them to get to the different conferences where researchers in the space — we did inflammatory bowel disease first, so IBD — go to learn about the latest and greatest in the space.

We put together posters and abstracts, and we got accepted at a couple of the top conferences and were able to present. We actually had small luncheons as well for people to come and ask questions and give us feedback. Being able to connect with those technical researchers, those technical buyers, where they normally show up to learn about new things, was really important. And DNA Nexus was a huge part of that.

And then we did pilots. So, we worked with DNA Nexus to get the data into the hands of actual customers and get their feedback. And that was an absolutely tremendous set of learnings for us. And that’s what’s really created the momentum. Showing that we can do it, letting them touch the data, and then being able to say, yeah, we’ve done this in IBD. Now, we can start to do this in other areas.

We just signed our first contract right after Thanksgiving, covering GLP-1-treated patients. So that's both the diabetes space and obesity, as a lot of these patients have multiple comorbidities. It's a really interesting group to study. They're also having challenges getting reimbursement for these drugs, so there's a big push to find a response marker: who's actually going to do well and who's not. That's not only going to help the patients, it's going to help people understand how to get these drugs to the right people and not spend money where they won't work. Right? Really, really important.

We also got our first contract in the IBD space at the same time. And we've now built a pipeline, primarily in IBD, but also in metabolic and cardiovascular disease. I don't think anyone would accuse those of being precision medicines. So, the opportunity now is that we can do a lot better, especially as we're learning how different racial and ethnic groups, and men and women, respond to medications differently. The opportunity to study those from the genetic level all the way forward is right in front of us, and that's what's helping us build our pipeline. It's really exciting.

Chris: Yeah. That is super exciting. And it’s allowing, you know, both obviously Ovation and then your partners to deeply understand what’s going on in all of these areas of health. Obviously, there are the extremely big current ones like GLP-1, but also, to your point, the pervasive ones like IBD and cardiometabolic and places where people have been chipping away at that for a long time and now have an amazing resource that can help them accelerate and move a lot faster.

Curt: Yeah, the other big opportunity in front of us is our collaboration with Illumina, which we're really excited about. We signed that collaboration back in October and have been working tirelessly with them to bring it to market. What excited them about Ovation is, first, that we have a large biobank of samples that are already collected and banked, so we have a lot of inventory we can sequence with them and with pharma partners.

And we’re talking about potentially hundreds of thousands of patients’ worth of data through this collaboration, which is really exciting. But the other part is, when we started to show the numbers, because of the mechanism and the platform we have, because we can match the data and understand what diseases folks have before we ever biobank the sample, we have a lot of very high-value patients.

So, if you go out into the general population, and customers have told us about other data sets (I won't mention whose), sometimes only 20 to 30 percent of the patients are actually interesting, because a lot of the people who got sequenced don't have serious diseases; they might be healthy.

Chris: We’re both one of those people.

Curt: Yes, well, right now at least, yes, for me. But I'm sure there's something in store for me in the future. Being able to get folks who have serious diseases and multiple comorbidities, and not have to spend the time or money on the healthy 25-year-old who hasn't had a chance to develop any diseases yet, is really, really important to how you put this data together. So that was another thing that was exciting to them: a really high-quality, high-disease-burden population.

Third is the diversity of the population. We have over 190,000 patients from underrepresented minorities. That's huge in terms of the diversity of the population and then the diversity of the results. The last part is the ability to continue. A lot of the work that has been done is with biobanks collected over 15 or 20 years. You go through and sequence them, you create the data, and then you're done, either because they don't have any more samples or because the number of samples flowing in each month is small.

I mentioned we had over 600,000 patients and 1.6 million samples. That means we have multiple samples per patient on average, and that will continue to grow as we collect more. So, being able to look longitudinally at patients and see what's changing, especially on the proteomic side, is really interesting, because you'd expect to see different data and different results over time as a patient's disease progresses or improves with treatment. Those are the things we're excited about with Illumina. We're honored to be in a partnership with them and look forward to getting a couple of pharma partners on board to get going.

Chris: Huge congratulations. It's such a big achievement, and such a big partnership, the result of years of work and a reflection of how unique the platform and the data asset are.

Curt: Absolutely, Chris. And I think the good news is we're just getting warmed up. The team at Ovation has done a tremendous job building the network, building the platform, and bringing in the partners who are contributing data and samples. I'm really excited about some of the academic and health system partnerships because not only will they enable a return of data to those institutions for their own research and clinical practice, but they'll also drastically expand our access to tissue, which is absolutely critical to understanding what's going on in the organs at the disease level.

Chris: Yeah. I think the vision, the value already delivered, the acceleration into 2025 — it's a good time to be at Ovation, and it's pretty impressive how this all continues to come together and accelerate. One thing we want to spend a little time on is your interesting journey to Ovation. You used to run a large business unit at Optum and are a deep expert in the space. It would be great to share how you thought about that transition from running a big business unit at a massive company to Ovation, and also how you think about your leadership philosophy and company building, certainly as you're building all this momentum.

Curt: Yeah, absolutely. So, for all my former Optum colleagues: it was a big business to everyone outside of Optum, but within Optum it was far from the biggest. Still, it was a really exciting and innovative team and business that I had the pleasure of leading for many years. I also really enjoyed my time at Merck, another large company, for a big portion of my early career, and then at Optum for the decade before I joined Ovation. One of the benefits is that you get to see so many different things.

A lot of what I’m applying here came from my experience at Merck working, watching, and learning from some of the top researchers in the world and understanding how they think about identifying targets, and what does a good clinical candidate look like, and how do we put together the infrastructure to go after biomarkers and bring them into the clinic with the drug candidates. Absolutely tremendous learning experience, and similarly, at Optum, I got to see every single aspect of the healthcare system. Me and my team’s role was how do we bring analytics to solve those problems? And that’s data software and, and people.

Moving to Ovation, a startup, was something I was really looking forward to. Optum is in some ways a large company that's an affiliation of small companies, so it's not quite the same as a Merck. But still, your flexibility, your ability to pivot, your ability to get investment and go try new things is always limited at a large company with, you know, quarterly earnings and lots of sign-offs and decision makers to get things done. I love the hustle, the ability to move quickly, to try things. Not all of them are going to work; you try things, you learn, you pivot or augment what you tried and try again. That part of it is really exciting: the pace and the flexibility.

Growing up in a couple of those large companies, you're surrounded by people who have had a set of broad experiences, broad relationships, and a lot of the same development paths in their careers. In a startup, you're a much smaller team, so by definition you have people with very different experiences. That's both a positive and a challenge, because sometimes you take for granted that person X or person Y should understand something, and on the other end, sometimes you don't ask the question you should ask because you don't know they have that experience.

A big part of what I try to do with the team is make sure we bring the best minds together on any particular problem, but also create a culture where we all know we're going to make mistakes and fail at certain things, and it's absolutely the right thing to do to ask for help. I ask for help from the team all the time. On a lot of the topics we talked about today, there are much better experts on the team than me, and that's who I go to when I need to understand something or when there's a critical decision. We make sure we get the right folks in the room to make the critical decisions. This isn't an army of one in any particular area; this is a team, and we succeed or fail together. Asking for help and asking people to rally is absolutely the culture I think we have, and one we're continuing to foster.

Chris: Yeah. And to your point, it's a much smaller team, but with diverse, sometimes unpredictably different, perspectives going after problems that haven't been solved. And I think that brings me to my next question, which is: what is most exciting for you about Ovation and its potential in the coming years?

Curt: I see us as, first and foremost, the world’s leading multiomics data provider with the ability to have very, very dense data for each and every patient. So the way people are doing these things, in general, today, they might have very small data sets with each of the critical components together, but if they have anything scaled, usually they have one component with one population, a second component with a second population, a third component with a third population, and then they’re trying to use analytics and AI to sort it all out. And it’s not to say that that’s not a good approach, given the history of how this field has evolved, because it’s really the only thing you could do.

We have an opportunity to put together all of those different pieces of the multiomics puzzle for the same patient, with rich clinical data, in a way that can really speed everything up: the model development, the candidates into the clinic, and ultimately getting to market. That's what gets me super jazzed. I also think that as we continue to grow the data set and add more data, we'll have an opportunity to become true experts in this and start to transition to building some of those models, or providing some of the analytics, as well. So instead of focusing on how to find the targets and biomarkers, our clients can start to focus more on model building and application after that.

How do they actually speed things into the clinic? How do they speed things to market, which is where their true expertise lies? Not to say that they're not experts in finding the targets and biomarkers, because they are, but if we can be an essential resource in helping provide those answers, they can then apply their expertise downstream, which we will never have. So that's super, super exciting to me.

Chris: What you can do, to your point, with both the data and the modeling and the analytics on top of it is pretty incredible. And I think that brings me to the last question: you really are an expert in precision medicine. Broadly, beyond Ovation, what most excites you, or what is most going to surprise everybody to the positive, in the field of precision medicine over the next five years?

Curt: I think there are multiple aspects to that answer. The first is the benefit to the patient. When I think about 10 or 20 years from now, what does that look like? It's having multiple biomarkers, so you can discern even more granularly what the best fit is. As competition increases in individual diseases or disease states — say, mild-to-moderate IBD — people are going to come in and copy those individual biomarkers. So it's going to expand to where you look at a host of different biomarkers. I don't know if it's three or seven or ten, ultimately, where the science goes, but you're going to be able to discern the patient population more and more finely.

And what that’s also going to enable is a much more structured way of how you select treatment for the individual, but then how do you actually select treatment across the care journey? So, when the first medication, which is a great fit and is working fantastically, starts to work less effectively in year two. Already have the data and information on that patient as to what, what are the signs to look for, and what’s their next treatment. And being able to have it not just at the population level but across the care journey. I think that’s going to be really, really important.

And the second part is what we mentioned earlier. As the population gets continually refined and more personalized, and competition increases because there's more data available, you can go through the development and commercialization process faster. The cost of developing these drugs is going to come down, but competition in the market is going to go up. Ultimately, it's about getting better care for the patients, but also being able to do it at a much more affordable price. Piling on more expensive drugs is not a sustainable long-term outcome. Enabling this type of innovation in development and commercialization, and then enabling coverage and clinical selection with this type of data across the entire industry, is really going to let people do this at a more affordable price. And that's part of the ultimate goal. Better patient care is number one, but it's also about building this in a way that's economically sustainable and competitive, so we can help drive down the cost of healthcare for the individual at the same time.

Chris: I think that’s such a compelling vision and to think, if you can really leverage all this data, the ultimate outcome is better care for more patients, much more affordably — such a good vision to have and to build toward. And so Curt, I really appreciate you diving into all of this with us on Founded & Funded. It’s super fun to talk about all things Ovation and precision medicine and all of the incredible acceleration at Ovation. And we really appreciate you having this conversation.

Curt: Thanks for having me. If you ever have an empty spot in your podcast schedule, I can talk for three more hours. So let me know. Thanks again, Chris.