From Google and Amazon to NewDays: Why These Tech Vets Bet on AI for Dementia

 

In this episode of Founded & Funded, Madrona Managing Director Tim Porter sits down with Babak Parviz and Daniel Kelly, co-founders of NewDays, a platform purpose-built for older Americans living with cognitive change to help them reclaim abilities, preserve independence, and keep being themselves.

Babak and Daniel share how their experience at Amazon and Google led them to apply AI in the most human way: helping people with dementia and mild cognitive impairment (MCI) with scalable solutions.

They dive into:

  • Why they chose to build at the intersection of AI and cognitive health
  • How to translate clinical science into accessible daily experiences
  • What “meaningful velocity” looks like inside an early-stage company
  • How to balance mission-driven purpose with commercial viability
  • Lessons learned from scaling startups and building impactful teams
  • Why the next wave of generative AI is human-centered, not model-centered

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Tim: Well, let’s start with the obvious. This is a massive problem, and it’s a deeply personal one. One in three people over the age of 65 in the US is dealing with some form of cognitive impairment, dementia, or early-onset Alzheimer’s. We all have someone in our lives, either a parent or a relative, an aunt, uncle, a close friend, and we know it’s heartbreaking because there’s really no treatment to date; there’s nothing you can do. You’ve now identified a treatment path that is actually validated, and technology, specifically large language models, has made it possible to go build a service to address this. But yet it’s a big step to decide to start a business. With that big problem, what made you say, “This is the right ‘why now.’ Let’s go start a business to address this opportunity or this big problem”?

Babak: Maybe to take a step back and share where this all came from. I was at Amazon for a number of years, and one of my roles was to figure out what the company should do next. So we ran many different investigations in many different areas, and one of the investigations that we ran was about aging and what happens to older people. We looked at this nationally and globally, and we surfaced many problems, and this was, I would say, one of the most daunting areas that we looked at. But it was really hard to find radically new solutions to these problems that we could convince ourselves would move the dial in a meaningful way.

And as you mentioned, if you think about the scale of the problem today, if you look at the population over 65 in the United States, 11% of them have dementia, and on top of that, 22% have something called mild cognitive impairment, or MCI, but that’s truly a euphemism. Someone with MCI is impaired enough that they may not be able to pay their bills anymore. So these are very substantial cognitive issues. One third of people over 65 are battling with these cognitive changes. And we all know that our ability to remember things and our ability to reason about things really, to a large extent, define who we are. So once they start to go away, we feel like we are fading away, and people around us also feel that the person is fading away.

So there’s a substantial emotional toll on the person, and after that come financial issues and many health issues. These are very, very serious things that we have to deal with. And even though we just say one third of people over 65 deal with this, if you think about any person living in the US today, they are very likely to have a spouse or parents or siblings, so with a likelihood of about 90%, this situation will hit someone close to us sooner or later. So basically, it’s going to hit all of us.

And if you think about what solution is available to people today, if I go to a doctor and get a diagnosis of MCI, for the most part, the next step is good luck. There’s not much you can do. There is no drug today that can cure these diseases. There is no drug today that can even stop these diseases. So this highly motivated both Daniel and me to do something about it. And the way we went about it was to step back and ask: where do we have solid clinical evidence of an intervention that can help people? So we did all the background research to surface randomized clinical trials, the gold standard of medicine, that showed any form of intervention that was beneficial to people.

So we went after those, but we realized none of them had really scaled because they’re limited by the availability of trained expert humans. And then we realized that we now have radically new technology available to us in the form of generative AI. So we put the two together. There was a massive unmet need, and there was a radically new technology that could bring medically proven interventions to a large number of people. We put them together and decided that this is the right time to go after this problem and help the many people who have no recourse otherwise.

Tim: Amazing, Babak. I was not aware of this research. And just in simplistic terms, these therapeutic interventions really boil down to frequent, immersive conversations. And there’s a clinical rubric behind it; you guys should describe that more. How does that come together in a product?

So you had this insight, there’s real research that underpinned it. There’d been no ability to turn that into something that scaled before. Daniel, tell the audience what actually is this thing? I’m sure a lot of people are like, “My gosh, I have people in my life I want to introduce this to.”

Clinician-Guided Conversations, Powered by AI

Dan: Absolutely. I like to think of NewDays as a therapeutic intervention for people with MCI or dementia. Really, it comes down to a telehealth clinic where you meet with a clinician who has expertise in cognitive therapy, and then we amplify that clinician’s ability to work with you through these conversations that you do with a large language model.

As far as the product goes, really all of it is based on three methodologies: cognitive stimulation therapy, cognitive rehabilitation, and cognitive training. I think those methodologies spell out a range of conversations that can be beneficial for people with MCI and dementia. On one end of the range are casual conversations, long-form casual conversations. I almost think of this as like going for a walk. These long-form casual conversations are just good things for you to do to maintain your ability to continue to have conversations with the other people around you in your life.

And within our product, these casual conversations are a good way to promote reminiscence about past memories. They’re a good way to practice verbal fluency, so finding words that you want to use to express yourself. They’re also a fantastic way to reinforce the concepts that you’re working through with your clinician in the telehealth setting. Repetition is important for the memory of someone with MCI and dementia, and these kinds of casual conversations are a good way to promote repetition.

Maybe on the other end of the spectrum are more challenging conversations, so these are maybe like doing sprints. These are designed really as a stimulus to stretch your ability within a particular cognitive function, and then that stimulus becomes something that your brain has to respond and adapt to. And then ideally, those challenging conversations would happen in a context that is as close as possible to a real-life scenario, so that you are challenging yourself in these conversations. But then in your real life, there’s a quick transfer between realizing like, “Oh, I was just working on this same skill through NewDays, but now I find myself in a similar situation in my everyday life, and I can apply the same methodologies there.”

Babak: And if you think about all of them, they’re highly personalized and they’re delivered in a conversational form and they require someone who’s trained to deliver these therapies. So they’re highly dependent on the availability of that trained individual to work with a patient to deliver the therapy. And that has been the issue with scaling these interventions because we do need millions of trained professionals to deliver these therapies. They’re unavailable, and even if they’re available, the cost of the highly trained humans to deliver these therapies would be astronomically high.

Clinical Foundations of AI for Dementia

So that’s why these therapies did not scale, and that’s where NewDays comes in. We allow the scaling of these therapies, which are highly personalized and conversational, through the use of generative AI, or large language models. You mentioned some of the clinical trials that we have anchored our work on. One of the most exciting ones we came across was a study led by Professor Hiroko Dodge, a professor of neurology at Harvard University. This was a long study that took many years to run, but the results are pretty fascinating.

The most amazing result is that, in this registered randomized clinical trial, they managed to do something highly unusual: they increased the cognitive score of some of the participants. What this practically translates to is pushing back the symptoms of cognitive decline by six months or more. So this is really incredible for giving people time with their cognition.

If you look at those conversations, they look like normal conversations that you might have about the particular topic. Let’s talk about the Second World War or something like that, but under the hood, they are designed to encourage reminiscence. They are designed to challenge the person’s vocabulary in a particular way, and they’re designed to challenge the person’s critical thinking. So even though on the surface they look like normal conversations, when they’re delivered, there’s actually a purpose for these conversations.

So we saw this, we got super excited about it, and we licensed the clinical methodology exclusively for our company. That’s one of the things we deliver through our AI system: these types of conversations. What we are building, as Daniel mentioned, is a system that has two parts. One is that the patient interacts with the clinician on video. That’s not as frequent; it could be once every two weeks or once a month. But every day of the week, the patient is interacting with the AI.

We still have the human expert in charge, but by using AI tools that augment and amplify the human expert, we can do 20 times more. So there is the patient-clinician interaction, there’s the patient-AI interaction, and, very importantly for us, and that’s a lot of the technology Daniel is building, there’s also the AI-clinician loop: the AI informing the clinician, and the clinician controlling the AI.

Tim: Great blend of empowering the human to do more, the therapist, and using AI and technology to create a great patient experience, great user experience. And everybody has played around or done more than play around with ChatGPT. It’s incredible, but it can also get off the rails. There are a bunch of problems with it. This is a lot more than just slapping a voice front end on ChatGPT in the background. Maybe talk about some of the things that were hard problems to go from, yes, you can go interact with ChatGPT to having a delightful and reliable exercise and solution you’ve built.

Making Conversational AI Reliable

Dan: These days, I really break down voice-based interactions with an LLM into two categories: there are verbal interactions with an LLM, and then there’s conversational AI. When I think about verbal interactions with an LLM, it’s almost like I have a query in mind. I have an objective for why I’m coming to this LLM, and I know the result I’m looking for. Sure, I’m choosing to use my voice to interact with it, but I’m really going to rate that experience based on whether or not it gave me the information I was looking for in a relatively simple way.

When it comes to conversational AI, I think it’s just a much different problem overall. Users will come to it without a particular objective in mind, and a lot of the problem shifts into the space of having an engaging conversation. Part of it is the semantics of the conversation, and part is the mechanics. The semantics is really the content of the conversation: is the LLM responding in an engaging and interesting way?

I think LLMs are reasonable at this today. A lot of times, though, they can become quite repetitive, and this is in how they’re trained, and it’s also in that the conversation history can kind of shoot the model into responding with the same patterns every single time that it responds. I don’t think that’s representative of what a true conversation is like. I think there’s a lot of variety in conversation. I think that when you have interactions with an LLM where the content is kind of formulaic and repetitive, you start to lose a lot of those engaging characteristics that would make you want to have a long-form conversation with the system overall.

The other piece is the mechanics of the conversation. This is how things are said. I think there are just a lot more of both of those components involved when it comes to conversational AI. The content is really important, the variety of the conversation is really important, the mechanics of the conversation are really important, and all of that is a very different scenario than sitting down and bringing a query to ChatGPT, just choosing your voice to interact with it.

Tim: Very complex system to make it work reliably and really human-like. Babak, you mentioned earlier that there are a lot of clinical trials underpinning that this type of therapy works. You’ve actually licensed some of that research, and that gets baked into the product. Maybe explain a little more. It’s interesting technically how you take this proprietary data and incorporate it into this type of system, but it’s also just super important for the fidelity of what you’re doing. It’s not just having random conversations; it’s actually underpinned by the type of approach a therapist would take with you.

Personalization, Memory, and the Clinician-in-the-Loop

Babak: So I was at Amazon when we launched Alexa. Daniel and I were at Google when we built the voice interface for Google, which started with, “Okay, Google.” Well, at first it was, “Okay, Glass,” and then it became, “Okay, Google,” and everyone used it on their phones. The state of conversation with an AI system until very recently was, “What’s the time?” “It’s 8:30.” That was the end of the conversation. Multi-turn was very difficult. Now with the advent of the new LLMs, especially ChatGPT, which is widely available, you can have a multi-turn conversation with an LLM, but these are not really optimized to hold a meaningful half-hour-long or longer conversation.

So one of the core technologies that Dan has built is really to enable a long form, half an hour or longer conversation with an AI system that feels natural and engaging. This actually is extremely difficult. It’s done with many agents and many models under the hood. So the technical implementation is quite complex, but we’re at the point that we can actually hold long-form conversations, unlike any other LLM system, which is not really optimized for this purpose. The other point that’s important is that we need to get to know the person and personalize it as we have these conversations, because these are meant to be daily.

So Daniel and the team have had to build a very specific type of memory for the system, one that begins to learn about the individual, which is very different from a generic memory, especially for the population we are dealing with. As an example, we may hear from the person, “I don’t have a brother.” Two weeks later the person might say, “My brother said X or Y.” So the question is: what should the memory do with this? Because the fact we previously stored to personalize for this person is that they don’t have a brother, and now they’re saying their brother said something. Is this because of their dementia, or is there something else going on?

So the memory built for this system, optimized for the population we’re serving, is a very specific type of memory that gets to know the user and reflects on what the conversation is doing. That’s the second big difference from something very general like ChatGPT. So: long-form conversations, and a very specific type of memory for personalizing to this population. The third part is that we are having very frequent conversations with our users, and we need to report back to the clinician what’s going on in those conversations.

So periodically, our AI system publishes a report to the clinician about what is really happening in these conversations. Is there something they need to pay particular attention to? That’s something no GPT system has at the moment. And then our clinicians also give guidance back to the AI about what kinds of conversations to have. So at the moment, this is the only system we know of that is under the direct supervision of a clinician. And our team had to build the ability to deliver these conversations while maintaining the particular therapeutic purpose behind them. Those are the things we are doing right now. Super exciting.
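[Editor’s note: purely illustrative, not NewDays’ actual implementation. One way to picture the memory behavior described here, never silently overwriting a stored fact, flagging contradictions for the supervising clinician, is a sketch like this; all names and structures are hypothetical.]

```python
# Hypothetical sketch of a personalization memory that flags contradictions
# for clinician review instead of overwriting what it previously learned.
class PatientMemory:
    def __init__(self):
        self.facts = {}   # topic -> currently believed fact
        self.flags = []   # contradictions awaiting clinician review

    def record(self, topic, statement):
        previous = self.facts.get(topic)
        if previous is not None and previous != statement:
            # Don't overwrite: surface the conflict to the clinician, who
            # decides whether it reflects cognitive change or a correction.
            self.flags.append(
                {"topic": topic, "stored": previous, "heard": statement}
            )
        else:
            self.facts[topic] = statement

    def clinician_report(self):
        # Periodic summary for the supervising clinician.
        return {"facts": dict(self.facts), "needs_review": list(self.flags)}


memory = PatientMemory()
memory.record("siblings", "has no brother")
memory.record("siblings", "mentioned a brother")  # contradiction -> flagged
report = memory.clinician_report()
```

The point of the sketch is the design choice Babak describes: the stored fact stays intact and the discrepancy travels to a human, rather than the AI resolving it on its own.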

Tim: If you put it in the perspective of the company, and what are the moats that you’re creating, or a little bit, what was the investment thesis? You guys are an amazing team. This is a really big, important market. You’re building really hard tech that somebody else can’t go build. You have proprietary data that’s getting incorporated into it.

And then there’s this human piece: the telehealth part. We decided that was really important, and that’s a big lift. You’re operating in three states right now, and you’re seeing patients in those states. Maybe say a little bit about the other hard parts, the moats, or just: why the heck do this human part of it? Why not just launch the AI app out there? That sounds easier.

Why Humans Stay in the Loop: Care, Safety, and Trust

Babak: The human part is really important. This is actually a major part of our thesis: these types of interventions, for the foreseeable future, should be supervised by human experts. We don’t want to just run the AI open loop without supervision. We’re always going to maintain human supervision for this. It gives us confidence that we’re doing the right thing. I think it also gives our users more confidence that-

Tim: Big time.

Babak: Yeah, the right thing is being done for them. So there’s a pilot in this plane, so it’s not an autopilot.

Dan: There’s this saying within the healthcare industry for startups that services save lives and software improves efficiency. I think when you’re in healthcare and you’re dealing with someone’s cognitive decline, and they have now been diagnosed with that condition, just having a person to speak to about that condition is amazingly powerful. Versus, “Hey, I’m just going to go interact with a piece of software every day without any sort of oversight from another person,” which I think would just never be able to get off the ground in the same way.

Tim: The product’s live, so people can go try it. You can tell others in your life to go try it: Newdays.ai. You built an incredible amount in a short period of time, you’re barely six months into this, and the product’s live. This isn’t the first time you’ve done that. You’ve actually gone from an empty room to a startup before. We’ve referenced Google and different things here in the conversation. Maybe back up and talk a little bit about how you two decided to work together. What was your history before that made you say, “Yeah, let’s go jump into this big problem and do it together as co-founders”?

Dan: Babak and I have known each other for 17 years now. It’s been a while. We’ve worked together a bunch of times, and that part of it is absolutely fantastic. As for the reasons to go do this company: we saw cognition as this huge problem where we should try to go make a difference, and we saw LLMs and generative AI as this transformative technology that could be very beneficial there. We both have family histories of the disease, so it made a very personal connection for both of us, something we would be invested in working on because we know how much of a problem this can be for patients and families.

And then, yeah, as far as co-founders go, there are aspects of working at Google, Google X, and Amazon Grand Challenge; these were very much startups within giant companies anyway, so in some ways Babak and I have worked together within startups for a long time now. But I can also say that co-founding a startup with somebody you know so well and trust 100% is fantastic, because there are so many things to go do, so many problems to solve.

And that if you just know that the other person is capable of getting those things done, it just makes the whole process that much more seamless because you trust each other and you just say, “All right, I know that these are the problems you’re working on and these are the problems I’m working on and we can chat about those problems, but I also 100% trust that you’re getting these things done,” and that makes… Startups are always a journey, but that makes the journey just a little bit easier.

Babak: Plus one to everything that Dan said. I would say both of us are highly motivated not just to do something that’s cool and interesting, but to do something that’s meaningful: not a niche product, but something that’s genuinely good and can help millions of people. That really motivates both of us, and we both like velocity of execution, because we’ve built a number of things from the ground up, from robotic surgery to Google Glass to e-commerce services to human-machine interfaces.

So many things, many times from really the ground up, like an empty room with nothing. And we’ve built these products and launched them. Some of them have been quite successful, some of them less successful, but we’re not afraid of building from the-

Tim: Like all startups.

Babak: Yeah. So we are not afraid of building from the ground up. We love velocity, and we love working on something that’s meaningful. I also have to add that we want to make this a successful business. I actually do believe in philanthropy, and I think in our personal capacities we are engaged in that, but putting that aside: in order to truly change the world with a solution that does good for people, we need to make it financially viable. That’s the only way for something to scale. So even though our intentions, as we mentioned, are first and foremost to do something good for the world, we know that we need to make this a successful business in order to have the impact that we want to have. That’s equally important for us, because otherwise it’s not going to have the impact.

Tim: In these early days, I think you’re really focused on initial users, getting lots of people to try and experience the product, and we’ll come back to that aspect of it. But in healthcare, there’s always this question of how you get distribution, and there are multiple ways. It’s early days, but how are you thinking, in a really customer-centric, user-centric way, about how you ultimately go to market, to use that startup term, for this type of service?

Babak: Yeah. I would say our GTM is still a work in progress. We haven’t exactly nailed it.

Tim: Absolutely.

Babak: We have two things that we are doing. We’re doing D2C, trying to remove all barriers so people can access us as fast as possible and with a minimal amount of effort. So there’s a D2C flow to how we operate. And then there are partnerships: we’ve had a number of conversations with different entities about partnerships that would not be D2C; they would be some form of enterprise. Both are in progress, and we’re trying to figure out which one is the best way.

We’re pursuing both at the same time because they’ve both shown promise and they’ve both shown challenges for us. So we’re learning and experimenting with both. What I would say is that something highly motivating for us, as we recently exited our beta, is the feedback from our beta users. As you know, a startup has lots of ups and lots of downs, and every time we get feedback from our beta users, it’s really a shot of espresso for the whole team, because we actually see that we’re helping people. This has been really meaningful for us. Just in terms of-

Tim: Let me break in on that, because I want to talk about this. Amazon’s famous for working backward from the customer, and I’ve been super impressed by how you’ve gone about it. Not just “of course, we need to go get user feedback,” but a real process leading into the beta and now into GA, going to lots of different users. Maybe just talk about how you’ve approached being really customer-driven and getting feedback on how the product’s working, as well as any aha moments from hearing this feedback from end users.

Babak: Daniel and I both spent a number of years at Amazon, and that’s something Jeff is amazing at drilling into people: being customer-obsessed. That’s really a superpower of Amazon, and we learned directly from Jeff how to be like that. So we tried to-

Tim: I think it’s been a superpower for NewDays too.

Babak: Yeah. So we’ve tried to bring that part of the Amazon culture into the company, along with some of the mechanisms Andy and Jeff have built at Amazon. From day one we have been very customer- and user-obsessed: we basically maniacally ran user studies, ran beta tests, and measured CSAT, the customer satisfaction score, to make sure we are building something good for people and that people actually like it. And I’m happy to report that as of now, our CSAT is really high. It’s measured on a scale up to five. For our AI exercises, our CSAT at the moment is 4.55, which is very high. For our full clinical service, it’s 4.8.

Tim: Wow.

Babak: So people love this service. That’s the numerical measurement, but anecdotally, what we are hearing from people is that they are actually seeing really meaningful results. That has been an extra motivating factor for us to continue, and it encouraged us to exit beta and make the service more widely available.
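[Editor’s note: for readers unfamiliar with the metric, CSAT as described here is simply the mean of user ratings on a 1-to-5 scale. A minimal sketch, with made-up ratings rather than NewDays’ data:]

```python
# Illustrative CSAT calculation: average of user ratings on a 1-5 scale.
def csat(ratings):
    """Mean satisfaction score for a list of 1-5 ratings, rounded to 2 places."""
    if not ratings:
        raise ValueError("need at least one rating")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on a 1-5 scale")
    return round(sum(ratings) / len(ratings), 2)

sample = [5, 4, 5, 4, 5]  # hypothetical survey responses
score = csat(sample)      # 4.6 on the 5-point scale
```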

Dan: I do agree. As MCI and dementia progress, people tend to socially withdraw a little bit; they don’t seek out social interaction the way they used to. There was one participant engaging with the exercises who used to be this really outgoing, gregarious individual and had definitely pulled away from that. But as soon as he started interacting with the exercises, all of a sudden that aspect of his personality came out all over again. His care partner was there with him as he was interacting with the system, and her mouth just hit the floor, because she had not seen this aspect of her husband in a very long time.

It was almost like these exercises are a safe space. They can interact without all of the social pressure, or the kind of fear and worry, that can come from interacting with other people. Just hearing that sort of feedback and seeing those sorts of things is incredibly empowering.

Tim: So powerful. For me, just thinking about it, it’s like, okay, is someone really going to want to have this in-depth conversation with an application? And not only have you seen that they do, there’s this idea that maybe you’re self-conscious in groups of people and unsure of yourself a bit, but here there’s no downside. And that really builds on itself; you’ve seen that multiple times with your early users.

Dan: Absolutely. I think that there is a certain amount of interacting with another person where there’s some amount of concern that you may repeat yourself or you may not just come across as the person that you think of yourself as. And so that may lead to the decision to just refrain from the conversation and not participate in the same way you would’ve. But when you have a system like this, there’s no judgment there. You can make mistakes. It’s the place that you’re supposed to practice in order to continue to have those interactions with the other people in your life.

Babak: Again, to complement what Daniel was saying, we’ve heard this multiple times now from our users: they feel this is a no-judgment space. When they talk to Sunny, our AI character is called Sunny, they feel like they can just talk, and they’re not worried about being judged. And we love that, because we want them to have these conversations, since we know the therapeutic results there, but we also want to encourage them to socialize more.

So it’s not really meant to be a substitute for socializing with other people. It’s meant to be a safe place to practice and get better, so they feel more confident, so they can actually socialize more. So the more they socialize, the better, and this is a place for them to practice and build confidence, and hopefully they feel overall better about themselves and their abilities.

Dan: The other place where we’ve heard really great feedback is from the care partner as well, and that the person with MCI or dementia can sit and have a conversation with our system and it’s almost half an hour to 45 minutes of respite care for the care partner where they get a break and they don’t have to be so involved in the care and management of their loved one.

Tim: You mentioned earlier you’re building a business, and people might be wondering about that. It’s early days, and I’ve really encouraged this too: reduce all friction to getting users to experience this thing and wanting to come back, and then those positive references build on the system. But eventually, you’ve got to make money. It’s early days and you’re experimenting, but how do you think about pricing a product like this? Is it someday an insurance-pays-for-it type of thing? How do you think about that aspect of the business?

Babak: So we have two components now. There’s access to the AI and all the exercises, the AI platform overall, and then there are the clinical visits. We already accept insurance for our clinical visits, and that really helps lower the actual cost to people who participate in the program. The AI access part is out of pocket, and we have a subscription for that; at the moment it’s priced at $99 per month, which appears to be fairly affordable for the population we are targeting. And every day you can go in and do a free conversational exercise, and then, if you’d like, at some point you can upgrade to become a full member of the clinic. We’ve definitely tried to make it easy for people to access the system.

Dan: Maybe one thing I’ll add is that an additional feature we think is really important, and that we’re building toward now, is providing people feedback about their cognitive health. We have to do this. There’s a lot of UX associated with providing feedback, but my general impression is that people don’t just want to sit and have a friendly conversation with an AI for no particular reason. It’s not like you’re walking down the street, see a stranger, and stop and talk to them for 45 minutes for no reason.

You want to know the motivation or justification for participating in this particular program. And I think one thing that’s top of mind for all of our users is: Is this helping? How am I doing? So if we can provide feedback, it’s another piece of the puzzle, going from casual conversations to complex conversations, all of it overseen by the memory and the things we’re learning about you, to make it very particular to your cognition and your life experiences. And then we can use all of that information to give you feedback that justifies coming back to these exercises and helps you understand that process overall.

Tim: I mentioned how fast you’ve built the service and the company, and you’d never do that without a great team. Part of being able to move fast was that you hired a great initial team quickly, and hiring is hard. Maybe talk a little bit about how you were able to do that and the type of culture you’ve tried to instill at NewDays from the inception.

Dan: I mean, we definitely leveraged as much of our personal networks as we possibly could in order to find people that were interested in this space. It’s nice having-

Tim: The mission’s important though.

Dan: Yeah, the mission is kind of one of the big selling points for coming to work for this company, is that if you are the type of person that wants to take your technical expertise or your business expertise and apply it towards making a real difference in someone’s life, then this is just a fantastic company for you to come look at.

And so, yeah, we leveraged personal networks to the greatest extent possible, and have also just been super scrappy about finding fantastic talent to help us build things as fast as we possibly can. We have people here in Seattle, we have people in New York, in Argentina, in Indonesia, and every day I’m impressed with the work they’re getting done. So yeah, I think it’s just a fantastic team so far.

Babak: It’s an accurate statement that every single person we tried to recruit, we actually successfully recruited. That has been our track record. And hopefully we can-

Tim: Knock on wood.

Babak: Knock on wood. Hopefully we can maintain that. But pretty much everyone we wanted to get, we got. I think part of it is that they found not just the technology exciting, because this is obviously cutting-edge AI, but the mission meaningful. Working on the hottest technology of the day and using it to do something very meaningful and humanly relevant has been a good combination. As Daniel mentioned, our team is quite distributed. We fly people in on a regular cadence, so we actually get together physically on a regular cadence, and that has been helpful in building the company culture.

And I’ve got to confess that that’s different post-COVID, because I guarantee that before COVID, if Dan and I were building this team, we would’ve insisted that everyone be in the same building in Seattle. The post-COVID world is different. We are learning how to operate in a more distributed way, and in a sense this has been liberating, because we can recruit from all across the US and globally. As I said, we have people on three different continents right now. But we also have to be even more intentional about building the culture of the company, to build trust and build velocity. And part of it still relies on flying people in to spend some time physically together every month.

Dan: I agree that the post-COVID world is just totally different. Babak and I chatted a lot about this. We’d both been part of a hundred percent remote teams during COVID and agreed that that didn’t work. We’ve also been part of teams where everybody’s in the office five days a week, and you can still get some of the most toxic environments, with competition between different teams. So being in the office all the time isn’t the solution either. Ultimately, you have to put work into the company culture just the same as you put work into the technology or the business you’re building. If you’re not doing that, the culture is probably going to get away from you.

Tim: You raised a nice round of funding early on to start the company. We at Madrona were grateful to be able to invest alongside General Catalyst, and Holly Maloney has been an amazing board member and investor with lots of healthcare experience. But the amount of resources still pales compared to doing this inside Amazon or Google, where you did start with an empty room, but there were more resources if you needed them and could make the case. What other learnings or general advice for other founders, on how to get going and how to move fast, have you picked up? Probably some things you’ve learned the hard way, even in these first months.

Babak: If you get a no from a VC firm, do not necessarily get totally discouraged, because premier VC firms see a lot of pitches, and the success rate at these firms is 1% or below. That no doesn’t necessarily mean your idea is bad or you’re a bad entrepreneur. It might mean the answer is no at that particular time, and maybe next year the answer is yes. Or there are a lot of things going on inside a VC firm that might result in a director saying no. So I would say to an entrepreneur: if you hear a no, don’t get discouraged. Continue if you’re convinced your idea is good.

The other thing I would recommend is to maintain execution velocity. Daniel and I have also been part of some of the biggest companies in the history of the world. Big companies have a lot more money, a lot more resources, a lot more people, and good, smart people at that. They have everything going for them. They have more access to customers; everyone they call will pick up the phone. The only thing the startup has is velocity.

So as a founder, if you find yourself in a situation where your velocity is starting to stall, that’s a major red flag. We have to have velocity. That’s the only way to be competitive when you don’t have massive amounts of money, infrastructure, or people. So paying attention to velocity is super important, and that’s part of the culture building we have to be intentional about. The company incorporated velocity of execution into the core of its values early on.

Dan: I was having this conversation with Trevor, our CTO, the other day, and we were talking about moving fast. I gave this example: there may be a problem posed where someone asks, “Which one of these two objects is bigger, the bus or the motorcycle?” And our engineer instincts say, “Oh, I know how I can figure this out. Here’s my process description of how I’m going to do it, I’ve calibrated all my measurement instruments, and I know exactly the experiment I’ll do.” And then someone else walks up and just says, “Yeah, the bus is bigger,” and you move on.

And I think that when you’re doing a startup, there are a lot of times when you’re facing those sorts of problems, the bus versus the motorcycle; it’s just harder to recognize because you’re working in a brand new space. There are a lot of times when you almost have to turn down your engineer instincts a little just to move faster, because the solution is actually obvious. It helps to figure those things out quickly rather than dialing up all the engineering processes you know how to do and that have been beaten into you for a long time.

Tim: And then jump on the motorcycle.

Babak: I would also add one thing that might be counterintuitive: a bad decision is better than no decision. A bad decision allows us to proceed, figure out where the decision was incorrect, course-correct, and get to the right decision, but indecision, making no decision, would basically torpedo the operation.

Tim: Well, thank you both so much. So excited about the future of NewDays. It’s early days, but the results point to a future where AI for Dementia complements clinicians and restores confidence. Appreciate the opportunity to work together and build this service. So congratulations and thanks.

Babak: Thanks so much for having us.

Dan: Thanks, Tim. This is great.

Learn how AI for Dementia supports patients and care partners at NewDays.ai.

The Future of Biology Is Generative: Inside Synthesize Bio’s RNA AI Model

 

In this episode of Founded & Funded, Madrona Investor Joe Horsman sits down with Jeff Leek and Rob Bradley, co-founders of Synthesize Bio, a foundation model company for biology that’s unlocking experiments researchers could never run in the lab.

Jeff, chief data officer at the Fred Hutch Cancer Center, and Rob, the McIlwain Endowed Chair in Data Science, share:

  • Why a startup is the right fit for generative genomics
  • How generative genomics could reshape research, drug trials, and more
  • Why now is biology’s “ChatGPT moment”
  • What makes Synthesize a true foundation model for biology (not a point solution)

Whether you’re a founder, biotech innovator, or AI researcher, this is a must-listen conversation about the intersection of AI, biology, and the future of medicine.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Joe: Maybe we can start off from the beginning. What is the founding story of Synthesize? Take us back to the conversation that kicked off this company, and why we’re having this conversation today.

Jeff: Rob and I have known each other for a long time. We’ve been academic colleagues for probably 20 years and have followed each other’s science. I moved back to Seattle about three years ago, and as academic leaders, we were running into each other in the halls. This idea of building a foundation model for biology was something that was on both of our minds. We started talking about it, and there was just enough information out there and just enough of an idea out there that we felt like we might be able to take a crack at it.

And so we started thinking about it a little bit and then immediately emailed you and Chris here at Madrona and said, “We need to talk to you right away.” Because we sort of felt like this was the convergence of the moment in time when there was the right kind of data to make this possible and the right kind of technologies to make it possible. And both Rob and I were really excited about trying something, taking a big swing, and trying something cool.

Joe: Maybe to take a click up in elevation. Today, there’s this Cambrian explosion of AI and bio. And I’ll maybe put “AI” in quotes here. There are lots of people who are building things that are probably equally well done by linear regression. But as you point out, there have been some amazing advancements in things like protein design.

There’s a Nobel Prize; lots of companies are now using this to develop drugs or to develop drugs in collaboration with the biopharma ecosystem. But you’re doing something different at Synthesize. I’m curious why do something different, but also how are you thinking about where this fits into the broader ecosystem of everything from people in your labs getting their PhDs and masters, all the way through big pharma trying to find the next blockbuster drug?

Jeff: When we set out to do this project, we didn’t think about solving one particular biological context. There are certainly specific problems we could have tackled, and both of us have in our academic careers: problems that are highly contextual to a specific disease in a specific set of people. But our goal from the start was to build a model that, in a similar way to large language models, would enable many different applications. And I think that’s one difference of why.

One way in which we distinguish ourselves from a lot of different groups is that we’re trying to build this broad-based foundation model with lots of applications. And so whether the pharma company of interest is interested in cancer, or whether they’re interested in neuro, or they’re interested in cardiovascular disease, our model has capabilities in all of those areas. And so for us, it’s less about one specific target and really building that foundation that lots of people can use to accelerate most, if not all, of science across drug development. And so we’re excited about putting it in people’s hands and seeing how they can try it out and use it in a lot of different contexts.

Rob: I think there are a lot of reasons from the computer science and machine learning literature to think that modeling the most diverse possible training data set will give the best results. It’s clear that with language models, this is the case. For a long time, people made really focused ML models, and they made progress. But it just turned out that taking the biggest corpus of text you could find, shoving it into a model, and letting it model that worked better.

Joe: Yeah. The bitter lesson.

Jeff: It’s pretty tough, if you were working on a very specific application, to then have these foundation models come in and just do better on all of the specific applications.

Rob: It’s incredible. Right? I think very few people would’ve predicted that if your goal, for example, is to help people with legal contracts, maybe you’ll do better by starting with modeling the entire internet.

Jeff: It was not intuitive that that was going to be the solution. And I think probably the same for us.

Rob: Exactly.

Jeff: If you care a lot about this gene in a particular brain of a particular human, it’s not clear that modeling the entire corpus of gene expression data ever collected is the right way to solve that problem.

Rob: Exactly. I mean, I’ve built various ML models throughout my career for my highly specific scientific problems. Some have been useful, some have not. But I would say none of them have made me say, “I can do something I couldn’t do before.”

Joe: Can you explain how this fits into how we think of biology and maybe the central dogma biology that we all learned in high school?

Jeff: To recall your high school biology: everyone has the same DNA sequence in all of the cells in their body, more or less. But there are little chunks of DNA sequence that code for something called genes. Those are encoded in RNA, and we can measure them quantitatively. Your heart cell is different from your brain cell because you have different expression levels of these genes, which ultimately get translated into proteins that do functions in your cell.

And RNA is a molecule that’s much easier to measure than something like protein, so there’s a large abundance of measurements of this molecule. When we were thinking about this idea, we were really focused on where the capacity is right now to generate the kind of training data that would enable us to build a foundation model spanning a lot of different applications. And for us, that was RNA.

And it helps that both of us have a lot of scientific experience there; both of our careers are built in that area. So that was where we focused at the beginning. But the nice thing about an RNA molecule is that it’s dynamic. It responds to environmental stimuli, it responds to drugs, it responds to what you eat in a day. You actually get a readout of biology from this molecule. So if we can model it, if we can generate data that looks like realistic data from humans, we get a real window into the biology of what’s happening in those humans.

Joe: Totally. Like real-time biology?

Jeff: Yeah, exactly.

Joe: And so this is an RNA model that you’re building. Where does this fit into your everyday experiments and the biology you’re trying to unravel at the end of the day?

Rob: I think it’s helpful to always think about analogies with other AI models. At this point, we’re all pretty familiar with large language models, right? And these are generative AI models in the sense that you ask them to do something and then they generate something for you. You give a large language model a prompt, you ask it to do something, and it creates a bunch of text. And that’s really useful; a lot of us communicate that way.

We write lots of text. So we wanted to do something like that for biology. What scientists, what biologists do most of the time is generate data and then analyze it. We do experiments. We might run a clinical trial, and then we get the data and analyze it. So we wanted to build the analogy of an LLM, but for what biologists actually do every day, which is to run an experiment, generate data, and use that data to inform the next one.

Joe: And so in building this platform, how do you think about the problems that need to be solved? What can’t be done today that is going to be possible in this new world of Synthesize?

Rob: I think about my own personal scientific experience as a person working at the computer, at the bench, running a lab, et cetera, where I, and people like me, people like Jeff, we’re constantly faced with the need to make decisions when we don’t have enough data. And sometimes that’s because we just don’t have time to get the data. Sometimes it’s because it would cost so much money that it’s not possible, but a lot of the time it’s because the data is just not reachable.

There’s no way we can get it. Now imagine, for example, somebody’s developing a drug to treat neurodegeneration and that drug acts on cells in the brain. There’s no way that you’re going to be able to look inside and see what’s happening in the cells in a patient’s brain who’s taking this drug. But we need this information to make a decision. And so scientists are constantly faced with this impossible task. We have to make decisions. Do we proceed with the drug development when we just can’t get the data that we need? And so we wanted to build a model that would let us get these data.

Joe: I quite like it when all my brain matter stays in my brain. So-

Jeff: That’s the right place for it to stay. Yeah. Exactly.

Rob: Even if there are ethical challenges, you might not want to participate in this experiment.

Jeff: That doesn’t make it less important. It’s such a critical piece of understanding how a drug might function. A lot of these experiments are ones we want to do but can’t. An example from my background: the very first data I ever analyzed came from a study where they randomized patients to get either endotoxin or saline solution. Endotoxin is a horrible thing to get; it makes you really, really sick. So people were randomized and had to wait and see whether they were in the control group or the get-sick group.

But they could only do it on a very small number of people. And they were trying to study the genomics of blunt force trauma. And so this is something that’s pretty hard because you can’t put people in car crashes. So we could do it only at this very small scale where it was just a few people that got randomized to get this really bad intervention. But if you can do that experiment in a model instead of doing it in a human, we could do it for hundreds of people, thousands of people.

There are no constraints on the experiments you can do. And similarly, if you can sample people’s brains, if you can sample all the tissues in their body, you get a much more comprehensive view of what’s happening in response to disease, to trauma, to treatment. And you can do it at a scale and speed that’s really hard to pull off in a traditional laboratory experiment.

Joe: I think, as you’re right to point out, these are impossible problems. These are inaccessible samples; these are experiments that are unethical to run. Was there a light bulb moment for you where you thought, “Wait a minute, I think I can actually simulate these things”? It’s maybe not intuitive that this would actually work.

Rob: That’s an important question and something that we thought about a lot. We’re both scientists, and scientists spend decades being trained to be

Joe: Skeptical by nature?

Rob: very skeptical. Very rigorous. So we really started out by thinking not about how to make the best model possible, but, if we had a model, how to test whether it’s working. What are the ways we can assess whether it’s doing something useful? Is it producing data that’s meaningful, that would actually be useful to people like us? And we played this game, this is Jeff’s idea, a great idea, where we started taking the model and simulating an experiment.

We would have the model generate data for an experiment, and then we’d put that data next to data from a parallel lab experiment. So data either from a scientist doing an experiment in cell culture, in petri dishes, for example, or from our AI model doing the same thing, but of course in seconds or minutes instead of weeks or months. Then we’d put those data together, and Jeff would send them to me and say, “Can you tell which is from a lab and which is from the AI?” For a long time, it would take me one or two seconds to say, “That’s the AI data.” But there came a time when I couldn’t tell.

Jeff: And the story within the company is that this is reinforcement learning with Rob feedback. He was one of the best people at picking out which data set was the AI data and which was the lab data. Once we could get it past Rob, we thought, “Okay, we’re onto something here. We’re at a point where these data really look like what you would get from an experiment. They’re indistinguishable.” And a really important point, which is actually Rob’s point, is that when you’re measuring machine learning models, you usually do it with bulk measurements.

You’re measuring overall accuracy, root mean squared error, things like that. But when you’re measuring a biological foundation model, it’s about what one gene does in one environment, in one context. So the way you measure errors isn’t in bulk; it’s by looking at this particular receptor in this particular tissue under these conditions. We would look at areas very specific to Rob’s research and have him look for the exact genes that should be turned on and off in the right context. If you’re savvy about this, you’ll be able to detect pretty quickly whether the models aren’t accurately describing the whole distribution of what’s going on.
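To make that distinction concrete, here is a small sketch of why a bulk score can hide a gene-level failure. Everything here, the matrices, the noise levels, "gene 42," is invented for illustration; this is not Synthesize Bio's actual evaluation code:

```python
import math
import random

random.seed(0)

# Hypothetical expression values: 200 genes x 20 samples, one matrix
# from "lab" experiments and one "generated" by a model (all made up).
n_genes, n_samples = 200, 20
lab = [[random.gauss(5.0, 1.0) for _ in range(n_samples)] for _ in range(n_genes)]
generated = [[x + random.gauss(0.0, 0.1) for x in row] for row in lab]

# Suppose the model badly mispredicts one specific gene (index 42 here,
# standing in for, say, a receptor expressed only in one tissue).
generated[42] = [0.0] * n_samples

# Bulk metric: overall RMSE across all genes and samples still looks small.
sq_errs = [(a - b) ** 2 for la, ga in zip(lab, generated) for a, b in zip(la, ga)]
overall_rmse = math.sqrt(sum(sq_errs) / len(sq_errs))

# Per-gene metric: the error for gene 42 stands out immediately.
def gene_rmse(i):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab[i], generated[i])) / n_samples)

worst = max(range(n_genes), key=gene_rmse)
print(f"overall RMSE: {overall_rmse:.2f}")  # small; hides the broken gene
print(f"worst gene: {worst}, RMSE: {gene_rmse(worst):.2f}")
```

The aggregate score barely moves while the per-gene view flags the broken gene at once, which is why a domain expert checking specific genes in specific contexts catches failures that bulk metrics miss.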

Rob: And I think this is a really important point, and it’s both a challenge and an opportunity, right? The challenge is that in order to build an AI model, train it, do inference, all these kinds of things, you need these aggregate statistics that describe how well the model is recapitulating the kind of whole shape of your data. That’s how you train a model. That’s how you assess it. But at the same time, exactly like Jeff was saying, much of biology, maybe even almost all of biology is about highly specific things. Right?

I’m an RNA biologist, but what I really know about is a couple of genes, and I know how those genes interact. They make proteins that interact with a couple of other proteins, and this is my area of expertise. The same goes for most other biologists, and the same is true for drugs: drugs ideally tend to have a few specific targets they act on. The same is true for physicians treating patients; they specialize in specific areas.

And so we had to kind of merge these two goals, right? To have a representation of the whole shape of biology, of gene expression data, all these experiments that people have done while also making sure that we captured all the fine details. Because I can tell you that for me as a scientist, if somebody comes to me with a machine learning model and says, “This represents all the data really well. Look at my statistics” and then I look at the one gene that I’m an expert in and it doesn’t seem to understand what that gene is up to, this model is not useful to me.

Joe: That’s exactly what I did the first time you shipped me over the model. I was like, “I’m going to plug in the experiments I know.”

Jeff: I remember that. You sent us back exactly your area of expertise.

Joe: Here’s the genes I want to see.

Jeff: Exactly, these are the genes I want to see.

Rob: That’s exactly right. Maybe it’s like having a large language model that produces text that looks pretty good, but there are four words it always misspells. We, as users of language, are going to notice that and fixate on it.

Joe: So I want to come back to the data, but first I have to ask: why a company? You’re both professors. This is your bread and butter; you could build this and get some amazing papers out. Why do this inside a company as opposed to just saying, “Hey, my lab now does Synthesize”?

Jeff: That goes back to the email I sent you and Chris right at the start. This was such a cool idea, and we wanted to get going on it immediately. We didn’t want to have to wait. There are many amazing things about being an academic researcher, but being able to capitalize on a big-swing idea on a very short time scale is a hard thing to do, just the way the systems are set up. We wanted to move really fast and go really big, and it felt like the best context to do that was a company. I feel like that’s what drove a lot of our interest in moving this way. What do you think?

Rob: I totally agree. Velocity and scale. We sent you the email, we had some conversations, and then we were going. We were doing the things. That’s exactly what this needs. And the second point is scale. Like Jeff mentioned, we’ve done a lot of things in our careers and it’s been really awesome, but we want to do something that’s going to affect a lot of scientists, maybe all scientists in biology. And to do that, we need scale. We didn’t want to model just a couple of gene expression experiments. We wanted to model every one we could access.

Joe: Yeah. I think this rhymes with a lot of what we see on the tech side, where there is a moment right now. Is there an inflection point specific to biology? And a two-parter here: Has life sciences and biopharma had its ChatGPT moment, or is that still around the corner? Have we truly had the aha moment as a field?

Jeff: To answer both questions at once, I don’t feel like we’ve really had our ChatGPT moment, in the sense that there haven’t been a lot of these models deployed in a way that anybody could use them, access them, and build on top of them. Even people who are building something akin to a foundation model have tended to do it inside a single company and not share it with other groups. So there haven’t been as many swings at foundation models that anybody can use, except in one space: the protein structure and protein design space, where it feels like there has been some of that work.

Some of our colleagues work in that area, and in a similar way they’ve built models on top of open datasets, and we’ve seen an explosion of interest as they’ve made those models available. We feel the same thing is possible in all the downstream consequences of biology, past drug target identification with proteins and things like that, and we’re excited about contributing to it. And I think that moment is coming, because huge collections of data, supported by federal funding and lots of other organizations, are available, and now there’s an opportunity to capitalize on doing the same kinds of things that were done with large language models. That’s certainly been our approach to this problem.

Rob: I think there are even closer analogies to be made with large language models. The thing that I find so inspirational about these large language models that we have now is not just that they can do things that I can do really well. Right? I mean, it’s cool that they can write text, this is very useful to me, but they can do things that I can’t do and I never can do. Right? They can translate between any two languages instantly. They can program way faster than any human ever can.

They can do these things that are just beyond human capabilities right now and are probably never going to be within the scope of human capabilities as we understand them. And this kind of by analogy, what I find equally inspirational is — it’s amazing that the protein structure problem has, in many ways, been at least partially solved. I think that’s incredible. But what I find truly inspirational is protein design, making novel proteins that didn’t exist before, that don’t occur in nature. And I think we can do the same thing in other areas of biology. That’s what we’re trying to do here for gene expression.

Joe: That comes back to the data, where there’s no internet to scrape for biology. There are lots of papers out there, but it’s messy. Where do you think the field needs to go on data? Why do you think there is sufficient data to build what you’re doing at Synthesize, and what is the data foundation you’ve been working on for over a year now to get to a generative model for biology?

Jeff: I think this is where picking the right molecule is so important. Of the molecules in the central dogma of biology, RNA is the one that’s been measured in the most conditions and studied in the most contexts. So there was a real opportunity to capitalize on the fact that the field in general has measured the experience of humans in a variety of different contexts and measured their RNA. While there isn’t an internet to scrape, there is a huge collection of existing experiments. The big challenge, though, is that it’s other people’s data, experiments that other groups have done.

They aren’t normalized and synthesized to work together in one specific context. So it’s a huge amount of both intellectual work and engineering work to bring all these data sets together and set them up in such a way that you can actually train a model on top of them. We’ve been capitalizing on our team’s ability to bring together a large collection of data sets, and their expertise in synthesizing and normalizing the metadata, so that the descriptions of those experiments are common and unified across thousands and thousands of human experiments. That lets us build models that understand the contextual representation of gene expression across different conditions, tissues, and treatments.

Rob: I think here we really have to give a shout-out to Jeff, who saw, maybe not the exact use of training generative AI models, but certainly the importance and potential of creating harmonized, standardized data sets a long time ago.

Jeff: My lab has been doing that for a while, and I didn’t realize it was going to be a training set when we started. We were normalizing and synthesizing data largely for reproducibility, for helping scientists do their work. And that work was where we built our original prototype: it was built on data that I had developed in an academic context.

We were able to use those to build our first prototype of the model that we showed to you when we got together. Ultimately, our team has gone wildly beyond where we were when we started with that data set, especially on the side of normalizing and harmonizing the descriptions of the experiments. But yeah, that was part of the reason why we had the aha moment: we had already been thinking about these big collections of data that we were using in an academic context.

Rob: One of the things that’s been interesting about building this big proprietary data set, where we have paired gene expression experiments and this highly curated metadata we put together, is that we can see really unexpected things. One thing we noticed is that a surprising fraction of experiments are closely related to ones that our model understands well from seeing them in the training data. This is kind of getting into the weeds, if you’ll bear with me, but we’re scientists, skeptical, et cetera. We wanted to validate our model, and so we thought, “Okay, we want to predict future gene expression experiments, so let’s validate it doing that.”

So we picked a date, and that was our training-data cutoff: all data in the public domain generated before that date we trained on, and everything generated subsequently by scientists and then deposited in public archives we called our validation set, or test set. We never looked at that data; it was totally held-out data, and those were future experiments for the purposes of our model. We can go into how well our model did on that data, and I think it did very well and surprised all of us. Really kind of amazing.
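The cutoff-date validation Rob describes is straightforward to sketch. Here is a minimal illustration in Python, assuming hypothetical experiment records with GEO-style IDs and deposit dates; the field names and values are placeholders for illustration, not Synthesize’s actual schema:

```python
from datetime import date

# Hypothetical records of public gene expression experiments,
# keyed by an illustrative GEO-style ID and their deposit date.
experiments = [
    {"id": "GSE001", "deposited": date(2021, 5, 1)},
    {"id": "GSE002", "deposited": date(2023, 2, 10)},
    {"id": "GSE003", "deposited": date(2024, 8, 3)},
]

CUTOFF = date(2023, 1, 1)  # training-data cutoff chosen in advance

# Everything deposited before the cutoff is training data;
# everything deposited on or after it is a true "future experiment"
# holdout that the model never sees during training.
train = [e for e in experiments if e["deposited"] < CUTOFF]
test = [e for e in experiments if e["deposited"] >= CUTOFF]
```

The appeal of this split is that the holdout is not just unseen data, it is data that did not exist when the model was trained, which makes the evaluation a genuine forecast.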

But the point I was going to make was about metadata. One thing that’s really interesting is that, because we created all the metadata, we could look at statistics aggregated across experiments. And one interesting thing to note: approximately 95% of all experiments conducted after our training-data cutoff date were either in biological contexts, say primary tissues or cell lines, or involved specific perturbations, like small-molecule drugs, biologic compounds, gene knockdowns, or CRISPR perturbations to genes, that we’d seen before. So the key point is that 95% of all future experiments were in a domain that was very close to our training data, where we had extremely strong reasons to believe our model not just might perform well but should perform extremely well.

Joe: Can you maybe give a specific example of how you, or someone in your labs, or someone in the biopharma ecosystem would use this? I think putting a concrete example together would be useful for people.

Jeff: I’ll say from my lab. My background is in biostatistics, that’s where I got my PhD, so I end up helping people design studies all the time, whether those are clinical trials or preclinical studies or just research studies. In all of those, you have to figure out what sample size to collect, which population to look at, how to sample. There’s a wide variety of questions, and usually, you’re just making it up. You’re trying to figure out what it might be, and you’re gambling on a lot of assumptions. Now, we don’t have to make those assumptions. We can just generate the data under all these different circumstances and then pick the design that’s going to maximize our chance of success. So I think this is going to accelerate a lot of things where you have to design studies in advance, and we can now get a sneak preview of what the study’s going to look like before we ever do it, which we couldn’t do before. So it’s really exciting.
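Jeff’s point about simulating designs instead of guessing can be illustrated with a toy Monte Carlo power calculation. This sketch assumes a simple two-group comparison with normally distributed noise and a 0.5-SD treatment effect; in practice the simulated measurements would come from a generative model rather than `random.gauss`, and the designs compared would be far richer than sample size alone:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

def estimated_power(n, effect=0.5, sims=2000, alpha_crit=1.96):
    """Monte Carlo power of a two-sample comparison at sample size n.

    Treatment shifts the mean by `effect` (in units of the noise SD).
    We declare a 'hit' when the difference in group means exceeds a
    rough large-sample critical value for a difference of two means.
    """
    crit = alpha_crit * (2 / n) ** 0.5  # ~5% threshold for the mean difference
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        diff = statistics.fmean(treated) - statistics.fmean(control)
        if diff > crit:
            hits += 1
    return hits / sims

# Compare candidate designs before running anything in the lab:
# power rises sharply with sample size, so the simulation makes the
# trade-off between cost and chance of success explicit.
designs = {n: estimated_power(n) for n in (10, 30, 100)}
best_n = max(designs, key=designs.get)
```

The same loop structure works whatever the generator is: swap the `random.gauss` draws for model-generated expression profiles and the "pick the design that maximizes the chance of success" step Jeff describes falls out directly.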

Rob: I’m going to give an answer that requires really going into the weeds.

Joe: I love getting in the weeds.

Rob: One of the things that we’ve done that is super technical but also super exciting is that we’ve developed a model that lets you not just specify an experiment and then generate the data that results, but also add in data from a lab or a clinical sample and then see what might happen if you modify it. What that might actually look like, for example, is you could take an experimental description, say a sample of a particular cancer treated with a drug of interest, and then simulate the gene expression result. That’s something our model can do that we’ve been talking about. The new thing, the in-the-weeds technical thing I’m talking about, is that we can also give it information about what that sample might look like without the drug.

And this is super interesting because that information could, for example, come from a biopsy of a patient who has an active cancer, where we’re trying to figure out which treatment course is best. We can give that information to the model, and then it can give a patient-specific prediction about the effect of the drug. That is what I see as totally transformative. As you know, there are lots of academics and lots of companies trying to do precision medicine, and these efforts are incredibly important and incredibly exciting. But I think our contribution is going to be that we have this huge model that can model essentially anything, and that we can also tailor the results to one person, to one sample.
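The reference conditioning Rob describes can be caricatured as a baseline profile plus a model-predicted shift. The three genes, the additive-effect assumption, and every number below are purely illustrative stand-ins, not how the actual model works:

```python
# Toy sketch of reference conditioning: the model supplies a predicted
# per-gene treatment effect, and a patient's untreated baseline profile
# anchors the prediction to that specific sample.

# Hypothetical log-expression values for three genes.
population_baseline = {"TP53": 5.0, "MYC": 7.2, "EGFR": 6.1}
model_drug_effect = {"TP53": +0.8, "MYC": -1.5, "EGFR": -0.2}  # predicted shift

def predict_treated(baseline, effect):
    """Prediction for a sample: its untreated baseline plus the modeled shift."""
    return {gene: baseline[gene] + effect[gene] for gene in baseline}

# Without a reference, the model can only answer for an 'average' sample:
generic = predict_treated(population_baseline, model_drug_effect)

# With a biopsy-derived baseline, the same predicted effect is applied
# to this particular patient, yielding a patient-specific result:
patient_biopsy = {"TP53": 4.1, "MYC": 9.0, "EGFR": 6.0}
personalized = predict_treated(patient_biopsy, model_drug_effect)
```

The point of the caricature is the shape of the interface: the generative model handles the treatment, and the reference sample localizes the prediction, which is what makes the patient-specific use case possible.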

Jeff: I think that kind of speaks to generally the approach that we’ve taken, which is we want to build these big foundation models that allow you to tackle many different applications. The two things we just talked about, if you go to talk to scientists and say, “Are these two things related to each other?” They’re super, super far apart.

They’re totally different academic disciplines. You would be talking to totally different humans. But our sort of underlying foundation model enables both of these kinds of applications and many others. And so if you ask me what I’m most excited about, it’s actually the things neither of us has even thought about yet.

It’s the grad student who’s staying up late, trying to figure out how to solve their problem, who applies this and moves forward a whole field that didn’t have answers before. We don’t know what all of those applications are, but that’s what’s so exciting about this to me: sure, we can come up with our own ideas, but I’m excited to see what other people come up with.

Joe: On that, I guess let’s fast-forward 10 years. What do you envision the new lab is going to look like with tools like you’re building in the hands of every scientist having this out there for anyone to use? Where do you see this being game changing? How does it impact not only the day to day but I’m going to call it the year to year for the biopharma ecosystem?

Rob: Well, we’re really different scientists, so maybe we can each give our own answer. For me, the future I’m excited about, the one I want to help build, is a seamless blend of what we’re calling generative genomics with wet-lab experimentation and clinical trials. There needs to be a constant flow of ideas, data, everything between these different areas, so that a scientist in 10 years, or hopefully, since we’re trying to move quickly, in two or three years, can do something like simulate a drug screen using our model in an hour on their computer.

Use that to choose a cell line where they’re going to do an experiment in the wet lab. Get those results back, and then use those to inform, “Okay, we actually need data from this other system,” using the AI model, et cetera. So there’s this constant interplay between different sources of information and data.

Jeff: My lab is largely computational, so I don’t have a wet lab like Rob does. For me, it’s really empowering for all the students who work with me. We typically have to form collaborations with people like Rob to generate these data; sometimes that’s the only way to do a kind of experiment, and sometimes there are creative ideas that we just can’t find the right collaborator for. So this is empowering students and postdocs and staff research scientists to try experiments that they might not have the right collaborator or the right funding for, and to pursue the kind of wild idea that would be tough to pursue otherwise.

So I’m really excited about that enablement. The second thing I think about a lot is how many bets we make. We bet people’s time and resources, and those bets are often made on relatively little information. You read the literature, you know what your friends are working on, and you’re gambling that this next idea is going to be the right idea and that the experiment is going to work out.

And as a person who collaborates with lots of different labs, I’ve seen firsthand how many of those bets don’t pay off and what the consequences are for science, both for speed and for the people actually working on those projects. It can have a huge impact on their careers and their lives. So I’m really excited about increasing the win probability on every science bet that people have to take. They get to see a little bit in advance; they get to make better bets. Even if we increase that by 15, 20, 30 percent, that’s a lot of resources and a lot of speed that we’re buying for the whole field.

Rob: Our vision is that what we’re building can be used throughout the research chain and the drug development value chain. The examples we just gave are of basic research, or maybe translational research, but we aspire for our models to be equally useful in a clinical setting. Earlier on, I was talking about how scientists have to make these impossible decisions, right? You have a certain amount of data, you’re not going to get any more, it’s not enough to know, but you’ve got to make a call. And I think a great example is clinical trials. Phase one trials are designed to test drug safety. That is their purpose.

Nonetheless, if you can get any information on efficacy, that is going to be very useful. So people find themselves using a trial that was not powered to make efficacy statements, and you’re kind of reading the tea leaves: are there efficacy signals? Is this going to inform the decision we make about moving forward? Which is a very important decision, because if you move forward with one trial, you probably won’t move forward with another one, right?

It’s not just about that one drug; it’s about all your shots on goal. You could imagine, and we’re actively working on this, taking our model and conditioning it, this is the reference conditioning, the in-the-weeds thing I was talking about earlier, on the results from your small, say N-equals-12-patient phase one trial, and then inferring what the results would look like if you’d run a fully powered trial with hundreds of patients. Now, of course, this isn’t the same as running a trial that costs a hundred million dollars, but it’s a lot more information than you had before. And it’ll-

Jeff: And cheaper too.

Rob: It’s a lot cheaper. And it’ll help you make a better decision.

Jeff: And if you’re going to make a $100 million bet, you might want to have some information before you make that $100 million bet.

Rob: Right? I mean, that would just be so useful to get more information to say, “I have a little bit more confidence now in either dropping this program or doubling down.”

Joe: Rob, Jeff, I really appreciate the time today. I want to leave the last minute to you. You’re building something for scientists that they can pick up today. Where can people go to learn more about Synthesize, to get access to your models, and to start building the future of science, empowered by a foundation model?

Jeff: First of all, thanks for having us. We really appreciate it, and thanks for being such great supporters in general. It’s been amazing to work with you and Chris and Matt and the rest of the team here at Madrona. People can go to Synthesize.bio and access our models directly through our web platform.

We also have API access in R and Python, which is where a lot of computational biologists live. So they can go access those today, and they can read our preprint about our GEM-1 model, which is online right now, to see how we’ve carefully evaluated our results and made sure we’re being skeptical scientists.

Rob: Just to double-click on that, we really want as many people as possible to use our models. We’re making them available for free right now so that anybody, regardless of where they are or what they’re doing, can experiment with our models, see where they work well for them, and let us know if there are areas where they don’t. Because one of the cool things, one of the opportunities, about a model like ours is that it can be used for almost anything in biomedical research.

And so we’re still figuring out the areas where there’s going to be the most transformative impact. Just like with LLMs: five years ago, I wouldn’t have told you that LLMs would revolutionize programming. I don’t think anybody would’ve said that. And we’re trying to figure out the areas where we’re going to see the biggest increases in velocity and scientific power from using the model that we built.

Joe: That’s amazing. I’m super excited. I think we’re in the very early days of this, and I’m really excited to see where it goes. Like I said, it’s not a ChatGPT moment yet, but I think this is going to be impactful for the future of drug development. So thank you both so much for coming on today, and I look forward to continuing to work with you.

From $1M to $10M ARR in 6 Months: How Fyxer is Winning AI Productivity

 

Fyxer.ai is an AI-powered assistant that helps users reclaim hours of their day by managing their inboxes, writing replies, and taking meeting notes. But Fyxer didn’t start with AI. It started with a decade-long executive assistant agency that gave its founders a treasure trove of data and insight, which became the foundation for integrating AI.

Fyxer CEO and Co-Founder Rich Hollingsworth joins Madrona Managing Director Karan Mehandru on this episode of Founded & Funded to share how Fyxer went from $1M to $10M ARR in six months with just 30 employees.

In this episode, Rich shares:

    • Why great startups start with real-world pain
    • How there is still room for disruption with email
    • Why quality data matters more than quantity
    • The PLG-to-enterprise playbook that actually works
    • The rowing team mindset that drives their decision-making

Whether you’re a founder, operator, or investor, this is a masterclass in AI execution, startup focus, and scaling fast without breaking.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Karan: You and your brother Archie didn’t just stumble into this; you spent years in the trenches of this core problem before you made an AI product. So, walk us through that journey. What did you learn? What did that teach you, and how did that experience of owning an EA agency inform the blueprint for Fyxer?

Fyxer turned years running an EA agency into AI productivity tools for email and meetings — and hit $10M ARR just 6 months after launching. Launched by brothers Richard and Archie Hollingsworth.

Richard: The intention behind starting the EA agency, which we did in 2016, was to use it as a platform to build the AI solution, but we recognized the technology wasn’t there at the time. When GPT-3 came out, we realized the technology was available and we were ready to go. We got together with our third co-founder, Matt, who’s our CTO, and drew from our executive assistant agency a kind of data pool and a series of customer insights that helped inform building the product, and eventually got us product-market fit almost instantly when we released it.

I’d say the main takeaway we believe about the market is that most people really underestimate the role of an executive assistant. They think it’s about executing tasks: book me this meeting, schedule this appointment. Whereas in reality, the value of an assistant is determined by how proactive they can be for their boss. So building a memory and a knowledge of the customer is the key to unlocking real value for the user. We took that thinking from day one, and that’s why our product is, A, built within the user’s existing workflow, and B, built across the workflow. There are lots of point solutions in the market. There are very few products that are a full suite, and we saw that as an essential route to unlocking the real proactive value of an assistant.

Karan: When did you know that you hit product-market fit? What did it feel like, and what advice would you have for other founders who are still trying to find that first wedge that takes off?

Richard: So, there are two moments that I think about. One was after launching the product, we moved to San Francisco for four months, joined an accelerator, and during the course of that, we grew by eight times in four months.

Karan: Which is where we first met.

Richard: Exactly. I think looking back at it, we had hit product-market fit, but at the time, it didn’t feel like that. We were just pissed that we didn’t hit 10X, to be honest. The real moment that I felt it was at the beginning of this year, when I was speaking to a user, and they told me that we had saved them from getting divorced that year, and that’s when I knew that we were onto something.

Karan: So, as you look back, and you see this explosive growth that Fyxer has seen, what do you think are the scaling risks or missteps that you think were crucial learning moments for Fyxer, and potential pitfalls for other founders to avoid?

Richard: I think the biggest thing we’ve gotten wrong, and I see it happening a lot, is people spend too much time planning for what happens if their plan doesn’t work, rather than thinking about what happens if their plan works. At the beginning of this year, we grew from $1 million to $5 million in ARR in two and a half months, and we hadn’t planned for that outcome well enough. So we saw, for example, our customer support response time go from five minutes to five hours, and we found lots of similar scaling challenges in the process of growing so quickly. That’s a mistake that we won’t make again.

Karan: I can certainly resonate with the thinking around what if things go wrong versus what if things go right. So, that’s sort of what venture capital is all about, too, because we have to believe in the art of the possible. That makes sense to me. So, as you look at Fyxer today, you’ve got capital, you’ve got resources, you’ve got a huge market, and so you almost have infinite choices in front of you, and where you can focus your energy, and your efforts. How do you make the decisions on where you prioritize, what you prioritize? How do you focus the team on the things that potentially matter, the things that you need to do today versus save for another day? What is the framework that you use?

Richard: So, I think this is where we’re exceptional. We have the fortune of having a map from our EA agency, so we know exactly what we’re building, and not only that, we know exactly how to build it, because we can see the workflows that have worked historically, and we copy them wherever we can. The mentality we have within the company comes from the British rowing team training for the 2012 Olympics; they eventually won the gold. The challenge posed to anyone who suggested doing something new was: does it make the boat go faster? We obsess about our end user, who is a 55-year-old real estate broker in middle America, and our job is to make their email experience better. If the new tech idea, or the new direction people want to take the company in, or the new user they want to serve doesn’t do that, then we just don’t do it.

Karan: I love that. I love the analogy of the rowing team, too. One of the things I’ve noticed, now being involved in the company with you, is that you’ve balanced the bottom-up PLG motion with a top-down enterprise motion. You’ve got a lot of free users that convert into paying users, but then you’ve also got some really large customers like EXP and others. How have you done that? It’s typically very hard for a startup to manage and get both of them right: how do you manage the bottom-up PLG usage versus the top-down enterprise sales cycles?

Richard: We knew this wasn’t optional. It’s essential that we capture enterprise deals if this is going to become a billion-dollar-revenue company, not just a billion-dollar-market-cap company. The vehicle to do that very quickly at scale is the PLG motion: individual users at enterprise companies sign up for the product, we build critical mass in the organization, and then we take it to their leadership to sign an enterprise deal. We’re fortunate that in our founding team we have that skill set; my co-founder Archie ran that motion from day one, and we managed to land a $1.2 million contract within our first six months. But it was all generated from an individual user signing up, the product expanding across the company, and then us capitalizing on that opportunity.

Karan: In some ways, the North Star, the ethos around product experience and onboarding, really hasn’t changed that much. It’s just the nature of the deals that are coming in, both bottom-up and top-down.

Richard: Yeah, we think about the user in both scenarios as exactly the same person. That 55-year-old real estate broker in middle America could be working for a large organization or could be an individual broker, but the buyer of the software is different. And so there’s a sales motion and an onboarding experience around that which we’ve had to allocate time for.

Karan: So, I don’t think you can go a day these days not reading, or hearing about how AI is disrupting everything, how it’s innovating, and how AI has permeated every sector of society. So, if you think about all of the stuff that you hear, and read about AI, and specifically related to the market that you’re going after, which is productivity, what do you think most people get wrong, or what do you think most people believe about the future of AI in your space that you don’t necessarily believe?

Richard: The biggest one, I think, when I look at what’s happened since GPT-3 launched: everyone thought that by now email would be a solved problem, that AI would’ve taken this off our hands. And the reality is that just hasn’t happened at all. I remember when we started Fyxer, everyone said, “Oh, Google are going to do this really well. Microsoft are going to do this really well.” But in the current competitive landscape, there are no startups I know of that have gotten over a million in revenue, and as for Google and Microsoft, I don’t know a single person who likes the products they’ve put out. Our view on why that is: people believe that the quantity of data is what gets you the best results from the model, not the quality of the data you train it on. We have over-indexed on quality, and I would say that’s the biggest differentiator we have versus the rest of the market.

Karan: Hearing you speak about email reminds me of that Mark Twain quote: the rumors of my death have been greatly exaggerated. People have been talking about the death of email for years now, and it just hasn’t happened. It’s in fact gone in the opposite direction, where we’ve gotten bombarded with emails. Let’s come back to Fyxer’s growth. It’s very rare to see a company go from one to 10 in six months, and I know what aspirational plans you have for the company this year. As you think about hiring and building a team that can capitalize on the opportunity in front of you, how do you think about hiring? How do you think about the people you bring in to Fyxer in the context of the growth plans you have and are seeing in the company? Are there frameworks that you use? Are there specific skill sets you look for? Are there specific types of people that you’re bringing in to Fyxer? Who are the folks that work well versus not? So, talk a little bit about your hiring strategy.

Richard: Yeah, I think hiring is the one place where we really invest in looking at the long term as much as possible. Day to day at Fyxer, we’re obsessed with execution, and we’re focused on how we move the company forward in the next day, the next hour, and the next minute. And we focus the company very strongly there. With hiring, though, we try and look out as far as we can, and I think about it across two dynamics. One is, if we’re hiring an exec, I want them to have been there and done that before. So all of the folks we’ve hired into the most senior roles have been part of businesses that have scaled to where I see Fyxer going over the next three years. But the crucial component is they need to be willing to do the very unglamorous work.

So when, at the beginning of this year, we raised our Series A, we had a team of only four people, and I had the opportunity to build this with a blank canvas. The first people I hired were an exec team, which really was quite an unusual, unorthodox approach. With that, it meant we could have senior leaders build out their entire teams, which is fantastic, but it required all of them doing the grunt work, if you like, in their first three to six months. And them wanting to do that is the most important component of them getting the role. On the other side, we like to hire people with less experience but with a real growth mindset. So, hiring for the slope rather than the intercept, which I know is a mental model that you have. We very much lean into that, because we like the naivety, almost, of people not having a playbook for how to do something, of thinking as much as possible from first principles.

Karan: Yeah, I think that’s great, and I love that, and I can see it applying to so many realms, including your work and my work, where pattern recognition and having seen the story helps you for the first, call it, 70 yards, and then that healthy level of naivety and first-principles thinking is where the real magic happens. And I’ve seen that pretty much every day in my interactions with you and the team. So, kudos to you on living that philosophy. Let’s go back to something more personal. Not many siblings would even think about starting a company together, yet you and Archie have done that, and done it successfully. How has that relationship helped your journey as a founder, and how did you meet Matt and bring him on as your third co-founder? Walk us through a little bit of that personal story.

Richard: Archie and I actually grew up on a farm. The nature of a family farm is that work and family life are intertwined very, very closely. So it always felt natural to us that we would work together, particularly because we have very complementary skill sets. He is a born-and-bred salesperson, and in my role as CEO, I’m a real generalist, so it just fits very nicely. He also has a very low ego, so he doesn’t mind his older brother being his boss, and I think that’s quite a key component of making the dynamic work. As for the journey: in the process of building our executive assistant agency, we knew that we wanted to build it into a tech company, and Archie met Matt, actually at a poker game, maybe four or so years ago, and Archie was convinced that Matt was the person who should help us build that. The reason is that he’s a born-and-bred product engineer, so he has a 360-degree view of the engineering team, and we knew that he was very sought after.

So, we knew that we needed a very, very compelling pitch to persuade him to join us. This is someone the former CTO of Stripe described as the most impressive future CTO he’d ever met, so we knew we needed something really compelling for him. The way we did that was by taking what was a 500,000-hour time-tracking data set from our executive assistant agency, which could tell Matt exactly which key workflows to build, and then we had our assistants record their screens as they worked so that he could map the exact workflows. So our pitch to him was: join us for instant product-market fit. Join us because you have a CEO with experience in company building, and a head of sales with experience delivering multi-million-dollar enterprise sales contracts in this market. That’s what convinced him that we were the right people.

Karan: Having personally interacted with both Archie and Matt now, I feel like I definitely resonate with a lot of the comments that you made. Matt strikes me as one of those people who is a man of few words, but when he speaks, people listen, and so do I.

Richard: Yeah, that’s very fair.

Karan: And then Archie, I’ve often felt if there was a picture next to the word grit in the dictionary, it should have a picture of Archie on there. Great. Well, to finish us off here, how about a few rapid-fire questions, if that’s okay?

Richard: Yeah, sure.

Karan: All right. Well, what’s the one thing you’d wish you started doing earlier as a CEO?

Richard: I wish we had hired people earlier. We pushed it too far at the end of last year, when the four of us at the time, the three co-founders and one employee, were working seven days a week, 18 hours a day, and I felt like we paid the price for that towards the end of the year, when we didn’t capitalize on opportunities that we could have.

Karan: Got you. This goes back to your earlier point of planning for success versus…

Richard: Yeah, yeah, that’s a good point. Yeah.

Karan: How do you see the AI system category evolving over the next two to three years?

Richard: I see startups winning it, and incumbents continuing to flounder. There’s so much disruption being caused by AI, redrawing all of the existing product categories we have, that the incumbents can’t help but protect their core businesses. As a result, they’re going to be unable to innovate to the same degree that startups can, particularly when there’s so much market pull that you’re able to grow a business much faster now than has historically been possible.

Karan: Definitely attest to that. What’s one founder skill that you find yourself learning right now? What are you working on in yourself?
Richard: We're fantastic executors. We obsess over doing the unglamorous work. We need to get better at promoting ourselves, and that's one of my main goals for this year.

Karan: Well, welcome to this podcast.

Richard: Exactly.

Karan: Other than Fyxer, what is the one other AI tool that you and the team can’t live without?

Richard: I think this is probably the answer for 95% of people, but honestly, ChatGPT. Our team relies heavily on tools like Cursor, but the one they can't live without is ChatGPT. It has found its way into everybody's workflow and become more and more valuable, over the last year in particular. I've tried all the others, Perplexity, Claude, etc., but I just haven't found something that works quite as well as ChatGPT.

Karan: I’m a huge fan of this podcast, Invest Like the Best, and they always end the podcast with this question. So, I’m going to ask you the same thing, which is, what is the kindest thing that somebody’s ever done for you?

Richard: I probably just think of my wife. She’s done lots of nice things for me over the years. Last year, she moved our life, including our newborn son, to San Francisco at the drop of a hat, literally within 24 hours of me asking her to move to San Francisco for four months. She just did it without question, because she knew that it was important to me, and that has paid dividends for my life and my career. So, I’m incredibly grateful for that.

Karan: That’s great. I get asked a lot, what is the biggest decision I’ll make as a founder and CEO? And I always tell people, the biggest decision you’ll ever make as a human being is the spouse you decide to spend the rest of your life with. It sounds like you made a really great decision there, so thank you for sharing that. Well, that’s all we have time for, but I am so happy and excited that you and I got a chance to talk. Really proud of our partnership. Really excited to see what the future of Fyxer holds as we partner on this journey together, and thank you for sharing some of the minutes in your day with us today. Thank you.

Richard: Thanks, Karan.

Autonomous CRM: How Clarify Is Reinventing a $100B Category

 

In the latest episode of Founded & Funded, Madrona Partner Sabrina Albert sits down with Patrick Thompson and Austin Hay, co-founders of Clarify, the startup that’s pioneering the idea of autonomous CRM.

Their vision? A CRM that actually does the work for sellers: driving outreach, managing pipeline, and surfacing insights without constant manual input.

Patrick and Austin share:

  • Their journey from Iteratively, Ramp, and Amplitude to founding Clarify
  • Why the CRM market is ripe for disruption with AI
  • How Clarify’s unique pricing model flips the script (free CRM, pay only when the AI agent works for you)
  • The concept of autonomous GTM teams and the rise of the Go-to-Market Engineer
  • Lessons in building culture, hiring talent, and embracing the “beautiful mess” of startups

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Sabrina: Well, let’s start with the founding story. You both had other startup experiences before jumping into Clarify. I’m curious, what made you decide to start Clarify, and why did you decide to tackle such a legacy industry, and a large one at that?

Austin: We like to joke that it’s a little bit of a romantic story, because I was one of Patrick’s first customers when he was building a prior company called Iteratively. I really fell in love with the way Patrick and Ondrej approached customer-centric problems.

Patrick and I stayed in touch over the years. A couple of summers ago, we got together in New York, and as the story goes, we had both been thinking a lot about the CRM space. I had been thinking about it from the perspective of coming from Ramp. I’d spent around two years there watching as we spent millions of dollars to effectively rebuild the same architecture that lots of companies before us had built and failed with.

And Patrick was working at Amplitude and had experienced the exact same set of problems: really, really smart people, tons of money, talented engineers, but effectively building a huge stack on top of legacy platforms.

And nothing in that stack was incorporating all the modern lessons we had learned about data hygiene, data collection, and taxonomy from the customer data platform (CDP) space. So when we got together in New York, I remember sitting down for dinner, or maybe it was lunch, and saying, “Okay, well, I’ll share my idea first, and then you tell me what you think.”

And as we shared, we realized it was actually the exact same idea: what would happen if you took all the lessons from the customer data platform world and applied them to a piece of legacy software like a customer relationship management platform? There’s a lot we could talk about, but I think at its core it was this idea that the CDP really invented a much more modern, flexible, customer-centric piece of software.

It captured event data, which was a new piece of marketing jargon that now everybody knows and loves. And the idea was to have the architecture and flexibility to combine the data elements we’d become used to from a marketing perspective with the sales platform. Since then, it’s obviously taken on a life of its own, and we’ve really leaned into the idea of contextless data: being able to take almost any type of data source, pull it into the platform, and let you drive a go-to-market motion from there. But when it first started, we really brought the lessons from the CDP space.

Patrick: Yeah, the one takeaway from spending about five years working on CDPs was helping companies put these massive reference architectures together, where your CRM sat at the core but you had to bolt on all these other providers to really get value out of it. So when we were talking about the problems, the amount of spend, and the dozens of tools required to build a modern go-to-market team, we thought this was a problem worth solving, and we effectively built a company around it.

Sabrina: I’d say most startups would be very intimidated to go after the $100 billion-plus CRM market. There are a lot of incumbents in the space: Salesforce, HubSpot, a bunch of new startup players. Why do you think there’s a specific “why now” behind building the company and going after this huge, antiquated industry?

Patrick: In the context of “why now,” we’re at a turning point as an industry with AI. So when you think about the different areas you can go address, this was one that was effectively ripe for disruption. And for us as a team, especially as a co-founding team, there was no other opportunity we wanted to tackle that was bigger and bolder than this one.

And we felt like the timing was right. We felt like we had the right team, and we were pretty excited about it. There’s nothing else I’d rather be doing myself. This is exactly what I want to be working on.

Sabrina: I love that. And Austin, maybe you can share a little bit about how you’re seeing some companies that are ripping out their traditional CRM systems, be that Salesforce or others, and building things or stitching together things internally or even just leveraging Clarify to do it.

Austin: I have worked in RevTech for 10 to 15 years now, and I have integrated and managed pretty much every MarTech and RevTech tool under the sun. I still remember when we first started talking to people, a summer or two ago, asking, “Hey, what’s the number one pain you have with legacy solutions?”

It was, one, it’s just so painful. It’s a joyless experience. And two, it’s the price. I know we’re going to talk in a little bit about our pricing and some innovation there. But the number one thing was joy. It was this idea of, “Hey, it’s just hard to use.”

And actually, I think there’s some fundamental relationship here between that experience and the law of last clicks, which says you want to reduce the number of clicks it takes somebody to convert. The same thing happens to a salesperson every single day inside their CRM.

So if it takes five to 10 seconds for them to get to an action or an outcome, they’re going to be joyless. They’re not going to want to enter the data. And the whole model of CRM is built on this premise of, “Hey, we’ve built a database, and all we need is the human to enter the data.”

And so I think what I’m seeing when I talk to people is this very big willingness to try something new, because that assumption of “Hey, I have to do work in order to get value” has just been lingering for so long, and people are tired of it.

So it’s really interesting: when we talked about activation energy and what the hesitations would be for people moving from a different solution to Clarify, I thought it’d be data integrity, I thought it’d be security, all of the classic things that SMB and enterprise accounts want.

But actually, I think all that is being overcome by the joy and hope for the future. The idea that, “Hey, I actually will have a tool that’s designed purpose-built for me as a founder or sales operator,” and it’s actually really joyful to use. It’s not going to take me 10 seconds to get to an account record.

It’s not going to take me a minute to get to some page. If I have a question about the people I know in New York because I’m visiting, I can get the answer really quickly. These are all the hopes and dreams people have had for the CRM for the last 15 years, and they’ve just gone unanswered.

So I actually think the thing driving a lot of people to Clarify right now is not, “Hey, I’m going to get some specific value.” It’s the hope and the joy that the solution we’ve designed brings them. And especially given that we’ve built this in basically a year, it’s really, really valuable. AI is only getting better every day, so there’s also this anticipation of, “Well, if I can already talk to my meetings and get insights, if I can already look at a table and run an insight query, it’s going to be so much more powerful in six months. I’m here for the ride.”

Patrick: One of the goals we have is that somebody uses Clarify at a company and then refuses to join another company because it doesn’t have Clarify. That’s the type of product we want to build: they love it so much that it shapes the way they think about defining their career path. Because I can tell you, having worked at organizations that were on legacy incumbents, I will never use them again.

Austin: Totally.

Patrick: The tools that you use shape the work that you do.

Austin: And it’s not even legacy incumbents either. When I was at Ramp, there was a tool that we were using for productivity. And our CTO, Karim, always used to say, “Let the operators pick the tools, because they’re the ones doing the work.” And all of us wanted this one tool, and it wasn’t even a legacy incumbent. We just had a different tool besides this well-known one in the productivity space.

And it was so bad that we were protesting to our CEO to try to get this productivity tool. So it doesn’t even have to be a legacy provider. There are often alternatives right in market. And we want people to fall in love with Clarify even over our direct competitors and over legacy providers.

Sabrina: Yeah, I love that. Nobody wants to spend time manually adding a new deal opportunity into Salesforce. I think we’ve all been there. If you could just have it magically happen in the background, doing those sorts of things for you, people could actually focus on what brings them the most joy. I love that concept.

And then just empowering them to be more productive at their jobs and letting them do the fun things. One thing that I’ve also heard from customers is that they like how you’re reinventing the business model. With this new wave of AI, you shouldn’t just have to pay in subscription-based, seat-based ways. How have you thought about the business model fundamentally? I know you’re exploring different ways to price the product. How are you thinking about it?

Patrick: Yeah. So I think generally, in the context of AI, most applications are going to be a race to zero. So we’re not charging for CRUD anymore. We charge for the agent; we only charge when our agent does work for you. It’s effectively a usage-based model. So think of our CRM as completely free. You have a free CRM, and what we monetize is the agent that does work on your behalf.

You can think about it as a teammate who’s doing all of those actions for you, and that’s what you’re paying for. This is something I think is radically different from every other competitor we have in market, which charges you based on seats. I had the fortune of working at Amplitude, an 800-person company where we had 400 sales seats and were paying seven figures for Salesforce at the time.

But half our company didn’t have access to customer data. We think that’s a huge problem. We want everyone inside your organization to have access to customer data. That’s effectively the fuel that drives growth. And effectively, all you’re paying for is our agent.

And if our agent’s great, you use it more. We make more money, but you get more value. And if you don’t like some of the things our agent does, you can turn them off and save costs.
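The pricing model Patrick describes (a free CRM, with charges only when the agent completes work, and billable features the customer can switch off) can be sketched as a simple metering calculation. The action names, per-action rates, and event shape below are hypothetical illustrations, not Clarify’s actual pricing:

```python
# Hypothetical sketch of usage-based billing: the CRM itself is free,
# and only completed agent actions are metered. Action names and rates
# are invented for illustration.
AGENT_RATES = {
    "draft_email": 0.05,      # dollars per email drafted
    "update_deal": 0.02,      # dollars per deal record updated
    "summarize_call": 0.10,   # dollars per call summarized
}

def monthly_bill(agent_events, disabled_features=()):
    """Sum charges for agent work, skipping features the customer turned off."""
    total = 0.0
    for event in agent_events:
        action = event["action"]
        if action in disabled_features:
            continue  # a disabled feature does no billable work
        total += AGENT_RATES.get(action, 0.0) * event.get("count", 1)
    return round(total, 2)

events = [
    {"action": "draft_email", "count": 100},
    {"action": "update_deal", "count": 50},
    {"action": "summarize_call", "count": 20},
]
print(monthly_bill(events))                                        # 8.0
print(monthly_bill(events, disabled_features={"summarize_call"}))  # 6.0
```

Metering completed actions rather than seats means cost scales with delivered value, and a customer who disables a feature immediately stops paying for it.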

Sabrina: One concept you’ve talked a lot about is this idea of autonomous CRM. I think that’s a new shift, along with some of the other terms you’ve used, like ambient intelligence, bringing to life what a CRM should be doing for you. Can you share a little more about what those two terms mean to you?

Patrick: Yeah, let’s start with the pain points of existing CRMs. One is data quality, which I’d say has been a challenge for decades. The other is that they require a lot of manual data entry. So we thought about how to solve some of the challenges that CRMs are plagued by.

How do we improve data quality, but also automate as much as possible of the time sellers, founders, and reps spend putting data into the CRM? That’s really where autonomous go-to-market comes in. It’s about reducing a lot of the human input and making it so your tools are working for you, not against you. Speaking as a founder who set up one of these tools for our last startup, I was spending hours inside these legacy CRMs, and they weren’t really giving me a ton of value, but I was told, “Hey, I should be doing this; this is the best practice.”

So when we think about the principles of building autonomous go-to-market at Clarify, it’s: how do we build things that actually provide you value and take work off your plate, so you can spend more time working with your customers and growing the business, and less time doing manual work?

Sabrina: So it’s essentially this shift from automation to autonomy, allowing the system to go and do all the work in the background on its own. You don’t have to have a human in the loop for a lot of these manual things, right?

Patrick: Yeah. If you think about marketing automation or sales automation, this term “automation” has existed for decades, and many, many companies have tried to solve the automation side. I think the difference in the context of AI, or autonomous go-to-market, is that we now have agents that can take action and problem-solve in interactions, which you didn’t have before. So a lot of what we’re building at Clarify is about incorporating agents, or AI, at the core of the workflows. But effectively, it is a lot of automation done autonomously.

Sabrina: So how does this work today? If a customer comes to you, how do they get up and running? What are some of the things that the agents are doing in the background already for you that historically maybe you couldn’t have done or a human would’ve been doing in the past?

Austin: Think about the entire journey for the end user. So it’d be a founder, an operator, a seller, sales VP. What are they doing in a day? They prospect. They find people. They go on LinkedIn. They send emails. They have to reach out to those customers. They have to schedule calls. They have the call. Everybody’s recording calls these days.

And then, in the manual workflow world, they’re taking that call data, transcribing it, and creating the emails they send. Afterwards, they have to remember to follow up. They have five, six, seven steps to get to the final deal creation. And at every single step, there’s a touch point that goes back to the CRM, usually to inform a CRO or revenue leader: how is this deal progressing? How is my pipeline doing? How is my business doing? So when we talk about automation, that administrative work takes up 20 to 30% of a seller’s life.

People don’t realize that the vast majority of the work, and what defines a great seller, is doing that work in a critical way. So we’re not only thinking about, “Hey, what’s the next generation of automation?” Because if you back up five years, people were still using tools like Zapier. They were using out-of-the-box integrations to move data from one point to another, to put systems on top of their legacy CRM to do this work.

But we’re really saying, “Hey, actually, no, you should just be able to show up with your inbox and get all that stuff out of the box from Clarify.” When you log in to Clarify, we automatically create deals for you. We automatically send the emails through. We automatically let you peruse the meetings you’ve recorded, ask questions, and maybe draft an email. We automatically remind you when the person you were emailing never responded.

Imagine snoozing your CRM. To me, that is really the power where you don’t have to think about what happens in the sales process. It’s already been thought of for you.

Sabrina: And I think one of the issues, if you just go back a few years, is that there were so many tools thrown at these people. They were bombarded with a bunch of different offerings. And one of the things you just talked about, Austin, is that you can just easily get up and running. It’s kind of a one-stop shop in a lot of ways.

And so I love that concept of, “Hey, you can just connect your email inbox, and we could do all these things for you.” We have these agents that are running in the background, doing these intelligent tasks on your behalf. Do you think about it a little bit like it’s a co-worker or an employee that you can work with? Is that a concept that resonates with you guys or not so much?

Patrick: It’s an interesting concept. We don’t personify our agents. We don’t give them a name, although we joke about it every so often. It’s like, “Oh yeah, we have a CSM named Devin. Maybe, we should build a Devin support agent.”

And that’s, I think, very common these days. The question is whether that’s the model we’re going to have two or three years down the line, or more of a flash in the pan. In the context of humans interacting with agents, yeah, I think that’s going to be around until the end of time. I think the hardest part is just how you personify the work that we do.

And so for us, we’re trying much more to design it so it just seamlessly integrates, and you don’t have to think about the AI running versus having to actually interact with the AI. This is the biggest difference between designing an agent and designing more ambient AI, so we focus a lot of our time on the design side of making it just work: batteries included, no manual required.

Austin: And I think this comes back to the fact that we have a lot of personas we’re serving. A lot of founders are using Clarify, but there are also a lot of Rev operators using Clarify, and they don’t want another person to talk to, right? Having another Zapier, another iPaaS tool, another integration to manage is actually a burden.

So I think that’s some of the hypothesis that we’re testing right now, and we’re learning as the market grows, is do people actually want to have the full flexibility of designing an agent or do they just want to show up and be taken care of? And I think we’re watching that evolve right now in the market.

Sabrina: Do you find that your customers are still leveraging Clarify plus another tool of some sort, or do you feel that they’re coming to you and trying to have a truly all-in-one place where they could do a lot of the different functions that a selling organization would do?

Patrick: Yeah. The nice thing right now, when it comes to product prioritization, is that we have a ton of feedback from our customers, which is both good and bad.

They’re asking us to build a lot. I think that’s cementing a lot of the all-in-one strategy we have: to be the one place people can go on the pre-sales, sales, and post-sales side. We’re an early-stage startup, so there are only so many things we can build.

So we do partner with other companies, and we do have customers that use us with other tools. We have really good open APIs for that, as well as things like Zapier integrations, to get data in and out of our system. But yes, our customers are asking us to build a better all-in-one solution for them.

Sabrina: And you also recently wrote about this concept of autonomous GTM, a piece I think you wrote with one of our colleagues, Loren, about how teams, back to this conversation, are re-architecting the way they think about going to market with a new set of tools.

Can you define a little bit about how you think about this new world of selling and what an autonomous GTM would mean? And specifically maybe just double-clicking for companies that sell with a more PLG motion and then companies that sell with a more enterprise sales motion, how can an autonomous GTM strategy work for them?

Austin: So if autonomous CRM is helping automate the process of selling, autonomous go-to-market is automating the ways that teams themselves operate and sell. And there’s a lot of market chatter right now around this new role called the go-to-market engineer.

And I actually think its genesis is something that’s been happening for a couple of years, which is that go-to-market teams are becoming more technical in nature, because the systems they’re using are far more technical in nature.

The problems Patrick and I faced when we decided to start Clarify were not new to the industry. Pat was sitting at Amplitude seeing millions of dollars in spend, working as a pseudo-technical resource with teams to try to understand how these systems work. I was working at Ramp as a Rev and MarTech operator, managing $10 million and a team of 26 to help our massive sales team be successful.

And that, to me, is the genesis of autonomous go-to-market: this idea that if you want to be the most successful rev team in the world, you have to apply core EPD principles to the art of revenue generation.

So now fast-forward today, you have really complicated systems, but you also have people who have a deep desire to make the most of that. So what I think is happening is that you have many more folks who were traditionally in non-technical roles spanning a huge set of skills in sales.

So just as an example, Travis on our team, who’s an amazing seller, is working in Clarify, working with workflows, working with APIs. I think part of what’s happened is that there’s not only a necessity to be more technical in go-to-market, but there are tools, gifted by AI, that allow people to do that.

So what I’m seeing now is a big blend across the roles, with less division between a salesperson, a sales ops person, deal desk, and the sales engineer. A lot of those roles, especially at small and medium-sized companies, are just being shared.

And what that means, I think, is that in the short run you’re going to have either smaller teams that are more nimble and more resource-effective, or bigger teams that can produce a lot more volume because they have a broad set of skills across the team.

And what I think this ultimately leads to is the idea that you can effectively automate your entire revenue stack, and the way you generate revenue, by using systems, tools, and people with AI. I still think we’re in the really early stages, though.

What’s not talked about on LinkedIn is just how much work and effort goes into setting up these systems and managing them. This gets back to the idea of the autonomous CRM. There are lots of really cool things you can do with tools now. n8n is great; Zapier and Lovable are amazing. There are applications you can build, but the maintenance cost right now is still super high.

So we actually had one of our customers do a post with us last year about the 17 different tools they were using on top of their legacy CRM. They ended up winding back a lot of those tools when the person who managed them left, saying, “Okay, well, we thought having all these tools would enable us to be more successful in revenue generation, but what it actually ended up doing was creating a lot of maintenance cost.” I think that’s a recurring theme and an unspoken part of the autonomous go-to-market story that we haven’t yet fully reckoned with.

Patrick: The only thing I’d add there is that I do think people are trying to do more with less and/or scale quickly. In the context of AI, we talk a lot about the 10-person billion-dollar company, right?

And the only way to do that is to build systems that can scale, and effectively build your go-to-market motion to scale, regardless of whether it’s a sales-led motion or a PLG motion. I spent a lot of time at companies like Atlassian, where we had a huge PLG function and transitioned it over to more of a sales organization over time, and then at Amplitude, which had both PLG and sales-led functions. The more you can look at things like product usage data, the more intent signals you can look at, the more your sellers are working the right accounts.

So when it comes to Clarify, one of the things we spend a lot of time on is how to get better intent data into the CRM. How do we action product usage data for building things like PQLs, for upsell and cross-sell opportunities, or even for surfacing retention problems or customers we should be investing more time in?

And that’s effectively, again, looking at the data and making sure we have the right automations and the right intelligence on top of it, so that the people on our team are more impactful operationally.

Sabrina: Tell us a little bit about your data strategy. Where are you collecting data from? What’s your hypothesis on this idea of intent data? Everybody’s talking about this as it’s kind of like this jewel, but what does that really mean and how do you actually leverage that into Clarify?

Austin: I feel like a lot of this reminds me of the great marketing term in the CDP landscape, which was orchestration. What actually is orchestration? Nobody really knows, but we all talk about it. It’s the same thing here with intent data. I think that a lot of times, you have to back up to what is the actual customer problem.

So if the customer is trying to have insight into the meetings they’re having and the deals they’re working on, we provide some of the best-in-class data out there. With intent data in particular, though, I feel like it’s become a bit of a marketing term for capturing data that’s not always clear. You probably won’t get great data if you’re looking at cookies only.

And then the outcome is what are you trying to do with it? Are you trying to pounce on accounts? Well, if you’re trying to pounce on accounts, you might as well add a third-party SDK to your site to try to deeply identify people and then go after them either with a targeted email message, a website message, or something like that.

So I think a lot of the times, we mix up the marketing term for something new and fancy. And in fact, we should just be looking at the customer outcomes we’re trying to achieve.

Sabrina: And how clean does the customer data have to be? Do you always have to make it structured so you can help them understand what’s behind it and actually utilize that data? Is there a lot of work your agents have to do to clean up that data, or is it pretty easily accessible now?

Patrick: Yeah. In the context of what we’re building, there’s some level of determinism you want when building your go-to-market motion. Yes, agents, AI, and LLMs in particular are really good at building and spotting patterns.

We still believe in structured data. So when you think about the platform that we built for Clarify, it is really focused on relational data, time series data and unstructured data. So we look at things like email bodies and call transcripts to be able to spot patterns.

Our entire system is event-driven. So we look in particular for things like selling intent in an email to decide whether or not we should update a deal. That means looking at the overall context of the email and understanding your business at a high level.

But, again, when we go update a deal, we’re updating structured fields on that deal for you. We think that’s extremely important, because you want an activity log of all the updates on that particular deal, so you can trace it back and improve it. There are a lot of folks who say, “Hey, I think AI is fairly good. Just throw it into NotebookLM and make sense of it.” That works well for general customer intelligence, but it’s not really good for building out automations or capabilities to really scale a business.
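The flow Patrick describes (an event-driven system that detects selling intent in an unstructured email, then writes structured deal fields while keeping a traceable log of every change) might be sketched roughly as follows. A real system would use an LLM for the intent classification; the keyword check, field names, and log shape here are invented stand-ins:

```python
# Illustrative sketch only: a keyword check stands in for what would
# really be an LLM-based intent classifier. Field names and the
# audit-log shape are hypothetical.
from datetime import datetime, timezone

BUYING_SIGNALS = ("pricing", "contract", "start date", "procurement")

def has_selling_intent(email_body: str) -> bool:
    """Crude stand-in for intent classification over unstructured text."""
    body = email_body.lower()
    return any(signal in body for signal in BUYING_SIGNALS)

def apply_email_event(deal: dict, email_body: str, audit_log: list) -> dict:
    """Update structured deal fields only when the email shows intent,
    recording every field change so it can be traced and improved later."""
    if not has_selling_intent(email_body):
        return deal  # no structured update for routine chatter
    old_stage = deal.get("stage")
    deal["stage"] = "negotiation"
    audit_log.append({
        "deal_id": deal["id"],
        "field": "stage",
        "old": old_stage,
        "new": "negotiation",
        "source": "email",
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return deal

deal = {"id": "d-42", "stage": "discovery"}
log = []
apply_email_event(deal, "Can you send over pricing for 50 seats?", log)
print(deal["stage"], len(log))  # negotiation 1
```

The point of the log is exactly what Patrick notes: each structured update can be traced back to the event that triggered it, which is what makes the automation auditable and improvable.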

Austin: Yeah. And that insight, I think, came from our time working in CDPs, where you absolutely have to have structured data to meet the needs of downstream marketing tools. But it was actually pretty cool: when we started Clarify and were talking about the architecture, there was a whole thought process around what it would look like if you had structured data but allowed the user to select the interface in which they view that data, which today is not really possible.

But you can imagine a world where, even if you collect unstructured data and maybe structure some of it for marketing or analytics purposes, a person could render the view they want of the data that’s most meaningful to them, on the fly. That’s never really been possible before. So much of legacy systems define the mode in which you view, access, and consume data. We’re entering a world where maybe you don’t have to have that at all. Maybe you can let the end user, the salesperson, the VP, the sales manager, define what they see, and they’re in control of their own go-to-market process.

Patrick: And you can do this in Clarify because, effectively, Clarify is just an application layer built on top of a warehouse. So you can define, or have the AI define, whatever fields get shown, build relational joins on the fly, or even define computed fields. At the end of the day, we’re just rendering SQL that queries the data warehouse.
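A minimal sketch of the "application layer over a warehouse" idea Patrick describes, where a declarative view definition (fields, on-the-fly joins, computed columns) is rendered down to SQL against the warehouse. The table names, schema, and builder function are hypothetical, not Clarify's actual implementation:

```python
# Hypothetical sketch: render a warehouse query from a declarative
# view definition, the way an application layer over a warehouse might.
# Table and column names are invented for illustration.
def render_sql(view: dict) -> str:
    """Turn a view definition into a SELECT over the warehouse."""
    cols = list(view["fields"])
    # Computed fields become SQL expressions with aliases.
    cols += [f"{expr} AS {name}" for name, expr in view.get("computed", {}).items()]
    sql = f"SELECT {', '.join(cols)} FROM {view['table']}"
    # Relational fields become joins built on the fly.
    for join in view.get("joins", []):
        sql += f" JOIN {join['table']} ON {join['on']}"
    return sql

view = {
    "table": "deals",
    "fields": ["deals.name", "accounts.domain"],
    "computed": {"days_open": "DATEDIFF(day, deals.created_at, CURRENT_DATE)"},
    "joins": [{"table": "accounts", "on": "deals.account_id = accounts.id"}],
}
print(render_sql(view))
```

Because the UI is only a rendering of such definitions, a user (or an AI) can change what they see by editing the view, and the warehouse stays the single source of truth.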

Sabrina: Yeah. I love that, and I love that you’re building a system that surfaces insights and analytics even before you ask, plus a lot of the smart reminders. I’ve used systems before where it flags that you need to follow up with this person, or shows signals that it’s been a certain number of days since you let an email pass, so how do you make sure you’re on top of these sorts of things? I think it’s this idea of surfacing things before you let them slip, which is really, really great.

Patrick: I think at the end of the day, that’s the value that Clarify provides. There are a bunch of CRUD applications that you can purchase on the market, traditional CRMs. But when we think about what we’re building, we’re building effectively the platform for go-to-market teams plus the agent that interfaces with it, and the agent is really the value here that we’re providing.

Sabrina: One thing that I want to talk about is team and culture. You were both founders before building Clarify. You’re building in one of the most competitive spaces and times of our lives with AI; everybody is in a fight for talent. How have you found hiring to be? And what are some of the lessons you’re learning in terms of attracting great talent to Clarify?

Patrick: One of the things I think we talked a lot about is building the culture to attract great talent.

So I think if you build the right culture, the right people want to come work for your organization. I think we’ve done a really good job there. We codified our values and the culture that we wanted to build early on, specifically as it relates to hiring engineers. We think of engineers as product owners. We really want engineers to own more. We want engineers to be customer-facing.

So there’s a lot of talk right now within AI companies around forward-deployed engineers, but we live and breathe that model. Our engineers are the first line of helping customers resolve issues. That’s helped attract some really great engineers to our team, some that we worked with before and others that are definitely new to the organization. But giving folks a high degree of autonomy and really empowering them to do the best work has made it really easy to hire some awesome talent.

Austin: Yeah, I think two other things come to mind too. One, we chose to reach deep into our networks to hire people that we’d previously worked with, folks that we knew, loved, and trusted. So that was an advantage that we had as founders and previous operators, just having a large bench to call upon.

But I would say, actually, I give Patrick a lot of the credit here: your job is to sell everybody on not just the product that they could buy, but the vision for the world. And I think one thing I’ve certainly learned from you, Patrick, is that every conversation is an opportunity not just to sell the person on your product, but to get them to fall in love with the thing that you’re doing and building.

And so one thing that we’ve done early on is we have conversations with people regardless of whether we can sell to them and regardless of whether they’re a hire or not. We try to help as many people as we can, because I think the vision in the long run is that the more you put yourself out there, the more consultative you are, and the more you just try to have good karma and help people in whatever they’re trying to do, the more that’s going to come back to you.

And I think one example of that really paying off is we’ve had hires where they didn’t work out and then came back a year later.

We had a hire where we thought maybe we’d sell to them, and they became a customer. We had a hire where there was no chance in heck they were ever going to leave their job. And now, they’re working for us and having a great time. And so I think the lesson, if there is one for founders, is just never miss an opportunity to just lean into a relationship and try to provide value. I know this is hard because everybody’s busy. And so you have to have some level of judiciousness with your time, but you definitely live and breathe this. And I feel like that’s really influenced me and Ondrej and a lot of the team, and just how we think about working in service to others.

Patrick: This also goes into selling. There’s never a “no.” It’s just a “not now.”

Sabrina: I love that. As we close out here, I want to ask a couple of final questions as we think about the bigger picture and the future for Clarify. As we think about 2026, what do we think the go-to-market team of the future looks like? What roles are changing? What roles may disappear or what new ones may emerge?

Patrick: There are a couple of things here that I’d love to touch on. I think there’s generally a convergence back into a unified account manager role, where you don’t have the divergence across pre-sales, sales, and post-sales.

I think most of this is because you now have the tools to enable these motions at scale, so you tend to have an account manager who owns the entire relationship, whereas previously there were a lot of different handoffs. And because you want sales, primarily AI sales, to be much more authentic today, you really want one person to build a relationship with the customer.

And I think that’s why you’re going to have this convergence back into the account manager role. You’ve kind of seen this with post-sales: companies investing less in post-sales, not more. And then, on the pre-sales side, there’s generally a lot more investment in the market into automated SDRs and BDRs, and I think that will increase volume over time. But, generally, you still need that handoff to an account manager to actually run the sales process.

Austin: I also think that teams are just going to become more technical in nature. We’re going to see more engineering involvement in revenue tech and revenue teams in general. The line between growth engineer, systems engineer, and rev ops is totally blurred. Now, you just have one person who can actually work across all those different disciplines to just deliver value.

And I think that whether or not we achieve our mission of making a fully autonomous CRM, there are always going to be people who want to use APIs and build their own custom applications on top, especially as we go to the enterprise.

And so what I think that means is that if you’re in a world where APIs and technically driven systems are table stakes, then you’re going to have tech teams embedded inside of sales teams. That’s going to continue to occur, and it’s only going to get better, especially as APIs and MCP servers come out. Now you can get to a world where the work of a single engineer replaces an entire team of 10 working on integrations between 15 different tools.

Maybe, you just have one to two tools, and then an engineer writing custom code on top of MCP servers to a bunch of different tools handling the kind of custom work that used to be a multi-million dollar division. I think that’s not a crazy thing to see in the next five years.

Sabrina: That’s fascinating. Okay. So for founders listening, what is the most high-leverage thing they can do today to build towards this autonomous go-to-market future?

Patrick: Just make sure that you’re prepping your sellers as well, the people that you’re hiring on the go-to-market side, so that they’re used to using these tools. My favorite question to ask in any interview, but primarily with AEs, BDRs, and SDRs, is, “Tell me how you use AI.” Are the folks you’re hiring AI literate? This is a standard question. If I were to hire an engineer right now, I’d want to know how they’re using Claude Code or Cursor or Windsurf.

But we don’t think about asking those same questions on the go-to-market side: how does an AE use Claude or ChatGPT to do account planning and account prep? And you should be drilling this into your team, like, “Hey, we’re going to get left behind if we’re not investing in AI and we’re not at the forefront of what it’s offering us.” The alpha right now for companies is really leaning in on that. And yes, Clarify is a part of that story.

Sabrina: Awesome. Well, everybody heard it here first. You’ve got to use Clarify. Give it a try. I think it will truly change the way that you do selling, and just think about what it means to build the future of a go-to-market team.

So to wrap us up today, just a fun question: what is one lesson from this journey, even just in the past 12 months, that each of you wishes you had known going in before starting Clarify? For founders who are listening, what is something they can take away with them?

Patrick: The team is everything. Just build a great team; that’ll carry you half of the way there.

Austin: One thing I was thinking about is that Pat and I have both been through a lot of different startups. He founded Iteratively. I built a small company. I’ve worked for four different founders in my lifetime. And I thought when we started Clarify that we would make the best company that ever existed, because we’d learned all the things you can possibly learn about startups after working for four or five founders, or seven or eight between the two of us.

And the thing that I actually learned is that no matter how hard you try, it’s still going to be a beautiful mess. So my advice to founders would be just embrace the fact that it’s going to be messy. Embrace the fact that you’re going to do things wrong, things are going to go awry.

All the expectations you have from prior building you can still apply, and there are going to be new, unknown parts of the equation that come up throughout the process. So just lean into the fact that building startups is messy, and that’s half the fun.

Sabrina: Awesome. Well, thank you so much for joining us today, Pat, Austin. This was a lot of fun for me, and very glad to have you on.

Patrick: Thanks, Sabrina.

Austin: Thanks for having us, Sabrina.

V0’s Creator on What’s Next for AI Dev Tools

 

In this episode of Founded & Funded, Madrona Partner Vivek Ramaswami sits down with Jared Palmer — designer, developer, founder of Turborepo (acquired by Vercel), and former VP of AI at Vercel, which was a 2024 IA40 winner.

Jared walks through his unique path from Goldman Sachs to Vercel, and how he combined finance, design, and engineering to create beloved developer tools like Formik, TSDX, Turborepo, and v0.

The two dive deep into:

  • Why vertical integration is the future of AI-native dev platforms
  • The founding and launch of Vercel’s v0.dev
  • How Vercel is positioning for a world with 700M code-generators, not just 28M developers
  • What makes teams and products move fast
  • Why “text-to-app” will soon become “text-to-business”

Whether you’re a founder building dev tools, a product leader thinking about AI-native apps, or a developer curious about the future of your craft — this episode is packed with lessons and foresight.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Vivek: So let’s get into this, because I think there was an interesting thread we were just pulling on. We were both in the same summer analyst class in banking. I ended up sticking with it, and that’s a whole other thing. But you had an amazing evolution from starting in banking to building products in the developer world. Just take us through that transition. What were those early career moments for you?

Jared: My finance career started and ended at Cornell. I really enjoyed it. I think I did it for the wrong reasons. I think I was doing it for the… Well, I shouldn’t say that. I think I was doing it for the status of it. I grew up in New York City, and at the time, nobody’s parents were software engineers. They were all bankers and doctors and lawyers, because that’s what people do in New York. And then when it came to college, all of what I thought were the super smart kids were in finance, so that was the competitive thing to do. And me being a pretty competitive person, it just seemed like the thing to do, and I was really good at it too. I did dabble with physics and math, but finance was like, okay, let’s actually get a job. And my brother works in finance too. He still does. He still works at Goldman, so it was a family thing too, of course.

So I graduated at the top of my class at Cornell, and I get THE internship, which is Goldman Sachs, Investment Banking, Financial Institutions Group (FIG), although I think you actually did a little better…

Vivek: I don’t know about that.

Jared: Because you got TMT, which is what I wanted to do.

Vivek: In New York, FIG was the thing.

Jared: Well, yeah, so I wanted TMT, but they were like, well, FIG, which is very prestigious. It’s one of the most hardcore groups. It was also a very different group. As you know, in FIG, the valuation system is very different. You don’t value banks off of the income statement; you value them off the balance sheet, so you don’t get to use all the same models and systems. It’s totally black magic and totally isolated, and it’s very pigeonholing, especially for me. I was on the banks team, so I was doing investment banking for banks.

It was academically interesting. The hours were grueling, but it was a great experience. I learned how to work really, really hard. But I also came home one day from the internship, and I remember this visceral conversation with my dad where I was like, “I don’t want to be anybody that I work for. They’re all on their third marriage.” All the partners, first of all, they do fax edits with… they mark them up. That was a sign. But they’re not going to their kids’ sporting events, they’re not going to the lacrosse game, they’re doing presentations. And are they helping the world in some way? It wasn’t like a passion, “Oh my God, I need to go save the world.” It was just that I didn’t want to be these people. I also didn’t like the job, so I only lasted a summer.

Vivek: I get that. And it’s amazing how little has changed in many ways. But what’s interesting is, as you say, you started the very, very early part of your career in finance, but then you ended up deep in the developer world. Software engineering, and not just applications, but deep on the developer-tools side: Formik and Turborepo. What led you to that space? What was the evolution of your journey on the developer side?

Jared: So it all came from design first. But basically, there was a gradual progression deeper and deeper into the stack. I was always the guy that did our frat t-shirts. My mom was the VP of design at Estee Lauder for many years in the ’80s and ’90s. My dad is a creative. He was a music producer at the time, now he’s a tech blogger, and he did a bunch of commercial production as well, and they taught me Photoshop as a kid. And so I had this innate sort of design family, if you will. So I was always the guy to do the frat t-shirts and do the invitations and stuff like that. Then one of my fraternity brothers had this idea for an app, and kind of like in Legally Blonde, classic Elle Woods, I was like, “What, like it’s hard?” Just crazy confidence. And so I just designed it in Photoshop, which was the tool at the time, and it was fine.

Anyway, that led to a couple more apps and a couple more things. Basically, coming out of college around 2013, I had some burgeoning freelance design opportunities in front of me, and I lived in Manhattan and could live at home. I was very fortunate and privileged to be able to do that. And I was like, okay, I’ll go back to finance in a year. I’m just going to do this. I had gotten some pretty sweet offers for some apps that I could design, and that was great. I did that. And I never went back to finance. But I kept designing, designed for my friends, designed for different applications, stuff like that. And my freelance portfolio grew and grew and grew.

And at the same time, there were some new prototyping tools that had come out. The company Framer, which is actually still around today, wasn’t initially the Figma-type competitor it is now. The original version of Framer was actually a prototyping tool. When I started using Framer, everything sort of just clicked. I had taken some intro to programming courses at Cornell, but 101, 100 level, right? Nothing serious. But something about Framer just made it amazing, because what it let you do is import your Photoshop layers and then animate them very quickly with a little bit of code, not scary amounts of code, but a little bit of code. And you’d see it, and it would feel super high fidelity. You could hand it to a client, and they’d be able to play with it before it was built.

So I was addicted to it.

Vivek: Mindblowing

Jared: Oh my god. And it’s funny because V0 now actually has kind of a similar interface, with the preview on the right and the code, or now the AI chat, on the left. It’s still kind of the same thing in my mind, just the AI version of it. But anyway, this grew and grew and grew, and eventually I started posting to the Framer Facebook group and joining the community, going to the meetups and stuff like that. I was running the Framer New York meetup at the time, and this just grew into a freelancing job. And then I realized, about two or three years into it, that I would make a lot more money if I just built the whole app.

And so I did. I just figured out how to do it. My prototypes were also getting so intricate and realistic that I was starting to use Firebase and other things like that. I remember getting a DM one day, on Messenger or on Facebook at the time, and it was like, “Hey, I saw you posting in the Facebook group. My name’s Mikhail Lumens. I’m a designer at Instagram. I would love for you to come in and interview to design at Instagram.” And I was like, “What?” Anyway, I did. I didn’t get the job. I failed one of the interviews, but they also said I had the craziest resume ever. They were like, “We don’t know what to do with you.”

So anyway, moving along, my freelance career kept going, and I kept going deeper into the stack, building full-stack applications. I teamed up with a couple of people in New York and grew that into an agency for a couple of years; I was using some of my dad’s contacts there, too. And the way I would attract talent was to produce open source. I was seeing what Vercel and Facebook were doing and how they were doing it. They were able to attract talent by leveraging GitHub, using it not only to lower their HR and recruiting costs, but also to share and influence the way they wanted technology to be built. And I did the same thing.

And so that led me to, when I was working with our clients, whenever I would come up with some novel solution, I would open source it immediately, and that led to Formik and that led to, I don’t know, a bunch of other GitHub projects. Then, I also thought that GitHub was going to be my way to credentialize myself, because with my background, I’m a designer, finance guy, but no one cares if you’ve got a zillion GitHub stars and you’re the creator of blah, blah, blah. No one’s ever going to talk about my background ever again if my experience is like, “Oh wow, he designed blank,” or, “He’s the author of blah, blah, blah.” So I figured, okay, this is my ticket. And so yeah, I just got really deep into open source and again, also building a bunch of applications. And fast-forward five years, I like to say I feel like I built a lot of the internet because I just got so many reps in building so many different kinds of applications.

Vivek: I love that. I mean, I think it’s so unique. Like finance, design, and then open source. You’re combining so many different things. But we’re going to talk about V0, because that’s really exciting. But how did you even make your way into Vercel? You were sort of acquired into Vercel.

Jared: Yeah, that’s correct.

Vivek: What was that journey like? Did you know Guillermo before? Yeah, how did this happen?

Jared: It’s a funny story. So I was building a lot of applications at the time with Next.js, which is Vercel’s flagship open source project. It is, I’m going to say, the most popular web framework in the world. If you name a company, they probably use Next.js: Anthropic, Perplexity, Sora, the old ChatGPT, they’re all Next.js applications. But when I was working for my clients at the time, they were clients, not customers, Next.js was very much in its infancy, and I didn’t like all the decisions. And so I actually published my own competitor to it, trolling Guillermo. I called it After.js.

Vivek: That’s one way to get noticed.

Jared: Definitely. I called it After.js, and it immediately got his attention, and that sort of started this long relationship that the two of us had. I didn’t actually like everything that Next.js was doing at the time, so we built our own in-house, and we open-sourced it. And I believe ours was actually decently popular. I think Coinbase was using our After.js for a while, and so was Microsoft Store for a minute. But basically, my most popular open-source project was called Formik, a form library for React. It was the most popular form library for React, and I wanted to bootstrap it a little bit more. Fast-forward another year or so: I got some term sheets, I turned them all down, the pandemic happened, and things were going okay, but not amazing. It turns out forms are really hard, and it’s not a great business.

It’s an okay business. But while I was building it, I ran into a pretty difficult problem with build times, and this led to my next company, which ultimately became Turborepo. My build times were taking 10 minutes, because I was deploying to AWS Fargate at the time, and there’s no way to roll a Fargate container in under 10 minutes. So if I changed my backend, it took 10 minutes to deploy. I’m very impatient. This was no bueno. I cannot do this. It’s just me; I need to stay nimble.

So I hacked together some crazy CircleCI scripts, and it was some serious hacking. It was a disgusting YAML file that was thousands of lines long, to figure out: okay, when I change this, only deploy that. And I figured there had to be some way somebody had solved this before. This is not a novel problem. It’d be sweet if I could only deploy the thing that actually changed. And I realized that yes, this has been solved before, and it’s called a build system, and they’re very sophisticated. Google has one, called Blaze internally; Bazel is the open-source version of it. Facebook has one. They all have stupid names, by the way.
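The “only deploy what changed” hack boils down to mapping the files touched in a commit (for example, the output of `git diff --name-only`) to the services that must be redeployed. A toy TypeScript version of that selection logic, with an invented directory layout and invented service names:

```typescript
// Toy "affected services" selection: given changed file paths, decide which
// services need a redeploy. Layout and service names are made up for the
// example; a real setup would also follow the dependency graph.

const serviceRoots: Record<string, string> = {
  api: "services/api/",
  web: "apps/web/",
};

function affectedServices(changedFiles: string[]): string[] {
  const hit = new Set<string>();
  for (const file of changedFiles) {
    for (const [service, root] of Object.entries(serviceRoots)) {
      if (file.startsWith(root)) hit.add(service);
    }
    // A change to shared code invalidates every service that depends on it.
    if (file.startsWith("packages/shared/")) {
      Object.keys(serviceRoots).forEach((s) => hit.add(s));
    }
  }
  return [...hit].sort();
}
```

A giant hand-maintained YAML file encoding these rules is exactly what becomes unmanageable, which is why real build systems derive the affected set from a declared dependency graph instead.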

Vivek: I don’t know why that is. It’s weird.

Jared: They all have dumb names. Twitter’s is called Pants; that’s my favorite by name. Facebook’s is called Buck. And when I say this part, you’re going to be like, “Duh, it obviously works that way.” When they make a change to Facebook, they don’t rebuild all of Facebook. They only build the part that’s different and the part that’s impacted by the change. Of course. But that’s not how the JavaScript ecosystem evolved.

And so this was a sort of eureka moment for me. I was like, “Huh, well, that’s interesting. I wonder if there’s something there.” And I looked, and there was some stuff, but it didn’t have the right energy to it. It wasn’t the way I would do it. And so that’s what ended up becoming Turborepo. So I found a way to get the best of both worlds, which is this idea of: let’s only do the least amount of work possible, be as lazy as we possibly can, only incrementally do what’s necessary, and then cache as much as possible.
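The core build-system idea here, do the least work possible and cache the rest, can be sketched in a few lines. This is a deliberately simplified illustration of input-hash caching, not how Turborepo is actually implemented (real systems hash file contents, environment, and the dependency graph):

```typescript
import { createHash } from "node:crypto";

// Sketch: hash a task's inputs; if an identical hash has been seen before,
// replay the cached output instead of doing the work again.

type Task = { name: string; inputs: string[]; run: () => string };

const cache = new Map<string, string>();

function hashInputs(task: Task): string {
  // In a real build system the inputs would be file contents, not names.
  return createHash("sha256")
    .update(task.name + "\0" + task.inputs.join("\0"))
    .digest("hex");
}

function execute(task: Task): { output: string; cached: boolean } {
  const key = hashInputs(task);
  const hit = cache.get(key);
  if (hit !== undefined) return { output: hit, cached: true }; // cache hit: no work
  const output = task.run();
  cache.set(key, output);
  return { output, cached: false };
}
```

Running the same task twice with unchanged inputs does the work once; the second run is a pure cache replay, which is what makes the “be as lazy as possible” strategy pay off.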

Vivek: So Turborepo takes off. Super important, highly efficient. How did this end up at Vercel?

Jared: So I did this all by myself, solo, and then I was going to raise a seed round, and Guillermo was like, “It’d be super sweet if you just joined Vercel.” And I was like, “Well, why don’t you pay me?” And then, I actually never told this story, I also took Guillermo’s offer to Netlify, which was Vercel’s biggest competitor at the time. And that started a bidding war. But I had a lot of leverage, because I had term sheets and I had an acquisition offer, so at any point in either acquisition, I could just walk and be like, “Okay, well, I’ll just take the round and go.” Ultimately, that went on all summer of 2021, and Vercel made an incredible offer that I just could not refuse, and I didn’t take a dime of funding and joined Vercel.

Vivek: That’s pretty amazing.

Jared: It’s pretty awesome.

Vivek: Does Guillermo know that story now?

Jared: Yeah, yeah, yeah. He knows that story now.

Vivek: He’s not going to be watching this podcast and saying, “Wait a second, I didn’t know that.”

Jared: No. But pro-tip… I’ll probably regret this with my future M&A. But pro-tip to any founder — a great time to sell your company is while you’re raising.

Vivek: Interesting. Why is that?

Jared: Because you have the most amount of leverage, right? Because you can always just walk away and say, “Peace, I’ll just see you in the next round. It’ll be more expensive next time.”

Vivek: Do you need a term sheet in place, or do you think you can just do it even if you’re fundraising?

Jared: I don’t know — it probably depends. The fundraise makes it quite real. If that makes sense?

Vivek: It does. Yeah. No, I think it’s funny to see some of your early M&A skills from 12, 14 years ago coming in handy now, right? So now you’re at Vercel. You’re an ex-founder, you’ve been running your own business, a set of businesses, for a long time, and now you’re inside this fast-growing, fast-scaling company. What is that experience like, going from someone who owned everything to now being inside a business?

Jared: Yeah, it was different. So initially at Vercel, I came in as a software engineer and-

Vivek: IC?

Jared: Yes, as an IC, and then I built out the Turborepo team over the course of… It wasn’t my first assignment, but I remember the transition away from coding every day was very strange for me. And the other thing that was very strange for me was being blocked. I had never experienced this.

Vivek: Someone said no.

Jared: Someone said no to me. What do you mean I have to wait for something? I’m always used to, well, if I can’t solve it that way, I’ll just hit it again tomorrow or I will attack it in another direction or whatever. But when I got to Vercel, I was like, “Oh yeah, the person who’s responsible for Terraform is actually asleep right now because they’re in Europe, and you need to wait until tomorrow to talk to them.” And I was like, “Wait until tomorrow?”

Vivek: That’s crazy.

Jared: And Vercel’s one of the fastest-moving companies in the world, but we’re very distributed, so time zones are a real thing. This was a very strange thing. I’m used to, okay, I’ll go figure out this Terraform configuration and get it done. And then I realized that’s not an efficient use of time. So that was a big difference. I adjusted very quickly, but that one caught me off guard. The other thing that caught me off guard was that I was always so sure I was going to get fired, and I was super terrified of it. So I would always work extremely in public. This was actually different even for Vercel, but it’s one of the things my teams do that I’m very proud of. I just treated our public Slack channel like a group chat, and I didn’t really even care that the entire company was seeing it. But this was radical at the time. This was like, “Who is this guy?”

Vivek: Radical transparency, right?

Jared: Yeah, exactly. Radical transparency. And I would commit war crimes on Slack.

Vivek: Right, right, right. So they’re like, you have to be here, right?

Jared: Every time I DM him, he’ll just say, “Let’s move to a public channel.” I wanted receipts, because I was so convinced that I was going to miss something, or someone was going to ask me to do something and I was going to miss it or misinterpret it. So if it’s all in the open, well, then I’ve got receipts. There’s a paper trail.

Vivek: There’s a record. There’s a paper trail, there’s a record.

Jared: It became a superpower on my teams, and actually all of my teams work that way. Even the V0 team works that way right now. And it’s awesome. The thing that I did learn, though, is that you need a certain type of employee who will thrive on that type of team, and you need, I call it, a classroom environment, if you will. There are certain types of people who raise their hand in class and will get something wrong, and they’re okay with it. They’ll just brush it off and raise their hand again, and that’s totally fine. There are other types of people who raise their hand in class, and that’s it, they get it wrong, they-

Vivek: It’s done.

Jared: They’re done for the day. It ruins their day. And there are some people who know the answer but will never raise their hand. That’s not what we’re looking for. We want people who raise their hand, are okay with being wrong, can get over it, and can check their ego at the door, because the more important thing is the mission, right? So that’s what I ended up self-selecting my team for, if that makes sense.

Vivek: And even before getting into V0, you need leadership that allows you to do that, that makes it okay for you to raise your hand, get it wrong, learn from that mistake, and then come back to the well.

Jared: Totally. But it ended up being a key thing that I think helps my teams work so fast. And I think the other thing I learned was that I was a little bit faster than others. Again, because I was completely self-sufficient, right? I had done my 10,000 hours, but at my own pace, and this was different than other people’s pace.

Vivek: Right. So speed matters.

Jared: Speed matters.

Vivek: Especially in a place like Vercel. So okay, let’s get to V0, one of the fastest-growing AI apps out there. Plenty of people listening to this podcast are users of the product. What were the origins of V0? Did you have a team that you spun up that was thinking about a product like this? Take us through the beginnings and the roots of V0, up to where it is today.

Jared: I spent about a year building out Turborepo at Vercel, and then I went from IC to director of engineering for all of Vercel’s open source projects: Next.js, the React core teams, Svelte, Turborepo, Turbopack, and the web tooling. So the crown jewels included. I did that for about a year. And I was helping the Next.js team dogfood the latest version of Next.js, and instead of building a to-do app, I started playing with some AI stuff. And then I built what is now the AI SDK’s playground, and that led to the AI SDK, which is different than V0, but that came first.

And I knew the AI SDK was going to be big because it addressed a really big problem. Streaming was very hard, and working with LLMs is very difficult. So the AI SDK came first. But then, fast-forward to the summer… After I launched the AI SDK, I basically got the green light to go all in on AI and take Vercel AI from zero to one. We didn’t know what that was going to be yet, but we knew the AI SDK was probably a thing, and we bet big on this AI thing. It’s interesting because Vercel was always the home of all the crypto apps, but we never really leaned into crypto. It just happened.

Vivek: You just made a home of that.

Jared: It just happened, of course.

Vivek: Yes.

Jared: But we’re not going to take this passive stance on AI. We’re going to go very hard on AI. And this was 2022, something like that. 2023, I guess.

But we had one rule, which was no random acts of AI, no slop; it had to be pretty good. And we’d seen what other people had been doing, some docs chatbots and stuff like that, and that was what we would call a random act of AI. And so there were basically two proposals that I made to Guillermo. One was called DevGPT, which was a ChatGPT-slash-Perplexity that was very much focused on just our audience, just developers. It was going to have indexes of all the most popular docs that we would hand-index, and I would do retrieval, very much like Perplexity. Whatever we could do for coding, best effort, talk to your repo, stuff like that. That was in the summer of ’23. This is like GPT-3.5. It was pretty early. The other proposal was called Webjourney. And Webjourney was this idea that, well, we were all in love with Midjourney, it’s amazing, right? What if, instead of generating an image, you could generate user interfaces? What’s funny is we ended up doing both, but we went with Webjourney first. And we did some prototypes, and I soon realized that summer that a couple of things were key unlocks. The first was that these models are really good at HTML and they’re really good at Tailwind CSS, which is a special type of inline CSS framework.

And Tailwind is really incredible because all of your style information, what it looks like in the user interface, is encoded and co-located right on each div it applies to. This is amazing for LLMs because you don’t need to separately stream a CSS file or something like that. And so I started rendering the output of GPT-3.5 Turbo and prompting it: only use Tailwind, only use HTML. And then we were like, okay, well, if we can do HTML, we can probably do JSX, which is the markup language for React.
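None of this is Vercel’s actual code, but the constraint Jared describes can be sketched roughly like this: prompt the model to emit only HTML with Tailwind utility classes, and because styling is co-located on each element, any prefix of the stream that ends at a closed tag can be rendered immediately.

```python
# Hypothetical sketch, not Vercel's actual code: constrain the model to
# self-contained markup, then render each streamed prefix on the fly.
SYSTEM_PROMPT = (
    "Generate the user interface as a single HTML fragment. "
    "Use only HTML tags and Tailwind CSS utility classes; "
    "do not emit <style>, <script>, or external stylesheets."
)

def renderable_prefix(stream_so_far: str) -> str:
    """Trim a partial LLM stream back to the last complete tag.
    Because Tailwind co-locates styling on each element, this prefix
    can be rendered immediately, with no separate CSS file to wait for."""
    cut = stream_so_far.rfind(">")
    return stream_so_far[: cut + 1] if cut != -1 else ""
```

With a linked stylesheet, nothing is displayable until the CSS arrives; with inline utility classes, every chunk of the stream is.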

So after some prototypes, we did it. We couldn’t call it Webjourney, so we launched it as V0 in 2023, and it was amazing. It just took off from there. But initially it was this idea of going from text to UI, not even apps yet, and the other reason that was important is that we didn’t even do full code generation at the time. We just did markup. And that was an important constraint because it allowed us to render the user interface on the fly. And it allowed us to have that sort of Midjourney-style “pick one” experience; it still felt like some sort of image thing. And we launched that, and it really, really took off.

Vivek: And so at what point did it become full-stack text to application? Was that a journey after you had launched it?

Jared: Yeah, so we are always limited by the models. We’re always building a model generation ahead. And basically when we launched V0, I think we were using GPT-3.5 Turbo and GPT-4 32K, and we never actually got to GPT-4 Turbo because it wasn’t good at this problem space. I always thought GPT-4 Turbo was a smaller model than GPT-4 32K, but that’s just me. That’s what I think.

Vivek: The naming convention would make you think that.

Jared: Yeah, but still at a certain point though in context… The other big problem with V0, why we couldn’t do chat initially, is context length. We only had 4,000 output tokens and 4,000 or 8,000 input. And then GPT-4 16K and 32K came out. But right now we’re at a couple hundred thousand, and some models even have million-token context windows, so you can actually fit something in there. We couldn’t do that at the beginning, so we had to invent all these weird techniques to get around it. So we actually couldn’t do chat. But fast-forward another nine months or so, and finally it was like, okay, now we can do chat. And I remember we rebased toward chat in the course of a month. We rewrote the whole app.
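As a rough illustration of the kind of workaround those small context windows forced (not Vercel’s actual technique), a chat history can be budgeted by keeping only the newest turns that fit; whitespace word count stands in for a real tokenizer here.

```python
# Rough illustration (not Vercel's actual technique) of budgeting a chat
# history into a small context window: keep the newest turns that fit.
# Whitespace word count stands in for a real tokenizer.
def fit_context(turns: list[str], budget: int) -> list[str]:
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):          # walk newest to oldest
        cost = len(turn.split())
        if used + cost > budget:
            break                          # older turns get dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))           # restore chronological order
```

Once context windows grew past a hundred thousand tokens, tricks like this became unnecessary for most chats.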

Vivek: It’s fast for a big company.

Jared: Yeah, it’s pretty fast. Yeah, yeah. And it took us, I remember the stats, it was nuts. It took us 10 months to get to our first million in ARR for V0, and then it took us 14 days to do the next and 14 days to do the next-

Vivek: Crazy.

Jared: … and 14 days to do that. It was nuts. And that was all after we launched chat.

Vivek: So you had pretty exponential growth after chat, obviously amazing growth even before then. That’s spawned a number of competitors, both ones that came out maybe a little bit before you guys, and then ones that have come out after. It’s a very crowded space at this point, but obviously there’s a reason for that. It’s a big market. And so how do you all think about V0 in this kind of competitive landscape in terms of the kind of folks that you’re going after, the market you’re going after, and then how it fits in the broader company?

Jared: So, backing up, we’ve gone from text to UI to now we’re at this text to app modality, and I think we’re going to get to text to business in the future.

Vivek: Watch out.

Jared: Wow. VC moment there.

Vivek: Easy. Let’s go.

Jared: So we’re text-to-app right now. And I think about it as there being two sides of this. You could steelman it: Instagram Stories exists, and Snapchat still exists. How do I say that’s not a problem? It’s not a problem; Snapchat is an app, it exists, it’s a company, and people use it. That’s awesome. Instagram Stories also exists, and it’s also awesome. And we say it’s easy, but it’s obviously not easy to build a high-quality photo-sharing application at internet scale. It is certainly not easy to build a high-performance text-to-app service, but at the same time, I actually think it is being commoditized. So what else are you going to bring to the table, if that makes sense? And for Vercel, it’s vertical integration. V0 will likely succeed because V0 is the vertical integration of coding framework, AI, editor, and infrastructure. Super nuts. We own Next.js, we own V0, we just published this awesome post about how our models work. We design the website and the editor, and we own the infrastructure of Vercel.

And so that is iPhone-like, if that makes sense? Vertical AI is a big deal, and in the developer space, vertical AI means vertical around the framework, and for us that’s Next.js. And so it’s the perfect thing to match our audience, if that makes sense?

And so as it relates to what our competition is doing, I think there’ll be some that go more toward consumers, some that go more toward developers, and some that go in between. And I think they’ll explore all of them. Again, it’s a modality, so it’s going to show up in all different types of variations. But I’m not worried about too much competition, because the market size is absolutely ginormous. I was looking at Perplexity the other day: how many total software engineers are there in the world? And the answer was like 28 million. If we define a software engineer of the future as somebody who generates code, that number is going to the limit of Excel users, which is like 700 million people on Earth. So there’s a huge-

Vivek: The audience just gets so much wider.

Jared: Totally.

Vivek: And it’s like if you went from Vercel being thought of as a developer company, it’s still going to be a developer company. It’s just the audience for V0 is not just developers anymore. It’s everyone. It’s everybody.

Jared: It’s everybody. And this is what we have to figure out. And I do think there are some decisions that have to get made. I’ve always thought that you should always sell to your existing audience. It’s far easier. And we ourselves are the heaviest users of V0. It’s very funny, the pitch decks of our competitors for the rounds they were raising are like, “We’re going to be the next Vercel, and we’re going to build Vercel on the back of this thing.” Well, we already have that, right? So it’s up to us to vertically integrate it.

And I think that’s where, ultimately, what’s super powerful about V0 is that you’re generating Next.js code. We develop Next.js. You’re in an editor that’s also a Next.js app, which we’re using to make everything about it better. So you could do that as well. We’ve open-sourced the AI SDK, which is the framework that we use to work with AI in V0. We’re about to release some products around our sandbox and virtual machines that you can use to build your own V0. And then when you click the deploy button in V0, you’re not getting some toy little app infrastructure; it’s the same infrastructure that supported, like, three Super Bowl ads this year.

Vivek: Nice. It’s seen scale.

Jared: That can burst scale to infinity. And that’s pretty cool. So I like our position in that sense in the developer experience world, but I also just think the market is so gigantic that everybody’s going to win.

Vivek: It’s an exciting time to be in this market for sure. Jared, let’s end with something that I’ve heard you’re very good at, which is spicy takes. And so two ways we can go with it. One, where are we in this AI hype cycle? I’m curious to get your take there, but also just generally what are things that you believe about AI or where we are in AI that you think others don’t believe or should believe?

Jared: I think that you’re going to have AI managers far sooner than people think. And let me explain why. So have you played with Codex?

Vivek: A little bit. Yeah.

Jared: So we have a system coming like this too. You enter your task, and it goes and spins up an agent, a coding task in this case. It will download your repository, very similar to V0. You can launch multiple Codex tasks at one time. You can do the same thing with Devin, and it will try to do the work. So play that out one model generation and say, okay, first assume it’s going to get good at that, which is a fair assumption because they’re going to do reinforcement learning. The model’s going to get better, smarter. And so let’s just presume that that is a way you’re going to orchestrate some work.
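The fan-out pattern Jared describes might be sketched like this, with a stubbed `run_agent` standing in for a Codex- or Devin-style run; the names and the priority scheme are purely illustrative.

```python
# Hedged sketch of the fan-out pattern described here: a "manager"
# orders queued coding tasks, then dispatches them to agents in
# parallel. run_agent is a stub standing in for a Codex/Devin-style run.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # In a real system: clone the repo, run the agent, return a PR/diff.
    return f"done: {task}"

def manage(tasks: list[tuple[int, str]]) -> list[str]:
    """Schedule by priority (lower number first), execute in parallel."""
    ordered = [name for _, name in sorted(tasks)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_agent, ordered))
```

The interesting open question, which the next paragraph gets at, is who decides the priorities.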

So when you’ve got multiple of these tasks that get launched in parallel, which one should you do first? And so you’re going to have an AI engineering manager that works for you, if that makes sense? That’s going to be managing these little agents that are going to go off and do your tasks. If it gets sufficiently good at scheduling, I’ll call it that, right? And it gets sufficiently trained, it should actually be an amazing engineering manager in theory. In a couple of model generations, it becomes incredible at planning. Have you ever driven in a Waymo?

Vivek: I’ve been in many, yeah.

Jared: You ever want to go back in Uber?

Vivek: No. Absolutely not.

Jared: Do you want to work for a human manager? They don’t respect you.

Vivek: This is the question we’ll end on — I think this is a very topical debate right now about what’s the role of the software engineer and the software developer in the future? And some folks are saying, I would not want my kids to get into computer science. And other people are saying, well actually we’re always going to have a developer. If you look out even three to five years, what’s your sense of what’s going to happen here in this space?

Jared: Man, I think that I would still learn to code. There’ll always be a market for people who get things done. If you can get things done, the more things you can get done, the better off you’re going to be. There’s always going to be a market for high-agency people. And so that’s something you should hire for. That’s something you should foster and try to learn. And as someone who just sat there and Googled to learn to code, now it’s even easier. You should learn as much as you can about how to build amazing stuff and get things done. And you’re always going to be employed. And your job may look very different from that of the software engineer of today, or of a couple of years ago, when we would go on Stack Overflow or Google something.

But believe me, when your AI manager tells you to go get some high-level tasks done and you’re operating a swarm of agents, you’re going to need to know all kinds of stuff. And I think it’ll be very similar to… I’ll use this analogy: there are people who play guitar and there are people who play Guitar Hero… Do you even play Guitar Hero?

Vivek: Yes

Jared: We’re all moving a little bit more toward Guitar Hero than guitar. That being said, that’s one trend, and the other trend is like writing. A lot of people write, and a lot of people will generate software. There will also be professional authors, if that makes sense? And professional software engineers. I don’t think that’s going away, but a lot more people will generate code, and if you want to call them developers, then yes, sure, fine.

They’ll have so much leverage. It’s incredible. That is the thing I keep coming back to: yes, developers are not going to end up being replaced; it’s just that the great ones now have 10X, 100X leverage. Especially as things get more agentic. And it seems like the other trend you’ll see is that seed rounds will get confusing. They’ll be much smaller and also much bigger. And the ambition of companies and the polish they’ll have will grow; consumer expectations will skyrocket. So pricing and seats may not be the greatest thing for SaaS, but you’ll also see extreme amounts of competition. Because again, a small team of people has never had more leverage in the history of the universe. And today they have as little leverage as they’ll ever have for the rest of their lives.

Vivek: Yes.

Jared: So that’s not going away.

Vivek: Totally. And every market, we’re seeing this for seed stage companies where every market has 15 competitors right off the bat.

Jared: Oh my gosh. It’s like V0 has-

Vivek: It’s like six years ago.

Jared: Every YC batch has like three different competitors.

Vivek: Well, hey, you are in a great place when all the companies out of YC and just every seed company is looking at Vercel and saying they’re doing something right. And it’s really incredible what you guys have built and so thank you very much for sharing your really unique journey into Vercel and what you’re doing today. Congrats on everything. Thanks so much for joining us.

Jared: Thank you.

 

AI Won’t Replace Developers: Qodo’s Take on the Future of AI-Powered Engineering

 

In this episode of Founded & Funded, Madrona Investor Rolanda Fu is joined by Dedy Kredo, co-founder and chief product officer of Qodo (formerly CodiumAI), a 2024 IA40 winner and one of the most exciting AI companies shaping the future of software development. Dedy and his co-founder, Itamar, are entrepreneurs who have spent their careers building for developers, and with Qodo, they’re tackling one of the most frustrating problems in software engineering — testing and verifying code.

As AI generates more code, the challenge shifts to ensuring quality, maintaining standards, and managing complexity across the entire software development lifecycle. In this conversation, Dedy and Rolanda talk about how Qodo’s agentic architecture and deep codebase understanding are helping enterprises leverage AI speed while ensuring code integrity and governance.

They get into what it takes to build enterprise-ready AI platforms, the strategy behind scaling from a developer-first approach to major enterprise partnerships, and how AI agents might reshape software engineering teams altogether.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Rolanda: Well, before we dive into Qodo, Dedy, could you just share a little bit more about your journey? You’ve navigated diverse roles across the tech landscape. You and your co-founder are both serial entrepreneurs. What pain points did you guys experience in the developer workflow that made you say, “We have to solve this, and AI is finally ready”?

Dedy: So I have a diverse background. I’ve been in engineering roles, data science roles, and product roles across both small startups and larger organizations. And I think throughout all of this, software quality has always been near and dear to my heart.

It’s always a challenge to strike this balance between wanting to move fast and develop fast, and providing high-quality software. As a startup, you’ve got to be moving really fast. And I think with AI, it’s becoming even more important now that the market is changing really, really fast. It doesn’t matter which field you’re in. If you’re impacted by AI, you have to be moving really fast. But you have to strike the balance with also providing high-quality software, and that’s always been a challenge.

So Itamar and I have known each other for many, many years. And basically, we realized that as AI starts to generate more and more of our code, the challenge kind of shifts to how do I make sure that my code is well tested, well reviewed, secure, aligned with the company best practices, especially in very, very large enterprise organizations?

That was a challenge that we felt was going to become the next frontier. And we realized that pretty early on. If you look at our seed investment deck from 2022, we’ve basically had the same pitch for a while now, and I think now it’s actually all coming together, where we’re really well-positioned for this. So yeah, it’s exciting times.

Rolanda: So you and Itamar are both serial entrepreneurs and have known each other for a long time. How do you think about whether you were the right founders for building this? How do you think about this new category of intelligent coding platforms, right? How did you think about this idea, and how did you get the confidence to build in this space?

Dedy: Yeah. Well, on one hand, for both of us, software quality has always been very near and dear to our hearts. I have a lot of experience working with US companies, with large enterprises. I spent a lot of time working with financial institutions, for example. And Itamar was a CTO several times before, and he was kind of ready to step up for the CEO role, and we share a lot of the same values.

We grew up in the same small town halfway between Tel Aviv and Jerusalem. And then, yeah, we just knew that this is very interesting times and very exciting times and that basically software engineering is being reinvented and is being transformed in a massive, massive way. And we believe that the right way to enter or penetrate into this market is through enabling organizations to embrace AI for software engineering in a responsible way. And we’ve had a similar pitch since day one. Yeah.

Rolanda: Yeah, that’s awesome. And maybe with that, let’s dive into the Qodo platform a little bit more. What’s been your guys’ North Star the whole time since those early seed days, and how do you think about what you have versus the broader space of the plethora of AI coding tools out there these days?

Dedy: Basically, there’s a lot of excitement, and I would say some hype, around AI code generation, a lot of talk around vibe coding and how AI is going to write everything. And we believe that for enterprises to really embrace gen AI, and really have gen AI impact their organization in a way that helps them increase productivity significantly, they’ll have to find a way to balance this with quality: making sure that code is aligned with best practices, that code is well tested and well reviewed.

And in order to do that, the foundation for everything has to be very deep understanding of the enterprise code base. So this is something we’ve been investing in a lot. The foundation of our product is something called Qodo Aware. It’s a layer of understanding code bases, indexing code bases, and understanding how different components relate to each other in very large code bases. So that’s one major area of focus. And then on top of that, we have two major product areas. One is around code review and code verification.

So this is our Qodo Merge product that integrates with different Git providers and basically helps take code review to the next level. Because if you think about code review, it hasn’t changed in decades. And basically, developers open a pull request and they start reviewing diff by diff and trying to figure out if there are issues or bugs or anything like that. And with Qodo Merge, we make pull requests a lot less painful. We help developers understand the actual changes, and we create a detailed logical walkthrough.

And then we also try to catch bugs, and we automatically generate best practices for each team and each repo and for the organization as a whole. So that’s on the code review side. And basically, we believe that as more and more code gets generated by AI, the bottleneck shifts to how do I review this at scale? How do I maybe auto-approve code that is maybe smaller changes or that doesn’t have any major issues, but how do I help developers review code at scale and catch issues fast?
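One hypothetical way to picture such an auto-approve gate, with thresholds and names that are purely illustrative rather than Qodo’s actual logic:

```python
# Hypothetical review gate in the spirit of "review at scale": tiny,
# issue-free changes skip straight to approval; everything else queues
# for a human. Thresholds and names are illustrative, not Qodo's logic.
def route_pr(lines_changed: int, issues_found: int, max_auto: int = 20) -> str:
    if issues_found == 0 and lines_changed <= max_auto:
        return "auto-approve"
    return "human-review"
```

The point of the sketch is the routing decision itself: human attention is reserved for the diffs where it actually matters.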

And then we have the code generation side, where we basically have IDE plugins for various IDEs. Our approach is not to make developers switch their IDE to something different. So we integrate with existing heterogeneous environments, both JetBrains IDEs, for example, and VS Code. And then we’re just about to launch our CLI. So essentially, we have the same coding agent in the background driving both the IDE plugins and the CLI, and also agents that run in the background.

And all of that, you can think about it like the coding agent has, in the back of its head, the company knowledge and best practices, and that’s what unifies it. So basically, we look at the SDLC in a holistic way. And then one last thing I would add is that we have a strong belief that developers and enterprise organizations will need to customize AI agents for their specific needs.

So we don’t believe in one agent that would rule them all. We believe more in an agent-swarm type of approach, where different teams will configure agents a bit differently, give them different tools, maybe different permissions, and will want to control the input, the output, and the triggering of the agents. And we built a system to enable them to do that.
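A per-team agent configuration in that spirit might look like the following sketch; the fields, tool names, and permissions are hypothetical, not Qodo’s actual schema.

```python
# Illustrative per-team agent configuration in the spirit of the
# "agent swarm" approach described above. Field names, tools, and
# permissions are hypothetical, not Qodo's actual schema.
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    name: str
    tools: list[str]                      # what the agent may call
    permissions: list[str]                # what the agent may do
    triggers: list[str] = field(default_factory=list)  # when it runs

    def can(self, action: str) -> bool:
        return action in self.permissions

# A review agent that may comment on pull requests but never merge.
review_agent = AgentConfig(
    name="pr-reviewer",
    tools=["repo_read", "lint"],
    permissions=["comment"],
    triggers=["pull_request"],
)
```

Scoping permissions per agent is what lets different teams run the same underlying agent with different guardrails.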

Rolanda: I think that’s one thing I really love about your guys’ approach: that end-to-end development lifecycle coverage, whereas a lot of tools out there tend to pick one area to focus on. So I think that’s really clever on your guys’ end.

I am a little bit more curious, too, to dive into the platform. I mean, it seems like you guys have built a lot. Can you talk a little bit more about that decision to do a lot of that development versus leveraging existing models out there? How do you make those trade-offs?

Dedy: Generally, we do leverage the large models, and companies that use our product have the ability to choose among Anthropic, OpenAI, and Google models. We also have very flexible deployment options. You can use our SaaS. We also support single-tenant environments and self-hosted. For self-hosted environments, we provide our own model that is essentially built on top of an open-source model.

So we don’t train a foundation model from scratch, but we did invest quite a bit in training embedding models for code, because we believe the foundation for everything, as I mentioned earlier, is deep codebase understanding, and we saw a gap in the market in that area. So we did train a state-of-the-art embedding model for code that comes as a built-in part of our platform.
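To illustrate the retrieval idea behind a code embedding model, here is a toy sketch (not Qodo’s model): a bag-of-tokens vector with cosine similarity stands in for learned embeddings, just to show the shape of the search.

```python
# Toy sketch of embedding-based code retrieval (NOT Qodo's actual model):
# a real system uses a trained code embedding model; here a bag-of-tokens
# vector with cosine similarity stands in to show the retrieval shape.
from collections import Counter
import math
import re

def embed(code: str) -> Counter:
    # Split identifiers like parse_json into "parse", "json".
    return Counter(re.findall(r"[a-z0-9]+", code.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[tok] * b[tok] for tok in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query: str, snippets: list[str]) -> str:
    """Return the snippet most similar to the query."""
    q = embed(query)
    return max(snippets, key=lambda s: cosine(q, embed(s)))
```

A trained code embedding model replaces `embed` with a neural encoder, but the index-then-retrieve structure is the same.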

Rolanda: I love how dynamic you have set it up to be, and I think that’s really critical for scaling any kind of solution these days. And maybe just to pivot a little bit, I think a topic that’s really on everyone’s mind these days is this term around vibe coding. So I’d be curious to get your guys’ thoughts too. How does your platform enable the vibe coders of this generation to better leverage your platform, and how does that impact what you have created?

Dedy: Vibe coding, I think… when Andrej Karpathy coined that term, he was really referring to pet coding projects, where you don’t care about how the actual code is being built. You’re more focused on the functionality and just seeing that the functionality actually works. But that’s not sustainable for enterprise production code. You’re basically generating a lot of tech debt, and you may be overlooking issues. You’re not focused on testing. So, in order to make this process of AI generating the code work for these complex code bases, you’ve got to put the right processes and tools in place that allow you to check the code and to set the right frameworks and best practices.

So that, first of all, right as the code gets generated, it already takes into account your rules and best practices for a given code base, for a given team. We do that with Qodo Gen, our generation side.

But then, once you need to review the code, that’s the checkpoint. That’s the point where you’ve got to really make sure that it’s aligned with the best practices, that it’s well tested and well reviewed. And we believe that having these two sides work hand in hand, we call it the blue team and the red team, is what actually makes it work in an enterprise environment.

Rolanda: I think that’s a really good description of both sides. With the red team and blue team, you have to play a little bit of both. And it’s something we’ve talked a lot about internally, too: people talk about code generation a lot, but not enough about the other sides, testing and review. And those seem even more critical in this current environment, especially with something like vibe coding.

So I’m curious, you mentioned enterprises. Can you talk a little bit more about how you balance getting developer love versus selling to enterprises? Is it one or the other, or is it a little bit of both? Have there been any pitfalls when trying to focus on one or the other, or has it always been a smooth ride?

Dedy: It’s definitely a challenge. Generally, with a startup, you try to focus, and for us, we call this strategy middle-out. Our focus really resonates with team leads, with architects, with platform teams and developer experience teams, which, by the way, you’re now seeing at large organizations gain a lot of power, or ability to influence the tools. And we are helping these teams really grow.

So on one hand, our pitch really resonates with higher-level managers, architects, and team leads, but on the other hand, as a dev tool company, you have to have this bottom-up approach, and developers need to love using your product. We’re always trying to balance that. So we go both top-down and bottom-up. And it means that we do have a self-serve approach, and we have a freemium tier. We do have the ability to swipe a credit card and go to our teams offering.

But typically, when you get to the enterprise side and you want to index a very large code base and you want to do it in a single-tenant secure environment, that’s where you do a more controlled proof of value and you engage in the conversation with the enterprise stakeholders. So a lot of our, I would say, larger customers, they tried us out self-serve, they just came and experimented a little bit with our product, but then they contacted us to do kind of a larger trial or a larger pilot. So this is how things have worked for us generally.

Rolanda: That makes a ton of sense. I think the balance of both is super critical, both the individual and the enterprise level. I’m curious, are there any stories of enterprises that were hard to win that you’re proudest of converting, or any horror stories of trying to sell in the space, or advice for people trying to sell in this space?

Dedy: I can give an example of a Fortune 10 retailer that is one of our largest customers. Their challenge was, “How do I make sure that the code that gets generated by AI is well tested and well reviewed?” Their focus was a lot on the code review bottleneck. They approached us and started small with a small pilot, and what they saw is that the product just started expanding in the organization. People were hearing about it and wanting to turn it on in their repos.

And the challenge that we had was really around supporting the growth that they had inside of their organization. All of a sudden, they had thousands of developers knocking on the door, and this is an air-gapped environment. So you have to take into account things like load on GPUs and things like that, making sure that response times are good and that the quality of the results are good and that it’s aligned with their best practices of the different teams.

So we worked with them very closely to be able to support them. They expanded, and now they’ve standardized on Qodo for their entire pull request and code review process across the organization. So yeah, it was a journey. The pilot was a few months, then they expanded, and it took time until it reached the entire company. But with these companies, you’ve got to really support them. You have to not just give them the feeling, but really work very closely with them, listen to their pains, and be willing to go the extra mile for them.

Rolanda: That’s kind of the dream land-and-expand scenario with a customer, right? Hopefully, they’ll be your customers for a long time coming. I’m curious, I mean, given you have spent so much time with these developers and these enterprises, how do you see the future of some of these developer teams changing? Are you already seeing how Qodo is impacting how these teams are structured? I’m curious where you see the future of all this going? Are there… Are engineers going to all be replaced? I think that’s what everyone’s scared of, right?

Dedy: The way I think about it is that the roles of developers are just changing. I think that, especially for very large, complex code bases, I don’t imagine a world where a product manager can, at the click of a button, make a change that impacts the entire code base and redoes the entire onboarding experience for a new customer at one of these very large organizations, maybe a large bank or any kind of large enterprise. So I think developers are going to become orchestrators of agents. Each one of them will have the ability to launch multiple agents, customize each agent for specific use cases and specific triggers, and then review the work of these agents at scale.

And then, yeah, most of the code they’re not going to hand-write, I would say. But at least the way I see it, for the foreseeable future, for complex code bases, you’re going to need technical people, experienced developers who are able to orchestrate this work and make the dev teams a lot more productive, but also make sure that you don’t have, I hope it’s okay that I’ll say it, a CrowdStrike moment where the world grinds to a halt because something was overlooked. So the way I see our goal as a company, or what we’re trying to do, is to enable these organizations to be so much more productive, but not have these CrowdStrike moments.

Rolanda: And is this something that you see playing out, I guess, over the next five to 10 years? I mean, I think you talked a little bit about the near term, right? I think that makes a lot of sense with the developer kind of as the orchestrator. How do you see this playing out even further out? Is there even going to be an entry developer role, or how do you see Qodo really being that partner in terms of really catalyzing this change?

Dedy: First of all, it’s very hard to make very long-term predictions. But the way I see it, the role is going to continue to evolve. There may be some kind of a curve in the demand for developers, where demand maybe goes down and then comes back up, because you’re going to need these people who are very, very technical, who are able to orchestrate and manage these agents that are writing code.

And if fewer people end up now going into studying computer science and things like that, then you’re going to have a situation where you don’t have enough of these people. So I think we’ll see that there will be very interesting dynamics. Also, I think there’s going to be an explosion of software in general.

Think about all the ideas for software in people’s minds that are not becoming companies today. I think there’s still so much more potential for a lot more software to be created, and you’re going to need engineers for that. So yeah, I do believe that engineers are going to be mostly orchestrating agents in the next 2, 5, 10 years. But I think you’re still going to need the engineering team for the foreseeable future. This is how I see it.

Rolanda: I think that’s a great assurance for any engineers listening in on the podcast. And I think it’s also something that we’re excited as investors, right, and something that we believe in, as just a lot of these forces will continue to multiply, and more software just means more things for people to manage. I think it’s more about the roles shifting. So totally aligned there.

So maybe just to pivot and switch gears a little bit, I think one thing that’s impressive is just around how fast you guys are growing. So I’d love to hear a little bit more about how you think about go-to-market. How do you make sure that you’re targeting the right customers and training your reps?

Dedy: It’s funny, we’re just now doing an onboarding bootcamp because we significantly grew the team. We’re going to be 80 soon in the company.

Rolanda: Wow.

Dedy: Yeah, I think maybe a year ago we were 30 or so, something like that. So we’re actually now experiencing this growth, and how do you do that? First of all, you need to spend a lot of time as founders. We were like 10 people in the founding team, something like that, when we just started [inaudible 00:23:10] company. So you have to have the founders and the founding team really working closely with the go-to-market team, helping them, supporting them, joining them on calls, and making sure that you’re constantly enabling them.

You’re constantly over-communicating. Also, as the market shifts and there are changes in the market, you make product decisions, and you’ve got to make sure that people understand where your product is heading. Do a lot of product roadmap sessions with the go-to-market people, but also with your customers. I think it’s just spending time and making sure you do that.

Rolanda: Yeah, yeah. I mean, I think your job’s only going to get more exciting and harder now, to scale out the team and transition from founder-led sales. But yeah. No, I’m sure you’ll do a great job there. Maybe going off that, I’m curious about your guys’ founding journey so far.

What kind of advice do you have for other people building in the developer space and in AI in general? Are there any hard-earned lessons that you have come across in the first couple of years that you would like to share with some other people that are starting to embark on this journey?

Dedy: The challenge with this AI space, and with AI and software engineering now, is there’s so much noise, so much going on. You have to have an insight. You have to stick with what you believe, and you have to find the right balance in building for the future.

So, building for where you believe the models will be in X time from now, but you can’t build too far into the future, so you have to strike that balance, right? I would say the balance is probably building for a few months out, where you believe the model capabilities will be, and then just sticking with your insight. And yeah, you either win big or you fail big. I don’t think there is an in-between at the moment.

Rolanda: Yeah, that’s great advice. And I’m curious, how do you maintain your own long-term focus and integrate customer feedback, right? I think a lot of founders struggle between, there’s a lot of noise in the market that you hear from competitors, from probably your investors, from different kinds of customers. How do you maintain focus between you and Itamar to make sure that you continue to build for that right three-month direction?

Dedy: On one hand, you have to stay up to date. You can’t ignore the competition. You have to strike a balance: you do need to react to things that are happening. So if all of a sudden there’s a new model that comes out that allows you to do things that you couldn’t do before, you do need to respond to it, but you can’t just be reactive.

So you need to have a roadmap, you need to stick with that roadmap, but then you also need to build the organization in a way that people embrace the change. I think the people who are best suited for fast-growing startups in the AI space are those who, on one hand, have this kind of grit that can stick with things. And there are hardships. There are moments where, all of a sudden, we worked on something and some competitor released it a week before we were about to release it, and that situation kind of sucks.

So you need to have people who can deal with this kind of situation. But on the other hand, you have to be very, very adaptable. So you do need to see what’s going on in the market and be determined. I mentioned earlier that one core company value is no fear of good conflict.

We always debate things, even between us as founders, with the founding team, and with the broader team, but we also move fast with confidence. Once you decide on something, you have to move on it fast, and you have to do it with confidence. You have to make decisions. The biggest issues happen, I feel, when you get pulled in different directions and you end up not making a decision.

Rolanda: That makes a ton of sense. I mean, this has been so incredibly insightful, Dedy. I just have a few rapid-fire questions for you to wrap this all up. I think we’ve agreed on a lot of things so far. So maybe just to spice it up a little bit, the first question is, what do you believe about the future of AI and software development that many people might not fully appreciate yet, or that people might even disagree with?

Dedy: I think one area that comes to mind is if you look at, for example, the big labs now launching their products in this space. You have OpenAI Codex, and you have Claude Code from Anthropic. And I think the way they think about it is: let the model do most of the work, keep the system layer very, very lean, and since the model capability is getting better and better and better, the model will solve everything eventually. Context windows are expanding, so you’ll just shove everything into the context window, and the model will do it.

We have a different point of view on that, a significantly different point of view. We believe that for enterprise complex code bases, you’re not going to just shove the entire code base into the model context window for every inference. You actually need a system that preprocesses the code base, maps the relationships, and derives the insights. And you also want to give the developers the ability to control the agent, define the tools for different use cases, and create different workflows that are customized or configured for their specific use cases.

So we believe in a more controlled agentic environment where you have, as I mentioned earlier, a bit of a swarm of agents, where each agent is more tailored, has different permissions, maybe different tools, and this entire thing is controlled by developers.
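The controlled agentic environment Dedy describes is, at its core, explicit configuration plus review: each agent in the swarm gets its own trigger, tools, and permissions, and developers define and oversee all of it. A minimal sketch of that idea in Python (all names and fields here are hypothetical illustrations, not Qodo’s actual API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "swarm" of scoped agents: each one is tailored
# to a use case, with its own trigger, tools, and permissions, and the
# whole configuration is owned and reviewed by developers.

@dataclass
class AgentConfig:
    name: str
    trigger: str                                   # e.g. "on_pull_request"
    tools: list[str] = field(default_factory=list)
    permissions: list[str] = field(default_factory=list)

swarm = [
    AgentConfig("test-writer", "on_pull_request",
                tools=["read_repo", "run_tests"],
                permissions=["comment"]),
    AgentConfig("security-reviewer", "on_pull_request",
                tools=["read_repo", "static_analysis"],
                permissions=["comment", "block_merge"]),
    AgentConfig("doc-updater", "on_merge",
                tools=["read_repo", "write_docs"],
                permissions=["open_pr"]),
]

def agents_for(trigger: str) -> list[AgentConfig]:
    """Only agents explicitly registered for this trigger ever run."""
    return [a for a in swarm if a.trigger == trigger]

print([a.name for a in agents_for("on_pull_request")])
# → ['test-writer', 'security-reviewer']
```

The point of the sketch is the control surface: nothing an agent can do is implicit. Every tool and permission is granted by a developer, which is what makes the work of many agents reviewable at scale.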

And this is why we also believe that developers are not going away: they’re going to manage these agents, configure them, build them, track them, and monitor them. So yeah, I think the majority of the market thinks about this a bit differently, but that is how we think about it.

Rolanda: Yeah, that’s a great insight. And thinking about outside of just Qodo, even maybe development lifecycle for a second, what’s a company or AI trend that you’re really excited about outside of all of this?

Dedy: Outside of coding, I’m very excited about the impact that AI can have in biology, for example, and potentially finding cures for diseases. I think the next couple of decades will be very, very exciting in this space. The big labs are going to scale reinforcement learning; in verifiable domains like coding, it is not even a question anymore. It’s obvious. And I think we’ll see in the rest of 2025 and 2026 very significant, rapid improvement in model capabilities because of the scaling of reinforcement learning that is going to happen. And if they’re able to solve this for other fields like biology and figure out how to close the reinforcement learning loop, then we’re going to see rapid advancements in those fields.

And I’m very excited about the possibility. There are still unknowns, a lot of unknowns there, but I’m hopeful. Obviously, it’s not my area of expertise, but I’m hopeful that they’re going to be able to figure this out and make significant advancements there.

Rolanda: I think that’s really powerful. Obviously, it’s great to impact people’s work, and that’s a lot of people’s lives, but obviously, there’s the actual life part of it as well. So I think that’s a great insight there. We’ve talked about advice that you would give others building in this space. What’s one piece of advice that you would give your own past self? If you were to rewind and think about when you were starting this company, what’s a piece of advice for what you would do differently?

Dedy: I think it’s a great question. I think to always remember that this is a marathon, not a sprint, and that’s in terms of the balance you need to strike as a founder. For example, for me, I used to be very big into rock climbing, and for the first two years of the company, I basically gave up on that because I couldn’t find the time … I couldn’t strike that balance. I started realizing that this is going to be a 10-20-year journey (who knows) doing this, so you’ve got to strike a balance. So, recently, I started getting back into climbing. And for me, it really affects me in a very, very positive way — and even makes me feel more productive at work. So it’s like you’ve got to strike that balance and realize that you can’t give up on things that are really important for you just because you’re a founder.

Rolanda: Yeah, I think that’s a really powerful message for people. And there’s, at least over here in the US, a lot of developers like to rock climb on the side too. So you never know. You might find some of your future customers there. So it can work out in both ways. And yeah, I mean, maybe just a fun question to wrap up with. You changed your name from Codium to Qodo. I’d love to learn what Qodo means.

Dedy: So Qodo is Quality of Development, and it’s like code with a Q. The trigger to change from Codium: you’re probably aware there were two Codium companies. We both started at a similar time, and there was just a lot of confusion. Obviously, there’s an overlap, but we’re more focused on quality, verification, and testing in enterprise organizations. So there was always differentiation, but there was still confusion. That triggered the change. And I think it worked out quite well.

Rolanda: I love that Q for quality. I’ll remember that. Well, Dedy, thank you so much for the insights today, and thanks for joining us on the Founded & Funded podcast. I know I’ve learned a lot through our conversation, and I think it’s such a great story of your guys’ vision and journey, so I really appreciate you sharing that.

Dedy: Thanks, Rolanda. This was a lot of fun.

Breaking into Enterprise: How Anagram Landed Disney with Cold Outbound

 

This week, Madrona Partner Vivek Ramaswami hosts Anagram Founder & CEO Harley Sugarman. Harley’s founder journey is a fascinating one — from magician to music video producer to engineering at Bloomberg and then to investor at Bloomberg Beta before eventually becoming an entrepreneur. He launched a company in 2023 with a bold mission: to fundamentally rethink how we protect the human side of cybersecurity.

Originally founded to help security teams upskill through immersive training, the company has since evolved into a next-generation human security platform that’s tackling one of the biggest unsolved challenges in enterprise security: employee behavior.

In this episode, Harley and Vivek unpack how one pivot, a flood of cold outreach, and relentless focus on behavior change transformed a niche tool into an enterprise platform serving companies like Disney and Pfizer. From landing enterprise logos off of nothing but wireframes to outmaneuvering the 800-lb gorilla in a legacy industry — Harley’s tactics are a masterclass for any founder trying to stand out in a crowded market.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Vivek: What motivated you to launch this company? How did your background help you in the early shaping of the company itself?

Harley: Yeah, so I have a bit of a funny background. I actually went into school as an English major, was a semi-professional magician prior to doing that. So I really did not think I was going to be in the tech world for a long time. But after going to school in Palo Alto, sort of being very much immersed in that world, the idea of starting a company took hold, but I didn’t know what I wanted to go and do. So right out of college, I moved to New York and was working at Bloomberg, mostly there doing engineering for sort of infrastructure tools that Bloomberg was building, and sort of realized that I knew a lot about the product side of startups, but didn’t know much about all the other stuff, about the fundraising, about the hiring, all those other facets that go into creating a company.

So I found Bloomberg Beta, which was this early-stage fund that exists within Bloomberg and is very focused on the future of work, which is something I was also very interested in. So I joined for two years, essentially with the ultimatum that after two years, I want to be fired and forced to go and start a company. And the team I worked with was very supportive. I worked with a woman, Karen, out of New York who sort of held me to that promise. And for me, security was always a very natural fit. There’s a lot of engineering within security that I find really fascinating. That was my specialist focus at university.

And really for me, there was always this interesting intersection of the technical side of security and the human side of security. And that human side was this sort of much fuzzier problem than the technical side. And it was a problem that a lot of people had tried to solve, but hadn’t really been able to solve. And so, that was what gave me the trigger to say, “You know what? Let’s go into this space. Let’s look and see where there’s opportunity.” And that led to the initial vision for the product, which was focused on security teams and eventually led us to what we’re now building at Anagram.

Vivek: Yeah, and I think obviously you had a lot of interesting developments leading you to even starting the company in security in the first place. But then, the original vision of the company was very different, or certainly different from what you’re doing today, which was around upskilling cybersecurity employees and teams with this kind of capture the flag style approach. But then, you pivoted the company in 2024 and saw some rapid success from there. What sort of led you to this pivot? Walk us through what your thought process around that was.

Challenges and Market Realities

Harley: So the original product that we built, which I stand by as being an awesome product, was this way of evaluating security talent and training security talent through this idea of puzzles, right? In security, there’s this culture of capture the flag, which is we’ll give you a piece of broken software or vulnerable software, and your job is to figure out how to exploit that vulnerability. And in doing that, you very quickly understand, “Oh, okay. This is how I would defend against that in the future.” And it’s a really cool, engaging way to teach people to evaluate what somebody knows. And so, we started off building software very much focused on solving that problem for security teams, which I can talk about at length.

But I think one of the issues with it is that security is a very gate-kept industry. There are a lot of certifications, and a feeling that you need to have gone to a certain kind of school or have a certain degree to get into this field, and I didn’t buy that at all. So that was the first product that we launched. We got a little bit of traction with it. And candidly, we just realized quite quickly that the market wasn’t there for it. We launched in the end-of-2022 timeframe into a market that was very much contracting. So companies that had big security teams, financial institutions, tech companies, et cetera, were on hiring freezes. They were doing layoffs. They were downsizing. So we quite quickly learned that there was a cap to how big this business could be.

And then, the other challenge with it was the needs and the requirements for security training at these different orgs looked fairly different, given the risk profiles, given the compliance frameworks you needed to worry about, given the kind of software that you are developing. So we decided to make a pivot, but we knew that we had built something special because the feedback from users was really, really positive. They loved the puzzles. They loved this idea of critical thinking and keeping things short, but engaging. And so, we basically started talking to the customers that we had, and we said, “Okay. Where are you feeling this pain, where this kind of solution could be interesting?”

And the thing that came up more and more frequently was this idea of training users who weren’t necessarily on the security team, training the general population. And it’s a very known issue. Human risk tends to be the biggest hole for most enterprise companies, meaning someone clicking on a phishing email, somebody sharing data with their personal email or with someone they shouldn’t. Now, increasingly, there’s a lot of AI risk around what information gets put into models and what that company does with that information. And there hasn’t really been a good solution to that problem. There are a lot of companies out in that space. It’s a very popular space. But the approach most of these companies have taken is very cut and dry, right?

Pretty much everyone watching this or listening to this is going to have had to have gone through some kind of security awareness training in the past. That’s usually 45 minutes of videos talking about what is a phishing email, followed by a kind of reading comprehension quiz that looks like it came out of the SAT, and that is not a good way to train people. And what we learned was that there was real appetite to take this more engaging way of educating and teaching, and apply it to this space, which is a lot broader space, because every company above a certain size needs to do this employee training. And so, that was the seed that led to Anagram.

To be honest, I was quite hesitant to do this initially. I thought we were going into a space that was very commoditized. I felt it was a little bit of a race to the bottom. So I sort of felt like, “Okay, if we do this, we have to do it right and we have to do it differently.” And so, that was the initial sort of hump that I had to get over was my own kind of internal bias like, “Does this actually make sense? Is this a good idea? Is this not a good idea?” But we got really, really good feedback, and so we then started doubling down. And it kind of didn’t even really feel like a pivot at all.

Executing a Startup Pivot

Vivek: Take us through that actual pivot. You talked about externally why you decided to do it. You saw one direction, and you said, “Okay, you know what? Actually, if I take the company and the product in a different direction, I’m seeing a lot of immediate traction there.”

But internally, what actually happened? If you just think about the tactics of this, did you wake up and you tell the team, “Folks, we’re going to change the direction of the company, and this is the product we’re going to go into.” Did you have to change the team? What sort of happened? At least on the internal side after you made that pivot, because that’s something so many founders have to go through.

Harley: So the way that we thought about it or the way that I thought about it was every step kind of felt like a natural progression of the step before. So I actually don’t think I remember waking up one day and saying, “Hey, you know what? We’re now a security awareness company.” That didn’t happen. It actually felt very organic. I make decisions well when I see data. That’s kind of the framing through which I view the world. And when we started selling this new product, we started sort of ideating on, “Hey, okay, security training for security teams. That’s probably going to hit a wall at some point. Where can we expand?”

We started just running a ton of little experiments around messaging and outbound, outside of our existing customers, and we said, “Okay. My hypothesis is this is probably not going to work.” Right? There’s a million companies out there. CISOs kind of think this is an unsolvable problem, because with human risk, you can’t fix the human, which I think is a really terrible framing. And so I sort of said, “All right. Well, we’ve got these customers who are willing to give us some money. Let’s focus some effort on building that product as cheaply and as quickly as we can.”

And then in the meantime, let’s just start running some tests. Let’s start reaching out via LinkedIn, via email to CISOs, and see if we can get any interest like, “Hey, we’re going to try this a little differently.” And we got a lot of bites that way. Of our first, I think, 20 customers, 80% of them came through cold outbound. It wasn’t my network. It wasn’t people doing us favors, right? We had a couple of existing customers who converted, but the vast, vast majority came from us just reaching out. And I think that was the thing above everything else that made me say, “Actually, this is cool. There’s a there there. There’s an opportunity here.”

And people are sick of the status quo, and they feel like there is a chance for us to build something here that is differentiated, and feels unique, and feels innovative. And so, we just slowly started spending more and more of our time on it. And there was a moment, I’d probably say early to the middle of last year, where we decided as a team, this is going to be our focus now. We are spending 80% of our time on this product. We closed more revenue in the first six months than we did in the past 18 months building the original product. And so at that point, it was fairly obvious like, “All right, we’re going to do this.” And you know, for the team, a few of them came and sort of said to me like, “It kind of didn’t feel like a pivot.”

Someone said to me once, I thought it was kind of interesting. There’s like two genres of pivot. There’s a market pivot or there’s a buyer pivot. And it’s hard to do both, but you can do one. We kind of made a buyer pivot, right? We are still selling to security. The process is not a million miles away. It’s this top-down enterprise sale. But the end user, the experience, the problem that we’re solving is just fairly different. And we were kind of lucky there was a lot of DNA from product one that could apply into product two.

But yeah, it felt I think very natural and we didn’t need to make any team replacements or anything like that.

Vivek: I think that’s where the word pivot just sounds like you’re making a hard pivot, right? Where so many things are changing, and it’s like you got rid of one team, and you got another. In this case, it’s more almost a transition, right? There’s a transition of the product, but there’s a transition on the buyer side. They decided, “Okay, you know what? This first part I’m not going to spend that much money on. But the second thing that you’re doing, actually, there’s budget for this.” And now, you can tap into that budget.
And I think the thing you said that was really interesting and that is differentiated is, so much of your early traction came from you reaching out cold on LinkedIn and on email. And these aren’t just small SMB customers. This is Disney, and Pfizer, and big blue chip customers. Maybe talk through that a little bit. What was sort of different in what you were doing that allowed it to work?

Security Awareness Training That Works

Harley: A few things that we did that worked for us, and that I think would work for other people: we always started from a place of feedback. So in the early days, we didn’t have a ton of functionality in the product. We had a lot of wireframes, we had some basic things that we could demo, and we were very upfront about that. We weren’t saying, “Hey, we are going to come in and solve all of your problems.” We came in and said, “Hey, security awareness training sucks. You probably hate what you’re doing. We’ve taken this approach that is a little bit different. We’d love to show it to you. We’d love to hear if the approach resonates with you. Do you have 15 minutes?” And I will die on the hill that a cold email is the highest reward-to-risk ratio proposition that you will ever encounter as a founder.

It costs you literally as close to nothing as it can cost you, right? Some time and some thoughtfulness around who you reach out to and how you reach out. And the worst that can happen is they ignore you. And maybe if you reach out to them a year later, they respond, but they’ll have forgotten about the first one. So there’s zero reputational risk really if you do a thoughtful one, but the potential upside is so high. So we really leaned into that. Just try it. But as I say, I think coming at it from a place of, “We want your feedback. We are not telling you that this is going to solve all of your problems.” Coming at it from a place of the open secret that in this space, there are a bunch of issues. And hey, we are thinking about how to solve this, and we’re being innovative. I also think, tactically, if you frame yourself as earlier than you are, that can help.

Even now, we send stuff from my university alumni email address, and we try things where we say, “Hey, we’re a team of… We recently founded a company,” even though the company’s been around a year. I just think having that framing of being early gives you two things, especially for C-levels at big companies. One is it puts them a little bit more at ease. It feels like, “Hey, I’m not just going to get pitch slapped by this dumb company, one of the 400 of these I get every day.” And then the second thing is a lot of these C-levels love the idea of paying it forward and helping startups. Some of them might become investors. Some of ours did become investors. Some of them might want to connect you with VCs that they’re connected with. So there is this idea of a rising tide lifts all boats that I think we were able to capitalize on as well.

Vivek: And I think you also approached this with a level of humility that’s really important, right? It’s not this idea of, “Hey, buy us, and we’re going to solve all your human risk problems,” or “We are the greatest security training tool.” It’s, “Hey, we have a different approach. Why don’t you try us out? Or I’ll at least get on the phone and talk to you about it.” And you’ve had this success and can just build on top of that. And I think that’s great. By the way, this is probably the first time I’ve heard “pitch slapped.”

Harley: I didn’t come up with that.

Vivek: I was going to say it’s very good. And I’m not sure if we’re going to be able to use it or not. Hopefully, we can. But I think one thing you noted there is really important, which is this is a space that’s been around for a long time. Right? As you say, at a certain size, you need to have some level of security awareness training, products within your organization. Now, the vast majority of them today are very much check-the-box, and it’s not a great experience. And I’m sure 90% of the people who are listening or watching this podcast have kind of gone through that. Now, I’d love to hear how you all and how Anagram thinks about standing out in this space.

We’ve talked about this. There are almost two sets of competitors. One is all these new-age, AI-forward companies, many of which are getting venture funded, and a lot of them have been out there for the last few years. And then, there’s this one giant behemoth in KnowBe4, which has been around the longest, probably the first one in this space, and is sort of the 800-pound gorilla in this market. For a product and a company that’s only been around for a couple of years, how do you think about how Anagram competes in this space? Do you think about all of these? Do you think about one side more? Walk us through the competitive set and how you compete and find success.

Harley: So I think for what we do, it is a fairly commoditized space. There are a lot of startups. There are a lot of incumbents. There is really one massive incumbent, which is KnowBe4, and they’ve built a machine. I’ve got to give them credit, because they have managed to dominate the industry. When we go into these enterprise companies, nine times out of 10, we’re competing with KnowBe4 or stuff they’ve built internally. But it’s unusual, I think, to see a space within security where so few of the big customers are using a startup as a solution. When we go in for these customers, we’re very rarely competing against the startups. We’re usually competing against KnowBe4. Sometimes, a company like Proofpoint, an email security platform, has some training built into it.

And then, there are the startups doing the AI solution. We run into them a little bit more at the mid-market, maybe 500 to 1,000 or 1,500 employees. We learned quite quickly that our bread and butter is the big enterprise. We found that we can serve 1,000 to 2,000-person companies pretty well, but those processes tend to be very competitive. You’re going against all these other AI-driven companies, and there’s a lot more shininess. You’re also selling to a persona who’s got a lot more going on, in the sense that there is typically one person. Maybe it’s a CISO. Often, it’s not even a CISO; it might be a director of IT who’s wearing 17,000 hats, and security awareness is one of those hats.

And so they just need to get something in, check the box, and get it done as quickly and as cheaply as possible. And that’s not really the company that we’re going after. What we’ve realized is that these big enterprises have the biggest risk surface because they’re dealing with 10,000, 20,000, and in our bigger customer cases, 400,000 to 500,000 employees. They have to think about this with a bit more sophistication. They have to think about, how do we target training to different parts of the org? How do we customize what people get? We deal with a lot of companies that have manufacturing facilities. And those workers are hourly. And so for them, training that is 45 minutes long versus 15 minutes long, all of a sudden…

Vivek: An hour of pay.

Harley: Right. Exactly. Yeah. If you take 10,000 manufacturing workers and multiply that by 30 minutes, that’s 300,000 minutes. I think I did the math right.
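As a quick aside, Harley’s back-of-envelope math can be sketched in a few lines. The $20/hour wage below is a hypothetical figure added purely for illustration; it is not from the episode.

```python
# Sanity check of the training-time math from the conversation.
# The $20/hour wage is a hypothetical, illustrative assumption.

def training_minutes(workers: int, minutes_per_worker: int) -> int:
    """Total minutes of training time across a workforce."""
    return workers * minutes_per_worker

total = training_minutes(10_000, 30)   # 10,000 workers x 30 extra minutes each
print(total)                           # 300000 minutes
print(total / 60)                      # 5000.0 hours
print(total / 60 * 20)                 # 100000.0 dollars at a hypothetical $20/hour
```

At hourly-workforce scale, trimming a 45-minute module down to 15 minutes is the difference of six figures in wages per rollout.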

Vivek: It’s a lot of time.

Harley: Yeah, it’s a lot of time. But it’s interesting, because what we’ve learned quite quickly is that that is our bread and butter. Those are the programs where we can go in and show ROI really, really quickly. And so, that’s where we focused. In terms of how we’ve looked at it from a product perspective, I think if you look at the incumbents, we’re all familiar with that product. As I say, it checks a box, but it doesn’t lead to behavior change.

And I think the big problem in this space, and the reason you’re seeing a lot of startups in this space, is that CISOs have realized this training doesn’t really do anything. It is a thing that we have to do because our compliance framework says we have to do it, but that’s kind of as far as it goes. And the analogy I’ve started to use is that if we taught school-age kids the way that we expect adults to learn security awareness, within two weeks, there would be protests in the streets.

Vivek: Yes.

Harley: We’re not going to let our kids just sit in front of a screen and then take a comprehension quiz. That doesn’t actually teach you anything, right? There’s been so much research on how behavior change works at scale and how education can work at scale, not from security companies, but from companies like Duolingo, from companies like TikTok.

Vivek: Right.

Harley: And you need to incorporate those kinds of technologies and techniques into awareness training. The big incumbents don’t do that. And then, even the smaller startups that are very AI-forward say, “Hey, we’re going to take the phishing simulations that you do and we’re going to let them be AI-generated,” or, “We’re going to have AI-generated videos.” Again, there’s some value there, but it’s kind of, to me, the lowest-hanging answer to the question, how do we incorporate AI into this training?
And I just feel like if you sat someone down, gave them three minutes, and said, “Hey, how do I incorporate AI into awareness training?” that’s what they would come up with. So what we’re trying to do is go one level deeper and look at the workflows that our users are engaging with day-to-day, and figure out how we insert nudges, behavior change, and training into those workflows. And I think that’s something that no one’s really doing right now.

Vivek: At the end of the day, for most of these customers, they would like to drive behavior change. Right? It’s very hard to do.

Harley: Yeah.

Vivek: Because we’re used to the way we do things at work, right? And so, being able to show, especially at these big enterprises, that you can drive behavior change, or what leads to behavior change that makes your environment more secure, is so important. And so, I would say that one of the things that attracted us to Anagram was taking this enterprise-first approach, right?

A lot of companies will go say, “Hey, you know, let me start with a small customer and kind of work my way up.” Versus just going, “No, Disney has a complex environment, and that complexity is what’s going to make our product shine.” What advice do you have for other founders who are trying to break into the enterprise? Because enterprise is not easy, right? And these large companies are not easy. What sorts of things have you found successful for Anagram that you think might be relatable to other founders?

Enterprise Sales Strategy

Harley: So I think within the enterprise, you really have to put in the time understanding how their business works. It becomes much more of a consultative sale than a prescriptive sale because every enterprise is its own beast. There are power dynamics at play that you will not get a sense for until you’ve spent real time with them. And what we’ve started to do, which is showing a good amount of promise, is look for a specific problem that we can solve, right? Let’s take J&J, or Kenvue, or Disney as an example.
They have such a huge number of challenges that if you try and say, “Hey, we’re going to solve everything day one,” first of all, they’re not going to believe you if you’re a company at our stage. Second of all, the number of people who would have to buy in for you to go and solve all of those problems is probably in the dozens. Not always, but oftentimes. And there are different departments that might need to buy in. I’m not even talking about budgeting yet or dollar amounts. Just the sheer complexity of the sale goes up so quickly. So what we’ve tried to do is really focus on specific areas where we can improve. So it might be training people who click on phishing emails a lot. It might be doing a little bit more targeted training for certain parts of the organization. But this land-and-expand motion, I think, works really, really well within those enterprises, because you get the chance to build some of that trust.

And you get in the door a little bit quicker. Your contract size won’t be that big initially, but our mission is to build trust and to get them to enjoy working with us, and that unlocks volumes, I would say, in terms of ability to expand. Also, social proof, right? I think once you’re in one of them, you can then kind of name-drop that one. And then all of a sudden, you get a little bit of FOMO among the enterprises, which is always nice.

Vivek: Yeah. Well, you should be less humble about it. It’s not easy to break into these companies in the first place, and then expand. But as you say, if you can get your foot in the door, you have a good starting point that can give you a lot of leverage to move across the organization.

Choosing the Right Investors

Let’s switch gears here a little bit to fundraising. Tell us a little bit about the fundraising journey. And I guess I’ll put you on the spot a little bit. You weren’t really fundraising when we talked, and so why’d you even take the call? And give us a little bit about that experience.

Harley: Yeah, we weren’t fundraising when I first met you. I think we had planned to fundraise around now, like March, April, May. I guess we’re in June of 2025. At the end of 2024, when we got introduced, I was really just in the early stages of saying, “Okay, I probably need to get my reps in fundraising.” And I remember this from my time as a VC. We used to give this coaching to our founders: fundraising is like a muscle, and you just have to remember how to talk about your business.

What are the common questions that come up? What are some of the big things that you’re going to have to answer? Who is your competitive set? How have they fundraised? There’s a lot of prep work that I think goes into a good fundraising process. And so, this was going to be the very, very early stages of that process for me. We got connected through my little brother, of all people.

Vivek: You owe him a nice Christmas present.

Harley: Yeah, exactly. And he was like, “Oh, I met these guys at Madrona. They seem really smart. Do you want to have a chat?” I said, “Yeah, I’ll get some practice swings in.” And yeah, I think on our side, we were in a fairly good position because we had a lot of momentum. We’d closed a lot of contracts. We were and are growing pretty well. But I was just really surprised. I loved the conversation that we had, and you asked really good questions. I also fundamentally believe that fundraising… And I hate the cliche, but fundraising is kind of like a marriage. This is a long-term partnership, right?

This is not, “Oh, they’re going to write a check, and then disappear.” This is — you are going to be seeing these people or speaking to these people multiple times a month, or once a quarter in a board meeting, or whatever it is. And so, you need to get along both on a professional level, obviously, and have a shared vision for the company. But also, you need to just want to spend time with them, and trust that you can be honest with them, and have these conversations. Life’s too short to have friction like that. And yeah, as I say, I just… You know? Not to big you up too much on your own podcast.

Vivek: We’ll take it. We’ll take it.

Harley: But I just thought it was a really great conversation.

Vivek: The feeling is mutual. And I think, for us, we had looked at a number of companies in this space. But I feel like the approach you and Anagram were taking, but also your authenticity, right? Outside of just having a very interesting approach in this space, it’s like, are you the kind of founder or the kind of person who’s going to be able to make changes when the market throws a million things at you, right? It takes time to suss that out. I remember we spent half a day in New York and went for dinner. And those are the kind of things that get us really excited. Great traction and customers are important, but ultimately, so much of the company is defined by the founder in the early stages. And I think getting to those things is really important.

And I think the other underappreciated part is that when you already have a great group of investors, like in your case, you already had Bloomberg Beta and GC. You want to have someone who also can probably fit into that board dynamic while also challenging you in certain ways. And I think you’ve been very lucky to have a great set of investors, and I’m not talking about ourselves, but even your other investors. And so, it’s been fun to be part of that dynamic in that group. If you were to give advice to founders about fundraising and choosing a partner for the long term, what are one or two things that you would, maybe that they don’t hear as much? Or are there one or two pieces of advice you’d give after having gone through this process?

Advice for Founders

Harley: I would say the biggest thing is choosing someone who you get along with. To me, that is far and away the most important piece. And this isn’t to say the case for every CEO, but I really value collaboration and I really enjoy hearing other people’s ideas. I think of myself as a sponge. And I might take in all the information, and then just wring it out and ignore all of it, but I love absorbing it. And so, I need someone who I can listen to and who I’m going to care about their opinion.

And I think if you are someone who always thinks that you’re the smartest person in the room, if you’re someone who has to get the last word in, you are just going to… I just know myself, and I know that I’m going to start dismissing what that person says or take it with a grain of salt. And so for me, I was really solving for that. Who is this person who is going to be a thought partner? Because ultimately, you also have to, as a founder, know where to take the advice and where not to take the advice. VCs are great, and we have been very lucky. We have a really good batch, but they are backseat builders, right? They are not there in the trenches. They don’t see what’s happening day-to-day. So you, as the founder, have to make the call ultimately, but you need someone who is going to have that little bit of humility on the VC side and say, “Look, I know that I am not necessarily the best person to answer this question. You’re the best person to answer this question, and I trust you with that vision.” Again, being a VC, I think I saw this happen in a positive, constructive way.

And I saw this happen in very negative ways, where board meetings were just sort of this thing that the founder and even some of the VCs dreaded because they knew there was going to be conflicting egos. It’s uselessly destructive because it doesn’t contribute to building the business. It doesn’t contribute to a good relationship between these people who are spending a ton of time and are really invested in each other’s success. So yeah, as I say, for me, it’s a soft answer, but I think that interpersonal connection is massively important.

Vivek: That totally makes sense. And let’s end with a couple of rapid-fire questions here. So, outside of fundraising, if you could provide founders with one piece of advice, what would it be?

Harley: Send more cold emails than you think you should.

Vivek: Love that. Okay. Send more cold emails. Yeah, I think you probably learned some of this skill as a VC too, right? This is what we do all day. Now, let’s talk about hiring for a second. What lessons have you learned around hiring, especially when it comes to the kind of talent you want to get at the stage that Anagram’s at?

Harley: This is a longer answer for a quick-fire question, but it’s difficult to get talent that can scale at any stage of a company. What is a good fit for someone when you are 2, 3, 4, 5 people might not be a good fit when you are 10, 11, 12, 13 people, which might not be a good fit when you’re 25, 30, or 50, or a hundred. And so, always being cognizant of, is this the right blend of talent that I need for my team at this stage of this company? And then, a lesson that I have learned and a mistake that I’ve made that I’m trying to internalize and continue to get better at is not letting bad hires or mistaken hires drag out. So, making those decisions quickly.

And I say that not in a kind of “let’s just be cruel and callous about it and just hire and fire everyone” way, because that’s really bad for the culture. But there genuinely is, for most people, a stage of company where they will shine and a stage of company where they will not. And we’ve hired, a couple of times, people who I think would be really great fits for companies where there was a little bit more infrastructure. And maybe if the company were 50, or a hundred, or a thousand people, they would be a really, really strong A-level player. But when you’re five people, when you’re 10 people, there’s just so much ambiguity and so much comfort you need to be able to do without, that it just wasn’t the right fit for them or for us, right?

It’s not really fair for them to be wasting their talent at a place like us. And it’s not really fair for us to be bringing them in where they’re not contributing what they could be, right? So I think making decisions quickly. Also, smaller point — hire more junior than you think, especially at the earlier stages. Like someone who is earlier in their career, maybe, or not as experienced, but really hungry and learns really quickly. I will take that 10 out of 10 times over someone with a bit more experience who’s maybe coasting.

Vivek: Yeah. In fact, it feels like we’ve actually had more success in that type of hiring, and watching those people grow, and do all the tactical sort of get in the mud working versus bringing in someone senior. There’s a time for that. But I think, as you say, for seed series A, early-stage companies, that kind of role makes a lot of sense. Last question for you, Harley. Looking out five years, where do you see this market and where do you see Anagram?

Future of Security Awareness Training

Harley: I think that the current incarnation of security training disappears. I think it has to. One of the nice things about the way the current compliance frameworks are written is that they’re actually very vague. They just say, “Well, you have to train your employees annually on security relevant to their jobs.” That’s pretty vague. And what that has meant so far is we’ve done the lowest hanging fruit like, “Okay. Well, we’re going to give employees annual training about security,” and it doesn’t work.

And I think that as AI attacks become more sophisticated, as these emails that phishing farms can create become more personalized, higher quality, and more relevant, the email security platforms are going to struggle to detect them as effectively and struggle to prevent them from landing in employees’ inboxes. I think we’ll see it with AI language models and these tools that we’re using. Even companies that are banning them outright are still getting a ton of data leaked into them because employees are just putting it on their phone or taking a screenshot of the code and loading it into ChatGPT.

These are all stories that I’ve heard. And AI is just this massive tailwind toward humans needing to become better at detecting and preventing these security breaches, right? Already, humans account for 70 to 80% of the breaches that big enterprises face. I think that number is just going to get higher and higher because the tools that attackers use get more and more sophisticated.

And so, the only way that we can solve that is to actually create behavior change and actually impact the way that users think about security. And for us, as I say, annual training is not a way to do that. And so, that’s where we are really focused on innovating in terms of both the simulations and the tests that you can use to train your employees, the format of the training itself, and then also ultimately the workflows, and getting into those workflows, and pointing them in the right direction.

Vivek: It’s awesome. Well, Harley, this has been great. We’re so excited to be on this journey with you. It’s a pleasure to have hosted you here, and I’m excited for everything you do in the future. So thank you again, Harley.

Harley: Awesome. Thank you for having me.

The End of Biz Apps? AI, Agility, and The Agent-Native Enterprise from Microsoft CVP Charles Lamanna

 

In this Founded & Funded episode, Madrona Managing Director Soma Somasegar sits down with Charles Lamanna, corporate vice president at Microsoft, to unpack his journey from startup founder to corporate leader. They dive into what it takes to build a successful AI-transformed organization, how the nature of business applications is evolving, and why AI agents will fundamentally reshape teams, tools, and workflows. This conversation offers a tactical AI adoption playbook—from “customer obsession” to “extreme ownership”—as Charles delivers insight after insight for startup founders and enterprise leaders navigating the age of AI.

They dive into:

  • Why business apps as we know them are dead
  • How AI agents and open standards like MCP and A2A are reshaping software
  • The shift toward generalist teams powered by AI
  • What startups are doing today that enterprises will follow in 3–5 years
  • How to focus deeply on a few high-impact AI projects instead of chasing 100 pilots

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Soma: Now, Charles, you came back to Microsoft when Microsoft decided to acquire your company, MetricsHub. How was that sort of entrepreneurial experience, and then transition back to a large company? Any learnings that you want to share that you think other founders or entrepreneurs might find interesting or valuable from your experience?

Charles: Absolutely. From my time on the outside of Microsoft, there were two big things that I learned and internalized in a profound way. The first is true customer obsession. That means even if you’re the engineer writing code, really understanding exactly how your customers use your product, what problems they’re trying to solve and what they’re looking to try to get out of it. That obsession has followed me since then, and I’ve really tried to bring back and inject deeply into Microsoft.

We do things like customer advisory boards. We had one a couple of weeks ago where a few hundred customers came to town, and we have great quantitative analysis of how people use our products, where they get stuck, what our retention and funnel look like. Those are things that sometimes you forget or lose in a big company like Microsoft, because you have this amazing go-to-market arm, and you can make money even if maybe you aren’t delighting your customers. That has been a huge change. I think, of course, any startup is not going to be successful if they don’t really understand their customer and the pain points they’re going through.

The second thing is this idea of complete ownership, and when you have this sense of complete ownership, it doesn’t matter who’s responsible for doing something, if it’s necessary for the product or the business to be successful, you do it. And that is the biggest separation from a big company and a startup because in a startup, you don’t look around and say, “This is somebody else’s job.” It’s your job. If you’re a founder, everything is your job. Whether that’s responding to a customer support request, figuring out how to set up payroll, or doing financing. All of that is part of the job, and you never second-guess it. You never think for a second, “I need to hire someone to do this, or this is somebody else’s problem.”

That’s another thing, as you go into a big company like Microsoft, sometimes it’s easy because there’s such a robust support framework around you. You’ll say, “Oh no, I don’t do marketing, I don’t do finance, I don’t do this selling process.” Bringing back that extreme ownership has made it so much easier to create these successful businesses inside of Microsoft over the last 10 years. Things like Power Apps and Power Automate, they’re really expanding Dynamics 365. It’s that sense of total ownership.

Soma: I love those two things, customer obsession and complete ownership. Thanks. As I was just mentioning, Charles, we’ve heard Satya or Microsoft publicly say this on many occasions, like, “Hey, business applications as we have known them are dead.” We know that with AI, there is a tremendous amount of re-imagination of what business applications could mean or could look like that’s happening, including places at Microsoft. How do you think about this?

Charles: As the guy at Microsoft who works on business applications, sometimes the truth hurts, but business apps as we know them are indeed dead. I think that’s just the truth of it, and the analogy I always make is, it’s going to be like mainframes. I’m not saying tomorrow there will be $0 spent on CRM and ERP and HCM inside the enterprise. People will probably spend the same amount of money they did before, maybe a little bit less. They’re not going to do any innovation or any future-looking investment in that space because a system of record designed for humans to do data entry is not what transformation is going to look like in the world of AI agents and AI automation.

Instead, what will probably happen is you’ll see this ossification of these classic biz apps and the emergence of this new AI layer, which is very focused around automation and completing tasks in a way that extends the team of humans with AI agents that go and do work. And if I break down what’s in a biz app, I’ve always thought there are basically three things. It’s a basic form-driven GUI application on mobile and the web: a list of things, and you can drill in and edit the individual things, whether it’s orders, or leads, or sales items. It’s a set of workflows that are statically defined, which codify how a lead goes to an opportunity, or how you close a purchase order. Very fragile, not dynamic.

Then it’s some relational database to store your data. That’s what a biz app was. Those aren’t the three elements of what a business application is going to look like in the future. Instead, it’s going to be closer to business agents. You’re going to have a generative UI, which AI dynamically authors and renders on the fly to exactly match what the person’s trying to do. You’re going to replace workflows with AI agents, which can take a goal and an outcome and find the best way to accomplish it, and you’re going to move from static relational databases to things like vector databases, search indexes, and relevance systems, which are a whole new class of technology. When we fast-forward 10 years from now, you’ll look at those two things and they’ll be so clearly different, but right now they’re just beginning to separate. The gist of it is, yes, indeed, the age of biz apps is over.

Soma: When you mentioned forms UI, workflows, and databases, you literally transported me back to my VB days.

Charles: Yes, yes.

Soma: Those were the things we were thinking about to help democratize application development. The fact that 20 years later, at least in today’s deployed application world, we are still there tells me it is time for some disruption and some innovation.

Charles: Yes, exactly. I always joke, if you go and you look at a biz app that ran on a mainframe, it looks remarkably similar to a web-based biz app of today. That’s not going to be true in 10 years.

Soma: Whether it was the internet wave or the mobile platform wave, it always takes several years, many years, before you would find what I call canonical applications that define what the platform is capable of. In the AI world, I sometimes wonder whether that is still ahead of us as opposed to behind us. For all the hoopla and excitement that the world has seen around ChatGPT, that’s one sort of AI app that has gotten to what I call some critical mass in terms of adoption and usage.

Now in the startup world, there are a bunch of others like Perplexity, Glean, Cursor, Runway, Typeface, and a whole host of other companies that are getting to some level of critical mass. Some of these applications are targeted at consumers, some of them are targeted at enterprises, and some of them have aspirations to go both directions. What do you think is going to be the time when we can look and say, this is what a modern business application is going to look like, and throw away all the mental models you have about what that could be? Do you think it’s around the corner? Do you think it’s a few years away? What do you think?

Charles: I think we’ll see what the shape starts to look like very clearly in the next 6 to 18 months, because you already have glimmers of it. Then I think it’ll take longer to be mainstream. The refresh cycle of biz apps and core business processes takes a little bit longer, but in my mind, by 2030, this will be the prevalent pattern for business applications and business solutions. And in the next 6 to 18 months, you’ll really have it codified.

We can look to some of the places that have moved faster; I’ll use Cursor as a great example. If you take Cursor, it’s an AI-powered application, tailored to provide an entirely AI-forward environment for a coder or developer. If you think about it, the same type of work happens for sales, or customer service, or core finance, like budget analysis or reconciliation, or for core supply chain. You’re going to see things like Cursor or GitHub Copilot show up for each of those disciplines and be extremely tuned to take what people used to do and reimagine it with AI.

Just like how you have things like vibe coding, you’ll have vibe selling, vibe marketing, and vibe legal work. Those things will all show up. There are great companies out there. Harvey is a great company on the law side. There are a lot of companies that are emerging that are starting to do that. And of course, I’m biased. I think we have a lot of great stuff at Microsoft. We have very broad adoption of our Copilot offerings, but I think we’re going to see that fill out by industry, by business process, and by function.

The last thing I would say, which I think is probably one of the more interesting elements of all of this, is that right now we’re taking the way organizations are structured and just mapping them to this AI world, right? Oh, you have a sales team, so they need AI for sales. You have a customer support team, you need AI for customer support. I don’t know if that will be what the world looks like at the end of the decade. You’ll have new disciplines and new roles. Maybe you don’t have sales and customer support as two divisions. Maybe it’s one. Maybe sales, marketing, and customer support all become one role, and one person does all three. I think we’re going to reason through that, and that element is what will probably take the longest. We’ll probably have a wave of great technology for the old ways of working, and then a second wave of great technology for the new ways of working. All I know is it’s definitely going to be an exciting couple of years.

Soma: Your last point particularly made me think about this, Charles. Instead of AI for sales, and AI for finance, and AI for this, and AI for that, do you think people are starting to think about, hey, what do people need to do in a company to get their work done, and start thinking about workflows that may or may not stay within a particular function or discipline and may cross disciplines? Do you think there’s enough of that push happening already, or is it coming in the future?

Charles: It’s very early. I mean, what’s amazing is that startups are doing this because startups, in a world where you have extreme ownership and you have to do whatever it takes to succeed, you don’t feel constrained by disciplines and boundaries. If you want to see where the enterprise world or where mid-size companies are going to go in three to five years, look at what startups are doing right now and that’s exactly what they’re doing. Different structures, different ways of working, and there are two things which I think are going to really drive a lot of this transformation.

The first is, these AI tools bring experts to your fingertips. As a result, you can be a generalist with a team of expert AI supporting you. That’s how I feel every day. I have an agent that helps me with sales research. I’m not a salesperson, I’m an engineer, but I don’t have to go out and talk to a salesperson to get ready for a customer meeting. I have a researcher agent, which helps me prepare and reason over hard challenges. I have a document editing and proofreading agent, which makes me a better writer. I have all these tools, which make me more of a generalist, kind of overseeing this set of AIs. What that translates to is probably de-specialization in the enterprise, de-specialization in companies, where you have fewer distinct roles and disciplines, and more generalists powered by AI. That’s item one.

The second thing is, what makes a team? We always think a team is a group of people. The big change is that the team is a group of people and AI agents. That’s really how we need to start thinking about how we organize companies, and how we even go out and do hiring. If you think about who you work with, increasingly, I think, it will be: here are the people I work with, and here are the AI agents I work with to get a job done. That means you have meetings, you have calls, you have documents you work on together. Those two things will help drive that transformation. It’s not like a startup sits down and says, “How should we structure ourselves for the future?” They tackle this problem, that problem, that problem in the best and most efficient way, and it happens to look like that. So that is, I think, probably a lot of the change that we’ll start to see.

Soma: You talked about this notion of a team as not just a bunch of people, but a bunch of people plus a bunch of AI agents. Can you take it one step further and say, hey, every information worker or knowledge worker is really a human being plus a bunch of AI agents at their disposal? Is that a good way to think about it?

Charles: Absolutely. The way we approach it is every individual contributor, everybody who individually does work, will increasingly become a manager of AI agents who do the work. We have a thing we talk about internally at Microsoft, which is: in the past, we built software for knowledge workers to do knowledge work. In the future, probably most knowledge work and most information work will be done by AI agents, and a knowledge worker’s main responsibility will be the management and upkeep of those agents.

Soma: To orchestrate and manage.

Charles: Exactly. That’s where you get this idea that you can be much more of a generalist and an expert, and this is how you get a huge productivity gain. You’re not talking about being 10% or 15% more productive. We all are going to have entire teams of AI agents working for us. We can be 5 or 10 times more productive if we get that right, and that’s what gets me excited, because that’s what starts to change the shape of the economy and really create an abundance of doctors, lawyers, software, and all of those things.

Soma: People fondly refer to 2025 as the year of agentic AI. First of all, do you agree with that? How do you see the role of agentic AI or AI agents as far as the next generation of business applications go?

Charles: It definitely is the year of agents. Everyone I talk to, from the smallest to biggest company, understands what agents are and they want to get started with deploying agents in their enterprise. You can see, you have Google with Agentspace, you have Salesforce with Agentforce. We have plenty of agents at Microsoft in and around Copilot. OpenAI is talking about agents, Cursor is talking about agents, everybody’s talking about agents.

It very much is beginning to diffuse, kind of like how 2023 was probably the main year of chat AI experiences on the back of ChatGPT and Copilot’s launch; that’s what 2025 will be, but for agents. Business applications, in particular, are going to be the ones most changed as a result, and I think you’re starting to see it. Every company I work with tells me, “I’m excited by business applications with AI, that’s great, but I really care about business agents. Tell me how I can get agents deployed in my back office and my front office. How can I grow revenue and cut costs using agents?” That is a new conversation, which to me means it’s the era of agents.

Soma: We’ve gone through a major platform shift almost every decade or so, and sometimes during a platform shift, every major player goes off in their own direction, trying to figure out what it means for them and what they can do with it. If you go back to the internet platform wave, you could argue that HTTP was something that came in pretty early on, and everybody adopted it and said, “We are going to be behind this.”

Similarly today, when I think about this agentic world, I look at a protocol like MCP, or a protocol like A2A, and see a tremendous amount of the industry consolidating. In fact, the thing that surprised me, in MCP’s case, is that Anthropic came out with MCP, and within a few months, pretty much anybody that mattered talked about how they’re all in on supporting MCP and came out with their own offerings. That level of industry consolidation around something is both exciting and fantastic. How do you see that?

Charles: It’s probably 30 years since we’ve had such an industry-wide convergence on an open standard, going back to the original open web: HTML, HTTP, and JavaScript. It’s incredible because that means more opportunity for startups, since there’s really not some strong incumbency advantage as a result of open standards. It’s also great for customers: I can buy 10 solutions, 10 different AI agents, and have confidence that they’ll work together. Even at Microsoft, we support A2A; we announced that a couple of weeks ago. We’ve had MCP support for a couple of months, and we’ve even contributed changes back to MCP for authentication, together with a bunch of other companies, that have been accepted and merged.

This is going to be great because a typical company has so many SaaS applications and databases today. In the future, they’re going to have a ton of these different agents and tools for agents. That’s what the future is going to look like. If you think about what it’s like to be in an IT department that has 300 different SaaS apps, it’s so painful to integrate them. I don’t think it’ll be as painful in this world of MCP and A2A, and that’s a huge opportunity for lots of these startups, which can be so fast and agile using these AI tools and can interoperate with the big footprints that exist in a typical user’s day, whether consumer or commercial.
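To make the interoperability point concrete: MCP rides on JSON-RPC 2.0, so a host application can talk to any compliant tool server using the same message shapes. Below is a minimal, illustrative sketch of that envelope in Python. The field layout follows the spirit of the spec's `tools/call` method, but the `echo` tool and the in-process dispatcher are made up for demonstration; a real MCP server also implements initialization, `tools/list`, transports, and more.

```python
import json


def make_tool_call(request_id, tool_name, arguments):
    """Build a tools/call request wrapped in a JSON-RPC 2.0 envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


def handle_request(raw):
    """Toy server-side dispatcher: route a tools/call for 'echo' to a local function."""
    req = json.loads(raw)
    if req.get("method") == "tools/call" and req["params"]["name"] == "echo":
        text = req["params"]["arguments"]["text"]
        return json.dumps({
            "jsonrpc": "2.0",
            "id": req["id"],
            # MCP tool results carry a list of typed content blocks.
            "result": {"content": [{"type": "text", "text": text}]},
        })
    # Standard JSON-RPC "method not found" error for anything else.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req.get("id"),
        "error": {"code": -32601, "message": "Method not found"},
    })


request = make_tool_call(1, "echo", {"text": "hello"})
response = json.loads(handle_request(request))
print(response["result"]["content"][0]["text"])  # hello
```

The point of the shared envelope is exactly what is described above: an IT department integrates once against the protocol rather than N times against N vendor APIs, because every agent and tool server speaks the same request and response shapes.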

Soma: I want to go back to one of the earlier things you talked about, which is customer obsession. You mentioned that you had a customer advisory board, and a couple of hundred customers come through. When you talk to enterprise customers, where do you think they are in the journey of adopting AI, whether it’s in the form of business applications, next-generation business applications, or Copilots, or what have you? Do you think they’re in the early stages, mid-stages, or later stages, and what are you hearing from them?

Charles: It’s a big spread out there right now. Some companies are almost like a tech company in terms of how aggressive and ambitious they are with the AI transformation; usually that comes from a very top-down investment focus from the CEO and the board, plus having business, IT, and tech resources equally engaged. A lot of companies are very early, and they’re looking for that first big win. Maybe they have a few POCs, a few prototypes, a few experiments, but they don’t have that big win that moves the top line or bottom line.

What’s interesting is that if you went back a couple of years, it was all about building things yourself. Everybody had dev teams calling APIs and using models. We’re coming out of that, because people realize how hard it is to assemble these things and get business outcomes. It’s the era of finished AI solutions, whether that’s an agent or a new type of AI application like Cursor. That is starting to be the main place companies are looking to get value quickly. If I were to take a step back and pattern-match what we’re seeing from the enterprises that are being most successful, there are three main things when it comes to the AI transformation.

First, they’re being very focused on driving real resource constraints into the organization to drive productivity improvement. If your budget grows every year, you don’t feel a lot of pressure to improve your unit performance inside the organization. That’s a hard thing to do, particularly if a company is growing. The second thing is having a big focus on democratizing access to AI. Companies which are struggling are the companies that don’t have AI in everybody’s hands every day.

If you want to become an AI-transformed company, the only way to do it is for all of your users, no matter where they are, technical or non-technical, to be picking up and using these tools each and every day. If you don’t have that, people will have dreams of the magic AI can do, which isn’t grounded in reality, or they’ll be unnecessary skeptics of future projects. Get AI in the hands of everybody. The third and last bit is: don’t spread yourself a mile wide and an inch deep. Companies that are successful don’t do 100 projects; they do 5 projects very well, with a lot of force and with continuous improvement in mind. That’s what I see showing up in the most successful enterprise organizations.

Soma: That’s great. Did you see the memo the Shopify CEO put out a few weeks ago about how everybody should be thinking about AI?

Charles: Yes.

Soma: That dovetails with what you’re saying about, hey, make sure that everybody has access to AI tools?

Charles: Exactly. I go out and tell my team, “This year, you won’t be promoted unless you use AI tools if you’re an engineer, because how can you really say that you’re on the cutting edge of AI software development if you yourself are not using AI?”

Soma: That’s great. Charles, earlier on, you talked about customer obsession and complete ownership, and some of the learnings you had going from being a startup founder to coming back to Microsoft. Going hand in hand with that, how do you think about agility? One of the things I worry about, and I was part of Microsoft, so I can say that I’ve been there, is that as the company gets larger, you sometimes wonder whether the agility is what it needs to be and the level of urgency is what it should be. How do you encourage your teams at Microsoft to operate with the same level of urgency and agility that a startup does?

Charles: There are three big things that we’ve done to help instill that. The first is that this is the most intense period since I’ve been back at Microsoft, and it’s mission-oriented. Everybody understands what the mission is: all of our software, all of our technology, all of our products are going to be completely disrupted by AI. Do we want to be the people who watch that happen, or do we want to be the people who do it to ourselves? The energy is off the charts. In the 10 years I’ve been there, I’ve not seen folks at Microsoft working as hard, pushing the limits and boundaries, and innovating as much as they have over the last couple of years. That’s item number one.

Number two is that when you’re in a big company, there’s always this incredible inertia: these incredible layers of bureaucracy, process, decision makers, and consensus building that slow everything down. That’s where extreme ownership and this desire to grind through anything is really critical, because anything you want to do, if you want to innovate, there’ll be 100 reasons why you cannot do it. You have to find the one reason why you can and how you can. You need that extreme-ownership grit to really push through all these barriers and be successful.

And the third piece is really encouraging experimentation and being willing to reward failure when it produces learnings. We have these interesting forums at Microsoft where folks will come in and say, “Here is a product experiment we’ve done,” or “Here’s an AI model experiment we’ve done.” We have these every week, and they share good or bad: here’s what we tried, and it didn’t work for these reasons; here’s what we tried, and it did work for these reasons. It’s almost like the cloud post-mortem culture that you had to develop, with repair items and blameless post-mortems.

It’s this continuous experimentation and innovation feedback loop around models and AI products, and doing both of those, because they’re equally important, is how we’re really starting to drive this culture. It’s not “build a plan for six months and run the plan no matter what.” It’s “build an experiment, run it in a day, learn, run it another day, learn,” because that’s what all the good AI companies are doing. Those are just a few of the things. If you look at the pace of innovation, Microsoft is definitely moving faster than we’ve ever moved before.

Soma: That’s a super helpful framework for teams and organizations thinking about how to operate with the level of urgency and agility that today’s age requires. It’s not a nice-to-have or a “someday I’ll do it.” If you want to survive and be ahead of the curve, you need to do it today. Now, coming to the personal side a little bit, Charles, I’m sure AI is impacting your life in a positive way, whether at work or outside work. Are there one or two tools that you use on a daily basis, and can you talk a little bit about what those tools are and how they change what you’re doing?

Charles: I will exclude all the Microsoft tools that I use all the time, in the interest of being a little different, because I use a bunch of those. One of my favorite features that has been released lately is the deep research functionality. Between o3 and Deep Research, you can get some amazing insights. A big thing I like to do is keep a good view of the market to try to find blind spots. What startups are out there being successful, and how are the big competitors doing in their earnings announcements and conferences?

What I can do with Deep Research is ask a very specific question, and I run this basically every week. I’ll give an example: help me understand the financial performance of business application companies, who is accelerating versus decelerating, and what are some interesting facts around usage that they’ve announced. I can describe one nice, big, healthy prompt, send it off, come back 10 minutes later, and I get a beautiful little view. This is how I stay on top of what’s happening in the market every week. In the past, I could do this by reading in various places: Hacker News, X, and so on. But this gives me a really in-depth report, almost as if I truly had a full-time competitive researcher doing work for me.

That has been game-changing, and my poor team is probably tired of me sending screenshots of these reports, because I use that for a lot of public information. The second thing is, I’m a big user of image generation tools. I subscribe to Midjourney. That’s just so much fun, because I never was a great artist, but I can create lots of fun images and pictures, and I share them with family and friends. That’s kind of a relaxing thing for me to do. And I don’t have Photoshop; I would never have opened it up and drawn freeform, but I can have that feeling of creation and creativity in a way I wouldn’t have before. It’s interesting. It’s a new kind of hobby, a new accessibility. Again, back to the generalist-specialist thing, I’m definitely not a specialist artist, but I can use AI.

Soma: It’s a good outlet for your creativity.

Charles: Exactly, exactly.

Soma: That’s fantastic.

Charles: I cannot wait for companies like Runway to mature their capabilities beyond images to video. I can’t make a film or a movie today, but I bet in the next 10 years I’ll be able to make a 60-minute film, really. So that’ll be fun.

Soma: That is great. On that note, Charles, thank you so much for taking the time to be here with us today. I really enjoyed the conversation, and we took it in multiple directions, and it was fun to be able to hear your views, your perspectives, and your experiences. Thank you so much.

Charles: Thank you for having me.

Building AI That Sells: Scaling Smarter with Luminance CEO Eleanor Lightbody

 

Entrepreneurs often ask: When do I know it’s time to scale? Or, how do I lead when I wasn’t the original founder?

In this week’s Founded & Funded, Madrona Partner Chris Picardo sits down with Eleanor Lightbody, CEO of Luminance, who shares how she took a promising AI product and built a company culture, go-to-market motion, and product strategy that’s scaled 6x in just two years. Eleanor’s candor and tactical insights on hiring, selling, and navigating founder dynamics make this a must-listen.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Chris Picardo: To kick things off, I think it would be great to just talk a little bit about your career journey. How do you go from a cybersecurity account manager at Darktrace to the CEO of Luminance?

Eleanor Lightbody: I think to understand that, it’s probably worth going way back. I was reflecting on this the other day: I grew up in a household where my mother was an entrepreneur. She started a small business that has grown massively, and I saw that when I was growing up. And my father’s a lawyer. Looking back on it, I’m like, “Oh, I’m kind of mirroring that: entrepreneurship and working in the legal space.” It feels like a given now.

I started at Darktrace when I was fresh out of a post-grad, and I chose Darktrace for a few reasons. The first one was that it had some seasoned investors, and it felt like they had built and grown quite established companies before. I was going to be one of a handful of people in the London office. It felt like an opportunity to work for a company that had the potential to scale very fast, because what they were offering was applicable to every single company in the world.

I’m pretty glad that I did. I had a few offers on the table, but I’m very glad that I did because I joined when the company was super small, probably 50 of us globally, if that. I left just before the IPO, and recently the company was bought by Thoma Bravo for over $5 billion. I set up the African operations. I was the first one on the ground to open up that market. Then I went on to run a global division that looked at securing national critical infrastructure, and the investors of Darktrace were similar to those at Luminance.

I got a phone call over five years ago, and it was like, “Hey, our portfolio company’s doing something really interesting. Would you like to join them?” And I thought, “What do you mean, join them? I’m, like, meeting the company.” I had to clarify that with them. I’d actually known the founders at Luminance from afar for a while. After speaking to them, even though I wasn’t thinking about leaving Darktrace, it was a very quick yes and an easy yes, because the opportunity that Luminance had was absolutely massive. And so I thought, “You’d be crazy not to say yes to this.”

Chris Picardo: It’s so interesting. It’s got to be a great phone call to get: “Hey, do you want to go do this really fun thing at another exciting company?” One quick question about something you mentioned, which I’m curious about: you said when you joined Darktrace, it felt like it was ready to grow really fast. What does a company feel like when it’s ready to grow really fast? Sometimes you get that feeling, sometimes you don’t. What resonated with you when you were like, “Okay, this company is ready to go”?

Hear from Eleanor during the 2024 IA Summit.

How to Know When an AI Startup Is Ready to Scale

Eleanor Lightbody: That’s exactly why I joined Luminance, because I was like, “They’ve got these foundations and they have this ability to scale.” I think it’s a combination of things. One is: how big of a problem are they addressing? Are they thinking about the problem in a different way? This is when I’m talking to founders. Is there deep expertise in the technical team, and do they have a real sense of what they’re trying to deliver? Those are the key elements that I was always looking at.

But fundamentally, ideas change, they permeate. I think any successful company experiments a lot. It’s not necessarily about the product today, but about the team today and the way that they think about things. Then alongside that, it’s: do they have that energy? You know when you walk into a room and you’re like, “These people want to build, they are competitive, and there is no Plan B but success.” Those are all things that both Darktrace and Luminance have shared.

Chris Picardo: It’s sort of like that drive that you are going to build this product because you want to win, you want to see it out in the world, and you want to fulfill your vision, and that is what’s motivating you as part of the company.

Eleanor Lightbody: 100% and nothing’s going to stop me.

Chris Picardo: Obviously, you’ve had that feeling twice and both stops have been quite successful, so it must be a great feeling to have. When we talk about Luminance, you’ve done something obviously a little bit different than a lot of people’s journeys, which is stepping into the CEO job. Talk a little bit about what that experience was like. The founders are still there. There have been a couple of other CEOs. How did you think about that? How did you navigate the dynamic? What was that broader experience like?


Navigating Founder Dynamics as a Non-Founder CEO

Eleanor Lightbody: When I first joined, there was a tiny bit of naivety, which I think is actually really good. As we grow up in businesses and get more and more experience, one of the key things is to try and keep a bit of naivety, because that can help you take on hugely daunting tasks that, in time, we get a bit wary of. I was excited, and I was daunted.

I look back on it and it’s quite funny. One of the pieces of advice that I was given from one of my mentors was, A, it’s going to be really hard. I didn’t think I knew how hard it was going to be, but I was like, “Okay, cool.” The second one was, “Look, you want to join a company and within the first few weeks you’re probably going to want to change so much. Get a piece of paper, write everything down that you want to change, put it in a cupboard, and don’t look at it for four weeks. Don’t change anything for four weeks and then revisit it. You’ll start to understand why some of the things you might not need to change or why they are a certain way.”

I didn’t listen to that very well, and I think for a good reason. Regardless of the circumstances, they’re bringing you in. The most important thing to set the scene is to have very frank conversations upfront to say, “These are the things I can bring.” For me, when I joined Luminance, there was so much that could be done to mold the commercial teams and to change the way the company was thinking about how to go to market.

It felt like I could sit with the founders and say, “This is what you guys are doing, and this is what you should be doing. Let’s do it today.” And they were like, “Oh, wow. Okay.” Then, having a few metrics to show early signs of success can help you bring the founders on that journey with you. From the outset, my piece of advice to anyone going in is: know where your strengths are, and don’t try to boil the ocean from day one. There’s stuff from that list I wrote that I still haven’t changed. For the first few months, figure out where your finer strengths are. Be very upfront in your conversations, be very transparent, and then you’ll start to build a lot of bridges.

Chris Picardo: It seems like when you bring in a new CEO, obviously, one of the reasons you might do it is because you want a change agent. You have to be nuanced and thoughtful about how you want to go about getting buy-in and team-wide enthusiasm about the change that you’re bringing.

Eleanor Lightbody: Yeah, exactly. Being thoughtful and mindful are really, really key things.

Chris Picardo: One thing I hear a lot from CEOs, people who’ve been CEOs, and people who’ve worked with a lot of CEOs, is that it’s a job like no other. There’s no job description for being the CEO, and there’s no great training path to becoming a CEO, because it’s a very different type of role than anything else in the company. Is that your experience as you transitioned into the seat? Were your expectations of what a CEO does, or what people externally think a CEO does, different from what they actually do?

Eleanor Lightbody: I think everyone thinks that a CEO does something different. If you were to take a general survey of the market, I’m not sure you would get consensus at all. I don’t think I knew what I was getting myself into, and I mean that in a very positive way. At the rate we’re growing, a CEO’s role changes and morphs. I used to say every month felt slightly different.

With the rate of innovation and the speed of scale, each week is different. The key things for successful CEOs are a few. One is being able to understand the vision, the 10,000-foot view, but also being able to parachute in, understand the problems, and think about things from a first-principles basis to then help fix them. The second thing, and the most important (this is my opinion, very biased), is focus.

As a CEO you can be pulled in so many different directions. All the teams can benefit from having you in the conversations. What are the most important things that day, that week, that month, that year, the next few years? And to try and be very regimented in that. I also think part of my role is being a bit of the hype person for the business, you know? Like going in every day, leading by example. There are other CEOs who might not necessarily agree with that. It really depends on the company, what’s required at that given time, and adapting to it.

Chris Picardo: Being able to figure out what the company needs at any given time is one of the key things you have to be able to do. It’s probably a good time to talk a little bit about that in practice at Luminance. You stepped in and were able to massively change the growth trajectory of the company. I’m very curious about how you did that, but I also think it’s a good time to spend 30 seconds on what Luminance is and why you were so excited about being able to put the company on such an incredible growth trajectory.

Scaling Go-to-Market: From Law Firms to Global Enterprises

Eleanor Lightbody: If you think about it, as we sit here right now, there will be teams in every corner of the world receiving legal contracts. They’ll be reviewing them, they’ll be processing them, and they’ll be deciding what to do with them. As it stands, that process is time-consuming, expensive, and prone to human error. What we do at Luminance is automate that process end-to-end.

Our customers are pretty much any company of any size, whether that’s someone like AMD or DHL or Hitachi, across all different industries, using us for every single interaction they have with their legal contracts. When I joined the company four years ago, Luminance was only selling to law firms, and they had done a really good job. They’d sold to a quarter of the top hundred law firms.

When I came in, I was like, “This platform’s so powerful. The addressable market is so much bigger than just law firms.” One of the things we did very quickly was take the underlying AI models and technology and build a whole new workflow and platform directed much more at in-house legal teams. That was one of the bets that really paid off, because we saw that sales cycles were much faster and the time to value was much faster. That product in itself has grown its ARR 6x over the last two years.

And, again, going back to what I did very quickly when I joined Luminance: I’ve got a sales background. I came in and I asked the teams, “How many first meetings have you got? How many POVs?” We do free trials. “How many free trials are you running? How many cold calls are you making?” All this stuff. There was a bit of a disconnect between what was expected from the sales teams and what sales teams should be delivering.

The sales teams had been ex-lawyers. I was like, “Wait a second, we need a lot of lawyers in the company to help build the product, understand the use cases, and be subject matter experts, but the one area where we do not need lawyers is selling, because selling is slightly different. Give me a graduate who’s 21 and I will train them how to sell, as long as they understand what we’re doing.”

We changed the whole hiring process, the whole structure. I very quickly promoted two very young, at the time, account executives into leadership and mentor positions. I remember some people saying, “They’re very young, they don’t necessarily have enough experience.” And I was like, “Give me someone who’s young and hungry and I will help train them into what they need to be.” They’ve become two of the most successful people in the business. That was one of the moves, along with adapting the product (not really changing it, adapting it) and totally changing our go-to-market. Those were the two areas that were really important to do first.

Chris Picardo: You just gave a very condensed master class in both positioning your product and then figuring out how to sell it. One of the things we were joking about in our prep call, I guess a couple of weeks ago now, was people love to say, “If you’ve got a great product and AI, it’s just going to sell itself.” I think we were both laughing because that is definitely not true. Certainly, some companies have been able to figure that out, but I think for the most part, you have to map a great go-to-market strategy with a great product strategy, right? Those two sort of work together.

Eleanor Lightbody: I totally agree. By the way, any company that has been able to do that, hats off to you. Kudos to you. High-fives to you. I want to talk to you. Great, amazing. But that, as a founder, as a CEO, is not necessarily what you should be solving for. If that comes organically, brilliant, amazing. But there are still some key things that might not necessarily last till the end of time.

So, understanding your product and understanding distribution: of course, you’ve got to have innovation at the heart of it, and you’ve got to have a really solid understanding of the problems you’re trying to address and how technology can help. But you also have to know how to take it to market and how to take customer feedback. Taking the right customer feedback and prioritizing it is, I think, so important when you’re scaling a business.

Chris Picardo: That’s an interesting point that you want a lot of customer feedback, but you also need to prioritize the customer feedback around what moves the needle and what is useful and interesting information, but maybe not as high on your priority list.

Eleanor Lightbody: Exactly. Everyone talks about customer feedback. Customer feedback is crucial, don’t get me wrong, especially depending on where you are in your product development cycle. If you’re early on as a company, what we found to be super successful was choosing design partners carefully. We were very lucky that two very big companies, early on, were like, “We’ll become design partners with you.”

We said yes to them, but actually no to some others because we felt that the use cases that they were trying to address with our technology and what they were trying to build with our technology could relate to companies around the world of all different sizes. So we were like, “We’re going to go with you guys.” And I think finding some early design partners can be super useful.

Then obviously, as you get more and more customers, the key is two things. The first one is prioritizing customer feedback. I have friends in other areas and other businesses who, as time goes on, remove themselves from the customer. For us, it’s so important to be even more ingrained with the customer, because you get so much valuable information there. But, again, it’s about having a system where the feedback loop is really fast, and you can see if the same feedback is coming across from all of your customers. Those are the ones you need to really focus on, versus a customer saying, “Can I put my logo on it?” or “Can it be yellow rather than blue?”

Another mentor of mine told me early on about the power of no when you’re talking to customers. I didn’t really get this, because in sales you’re like, “Yeah, yeah, we can do this and this.” But when you’re running a business, there’s real power in saying, “No, we’re not going to do this, and here are some reasons why.” More often than not, the customers say, “Okay, cool, that’s fine. It was just an idea.” Actually, if you overpromise, that’s when you find yourself in a bit of a rat race, like a hamster wheel of always trying to catch up, and that’s not necessarily a place you want to be in.

Chris Picardo: Then you’re customizing for every customer, and it’s a problem, right?

Eleanor Lightbody: Then there’s the amount of work when you’re upgrading, which is not somewhere you necessarily want to be. The second thing is, we have a team that focuses on customer feedback and product, but we also have our blue-sky-thinking team, because customers can identify lots of things that they want you to help them with, but they might not necessarily see around the corner, and they might not see the potholes. We’ve got a team very much focused on what we can’t do today that we might be able to do in six months’ time, in a year’s time.

The capabilities of these models are getting better and better and better. That’s no secret to anyone. They’re getting cheaper and cheaper and cheaper. How are we building for the future? Metaphorically, they’re carved out into a separate room because I don’t want them to get too distracted by the noise of the daily operations. I want them always to be thinking about innovative, cool, different end-use cases.

Chris Picardo: I was so curious to talk a little bit more about this concept that you have around, I think you call your innovation team, the jet pilot team. Then you’ve got the core team. I know a lot of founders and CEOs always try to think about how to balance innovation versus core product delivery and what are the ways to do it? You’ve been successful at doing that. How do you think about managing that on a day-to-day basis and having those two different groups working in concert with each other?

Inside Luminance: Running a Dual-Track Product & Innovation

Eleanor Lightbody: It’s always a bit of a balancing act. For us, one of the pillars the whole company is built on is innovation and speed. To be competitive in this market, you have to build very, very fast, and you can only do that if you’ve got a team that really understands the cost of building slowly. There are areas where you need to build slowly, don’t get me wrong, but for new value-adds, features, use cases, and modules that you want to get out there, it’s the classic 80/20 rule. We’ll get it out there and see what the feedback is, choose a few customers to go live with, and then iterate on it and fly.

The way that we’ve got our teams to buy into this is that they’re the ones talking to the customers. The developers will build something, then sit in on calls and see the impact. Historically, developers are siloed away and they’re like, “Oh, but why do we need to build fast? Why does this matter? Let’s make sure that we’re getting it totally right before we push it out.” My argument to them, and they only started believing this when they saw it, was that you don’t know what’s really going to work. You actually have no idea.

We might think in isolation that something is going to land really nicely, but more often than not, sometimes the things that we don’t know are going to land are the ones that people really, really like. So just get it out there. A terrible analogy, but it’s like throwing mud at the wall and seeing what sticks. Throw as much mud and see what sticks, and then you can iterate really quickly on that.

Chris Picardo: Are you able to earn your ability to do that because you’re so close to your customers and able to gain that trust? I think you said something interesting, which is the 80/20 is okay to launch some of these things and be iterative and to get new product in front of customers quickly. Did you earn your way into doing that? Is that something you did? Then customers said, “Hey, this is great, right? Keep it coming.” What’s the nuance there? There are a lot of people who are like, “Hey, if I do launch something and it’s terrible, then have I blown up all of my trust?” Or do people now think it’s not a great product? I think clearly you’re saying, “Yeah, you might have a couple of those things that just don’t land.” But if you launch and you put stuff in front of customers, that’s the cycle to get the best product out there.

Eleanor Lightbody: It’s choosing the right customers. You don’t do this across your whole customer base. AMD is a great example. They came to us and were like, “We love your AI for legal negotiation, but at the moment I still have to review clause by clause. The AI tells me whether I should agree to it or not, and it gives me language I can use instead, but the human still has to go through and say, ‘Yes, I agree,’ or ‘No, I don’t agree.’ It’d be amazing if I could just click a button and it rewrote the whole contract for me.” We were like, “Okay, let’s do that.”

The first few times we showed it to them, it didn’t get it right. We could have easily lost their trust, but it’s all about the framing and the positioning. If you go to these development partners saying this is a fully baked solution, then if it doesn’t work or doesn’t quite hit the mark, of course you’re going to lose trust very quickly. Instead it’s, “No, this is something we’re working on with you and we’re going to iterate. This is the first version of it. Let’s continue this dialogue.” Then, if you pick the right partners, they’re going to love being able to roll up their sleeves and help you with that.

Chris Picardo: You have a set of customers that are basically perpetual design partners that are happy to work with you on iteration knowing that some of it is going to be experimental and you guys will co-work on it together to land the final version of the product.

Eleanor Lightbody: The good thing is that with most of this stuff, we eat our own dog food — that’s the best way to describe it. We test it out ourselves first as a legal team, and if we think, “Yeah, this is going to have legs,” then we go to a few select customers. Sometimes it doesn’t even get past our own legal team, and we’re like, “This is a cool idea, but actually it’s never going to work in reality.” That’s how we come to that.

Chris Picardo: It’s funny because, obviously, you’re so focused on go-to-market, but you have tons of product insight on how to map the two pieces of go-to-market and the product roadmap together. That’s a hard thing sometimes for purely technical or purely product-focused founders and executive teams to be able to see how those two pieces work very closely together.

Shifting from Sales to Product Leadership as a CEO

Eleanor Lightbody: It goes back to the earlier point, which is that I was brought in very much for distribution and go-to-market. Now I spend much less time thinking about that. We’ve got a really good repeatable machine, and I dip in and out of it. Now my role is very much with product, thinking about, “Okay, what are the next use cases?” I absolutely love that.

I was talking to one of the founders the other day, and I was like, “I’m not an AI expert, and yet I always come up with these ideas.” He was like, “Eleanor, they’re not your ideas.” Really, at a high level, I like to think I have great ideas. Building that respect and that trust has been so great, and it’s why we’ve seen so much success.

Chris Picardo: I wanted to circle back to that question, which is how you think about working with technical founders. I’d imagine you still might not consider yourself ultra deeply technical, even though you clearly are pretty technical now on a lot of this. I think Snowflake is the example people often use of very technical founders paired with a very commercially minded CEO, but it seems like it’s a great working relationship that you have. Are there ways that you thought about that, or culturally think about it, to make it so effective?

Eleanor Lightbody: I don’t think either Adam or Graham, who are our founders, will mind me saying this. At the beginning, we definitely had to work through a few things. I didn’t know what I didn’t know, and they didn’t know what they didn’t know. The key thing, again, is understanding where your lane is when you start off. You’re all in the room together because, you would hope, each of you is super strong at something.

We always talk about leaving egos at the door when we get together as a management team. We’re pretty blunt with each other. We give constant feedback, and no one’s off the hook. I get bombarded and the founders get bombarded. When I first started, I was a bit like, “Oh, wow.” Then you realize, “No, this is exactly what success looks like”: we’re constantly holding ourselves to higher standards, and we have really productive, sometimes heated (most of the time not), conversations, and that’s so key.

It’s starting where your strengths are, showing a bit of success, showing that you appreciate each other’s art. The other day I was reminiscing. When I came in, the founders didn’t really know what ARR was. They weren’t plugged into the monthly sales numbers, and they didn’t quite understand that whole world. Now, at the end of the month, the first people to text me before the monthly management call are the founders: “How are our numbers this month? How are the sales? What’s the growth? Why didn’t we win?”

And I’m the one asking them about new AI models. We’ve got a bit of a competition where, whenever a new AI model comes out, I’ll text them, “Am I the first one to know about this?” And they’ll say, “No, you’re still not the first one.” There’s this appreciation now of each other’s worlds, and I love that.

Chris Picardo: Many of the things you’ve been saying come back to aligning the culture: a company that’s going in the same direction, doing things the same way, with guardrails up around what the culture is like. It almost goes back to the initial thing we were talking about, around feeling that energy and alignment at a company that’s just, “Hey, we are scaling, we are going, this is awesome.” It does seem like the explanation underneath a lot of these topics is the cultural alignment you’ve been able to drive, alongside the founders and the team, to set up this version of Luminance where it is.

It’s really fun for me to listen to stories like that, especially because I think it’s unique to do it as the non-founder. To come in and, at this point, be effectively a quasi-founder or a true co-founder of this version of Luminance. It’s very unique to do it that way and at that time, versus, “Hey, I’m going to try to start this from the beginning because I started the company from scratch.”

Scaling People and Culture Alongside Product

Eleanor Lightbody: It comes with a lot of challenges, but it also comes with so much potential. The fact that you can join, change things, and hopefully clearly see both what needs to be improved and what has already been accomplished — that’s been amazing, and it lets you remind people of the amazing things they’ve done. Sometimes they get lost in that. Then you slightly change things up to make sure you’re on the right path to the next stage of growth. I love it.

Chris Picardo: You can tell. It’s fun talking about it, obviously. We’ve touched on this a little, but I want to ask: has anything been particularly hard? I’m sure there are lots of things. Is there anything that surprised you, like, “Wow, that is hard. We’ve got to figure out how to solve that,” that maybe wasn’t on your list of things you thought would be hard as you scaled the company?

Eleanor Lightbody: I mean, it’s all hard. I don’t know if anyone will tell you it’s not, but I’m very honest. It’s also so rewarding and fun, and as I said, you’re so immersed in it. One piece of advice I would give to anyone who is thinking about stepping into this role: there’s no such thing as work-life balance. Leave that at the door.

If you think you’re going to join a company that you want to scale and be successful at, it’s not just you that’s going to be immersed in it. It’s your spouse or partner, your friends, your family. You are all in it, whether they like it or not; you are all in it together. I think that is important. What have I found hard? Maybe this is a bit controversial for a podcast.

I’ll be very open: you really hope that the team you have with you will scale with you, that they will learn and grow. Sometimes, some people don’t, and as you go through that journey, you have to manage that. That is probably one of the hardest things I have found. It was quite a wake-up call. I don’t think you know what it’s going to feel like until you have to go through it.

What is always interesting is making sure that you’re focused, because you can spend a week feeling super productive and busy, but have you really moved the needle, or have you been focused on the things that matter most? The third thing I found, I don’t want to say hard, but I just hadn’t expected it. We hear and read so much about how hard it is, how exciting it is, and how important teams are to innovation. What you don’t hear, or I had heard less about, is how important having the story is, how important having a narrative is. Not just internally, so that everyone in the company is aligned to it, but also for the outside world. In the U.S., marketing is amazing, and I don’t think I had appreciated the power of it until I joined Luminance.

Chris Picardo: Sometimes the English major part of my brain comes out. In another part of my life, I work with a bunch of scientists here at Madrona, and one of the things we talk about a lot is telling your story in simple language that people can understand. It doesn’t matter what you do, or even if you’re talking to people who are extraordinarily deep experts in your field. The ability to do that, and to modulate how you tell your story, really sets apart a lot of very, very successful companies, founders, and executives from people who sometimes struggle with it.

I do want to thank you for making the point about the difficulty of knowing that some people on your team won’t scale, because you only hear people talk about that from time to time. From our investor-CEO relationship perspective, and I’m sure with your board members, that’s a conversation you have more often than people generally acknowledge, and it’s an important and hard thing for everybody involved. It’s nice that you mention for the broader audience that it’s a normal part of scaling a company.

Eleanor Lightbody: Exactly. More often than not, people massively surprise you for the positive; people step up and people grow. The counter to that is that I really believe in giving people opportunities. I took on Luminance when I was 29 years old, which is maybe really old for Silicon Valley, but globally quite young. It’s so powerful to give opportunities to talent that might not necessarily be quite ready, or have the traditional experience, for them.

We’ve seen some of our best employees rise up the ranks and be given tasks and responsibilities that might, at the time, have felt like maybe too much. They have not only risen to the challenge, but they have absolutely excelled at it. I have to remind myself sometimes that some of the best people I’ve worked with were given opportunities young, and to keep doing that, rather than saying, “Oh, no, we need to make sure they’ve got 20 years of experience.”

Chris Picardo: It’s also so important that you say that. I think about that from time to time sitting here in Seattle, we can all name some famous Amazon, Microsoft, et cetera, executives. Almost all of them were given opportunities at those companies really young and were able to scale into those jobs. They didn’t just emerge out of nowhere. I think you giving people those opportunities to be able to do that is, I’m sure, massively beneficial to the company, but also just amazing for the people. It’s such a great cultural piece.

Eleanor Lightbody: It also does so much for you as a leader. You get to watch people mature, grow, and believe more in themselves. That is, I think, one of the most fun parts of my role: seeing someone who was a 22-year-old, now 26, and just so capable, even more than I thought they were going to be.

Chris Picardo: Those people are in your network forever. We’ve met so many CEOs where you’re like, “Wow, there are hundreds of people who have worked for you, been in your organization, been given opportunities, and then gone on to do incredible things.” A lot of them can trace that path all the way back to those opportunities they got, or to being at Luminance at exactly the right time to get an opportunity. That’s got to be an incredible feeling.

Eleanor Lightbody: Yeah.

Chris Picardo: This has been such a fascinating discussion, and there are so many other topics we could have talked about — we probably could have spent an hour on design partners alone, which I find endlessly fascinating. I want to end with a couple of quick questions. The first one is: where are we in the hype cycle? What’s your take on AI? Is it a good thing? Is it a bad thing? What’s real? What’s noise?

Where AI Is Headed — and Why 10x Value Matters

Eleanor Lightbody: It depends on the day you ask me — my mind might change. I’m kidding. Fundamentally, there is so much that AI can do for good, there is so much impact, and I think we have only scratched the surface. But historically, up until today, people have seen 2x or 3x productivity gains. To change behaviors, you need to see a 10x productivity change. We’re about to start seeing that, and that’s going to be absolutely massive.

So, where are we in the hype cycle? Everyone’s got a different opinion. Do I believe that we are only starting to see the potential, and that we’re going to see more and more? Absolutely. The conversation, I think, has moved on from, “Okay, cool, this is really useful. This could have a huge amount of impact.” Now people are asking, “What’s the return on investment? How am I driving adoption? Why am I actually using this?” Those are the really interesting places to be.

Chris Picardo: What’s a belief you had a year ago or last week, I don’t know, about AI that you think turned out to be totally wrong?

Eleanor Lightbody: It goes back almost to the point I just made. I always thought that if you could save someone 50% of their time, or save X amount of costs, or give your customer 2x productivity gains in some way, shape, or form, that would be enough for adoption and it would become totally ingrained.

To change human habits, it always has to be much more than 2x. A few years ago, I realized, “Oh wow, this is not just about being more efficient or more productive. What is the real intrinsic value?” If you don’t have that value, then you might be a one-hit wonder, essentially. People use you, then get tired of you and move on to the next thing.

Chris Picardo: If you look out into the future, what has you the most excited or what do you think we should all broadly be really excited about?

Eleanor Lightbody: I think today the human is still in the driving seat with AI. The human is still putting the inputs in and, more often than not, checking the outputs. That’s going to invert: the AI is going to be massively in the driving seat, and the human is going to be there to slightly tweak levers and put some guardrails up. I’m so excited for that.

From the humanist point of view, with AI negotiating against AI, I think we’re going to live in a world where the first, second, and third passes of most legal contracts are done by AI on either side. And beyond legal: drug discovery, the impact we can have on personalized medicine, the impact we can have on curing some of the diseases we’ve been trying to cure for years. We’ve only scratched the surface there, but it’s going to be so, so positively beneficial for society as a whole.

Chris Picardo: Yeah, that’s the other part of my world. I 100% agree with you, Eleanor. This has been so fun, and we could have talked so much more about so many of these topics, but I really appreciate you joining us on the podcast.

Eleanor Lightbody: Thank you so much for having me.

Tune in to the next episode on May 28 to hear from Microsoft CVP Charles Lamanna.

Customer Obsession & Agentic AI Power Ravenna’s Reinvention of Internal Ops

 

Most startups bolt AI onto old products. Ravenna reimagined the entire workflow.

When we first met Ravenna Founders Kevin Coleman and Taylor Halliday, it was clear they weren’t just chasing the hype cycle. They were pairing AI-native architecture with deep founder-market fit, and rebuilding how internal ops work — from first principles.

Their new company is going after a market dominated by legacy players. But instead of being intimidated by incumbents, they got focused, making some smart moves that more early-stage teams should consider:

  1. Speak with 30+ customers before writing a line of code
  2. Define a clear ICP and pain points
  3. Build natively for Slack — where support actually happens
  4. Prioritize automation, iteration, and real workflow transformation
  5. Stay radically transparent with investors and early customers

In this episode of Founded & Funded, these two sit down with Madrona Managing Director Tim Porter and talk through their journey, what they did differently this second time around as co-founders, and how they’re building a durable, agentic platform for internal support.

If you’re a founder building in AI, SaaS, or ops — this conversation is full of lessons worth hearing.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Tim: So I mentioned in the intro that you’ve done a company together before, and this is a second one. We’re super excited to have been able to invest in the company — an announcement that just came out recently. But let’s go back. Tell us about the moment you decided to start Ravenna. What problems did you see that made you say, “Hey, we’ve got to go solve this for customers”?

Kevin: I was at Amazon for four years, and I think the whole time I was there, I was looking around trying to figure out what was going to be the next company we’d go and do. It took a while to find it, but about halfway through my tenure there, I realized one day that I was spending a lot of time in an internal piece of software Amazon has that serves as the help desk across a lot of different teams. It was the tool where I would go to request onboarding for new employees or to request new dashboards from our BI team. My teams would use it for other teams to request work from us. I realized I was spending so much time in this tool, and it wasn’t a great product experience. The way I always described it to folks is that it was like the grease in the enterprise gears, if you will. It was the way that things got done internally.

And so I got obsessed with what is this product category? It’s so foundational to how Amazon as a business operates and I started doing a bunch of research in this space. I found out it’s called enterprise service management, which is the category. ServiceNow is the leader. I finally understood what ServiceNow as a business did and why they’re such a valuable business and how large this market is. I started thinking, what does a next-generation, amazing version of this product look like if a very innovative startup built it that cared about design and user experience and cared about automation as well? So really, what does the next generation ESM platform look like?

Tim: I love that, because ESM is a category, it’s a big market, and ServiceNow is the leader. But I also think, like a lot of things, Amazon did it in an innovative, scrappier way. You actually used it for more things. This was the way you requested and got things done across different groups, as opposed to, “Well, we’ve got to log it into this system of record so somebody has a record of it.” No, this was actually the way work got done.

Kevin: Yeah, absolutely. And so I came up with this concept and when a concept gets lodged in your mind, you can’t get rid of it. I went and ran to Taylor, who obviously was my co-founder previously and the guy I wanted to start the next company with, and I said, “Hey, I’ve got this awesome idea. We’re going to build a next-generation enterprise service management platform.”

Taylor: At first blush, it was tickets and queues. I was looking at this like, “Is this Jira? I don’t quite understand what’s going on here.” But it came at a good time. So rewind the clock: ChatGPT hit the scene, and Zapier, just like probably every other company on the planet, had a little mini freak-out — “What do we do with this? What does this mean for us? For product strategy?” At the time, I was lucky enough to pair up with some of the founders to do a listening tour. We went around to mid-market-sized companies and talked to C-suite directors, executives, what have you. Obviously, Zapier is known for its automation game; we wanted to figure out what would be a great solution in the world of AI/LLMs to bring a new level of value to them.

We asked an open question: where would you like us to poke and prod? We did 20 to 30 of these calls, and it became resoundingly clear: we kept hearing about internal operations, over and over and over.

I had a hard time picking through that in my own head, and I even had a blunt conversation with one of the CEOs: “I’ve heard this so many times. What’s the deal? What’s going on here? Why do I keep hearing about internal operations?” I think there were a couple of answers. One, “I can wrap my head a lot better around internal efficiencies, or the lack thereof, at this company, and what we’re hoping to do.” Then there was this gap: “I don’t have good visibility into what folks are actually doing. I can’t drive efficiency top-down at my company. As much as I say, ‘Hey, look, I want you to be more efficient, do these different things,’ I don’t know super well what the marketer or the engineer is doing.” They all saw AI as an opportunity to drive some of this efficiency bottoms-up, without it being a top-down thing.

I think that was a lot of the interest. So when Kevin ran to me, it was like, “I have this idea circling around, this internal management tool, and there’s an opportunity, perhaps, in this larger market that’s old, where 30-year-plus incumbents are all over the place.” That’s what got me interested and sparked a lot of the collaboration in the early days.

Tim: We think a lot about founder market fit when we’re looking at new investments, and I remember the first meeting Matt McIlwain and I had together with you guys — we both left like, “Oh, my gosh. We have to find a way to fund this.” It was this unbelievable founder market fit: you had lived the tool, using it at Amazon, and you literally witnessed it across all of those customers at Zapier who were using it to get automations in place, but it wasn’t a full product. So it was awesome to see you both come together with those insights. I’m going to abstract away a bit. We’ll come back to Ravenna and the specifics of enterprise service management, but you guys have done a company before.

Kevin: We have.

Tim: You’ve been through YC, and you decided to do it again. That’s a testament to your working style. But were there things like, “Okay, we’ve got the idea again.” And other things like, “Hey, we’re going to do something different. We’re going to do it the same”? How is it different for you guys going at it a second time around?

Taylor: It’s funny — you mentioned a past company, and I always joke that it depends on how you count a company. Kevin and I have been working together for quite a long time, whether that was in the early days, finding a coffee shop or a bar at night to work on the smallest app, or later in San Francisco, where we teamed up and tried to start a CDP of sorts, went through YC with it, and molded it into several different things. Regardless, we were taking stock of that history, if you will.

To your point: what went wrong, and what went right? An interesting way of characterizing it, and I feel like a lot of entrepreneurs do this in the early innings, is chasing the “new, new,” the thing that doesn’t exist out there yet. If you take a retrospective look at the stuff we’ve put a lot of time and effort into, it circles around that, which is, frankly, some of the fun and excitement of being with some buddies and saying, “Hey, this doesn’t exist yet. What if we made this?”

This time we were thinking, “Hey, look, what if we flipped it around? Instead of doing that approach, let’s find a market that’s super well-defined and focus in on opportunities to bring a genuinely better experience.” Especially in the age of AI, it seemed like the perfect time to target this one in particular.

Kevin: Taylor hit the nail on the head there. For as long as we’ve been building software together, we’ve always tried to identify something that doesn’t exist and to shy away from competition, and this time we’re taking it head-on. We’re super excited about that. Big markets are exciting. We don’t have to go find a small group of people who need what we make; we know there’s a ton of people out there who need what we’re building. That’s really exciting to us.

Taylor: As part of that analysis of “What do we want to work on, and where do we want to press?” I remember talking to you, Kevin, and taking a step back: “What kinds of risk do we want to take on?” We framed it like that. Going back to my earlier point, I would characterize a lot of the early endeavors as pretty high in market risk. This time, we were trying to figure out, “Hey, let’s not optimize for that. Let’s optimize for something else.”

To compliment ourselves a bit, I think we’re pretty good at building a lot of product, and doing it pretty fast, too, and at pulling together a lot of good folks to work with. So human capital risk, call it, I didn’t see on the table. By going after a larger market, we tried to take market risk off the table as well. What we optimized for taking on instead was what we thought of as go-to-market risk.

Kevin: The other thing I’d say we’re doing better on — I don’t know if we’re doing great at it, but we’re doing better this time around — is understanding who our customer is and being super clear about what we’re building and for whom. The ICP, ideal customer profile, if you will. Taylor mentioned the last company; the first product we built was a customer data platform. At his startup and at mine, we had problems with our customer data. It was sprawling all over the place, and folks who were non-technical were always asking us to integrate various tools so they could get customer data where it needed to go. We would go around to potential customers and say, “Hey, you probably have problems with your customer data. Can we help you?” And they’d say, “Yes, of course we do.”

The problem was that the pain points were all over the place. There wasn’t one product we could identify that would cut across a bunch of companies. Part of that was that we were early entrepreneurs who didn’t know what we were doing. This time around, before we built any software, we spent months talking to customers, understanding the space and the pain they had, before we started writing a line of code. We wanted to be super clear about our ICP, what they needed, and what their problem was, and then we backed into the product from that. So, a hundred percent, we’re doing a lot better on that front than last time, and we think it’s definitely the right way to go.

Tim: Well, this is definitely a super big market, and another thing that came through from the beginning, as we have been engaging and working together, is customer-driven, customer-driven. That maniacal customer focus is maybe the core attribute of successful startups. So that’s been awesome.

Let’s talk a little bit more about what the product does and bring it more to life. I’ll lead you into that by talking about some of the investment themes that Madrona has that we thought Ravenna embodies. A big part of that is AI and part of the why now for Ravenna. Probably our biggest theme is around how AI can reimagine and disrupt different enterprise apps. You’re using what I would call or many in the industry would call an agentic approach where you can actually create various agents that don’t just surface insights but can automate and finish tasks. This world and this product area is really ripe for that, and you’ve done some interesting things there.

And then new UIs. The user experience: you’ve embraced Slack as a place where work is getting done and made the product extremely Slack-native, fully integrated into people’s existing workflow, with an ethos around clean, simple design. Taylor, you and I talked about this the very first time we talked about the product, but maybe give a better description. Okay, great, service management, tickets, people have something in mind, but say more about the key features. Then maybe tie that back to when you were out talking to these initial prospects: what did you hear about what was missing, and what could you deliver in your product to make this experience such a big leap forward?

Taylor: Going back to why we picked this. There’s a well-known UX product pattern that you see basically in this market, and we weren’t very impressed by what we saw. In the age of AI/LLMs, the popular thing, I would argue, would be to come at this with an intelligence layer. We definitely considered that, and we made a conscious decision on what we think is maybe where the longer-term value is — but also perhaps the tougher path — which is that we’re not just building the intelligence layer for this market. We have a lot of confidence, conviction, if you will, that there’s room for a newly rethought platform. What that means in actual practice, for those who are familiar with the space: a help desk is probably the most down-to-earth version.

Tickets and queues: it’s a very similar pattern to what you’d expect from customer service software; the primary difference is that this software is geared towards solving your colleagues’ problems. The canonical example is the IT help desk. You ask for a password reset, new equipment, what have you; that creates a case, it creates a ticket. That’s the typical way of going about this. We’re not talking purely about the intelligence layer and the agents, which we are super excited about (I think we have a lot of fun stuff there), but also very much about building and rethinking what the larger brick-and-mortar ticketing platform looks like.

Kevin: Yeah, 100%. So enterprise service management is the category. That’s a very broad term. Most people don’t know what enterprise service management is. The easiest way to think about it is it’s an internal employee support platform, internal customer support platform if you will. So, you have functions across an organization, whether it’s rev ops, HR ops, sometimes called people ops, facilities, legal, etc. They all offer internal services. What I mean by that is they offer services that other colleagues can leverage.

So in legal, a service might be, “Hey, can you review this contract?” In facilities, it might be, “Hey, my desk is broken, can somebody come and fix it?” And so this pattern exists across companies, and what people need is a tool that allows them to intake these requests, assign those requests, resolve the requests, and then get reporting and analytics. Increasingly, with AI and automation, classic workflow automation, they want to automate a lot of this work as well.

What we’re building is a platform that allows any team within a company to offer a best-of-breed service, a best-of-breed help desk, provide amazing service to their colleagues, and then also automate a lot of their work with our AI. That’s a pretty straightforward way of describing it.

Tim: You recently were part of a launch that Slack did for partner companies. Pretty cool. You’re Slack-native and yet a new company, so kind of an interesting series of events maybe led to that. What’s the background on that, and what has it been like trying to partner closely with Slack?

Kevin: I’ll say upfront that when you start a company, weird, cool, fun stuff just happens. It’s kind of like Murphy’s Law, right? Anything that can happen will happen. It feels like that is embodied in a startup to a certain extent. So yeah, we were a launch partner for the new AI and assistants app category in the Slack marketplace. You can find Ravenna in the Slack marketplace, which is super cool.

The way it happened is very fun. Matt McIlwain, who is obviously your partner here at Madrona, when we were going through our recent fundraise, he said, “Hey, there’s a local entrepreneur you should go and talk to.” He made the introduction, and this local entrepreneur went on a walk with Taylor, heard what we were talking about and what we were building, and said, “Hey, a certain high-level executive at a large CRM company in the Bay Area,” which happens to be Slack’s parent company, “should learn about this.” We were like, “Of course, anybody who’s an executive of these companies should learn about us.”

They ended up forwarding along our deck. That got forwarded over to the executive team at Slack, and they got in touch with us and said, “Hey, what you guys are doing is super interesting, we should talk.” We had a conversation, and we got a Slack channel open with a couple of those folks, as you do when you’re working with folks at Slack. Then we noticed that this new app category was coming out. So because we had that introduction there, we reached out and said, “Hey, we think Ravenna fits really nicely into this new app category. What’s going on here? How can we get involved?”

It was, fortunately, really good timing. We got connected with the partnership folks over there, and they said, “We’re launching this category in two months. If you guys can get your stuff ready, we’re happy to feature you as a launch partner.” Funny how these things work out.

Tim: You all have been great about using advisers but also using your own networks to get feedback. You never know where it’s going to go.

Kevin: You never know, you never know.

Tim: This is another example of putting yourself out there, and getting the feedback. Sometimes it takes you right through to the CEO’s desk.

Taylor: As Kevin mentioned, it’s one of the funner parts, to be frank with you. If you have the humility to understand that there’s so much out there to learn — especially going into a category where you’re trying to make some hay and do a different thing — it’s valuable to get a lot of perspectives. The more of that you do, the more you get out of it: tactically you might pick up some Ps-and-Qs kind of learnings along the way, but some of the funner random doors get opened too, such as that one.

Tim: One thing I think is cool too, and part of it is using Slack, and part of it is how you can pull data in from other places — is that questions get asked, and people didn’t realize this question’s been answered already. How do you create this instant knowledge base from what’s already scattered all over Slack, or from an existing knowledge base that is there but people don’t go look at? It’s easier to fire off a Slack message like, “Hey, Taylor, can you tell me the answer to X?” And from that, you can create an automation so the person gets their answer, the task gets finished, and you didn’t have to do anything, right? That’s a big unlock here.

Kevin: You’ve mentioned Slack a couple times, and we should revisit that really quickly. Slack is the interface for the end customer of the platform. That’s super critical and was a learning during our listening tour at the beginning of last year. With the traditional help desk, there’s basically a customer portal where you go, you fill out some form, and then your request goes into the ether, and you don’t know what happens to it until somebody pings you back a couple of days later, like, “Hey, we resolved your issue.”

What basically every customer across the board told us is employee support happens in Slack now. So, “If you guys are going to build this platform, everything needs to be Slack native, that’s where our employees work. We don’t want to take them out of there. That’s super key to us.” If you go to our website, it’s very clear that we’re deeply integrated with Slack. So, we started building into Slack, and then, to your point about knowledge, we started talking to customers and said, “Hey, you get a lot of repeat questions. A lot of those questions pertain to documents or knowledge bases that you’ve written. If you give us access to those, we can ingest them and use AI to basically automate answers to those questions so you don’t have to answer them over and over again. Just to save you time.”

Some people were like, “That’s amazing, let’s definitely do that.” Other people basically said, “Yeah, it’s not going to work for us.” And so we were like, “Okay, why not?” They were like, “We don’t have good knowledge. We don’t have time to maintain it, it gets out of date really quickly and, frankly, it just doesn’t make the priority list.” And so we asked the next question, which is, “Okay, if you hire somebody, how do they get up to speed? How do they learn how to answer these questions if you’re answering them in Slack?” And they were like, “We literally point them to Slack channels and say, ‘Go read up on how we answer these questions and that’s how you should answer going forward.'”

That was this light-bulb moment: there is a treasure trove of corporate information, really knowledge, that exists in Slack, or any team chat application (so Teams as well), just sitting there. And companies don’t derive a ton of value from it. A lot of what we’re trying to build is not only to give operators of these help desks tools to turn Slack conversations into knowledge-base articles, but to build a system that can learn autonomously over time.

You should assume that when you’re using Ravenna, your answers are going to get better over time. The system’s going to get better at resolving your internal employees’ queries over time because we’re listening to that data and evolving the way that we respond and take action based on how your employees are answering their colleagues’ questions.
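For readers curious what this kind of learning loop could look like in practice, here is a minimal, hypothetical sketch: index resolved question-and-answer threads, then retrieve the closest prior answer for a new question. Everything here (class names, the bag-of-words matching, the thresholds) is invented for illustration and does not reflect Ravenna’s actual implementation.

```python
# Hypothetical sketch of a chat-derived knowledge loop: index resolved
# question/answer threads, then surface the best match for a new question.
# Illustrative only -- not how Ravenna actually works.
import math
from collections import Counter

def _vec(text: str) -> Counter:
    # Crude bag-of-words representation; a real system would use embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ThreadKB:
    """A tiny knowledge base built from resolved help-desk threads."""

    def __init__(self):
        self.threads = []  # (question, answer) pairs mined from chat history

    def learn(self, question: str, answer: str) -> None:
        # In a real system this would trigger when a thread is resolved.
        self.threads.append((question, answer))

    def suggest(self, question: str, threshold: float = 0.3):
        # Return the best prior answer, or None if nothing is close enough.
        q = _vec(question)
        scored = [(_cosine(q, _vec(past_q)), ans) for past_q, ans in self.threads]
        best = max(scored, default=(0.0, None))
        return best[1] if best[0] >= threshold else None

kb = ThreadKB()
kb.learn("how do I reset my VPN password", "Use the self-service portal at go/reset.")
kb.learn("my desk is broken", "File a facilities ticket in #facilities.")
print(kb.suggest("password reset for VPN?"))
```

The design choice mirrored here is the one Kevin describes: the system gets better over time simply because every resolved thread becomes new training material, with no one maintaining a knowledge base by hand.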

Tim: One of the things that is super exciting here is that I see this as how work gets done inside businesses, and it’s really broadly applicable. On the other hand, a truism about successful startups is that they stay focused, and there is this IT scenario initially where IT is used to using tickets, people are used to asking IT for things. Those things tend to be maybe more automatable, I don’t know. But how do you balance that? Staying focused on, let’s just go nail IT service management, ITSM, versus we have this broader vision around better automation for how enterprises get work done. How do you get that balance right? What are you learning from customers and where are they drawn to initially as you start talking to them and start working together through an initial set of users and customers?

Taylor: I’m going to tie this back to some of the questions you asked around what got me excited about working on this. Rewind the clock: Kevin runs over, “I got this great idea. The market’s called ITSM.” I’m like, “What? I haven’t heard of this thing.” “No, it’s a huge market.” “Really? I’ve never heard of this acronym before.” ITSM is the name of the larger market, and that’s what the space has traditionally been known as. Okay, half that acronym is IT.

Today if you say, “Look, who’s the ICP? Who do you want us to introduce you to at a company?” We’re going to say, “Look, it’s the IT manager.” And it’s because they know what it is. Again, it’s a longstanding industry; they know what to call it. They know that funny acronym. They know the pain points very, very well, and they understand how to wade through the software. And so that is typically, I’d say, the beachhead, if you will, for us when approaching a company.

Tim: That’s the initial wedge. That’s a great place to enter.

Taylor: Correct. Where this gets more interesting, in my opinion, though: I remember noodling on this thing. I was looking at Zapier’s help desk channel, kind of looking through it and being like, “Huh, this is not the most inspiring stuff: password resets, what have you. Is this really this massive market that Kevin’s super excited about?” No shade if anyone from Zapier’s listening in. The channel’s great, by the way. But what light-bulbed for me, looking around the rest of the company, was that it was the same interaction pattern. The same user pattern that you see in what was traditionally known as the help desk channel — that same pattern is present in HR ops. It’s the same thing that you see in marketing ops. It’s the same thing you see in engineering ops.

It was interesting, though, because I was being very coy interviewing a lot of folks back then. IT knows what they call it. They know what the class of software is, right? But among the folks in charge of marketing ops or engineering ops, I couldn’t find many who knew the acronym ITSM, so I stopped asking pretty early, but they know the pain. I started to realize, circling around to it, “Look, if you are in, call it, an ops position (marketing, engineering, pick your flavor of department), and your job is to provide great service to your colleagues, you are operating a help desk, whether or not you know it. That’s the fact of the matter.” So again, to your question about who we start with: we start with IT; it’s the most well-known commodity in that space.

The excitement for me was: maybe it’s broader than IT, maybe there’s more stuff than that. That’s grown to be true so far in these early innings: other folks see basically a better class of software being introduced by IT. It’s this interesting, infectious thing, people being like, “Wait, what is that? Where’d that come from?”

And so, in terms of precedence, IT is the number one persona, and that’s the one where we’re going to, I’d say, charge ahead the absolute most in terms of the bespoke workflows that they have and the ones we have to help automate better. Nonetheless, HR ops seems to be the one we’ve seen the most organic pull with; it’s kind of second in position, and after that is revenue ops.

Kevin: I’ll give you a very concrete example. This morning I had a demo call with a large email marketing tool in Europe. They’ve got these four IT guys on the call, like, “Hey, we’re looking for a new tool. We need a new help desk tool, we need AI, etc.” Then, halfway through, as they’re going through all the requirements, they’re like, “Oh, by the way, it’s not just us. It’s facilities, it’s HR,” and I think they said product is the other team. That happens all the time.

We are always talking to IT people, and it always comes up on our calls: “It’s not just for us. Other people who offer internal services need this as well.” So it’s exciting for us because IT is the entry point, but then you’ve got this really nice glide path into the rest of the organization. Again, I don’t know if it’s a secret or whatnot, but it’s one of our core learnings as you go through this journey — there’s a lot of teams across these companies who need this type of tool. So that’s exciting for us.

Tim: Yeah, it’s an interesting form of land and expand.

Kevin: Yeah, exactly.

Tim: IT has budget, they get it, they need it, but everybody is asking them for something, so you can get sort of a viral spread, and there’s no difference in the product functionality whether you’re using it for sales ops or for IT ops.

We referenced ServiceNow a couple of times; it’s one of the most valuable application software companies in the world, with a $175 billion market cap. VCs like to use shorthand to describe companies and investments: one of our best investments ever, Rover, was Airbnb for dogs. I’ve shorthanded Ravenna as an AI-native ServiceNow for mid-market and growing companies. ServiceNow is obviously upper enterprise. You like that moniker? Should I keep using it, or do I need to disabuse myself of that type of VC speak?

Taylor: I think that’s a good one. It ties back, for me at least, to the distinction I made earlier around the platform versus the intelligence layer, the “well, what are you guys doing?” question. I always like to joke, for better or for worse, that we’re doing both. I say for better or for worse because, again, it’s a lot of software, but that’s where the conviction is. ServiceNow is a company we view as having taken a very similar bet a long time ago: “Look, we want to actually own the data layer. We want to actually be the thing that is close to basically all the customer data and the employee data at a company.” We view that as a more durable, longer-term play than just the intelligence layer. And so, I like the moniker.

Kevin: Definitely like the moniker.

Tim: All right, I’ll stick with it. So, it’s been fun in this conversation as you ping-pong back and forth, Taylor talking about go-to-market things, Kevin talking about product things. Taylor, your background is traditional engineering leadership. Kevin, you most recently have been doing go-to-market at Amazon, but an engineer by background. How do you divide it up? How do you divide up the responsibilities inside the company? That’s always an interesting thing that sometimes founders struggle with, is we’re full stack, you guys are both full stack, but we have to have some roles and responsibilities here.

Taylor: For Kevin and me, given how long we’ve worked together, I think it’s probably more blurry than most, but I think that’s also one of the benefits of working with him. I know him so well that I can trust him with a wide range of things. That all said, we do try to divide up the product and how we go about this. I’ve tried to focus more on the AI automation side of the fence. Kevin’s very much more on, call it the broader platform side of the fence, and so that’s, roughly speaking, the split from a product angle.

From a go-to-market angle, I’d say it’s messy at this point. We’re a young startup, it’s kind of like hit the network, hit all your networks.

Tim: Both of you are on customer prospect calls all the time.

Kevin: Of course.

Taylor: I mean — roles and responsibilities only matter so much; if there are people in your network that you think might want to buy this kind of stuff, we’ve got to go do that. It’s good to have some delineation between roles, but I think at the earliest stages it’s just messy, and embracing that is part of the deal.

Tim: Another way you run the business that was super nice for us in the process leading up to investing is that you’re radically transparent. All of the prospect calls or customer calls, all those videos you record, they were all on Slack. You just gave us access to all of them: “Here, go watch them and see what we’re learning and help us along the way.” That was super nice. But that must also permeate through your organization, and maybe it gets to the culture a bit. Maybe speak to the culture some and what you’re trying to be intentional about in terms of building culture in these relatively early days of Ravenna.

Kevin: I think this was, for me at least, a core learning from the first business. We didn’t do a good job of talking about what we were doing or telling people what we were doing. Part of it was, I don’t know, I didn’t think the business that we had was the most exciting thing in the world, so there was a little bit of not wanting to broadcast it that much. I would hang out with friends and whatnot, and they wouldn’t know what my business was back then, and I would be kind of frustrated internally, like, “How do you not know? We don’t have a lot of friends who started businesses, you should know.” But the fact of the matter is, they shouldn’t have had to know. I should have been a lot more vocal about our business.

This time around, I think there are two things. We want as many people to know about what we’re doing as possible because we think it’s pretty cool. Hopefully, other people will think it’s pretty cool. Hopefully, customers will think it’s pretty cool. The other thing is we want as many sharp minds helping us, on the product and the business, as possible. We think the way to accomplish both of those goals is being radically transparent: with our team and with our customers. When we talk about the roadmap, or when we talk about the stage of the business, what we have and what we don’t, it’s all an open book, and we’re very transparent with them on where we’re at and where we’re going.

With investors as well, we shared a ton of stuff with you guys, and it wasn’t an angle to get you guys excited about what we were doing. It was more that we really liked you guys. We thought you were really sharp, and if we share a lot of stuff and you guys see what’s going on, hopefully you’ll get excited about the business. But then also hopefully you’ll, I don’t know, see something that we’re doing and be able to give us feedback on how we can sell better, how we can build better, pattern match across different portfolio companies that you’ve seen and help us. We want everybody to know what we’re doing, and we want as many smart people helping us and being transparent helps us accomplish this.

Tim: Super effective. We should say that the other investors, who were in even before us, Khosla and Founders’ Co-op, have really been, I’d say, best practice in creating a great collaborative style where we’re always up to speed and can try to add value.

It probably has impacted recruiting too. It’s a hard recruiting market, especially for good technical talent and AI talent. You’ve done an amazing job of building the initial engineering team, including great AI background. Without giving away any hiring secrets, talk a little bit about how you’ve been able to do that. It’s never easy, but you’ve made it look relatively easy in these early days. What’s it been like in this hiring market, especially when you’re competing for AI talent?

Taylor: I don’t have any deeply held secrets.

Tim: At least that you’re going to share.

Taylor: If I did, I wouldn’t give them away on a podcast anyway. But really, we’re super excited about the team we have, and I think equally proud of the culture we’ve been much more intentional about building this time around. We’ve tried to hold a high bar with the folks we’re interviewing. I think that was more of a self-serving thing originally, but I like to think it comes through, frankly speaking, for a lot of the folks we’re speaking with.

It’s not just about the mission per se; it’s also about knowing that we’ve built quite a bit of software in our past lives and have a lot of perspective and a lot of conviction, not just about the market we’ve talked a lot about, but also about how to go about building this and how we’re thinking about taking a different approach. I think that in itself has helped attract a lot of folks that, frankly speaking, we’re honored to be working with at this point.

Kevin: Totally. My playbook, I’m happy to share it because it’s pretty simple. I reach out to a lot of people and I tell them that Khosla and Madrona put some money into a company to help go after ServiceNow’s market, and people get excited about it. Yeah, it’s just trying to find good people and trying to get them to have a conversation with you and then explain the vision of what we’re doing and why we think not only the opportunity is really big, but we want to build the next great Northwest software company, if not West Coast software company.

We want to be intentional about building an amazing engineering culture, an amazing product culture, an amazing culture that works backward from customers. Amazon likes to say they’re the most customer-centric company. Hopefully, we’re going to be the most customer-centric company over time. And we’re very much striving to do that right now, but just really build a great place where people want to come work.

Tim: What’s an example of an assumption you had coming into this company that, a year later, turned out to be wrong, and you had to quickly work through? Not necessarily a 180-degree change in direction, but constantly sort of course correcting.

Taylor: It goes back to what I said about picking a large market, and being conscious about that. Nothing in life is free. You get into it and you quickly realize a couple of different things. If you pick a large existing market, sure, people know it, you can assign a market cap to it. It probably makes the investor conversation a little easier in terms of figuring out what the TAM is. But once you start actually building, you quickly realize that a well-known market has a lot of well-known features, a lot of well-known capabilities, a lot of well-known expectations from the buyer. Which on some level is good; it kind of clears things up.

The trade-off we found is that it translates into a lot of software. So again, for better or for worse, that fits well with some of our strengths and some of the recruiting that we’ve done. We’ve been moving extremely fast because we have to. Another quick tenet for Kevin and me, in the way we think about building companies: the whole stealth thing is orthogonal to us. I’m not going to go so far as to bash the folks who want to do that type of thing.

One of the learnings from our journey is that there’s nothing more true, harsh, and real than the market. Every bit of time that you spend not interfacing with that market with what you’re building is a gap that you are accumulating and accumulating. One thing we always talk about at Ravenna is making contact with reality as fast as possible.

Tim: I agree. I think the value from asking for feedback, shipping, and getting the feedback from actually shipping so outweighs any risk of, “Gosh, somebody else took my idea,” or, “We should have stayed in stealth longer.” It’s just not even close. You guys have lived that. We keep talking about this big market. We alluded to this: a way to think about Ravenna is an AI-native ServiceNow for the mid-market. So ServiceNow just did a big acquisition.

Kevin: Yeah, it did.

Tim: They bought this company called Moveworks, you know, the biggest acquisition in the history of ServiceNow. It’s kind of an AI ITSM. How do you think about that move? Is it relevant for Ravenna? How is Moveworks similar to or different from the product you’re building and the market you’re going after?

Kevin: In terms of whether it’s relevant? Sure, it’s relevant in the sense that it’s definitely in the market we’re playing in. We got really excited when we saw it. Clearly, we’re not the only smart people in the world who know there’s a lot of opportunity in this space, but it’s exciting to see the activity, and obviously it’s a big acquisition, so it’s cool to see.

Moveworks is a previous-generation AI intelligence layer on top of existing help desks. It was brought up a lot by investors when we were going through our initial fundraising: “Are you guys trying to be Moveworks? Are you trying to be ServiceNow? How do you think about it?” Because there’s the AI, but there’s also the platform. Our approach is distinct in the sense that Moveworks sits on top of existing platforms like ServiceNow, whereas we’re trying to build the foundational platform plus the intelligence layer on top.

At the end of the day, customers will get similar AI capabilities from Ravenna, but current, next-gen capabilities, because we’re LLM-native. I think they’re built on previous-generation NLP technologies.

Tim: Which has a huge impact on accuracy and whether it works?

Kevin: We think so. Yeah, exactly. I mean, no shade or anything to the Moveworks folks. They’ve clearly built an awesome business and had an amazing outcome and congratulations to the team because that’s fantastic. That’s what every entrepreneur strives for. We just believe, in the fullness of time, the majority of the value accrues to the platform if you can become the system of record. We honestly felt like this was the time to take a shot at building a new system of record in this space. That’s one of the fundamental differences between us.

Now, in terms of near-term impacts on the market, I’m not sure what ServiceNow’s plans are for Moveworks, but there is a large, call it mid-market-to-enterprise, segment of customers who need these AI capabilities. Whether Moveworks continues to play there or ServiceNow brings it more upmarket into large enterprise, which is where they like to play, there’s just a lot of opportunity for us in this space.

Tim: Yeah, that’s a great point because I think the things we talked about, Slack, easy to get up and running, beautiful UI, but another thing is price point.

Kevin: Yeah, very true.

Tim: You get a lot of functionality at the enterprise level, but you’re making this accessible at a price point that works for faster-growing companies, and for them to grow with you.

Kevin: 100%.

Tim: We’ve talked about how AI is an integral part of the product, and you also built AI systems at Zapier, Taylor. One question we think about a lot from an investment standpoint is what’s durable. Is there a moat from the AI itself? What’s your take on that? Do you feel like the technology itself is a place where you can build competitive advantage? You’re building an agent-based system here. What does that mean to you, and is that part of what you think you’ll provide customers — durable competitive advantage over time?

Taylor: This goes back to the things that got me excited about this originally. It might be useful first to break down what we mean when we say AI and automation here. Big generalization, big time: 50% of the automation falls into the category we talked about earlier, around, “Hey, there’s information somewhere. It’s in Slack, it’s in a KB, it’s in these other interesting places. Can we answer that in a more automated way?” That’s one side of it.

The other side of it is actions. When I say that, for lack of a better example: instead of asking, “Hey, what’s the procedure to reset my password?” it’s more interesting to say, “Hey, can you reset my password,” right? Actions. The first side, I think we covered decently well. One of the things Kevin touched on is creating knowledge. That’s a very interesting thing here: whether or not you want to call it us building a KB, we haven’t gone so far as to put that stake in the ground as a product feature yet.

Nonetheless, one of the things that gets me excited about the idea is that Ravenna grows with you. This knowledge is in all these disparate places, and we have the ability to go through, home in on where people work, and make Ravenna better.
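To make the answers-versus-actions distinction Taylor draws concrete, here is a minimal, hypothetical sketch of the “actions” side: a request is routed either to an executable action or to a knowledge fallback. The action names, keyword matching, and handlers are all invented for illustration; a real agent would use an LLM to select tools and would call actual systems.

```python
# Hypothetical sketch of routing a help-desk request to an action ("do it")
# versus falling back to a knowledge answer ("tell me how"). Illustrative
# only; the keyword matching stands in for real LLM-based tool selection.
from typing import Callable

ACTIONS: dict[str, Callable[[str], str]] = {}

def action(keyword: str):
    """Register a handler that fires when a request contains `keyword`."""
    def register(fn: Callable[[str], str]):
        ACTIONS[keyword] = fn
        return fn
    return register

@action("reset my password")
def reset_password(user: str) -> str:
    # A real agent would call the identity provider's API here.
    return f"Password reset link sent to {user}."

def handle(user: str, request: str) -> str:
    text = request.lower()
    for keyword, fn in ACTIONS.items():
        if keyword in text:
            return fn(user)  # "actions" path: execute, don't just explain
    return "No matching action; falling back to knowledge-base lookup."

print(handle("sam", "Hey, can you reset my password?"))
```

The point of the split is the one made in the conversation: answering “what’s the procedure?” is retrieval, while “can you do it for me?” requires the system to take the action itself.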

Tim: Awesome. So exciting. So much to go build, so much opportunity. Last question. Any advice for other aspiring founders out there thinking about going to get something started in this market right now with AI?

Kevin: The thing I would encourage everybody to do if you’re thinking about building a product is go talk to a lot of customers before doing it. The biggest mistake we’ve made, many times throughout our careers, is, “Oh, this seems cool. Let’s go spend a month, three months, six months.” As engineers who know how to code, the bias is just to go build, because it’s easy. Building is way easier than finding 20 customers who will give you a half hour of their time to validate your idea. But you’re going to save yourself so much time disproving ideas, or you’re going to validate your idea and have a lot more conviction about going off and doing it. The biggest piece of advice I can give to folks who want to start companies is to go talk to 20 or 30 companies or customers before you write a single line of code.

Tim: You think you’ve done enough customer validation? Go do more, double down.

Kevin: You can never have enough. Even now, every customer call we’re on — and we’re not a super old company, we’re eight months old — we treat as a discovery call. We spend most of the time asking questions and trying to learn as much as possible about the pain they’re trying to solve, because that influences what we’re going to build next week, next month, et cetera. We spend a little bit of time talking about Ravenna as well, but the learnings are still critical for us, and I think they always will be.

Tim: Bring us home, Taylor.

Taylor: I’m always reticent to give advice, because I’ve found, just from doing this for a decent amount of time, that everyone’s experience is so bespoke to them. I do love advice, and I do love hearing other people’s journeys, but that’s the way I think about it.

One of the things from my journey that I try to hold true, and it always comes up, even in conversations a little bit like this: we’ve talked so much about ServiceNow and the incumbents out there, but at the end of the day, the only thing that matters is the customer. That’s the only thing that matters. I try very much to hold a, call it, competitor-aware but customer-obsessed point of view.

That’s critical because I’ve seen the playbook run the other way around, and I’ve seen basically not a lot of success. Whereas I have been lucky enough to work with folks who had, even to my surprise, a maniacal focus on the customer. Despite the fact that we were being circled by crazy incumbents and everything on the wall said we were going to lose, it was that maniacal focus on the customer and the problem that pulled us through at the end of the day. So I’ll try to carry that through to where we’re at here too.

Tim: Customers, it’s all about the customers.

Kevin: A hundred percent.

Tim: Thank you both so much. It’s a real privilege and a ton of fun to be working together. Looking forward to the future.

Reinventing Recruiting: SeekOut’s Anoop Gupta on the Rise of Agentic AI

 

This week, Madrona Managing Director Soma Somasegar sat down with Anoop Gupta, the co-founder and CEO of SeekOut — a company at the forefront of agentic AI in recruiting, redefining how organizations discover, hire, and manage talent.

In this conversation, Soma and Anoop explore how SeekOut has evolved its platform to include SeekOut Spot, an agentic AI solution that reduces the time it takes to move from job description to qualified candidates — from 45 days down to just three. Together, these two long-time friends unpack lessons on building an AI-native company, navigating changing market dynamics, and what it takes to deliver real outcomes in a sea of AI hype.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Soma: Anoop, you have truly an eclectic background, starting with, initially, academic, then a startup, then a large company, and now back to a startup. Why don’t you introduce yourself a little bit to the audience and also talk about what you do at SeekOut and what SeekOut does?

Anoop: I’m Anoop, I’m a geek and an entrepreneur. I got my PhD from Carnegie Mellon in computer science, and I was on the computer science faculty at Stanford. Then I did my first startup in ’95. We were one of the first companies doing streaming media, back when modems were still 56K modems, and it was a wonderful time at Stanford. That company was acquired by Microsoft in ’97. SeekOut is a talent business. We focus on helping companies build winning teams and fill their talent gaps. We look at talent very holistically, from external talent to internal talent: how to retain, grow, and recruit people. We are used by some of the top brands. We have over 700 customers, the who’s who in tech, in defense, in pharma. We feel really privileged that they use us as their recruiting and talent solution.

The Role of Agentic AI in Recruiting

Soma: Now, to remind people, as much as we think AI has been around for the last 100 years, it has really only been five years since the transformer revolution, so to speak, happened and large language models came into existence. Even before all that, you were thinking about SeekOut as an AI-first company. You’re going from, “Hey, how can AI help you,” to, “Hey, how can AI do it for you.” Some people refer to it as agentic AI or what have you. Tell us a little bit about what got you started on AI from day one and how you evolved with the changes in technology and the innovation that’s happening at a pretty fast pace.

Anoop: If you look at talent, there’s a lot of data, and you’ve got to understand it. The data is noisy. Even something like where someone went to school: if you went to the University of Pennsylvania, do you say Wharton or Pennsylvania? The same applies if you’re building diversity classifiers. The early AI was in data cleansing, data combining, classifiers, in everything around how you build the most amazing search engine in the world. So that is where we started. As time went along, the second thing was, “Oh, LLMs are here, and you can just give one a job description and it can build searches for you.” But it has fundamentally changed with agentic AI.

The thing is that recruiting, and actually any talent job, whether you’re thinking about succession planning or anything else, is in some sense a very predictable thing. You look at a job description, you talk to the hiring manager to understand the needs, you search a large landscape, you evaluate hundreds of thousands of people, and you reach out in very hyper-personalized ways. At that, agentic AI is very good. Vertical AI is very good. That’s where we can bring the time from a job description to the initial candidates you’re interviewing from 30 or 45 days down to three days. That is the magic. It’s a transformational jump.

Soma: I’ve heard you consistently tell me this, Anoop, for a while now: companies that don’t take a step back and reimagine how they go after recruiting talent and managing talent are going to be left behind. Why do you think now is the time for companies to embrace agentic AI solutions to reimagine how they go about talent acquisition, and what is the urgency here?

Anoop: Yeah. So, Soma, the world is changing. Every industry is changing, every business is changing. People are refining their strategies and saying, “We’ve got to adopt AI, or we’ve got to do this thing differently.” Now, along with all of this evolving, changing business strategy, you’ve got to have a talent strategy. You’ve got to say, “Do I have the right people in the seats?” The speed of change, and the speed at which companies need to change, has increased. In this world where things are changing, it becomes urgent for talent organizations to say, “What am I going to do differently to deliver high-quality talent quickly, so the right people are in the right seats?” One more angle I would quickly add: there is a lot of pressure on all organizations. AI is coming, so how are you becoming more effective, more efficient? That is another pressure people are feeling, to do more with less.

Soma: You guys have recently launched SeekOut Spot. I’m super excited about that. In fact, I’m proud to say in this forum that we were probably one of the earliest customers of SeekOut Spot, and we are happy customers. Tell me a little bit about that. Tell me how you came up with it, because there is a growing trend here where people are talking about not just software as a service, but service as software, with AI playing a key role. Tell me a little bit about the genesis of SeekOut Spot and what it does for companies, organizations, and talent leaders.

Business Model and Flexibility

Anoop: We start with the business leader, and what they care about is the right hires, with speed and quality. The magic of AI agents is that the deliverable is outcomes, not, “Here is a tool that your people have to use.” The fundamental thing, from a business model perspective, is the focus on outcomes. Now there is a lot of flexibility, because it’s a combination of people and AI agents. We have supported a lot of different models. We can deliver you a hire, with pricing associated with that. We can augment the sourcers you have; maybe you need fewer sourcers, or when demand changes, we can come and help you.

There’s a lot of flexibility in the business models, but they’re all outcome-based. To dig in a little bit: what does the recruiting task look like? How do you engage with the talent? That is interesting. Our recruiters tell us that by the time they’ve sent the 10th message, their eyeballs dry out, fall off, and roll across the table. It is crazy the amount of hard work you have to do as a recruiter. With SeekOut Spot, the recruiters focus on the tasks they love: talking to the hiring managers, talking to the humans, selling candidates on the role. Spot takes over everything in the middle, delivering results faster and with higher quality.

Soma: That’s awesome. It sounds truly magical, but help us walk through the shift here. SeekOut Spot, in my mind, is a classic example of service as software. Tell me, what are the business model changes here, and why is it the right thing for your customers?

Anoop: The business model change is that we deliver hires, or we deliver you strong candidates. That is the outcome. Ultimately, what the talent leader and the business leader are looking for is, “Did you get me a hire? Are they great? Are they the right fit?” They don’t want it taking six weeks. The average time for a technical hire, according to Ashby, is 83 days, and for non-technical it’s around 63. That’s a long time, and that’s just the median; many searches take longer. So this can be so much faster and better.

Soma: As you know, Madrona wanted to hire a data scientist a few months ago, and whenever we think about a hire, my mental model is, “Hey, I’ll be happy if we can hire somebody in the next 90 days.” 180 days maybe, but at 90 days I’d be thrilled. This hire, the data scientist, from start to finish, took less than four weeks for us.

I was amazed at the speed and the quality of the candidates we saw through the process. It all happened amazingly well for us. Thanks to you and to SeekOut Spot, we made that hire, that person has been on board for the last few months, and we are thrilled with him so far. So tell me: you mentioned earlier this goes from 30 or 45 days to three days, right? We’ve seen at least one example in our environment where we were able to hire a high-quality data scientist in about three and a half weeks from start to finish.

Anoop: Basically, what we can do with this technology is go from the kickoff meeting with the hiring manager to the initial candidates you’re interviewing by the fourth day. So that is the magic. Now, the hiring, making the offer, takes a little bit of time. We have had Discord, which is getting amazing results. We had a startup named Shaped.ai, which is getting hires within three weeks from the initial… And they’re amazed at it. If you look at the quotes on our website, we have Discord, 1Password, HP, Shaped, and Madrona. Even though it’s early, we are seeing real proof that there’s magic in here.

Soma: The other thing I’ve heard you say, Anoop, when I was over at your place for the SKO, was, “Hey, with SeekOut Spot, we can deliver 5 to 10X productivity for recruiters,” or talent acquisition people generally. Talk to me a little bit about that.

Recruiting Process and Efficiency

Anoop: Basically, here is how the recruiting process works. The recruiter talks to the hiring manager, understands the role, builds a mental model of what they need to do, does some searches, comes back, does more searches, sends some messages, and that cycle goes on. In SeekOut, on the first day, after you talk to the hiring manager, you have a success rubric. You have automatically explored thousands of candidates, you’ve evaluated each one of them, we give you a spider graph of how each is doing across the rubric elements, and you have sent out messages.

That is what I mean by 5 to 10X: that work takes a recruiter a long time, and that time is being compressed, to the benefit of the recruiter and the customer. I’ll tell you, we have specialized in this service. Of course there’s technology, but we also have recruiters who operate this technology, because there are some tasks humans do best. They’re the happiest, most energized recruiters, because, “I’m doing what I love and I’m delivering results quickly.” It is pretty amazing how everyone is happier: the business leader, the talent leader, the recruiter. So it’s exciting to us.

Soma: That is great. We’ve been talking about human-in-the-loop for a while now, and with something like SeekOut Spot, what you’re really telling organizations is, “Hey, recruiters are highly valuable. Let them focus on the things they need to focus on, and I’m going to give you an agentic AI solution, AI agents, that will work in conjunction with your talent people to get things done better, faster, all that fun stuff.” This notion of a hybrid model, where you have AI agents working with human beings, seems like a great model to drive forward as it relates to recruiting and talent acquisition, right?

Anoop: Yes.

Soma: If I’m a recruiter or a talent acquisition professional today, and I see the world of agentic AI, I could argue it’s going to disrupt my world, or I could say it’s going to reimagine what is possible and help me do what I need to do much, much better, 10X better, whatever it is. What should I take away from this as a recruiter or talent acquisition person, and how should I prepare for this wave of innovation?

Anoop: So here’s the way to think about it: do what is human, what only humans can do, and become the best at it. One side is, when you talk to the hiring manager, how can you be an advisor? Ask hard questions. Do you really mean this? Do you really want this? What is this person going to do? Feel confident, expert, and good at that. The second side is when you’re talking to a candidate. How do you sell? How do you say what is inspiring? How do you explain what they are going to do, and why this is or isn’t a great company for them? Those are the skills you have to become very good at. A lot of the rest, the messy middle, which is critical and important, AI agents will do a great job on for you.

Soma: That’s cool. That’s a good way to frame it. I always tell this to every founder and every startup, Anoop: in the history of this world, there isn’t a single company that has had a perfectly smooth journey. There are good days, there are amazing days, there are okay days, there are lousy days, and everything in between.

In your journey over the last seven years or so, you’ve gone through some amazing highs, some not-so-great lows, and everything in between. How did you and your co-founder, Aravind, navigate through this, and are there any takeaways, learnings, or ahas you’d like to share that would be valuable for other founders, since every founder goes through this?

Navigating Challenges and Product-Market Fit

Anoop: We had an early phase where we were in hypergrowth, exponential growth, and then came the economic malaise, the market change, and we went through some flat stretches; now we are on a path to hypergrowth again. What are the things you need to do? I think the most important thing is continuously watching product-market fit. When the market changed, the environment changed, and what was needed changed, it took us a little while to adjust. Because the market always wins. You can have a great team, but if the market isn’t there, you’re not going to succeed. You can have a lousy team, but if you’re aligned with the market, you’re going to win. So one key message is: watch for market fit. Just because you had market fit once doesn’t mean you’re maintaining it.

The other is to have a sense of confidence, to always be iterating and experimenting, and to keep calm. Your organization comes along with you; a bad stretch shouldn’t ruin it, though you have to be very conscious about managing your expenses and how you are doing. I want to point to one piece of advice, which was maybe a thing for the times. When we were raising our Series B, or Series C, we had not spent the money we had already raised. And you said, “Go ahead and raise anyway. It’s good times.” And that helped us too. We’ve never had to be in a desperate situation, not that we don’t want to be scrappy or conscious of spending. That has given us a comfort and a cushion, and it was very wise advice.

Soma: Thank you. There are two elements to that, Anoop. One is that you want to raise money when you don’t need to raise money; that’s always the best time to do it. The second is, as you said, one of the reasons I was excited about us raising that money was that I’d seen how you and Aravind had been very scrappy. I wasn’t worried that having a little more money than you needed today would let bad spending behavior set in. It doesn’t matter how much money is or isn’t in the bank. As entrepreneurs, as founders, as what I call efficient stewards of capital, you always have to be thoughtful about that. I say this all the time: your increase in investment should warrant a return on that investment. If you’re confident about that, go do it. If not, you have to be really thoughtful.

I’m so glad you raised your Series C when you did; it has helped you ride through the last year or so and put yourself back in hypergrowth mode. That’s fantastic. Going hand in hand with raising is also a more thoughtful deployment of capital.

Anoop: I totally agree. We always feel like it’s our own money, we have a responsibility.

Soma: Are there any lessons from your own journey? I truly believe you guys were AI-first from day one, as I said, well before transformers and large language models came into existence. Is there any guidance or advice you would give to founders today who are thinking, “Hey, I want to ride this AI wave. I want to truly be an AI-first company”? What should they do, or not do?

Founders: Focus on Outcomes, not on Hype

Anoop: My advice to founders, and actually to our customers and prospects, is to focus on outcomes, not on the hype. Everybody has put AI in their marketing materials, so I say look for the outcomes. What are the outcomes you’re delivering? Get to the success stories and shout those from the rooftops — that focus on outcomes is really important. In the recruiting space, we have a lot of companies that talk about, “We are AI. We are agentic AI,” and all they have done is maybe wrap an LLM so you can give it a natural language query and something comes out. That is not the end solution. Part of vertical AI is looking at the whole workflow and process that results in the outcomes. What I would say is: don’t use AI as a buzzword; genuinely create value for your customers. That is the thing to do. There is a lot of power in what’s coming, but focus on the customer’s problems and outcomes.

Soma: When people are thinking about AI (and I agree with you completely: focus on the outcome, not the activity alone), how important is what I call a data moat? If I’m an AI-first company, do I necessarily need proprietary data or a data moat, or not?

Anoop: I think there are different kinds of data. A lot of data is available; everybody has data. One moat is the experience moat. As you work with clients, you get proprietary data from the customer, and how you integrate it, and how easily you can do that, becomes a moat beyond whatever base data you have. One example: in recruiting, it’s not just external data. How do you integrate with applicant tracking systems and the data customers have, or their internal employee data? How do you integrate with specialized resources and partners, whether it’s healthcare and nursing data? I think the data moat comes from delivering outcomes and from the learnings, and those learnings and that data also become a moat.

Soma: If I take you at face value, you’re saying from the rooftop that every investor should be talking to their portfolio companies about, “What is your talent acquisition strategy? How are you reimagining it in this world of agentic AI?” What would you want to tell investors?

Don’t build a recruiting org too early

Anoop: I think building a recruiting org too early is not good, because your demand is going to fluctuate, and the quality and the people you need will fluctuate. These are specialized roles. What a startup should probably have is one recruiting manager and a recruiting coordinator, who interface with the hiring managers and handle scheduling interviews and calendaring, and then work with somebody like SeekOut. Because in startups, the right hires are really important. The cost of a bad hire is so much more than it appears on paper. If you can get high-quality hires in two or three weeks, that makes a difference to your business outcomes. I would invite you, and it’ll sound a little bit selfish as I say it: come and check us out, talk to us. I think it can make a big difference.

Soma: I want to underscore one point you made. This is true for every organization, every company, but it is even more true for a startup, because you have finite resources. Every right hire can be a true force multiplier.

I truly believe it’s extremely important for startups, particularly in the early stages of the company, to get this right. Everybody goes through it; nobody bats a thousand, and there are always hiring mistakes. But you want to minimize them and truly understand that every great hire is going to be a true force multiplier for your company.

Anoop: Yes, it is so important.

Soma: From your vantage point, particularly as a hiring manager, what do you think are the biggest hiring mistakes companies are making today?

Anoop: One is that company strategies change. Hiring strong, fungible engineers and marketers who can change as your strategy changes is really important. That is the thing you need to do. The second thing we have found is that attitude is really important: people who can handle ambiguity, who can take the punches, roll with the punches, adapt and adjust, and get shit done. Those qualities are also super critical in the hires you make.

Soma: Now the other side of the audience is usually founders, either existing founders or new founders or people who are thinking that in the next 6 to 12 months, they want to be founders. What is your message to them?

Anoop: My message to founders is: before you hire, try to do the job yourself in some cases. I did a lot of sales. I had never done sales before, and I became an expert, because otherwise I didn’t know how to hire a salesperson. That was one thing on the talent side that I did. Then there were many cases where I had no expertise, say, a sales leader or CRO. I leveraged people at Madrona and said, “Would you interview this person for me?” You want to leverage connections and contacts who are experts so you can get a good sense of what the role needs to be.

My recommendation to people thinking about becoming founders is that the initial team is super critical. Before you jump into hiring all those people, become familiar with each role at a level of detail where you know what you actually need. The salesperson you need for one startup versus another varies a lot. You have to ask, “What is your selling motion? Who do you need?” and understand that deeply. Second, leverage your friends to help evaluate, and then leverage the right people who can feed you that talent.

Soma: How can I get in touch with you, Anoop, if I’m a founder or investor and want to learn more?

Anoop: Okay, it’s simple. My email is [email protected]. You can find me on LinkedIn; connect with me, and we’d love to talk to you and show you, because seeing is believing. Everybody talks so much, and I’m passionate that seeing is believing. Come and see, come and experience, and we would love to partner with you.

Soma: As we come to the end of this episode, Anoop, I want to congratulate you on pushing the boundaries and the envelope of what AI can do for talent acquisition, for organizations of all sizes and in all industries. Is there a final word you’d like to leave people with, whether they’re in a smaller environment like a startup or a bigger one like an enterprise, on what they should do about talent management and talent acquisition as they look ahead?

Agentic AI for recruiting is here

Anoop: Agentic AI for recruiting is here and now, and I would say: experiment with it. This is the time. Be early, before the change is thrust upon you. Be the lean-forward leader, experimenting, adapting, and flowing with the transformation, versus being hit by it when somebody comes along and tells you you’re too late. The world is changing, and it is changing in amazing, wonderful ways, so don’t get stuck in the old world to the extent you can avoid it, and look broadly at what needs to be done. Especially for large enterprises, the transformation is going to be huge, and even for small companies. So, my final word, and this will sound very selfish: contact us. We’ll show you what we can do for you as you explore all the options out there, so that you’re getting the right hires and knocking it out of the park.

Soma: First of all, even before I wrap this up, thank you for allowing me to partner with you and Aravind for the last seven years or so. It’s been a fabulous journey. So many learnings and so much success. For all the progress we’ve made, I think we are still in early stages and there is so much more that we can do, and I’m looking forward to that. Thank you so much for joining us here today, and thanks to everybody for listening and we’ll see you again soon.

Anoop: Thank you, Soma. It has been wonderful to have you as a partner in our journey.

 

Unscripted: What Happened After the Mic Went Off with Douwe Kiela

 

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Full Transcript below.


Sometimes, the best insights can come after an interview ends.

That’s exactly what happened when Madrona Partner Jon Turow wrapped the official recording of our recent Founded & Funded episode with Douwe Kiela, co-founder and CEO of Contextual AI. The full conversation dove deep into the evolution of RAG, the rise of RAG agents, and how to evaluate real GenAI systems in production.

But after we hit “cut,” Douwe and Jon kept talking — and this bonus conversation produced some of the most candid moments of the day.

In this 10-minute follow-up, Douwe and Jon cover:

  1. Why vertical storytelling matters more than ever in GenAI
  2. The tension between being platform vs. application
  3. How “doing things that don’t scale” builds conviction early on
  4. The archetypes of great founders — and how imagination is often the rarest (but most valuable) trait
  5. Douwe’s early work on multimodal hate speech detection at Meta and why the subtle problems are often the hardest to solve
  6. Why now is the moment to show what’s possible with your platform — not just sell the vision

It’s a fast exchange full of unfiltered insight on what it really takes to build ambitious AI systems — and companies.

And if you missed the full episode, start there.


This transcript was automatically generated and edited for clarity.

Jon: One thing I’m learning (I talk to a lot of enterprise CTOs, as I’m sure you do, and a lot of founders, as I’m sure you do) is that even when this kind of technology is horizontal, we say you go to market vertically, or by segment, or whatever, but I don’t think that’s quite right. I think the storytelling is the thing that becomes vertical or segmented. When you speak to the CTO of a bank versus the CTO of a pharma company, or the head partner of a law firm, or whoever it may be, their eyes will glaze over when we start to talk about chunking. But if we can talk about SEC filings and the tolerances in there, and a couple of really impactful stories told in the language of those segments, that seems to go so far. I’ve seen it myself; even astute customers who realize it’s the same thing underneath respond to it. So storytelling at a time like this, when there’s opportunity in every direction you look, feels like it can be a superpower for you.

Douwe Kiela: It’s not easy, because it’s like, how vertical do you want to go? We don’t want to be Hebbia or even Harvey; we want Hebbia and Harvey to be built on Contextual. But the only way to do that may be to show that you can build a Hebbia or a Harvey on our platform.

Jon: I’ll tell you about when I’ve done it right and when I’ve done it wrong. When I did it right was in the early days of DynamoDB, the managed NoSQL data store, and we said, “Dynamo is really useful for ad tech, for gaming, and for finance, probably.” That’s because there were key use cases in each of those domains that took advantage of the capabilities of NoSQL and were not too bothered by its limitations (you only have certain kinds of lookups, and things like that). Astute customers could realize you could use Dynamo for whatever you wanted, but we never said that. All of our marketing was customer references and reference implementations, and that helped us plant our feet really well. When I did it badly also shows the power of this technique. I remember I did a presentation about Edge AI, this was around 2016, at AWS re:Invent. We had shipped the first Edge AI product ever at Amazon.

We showed how we were using it with Rio Tinto, a giant mining company doing autonomous mining vehicles. We chose that because it’s fun and sparks the imagination, and we thought it would spark the imagination across a lot of domains. This was at re:Invent, so it was on a Wednesday or a Thursday, I want to say, that I gave that presentation. On Friday morning, before I was going to fly out, I got an urgent phone call from the CTO of the only other major mining company of that scale, saying, “I have exactly that problem. Can you do the same thing for me?” I thought, “Well, gee, I aimed wrong,” because I had picked a market of two, and I already had one of them. But it shows that people don’t necessarily use imagination; if you put it in terms that recognizable, they can see themselves in it.

Douwe Kiela: Yeah. So I heard that, maybe it was Swami or someone senior in AWS, said, “The big problem in the market right now is not the technology, it’s people’s lack of imagination around AI.”

Jon: That sounds like a Swami.

Douwe: Swami or maybe Andy. Yeah, I don’t know.

Jon: It could be. I would also say that that’s a major role for founders on this spectrum. I’d put you in a group with Sergey and Larry, right? So there are the Douwes, Sergeys, and Larrys; there are the Mark Zuckerbergs, who are only PHP coders; and there are the domain experts who are visionaries. They’re missionaries about solving a real problem, and they understand the problem better than other people do. They are not necessarily nuanced in what is possible, but they can hack it together, they can get it to work enough to reach the point where they can build a team around them.

Douwe: Who’s the archetype there?

Jon: I would think about, this is not a perfect example, but I would think about pure sales CEOs.

Douwe: Benioff or something?

Jon: Yeah, or the guys who started Flatiron Health and Invite Media. They were not oncology experts; they understood their customers really well. Jeff Bezos was not a publishing expert, nor did he write code at all at Amazon. I’m not sure he ever checked one line of code into production, but he had deep customer empathy and conviction around that. The story with Jeff is that the first book ordered on Amazon.com by a non-Amazonian was not a book that they had in stock. And the team told Jeff, “Sorry, we’ve got to cancel this order.” And Jeff said, “Like hell we do.” And he got in his car and went to every bookstore in the city.

Douwe Kiela: Barnes & Noble, somewhere.

Jon: Yeah and he found it, and then he drove to the post office and he mailed it himself. He was trying to make a point, but he was also saying, “Look, we’re in the books business now and we promised our whole catalog. In the first order, you better believe we’re going to honor it.” So that’s what I think about. And you do things that don’t scale and the rest.

Douwe: Doing all the crazy stuff. All the VCs are saying, “Just do it SaaS, no services. Focus on one thing, do it well.” And all of that is true, but if you want to be the next Amazon, then you also have to not follow that.

Jon: Do things that don’t scale, and you figure out, you know and I know, eventually, how to get things to scale. One of the reasons, and you would know this so much better than I do, that Meta invested as early as it did in AI was content moderation. You would like a social media business to scale with compute, but it was starting to get bottlenecked by how many content moderators it had, and that’s a lot slower and more expensive. How quickly and effectively can you leverage that up?

Douwe: That’s why they needed AI content moderation.

Jon: That’s why they needed AI.

Douwe: We were doing all the multimodal content moderation. That was powered by our code base.

Jon: Wow. And what year?

Douwe: It was around 2018. We did hateful memes. I don’t know if you’ve heard of this, the Hateful Memes Project, that was my thing. Where that came from was content moderation was pretty good on images and it was pretty good on text, like if there was some Hitler image, or whatever, or some obvious hate speech.

Jon: That’s kind of an easy one.

Douwe Kiela: Exactly. The most interesting ones, and people have figured this out, are multimodal. I have a meme that, on the surface, to the individual classifiers, looks fine, but if you put the modalities together, it’s super racist, or they’re trying to sell a gun, or they’re dealing drugs, or things like that. Everybody at the time was trying to circumvent these hate speech classifiers by being multimodal. Then I came in and we solved it.

Jon: How did you solve it?

Douwe Kiela: By building better multimodal models. So we had a better multimodal classifier that actually looked at both modalities at the same time in the same model. We built a framework, and we built the data set, and we built the models, and then most of the work was done by product teams.
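The joint-model idea Douwe describes can be sketched minimally: a per-modality classifier scores each input alone, while a joint classifier scores a fused representation and so can react to interactions between image and text. The toy embeddings, the multiplicative fusion term, and the untrained weights below are illustrative assumptions, not FAIR’s actual Hateful Memes model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unimodal_score(emb, w):
    # A per-modality classifier: sees only one modality in isolation.
    return sigmoid(emb @ w)

def joint_score(img_emb, txt_emb, W):
    # A joint classifier: fuses both modalities (including an interaction
    # term) before scoring, so it can flag pairs that look benign alone
    # but are harmful together.
    fused = np.concatenate([img_emb, txt_emb, img_emb * txt_emb])
    return sigmoid(fused @ W)

img = rng.normal(size=8)   # stand-in image embedding
txt = rng.normal(size=8)   # stand-in text embedding
W = rng.normal(size=24)    # weights of the fused classifier (untrained)

print(f"joint score: {joint_score(img, txt, W):.3f}")
```

In practice the fusion happens inside a trained model, such as a multimodal transformer, rather than by concatenating fixed embeddings; the point is only that the decision is made over both modalities jointly.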

AI, Ambition, and a $3 Trillion Vision: Satya Nadella on Microsoft’s Bold Bet

 

TLDR: Microsoft Chairman & CEO Satya Nadella shared candid insights on leadership, AI, and Microsoft’s transformation into a $3 trillion powerhouse during Madrona’s Annual Meeting on March 18, 2025. He reflected on the cultural shifts that fueled the company’s resurgence, Microsoft’s AI strategy and pivotal AI partnership with OpenAI, and why AI’s success should be measured in global economic growth. His key messages? Mission and culture define strategy. AI is still in its early days. And “The world will need more compute.”

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated.

Soma: Satya, it’s fantastic to have you here today. I don’t know if you remember this. We had you actually at our annual meeting five years ago to celebrate our 25th anniversary back then. But it so happened that once we agreed that we are going to do this two weeks before the event, we had to go on a massive scramble. The world changed from everything being in person to everything being virtual and you were a good sport and we did this virtually five years ago and that ended up being a great conversation. Thank you for doing that.
But I’m so, so excited to have you in person here today.

Satya Nadella: Likewise, I’m glad it’s in person.

Soma: This year we are celebrating a couple of different milestones, okay? First and foremost, obviously, Microsoft is celebrating its 50th anniversary. In fact, I think two and a half weeks from now (April 5th) is the 50th anniversary. So that’s a fantastic milestone. I spent 27 of these 50 years at Microsoft, some of those years working closely with you, so for me personally, it’s with a lot of personal joy and satisfaction that I see how far Microsoft has come under your leadership these last 11 years. Coincidentally, we’re also celebrating Madrona’s 30th anniversary this year. Back in 1995, the four co-founders of Madrona started the firm, and I see Paul there. He was one of the four co-founders back then. The thesis and the bet for Madrona was very simple. It was all about, hey, we are going to take a bet on the technology ecosystem, on the startup ecosystem, in Seattle.

And 30 years later, we are so glad that they took the bet and we all joined the journey. But for all the progress that we’ve seen in Seattle, I think we are still scratching the surface. There’s so much more ahead of us in the next 20, 30, 50 years, and we are excited to see where the world is going and how we can play a part in helping shape that world, so to speak.

11 years ago, when you became the CEO of Microsoft, I actually don’t know how many people in this audience or in the world imagined that, hey, there’s going to be a day in the not-too-distant future when we are likely to have two companies in Seattle that collectively have a market cap of over $5 trillion, Microsoft being one and Amazon being the other. But just looking at what you’ve been able to accomplish at Microsoft: when you took over as the CEO, Microsoft’s market cap was around $300 billion. Today it’s around $3 trillion. It’s phenomenal progress, and one that I definitely did not imagine, and I continue to think about, hey, how did this happen and what caused it to happen?

Satya Nadella on Microsoft’s AI Strategy, Leadership Culture, and the Future of Computing

Satya Nadella: But did you hold?

Soma: A lot.

Satya Nadella: That’s great.

Soma: In addition to everything else, I’m a shareholder of Microsoft. I’m excited about that. Okay. But Satya, congratulations on a great, great run at Microsoft so far, and I know there’s still a lot more to go there.
I do know that everybody here in the audience is really interested in hearing from you, so I should stop my ramble and dive into the conversation.

Satya Nadella: Sure.

Soma: I want to take you back 11 years, to when you decided, “Hey, I’m going to take on the mantle of being the CEO of Microsoft.” What were some of the things on your mind in terms of your expectations, and what did you think might happen? And then talk about some of the key inflection points in the last decade of your tenure as the CEO of Microsoft.

Satya Nadella: Yes. First of all, thank you so much for the opportunity to be here. It’s great to be celebrating, I guess, your 30th year. And as you said, for me of late, I’ve been thinking a lot about our upcoming 50th, which it’s unbelievable to think about it. I was also thinking about it yesterday. I was seven years old, I guess, when Microsoft was formed. And a lot has happened.
In 2014 when I became CEO, Soma, quite honestly at that time, my frame was very simple. I knew I was walking in as the first non-founder. Technically Steve was not the founder, but he had founder status at the company. The company I grew up in was built by Bill and Steve. And so therefore, I felt one of the things as a non-founder was to make first class again what founders do. What founders do is have a real sense of purpose and mission that gives them both the moral authority and telegraphs what the company was created for and what have you. And I felt like we needed to reground ourselves.

In fact, back then, one of the things I felt was, wow, in 1975 when Paul and Bill started Microsoft, they somehow thought of software as a … In fact, the software industry didn’t even exist, but they conceived that we should create software so that others can create more software and a software industry will be born. And that’s what was the original idea of Microsoft. And if that was relevant in ’75, it was more relevant in 2014 and it’s more relevant today in 2025.

And so I went back to that origin story, took inspiration from it, and re-articulated it as the mission we now talk about, which is empowering every person and organization on the planet to achieve more. So that was one part. The other piece that I felt, again as a non-founder, was to make culture a very first-class thing. Because in companies that have founders, culture is implicit; it’s a little bit of the cult of the founder. You can get away with a lot, whereas a mere mortal CEO like me can’t.

And so you needed to build out that cultural base even more. I must say I was lucky to pick the meme of growth mindset from Carol Dweck’s work, and it’s done wonders. And quite frankly, it’s done wonders because it was not seen as new dogma from a new CEO; it spoke much more intrinsically to us as humans, both in life and at work. So, these two things: making mission a first-class, explicit thing, and culture. And then of course they’re necessary but not sufficient, because then you’ve got to get your strategy right and your execution right, and you’ve got to adapt, because everything in life is path dependent.

But you don’t even get shots on goal if you don’t have your mission and culture set right. And so that’s at least what I attribute a lot of, at least our … And we have stayed consistent on that frame, I would say, for the last whatever, 11 years.

Soma: You took over in February, I think, and then in May that year, 2014, your first external announcement was, “Hey, we are going to take Office cross-platform.” And that, I thought, was visceral. People who knew Microsoft until then, or who had been part of the Microsoft ecosystem in some way, shape, or form, knew how big of a statement that was. Was it a conscious decision on your part to say, “Hey, I need to signal, not just to the external world but to my own organization, what this means”?

Satya Nadella: Yeah. In the Microsoft that you worked at and that I worked at, you’ve got to remember, we launched Office on the Mac before there even was Windows. Obviously we achieved a lot of success in the ’90s, and so we came to treat Windows as the only thing that was needed, the air we breathe, and what-have-you. But that was really not the company’s core gene pool. Our core gene pool was: we create software, and we want to make sure that our software is there everywhere.

And obviously it’s not like I came in February and I said, “Let’s build the software.” Obviously Steve had approved that. But it worked well because it helped unlock, to your point, what was Microsoft’s true value prop in the cloud era. See, one of the things when I look back at it, if God had come to me and said, “There’s mobile and cloud, pick one,” I would’ve picked cloud. Not that mobile is not the biggest thing, but if you had told me pick one, I’ll pick something that may even outlast the client device.

And so therefore, that’s what was the real strategy, which is we knew where our position at that time was on mobile. We were struggling at third. Having seen what happens to number three players in an ecosystem, I felt like wow, that train had left the station. So therefore it was very important for us to make sure we became a strong number two in cloud at that time. And then in fact, more comprehensive than even our friends across the lake because of what we were doing on Office 365 and Azure.

And so we just doubled down. And when you double down on such a strategy, you got to make sure that your software is available and your endpoints are available everywhere. And so that was what that event was all about.

Soma: Great. You just referenced culture, cultural transformation, and growth mindset in the context. By the way, if any of you haven’t read that book, I’m a huge believer in the book. I think that book is one of the best books that’s been written on culture and please get a copy and read that. It’s a fantastic book and something that I try hard to practice every day. And I can tell you I’m still learning.

But I’ve also heard you talk a lot about changing the culture from a know-it-all culture to a learn-it-all culture. But like anything else, when you took on the mantle, it was already a 100,000-person-strong organization that was steeped in a particular set of ways of doing things and thinking about things. How easy or hard was it for you to drive that cultural transformation?

Satya Nadella: Yeah. I think the beauty of the growth mindset framework, if you will, is not about claiming growth mindset, but confronting one’s own fixed mindset. At the end of the day, the day you say you have a growth mindset is the day you don’t have a growth mindset. That’s the nice recursion in it. And it’s hard and it has to start with setting the tone.

Let’s face it. In large organizations like ours, or anywhere I guess, it’s easy to talk about risk because you want the other person to take the risk. Or it’s easy to say, “Let’s change,” when it’s the other person who should change. And so in some sense, the hard part of organizational change is the inward change that has to come first. This framework pushes you on that. It at least gives you a way to live it. And by living up to that high standard of confronting your own fixed mindset, you get a hope of making that large-scale change happen. And like all things, Soma, it’s always top down and bottom up. You can never do anything in only one direction. It has to happen across both sides, all the time.

The other thing I must say is you have to have patience. You can’t come in the morning and say, “Hey, we need to have by evening growth mindset.” You have to basically let even leaders bring their own personal passion to it, personal stories to it, give it some room to breathe. And I think somehow or the other not because we really thought it all through, it took on, as I said, some organic life. People felt like this is a meme that made them better leaders, it made them better human beings.

And so therefore, I think that’s what really helped. And we were patient with it. For example, the classic thing at Microsoft would have been to metric it, say green, red, yellow, then start doing bad things to all the reds, and it would’ve been gamed in a second. We didn’t do that, and I think that helped a lot. And like all things, it can also be taken to the extreme. There are times when I’m in meetings where people will look around the room and say, “Here are all the people who don’t have a growth mindset,” versus saying, “Look, the entire idea is to be able to talk about your own fixed mindset.” And by the way, the best feature of that cultural thing is that it’s never done. You can never claim the job is done. Right now, oh my God, talk about it. We’re in the middle of, again, saying, “Wow, we’ve got to relearn everything because there’s a complete new game in town again.”

Soma: Before we talk about AI, I thought we’d talk a little bit about something that is personal to you, and hopefully on a lighter note. You were a cricket player in high school and college, and it’s been fun working with you these last many years trying to bring cricket to the US through Major League Cricket. You’ve mentioned many times, Satya, how that sport has shaped your thinking and your leadership style, and in fact had a positive impact on your life. Share with us a little bit about that.

Satya Nadella: Yeah, Caitlin, who works with me, is here. Every time I post about cricket, I get all these likes from India, and she says, “God, why don’t they do the same when you post about Microsoft products?” A billion and a half people who are crazy about cricket can do that for you.

Look, I think all team sports shape us a lot. It’s one of those cultural things. When I see leaders, you can easily trace back to the team sports they played and how that impacts how they think. There are three things that I’ve written a lot about and think about even daily. I remember there was this one time. It’s interesting, there’s this guy that you know, Harsha Bhogle, who actually went to the same high school as me, and recently I was talking to him and he was telling me about our … we call them physical directors. Think of them as a coach, I think, is the best translation.

But anyway, we were playing some league match, and there was this guy from Australia who suddenly happened to be in Hyderabad, of all places, playing for the opposition. And he was such an unbelievable player. I was fielding at, whatever, forward short leg, watching him in awe. Then I hear this guy yell, “Compete, don’t admire.” When you’re in the field, that zeal, the competitive spirit, and giving it your all, I think sport teaches you something so important. That ability to summon the energy to go play the game is one.

The other one I’ll mention, talking about teams, I’ll never forget. There was this unbelievably important match of ours, and this unbelievable player who was pissed off at our captain for whatever reason, I think because the captain had taken him off early or what have you. And the guy just drops a catch on purpose. Think about the entire eleven. All our faces dropped. We were all so pissed off, I guess, but even more let down that our star player somehow felt he wanted to teach us a lesson and thereby cause us to lose.

And then the last thing I would say, which has probably had the most profound impact on me, is the leadership lesson. There was a captain of mine who went on to play a lot of first-class cricket. One day I was bowling some trashy off-spin. So this guy takes the ball from me, bowls an over himself, gets a wicket, but then gives the ball back to me the next over, and in that match I got some four or five wickets. Afterward I asked him, “Why the heck did you do that?” And he comes to me and says, “You know what? I needed you for a season, not for a match. I wanted to make sure your confidence was not broken.” I said, “Man, for a high school captain to have that level of enlightened leadership skills …”

That’s the idea, which is leadership is about having a team and then getting the team to perform for a season. And I think team sport and what it means to all of us culturally and what it means in terms of teaching us the hard lessons in those fields is something that I think a lot about.

Soma: That’s great.

Satya Nadella: And of course, I think a lot about MLC too.

Soma: Season three starts June 12.

Satya Nadella: The sports market is not sufficiently penetrated in the United States. Talk about you got to make your money somewhere else.

Soma: Let’s talk about AI now. You mentioned this, that if you look at the history of Microsoft, we are in the beginning or in the middle of the fourth platform wave. First one was Client Server, then it was internet and mobile, and then the cloud, and now it’s AI.
As much as we’ve all talked about AI these past few years, Microsoft has had investments in AI for decades now. Tell me a little bit about how you decided, hey, in addition to everything that we are doing ourselves, how do we think about partnering with OpenAI?

Satya Nadella: I love the way you say ourselves. That’s good.

Soma: How does Microsoft think about partnering with somebody like OpenAI? And then more importantly, how has that partnership evolved till today and what do you think the future is going to be of that partnership?

Satya Nadella: Yeah, it’s a good point. I think 1995 is when we had our first ML research team, MSR Speech. That was the first place we went. And obviously we had lots of MSR work. Here’s the interesting thing: even on the OpenAI side, we had two phases. The first time we partnered with them was in the context of when they were doing Dota 2 and RL. Then they went off on that, and I was interested, but RL by itself, at that time at least, was not something we were that deep in. When they said, “We want to go tackle natural language with transformers,” that’s when we said, “Let’s go bet.”

Quite frankly, that was the thing that OpenAI got right, which is that they were willing to go all in on scaling laws. In fact, the first paper I read was, interestingly, written by Ilya and Dario on the scaling laws, saying, “Hey, we can throw compute at it and see scaling laws work with transformers on natural language.” If you think about Microsoft’s history, for those of you who’ve been tracking us, Bill has been obsessed with natural language. And of course, the way he has been obsessed with it is by schematizing the world. To him, it is all about people, places, things: beautifully organize them into a database, then do a SQL query, and that’s all the world needs.

That was the Microsoft we dreamt of. And then of course, when we thought of AI, it was, oh, adding some semantics on top of it. That’s sort of how we came in. In hindsight, of course, when we were taking that bet, it was unclear to us, quite frankly. But to me, when I first saw code completions in a Codex model, which was a precursor to GPT-3.5, that is when I think we started building enough conviction that you could actually build useful products. And software engineering, the team that you ran, even the engineers are skeptical people. No one thought that AI would go and make coding easy. But man, that was the moment when I felt something was afoot. It definitely cemented my belief in scaling laws and the fact that you could build something useful. And so then the rest is history. We just doubled down on it. And even today when I look at GitHub Copilot, it’s unbelievable to see how far code completions have come in, whatever, three years or so.

And by the way, all of these things are happening in parallel. Code completions are getting better. In fact, we just launched a new model even for code completion. And then chat, of course, is right there. You have multi-file edits. You have agents that work across the full repo, and then we have a SWE-agent where you’re going from, I’ll say, pair programmer to peer programmer. So it’s a full system being built off of what is effectively one regime.

Soma: I remember now, this was before GitHub Copilot had launched, in beta or otherwise, to the world. You and I were having dinner, and you literally spent probably 20, 30 minutes talking about this new thing the GitHub guys were doing called Copilot. I remember walking out of that meeting thinking I needed to go talk to my buddies in DevDiv to understand what was happening, because I hadn’t seen you that animated and excited about something. And this was well before it finally became what I’d call a product.
But those early days, how did you decide to take a bet on that inside the company? Because I would assume that in any organization there’s going to be some level of resistance to something new that is going to be fundamentally a paradigm changing thing.

Satya Nadella: Yeah. There were two phases to that as well. GitHub Copilot was the first product, and then ChatGPT happened. And ChatGPT, quite frankly, you should ask the OpenAI folks, but nobody thought it would be a product. It was supposed to be, at best, maybe some data collection thing. And then the rest is history. But I must say that was the thing that really helped. The beauty of Microsoft’s position was, one, the partnership with OpenAI. Second, we were already building products like GitHub Copilot. And thankfully ChatGPT happened, because then there was no … We were ready, so once ChatGPT happened, and we had built a product and we had built the stack, it was easy to copy-paste, so to speak, across all of what we were doing.

But a lot of these waves are like that. If I look back, even in the four waves, you could say with Windows we had one, two, and three, but I joined really post-three. And that was what we did: once Windows 3 hit, we knew what to do after. That’s where I think the path … And we were ahead. In some of the others, we were behind, but that’s fine. But in this one we were ahead, and so we executed pretty well, quite frankly, across the length and breadth of the Microsoft opportunity. But as you rightfully point out, it’s still very early. Backstage, you and I were talking about it.

I think I feel it’s a little more like the GUI wave pre-Office or the web wave pre-search. I think we’re still trying to figure out where does the enterprise value truly accrue? Is it in the model? Is it in the infrastructure? Is it in one app category? And I think all that’s to be still litigated.

Soma: We have a point of view on that, but let me turn around and ask you that. If you look at the AI stack today, you’ve got AI infrastructure, you’ve got models, you’ve got applications, what we call intelligent applications. We historically always believe the application layer is where you’re going to have the most, what I call value creation over a period of time. Whether it’s horizontal or vertical or some combination thereof. Do you see that trend also following through here in the AI or do you think differently?

Satya Nadella: It’s a great question. I think that if I look back through all these tech shifts, I think all enterprise value accrues to two things. One is some organizing layer around user experience and some, I’ll call it, change in efficiency at the infrastructure layer. You can say GUI on the client and client server. That was one. Or you could say search as ultimately, although we thought browser for the longest time, but turns out search was the organizing layer of the web. And then SaaS applications and the infrastructure and databases and what have you. And same thing with cloud.

In this case, I think it’s hyperscale. When I look at our business, if you ask the question, five years from now, even in a fully agentic world, what is needed? Lots more compute. In fact, it’s interesting. Take Deep Researcher or what-have-you. Remember, Deep Researcher needs a VM or a container. In fact, it’s the best workload to drive more compute.

And in fact, look at the ratio. Even take ChatGPT. It’s a big Cosmos DB customer; all of its state is in databases. In fact, the way they procure compute is by a ratio between the AI accelerators and the storage and compute. And so being one of the hyperscalers is a good place to be, to be able to build the infrastructure. You’ve got to be best-in-class in terms of scale and cost and what-have-you.

Then I think after that, it gets a little muddy, because what happens to models, what happens to app categories? I think that’s where I think time will tell, but I go back and say each category will be different. Consumer, there’ll be some winner take all network effect. In the enterprise space it’ll be different. That I think is where we are still in the early stages of figuring out, but I think the stable thing that at least I can say with confidence is the world will need more compute.

Soma: I have a lot more things to talk to Satya about but I know that we are running short of time here. I’m going to ask him one more question. You have a very unique vantage point in terms of who you talk to day in and day out, whether it’s Fortune 100 CEOs or whether it is heads of government or what-have-you. You recently mentioned something about one way to think about maybe the impact of AI success is its ability to boost the GDP of a country or the world or whatever it is. That’s a fascinating way to think about what AI’s impact would all be over a period of time. Can you elaborate a little bit on that?

Satya Nadella: Yeah. I think I said that in response to all these benchmarks on AGI and so on. I find that entire … First of all, all the evals are saturated. It’s becoming slightly meaningless. But if you set that aside, just take the simple math. Let’s say you spend $100 billion in CapEx. Then you’ve got to make a return on it, so let’s just say, roughly, you have to make $100 billion a year on it. In order for you to make $100 billion a year, what’s the value you have to create for others?

And it’s multiples of that. And so that isn’t going to happen unless and until there is broad-based economic growth in the world. So that’s why I look at it and say my formula for when we can say AGI has arrived is when, say, the developed world is growing at 10%, which may have been the peak of the Industrial Revolution or what have you. That’s a good benchmark for me, if you ask me what the benchmark is. That is intelligence abundance, and it’s going to drive productivity. I think we should peg ourselves to that. In fact, I believe the social permission for companies to invest what they’re investing, both from the markets and from broader society, will come from our ability to deliver broad sectoral productivity gains that are evidenced in economic growth.
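Satya’s back-of-the-envelope argument can be made concrete. The value multiple below is an illustrative assumption, not a figure from the conversation:

```python
capex = 100e9              # hypothetical annual AI CapEx, per the example
required_revenue = 100e9   # rough annual return needed on that CapEx
value_multiple = 3         # assumed: customers must capture ~3x what they pay

# Value that must show up in customers' businesses for the spend to pencil out.
customer_value = required_revenue * value_multiple
print(f"Implied value created for customers: ${customer_value / 1e9:.0f}B per year")
```

Even at a modest multiple, a single $100B spender implies hundreds of billions of dollars of new value per year, which is why the argument lands on GDP growth rather than on eval scores.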

And by the way, the one other thing that I’m excited about is this time around. It won’t be like the Industrial Revolution in the sense that it’s not going to be about the developed world or the Global North and the Global South. It’s going to be about the entire globe, because guess what? Diffusion is so good that everybody is going to get it at the same time. So that’ll be the other exciting part of it.

Soma: Great. Thank you, Satya. Thank you for being here, and congratulations again.

RAG Inventor Talks Agents, Grounded AI, and Enterprise Impact

 

Listen on Spotify, Apple, and Amazon | Watch on YouTube

From the invention of RAG to how it evolved beyond its early use cases, and why the future lies in RAG 2.0, RAG agents, and system-level AI design, this week’s episode of Founded & Funded is a must-listen for anyone building in AI. Madrona Partner Jon Turow sits down with Douwe Kiela, the co-creator of Retrieval Augmented Generation and co-founder of Contextual AI, to unpack:

  • Why RAG was never meant to be a silver bullet — and why it still gets misunderstood
  • The false dichotomies of RAG vs. fine-tuning and long-context
  • How enterprises should evaluate and scale GenAI in production
  • What makes a problem a “RAG problem” (and what doesn’t)
  • How to build enterprise-ready AI infrastructure that actually works
  • Why hallucinations aren’t always bad (and how to evaluate them)
  • And why he believes now is the moment for RAG agents

Whether you’re a builder, an investor, or an AI practitioner, this is a conversation that will challenge how you think about the future of enterprise AI.


This transcript was automatically generated and edited for clarity.

Jon: So Douwe, take us back to the beginning of RAG. What was the problem that you were trying to solve when you came up with that?

Douwe: The history of the RAG project, we were at Facebook AI Research, obviously, FAIR, and I had been doing a lot of work on grounding already for my PhD thesis, and grounding, at the time, really meant understanding language with respect to something else. It was like if you want to know the meaning of the word cat, like the embedding, word embedding of the word cat, this was before we had sentence embeddings, then ideally, you would also know what cats look like because then you understand the meaning of cat better. So that type of perceptual grounding was something that a lot of people were looking at at the time. Then I was talking with one of my PhD students, Ethan Perez, about, “Can we ground it in something else? Maybe we can ground in other text instead of in images.” The obvious source at the time to ground in was Wikipedia.

We would say, “This is true, sort of true,” and then you can understand language with respect to that ground truth. That was the origin of RAG. Ethan and I were looking at that, and then we found that some folks in London had been working on open-domain question answering, mostly Sebastian Riedel and Patrick Lewis, and they had amazing first models in that space, and it was a very interesting problem: how can I make a generative model work on any type of data and then answer questions on top of it? We joined forces there. We happened to get very lucky at the time because Facebook had FAISS (Facebook AI Similarity Search, I think, is what it stands for), basically the first vector database, and it was just there. And so we were like, we have to take the output from the vector database and give it to a generative model. This was before we called them language models. Then the language model can generate answers grounded on the things you retrieve. And that became RAG.

We always joke with the folks who were on the original paper that we should have come up with a much better name than that, but somehow, it stuck. This was by no means the only project that was doing this, there were people at Google working on very similar things, like REALM is an amazing paper from around the same time. Why RAG, I think, stuck was because the whole field was moving towards gen AI, and the G in RAG stands for generative. We were really the first ones to show that you could make this combination of a vector database and a generative model actually work.
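The retrieve-then-generate loop Douwe describes can be sketched in a few lines. Everything below is an illustrative assumption: the toy corpus, the word-overlap scorer standing in for a real vector index like FAISS, and the prompt format are stand-ins, not the original system.

```python
import re

# Toy corpus standing in for Wikipedia; purely illustrative.
CORPUS = [
    "RAG stands for retrieval-augmented generation.",
    "FAISS is a library for efficient vector similarity search.",
    "Wikipedia was the grounding corpus in the original RAG paper.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> float:
    """Word-overlap stand-in for the similarity a real vector index computes."""
    q, d = tokens(query), tokens(doc)
    return len(q & d) / len(q | d)

def retrieve(query: str, k: int = 2) -> list[str]:
    """The 'R' in RAG: return the k best-matching passages."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the generator: retrieved passages become the model's context."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What does RAG stand for?"))
```

A production pipeline replaces the scorer with dense embeddings and an approximate-nearest-neighbor index, but the control flow, retrieve and then condition generation on the results, is the same.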

Jon: There’s an insight in here that RAG, from its very inception, was multimodal. You were starting with image grounding, and things like that, and it’s been heavily language-centric in the way people have applied it. But from that very beginning place, were you imagining that you were going to come back and apply it with images?

Douwe: We had some papers from around that time. There’s a paper we did with more applied folks in Facebook where we were looking at, I think it was called Extra, and it was basically RAG but then on top of images. That feels like a long time ago now, but that was always very much the idea, is you can have arbitrary data that is not captured by the parameters of the generative model, and you can do retrieval over that arbitrary data to augment the generative model so that it can do its job. It’s all about the context that you give it.

Jon: Well, this takes me back to another common critique of these early generative models that, for the amazing Q&A that they were capable of, the knowledge cutoff was really striking, you’ve had models in 2020 and 2021 that were not aware of COVID-19, that obviously was so important to society. Was that part of the motivation? Was that part of the solve, that you can make these things fresher?

Douwe: Yeah, it was part of the original motivation. That is what grounding is, the vision behind the original RAG project. We did a lot of work after that on that question as well: can I have a very lightweight language model that basically has no knowledge, it’s very good at reasoning and speaking English or any language, but it knows nothing? It has to rely completely on this other model, the retriever, which does a lot of the heavy lifting to ensure that the language model has the right context, but they really have separate responsibilities. Getting that to work turned out to be quite difficult.

Jon: Now, we have RAG, and we still have this constellation of other techniques: we have training, and we have tuning, and we have in-context learning, and that was, I’m sure, very hard to navigate for research labs, let alone enterprises. In the conception of RAG, in the early implementations of it, what was in your head about how RAG was going to fit into that constellation? Was it meant to be standalone?

Douwe: It’s interesting because the concept of in-context learning didn’t really exist at the time, that really became a thing with GPT-3, and that’s an amazing paper and proof point that that actually works, and I think that unlocked a lot of possibilities. In the original RAG paper, we have a baseline, what we call the frozen baseline, where we don’t do any training and we give it as context, that’s in table six, and we showed that it doesn’t really work, or at least, that you can do a lot better if you optimize the parameters. In-context learning is great, but you can probably always beat it through machine learning if you are able to do that. If you have access to the parameters, which is, obviously, not the case with a lot of these black box frontier language models, but if you have access to the parameters and you can optimize them for the data you’re working on or the problem you’re solving, then at least, theoretically, you should always be able to do better.

I see a lot of false dichotomies around RAG. The one I often hear is it’s either RAG or fine-tuning. That’s wrong, you can fine-tune a RAG system and then it would be even better. The other dichotomy I often hear is it’s RAG or long-context. Those are the same thing, RAG is a different way to solve the problem where you have more information than you can put in the context. One solution is to try to grow the context, which doesn’t really work yet even though people like to pretend that it does, the other is to use information retrieval, which is pretty well established as a computer science research field, and leverage all of that and make sure that the language model can do its job. I think things get oversimplified where it’s like, “You should be doing all of those things. You should be doing RAG, you should have a long-context window as long as you can get, and you should fine-tune that thing.” That’s how you get the best performance.

Jon: What has happened since then is that, and we’ll talk about how this is all getting combined in more sophisticated ways today, but I think it’s fair to say that in the past 18, 24, 36 months, RAG has caught fire and even become misunderstood as the single silver bullet. Why do you think it’s been so seductive?

Douwe: It’s seductive because it’s easy. Honestly, I think long-context is even more seductive if you’re lazy, because then you don’t even have to worry about the retrieval anymore, the data, you put it all there and you pay a heavy price for having all of that data in the context. Every single time you’re answering a question about Harry Potter, you have to read the whole book in order to answer the question, which is not great. So RAG is seductive, I think, because you need to have a way to get these language models to work on top of your data. In the old paradigm of machine learning, we would probably do that in a much more sophisticated way, but because these frontier models are behind black box APIs and we have no access to what they’re actually doing, the only way to really make them do the job on your data is to use retrieval to augment them. It’s a function of what the ecosystem has looked like over the past two years since ChatGPT.

Jon: We’ll get to the part where we’re talking about how you need to move beyond a cool demo, but I think the power of a cool demo should not be underestimated, and RAG enables that. What are some of the aha moments that you see with enterprise executives?

Douwe: There are lots of aha moments, I think that’s part of the joy of my job. It’s where you get to show what this can do, and it’s amazing what these models can do. A basic aha moment for us is that accuracy is almost table stakes at this point. It’s like, okay, you have some data, it’s like one document, you can probably answer lots of questions about that document pretty well. It becomes much harder when you have a million documents or tens of millions of documents and they’re all very complicated or they have very specific things in them. We’ve worked with Qualcomm, and there are circuit design diagrams inside those documents; it’s much harder to make sense of that type of information. The initial wow factor, at least from people using our platform, is that you can stand this up in a minute. I can build a state-of-the-art RAG agent in three clicks, basically.

That time to value used to be very difficult to achieve, because you had your developers, and they had to think about the optimal chunking strategy for the documents, things that you really don’t want your developers thinking about, but they had to because the technology was so immature. The next generation of these systems and platforms for building RAG agents is going to enable developers to think much more about business value and differentiation, essentially, “How can I be better than my competitors because I’ve solved this problem so much better?” Your chunking strategy should not be important for solving that problem.
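A “chunking strategy” here is just how documents get split into passages before indexing. A minimal sketch, assuming a fixed-size word window with overlap (the default sizes are arbitrary for illustration, not recommendations):

```python
def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into windows of `size` words, adjacent windows sharing
    `overlap` words so facts at chunk boundaries are not cut in half."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

# A 120-word text yields three overlapping 50-word chunks.
parts = chunk(" ".join(f"w{i}" for i in range(120)))
print(len(parts))
```

Hand-tuning `size` and `overlap` per corpus is exactly the kind of work Douwe argues a platform should handle with good defaults.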

Jon: Also, if I now connect what we were just talking about to what you said now, the seduction of long-context and RAG are that it’s straightforward and easy, and it plugs into my existing architecture. As a CTO, if I have finite resources to go implement new pieces of technology, let alone dig into concepts like chunking strategies, and how the vector similarity for non-dairy will look similar to the vector similarity for milk, things like this, is it fair to say that CTOs are wanting something coherent, that can be something that works out of the box?

Douwe: You would think so, and that’s probably true for CTOs, and CIOs, and CAIOs, and CDOs, and the folks who are thinking about it from that level. But then what we often find is that we talk to these people and they talk to their architects and their developers, and those developers love thinking about chunking strategies, because that’s what it means in a modern era, to be an AI engineer is to be very good at prompt engineering and evaluation and optimizing all the different parts of the RAG stack. It’s very important to have the flexibility to play with these different strategies, but you need to have very, very good defaults so that these people don’t have to do that unless they really want to squeeze the final percent, and then they can do that.

That’s what we are trying to offer, is you don’t have to worry about all this basic stuff, you should be thinking about how to really use the AI to deliver value. It’s really a journey. The maturity curve is very wide and flat. It’s like some companies are figuring it out, it’s like, “What use case should I look at?” And others have a full-blown RAG platform that they built themselves based on completely wrong assumptions for where the field is going to go, and now, they’re stuck in this paradigm, it’s all over the place, which means it’s still very early in the market.

Jon: Take me through some of the milestones on that maturity curve, from the cool demo all the way through to the ninja level results.

Douwe: The timeline is, 2023 was the year of the demo, ChatGPT had just happened, everybody was playing with it, there was a lot of experimental budget, last year has been about trying to productionize it, and you could probably get promoted if you were in a large enterprise, if you were the first one to ship genAI into production. There’s been a lot of kneecapping of those solutions happening in order to be the first one to get it into production.

Jon: First-past-the-post.

Douwe: First-past-the-post, but in a limited way, because it is very hard to get the real thing past the post. This year, people are really under a lot of pressure to deliver return on investment for all of those AI investments and all of the experimentation that has been happening. It turns out that getting that ROI is a very different question. That’s where you need a lot of deep expertise around the problem, but you also need better components than the ones that exist out there in open-source, easy frameworks. You can cobble together a Frankenstein RAG solution that’s great for the demo, but that doesn’t scale.

Jon: How do your customers think about the ROI? How do they measure or perceive it?

Douwe: It really depends on the customer. Some are very sophisticated, trying to think through the metrics, like, “How do I measure it? How do I prioritize it?” I think a lot of consulting firms are trying to be helpful there as well, thinking through, “Okay, this use case is interesting, but it touches 10 people. They’re very highly specialized, but we have this other use case that has 10,000 people. They’re maybe slightly less specialized, but there’s much more impact there.” It’s a trade-off. I think my general stance on use case adoption is that I see a lot of people aiming too low, where it’s like, “Oh, we have AI running in production.” It’s like, “Oh, what do you have?” It’s like, “Well, we have something that can tell us who our 401(k) provider is, and how many vacation days I get.”

And that’s nice, but is that where you get the ROI of AI from? Obviously not. You need to move up in terms of complexity, or if you think of the org chart of the company, you want to go for these specialized roles where they have really hard problems, and if you can make them 10, 20% more effective at that problem, you can save the company tens or hundreds of millions of dollars by making those people better at their job.

Jon: There’s an equation you’re getting at, which is the complexity, sophistication of the work being done times the number of employees that it impacts.

Douwe: There’s roughly two categories for gen AI deployment, one is cost savings. So I have lots of people doing one thing, if I make all of them slightly more effective, then I can save myself a lot of money. The other is more around business transformation and generating new revenue. That second one is obviously much harder to measure, and you need to think through the metrics, like, “What am I optimizing for here?” As a result of that, I think you see a lot more production deployments in the former category where it’s about cost-saving.

Jon: What are some big misunderstandings that you see around what the technology is or is not capable of?

Douwe: I see some confusion around the gap between demo and production. A lot of people think, “Oh, yeah, it’s great, I can easily do this myself.” Then it turns out that everything breaks down after a hundred documents, and they have a million. That is the most common one that we see. There are other misconceptions, maybe, around what RAG is good for and what it’s not. What is a RAG problem and what is not a RAG problem? People don’t have the same mental model that maybe AI researchers like myself have. If I give them access to a RAG agent, often the first question they ask is, “What’s in the data?” That is not a RAG problem, or it’s a RAG problem on the metadata, not on the data itself. A RAG question would be like, what was, I don’t know, Meta’s R&D expense in Q4 of 2024, and how did it compare to the previous year? Something like that.

It’s a specific question where you can extract the information and then reason over it and synthesize that different information. A lot of questions that people like to ask are not RAG problems. It’s like, summarize the document is another one. Summarization is not a RAG problem. Ideally, you want to put the whole document in a context and then summarize it. There are different strategies that work well for different questions, and why ChatGPT is such a great product is because they’ve abstracted away some of those decisions that go into it, but that’s still very much happening under the surface. I think people need to understand better what type of use case they have. If I’m a Qualcomm customer engineer and I need very specific answers to very specific questions, that’s very clearly a RAG problem. If I need to summarize the document, put that in context of a long-context model.

Jon: Now, we have Contextual, which is an amalgamation of multiple techniques, and you have what you call RAG 2.0, and you have fine-tuning, and there’s a lot of things that happen under the covers that customers ideally don’t have to worry about until they choose to do so. I expect that changes radically the conversation you have with an enterprise executive. How do you describe the kinds of problems that they should go find and apply and prioritize?

Douwe: We often help people with use case discovery. So, thinking through, okay, what are the RAG problems, what are maybe not RAG problems? Then for the RAG problems, how do you prioritize them? How do you define success? How do you come up with a proper test set so that you can evaluate whether it actually works? What is the process for, after that, doing what we call UAT, user acceptance testing? Putting it in front of real people, that’s really the thing that matters, right? Sometimes, we see production deployments, and they’re in production, and then I ask them, “How many people use this?” And the answer is zero. During the initial UAT, everything was great and everybody was saying, “Oh, yeah, this is so great.” Then when your boss asks you the question and your job is on the line, then you do it yourself, you don’t ask AI in that particular use case. It’s a transformation that a lot of these companies still have to go through.

Jon: Do the companies want support through that journey today, either direct for Contextual or from a solution partner, to get such things implemented?

Douwe: It’s very tempting to pretend that AI products are mature enough to be fully self-serve and standalone. It’s decent if you do that, but in order to get it to be great, you need to put in the work. We do that for our customers or we can also work through systems integrators who can do that for us.

Jon: I want to talk about two sides of the organization that you’ve had to build in order to bring all this for customers. One is scaling up the research and engineering function to keep pushing the envelope. There are a couple of very special things that Contextual has, something you call RAG 2.0, something you call active versus passive retrieval. Can you talk about some of those innovations that you’ve got inside Contextual and why they’re important?

Douwe: We really want to be a frontier company, but we don’t want to train foundation models. Obviously, that’s a very, very capital intensive business, I think language models are going to get commoditized. The really interesting problems are around how do you build systems around these models that solve the real problem? Most of the business problems that we encounter, they need to be solved by a system. Then there are a ton of super exciting research problems around how do I get that system to work well together? That’s what RAG 2.0 is in our case, how do you jointly optimize these components so that they can work well together? There’s also other things like making sure that your generations are very grounded. It’s not a general language model, it’s a language model that has been trained specifically for RAG and RAG only. It’s not doing creative writing, it can only talk about what’s in the context.

Similarly, when you build these production systems, you need to have a state-of-the-art re-ranker. Ideally, that re-ranker can also follow instructions. It’s a smarter model. There’s a lot of innovative stuff that we’re doing around building the RAG pipeline better and then how you incorporate feedback into that RAG pipeline as well. We’ve done work on KTO, and APO, and things like that, so different ways to incorporate human preferences into entire systems and not just models. That takes a very special team, which we have, I’m very proud of.

Jon: Can you talk about active versus passive retrieval?

Douwe: Passive retrieval is basically old-school RAG. It’s like I get a query, and I always retrieve, and then I take the results of that retrieval, and I give them to the language model, and it generates. That doesn’t really work. Very often, you need the language model to think, first of all, where am I going to retrieve it from and how am I going to retrieve it? Are there maybe better ways to search for the thing I’m looking for than copy and pasting the query? Modern production RAG pipelines are already way more sophisticated than having a vector database and a language model. One of the interesting things that you can do in the new paradigm of agentic things and test-time reasoning is decide for yourself if you want to retrieve something. It’s active retrieval. It’s like if you give me a query like, “Hi, how are you?” I don’t have to retrieve in order to answer that. I can just say, “I’m doing well, how can I help you?”

Then you ask me a question and now I decide that I need to go and retrieve. Maybe I make a mistake with my initial retrieval, so then I need to go and think, “Oh, actually, maybe I should have gone here instead.” That’s active retrieval, and that’s all getting unlocked now. This is what we call RAG agents, and this really is the future, I think, because agents are great, but we need a way to get them to work on your data, and that’s where RAG comes in.
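The passive/active distinction is essentially a control-flow change: passive RAG always retrieves, while active retrieval first decides whether the query needs retrieval at all. The small-talk gate and the stub retriever below are illustrative assumptions; in a real system the model's own test-time reasoning makes this decision.

```python
SMALL_TALK = {"hi", "hello", "thanks", "how are you"}

def needs_retrieval(query: str) -> bool:
    """Gate for active retrieval; a toy heuristic standing in for the
    model's own decision about whether to hit the index."""
    return query.lower().strip(" ?!.") not in SMALL_TALK

def retrieve_docs(query: str) -> list[str]:
    """Stub retriever; a real system queries a vector or hybrid index."""
    return [f"[passage relevant to: {query}]"]

def answer(query: str) -> str:
    if not needs_retrieval(query):       # active: skip retrieval entirely
        return "I'm doing well, how can I help you?"
    docs = retrieve_docs(query)          # passive RAG would always do this
    return f"Answer to {query!r} grounded in {len(docs)} passage(s)."

print(answer("How are you?"))
print(answer("What was Meta's R&D expense in Q4 of 2024?"))
```

The re-retrieval step Douwe mentions (noticing the first retrieval missed and trying a different source) would wrap this gate-and-retrieve step in a loop driven by the model's reasoning.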

Jon: This implies two relationships between Contextual’s RAG and the agent. There is the supplying of information to the agent so that it can be performant, but if I probe into what you said, active retrieval implies a certain kind of reasoning, maybe even longer reasoning, about, “Okay, what is the best source of the information that I’ve been asked to provide?”

Douwe: Exactly. I enjoy saying everything is contextual, and that’s very true for an enterprise. The context that the data exists in really matters for the reasoning that the agent does in terms of finding the right information. That all comes together in these RAG agents.

Jon: What is a really thorny problem that you’d like your team and the industry to try and attack in the coming years?

Douwe: The most interesting problems that I see everywhere in enterprises are at the intersection of structured and unstructured. We have great companies working on unstructured data, there are great companies working on structured data, but once you have the capability, which we’re starting to have now, where you can reason over both of these very different data modalities using the same model, then that unlocks so many cool use cases. That’s going to happen this year or next year, just thinking through the different data modalities and how you can reason on top of all of them with these agents.

Jon: Will that happen under the covers with one common piece of infrastructure or will it be a coherent single pane of glass across many different Lego bricks?

Douwe: I’d like to think that it would be one solution, and that is our platform, which can do all of that.

Jon: Let’s imagine that, but behind the covers, will you be accomplishing that with many different components each handling the structured versus unstructured?

Douwe: They are different components, despite what some people maybe like to pretend, I can always train up a better text-to-SQL model if I specialize it for text-to-SQL, than taking a generic off-the-shelf language model and telling it, “Generate some SQL query.” Specialization is always going to be better than generalization for specific problems, if you know what the problem is that you’re solving, the real question is much more around is it worth actually investing the money to do that? It costs money to specialize and it sometimes hampers economies of scale that you might want to have.

Jon: If I look at the other side of your organization that you’ve had to build, so you’ve had to build a very sophisticated research function, but Contextual is not a research lab, it’s a company, so what are the other kinds of disciplines and capabilities you’ve had to build up at Contextual that complement all the research that’s happening here?

Douwe: First of all, I think our researchers are really special in that we’re not focused on publishing papers or being too far out on the frontier. As a company, I don’t think you can afford that until you’re much bigger, unless you’re like Zuck and you can afford to have FAIR. The stuff I was working on at FAIR at the time, I was doing Wittgensteinian language games and all kinds of crazy stuff that I would never let people do here, honestly. But there’s a place for that, and that’s not a startup. The way we do research is we’re very much looking at what the customer problems are that we think we can solve better than anybody else, and then focusing, thinking from the system’s perspective about all of these problems: how can we make sure that we have the best system, and then make that system jointly optimized and really specialized, or specializable, for different use cases? That’s what we can do.

That means that it’s a very fluid boundary between pure research and applied research, basically. All of our research is applied. In AI right now, I think there’s a very fine line between product and research, where the research basically is the product, and that’s not only true for us, I think it’s true for OpenAI, Anthropic, everybody. The field is moving so quickly that you have to productize research almost immediately. As soon as it’s ready, you don’t even have time to write a paper about it anymore; you have to ship it into product very quickly because it is such a fast-moving space.

Jon: How do you allocate your research attention? Is there some element of play, even 5%, 10%?

Douwe: The team would probably say not enough.

Jon: But not zero?

Douwe: As a researcher, you always want to play more but you have limited time. So yeah, it’s a trade-off, I don’t think we’re officially committing. We don’t have a 20% rule or something like Google would have, it’s more like we’re trying to solve cool problems as quickly as we can, and hopefully, have some impact on the world. Not work in isolation, but try to focus on things that matter.

Jon: I think I’m hearing you say that it’s not zero, even in an environment with finite resources and moving fast?

Douwe: Every environment has finite resources. It’s more like if you want to do special things, then you need to try new stuff. That’s, I think, very different for AI companies, or AI-native companies like us. If you compare this generation of companies with SaaS companies, there, the whole LAMP stack and everything was already in place; you basically had to go and implement it. That’s not the case here. We’re very much figuring out what we’re doing, flying the airplane as we’re building it sort of thing, which is exciting, I think.

Jon: What is it like to now take this research that you’re doing and go out into the world and have that make contact with enterprises? What has that been like for you personally, and what has that been like for the company to transform from research-led to a product company?

Douwe: That’s my personal journey as well. I started off, I did a PhD, I was very much a pure research person, and I slowly transitioned to where I am now, where the key observation is that the research is the product. This is a special point in time; it’s not always going to be like that. That’s been a lot of fun, honestly. I was on a podcast a while back and they asked me, “What other job would you think is interesting?” And I said, “Maybe being the head of AI of JP Morgan.” And they were like, “Really?”

And I was like, “Well, I think, actually, right now, at this particular point in time, that is a very interesting job.” Because you have to think about how am I going to change this giant company to use this latest piece of technology that is frankly going to change everything, going to change our entire society. For me, it gave me a lot of joy talking to people like that and thinking about what the future of the world is going to look like.

Jon: I think there’s going to be people problems, and organizational problems, and regulatory and domain constraints that fall outside the paper.

Douwe: Honestly, I would argue that those are the main problems still to overcome. I don’t care about AGI and all of those discussions; the core technology is already here for huge economic disruption. All the building blocks are here. The questions are more around how do we get lawyers to understand that? How do we get the MRM people to figure out what is an acceptable risk? One thing that we are very big on is not thinking about the accuracy but about the inaccuracy: if you have 98% accuracy, what do you do with the remaining 2% to make sure that you can mitigate that risk? A lot of this is happening right now. There’s a lot of change management that we’re going to need to do in these organizations. All of that is outside of the research questions. We have all the pieces to completely disrupt the global economy right now; it’s a question of executing on it, which is scary and exciting at the same time.

Jon: Douwe, you and I have had a conversation many times about different archetypes of founders and their capabilities. There’s one lens that stuck with me that has three click stops on it. A, there is the domain expert, who has expertise in, say, revenue cycle management but may not be that technical at all. B, there is somebody who is technical and able to write code but is not a PhD researcher, and Mark Zuckerberg is a really famous example of that. Then, C, there’s the research founder, who has deep technical capabilities and advanced vision into the frontier. What do you see as the role for each of those types of founders in the next wave of companies that needs to get built?

Douwe: That’s a very interesting question. I would ask, how many PhDs does Zuck have working for him? That’s a lot, right?

Jon: That’s a lot.

Douwe: I don’t think it matters how deep your expertise in a specific domain is, as long as you are a good leader and a good visionary, then you can recruit the PhDs to go and work for you. At the same time, obviously, it gives you an advantage if you are very deep in one field and that field happens to take off, which is what happened to me. I got very lucky, with a lot of timing there as well. Overall, one underlying question you’re asking there is around AI wrapper companies, for example. To what extent should companies go horizontal and vertical using this technology?

There’s been a lot of disdain for these wrapper companies like, “Oh, that’s a wrapper for OpenAI.” It’s like, “Well, it turns out you can make an amazing business just from that, right?” I think Cursor is like Anthropic’s biggest customer right now. It’s fine to be a wrapper company as long as you have an amazing business. People should have a lot more respect for companies building on top of fundamental new technology and discovering whole new business problems that we didn’t really know existed, and then solving them much better than anything else.

Jon: Well, so I’m really thinking also about the comment you made, that we have a lot of technology that is capable of a lot of economic impact, even today, without new breakthroughs that, yes, we’ll also get. Does that change the next types of companies that should be founded in the coming year?

Douwe: I think so. I am also learning a lot of this myself, about how to be a good founder, basically. It’s always good to plan for what’s going to come and not for what is here right now, and that’s how you get to ride that wave in the right way. What’s going to come is that a lot of this stuff is going to become much more mature. One of the big problems we had even two years ago was that AI infrastructure was very immature. Everything would break down all the time. There were bugs in the attention-mechanism implementations of the frameworks we were using, really basic stuff. All of that has been solved now. With that maturity also comes the ability to scale much better, and to think much more rigorously around cost-quality trade-offs and things like that. There’s a lot of business value just right there.

Jon: What do new founders ask you? What kind of advice do they ask you?

Douwe: They ask me a lot about this wrapper company thing, and moats, and differentiation. There’s some fear that incumbents are going to eat everything, because they obviously have amazing distribution. But there are massive opportunities for companies to be AI native and to think from day one as an AI company. If you do that right, then you have a massive opportunity to be the next Google, or Facebook, or whatever, if you play your cards right.

Jon: What is some advice that you’ve gotten, and I’ll ask you to break it into two, what is advice that you’ve gotten that you disagree with, and what do you think about that? And then what is advice that you’ve gotten that you take a lot from?

Douwe: Maybe we can start with the advice I really like, which is one observation around why Facebook is so successful: be fluid like water. Whatever the market is telling you or your users are telling you, fit into that. Don’t be too rigid about what is right and wrong; be humble, look at what the data tells you, and then try to optimize for that. That is advice that when I got it, I didn’t really appreciate it fully, and I’m starting to appreciate it much more right now. Honestly, it took me too long to understand that. In terms of advice that I’ve gotten that I disagree with, it’s very easy for people to say, “You should do one thing and you should do it well.” Sure, maybe, but I’d like to be more ambitious than that. We could have been one small part of a RAG stack and we probably would’ve been the best in the world at that particular thing, but then we’re slotting into this ecosystem where we’re a small piece, and I want the whole pie ideally.

Then that’s why we’ve invested so much time in building this platform, making sure that all the individual components are state-of-the-art and that they’ve been made to work together so that you can solve this much bigger problem, but yet, that is also a lot harder to do. Not everyone would give me the advice that I should not go and solve that hard problem, but I think over time, as a company, that is where your moat comes from, doing something that everybody else thinks is kind of crazy. So that would be my advice to founders, is go and do something that everybody else thinks is crazy.

Jon: You’re probably going to tell me that that reflects in the team that comes to join you?

Douwe: Yeah, the company is the team, especially the early team. We’ve been very fortunate with the people who joined us early on, and that is what the company is. It’s the people.

Jon: If I piggyback a little bit and we get back into the technology for a minute, there’s a common question, maybe even misunderstanding that I hear about RAG, that, “Oh, this is the thing that’s going to solve hallucinations.” You and I have spoken about this many times, where is your head at right now on what hallucinations are, what they are not? Does RAG solve it? What’s the outlook there?

Douwe: I think hallucination is not a very technical term. We used to have a pretty good word for it: accuracy. If you were inaccurate, if you were wrong, then to explain that, or to anthropomorphize it, people would say, “Oh, the model hallucinated.” I think it’s a very ill-defined term, honestly. If I had to try to turn it into a technical definition, I would say the generation of the language model is not grounded in the context that it is given, where it is told that that context is true. Basically, hallucination is about groundedness. If you have a model that adheres to its context, then it will hallucinate less. Hallucination itself is arguably a feature for a general-purpose language model, not a bug. If you have a creative writing or marketing use case, like content generation, I think hallucination is great for that, as long as you have a way to fix it; you probably have a human somewhere double-checking it and rewriting some stuff.
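As a toy illustration of groundedness, here is a check of whether each generated sentence is supported by the supplied context. This word-overlap heuristic is invented for illustration only; grounded language models are trained and evaluated with far more sophisticated methods than lexical overlap.

```python
# Naive groundedness check: a generated sentence counts as "grounded"
# if most of its content words appear in the supplied context.
# Real systems use trained grounding/entailment models, not word overlap.

def content_words(text: str) -> set[str]:
    stopwords = {"the", "a", "an", "is", "was", "of", "in", "to", "and"}
    return {w.strip(".,").lower() for w in text.split()} - stopwords

def is_grounded(sentence: str, context: str, min_overlap: float = 0.6) -> bool:
    words = content_words(sentence)
    if not words:
        return True
    overlap = len(words & content_words(context)) / len(words)
    return overlap >= min_overlap

context = "The 2023 report states that revenue was 4.2 billion dollars."
print(is_grounded("Revenue was 4.2 billion dollars.", context))  # True
print(is_grounded("Profit doubled due to a merger.", context))   # False
```

A model that "adheres to its context," in this crude sense, would score high on every sentence it emits; an ungrounded (hallucinated) claim scores low because the context never mentions it.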

So hallucination itself is not even a bad thing necessarily, it is a bad thing if you have a RAG problem though and you cannot afford to make a mistake. Then that’s why we have a grounded language model that has been trained specifically not to hallucinate, or to hallucinate less. Because one other misconception that I sometimes see is that people think that these probabilistic systems can have 100% accuracy, and that is a pipe dream. It’s the same with people. If you look at a big bank, there are people in these banks and people make mistakes too.

Jon: SEC filings have mistakes.

Douwe: Exactly. The whole reason we have the SEC, and that this is a regulated market, is so that we have mechanisms built into the market so that if a person makes a mistake, then at least we made reasonable efforts to mitigate the risk around that. It’s the same with AI deployments. That’s why I’m talking about how to mitigate the risk of inaccuracies. We’re not going to get it to 100%, so you need to think about the 2, 3, 5, or 10%, depending on how hard the use case is, where you might still not be perfect. How do you deal with that?
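One common pattern for dealing with that residual error rate, sketched here with invented confidence scores and thresholds rather than any specific vendor’s mechanism, is to escalate low-confidence outputs to human review instead of returning them directly:

```python
# Toy escalation policy: answers below a confidence threshold are
# routed to a human reviewer instead of being surfaced automatically.
# The scores and the 0.9 threshold are illustrative only.

def triage_answer(answer: str, confidence: float, threshold: float = 0.9):
    """Return (disposition, answer) for a model output."""
    if confidence >= threshold:
        return ("auto", answer)          # safe to surface directly
    return ("human_review", answer)      # the 2-10% tail goes to a person

answers = [
    ("Revenue grew 12% year over year.", 0.97),
    ("The filing mentions a pending lawsuit.", 0.62),
]

for text, conf in answers:
    disposition, _ = triage_answer(text, conf)
    print(disposition, "->", text)
```

The point is not the threshold itself but that the deployment makes an explicit, auditable decision about what happens to the outputs it cannot vouch for, much like the market mechanisms Douwe mentions.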

Jon: What are some of the things that you might’ve believed a year ago about AI adoption or AI capabilities that you think very differently about today?

Douwe: Many things. The main thing I thought that turned out not to be true was that I thought this would be easy.

Jon: What is this?

Douwe: Building the company and solving real problems with AI. We were very naive, especially in the beginning of the company. We were like, “Oh, yeah, we just get a research cluster. Get a bunch of GPUs in there. We train some models, it’s going to be great.” Then it turned out that getting a working GPU cluster was very hard. And then it turned out that training something on that GPU cluster in a way that actually works was hard too; if you’re using other people’s code, maybe that code is not that great yet. You have to build your own framework for a lot of the stuff that you’re doing if you want to make sure that it’s really, really good. We had to do a lot of plumbing that we did not expect to have to do. Now, I’m very happy that we did all that work, but at the time, it was very frustrating.

Jon: What are we, either you and I, or we, the industry, not talking about nearly enough that we should be?

Douwe: Evaluation. I’ve been doing a lot of work on evaluation in my research career, things like Dynabench, where it was about how we might get rid of benchmarks altogether and have a more dynamic way to measure model performance. Evaluation is very boring. People don’t seem to care about it. I care deeply about it, so that always surprises me. We did this amazing launch, I thought, around LMUnit, which is natural-language unit testing. You have a response from a language model, and now you want to check very specific things about that response: did it contain this? Did it not make this mistake? Ideally, you can write unit tests as a person for what a good response looks like. You can do that with our approach. We have a model that is by far state-of-the-art at verifying that these unit tests are passing or failing.
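A minimal sketch of what natural-language unit testing over a model response could look like. The `stub_judge` below is an invented keyword heuristic standing in for the trained judge model a system like LMUnit actually uses; the response and tests are illustrative, not from any real evaluation set.

```python
import re

# Toy natural-language unit testing for an LLM response.
# In a real setup, a trained judge model scores each test in natural
# language; here a stub judge stands in so the sketch is self-contained.

def stub_judge(response: str, unit_test: str) -> bool:
    """Pretend judge: checks that quoted terms appear in the response.
    Tests containing the word 'avoid' invert the check."""
    terms = re.findall(r"'([^']+)'", unit_test)
    present = all(t.lower() in response.lower() for t in terms)
    return not present if "avoid" in unit_test.lower() else present

response = "Q3 revenue was $4.2B, up 12% year over year."

unit_tests = [
    "Does the response state the 'revenue' figure?",
    "Does the response mention the growth rate '12%'?",
    "Does the response avoid speculating about 'guidance'?",
]

for test in unit_tests:
    print("PASS" if stub_judge(response, test) else "FAIL", "-", test)
```

The design idea is that domain experts write the tests in plain English, and a verification model, rather than a brittle string match, decides pass or fail.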

I think this is awesome. I love talking about this, but people don’t seem to really care. It’s like, “Oh, yeah, evaluation. Yeah, we have a spreadsheet somewhere with 10 examples.” How is that possible? That’s such an important problem. When you deploy AI, you need to know if it works or not, and you need to know where it falls short, and you need to have trust in your deployment, and you need to think about the things that might go wrong, and all of that. It’s been very surprising to me just how immature a lot of companies are when it comes to evaluation, and this includes huge companies.

Jon: Garry Tan posted on social media not too long ago, that evaluation is the secret weapon of the strongest AI application companies.

Douwe: Also, AI research companies, by the way. So OpenAI and Anthropic, part of why they’re so great is because they’re amazing at evaluation too. They know exactly what good looks like. That’s also why we are doing all of that in-house; we’re not outsourcing evaluation to somebody else. If you are an AI company and AI is your product, then you can only assess the quality of your product through evaluation. It’s core to all of these companies.

Jon: Whoever is lucky enough to get that cool JP Morgan head of AI job that you would be doing in another life: is what the evals really need to look like the intellectual property of JP Morgan, or is this something that they can ultimately ask Contextual to cover for them?

Douwe: No. I think the tooling for evaluation, they can use us for, but the actual expertise that goes into that evaluation, the unit tests, they should write that themselves. In the limit, we talked about a company is its people, but in the limit, that might not even be true, because there might be AI mostly, and maybe only a few people. What makes a company a company is its data, and the expertise around that data and the institutional knowledge. That is what defines a company. That should be captured in how you evaluate the systems that you deploy in your company.

Jon: I think we can leave it there. Douwe Kiela, thank you so much. This was a lot of fun.

Douwe: Thank you.

Dropzone’s Edward Wu on Solving Security’s Biggest Bottleneck

Listen on Spotify, Apple, and Amazon | Watch on YouTube

This week, Partner Vivek Ramaswami hosts Edward Wu, the founder of 2024 IA40 winner Dropzone, which is building a next-generation AI security operations center. Edward decided to take the leap and start his own company after spending eight years at ExtraHop, where he rose to the role of senior principal scientist, leading AI/ML and detection. Now at Dropzone, he’s tackling some of the most pressing challenges at the intersection of AI and cybersecurity.

On this episode, they explore Edward’s decision to leave ExtraHop to build Dropzone, his thoughts on why generative AI is uniquely suited to addressing alerts and investigation in cybersecurity, and how Dropzone is redefining the role of AI in the security operations center. They unpack Edward’s decision to leap into entrepreneurship, how he landed key customers like UiPath, and why transparency is vital in a category often skeptical of AI. He also shares his perspectives on how AI unlocks new opportunities in cybersecurity, along with lessons he learned as a first-time solo founder.


This transcript was automatically generated and edited for clarity.

Edward: My pleasure.

Vivek: Let’s kick off with having you share a little bit about your journey into security. What sparked your interest in the space?

Edward: I would say, quite similar to a lot of security practitioners, I grew up playing with computers, playing games, cracking games, and I think that’s what got me started with security, because a lot of the, you can say, skills or tools you use to crack games or cheat in games carry over to reverse engineering and malware analysis. Then, after I got into my undergrad program at UC Berkeley, I made the decision to eventually pursue a PhD in cybersecurity, and that’s where I spent three years of my undergrad doing cybersecurity-related research, like automated malware analysis, binary analysis, and reverse engineering Android apps.

Vivek: Yeah, that’s great. So, even back then, you were thinking about security and cybersecurity, and obviously there were a lot of attacks and things like that, even back then. You spent eight years at ExtraHop, which is a Madrona portfolio company, and eventually became the senior principal scientist and led AI/ML and detection there. Tell us a little bit about that journey, and then you can tell us a little bit about why you decided to leave and launch your own company in Dropzone.

Edward: ExtraHop was definitely a very fun ride for me. I joined when I decided to quit my PhD, due to a variety of reasons. Part of it was that cybersecurity academic research, frankly, is not as interesting as the real thing in the industry. When I decided to quit my program, I applied and interviewed at practically every cybersecurity company, at any stage, that I could find. I remember one of them was Iceberg. I was offered the chance to be employee number four, and Iceberg was a Madrona portfolio company as well. While I was looking around, ExtraHop really struck me, because back then, ExtraHop wasn’t in cybersecurity at all. It was in network performance analytics.

When I saw the demo of ExtraHop’s product, I saw so much potential, because what ExtraHop had in terms of potential is very similar to what police departments and state agencies discovered about traffic cameras. You initially have a lot of traffic cameras for monitoring traffic, but after a while everybody discovered how much more valuable information you can get out of traffic cameras from tracking, whether it’s fugitives or helping to identify other sorts of suspicious activities, so I really saw that opportunity, and ended up joining ExtraHop. Essentially helping ExtraHop to build and pivot from a network performance company to a network security company and, along the way, built ExtraHop’s AI/ML and detection product from scratch, and really spent a lot of time working with ExtraHop customers in understanding how security teams actually work.

Vivek: How did you think about even joining a startup or a scaling startup back then? Obviously, given your interest in security, you probably could have looked at Palo Alto Networks, Fortinet, or a much larger platform. What attracted you to a startup at the time?

Edward: While I was in college, I came across a couple of blogs talking about the founding journeys of different security startups, and I think those really struck me and got me excited and interested to eventually start my own company. While I was looking for my first job out of college, the number one criterion was the opportunity to learn how to build a startup someday for myself. When I interviewed with ExtraHop, and I met ExtraHop co-founder and CEO at the time Jesse Rothstein, I told him, “Hey, the reason I’m looking at startups is I want to start my own company someday,” which is great foreshadowing for when I told him I was going to resign and start my own thing eight years later.

Vivek: So, he couldn’t act shocked, because he would’ve known eight years from before.

Edward: Correct, correct. Back then I was looking for the opportunity to learn how to build a product from scratch, and that’s kind of where, between the choices of ExtraHop and Iceberg, I picked ExtraHop, because it was a little bit more mature. I could learn from the existing lessons and the potholes ExtraHop fell into, and then dug themselves out of.

Vivek: It sounds like you had that kernel of idea in your head, from early on, that you wanted to start your own company. Before we get into the aha moment that led you to founding Dropzone, would you suggest to other founders that it’s helpful to spend time at a company? Even if you had that idea early on in academia, thinking about starting a company, would you suggest it’s good for founders to go and spend a number of years at another startup to learn, or how would you think about that journey that founders have to go on before they start their own business?

Edward: At least in my experience, I believe that if you’re going to start a B2B company, it’s vitally important to work somewhere first, because you’ll have the exposure to how B2B actually works. I think there’s a number of, you can say whether it’s processes, or structures, that all B2B companies have to go through, and by working at an established organization, it teaches you what good engineering looks like, what good customer success looks like, what good marketing looks like, and what good sales look like. All of these will become tremendously important when you do start your own B2B company.

Vivek: So, now you’ve been at ExtraHop for eight years, you’ve learned good marketing and good sales, you’ve seen this journey, and you’ve obviously had this idea in your head for eight years that you want to go found your own company. What was the aha moment? Walk us through the idea you had in your head. Where did you see the opportunity that led you to actually go out, leave ExtraHop, and found Dropzone?

Edward: The biggest thing was, while I was at ExtraHop, I had been keeping track of industry movements and trends, because I know the only way I could found my own company someday was by looking for the next big thing. During my time at ExtraHop, I had done a lot of analysis and paid attention to every single RSAC Innovation Sandbox, as well as other movements within cybersecurity to see, “Okay. What are other people building?” And if I were to be an investor, would I invest my money or time, right? Because as a founder, to some extent, you’re also an investor.
You’re investing in the most precious resource you have, which is your time. I’d been doing a lot of that for years. Then, when GenAI came around, that got me excited, because for the first time I saw an idea where we could tackle one of the holy-grail unsolvable problems within cybersecurity by leveraging this new technical catalyst. That combination of a very concrete, universal pain point and a new technical catalyst, which essentially means there was no way to solve this problem previously, makes starting a new company a lot easier, because you don’t have tons of incumbents to deal with. All of these factors combined were the reasoning behind my departure.

Vivek: You bring up a good point, and I think many of the founders that listen to this podcast and that we work with, over the last few years, after ChatGPT came out or after Transformers really became a big thing, also said, “Hey, there’s an opportunity in AI. I want to go found a business.” You mentioned that, if it wasn’t for AI, or the current versions of AI that we have, some of these problems likely couldn’t have been solved in security. Maybe just take us through that. What, specifically, were you seeing in this intersection of AI and security that said, “Hey, there’s a technical change. Something is different now that’s going to unlock problems that we couldn’t unlock before,” and then maybe you can tell us a little bit about how that led you to what your core focus is at Dropzone today.

Edward: For people who are not familiar with security, one of the biggest challenges within cyber security today is the ability to process all the security alerts. To some extent, it’s actually a very similar problem to modern day police departments, which is they have all sorts of crime reports, but not enough detectives to follow up on every single report. This is kind of where, historically, it has been a very difficult problem to solve, because the act of investigating security reports and alerts requires tons of human intelligence.
You cannot hard-code your way through an investigation process, because when a security analyst is looking at security reports and alerts, what they’re going through in their head is a very detective-like, recursive reasoning process, so that has been one of the biggest bottlenecks within cybersecurity. There are a couple of workforce reports out there that estimate the world needs around 12 million cyber defenders today, and there are that many job postings out there, but the actual workforce is only around 7 million. So there’s a shortage of roughly 5 million cybersecurity analysts or defenders that the world needs to truly protect itself, and unless somebody invents cloning or some sort of mind transfer, some sort of software-based automation seems to be the only other solution.

Vivek: As you say, there is a shortage in the number of security practitioners that can do these kinds of things. It’s interesting, because I feel like in this first wave of AI, we saw a lot of companies going after, “Hey, there’s this intersection of AI and security. Let’s just go secure the models, or let’s think about the models themselves.” It seems like what you were thinking about is there’s an existing workflow today that is understaffed, and that’s where we see AI actually helping. Had you worked with these practitioners before, in your time at ExtraHop? Had you seen these problems of alerting and alert fatigue, and how do we actually get AI to solve problems where we don’t have enough people to scale and solve these problems?

Edward: To some extent, what I did at ExtraHop was probably one of the reasons why security practitioners are overwhelmed by alerts, because what I built at ExtraHop is a detection engine: it looks at network telemetry and identifies suspicious activities. User A uploaded five gigabytes of data to Dropbox. User B established a persistent connection with an external website for 48 hours. User C SSH’d into the database. All of these security alerts take time to investigate, and those are exactly the type of alerts that historically have overwhelmed security practitioners.
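Detection rules of the kind Edward describes can be sketched roughly as follows; the telemetry field names, thresholds, and sample events are made up for illustration, not taken from ExtraHop’s actual engine:

```python
# Toy detection rules over network telemetry events, in the spirit of
# the alerts described above. Field names and thresholds are invented.

GB = 1024 ** 3

def detect(events):
    alerts = []
    for e in events:
        if e.get("dest") == "dropbox.com" and e.get("bytes_out", 0) > 5 * GB:
            alerts.append(f"{e['user']}: large upload to Dropbox")
        if e.get("proto") == "ssh" and e.get("dest_role") == "database":
            alerts.append(f"{e['user']}: SSH connection to a database")
        if e.get("duration_hours", 0) >= 48 and e.get("external", False):
            alerts.append(f"{e['user']}: persistent external connection")
    return alerts

events = [
    {"user": "A", "dest": "dropbox.com", "bytes_out": 6 * GB},
    {"user": "B", "dest": "203.0.113.7", "external": True, "duration_hours": 48},
    {"user": "C", "proto": "ssh", "dest_role": "database"},
]

for alert in detect(events):
    print(alert)
```

Each rule here is cheap to fire but expensive to investigate, which is exactly the asymmetry that produces the alert fatigue Edward goes on to describe: deciding whether any one of these alerts is benign or malicious requires the recursive, detective-like reasoning that only an analyst, human or AI, can do.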

So, to some extent, my work in the past eight years has contributed or maybe partially caused some of the alert fatigue and overload, so I’m definitely intimately familiar with this particular problem. The way you said when genAI came along, a lot of people had this idea, “Oh. Let’s just secure the models,” my train of thought is very similar to a post I saw on Twitter, which says, one way you can think of genAI is, essentially, we as humans are discovering a new island where there are a hundred billion people with college-level education and intelligence, willing to work for free. We just talked about this huge staff shortage in cybersecurity, so why don’t we take those a hundred billion people with college-level intelligence, willing to work for free, and have them look at all the security alerts and help to improve the overall cybersecurity posture?

Vivek: You have this great term that you were describing to us, Dropzone is having a number of interns or having a whole new set of staff. How do you describe it?

Edward: If we were to zoom out, we view Dropzone as essentially a software-based staff augmentation agency for cybersecurity teams. What we’re building is, essentially, you can say AI agents or AI digital workers that work alongside the human cybersecurity analysts and engineers to allow security teams to do 5X to 10X more than what they’re capable of doing today, but without 5X or 10X of the budget or headcount.

Vivek: You’re primarily selling to CISOs, Chief Information Security Officers, but the actual practitioners using Dropzone tend to be folks in the security operation center, right? Who are usually the people using Dropzone on a day-to-day basis or interacting with it?

Edward: The primary user of our product are essentially security analysts who work in SOC or security operation centers, and are responsible for responding to security alerts and confirmed breaches.

Vivek: Going back to one thing you were saying before, which was that the nice thing about building when there’s a new tech change, like what we have with AI, is that you don’t have these incumbents, right? Or the incumbents tend to be a little bit slower to move, or they’re more reactive. In this case, you can build a net new business, and you can help create a category. One thing you and I have talked about is that this is such an obvious problem, in the sense that every large company or mid-market enterprise company has an understaffed security operation center.

A number of startups have sort of popped up and started to build what they call AI SOCs or agents for the SOCs, and so, if we zoom out, how do you view this landscape, how do you view this category where, on one hand, it’s a total validation of the market, saying that something like this needs to occur because people clearly want this product. On the other hand, it’s like, “Okay. Well, how am I supposed to disaggregate and decide between 10 or 12 competitors that all maybe look the same on the surface?”

Edward: If you were to zoom out, the market Dropzone operates in, the AI SOC analyst market or autonomous SOC platform market is probably the single most competitive market within cybersecurity today. Like you said, one challenge is the intersection of cybersecurity and AI is tremendously interesting. The alert investigation use case, to some extent, is kind of an obvious use case a lot of people can see. The way we think about competition is actually not as different from all previous generations of the startups, which is having a lot of competitors is great validation for the market, but the reality is most startups or most players are not going to be successful for a variety of different reasons.

So, to some extent, it’s not a competition in terms of who gets the highest grades. It’s actually a competition of who finishes the marathon. So from our perspective, when we think about competition, a lot of it has to do with how we could do better. How can we ensure that we’re delivering real-world, concrete value to our end users? Because we know we’re solving a very large problem, with a lot of need and a very large market, we don’t need to worry too much about our competitors right now, because frankly most of them are still pre-product at this moment. Our focus is solely on: can we sign up 1, can we sign up 5, can we sign up 10, 20, 50 paying customers who are getting real-world value out of our technology? As long as we can do that, the success will come, regardless of what our competitors do.

Vivek: So, focus. You just have to focus, focus on your customers, and make sure that you’re delivering a product and experience that they really like.

Edward: Yeah.

Vivek: You could say this about other areas of security in the past too, right? I mean, endpoint security 10 years ago was a very hot category, and it’s created several multi-billion-dollar companies: CrowdStrike, SentinelOne, and others. As you say, the reason there are so many competitors is that people clearly see there’s a lot of value in this market. But think about the ecosystem of existing security tools: if you go to RSAC, you’ll see 1,000 booths, and everyone has a booth. So, outside of even the AI SOC space, in security in general, as an early-stage startup that’s not as much on the map as some of these incumbents, what are the things that you find are valuable to have customers recognize you and think about you? What are some of the tips you have for other founders in a crowded market on how to stand out?

Edward: The biggest learning we’ve had so far, on the marketing front, is making sure you are very precise in how you describe yourselves. Cybersecurity is so fragmented that if you say, “Hey, we are using AI to solve all the problems with cybersecurity,” that’s not going to work, because there are too many vendors out there. Instead, you need to be very focused in your messaging and positioning, so the prospects or security buyers can immediately tell where you fit in the larger security ecosystem. There are no security teams that only use a single product.

Most security teams have 5, 10, 15, 20 products. It’s very important to be precise so people don’t conflate you with other products, and they can immediately understand what you’re trying to do. That’s where you mentioned RSAC. I always love RSAC, and I love walking the expo floor, because I find it to be a really good opportunity to level up product marketing. When you walk through the expo halls and see 1,000 vendors, you can really quickly tell who has good product marketing, because every time you walk past a booth, you might have five seconds before you start looking at the next fancy, shiny booth.

Within those five seconds, you can immediately tell what they’re doing, or you’re confused, like, “What is this thing?” I think that’s a great exercise. I know I, myself, have been doing this, and I’ve encouraged a lot of folks in my company to do it as well, to really make sure our positioning and messaging is very clear so people can immediately tell what we’re trying to do, versus some panacea AI magic.

Vivek: Well, there’s a lot of those. Now that we’re a few years into this post-ChatGPT wave, we’ve seen so many of these vendors that say they do AI security. If you go to the last two RSA conferences, all you would hear is AI, AI, AI, but then what are you delivering to customers, right? And so, in that way, I think it’s really helpful to hear from you, Edward, about how you all landed UiPath as a customer. Really impressive, and they’re obviously a very discerning and sophisticated business themselves. Take us through that journey. How did you land UiPath? What went into that? Are they finding value from Dropzone today?

Edward: UiPath, one of their security engineers reached out to me personally on LinkedIn saying, “Hey, I saw a Dropzone somewhere. It seems you guys are doing interesting stuff. Can I get a demo?” And then, we kicked off the POC, where the end goal of the POC is to evaluate how much time saving we can create for their security team, because UiPath is growing very quickly, and unsurprisingly their security budget is not growing linearly compared to the overall headcount. As a result of that, during the POC, we worked with UiPath very closely to, not only make sure our product is automating tasks that allow their security engineers to essentially get higher leverage, but also working with them to align on the future roadmap of the product.

They’re not only buying us for what the product can do today, but also what the product can be three months, six months down the road. That’s very interesting, because most of the time it’s a founder reaching out to 1,000 people, pleading, begging for a demo, not the other way around. I think we have a very large chunk of our customers and active prospects come from organic inbound. Part of that is because, echoing my previous point, having really good positioning and messaging, and also very transparent product marketing, allows security buyers to find you, versus you trying to push on a rope and force the product down people’s throats.

This is where we made a very conscious effort and a strategic decision to be very transparent. For example, our entire product documentation is public on the internet. We have over 30 interactive recorded product demos, as well as an un-gated test drive and fully transparent pricing. We allow interested early adopters within the security community to complete essentially 80% of the buyer journey without talking to us, and that allows us to get high-quality hand-raisers who have already, to some extent, self-qualified themselves and know they want to try this technology.

Vivek: I love the point you made about being very transparent and open, and that’s not common in security, right? There’s a lot of closed-door selling, and you never really know how deals are done. I’m sure there’s a new generation of buyers that wants that transparency. What led you to stray from the path of what we would call normal in security and be more transparent than the norm?

Edward: A lot of it came from my time at ExtraHop. While I was there, I really advocated for an interactive online demo. Back then, ExtraHop was probably the only security vendor in the entire detection and response space where you could access an un-gated interactive demo of the actual product, not a recorded video. I saw how much additional credibility that marketing tactic created, so I decided to bring that approach to Dropzone as well.

Vivek: Well, one last point on this: as I’m sure you’ve noticed, CISOs are sold a lot of bad products. We have a CISO Advisory Council here at Madrona, and the one thing they’ll say is that they’re just inundated with products and a lot of inbound. With this transparent marketing, being able to show the demo and show the value, is there another step that needs to happen for you to bridge that gap and have them come and say, “Hey, take a look at our product”? Is that an evolution? How do you think about the push versus pull nature of what you’re selling and how CISOs are typically sold to?

Edward: I think it’s definitely a combination of the two. Generally, what I’ve seen within cybersecurity is that most startups initially are in a push market, because there’s no category awareness. Most security startups solve a problem that’s more or less obscure to the general public, so they need to do a ton of evangelization. I would say it’s a little bit easier for us, because the problem we solve is one of the most universal, concrete, and well-understood problems within cybersecurity. It’s just that nobody has been able to come up with a technical solution, so that definitely makes our lives a lot easier. To some extent, we don’t really need to evangelize the problem we solve, because it’s been there for 20 years, and every single team experiences it every single day.

Part of getting security teams to raise their hand also has to do with the overall macro environment. For example, people have heard of the Stargate project, $500 billion of investment, as well as DeepSeek and all sorts of interesting reactions from different vendors when they start to see real competition, as well as genAI becoming real. I would say that played a big part in our marketing tailwind, because now it’s very common. I’m sure you’ve been saying the same thing to your portfolio companies, right? Regardless of what kind of business you are in, I want to know why you are not using genAI in every single business function. That’s a question every single board has been asking executives. When that trickles down to security teams, alert investigation and software-based staff augmentation for the SOC are generally among the first places people look.

Vivek: To your point, we’re seeing with our own companies, and with the customers of the companies we work with, that everyone is saying they’re using AI, but they don’t want to use AI foolishly. They want to be smart about how they use it, and to your point, in the security space, it’s hard to just bolt on AI and say, “Hey, let’s walk away,” right? Security is security. It’s a very important piece of both the application and infrastructure sides of businesses. So being able to already have that pull from the SOC team, saying, “We’re already drowning in alerts. We need help. However you can help us is going to be important,” and then coming in and executing against that, I think, is really interesting.

Edward: Absolutely. I think ChatGPT is probably the biggest marketing gift OpenAI has given to all these genAI startups, because it enlightens everybody, whether they’re technical or non-technical, on the potential and capabilities of genAI. I remember getting calls from my parents, asking, “Hey, Edward. You have been doing AI stuff for eight years. This genAI thing looks very cool. Why don’t you go build a stock trading thing using this technology?” Because of that, I think a lot of security practitioners started to play with this technology themselves.

We have seen a good number of open-source projects, and a good subset of the prospects we run into will say, “Hey, Dropzone seems very cool. By the way, we have been internally playing with GPTs and trying to build our own open-source AI agents that automate small stuff within cybersecurity, so we know the technology can get there. But at the same time, as a security team, we’re not a hundred percent developers. This is not our specialization.” They already have confidence in the technology; all they need to find is a reputable, trustworthy, actual technology solution provider. That, again, makes it a little more pull-based marketing, versus trying to push a rope.

Vivek: Yes. Well, you can tell your parents that, “Hey, you may not be building a stock trading app, but stock trading apps can use Dropzone,” which is really cool.

Edward: Correct, yeah.

Vivek: I’m going to transition into some rapid-fire questions we have for you. Edward, you’ve been a founder for a couple of years now. You’re both a solo founder and a first-time founder, so what are the hardest-learned lessons that you’ve had so far? What is something that you wish you knew or wish you did better on this early journey of yours?

Edward: Probably the biggest thing, and surprisingly so for a solo, first-time founder with an engineering background, is that I wish I had learned more about sales before I started. One common misconception technical founders have is that as long as we build the best product on the planet, people will magically come to us. But that’s definitely not the reality. In fact, that couldn’t be further from the truth. So, sales is actually very important.

To be frank, while I was at ExtraHop, I obviously had a number of engagements with customers, but one thing I always wanted to do there, and never got the chance to, was work part-time as a sales engineer for six months or so. I always had that idea in the back of my mind, but founding Dropzone forced me to learn how to be a sales engineer and an account executive. Those skills are tremendously important, because if a technical founder cannot sell the technology or product, with all their vision, enthusiasm, and in-depth product understanding, then nobody else can. Sales capability, knowing how to use different techniques, how to qualify customers, and how to run a good sales demo, are the key skills I wish I had before I got started.

Vivek: Great point. Sales is so important. It doesn’t matter what your product or business is, sales is very important. What is something you believe about the AI market that others may not?

Edward: One thing I believe about the AI market is that distribution is going to be a very important factor, and I think most people probably underestimate the power of human trust and how much it plays into the overall business ecosystem. I’ve seen a number of startups trying to build technologies that completely substitute for certain roles and responsibilities. From my perspective, there are roles where the technical deliverable is maybe a fraction of the value proposition, and the other fraction is human trust, human responsibility, and accountability.

AI startups are looking at different industries and verticals, trying to identify insertion points for AI agents. I believe we should be very respectful of that fundamental human trust; the case for automation by itself is not completely obvious. That’s one of the reasons I suspect software engineering will see more automation than, for example, account executives, because nobody is really going to build a relationship with an AI agent posing as an account executive. That human relationship, that trust-building channel, is something I think is a lot more difficult for AI to substitute for.

Vivek: Well, we see this when you’re driving down the 101 and you see billboards for multiple AI SDRs. Which do I go with, right? Who do I have a better relationship with? I’m not sure right now. But outside of Dropzone, or even outside of security, what company or trend are you most excited about?

Edward: Probably robotics. Part of it is that I love watching anime, and there are a number of anime series about future societies with all sorts of cyborgs and humanoid robots. I think those are all very cool. But part of it is also maybe a little self-serving, because obviously, as a cybersecurity vendor, I think the more robots there are around us, the more important cybersecurity will become.

Vivek: Last question. This will be an easy one for you. There’s a 90s movie with Wesley Snipes called Drop Zone. Is the company named after that movie, or what was the basis for calling the company Dropzone?

Edward: I’ve actually never heard of that movie, so maybe I should check it out, or maybe ask ChatGPT about it. We named the company Dropzone because we envision a future where we have the resources and the need to sponsor a Super Bowl ad. We want the ad to involve a scene where cyber defenders are surrounded on a hilltop, overwhelmed by attackers, and then the defenders deploy Dropzone, which in my mind is some sort of portal, a Stargate or warp-gate kind of construct. Through that portal, they can summon additional reinforcements to help them push back the attackers. So we named the company Dropzone because we view it as a portal for, you could say, software-based staff augmentation for cybersecurity teams.

Vivek: Love that. Well, thank you so much, Edward. We really appreciate it.

Edward: Great to be here.