Startup to Scale: A Mini Masterclass in Efficient Growth and GTM

 

 

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

In the latest episode of Founded & Funded, Madrona Managing Director Tim Porter hosts Pradeep Rathinam, a seasoned software executive with over three decades of leadership experience. Paddy was the Founder and CEO of Madrona portfolio company AnswerIQ, which Madrona invested in back in 2017 — one of the early companies Madrona backed that applied machine learning to SaaS. Paddy sold AnswerIQ in 2020 to Freshworks, where he started out as chief customer officer, helped take the company public, became chief revenue officer, and drove a series of incredible accomplishments for Freshworks.

Today, Tim and Paddy dive into the world of SaaS go-to-market. Paddy shares his experience on how to not just grow a company, as every company needs to do, but how to grow efficiently — reducing churn, expanding accounts, and landing new logos. These are things every go-to-market leader and startup founder has questions about how to unlock, so it is a must-listen for founders.

This transcript was automatically generated and edited for clarity.

Tim: So before we jump into the nitty-gritty of efficient growth, tell us a little bit about the AnswerIQ story. You founded the company, built it up, and sold it in a relatively short period of time. Give our listeners some insight into what AnswerIQ was and why you decided to sell it.

Paddy: So perhaps the best thing I can do is start by introducing myself. So like you said, I’m old. I have three decades of experience.

Tim: I didn’t mean to highlight that. But I’m going on 30 years here as well. And it’s like, wow, there’s a lot of things that we’ve learned, isn’t there?

Paddy: The last decade is probably the most interesting for listeners here. I founded AnswerIQ, which was funded by Madrona — thank you. It’s a classic B2B SaaS startup focused on using historical records to make predictions in customer service. We were using very early versions of GPT 2.5. As I reached a pivotal point — is this business going to grow to $100M or more? — I came to the conclusion that being an AI layer sitting on top of customer service systems like Zendesk and Salesforce wasn’t necessarily going to get us to that $100M mark. So, in early 2020, prior to COVID, we sold our business to Freshworks.

Tim: Let me interject there quickly as we get into the Freshworks part of the story. I should mention that the AnswerIQ investment was led by my partner, Soma, and you had known Soma from Microsoft through the journey as well.

Paddy: That’s right.

Tim: In making that decision, was there strong alignment with the board? How did you work through that? Sometimes there can be a disagreement. Classically, the investor’s like, “We’re going to go for it,” or it could be the reverse. Was there a good alignment in that thought process that you alluded to?

Paddy: There was very good alignment. Soma was an incredible board member and a dear friend. One of the things I learned in that experience was the simplicity of the questions that he asked: “Hey, where do you see this business in five years? Do you see this business growing to a hundred million or a billion dollars? Do you see the path?” Once those answers became clear, there was alignment with the board to get everyone together and say, “Hey, let’s go out and find a transaction.” Freshworks was heading toward an IPO. I wanted to experience an IPO, and I decided to jump on that bandwagon.

Tim: Freshworks, tell us that story.

Paddy: I joined Freshworks in 2020 as chief customer officer. In those days, churn was the biggest challenge the company was facing, and it was one of the key deterrents to its IPO and to net dollar retention. That was the first challenge I took on. I built out the customer teams, and by the end of three years, we had reduced churn by over 40%. Post-IPO, in 2022, our growth had slowed down significantly. We had invested heavily in our field sales force, increasing our investment by over 50%, yet we saw a decline in new business in the field. I took on the challenge of the CRO role to see how I could turn that tide. That was also an incredible year of sales transformation.

In the end, it comes down to, like you said, why do founders stay or leave? For me, it was about what a mentor once told me — that being professionally rich is about the wealth of experiences you accumulate. It was about taking advantage of the opportunity to garner those experiences and learn from them.

Tim: When companies that I work with exit, I always encourage the founders to stay on: as long as you are learning a ton and you can have a big impact, it’s a great opportunity. Yes, we’re in the startup business and it’s exciting to say, let’s go do it again, let’s run it back. But the notion of not just growing but growing efficiently has really been driven home. For those of us who’ve been working multiple decades, yes, businesses must be profitable and grow. Both are equally important.

There was a period leading up to 2022 where it was sort of grow at all costs, and you led through that shift of, “No, we also need to be efficient.” When you’re a public company reporting every quarter, it’s really glaring, and people are pushing on these things. But it’s incredibly important as a startup founder too. I mean the obvious: how long is my cash going to last? And so driving runway. I think efficiency through the full life cycle is critical in the market and everyone realizes that now, but it can be hard, especially as you’re also trying to significantly scale. Talk about your framework for thinking about efficient growth. What goes into that? What does it mean to you?

Paddy: Freshworks is a company with over $600 million in revenue, creating SaaS products for IT service management, customer service, and CRM. With over 60,000 customers, we had three sales motions — PLG inbound, field outbound, and partner-led — and we were selling to three distinct buyers: a customer service leader, an IT leader, and sales and marketing leaders. It’s a complex business in a lot of ways, right? When we started looking at this business, it’s not easy to break it down and say, “Hey, how do you really solve for growth?” A year after the IPO, like I said, we reached a crisis point: we had increased our sales investment by 50%, and our new business in the field declined. All of these investments were essentially targeted.

Tim: That’s not efficient.

Paddy: Basically, all of them were targeted toward field investments. We wanted to grow our ARR and ACV. I took on the challenge of the CRO, and one of the things I had to do was have a framework for how to solve this problem. The first thing we did was a deep analysis. I truly believe that growth for most companies lies within. If you’ve grown to a certain point — $10, $50, or $100 million — there’s a reason why customers have bought from you. You need to go down and really analyze it and be relentless in the analysis, whether it is by product, by geo, by SKU, or by sales motion, looking both at the economics — CAC (customer acquisition cost), LTV (lifetime value), and those types of simple metrics — and at the mechanics: what is going on with win rates? What’s going on with MQL-to-close and the go-to-market efficiency and engine?

Really looking holistically at the problem and saying, “How do we break this problem down to see what’s going on in the business?” When growth stalls in a company, the common view is that sales is broken — the go-to-market engine is not working or demand generation is not working. If you ask the sales folks, they’ll say, “Hey, the dog is not hunting.” The product isn’t really fit to what the market needs; it’s behind on competitive features when you compare its AI with other competitors’.
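To make that kind of segment-level analysis concrete, here is a minimal sketch in Python. Every segment name and figure below is hypothetical and purely illustrative — it only shows the CAC, LTV, MQL-to-close, and win-rate arithmetic Paddy is referring to, not Freshworks’ actual numbers.

```python
# Minimal sketch of the segment-level efficiency analysis described above.
# Every segment name and number is hypothetical, purely to show the arithmetic.

segments = {
    "field_enterprise": {
        "sales_marketing_spend": 6_000_000,  # annual S&M spend attributed to the segment
        "new_customers": 120,
        "avg_acv": 45_000,
        "gross_margin": 0.80,
        "avg_lifetime_years": 4,
        "mqls": 9_000,
        "competitive_deals": 400,
        "wins": 120,
    },
    "plg_smb": {
        "sales_marketing_spend": 1_500_000,
        "new_customers": 2_000,
        "avg_acv": 4_000,
        "gross_margin": 0.80,
        "avg_lifetime_years": 2,
        "mqls": 40_000,
        "competitive_deals": 2_500,
        "wins": 2_000,
    },
}

for name, s in segments.items():
    cac = s["sales_marketing_spend"] / s["new_customers"]              # cost to acquire a customer
    ltv = s["avg_acv"] * s["gross_margin"] * s["avg_lifetime_years"]   # simple lifetime-value proxy
    mql_to_close = s["new_customers"] / s["mqls"]                      # funnel efficiency
    win_rate = s["wins"] / s["competitive_deals"]
    print(f"{name}: CAC=${cac:,.0f}  LTV=${ltv:,.0f}  LTV:CAC={ltv / cac:.1f}x  "
          f"MQL-to-close={mql_to_close:.1%}  win rate={win_rate:.1%}")
```

The point is simply that the same handful of ratios, cut by product, geo, SKU, or sales motion, is what surfaces where the growth problem actually lives.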

Tim: Wait, sales blaming product? Yes, that happens. Product blaming sales? Yes, everyone gets into the blame game.

Paddy: It’s pretty common, but when growth stalls, it’s usually a structural problem that goes beyond go-to-market. Yes, there are go-to-market challenges, but beyond that, you have to look at the product mix — what’s going on with product-market fit, pricing and packaging, and competitive dynamics. It is a lot more complex than just looking at one dimension and saying, “Hey, we’ve got to go do a go-to-market transformation and we’ll find growth again.”

Tim: You inherited a complex go-to-market, three different segments, three different buyers, and three different product lines; that’s a detailed matrix. What was kind of the range of deal sizes? If that’s something you can share, so the audience can orient because there’s sort of the big enterprise versus smaller deal size overlay on this too. Then, the first thing I’m taking away is to really know the data and follow the data. What did you do from there as you dug in?

Paddy: When we dug in, we saw two or three common patterns. The reason our field new business had declined year-on-year was that the field was doing a lot of sub-$10K ACV deals. Imagine sitting here in the United States or in Europe or in Australia and building a sales force that is going out and selling sub-$10K deals. That was not cost-efficient, and it was primarily because our segmentation was broken. Our segmentation was such that any account with more than 250 employees belonged to what would be considered field. Looking at that data, the first thing was to break the mold and say, “Hey, we’ve got to stop selling to the smaller segment of customers from the field,” number one. Number two, we’ve got to sell at a minimum ACV threshold, and we defined that as $30,000, and that became the core ICP.

More importantly, it was all products. Imagine a sales force carrying all products and selling them to three different buyers. It’s super complex. One of the things we did was simplify the motion, saying, “Hey, we’re going to focus on the one product that is going to be the winner.” We focused on IT. That was a product growing at 50% year-on-year. We made the sales force learn that one product and moved all our other products to an overlay and specialist sales force. What this brought was focus — focus on IT, and on $30K-plus ACV. We had competitors like ServiceNow, Ivanti, and Cherwell, and we were winning in that marketplace. All we had to do was get the sales force focused on that.

Tim: The most important part of strategy is often deciding what not to do. That focus, I can see that being critical, simplifying things for your team. I often hear the conventional wisdom on what the minimum ACV is to be able to profitably sell direct with an outbound motion. Even at $30K, you have to be quite efficient, right? Sometimes I’ve heard $50K — it depends a bit — but while $10K seems very, very hard, even achieving efficiency at $30K probably took some real discipline and streamlining within the org.

Paddy: Yes, it does, because when you are coming from $10K deals, $50K and $100K deals felt out of reach for a lot of our sales folks. Part of this was also transforming our sales force, the sales leadership, and the first-level leaders. First-level leaders in sales organizations are the most critical path to really driving this. Once we got the ACV defined right, we started transforming the sales force, because no GTM transformation happens without people. Along the way, we started bringing in more experienced salespeople, and we started selling a lot of $50K and $100K deals. $30K was the threshold. The other aspect was that our business was hybrid — a lot of B2B SaaS companies still operate with hybrid sellers, hunting and farming.

The reality is hunting is significantly harder than farming, and a lot of sellers had really settled into farming, so we separated hunting from farming, and that brought a lot more precision and focus to the craft of hunting. We changed the incentive structures, so those were some of the things we did from a focus standpoint — basically saying, “Hey, focus on this ACV, narrow down.” The next thing we did was add a lot more fuel to the fire. When you have three products, your marketing budget and capital allocation tend to become evenly spread. We took our IT product from 40 cents on the dollar to 70 cents on the dollar, saying, “Hey, that’s our core growth engine.”

Tim: Double down on what’s working.

Paddy: Double down on what’s working. We saw that there were uphill challenges with our customer service product. Customer service as a landscape was going through a lot of change — Zendesk went private, and growth rates pretty much stalled for all our competitors. One of the things I’ve learned is, “Hey, don’t push uphill. If something’s got momentum, go double down on it.” The normal notion is to try to revive it: go fix the product and go back out and sell and market it. But a product fix takes 18 months, and product-market fit had changed. AI had changed the landscape of customer service — the expectation of productivity was higher, and the product itself had become more conversationally oriented. There’s a whole bunch of things that happened in the customer service space, and I think that decision was a really good one: “Hey, let’s make that an add-on product that we sell through a specialist or overlay sales force and focus on IT.”

Tim: Can we double-click just for a second on the change management aspect here? You looked at the data, you made these decisions, but then you alluded to the fact that it can be challenging to implement all that. It’s hard — I’ve seen it whether you have eight people in your go-to-market team or 800; certainly, more people can be more challenging. Any lessons learned from the change management, both with your peers around the exec table about, hey, we’re not going to do these things and kind of letting go, and then especially rolling out these changes to the field? You mentioned having to change out some folks. Was it a lot of rehiring? Was it training? Communication? What’s your advice on doing that change management effectively? I know a lot of companies are going through this right now.

Paddy: Change management is hard. I think a lot of people think go-to-market transformation is about swapping out people and leaders. The reality is you want to bring them along and give them an opportunity to scale up. From a change management perspective, we did all of these changes in 45 days, so we weren’t doing a great job of being gradual. What we did was give people a break for the first quarter and let them pick between being a hunter or a farmer. That was very, very important, because people suddenly felt like the goldmine of existing customers they had been selling to had gone away.

And so change management has to be gradual. It has to be thought through. There is no simple change-management principle I can give — the only thing I can say is to over-communicate, make sure that you bring people along, and set expectations that it’s going to take time for these changes to land. It took us one, two, three quarters — probably in the fourth quarter, at the end of 2023, we had the largest quarter in six quarters. North America did its biggest business. It takes time for all of these things to land, if you will.

Tim: It can be hard to get people selling $10K deals to sell $30K, $50K, $100K deals, but it wasn’t just a wholesale “we have to get rid of our existing sales force and add a new one” — you brought people along, retrained, communicated. And, sure, got the comp piece aligned, incentives aligned with the direction as well. That’s good to hear. Patience and intentional communication sound like the keys. That’s often the right answer, isn’t it? Even in our family lives.

Paddy: It’s easier said than done. People will always say, “Hey, this did not work well. The mindset is not right.” But overall, I’d say as we went past the six-month mark, there were no questions asked. Everybody was like, “Hey, this is the right strategy. Let’s just go and double down and win here.”

Tim: That’s the other thing. Once you start winning, that’s what aligns everyone. Everyone wants to win. You alluded to something in that description around hunting and farming. You also said it’s much easier to grow from your existing customers than to find new ones — that hunting is much more expensive than farming. That being said, new business is the lifeblood. It’s oxygen. You have to add new logos to then do the expansion pieces. We can come back to the expansion side of it, which is super critical. Especially in a down market, adding new logos is that much harder, but you were able to do both. Maybe talk about the new business piece. It’s the hardest and most expensive, yet you found ways to do it efficiently. How did you do that, Paddy?

Paddy: New business is probably the hardest part of any enterprise field motion. It’s not repeatable. It’s long. It’s expensive, it’s time-consuming. In the end, people will say there is a sales process that’s repeatable, but the magic of new business is hard. One of the things I talked about is focus. We focused on North America. We focused on winning in our ICP, which was $30K-plus ARR, competing with ServiceNow, Cherwell, Ivanti, and some of the other competitors. Once we put all of these dimensions in place, we had to make sure our recruiting changed so we got the right caliber of people. We changed our enablement, which used to take six months, down to one month. There’s a set of things you need to do to get your new business motion right. We changed the incentive structure for multi-year contracts and larger deals.

Tim: How did you pay on that? Did you pay for the full multiyear upfront or what was the from-to?

Paddy: There were kickers — kickers for multiyear — and one of the things we did was make sure targets were quarterly quotas. It was really about the fact that you could make a lot of money, but the idea of surviving only on hunting was a new notion at Freshworks. We made all of these changes. Then the other thing was, as soon as we put 70 cents on the dollar of our marketing toward IT, we started seeing some repeatability. Now, storytelling is very important. The alignment between sales and marketing is critical. You get that right, and then you start seeing momentum.

Tim: Did you own marketing as well?

Paddy: No, I did not own marketing.

Tim: Okay. So another tricky alignment piece was working with your peers who run marketing to drive this alignment.

Paddy: This is an area where I would say there shouldn’t be any daylight between sales and marketing, because goal alignment is important. You want to make sure everybody is talking the same language. Businesses are measured on quarterly revenue. Salespeople are running on quarterly revenue, but often you’ll find marketing teams being misaligned because they are on an annual incentive plan — an annual, corporate plan. The reality is an MQL (marketing qualified lead) is no good if it is not converting into business. So marketing leaders’ incentive alignment, in my view, ought to be much closer to the sales incentive alignment, on a quarterly basis.

Tim: That is a common one, where the MQL is elusive. You, of course, need top of funnel to get bottom of funnel, but you can have lots of MQLs that aren’t actually qualified. Did you pick sales opportunities? How did you align? What was the marketing-to-sales funnel metric that mattered at Freshworks?

Paddy: MQL-to-SAL (sales accepted lead) and then SAL-to-close, and the reality was MQL-to-close was also a number to really understand. In a SaaS business, these are very, very hard metrics to pin down. I do think that once you align the leadership teams on the same goals, you will start seeing a lot more teamwork and collaboration in these areas.
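For readers who want the funnel math spelled out, here is a tiny sketch with made-up quarterly numbers showing how MQL-to-SAL, SAL-to-close, and MQL-to-close relate.

```python
# Hypothetical funnel counts to illustrate MQL-to-SAL, SAL-to-close, and MQL-to-close.
mqls, sals, closed_won = 10_000, 2_500, 300   # made-up quarterly numbers

mql_to_sal = sals / mqls
sal_to_close = closed_won / sals
mql_to_close = closed_won / mqls   # equals mql_to_sal * sal_to_close

print(f"MQL-to-SAL:   {mql_to_sal:.1%}")    # 25.0%
print(f"SAL-to-close: {sal_to_close:.1%}")  # 12.0%
print(f"MQL-to-close: {mql_to_close:.1%}")  # 3.0%
```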

Tim: What about the lead-gen motion? I’m sure marketing was doing inbound programs where they’re out doing marketing things, generating leads that get handed off to sales. What about the outbound motion? Was there a BDR function, and were those people under marketing, or were they under you? That’s another place where I feel it’s six of one, half a dozen of the other — it can depend on the leaders in place — but it’s a place you can often get the alignment wrong.

Paddy: And I think we got it wrong. The BDR team belonged to my organization, the CRO organization. The reality is, if you align your BDR and marketing teams much more closely — if the BDRs are on that team — everything improves, starting with messaging, because when a customer picks up the phone or responds to an email, the language, the words, the nuances are where you learn your messaging, outside of talking to your existing customers about why they bought you and what the trigger points were. The reality is BDR (business development) teams actually have tremendous synergy with marketing, and putting them together — making sure that everything from MQL to SAL is well packaged together — is probably the better alignment from a sales and marketing perspective.

Tim: What was the dominant lead gen source? Was it the BDR function or was it inbound or was it pretty evenly split?

Paddy: It was the BDR function.

Tim: BDR function. So getting that tuned was critical.

Paddy: Getting that tuned was very critical. The BDR function is hard, especially post-COVID. It’s very hard to get people to pick up phones, and emails are hard to get read. So it’s about using new, innovative methods, trying to understand how to get better on LinkedIn — those are the types of things one would do. I still think that getting these two functions tightly aligned would really change demand generation. Demand generation is hard. It’s a crowded marketplace. It’s hard to get your message out there. I feel for marketing leaders, because simple storytelling is hard today. A big part of what’s missing in today’s industry is the ability to tell a story that a fourth-grader would understand — a compelling message that stays memorable, right? With SaaS, that’s very hard in a crowded category. Of course, if you are Copilot for GitHub, you’ve got it easy.

Tim: Let’s shift gears to retention. This is the other side — we were talking about new logos, but customer retention is something you have to do. It’s more profitable, and it should be easier. On the other hand, coming out of this period where every single dollar in the budget is under scrutiny from the CFO, it can be harder. I know with some of our companies in 2023, I said, “The theme this year, first and foremost, is retention.” Even with that focus, we have seen net retention rates drop across SaaS. If you look at the overall numbers, certain things like seats weren’t growing naturally, and consumption wasn’t growing as naturally. How did you think about customer retention? It’s sort of interesting since AnswerIQ was in this support and retention area, but it was critical for you at Freshworks. How did you approach that side of the coin?

Paddy: Freshworks had a unique challenge. On one hand, because we were selling to SMBs, churn tends to be natural. On the other hand, with 60,000 customers and a PLG motion, you tend to see higher churn rates in those types of businesses. What we did was start simply, and I have a very simple equation, which is retention = adoption + engagement + advocacy. These are the three core pillars, but it all starts with adoption.

Tim: I like that retention = adoption + engagement + advocacy. Let’s decompose that a little bit on those different pieces.

Paddy: That’s how I set up the entire team’s charter. The customer organization was thinking about this, but the most important thing for SaaS businesses is adoption. It all starts with product. Understanding product telemetry — picking up the right telemetry that tells you, as a business, whether the customer will stay with you for the long term — is critical. Most businesses, if you ask me, collect tons and tons of metrics but can’t make sense of what retention really means. We used a framework: let’s look at cohorts of customers by revenue size, by industry, by usage scenario, across all of these, and ask, what are the top five to seven features that a customer who has stayed with us for a long time is using? That’s what we called a package named Essentials.

We created a package called Advanced and one called Ultimate, which reflected deeper usage of features. Feature usage and adoption — not as a feature list, but in the usage scenario — is critical, and being able to see and understand that from telemetry was important. We told the CSMs and our customer success and account management organization: you’ve got to move customers from Essentials to Advanced to Ultimate. At Ultimate, we knew all the integrations were in place; we knew that when people were bringing data into our system, they were less likely to leave. There’s a framework you can think through, but it all starts with telemetry, and being strategic about telemetry is very, very critical today in the way you think about product adoption.
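A minimal sketch of what that telemetry-driven tiering can look like, assuming hypothetical feature names, customers, and thresholds (the real Essentials, Advanced, and Ultimate definitions came from Freshworks’ own cohort analysis, not from anything shown here):

```python
# Sketch of tiering customers by adoption of a small set of "sticky" features,
# in the spirit of the Essentials / Advanced / Ultimate packages described above.
# Feature names, customers, and thresholds are hypothetical.

ESSENTIALS = {"ticket_create", "email_channel", "basic_reports"}
ADVANCED = ESSENTIALS | {"automations", "sla_policies"}
ULTIMATE = ADVANCED | {"integrations", "data_import"}  # integrations and data import signal low churn risk

def adoption_tier(features_used: set[str]) -> str:
    """Map a customer's observed feature usage (from product telemetry) to a tier."""
    if ULTIMATE <= features_used:
        return "ultimate"
    if ADVANCED <= features_used:
        return "advanced"
    if ESSENTIALS <= features_used:
        return "essentials"
    return "at_risk"  # not even the essential features adopted -> churn risk

customer_telemetry = {
    "acme": {"ticket_create", "email_channel", "basic_reports", "automations", "sla_policies"},
    "globex": {"ticket_create"},
}
for customer, used in customer_telemetry.items():
    print(customer, "->", adoption_tier(used))   # acme -> advanced, globex -> at_risk
```

The design choice worth noting is that the tiers are defined by usage scenarios observed in long-retained cohorts, which is what lets the customer success team treat “move them up a tier” as a churn-reduction play rather than only an upsell play.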

Tim: That’s fascinating. I think most of the companies I work with realize, hey, we need to have this telemetry, otherwise we’re flying blind on whether the customer is actually using the product — and the product, of course, has to be compelling enough to draw people in, et cetera. But the packaging, the SKUs, as a way not just to drive more monetization but actually to drive more usage — that’s a really interesting insight.

Paddy: It was not only a usage strategy, it was an upsell strategy too. It was a little bit of both, because we knew if someone was in Advanced or Ultimate, they would actually move to higher-level plans of our product. The second part of it was onboarding.

Tim: If you don’t set people on the right course, it’s hard to save them over time.

Paddy: Absolutely. The first hour in a PLG SaaS business, the first day in a SaaS business, is important in a trial period, but the onboarding experience you offer and what you learn from it is the Achilles’ heel of a SaaS business, because some businesses take 30 days for time to value and some take nine months. The reality is you’ve got to understand what the customer is asking for, what you are setting up for the customer, and feed that back into the product team. That’s where self-service onboarding can become significantly better. We created a package for digital adoption where we offered six hours of onboarding for free to 15,000-plus customers. It was massive in our churn reduction initiative.

Tim: Before that digital pack, was this not offered or did you charge for it?

Paddy: It wasn’t offered. It wasn’t thought through. A big part of this was to create a simple package: once you had Essentials and Advanced defined, go take Essentials, get it done with customers, and we knew the customer had a higher likelihood of staying with us.

Tim: Do you do any free trial or POCs before people decide to adopt? And at what point did you layer on this digital pack for the kind of free onboarding training?

Paddy: All our products have a free trial, and then for field and enterprise deals, you take them through a POC. The reality is “go live” is a misnomer. Usually, it’s a point-one release, an MVP, and there’s a lot of work to make the product work after that, so understanding those pieces was critical from the churn perspective. Onboarding was a critical path in making it very easy as well as doing it at scale. That was one of the things that helped us reduce churn.

Tim: Onboarding needs to start at the free trial or POC, or you probably don’t convert to paid. But also, once they’ve converted to paid, that “go live” is a misnomer too, because you have to keep making them successful.

Paddy: That’s right. Seeing value takes a lot more than just the go-live; there’s more work to be done. The third area was around customer service and support. Think about it — it’s a goldmine of signals, of ongoing touch points with the customer. This is where you see customers reach out because the product is not working in a certain way, or certain things don’t work even when developers think it’s a simple feature — why can’t the customer use it? We learned a lot from the contact rate. Reducing contact rate, understanding contact rate, and getting the right codification on it is critical from a churn perspective.

If you get this right — we had an amazing partnership with the product team where we said we wanted to bring down the contact rate by 5% every quarter for the top contact codes. Contact coding also requires a lot of strategic thinking in terms of how you do the coding, so that you can take that feedback back to the product team. All three of these areas are core to what I call the product- and telemetry-related side of adoption.
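As an illustration of that contact-rate discipline, here is a small sketch with invented support volumes and contact codes that computes an overall contact rate and a 5%-per-quarter reduction target for the top codes:

```python
# Hypothetical sketch of contact-rate tracking: contacts per active customer,
# broken down by contact code, with a 5%-per-quarter reduction target.

active_customers = 60_000
contacts_by_code = {          # made-up quarterly support volumes per contact code
    "billing_confusion": 4_200,
    "setup_help": 3_100,
    "bug_report": 2_400,
    "how_do_i": 5_600,
}

total_contacts = sum(contacts_by_code.values())
contact_rate = total_contacts / active_customers
print(f"Overall contact rate: {contact_rate:.2f} contacts per customer per quarter")

# The top contact codes get a 5% reduction target for next quarter,
# which becomes the shared goal with the product team.
top_codes = sorted(contacts_by_code.items(), key=lambda kv: kv[1], reverse=True)[:3]
for code, volume in top_codes:
    print(f"{code}: current={volume}, next-quarter target={volume * 0.95:.0f}")
```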

Tim: To summarize the first part of your retention equation, adoption: first, the product has to be wired so you have the right telemetry in place, and you use it. Second, you have to get onboarding right to really get customers to adopt. Then there’s this ongoing contact rate, which is probably a segue to engagement — the next part of the equation for how you solve retention.

Paddy: Right. Engagement is an overused term in the industry. People do QBRs with the person they meet on a weekly basis, and they talk about the health of the system and what have you. The reality is engagement really works when you have a strategic relationship with an executive on the other side. One of the things we did was enforce something called the executive business review. In it, we wanted to ask the business leader on the other side simple questions: “Hey, here are the best practices. Here’s what other customers are seeing from the product and how they’re benefiting from it.” Then ask the straight question: if you were asked today, would you renew or not? There’s no point in asking a lower-level stakeholder. You want to take this to the right level.

Executive business reviews are hard. Generally, you will not find a lot of sales leaders, account leaders, or success leaders able to go and say, “Hey, I want to talk to a C-level person on the other side,” even though the conversations are simple. The conversations are: are you doing something to improve their customer experience? Are you doing something to reduce their cost? Are you doing something to improve their revenue? And if it’s none of these three, then what is a practice their competitors use that you can take to them? That is where the executive business review really works.

We had the top 100 customers adopted by execs — every C-suite person inside the company had to own 10 customers. We also had a voice-of-the-customer session once every month. We would get three customers to come and speak to the management team and just give their verbatim feedback, without any coaching. Those were signals we would learn from, from the engagement perspective, to understand how customers in the top tier, mid-tier, and bottom tier perceive their relationship with the company.

Tim: That executive buy-in — it sounds obvious, but it’s so critical. I’ve seen so many times where, hey, the customer’s using it, the daily champions love it, but then you get the legs cut out from under you at renewal time because the execs didn’t see the value, cut the budget, or were being sold to by a competitor coming in top-down. When you talk to teams, it’s like, “Are the executives bought in?” “Oh, yeah, yeah, yeah, they are,” right? Because you might just be listening to your champion, and they might not actually have that buy-in. So this point about enforcing that discipline and creating mechanisms to ensure those EBRs happen — I think that’s really good advice.

Paddy: That’s right. Companies tend not to focus as much on this, but I do think that being very disciplined about the executive focus is a sure-shot way to make sure you have customers who are happy.

Tim: We’ve been talking about change management and org alignment, and there’s an interesting question on your side too. You referenced different execs across Freshworks who owned customers, but there’s also always this question of: was this an account management function? Was it a customer success function? Did you have both? How did those orgs dovetail to accomplish this set of things?

Paddy: We did an interesting exercise, and one of the questions was, how do we put the customer at the center and have these teams work with each other? If you look at what typically happens, it’s a blame game: “Hey, this is not an expansion opportunity, this is a retention opportunity.” The CSM says it’s a renewal opportunity, and the account manager takes it. What we did was build a pod. We put the account management and CSM functions on the same set of customers. The account manager focused 70% on expansion and 30% on churn. The customer success manager focused 70% on churn and 30% on expansion, and they worked like magic. Now, with the customer at the center, EBRs and QBRs were happening and adoption packages were being sold in. These were structural changes, and they make sense: the customer is at the center, so why have two teams in two different organizations working with the customer?

Tim: Okay, last piece of the retention equation: advocacy. I would guess that’s the one where it’s like, okay, what does that mean exactly? I kind of get that they’ve got to use it and we’ve got to engage over time, but what is the advocacy piece, and how does it bring it all together?

Paddy: Advocacy is the network effect. Customers like to talk to other customers, because being in a community is also a sense of well-being — being part of something, learning from others’ experiences. Advocacy became one of the goals: if we could get a company to publicly say they’re using us and this is the benefit they’re deriving, the CSM would get incentives and points for that. Driving that became a critical factor. One of the best things we found with advocacy was that when it worked well, it brought us a lot of net new customers. When new-business prospects came into some of these forums and saw other customers talking, that was one of the magical moments for them to say, “Hey, I think these guys are transparent. They’re talking about the product, and the company’s listening” — and the magic happens for new business there.

Tim: Retention = adoption + engagement + advocacy.

That’s good to remember. By the way, Paddy has published some blogs on these learnings, so we can point to those later — you can go read more and learn about some of the details. Let’s move to one other key area, and you alluded to it. There’s new logo, there’s keeping the logos you have, but then how do you expand within them? When you were talking about account management and CSM alignment, you teased the expansion question. There are different ways you can expand in accounts — there’s upsell, there’s cross-selling other products, SKUs, et cetera. What was your approach to driving expansion while keeping customers happy at the same time at Freshworks?

Paddy: First principles: expansion can’t happen unless you have deep product adoption, you have advocacy, and you have a happy customer. Going back three months before renewal and saying, “Hey, I want to expand my revenue or your plan,” is not necessarily the best path to expansion. Expansion, in my view, is one of the areas where product managers and product leaders have to be very, very strategic about what the roadmap for growth from an existing customer looks like. It’s not so much an account management function as it is the set of SKUs, the offerings — how do you package AI beyond Copilot?

Those are the types of things product leaders need to think through, because that is where the rubber meets the road. When I look at product management today, most SKUs are designed around T-shirt sizing: small, medium, large. You look at a competitor and say, “Hey, let’s go match that.” The reality is you can learn from telemetry, you can think about usage — how do you increase usage and consumption in different ways through how you design your SKUs? A big part of expansion, to me, is making sure account managers have a slew of offerings in their armory when they go to customers, and that they can talk about the best practices other customers are using so they can adopt them.

Tim: Were there any mechanisms internally to take all those learnings from your go-to-market org and feed them back into product management? Was it as simple as, hey, we communicated a lot, or was there anything you did to make sure those learnings transferred and the right decisions you referred to ultimately got made?

Paddy: A lot of it came through the quarterly product reviews. In our business, we have a quarterly product review, and in it you have sales, account management, CSM, customer success, customer support, and the onboarding teams all bringing feedback to the product teams. While it was not highly structured, it was a forum every quarter to bring some of this back to the product teams. The second aspect is, when you think about the amount of effort, money, and resources businesses spend in marketing and sales toward new business, expansion marketing is relatively underfunded in most businesses. It is a lost opportunity.

It is a lost opportunity in the sense that not only can you drive more advocacy, you can drive more adoption — and systematically thinking about expansion marketing is very, very critical. I do feel it is underfunded in most organizations, because everybody is smitten by the new business growth they want to get. One other aspect I will add is: how do you make this more product-led? The reason I put it back on product leaders is that making expansion more discoverable — making more features, aspects of your products, and your cross-sell and upsell more discoverable in the product experience — is critical. I think there is still a lot of room for SaaS businesses to do that better.

Tim: You brought up earlier the pricing packaging aspect of this. Is there anything you did around structuring contracts that was also helpful in encouraging expansion or any tips on that side of things?

Paddy: From a contracting perspective, it’s pretty obvious. One of the things you’ll find is that most SaaS businesses discount heavily in year one, and sometimes end up discounting for multiple years, out of desperation to win that deal. It’s very important upfront to think about the expansion rates you apply, the usage limits you apply, and how you package and structure the deal so that you continuously improve renewal rates and ARPA (average revenue per account), as well as the new expansion dollars you can capture. That is definitely one of the areas where SaaS businesses ought to do better. The big guys like Salesforce and Oracle steamroll you and do that, but startups and midsize companies struggle to find that voice and that muscle to be able to do so, and I understand why.
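To show why the upfront structure matters, here is a toy comparison with entirely hypothetical numbers: a heavy year-one discount that gets locked in for three years versus a smaller discount with a modest annual uplift written into the contract.

```python
# Toy comparison (hypothetical numbers) of two three-year contract structures:
# a heavy year-one discount that carries forward vs. a smaller discount with a
# built-in annual uplift negotiated at the point of sale.

list_price = 100_000  # annual list price

# Structure A: 40% discount in year one, same discounted price locked in for 3 years.
heavy_discount = [list_price * 0.60] * 3

# Structure B: 15% discount in year one, with an 8% annual uplift built into the contract.
uplift = [list_price * 0.85 * (1.08 ** year) for year in range(3)]

print("Heavy-discount TCV:", f"${sum(heavy_discount):,.0f}")   # $180,000
print("Uplift-based TCV:  ", f"${sum(uplift):,.0f}")           # ~$275,944
```

The specific percentages are invented; the point is that whatever structure is agreed at the point of sale compounds across renewals, ARPA, and expansion dollars for the life of the account.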

Tim: They panic a little bit and reach for discounts. So the advice is, yes, as a startup you have to realize you don’t have the pricing power of Salesforce, but maybe don’t get desperate too quickly.

Paddy: Yes, especially at the point of sale.

Tim: It’s harder than you think to make it up in year two and year three.

Paddy: You have the customer. The customer has agreed to buy. I think at that point, you don’t give in on some of these areas.

Tim: Wow, what a fascinating journey and amazing success at Freshworks. If you tie this all the way back to startup founders — thinking back to your time founding and leading AnswerIQ and having been a CRO these last several years — what do you think about org building at earlier-stage companies? This is always a question: when do we bring on a CRO? Any advice about that, or about general org building, for the early-stage companies that are maybe in that first three-year journey you took AnswerIQ through?

Paddy: Most startups start with a head of sales and then bring on a CRO at a later stage. CRO, to me, is a strange title. The title by itself has so little leverage — the customer’s only thinking about getting discounts from you. There’s no value conversation you can have with the CRO title because you have the word revenue written on it. That aside, the three things you have to do in a startup are: you’ve got to build, you’ve got to sell, and you’ve got to tell. In the initial stages of a startup, the CRO is the seller and the teller, because you’re learning from your customers and building your story as you go along. It’s a critical function to bring on early, in lockstep with the way you’re building your product, so you can drive growth for your business.

Tim: Paddy, this has been absolutely terrific. I know that the absolute most common set of questions we get from founders are all of these things we’ve been talking about around go-to-market, the decisions, the organization, the levers, what good looks like. This is incredibly useful to our audience. Again, Paddy has written some blogs that go into more depth. We’ll have links to those in the episode notes, but I can’t thank you enough for sharing your time and your insights with us here today in Founded & Funded.

Paddy: Thanks, Tim. Thanks for having me.

 

Deepgram Founder Shares Strategies for Scaling and Outmaneuvering Big Tech

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Managing Director Karan Mahandru talks with Scott Stephenson, Co-Founder and CEO of Deepgram, a foundational AI company building a voice AI platform that provides APIs for speech-to-text and text-to-speech. From medical transcription to autonomous agents, Deepgram is the go-to for developers of voice AI experiences, and they’re already working with over 500 companies, including NASA, Spotify, and Twilio.

Today, Scott and Karan dive into the realities of building a foundational AI company, meaning they’re building models and modalities from scratch. They discuss the challenges of moving from prototype to production, how startups need to out-fox the hyperscalers while also partnering with them, and, of course, how Scott went from being a particle physicist working on detecting dark matter to building large language models for speech recognition. This is a must-listen for anyone building in AI.

This transcript was automatically generated and edited for clarity.

Karan: To kick us off today, Scott, why don’t you share a little bit about your incredible background as a particle physicist turned voice recognition start-up CEO? Not your traditional journey into AI.

Scott: I built deep underground dark matter detectors. We were working about two miles underground — imagine a James Bond lair. There are yellow railings, there are cranes, there are workers milling about in the background with hard hats on, building stuff — and that’s exactly what was happening. We did this for a few years, building the quietest place in the universe from a radioactivity perspective. The purpose of the experiment was to detect dark matter with a terrestrial-based detector, and you had to build super-sensitive detectors that analyzed hundreds of analog waveforms in real time and tried to pick out the signal from the noise. We had this experience building with FPGAs, training models with GPUs, doing signal processing, and using neural networks to understand what’s inside waveforms. While we were down there, we also noticed that what we were doing was pretty insane. Who gets to do this? Who gets to work two miles underground and work on all this stuff?

We thought, man, there should be some documentary crew or somebody down here. There wasn’t. But we were like, “Well, we could be our own crew. Let’s build a little device to record audio all day, every day, to make a backup copy of what we’re doing.” Then we started to realize, wait a minute, you could put these two things together — the types of models that we were building to analyze waveforms could be used for audio as well. The thousands of hours that we recorded then, or many hundreds of hours, you would be able to search through and find the interesting moments and then all the dull moments you would get rid of. We looked for an API or a service that would provide that to us — this was back in 2015 — and they didn’t exist. Once we looked around long enough, we said, “Hey, we should be the ones to build this.” And so nine years ago, we started Deepgram.

Karan: That’s an amazing story. Every time I hear it, it is fascinating given what you were doing. It is also a reminder that some of the best founders and companies are born out of frustration and extreme pain that they’ve faced themselves, so that’s awesome. At that point, you started Deepgram. Maybe talk about some of the going-in theses you had. It feels like you were intentional about building a developer-first and developer-centric approach to solving this problem. Walk us and the listeners through your thinking when you initially started Deepgram.

Scott: This definitely files under “build what you know,” or the “you are your own customer” way of thinking. We were developers, and we were looking for an API with which we could analyze our audio. We realized there wasn’t a good way to do that, so we said, let’s build it for ourselves, and then we’ll be able to use it. As long as we’re scratching our own itch, we’ll at least know one customer who would be interested in it. But we also suspected there were a lot more out there, and that was an interesting journey, because when we first started, we thought, “Hey, we’ll speak solely to the individual developer, and that will let us build a product with a lot of users and a big company.” That was nine years ago, and we quickly realized that, hang on a second, there’s a whole lot of education that has to go on around AI first for folks to build up enough demand to support a venture-backed company in that area.

There were plenty of other buyers who were already there and had tons of pain. This was in the call center space, so recorded phone calls all across the world. Anytime you hear that this call may be monitored or recorded for analysis later, that type of market was already big. We focused on B2B as a company, but we always had this developer mindset, and we believed that in the coming years, just read the winds and the tide and everything that these things are going to combine.

The developer is going to get more and more power in the organization and figure out which products they will build. If you build with B2B in mind, but you also build with that individual developer in mind, and then you meld them together, that’s what’s going to create the great product, along with building the foundational models that supply it. We had some initial thoughts around that; I still believe in them today, and they turned out to be true. Another one was around end-to-end deep learning being the thing that would solve the underlying model problem. So: go-to-market and product packaging, along with foundational deep learning models solving the underlying deep tech problem — meld all of those together, put the blinders on, and only chase after that. And that’s what we focused on.

Karan: Having worked with you for a couple of years, I know there’s a lot of foundational tech that you and the team have built, and it was probably not always up and to the right from the first day you started building Deepgram. You already mentioned one of the challenges you faced, which is how you convince customers to buy foundational tech from a venture-backed and very early-stage startup. Can you talk a little bit about some of the other challenges you faced as you were closing your first 20, 30, 50 customers?

Scott: That decision to press pause on the PLG side of going directly to individuals, and to focus only on a sales-gated contact-us form — to go after the market that way — was a really big one for us. We learned that through basically getting punched in the face over and over. So that’s one, but I already talked about that. There’s another, which is that the first product we offered as a company was not speech-to-text, which is what a lot of people know Deepgram for today — our real-time speech-to-text and our batch-mode speech-to-text in over 30 different languages. Our first product was a search product, much like what we were doing in physics, where you’re trying to find the individual events that matter to you.

What you would do is try to find individual words or phrases or ideas that were happening in the audio and then surface those. We learned very early on that there was too big of a gap for buyers. How I would think about it today is that the product we built then was essentially a vector embedding model plus a fuzzy search — everybody knows these today, and there are great companies that do that kind of thing. We had to decide, “Well, are we going to be the fuzzy vector embeddings database company in 2016?” There was going to be no demand for it for a long time. What did people have demand for? They had demand for speech-to-text. Early engineers and researchers at Deepgram were hesitant: “Hey, isn’t speech-to-text boring?” Because they knew about the fancy stuff that was coming. They knew about the embeddings. They knew about the speech-to-speech models. They knew about all this other stuff.
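For readers unfamiliar with the idea, here is a toy illustration of “a vector embedding model plus a fuzzy search”: embed transcript segments, embed a query, and rank segments by cosine similarity. The four-dimensional vectors and segment texts below are made up; a real system would use a trained speech or text embedding model with far more dimensions, not hand-written numbers.

```python
import numpy as np

# Toy illustration of embedding-based search over transcript segments.
# The 4-dimensional vectors are invented placeholders for real model embeddings.

segments = {
    "detector calibration ran overnight":   np.array([0.9, 0.1, 0.0, 0.2]),
    "lunch order for the surface crew":     np.array([0.1, 0.8, 0.3, 0.0]),
    "unexpected noise spike in channel 7":  np.array([0.7, 0.0, 0.6, 0.3]),
}
query_embedding = np.array([0.8, 0.05, 0.5, 0.25])  # e.g. "interesting detector events"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank segments by similarity to the query and surface the most relevant moments.
ranked = sorted(segments.items(), key=lambda kv: cosine(query_embedding, kv[1]), reverse=True)
for text, vec in ranked:
    print(f"{cosine(query_embedding, vec):.2f}  {text}")
```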

And it’s like, yeah, but we have to earn our license to learn in this market. We need to establish ourselves in one product domain and be just undeniable in that domain. Then we can expand into other domains, and it gives us the right to play. I like to revisit that every year or two with Deepgram: “Hey, have we earned the right to play again? Have we positioned ourselves well?” I think that early decision — keeping the search side (by the way, we still have that in our product today) but switching to speech-to-text, and switching from straight-to-developer PLG to B2B as the first move — was the right one. By the way, we do PLG and all of that now, and search and that kind of thing is even better today, etc. We get to do all of these things now, but we had to really put the blinders on early, and I think that’s an important lesson for any kind of company.

Pick one thing. Be really, really good at that thing, and then see how you can expand it over time. And you might think, “Hey, I might expand it over quarters or something like that.” But it’s generally actually years.

Karan: I love that. I think we often get into conversations about the trade-offs between focus and having an expansive vision, but I think it’s great that you emerged from your position of strength around speech-to-text and then expanded from there. I remember from our conversations that conventional thinking back then was, like you said, that a lot of these hyperscalers were coming in. Google would launch a speech-to-text product, and there was this fear of commoditization of the most important source of data inside the enterprise, which is voice. And here’s this startup called Deepgram, venture funded, with a few customers and the best product in the market. I’m sure you faced a lot of people talking about the commoditization of speech-to-text. Help us understand how you went around that and through it. What was your approach to working with the hyperscalers like AWS and Google? And then I want to get to another moment in the company’s history, which I think was interesting after that, but let’s talk a little bit about that first.

Scott: This is always an interesting line to walk. You do want to create a general model that is good in many circumstances, and others out there will be creating that as well. There’s other technology out there, and by some measure, this is moving toward commoditization — in other words, interchangeable services: “Hey, we build an API service that supports batch mode and real time.” Well, there are others out there that support batch mode and real time, and they support English, and they support Spanish, and all sorts of things. But there are differences between them. There are accuracy differences, latency differences — the time it takes for the API to respond — and throughput differences. There are also differences in where you run your computations. You could do it fully in the cloud hosted by us, you can do it in the cloud hosted by you in your own VPC, or you can do it air-gapped within your four walls.

These different areas of competition and differentiation start to show up. There is a little bit of commoditization where you want folks to get going and build their first demo, to scale up a little bit, but then they start to feel the pain in the other areas. They start to feel it in latency, accuracy, and the model’s adaptability, or in where it runs.

From our perspective, we were thinking, hey, we believe from first principles that these systems can be cheaper than they were before. A few years ago, the only way to get speech-to-text was to pay $2 an hour. I think that’s 10 times too much. If you drop the price 10x, then you can get 100x or 1,000x more usage. This is one angle for us. You always have to walk the line: you have a commodity offering, but then you have this differentiation that makes it plainly not a commodity for a B2B customer that needs the best accuracy or the best latency or the best COGS or whatever it is. Then you have these large areas of differentiation.

Karan: One of the things I remember from our conversations early on — the conventional thinking at that time, if you want to call it that — was that the four things you mentioned, speed, cost, latency, and accuracy, were in some ways mutually exclusive. If you had speed, you didn’t have cost. If you had cost, you didn’t have accuracy. I think one of the things Deepgram really pioneered was reminding folks that no, it is possible to have very accurate, low word error rate products that are low latency and power real-time applications, and you can do it at a price where you can still afford yourself 85% gross margins. I know it takes a lot of work in the back end, a lot of engineering that you and the team did, but I think that was really interesting.
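Word error rate (WER), the accuracy metric mentioned here, is the word-level edit distance between a reference transcript and the model’s hypothesis, divided by the number of words in the reference. A small self-contained sketch, with made-up example sentences:

```python
# Word error rate: (substitutions + deletions + insertions) / reference word count,
# computed via a standard dynamic-programming edit distance over words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("please schedule the installation for tuesday",
          "please schedule installation for a tuesday"))  # ~0.33 (one deletion, one insertion)
```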

Let’s go fast-forward. We have OpenAI, which is in some ways a hyperscaler and in some ways a very nimble startup, and it might be the best of both. They launched Whisper as their speech-to-text product. Let’s walk through that sort of moment in time. What did that do to Deepgram? How did you react when that came out? What did you hear in the market? What did you then remind yourself and the team to do?

Scott: I remember when that came out, and we did our first testing. We were like, “Ooh, this model’s pretty good.” Also, their mentality on how to structure the model was like, “Ooh, this is an end-to-end deep learning model for real.” Up until that point in time, every open-source speech-to-text model was not — it put together several different pieces, so it was missing mostly on accuracy, but it was also missing on speed and latency, and if you don’t have accuracy, then the other stuff doesn’t matter. We were like, “Hey, this model is actually pretty good out of the box, and it supports several languages as well.”

One thing, though, to take a step back and look at us as a company: who did the first end-to-end speech-to-text models in the world? It was Deepgram. And we did them seven years prior to Whisper being released. We were like, “Hey, I'm surprised it took this long for somebody to put all this together and put a model out there.” I'm also glad it was OpenAI, which has a very big marketing bullhorn — when they say something, the world listens. Now, everybody is aware that end-to-end deep learning works for speech. Some of these other things were previously thought impossible, like supporting multiple languages in one model; people just hadn't thought about that. I'm glad they did this education as well and got accuracy to a reasonable level. It helped everybody who wanted to implement AI and voice AI into their products get through the learning curve faster.

From the outside, you might think, oh no, OpenAI just open-sourced this model. What's going to happen? Right? But like we talked about before, the other differentiation matters: latency, high throughput, low COGS, and running hybrid in the cloud or on your own infrastructure. Our own models are more accurate than Whisper, too. And there's a reason for that: Whisper, models like it, and many open-source models are trained on public data. When our customers work with us, they can adapt models to their own domain. In that domain, you can expose your models to different acoustic environments or audioscapes, and then the model can get good at those types of things. If you feed that type of audio to an open-source model that's only trained on YouTube videos, it typically doesn't do as well. From the outside, it might look like, oh no, but for us, we're like, oh yes, great. Everybody's going to get educated around this.

When they try out the open-source stuff and try to run it themselves, they'll be like, “Wow, this is really complicated. This is expensive. It's hard to make the model do what I want it to do.” But now they'll be educated and say, “Hey, are there other products out there in the world?” And yes, there are, like Deepgram, where you can build a B2B voice AI product and have it just work. We definitely had to take a beat as a company and say, “Hey, what's our positioning going to be around this? The world is going to wonder what we think about it.” I wrote a blog post at the time, and I probably did a podcast or something about it too, saying, “We're glad about this because, hey, it's moving all of it forward, and now everybody will be educated about the power of end-to-end deep learning for speech.”

Karan: I remember those conversations at the board level as well, where there was a little bit of a pause, a breather, and then we realized that OpenAI had just done Deepgram and this entire speech AI market a huge favor by educating the market with, like you said, the megaphone they have.

In venture, we always say the trick is to be non-consensus and right, and there are a lot of people who operate on consensus thinking. Thinking about AI, or more specifically speech AI, today: what does the world believe that you think is untrue, and conversely, what does the world not believe that you think is true? It'd be helpful to hear a little bit of your vision for where this goes and what most of us are probably getting wrong in our assumptions about AI or speech AI.

Scott: I’m smiling because six months ago, the answer would be different than a year ago, than two years ago, etc., because it’s been such a rapid pace of learning, and everybody is paying their tuition around what AI is capable of and how fast it can do different things. There were tech companies, and now there are intelligence companies, and intelligence companies move three times faster.

I have to keep updating my own model of where the world is at and what the world understands compared to what we understand. Also, are there overreactions? For instance, a year ago, it was really plain to see that smaller, more efficient models were going to become super important because the cost of inference is going to matter so much. A big reason for that is that AI is actually effective, and when you scale it up, no company wants to pay a hundred-million-dollar AI bill.

Over the last year, we’ve seen that come true. COGS have become more important, cost has dropped for LLMs and that kind of thing. We’ve seen a recent thing now, which is maybe a couple of months ago, OpenAI did their demo of GPT-4o with the voice mode. It’s a speech-to-speech model, and I think the industry probably absorbed that a little too much. They think, “Okay, everything has to be a multimodal model.” I’ll caution folks against that.

Multimodal models are great, especially in a consumer, jack-of-all-trades use case. You shape them into a single personality and then allow them to handle some of the normal tasks, and that will work fairly well. In a B2B use case where you're trying to build a voice AI agent that handles insurance claims, or food ordering, et cetera, they're going to have to interact with CRMs. Think of these voice AI agents as humans, okay? They're going to have to interact with the CRM, they're going to have to interact with a knowledge base, they're going to have to interact with all these things.

A speech-to-speech model that is trained to sound likable and respond to things is not going to be able to do that. You're going to have to have separate parts with different beefed-up components, and every B2B company is going to have to choose where they want to spend their COGS, basically. Do you need good speech-to-text? Do you need good LLMs? Do you need good text-to-speech? Do you need them to interact with a RAG system? Do you need them to interact with whatever next-gen cognitive architecture system is coming out?

You’re going to need controllability. The idea that multimodal models will save us all and reduce all of the complexity, I don’t think that’s necessarily true because these models need to interact with all these other pieces, and it’s going to take a while. It’s going to take several years for that to shake out. Don’t get me wrong, though, three years from now, five years from now, you’ll start to see a condensation of the different areas into more multimodal. It probably won’t be all one single master huge model, but it will be components of it that you put together. It won’t look so much like a Swiss army knife. It’ll look more like putting together AWS Lego blocks or something like that in the future. At least for B2B right now, you need more control. You don’t want to leave it open-ended to a speech-to-speech system to handle your bank account resets. You need way more control over it.

Karan: The pace at which the space is moving makes anything you say almost obsolete by the time you finish saying it, so I totally resonate with the statement that you have to keep changing your own mental model by adapting to what's happening.

One of the things I love about this space, and about speech as a modality or as a way of interacting with the world, is that it's not zero-sum in many ways; it expands the pie of what's possible once you get to the place Deepgram has gotten to and allow people to build these amazing applications. Talk a little bit about some examples of enterprise B2B applications that Deepgram is powering today. And as you look forward, without divulging too much, of course, what should we expect? What should customers expect? What should the world expect from Deepgram in the near future?

Scott: We’re in a revolution right now, and I wince when people call it another industrial revolution. I think it’s different. We had an agricultural revolution. We had an industrial revolution. Then we had an information revolution. Now we’re in an intelligence revolution, and I won’t go into the specific details of each of those, but intelligence is different.

Two years ago or earlier, you had to have a human do intelligent work. That is not necessarily true anymore. Before the industrial revolution, you had to have a human swing the hammer; after it, that was no longer true. Before the information revolution, you had to write things down on paper and file them away in a filing cabinet; not anymore, not after the information revolution. You can transmit information at the speed of light, you can categorize it, you can search it, and you can do all these things.

In our current situation, things are going to change drastically, but they're not going to change all at once, and a company like Deepgram is not going to try to tackle all of it at once. It's important to recognize that everything is going to be touched by this, just like everything was touched by electricity, by cars and transportation, by the internet, et cetera. That comes back to us putting the blinders on and saying, “Hey, we have a belief that there are these fundamental infrastructure companies that need to be built that are also foundational model builders at the same time, and this is what Deepgram is.” We have a horizontal platform that anybody could ostensibly use to do anything, but we are going to focus on certain areas that will make the world more productive and where we have an advantage from what we have already done or from the way we think about the world.

The advantage from our perspective is scale. We always think about the world from first principles and what will scale, and so we’re thinking about cost per watt, we’re thinking about how much training data we need in order to do that kind of thing. We look at the world like, “Hey, the call centers, food ordering, that kind of thing. This is all massive scale, and it’s all going to be disrupted, and so we’re going to focus in that area where there’s scale.”

There are other areas, like doing dialogue for a Hollywood film or something like that, where that's not our game. There are other companies out there that will do that well. I think it's really helpful to think about the world this way: there are certain things that scale gets you, and if you drop the price, you can change the face of work and how it happens. We'll do our part in the voice AI area. There will be other companies doing it from a search perspective. There'll be other companies doing it in text. At some point in the future, we'll all have to figure out how we fit together, but that point is not now. Let's just go transform our own respective areas.

Karan: One of the interesting things about the applications I hear about from you and the team that Deepgram is powering is the contrast with what we hear from a lot of other AI companies, where much of the work is still at the prototype stage, still in sort of toy boxes. I think one of the things that is really unique about Deepgram is that it is a foundational AI company powering real use cases in production for large enterprises. We don't hear a lot of examples of AI companies powering use cases in production, and I think Deepgram is an exception.

Scott: It’s kind of amazing to me at this point. I think we could say that by default, you could probably assume that Deepgram is under the hood if the speech-to-text is working well in whatever product you’re using. Don’t get me wrong, there are other good technologies out there, but it’s a pretty good bet. That’s just in speech-to-text. We released our text-to-speech last year as well, and that is growing at a really big clip too, and that’s powering that real-time agent side. We have companies like embedded device companies that many people would be familiar with. Unfortunately, I feel like I have to be cagey about this because many of our customers don’t like us to say that Deepgram is under the hood, but I can mention some of them. Nevertheless, there’s a lot out there that is being powered right now.

If there’s one thing to take away from this, it’s not like this is coming, it is already here. It’s already being utilized, but now it’s being utilized in new ways that will be even more user-facing. You’ll start to think, if I call a call center, if I need to order something, if I’m going to talk to something online, you’re not going to dread it. You’re not going to think like, “Oh, great, I’m going to spend 45 minutes, and it’s going to be a horrible process.” You’ll start to be glad. You’ll be like, “Wow, that was really peppy. It understood exactly what I needed. I solved my problem. I’m in and out in three minutes.” That’s way better than sending an email and waiting for it to come back three days from now or something.

Karan: Is there something, Scott, about the nature of Deepgram's product or the use cases that you power that just makes it — I don't want to call it easier because nothing you do as a startup is ever easy — but easier for you than for many of the other AI companies that are having a hard time moving from prototype to actually powering production use cases? Why is it working so well at Deepgram when so many AI companies are having a hard time with that?

Scott: I think partially, time is on our side. There are only a handful of foundational AI companies that were started around 2015, and we have the benefit of being one of those. Another part is coming at the problem from first principles. I like to liken this to the difference between Tesla and Ford, or the difference between SpaceX and Lockheed Martin. All their cars have four wheels. They've got a steering wheel, they're propelled through the world, they have turn signals. The rockets are tall and pointy, and they have an engine at the other end.

What is the difference? The difference is not so much the chemical makeup of the metal they use or something like that; it's the methodology by which they arrive at their product. You have to think of your company as a factory right now. To build a true foundational AI company, efficiency matters. Bring in some of the Amazon mindset as well, because it's really more like three companies in one: you're a cutting-edge research company like DeepMind, and you're a cutting-edge infra company trying to compete with AWS and Google and Azure and all of that.

You either need to partner with, or be yourself, an amazing data labeling company as well. When you get all three of those right, then you can have this amazing product, so that's the secret. It isn't that much of a secret, but it's just really hard to do all of that from a lean perspective and from a first-principles mindset. What we're always looking for internally is, instead of thinking, “Should we hire for that?” we think, “Should we do it at all?” Then the next question is, “Can we automate this?” And the next, “Well, maybe we need to hire for it now, but can we automate it later?”

You’re always trying to explore and then condense and explore and condense, and then you rely on this backbone spinal cord of the company or whatever that is amazingly well suited to accomplish the goal of, so, for instance, the vast majority of models trained at Deepgram now are not trained by a human. They’re trained by a machine that we built ourselves to do those tasks. Now, the frontier models that have never been trained before, they’re partially trained by a machine and partially done by a human. They’re working in concert. But that type of thing, if you just tried to start building it today, that would be a very good idea to think along those lines, but the companies that have been doing it for two years, five years, nine years in the case of Deepgram, it’s harder to go compete with that. So you have to come up with a new first principles thing that you think will be better in the end, and it might take five years for it to pay off just because of the underlying moats that companies already have.

Karan: Listening to you, and continuing to talk about where the space goes, how fast this market is evolving, and how fast speech AI is evolving, I sometimes wonder whether, if you and I were doing this podcast two years from now, it would be your agent talking to my agent, powered by Deepgram voice AI. So I look forward to that.

Scott: And we’ll just approve it. In the end, we’ll be like, “Yep. Yeah, that’s what I would say. Actually, it’s better than how I would say it. Okay, good.”

Karan: Yeah, that’s great. Well, on that note, I just wanted to say a huge thank you to you on behalf of all of our listeners. There are so many things I love about Deepgram. It’s such a joy to work with you. It’s a privilege to be your partner on this journey. But the one thing I’ve always said that really separates you from many of the founders that we work with is the audacity of your ambition and where you want to take Deepgram and, by extension, this whole space that you’re operating in. I really appreciate you and your time today.

Scott: Thank you, Karan. Love working with you and Madrona.

Temporal Founders Bet on Open Source and Developers to Build Invincible Applications


Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Madrona Managing Director Soma Somasegar hosts Temporal Co-Founders Samar Abbas and Maxim Fateev in studio. Temporal was founded in 2019 based on an open-source microservices orchestration engine project but is the result of a more than decade-long partnership between the two co-founders that spanned their time working on different iterations of the same thing at AWS, Microsoft, and Uber.

In this episode, Soma, Samar, and Maxim dive into the challenges of building an open-source ecosystem while also working on a commercial offering and scaling a successful company, how to navigate and adjust a product roadmap in an ever-changing world of AI and large language models, and how to successfully build an early startup team, navigate through a CEO transition, and bring on a startup’s first independent board member. It’s a very compelling conversation and a must-listen for every builder out there.

This transcript was automatically generated and edited for clarity.

Soma: To kick it off for the audience’s sake, why don’t you start with what Temporal does and, more importantly, how does it help developers?

Samar: Temporal is an open-source application platform that ensures the durable execution of your code. Let's say you have a function, and during the execution of that function, the machine crashes or another kind of hardware failure happens. What Temporal provides is that it will resurrect your application in exactly the same state where it left off and continue on. As a software developer, for you it's like magic, like the failure did not happen at all. So what does that mean? Imagine you have a money-transfer application, and during execution one of the transactions fails because the application crashes in the middle of it, where it debited the money from the source account but was not able to credit it into the destination account. If such an application is built on Temporal as a platform, Temporal will resurrect the state of your application on a different host, exactly where it left off, with all local variables and call stacks, and then continue executing from that point onward.

What that means for you as a developer is that you can never have transactions left in an inconsistent state. You won't have money transfers lost in the ether. If money was debited from one account, then we as a platform guarantee that the entire piece of code executes to completion in the presence of all sorts of failures. That completely eliminates 90% of the code people write to deal with those infrastructure or hardware failures. So this is a very powerful primitive powering a lot of applications. Every Snapchat story is a Temporal Workflow. Every Coinbase transaction is a Temporal Workflow. Every Airbnb booking is a Temporal Workflow. Under the covers, they are backed by Temporal as a platform, which is powering those use cases.
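
To make the money-transfer example concrete, here is a minimal sketch of what such a workflow can look like with Temporal's TypeScript SDK. The function names, accounts, and timeout value are illustrative assumptions; the point is that the workflow reads like ordinary sequential code while Temporal persists its progress and resumes it on another host after a crash.

```typescript
// activities.ts -- plain functions that call external systems (bodies omitted / hypothetical)
export async function debit(account: string, amount: number): Promise<void> {
  // call the banking API to withdraw `amount` from `account`
}
export async function credit(account: string, amount: number): Promise<void> {
  // call the banking API to deposit `amount` into `account`
}
```

```typescript
// workflows.ts -- the durable part; Temporal replays this on another host after a failure
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { debit, credit } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute', // activities are retried by Temporal if they fail
});

export async function moneyTransfer(from: string, to: string, amount: number): Promise<void> {
  await debit(from, amount);  // if the process crashes here...
  await credit(to, amount);   // ...the workflow resumes and this step still runs
}
```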

Soma: You guys know a little bit about my background, but I was for a while involved with the .NET Framework and Windows Workflow Foundation. So I really appreciate what it means to say, “Hey, let's build an orchestration engine for workflows.” I don't want to age myself here too much, but let's fast-forward to today. Temporal is a great solution for people who care about orchestrating and managing workflows. I'm sure there are other solutions out there. Can you talk a little bit about why Temporal is best in class and better than all the other solutions out there?

Samar: I think there are two areas where these kinds of systems typically fall short. The first one is the developer experience. Just by the nature of saying, “You can now build stateful applications through this orchestrator, which inherently gives you durability,” they force a certain programming model and certain restrictions that come along with that programming model. That's typically one area where they fall short. The other is that if you look at these orchestrators, they have very severe limitations on scalability and performance characteristics. This is where Temporal really shines compared to other systems out there. First of all, from a developer experience perspective, today we have SDKs in six languages. So, if you know how to build applications in those six languages, you can build applications on top of Temporal.

So, there’s a little bit of a learning curve to understand the SDK, the API, and all of that. But after that, if you are a Typescript developer, you can write Temporal Workflows. So there is no extra thing that you have to learn. And then on top of that, the scalability and reliability characteristics is — I mentioned earlier, every Snapchat story is a Temporal Workflow. You can imagine New Year’s Eve, how many of those Snapchat stories are getting posted. So, this is a system that scales horizontally with billions of workflow executions running concurrently.

Max: People can already build distributed systems and large-scale systems. But the main point of this abstraction, what we call durable execution, is that with Windows Workflow Foundation, for example, you don't write plain code; you actually create some sort of abstract syntax tree. In Temporal, you just run code directly. You can use a breakpoint, you can use a debugger, and if your code calls ABC, it's just a call to ABC. You don't need to create a sequence of tasks and so on. So it's a very different experience, and I think this experience allows you to hide a lot of complexity.

So the whole idea of Temporal is that this high-level abstraction allows you to hide all the complexity of distributed systems underneath. It's a highly scalable, fully asynchronous system. It's an event-driven architecture underneath, without all the negatives of an event-driven architecture.

Soma: I’m actually glad that you mentioned Typescript. It did bring back fond memories of DevDiv days. But more importantly, it is one of those things where we said right from day one like, “Hey, we thought about open source because we were building it on top of JavaScript.” And usually, in my opinion, at least, there is a huge chasm to cross when you go from a cool open-source project to building a commercial entity or a company around that open-source project. For you both, what was that “Aha” moment when you said, “Hey, I can see why a commercial entity around this successful open-source project could make sense for the world.”

Max: Okay, first, you need to have a successful open-source project that solves a real problem. And we proved that multiple times, because the idea first appeared at Amazon when I was the tech lead for the Simple Workflow service, which was the first service that introduced those ideas. We didn't nail the developer experience; that's why the service is still not very popular, but I think we at least learned a lot. Then Samar built a similar thing at Microsoft: he pioneered the Durable Task Framework, which was later adopted by Azure Durable Functions. Then we built it at Uber, so we had multiple iterations. Even Simple Workflow, the public version, is the third version, so we had three iterations inside of Amazon. Then we had iterations within Microsoft and Uber, so Temporal is practically the fifth or sixth iteration of that. That's how we learned.

And when we built it at Uber, I think three things happened. First, because we iterated a lot, we finally nailed the developer experience. Second, because we built it at Uber, we had the ability to prove this platform on real applications, so we got a lot of internal adoption first. For the first two years, we didn't get any external adoption — nobody cared about our approach, but we were building it in the open, from scratch, on GitHub. In three years, we grew to over a hundred use cases within Uber. And the third part was that people knew us from Amazon. So when the first open-source users came, for example HashiCorp, the developer experience, the high-scale production use, and people practically knowing us personally helped us propel that and start to get adoption within pretty significant companies pretty fast.

Also, one thing I didn’t mention is every company, like at Amazon, at Microsoft, and at Uber, we’ve got all this bottoms up adoption very organically. None of this project was ever sanctioned from the top. Even at Amazon, it was a bottoms-up project. Samar did it also kind of bottoms up. And at Uber, it was the same thing. It never was like management came and said, “Let’s build this.” We just knew that it was needed, and we built it. And we got adoption every time. So, we absolutely were sure that the world needed it. And then from the monetization point of view, because from the beginning, it was built as a backend service, and then SDKs, and SDKs needed backend service to run, it was pretty clear how to monetize that. Because you cannot monetize a library. Okay, you can, but it’s certainly super complicated. If you look at every successful open-source company, it’s either big data or it has a persistence component, right? We have Confluent, we have Mongo, and we have all the other similar companies — and we fit into that category very well.

It’s very easy to monetize it as a service, and this is what we planned. If you look at our seed-level presentations, we didn’t need to pivot at all. We are practically just executing on that path. Certainly, our scope is much larger right now, and we added a lot of ideas, but our original idea about monetization as a cloud service, our original idea of building an open-source community — all these ideas are still the same.

Soma: When I think about open source, the other thing I always wonder about, particularly if you're running a commercial company, is the balance between how much energy and effort you put into keeping the open-source ecosystem thriving and staying engaged with the open-source world, while at the same time thinking about how you're moving the ball on the commercial front. Do you feel like you're able to maintain that balance in a good way, or do you feel like you're often struggling to get to the right balance?

Max: It’s hard. It depends on your business model. Our business model is that we have fully featured open source under a very permissive MIT license, and we don’t do open core. So, it means that there is no crippled open-source version and then an enterprise version. Our open source is fully featured. And the only way we monetize it is by hosting the backend component of that on the cloud. And we also promise our users that it’s fully compatible. So, if you are running our SDKs against the open-source version, you can switch a connection string, and within a few minutes, you can run it against our cloud. But we also promise if you run against our cloud, you can switch connection string and connect to the open source without changing a line of your code. So I think that keeps us honest because it means we cannot have features that will break compatibility. It’s pretty straightforward. We are innovating on scale. We are innovating on control plane. We are innovating on cloud features. But the core abstractions and how code runs and SDKs, they’re open source, and they’re compatible.

Samar: The way I would think about it is that there are three models you typically see with open-source companies. The first one is that essentially everything is open source, but the company goes to other organizations and says, “You probably don't want to run it yourself. Let us create a blessed build and sell it with some kind of support contract that allows you to run it.” Then, probably a decade ago, you started seeing a lot of open-core models, where open source, in all honesty, was more of a marketing tool, because for any meaningful application there were proprietary features that those companies monetized by packaging them into an enterprise offering. We, as Max explained, fall into the third category: open source is a big part of our strategy, and we have a full commitment to it, just because of the critical nature of the applications people are building on top of us.

We have a 100% commitment that for the core product experience, you can port your application with zero lines of code changed from open source to cloud, and from cloud back to open source. And we keep ourselves honest on the quality of the service we provide in the cloud: yes, you love durable execution, you love Temporal as a platform that enables durable execution workloads, and Temporal Cloud is a differentiated way to run those workloads. So, if you look at the next generation of open-source companies, like Databricks, or MongoDB transitioning into that model with Atlas, or Confluent building Confluent Cloud, we believe we were lucky enough to have started the company in this era, in this world.

Soma: We are talking about developers, we are talking about open source, and I know that two years ago you started this annual Replay conference as a way to engage with developers and the developer community. And I know that you have already announced that this year it's going to happen in September. Can you talk a little bit about what you envision happening there and why it is exciting for you to have this annual conference?

Samar: The way we think about that conference is a few things. First of all, we believe Temporal, or durable execution more broadly, is a whole paradigm shift, a new way of building applications. So this is a conference where we want to go broader than Temporal as a technology. We want it to be a real backend conference for developers who are excited about durable execution as a category. The conference is becoming a place where, once a year, you can renew your vows with this movement we started around backend development: it doesn't have to be so hard, so why is it so hard?

So this is where these users come together and talk about their experiences building on top of Temporal. That helps us build a connection with our developer community and drive feedback into the product; a lot of our product roadmap actually comes out of our interactions at these conferences. The conference also shows how people are using and leveraging Temporal for all sorts of use cases out there. For instance, last year we had 46 talks at our conference, and of those, only seven were presented by Temporal employees. The rest were Temporal users and customers coming in and showcasing how they are leveraging the technology.

Soma: Let’s switch focus and talk a little bit about AI. When you guys started in 2019, it almost feels like it was a pre-AI world, right? The world hadn’t heard about LLMs to the extent they know about it today. It was mostly microservices, cloud-based services, and distributed services. Those were the things that people were talking about in 2019. Fast-forward to today, you can’t have a conversation without talking about AI, generative AI, LLMs, and a whole bunch of other things. The rate of innovation is pretty fast here. How has that changed your view in terms of how your roadmap is evolving, what your product is going to be capable of doing, and what are you frankly doing with AI?

Max: We do use AI in surrounding areas. For example, in our education, we have an AI bot in our Slack channel and on our website, but our core product didn't change, because our core product is about making the execution of code reliable and guaranteeing that execution. And you wouldn't be surprised that a lot of AI companies are our clients and users. I think we have over a hundred companies with a .ai domain in our customer base, for example. Obviously, not all of them are doing LLMs, but I think a pretty significant chunk of them are. They use us for a few things. One is just the whole infrastructure, because Temporal is the best tool to create practically any control plane, whether you're doing deployments or even creating your own cloud service; HashiCorp Cloud, for example, is built around that. And then there are data pipelines, machine learning, and big data; a control plane for those is a very, very common use case, and machine learning and AI training are all very popular.

And then, when you use those models, if you are not just doing chat interaction but actually using them for something more serious, like generating text from video and then analyzing it, there are multiple steps, and some of these steps can be parallelized. Some of those steps should execute on specific machines, because Temporal allows routing to specific machines and so on. Those are very, very common use cases.

Another exciting one that comes up more and more is agents. The first agents were pretty primitive: you ask something, the agent uses a couple of tools synchronously and returns a result. But the new generation of agents is actually performing actions in the real world. Those actions can fail, they can be long-running, and sometimes they need to wait for humans to come back. When you start building that kind of thing, it's practically transaction orchestration, and it's a very common use case. When you need to make those models run reliably at scale, Temporal is used a lot for those types of use cases.
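
As a sketch of the long-running, human-in-the-loop agent pattern Max describes, the following hypothetical Temporal Workflow (TypeScript SDK) calls a tool, then durably waits for a human approval signal before performing a real-world action. The signal name, activity names, and timeout are assumptions for illustration, not any particular product's API.

```typescript
import { proxyActivities, defineSignal, setHandler, condition } from '@temporalio/workflow';
import type * as activities from './activities'; // hypothetical activities: runTool, performAction

const { runTool, performAction } = proxyActivities<typeof activities>({
  startToCloseTimeout: '5 minutes', // real-world actions can fail and be retried
});

// Signal sent by a human reviewer once they approve (or reject) the proposed action.
export const approveSignal = defineSignal<[boolean]>('approve');

export async function agentTask(request: string): Promise<string> {
  const proposal = await runTool(request); // the agent calls a tool

  let approved: boolean | undefined;
  setHandler(approveSignal, (ok: boolean) => { approved = ok; });

  // Durably wait, possibly for days, for a human to respond; this survives crashes and restarts.
  await condition(() => approved !== undefined, '3 days');

  if (!approved) {
    return 'Rejected (or no response) from human reviewer';
  }
  return performAction(proposal); // then perform the real-world action
}
```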

Soma: So in a world of what people call agentic workflows, Temporal is a must have?

Max: Absolutely. Yeah. I’ve seen some very interesting examples of whole platforms built around that.

Samar: The analogy I would give is from back at Uber, when Max and I joined the company. Uber was going through an interesting phase. They had a monolith, and then they started teasing it apart into microservices, but somehow they ended up in a place where they had more microservices than engineers in the company. What we saw there is that these durable execution systems became the glue to tie those microservices together in a fault-tolerant, resilient, and scalable fashion. Sometimes people used to describe us as an asynchronous service mesh.

Then if you think about… Look at what’s happening with this whole AI ecosystem out there is at least what I see is a lot of these big foundational models are kind of hitting their limits. And now there is a lot more focus on these much more targeted specialized models which are really good at doing things, and then, this is why this whole agent ecosystem is starting to emerge around. Which means that building real-world applications that eventually will be tying in these agents and having a mechanism where these agents can have communication with each other in a fault-tolerant, reliable way. So you need that same service mesh to kind of enable that. So we believe this is where Temporal is very differentiated, even in the world of AI when a lot of those applications will be these agents talking to each other.

Soma: So, should I think about Temporal as an all-in AI company today?

Samar: Every company is all-in AI in today’s world.

Max: But I think it’s more like a tool for all AI companies.

Soma: Particularly if we believe in a world of agents and agentic workflows, and that's how the world is going to evolve, you guys are right at the center of that.

Samar: Here’s how I see the AI ecosystem kind of three categories. One is a company building these large foundational models, which are the OpenAIs of the world. Then you have a bunch of companies building tooling, enabling those who want to train your models and all of that. It is kind of providing the basics for people who are building those things. And the last category is people who are building these verticals — really SaaS solutions powered by AI to do some really meaningful thing and add value to the user. I would categorize us in the second category, which we call the picks and shovels-

Soma: Yeah, what we call picks and shovels. I was just thinking that. That's great. I want to switch a little bit to talk about the two of you now. You've known each other for 10, 15 years and have worked together in multiple environments, whether it's Amazon or Uber or deciding to start Temporal. I'm sure along the way you thought about, “Hey, do I want to start this with Maxim?” or “Do I want to start this with Samar?” You decided to do it and have been at it for 5+ years now. What advice would you have for a new entrepreneur who's thinking, “How do I decide who my co-founder should be?” Any words of wisdom?

Samar: I don’t know what advice I would have for other people, but I can share a little. One of the things is that we’ve been really privileged. We have known each other for almost 15 years now. I think over this long period of time, we have built a lot of respect and appreciation for each other’s skill set. Both of us bring very different things to the table. At the same time, both of us have very strong opinions that are, luckily, loosely held. So, I think those are the kinds of things that have helped us develop that relationship over the last 15 years.

The reason I was a little bit hesitant about what advice I would provide is that a lot of times people don't have that luxury when they're starting a new journey and looking for a co-founder. Yes, you don't have 15 years to first build that relationship. But for us at least, I felt that helped a lot. It helped us build appreciation and trust for each other and get to know each other's boundaries. And in all honesty, the trust we accumulated over so many years of working together meant that when we were starting Temporal, I didn't even think twice. I knew that if I wanted to start a company, Max was the person I really wanted to start it with.

Max: I think one thing worth mentioning is that we were never people who were thinking about starting a company. We actually didn't discuss starting the company until the first VCs started talking to us. We were just working on the project, trying to solve real problems, doing it as an open-source project. There was kind of an assumption that maybe it would become successful one day and something would come out of that. But the reality is that it was never, “Okay, we need to build a company, let's build something.” It was just solving real problems and trying to make the project successful. At some point, it became successful, and we started getting interest. Then we started to discuss it. Initially, we were actually hesitant because we had good jobs, and neither of us had ever managed a single person before starting the company. We were just ICs, just engineers, and we had a good life at Uber building our project. But we realized that the idea and the product we had were much larger than the company we were in, so the full potential couldn't be realized within any single company, just because priorities are obviously different. And then we decided that taking money was the option. And yes, I don't think we had any doubts that the two of us should do it together. Even the open-source project wouldn't have been successful, I think, if both of us hadn't been there.

Soma: But in the last few months, you also went through a transition. You literally swapped roles, right? Maxim, you started off as the CEO, and Samar, you were the CTO. A few months ago, you decided to switch roles. Not every company goes through this kind of transition as seamlessly as you have made it look to the rest of the world. So tell us a little bit about the transition, what drove it, and how you managed it.

Samar: When we started the company, as Max said, we had absolute clarity on the product we wanted to build. Both of us are first-time founders, and we had very little idea of what we were jumping into, because, as you can imagine, building a successful company is a lot more than building a successful product.

Initially, the things that really mattered were whether we had the right strategy in place: what's the product, and how does it transform things? We made certain strategic choices literally the day we started the company, and a large percentage of those choices still hold. The strategy of the company is roughly the same as the day we started it, when it was just the two of us. But at the same time, what we are finding is that it's no longer just the two of us. There are over 200 people in the company.

So we now resemble a real enterprise SaaS organization. We have all of those functions, the organizational challenges, the question of how everything fits together. A lot more operational challenges have started to show up, requiring a very different rigor and a different skill set. So Max and I took a step back and thought about the core strengths we each bring to the table. One of the things Max is amazing at, and that I have a lot of trust in and have built a lot of respect for, is that he can take any problem, lift it to a high level of abstraction, and solve it there — like setting the long-term direction for where the company needs to be in the next five years. I've always been the one who is more operational — what steps we need to take to deliver on that mission.

What we are finding is that a lot of those challenges, given the growth phase the company is going through and the operational rigor we want to bring in, fall in this second category now. So it made a lot of sense for me to step into the CEO role, to be an operational CEO, and to bring that rhythm to navigate this next phase of the company's growth. And although on the surface it looks like a role swap, Max, as CTO, is driving product strategy but not managing people, essentially. So it was a very conscious decision that I feel plays to our natural strengths as leaders in this next stage of the company.

Max: As you said, in a lot of cases these transitions happen because they have to happen. In our case, we didn't have any forcing function, because as you know, you sit on our board — it was not like, “Okay, something is going wrong, we need to fix something.” Actually, I think everything is going pretty well, even in this pretty tough environment. It wasn't a forced decision in any sense; it was more me finally realizing what the work of a CEO is and clearly understanding that the further we go, the less technical the job becomes. It becomes much more about management and execution. I know that my strengths are technology, product, and architecture. And yes, I tried, and I found that managing people is not something I really enjoy doing. I clearly saw that Samar, even though we both started the company with zero management experience, clearly has a knack for it. He can manage, and he can drive execution, I think, very, very efficiently. And I'm lucky because, as Samar said, we have such deep trust. For me, it was a no-brainer. I 100% trust him, so giving him the CEO role was something I didn't hesitate on at all. I know that if I didn't have a co-founder like him, it would have been a much, much harder decision, because losing control is something most people have good reasons to fear. Having somebody you can trust a hundred percent is why it was a seamless transition. The other part, a funny one, is that nobody objected. Usually, if something happens that isn't natural, people complain or at least talk about it. I think everybody looked at us and said, “Yeah, that's right,” and we just kept doing what we're doing.

Soma: I should tell you, hearing this from both of you, hats off to you both for orchestrating this change, and for doing it at a time when, as Max just said, there's no forcing function. Making such decisions and transitions when there is no forcing function is far better than when there is one, because you really are looking at, “Hey, what is the right thing for Temporal given where the company is at this stage and what is required for the next 2, 3, 4, 5 years?” So hats off to both of you for doing that. I also want to take this opportunity to congratulate you both on a recent announcement. You just announced that Sahir Azam, the chief product officer at MongoDB, is now on board as an independent board director at Temporal.

I’ve known Sahir for many years. In fact, he and I are sitting on the board of another company. So I’ve got a chance to work with him over the years and he’s a terrific fellow, so congratulations for getting him on board.

There’s an interesting decision that every entrepreneur has to go through. Where at some stage in their company’s life cycle, they have to start thinking about, “Hey, how do I bring on some independent board directors?” So, can you share a little bit about your thinking in terms of, “Hey, why do I need an independent board director now and what attributes we are looking for,” as you made the decision to bring Sahir on board?

Samar: It was a pretty big decision because this is our first independent board member. So far, everyone who sits on our board is either one of the two of us, the co-founders, or a VC who has invested money. A few months ago, we started looking at this, especially with me stepping into this new role as a first-time CEO, and thinking about what kind of structure I want to build around me so that I have the right support to navigate the company through the next level of challenges. Clearly, one of the things I really cared about is people who have been there and done that, because for us every new challenge, every new problem, is an unknown. Both of us are first-principles thinkers. Yes, we can apply first-principles thinking to solve problems, but it takes time, and a lot of trial and error, to finally get to the right answer.

Being there and done that, and specifically in a similar kind of setup, a similar kind of business to the one we are trying to build, was one of the criteria we really cared about. I feel Sahir is really differentiated on many fronts. First of all, we as a company look up to Mongo as a business, just from the open-source nature of it and how they grew that community into a successful business. There are a lot of learnings there that we look up to. Even more than that, one of the things we really like about Mongo is their user base and how they operate as a company. They are obsessed with developer experience, and that is really what has helped them build the engine that makes their business thrive. Based on those similarities, we couldn't have asked for a better fit for the first independent board member we were looking for. It was just an amazing match for us.

Soma: Max, do you want to add anything to this?

Max: The only thing is that it’s one thing to interview somebody and decide to bring him on board. But I think after he actually joined and we had conversations, we clearly saw that this confirmed our choice. Because just in a very short conversation, we were able to extract so much value, and clearly, we see a lot of alignment there.

Soma: Having known Sahir for a little while, I'm a huge supporter of his, and I'm so glad that he now comes to the Temporal board meetings. With that, I think it's time to wrap up. I want to thank you both again for this conversation. We covered a lot of different topics, and it was great to hear your candid perspectives and views on the product, the company, the open-source ecosystem, and some of the organizational decisions you've had to wrestle with in the recent past. There were a lot of interesting nuggets of wisdom, at least for me, in terms of how you think and how you execute. So, thank you again. It was great to have you on this podcast.

Samar: Thanks so much. It was a fun conversation. Really enjoyed the talk.

Max: Yeah, thanks for having us.

 

Transforming TV Advertising: iSpot’s Sean Muller on the Role of AI and the Future of Streaming


Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Venture Partner Len Jordan sits down with Sean Muller, Founder and CEO of iSpot, which measures TV advertising across both conventional cable and streaming, including audience size and characteristics, creative effectiveness and performance, and both attribution and outcomes. iSpot works with virtually every top advertiser and has helped manage more than $500 billion in advertising spend across more than 60 trillion advertising impressions. Len had the pleasure of hosting Sean on this show back in 2019, when they covered the early days of the company and its evolving business ideas. Today, they pick up with everything the company has been up to since. In this episode, Sean shares how he lands strategic partnerships and acquisitions to enhance the data iSpot can provide, his approach to new opportunities in ad-supported streaming and how and why it's so different from traditional TV network advertising, and the strategy and timing for competing against large incumbents and startups through deep product differentiation and tight alignment with strategic allies. You won't want to miss this one.

This transcript was automatically generated and edited for clarity.

Len: I would say of all the things that have happened over the last four years since you and I last spoke, the rise of ad-supported streaming has taken a lot of people by surprise. I spoke to one of our investors at our annual meeting four or five years ago, and we were talking about iSpot, and this person’s observation was, “Well cable is dead, right? It’s going to be gone in a couple of years.” That was one observation. And then the other was, “No one likes advertising, so why does that matter?” And it turns out both are wrong.

I think $83 billion was spent last year in the United States on advertising through a television device of some kind. The most interesting thing is cable didn’t die. $62 billion of that advertising was over cable, but $21 billion of that was streaming. I’d love for you to talk about streaming in general, but also the surprise that for a lot of people, advertising over streaming is now a really big business and is projected to become an even bigger business despite the fact that some people thought that only subscription would be the way people pay for media.

Sean: We obsess over streaming. Streaming is the future. The fact that advertising has grown so much in streaming has not surprised us. We knew that it was coming, and we've been investing in streaming measurement and technologies for well over five years. One of the reasons we invested in smart TV data was because we foresaw streaming taking off. That's what allows us to measure both streaming and linear in a consistent manner. There are a couple of key trends that are important to streaming and important to marketers. First of all, audience viewing on streaming has been big for a long time, right? It's just that advertising has lagged behind it, mostly because of Netflix. Because Netflix has been a subscription service, and there's so much viewership on Netflix, viewership has been shifting to streaming in a decisive way for a while now.

Today, it’s probably almost 50-50 in terms of viewership. Now, the ads always follow is one thing everyone should know. It might take longer than some people think, but the ads always follow. That’s simply what we’re seeing now. We’re seeing all of the subscription services, including Netflix going to AVOD or advertising video on demand. They’re all bringing advertising in a bigger way, and it makes a lot of sense. Consumers have been conditioned to advertising on TV. Advertising on a television side is part of the experience. There’s really not that much disdain from it.

Advertising has been growing rapidly on streaming. I would say it's more like 70% linear or terrestrial cable — broadcast is how we should refer to it — and 30% streaming. That trend is going to continue. What's going to tip it is sports. One thing everyone should know is that sports drives a massive chunk of advertising dollars on television. What happens with sports is what's going to dictate the shift, and this is the dynamic to watch right now. Sports is still largely controlled today by broadcast and cable, by the traditional media companies, but that's starting to change. You've seen Amazon's Thursday Night Football is now squarely on streaming. It's a great experience.

It’s something that is going to continue to trend towards streaming, and one of the reasons is the companies that control streaming, the Amazons of the world, the Googles of the world, and the Netflix of the world simply have more capital than the traditional media companies. You’re going to start seeing more and more of the streaming-first companies investing in sports. It’s sports that will be the tipping point of when the advertising dollars shift to be streaming led.

The biggest challenge for marketers today is how to shift their dollars across traditional broadcast, cable, streaming, and social: how much to invest in TikTok, how much to invest in YouTube and things like MrBeast and other content distribution on social video. That is the number one challenge for marketers today, and it's why we're seeing so much success and growth in our business helping marketers understand how to intelligently shift those dollars, validate, verify, and tie it all to outcomes. That's the bottom line. That's the biggest challenge today.

Len: One of the things I perceive, and maybe it’s because I have four kids who generally do a lot more streaming than watching old-fashioned linear cable television like their dad, is the audience could be different. I’m curious how that affects advertisers. I don’t just think of streaming as a different way of delivering the exact same content. The nature of the audience seems to be different, and generally younger. The opportunity for interactivity is different too, right? With linear cable, if I’m watching on a television that has no interactivity, I could watch and consume the content and then later on interact with it. In a world where I’m watching over a digital device, my opportunity to respond to the ad in real time or close to real time is new. Maybe that’s a few years away. I’m curious how you think of the advertising opportunity as different, given that the audience may be different and given that the level of interactivity may be different and maybe there are other things that are different too.

Sean: That's an important question. There's a lot to consider here. Generally speaking, you are correct that older audiences watch on cable and younger audiences watch on streaming. However, that's not always the case, and it's much more complicated than that. Let's take the NBA, for example. Where are these games broadcast? They're on cable. When you think about the NBA, that's younger audiences. It also skews toward diverse, African-American audiences, and these are audiences that are very, very important to advertisers. The dollars will keep flowing there. That's cable. That goes back to the sports trend that I was talking about, and sports in general. However, with streaming, you can now target the audience on a one-to-one basis. You can't do that with broadcast and cable. So that is very, very attractive to advertisers as well.

You have to weigh all of these things right now. You have to look at everything holistically, in a completely deduplicated manner. That's where we come in. We help marketers see it all across whatever channels they're advertising on. There's another thing to consider. Younger audiences are spending a lot more time with short-form video like YouTube. And there the question becomes, "Well, how much attention are you going to pay?" A 30-second ad is very intrusive when you're watching a two-minute clip. But when you're watching an NBA game that lasts multiple hours, that advertisement is not intrusive, especially when you've got timeouts and breaks and whatever else is going on with the game.

It's become an interesting world where the strategy today for marketers is not to abandon any one thing, but to achieve the right balance and understand the impact and effectiveness. The world of advertising is way more complicated today than it was 10 years ago, when you only had cable and broadcast. Today, it's very, very complicated.

This question that you just raised, this is our company in a nutshell. We obsess over these things. We obsess over how to deliver the right message to the right audience on the right medium. There are a lot of mediums in play here, and you want to deliver the right outcome.

Len: I remember growing up when people would have one team that they would watch, but that's completely changed because there are so many channels to watch. Cable, I don't think, is going away anytime soon given the dominance it still has across sports and a lot of other types of programming, but sports in particular. Another area that is definitely new for iSpot over the last four or five years is helping advertisers, networks, and agencies plan their advertising upfront. iSpot started by helping brands and advertisers measure the effectiveness of their ads after the ads ran. You've also started to take the data you've built over 9 or 10 years and help advertisers use that data and that audience information to figure out what ads they should be buying in the first place to get better efficiency, return, and economics.

Part of that has led to maybe more of a frenemy relationship with Nielsen, and I think Nielsen will be around for a long time. You all have been very thoughtful about how you’ve entered this newer space of planning. I’m curious how you think about that because as a startup CEO — now iSpot’s not a startup — but you had to be very thoughtful about how to enter that newer space and think about who the other companies were in that market, Nielsen being one of them. You’ve navigated it and leveraged streaming. I’m curious how you’ve thought about entering this new market of using your product to help people plan for the ads they want to purchase up front.

Sean: All of our data at iSpot is delivered in real time, which has brought a lot of optimization capabilities. Before iSpot came along, a lot of our clients would get a report a quarter or two after the fact on how their TV campaign did. iSpot brought real-time data to the table for the first time, which means you could now optimize your media in real time. We started moving upstream, if you will, quite a while ago, from measurement to optimization, and now into planning. It's really important to make optimizations and decisions even before you buy, and that's going to become bigger and bigger as the shift moves to streaming and programmatic buying of advertising on television. In fact, part of the deal that we announced with Roku includes optimization before the buy, where you can become very predictive and plan on that data to deliver better outcomes before you make the buy.

We have a relationship with The Trade Desk, where The Trade Desk uses our data. It started with measurement, and now they use it for planning before the buy. We see this notion of planning, also known as pre-bid in the programmatic digital world, becoming more important as we continue to move our capabilities upstream. We're also doing a lot of predictive analytics on the creative side, where we can predict with our measurement how well a creative will do before it even goes out into the market. So yes, there's this movement from measurement to optimization to planning, and we're in the midst of that move. That becomes a lot more important in the streaming world, whereas in the traditional linear world you would look at the data once a year to create your plan and make upfront commitments.

Where Nielsen has really been strong is that their data has been used upfront in TV to create plans that sort of don't move; you set them and forget them. In this new world we've been moving into, it's much more real-time optimization, and it's driven before the buy is even made.

Len: Do you think that gets pushed a little bit by the advertiser’s experience with what I think of internet advertising? With Google, Facebook, TikTok, and other platforms that have been around now for a couple of decades, you could make a change on your ad buy within minutes. You could know that this search word is or is not working within minutes or this banner ad is or is not working within minutes. And then, you could automatically have the system self-tune and start purchasing more of the ads that work, and less of the ones that don’t. You close the loop and figure out if consumers are taking action. My sense is that advertisers feel like, “Hey, I want that level of accountability that I get on search. I want that eventually for video advertising because it’s a lot of money that I’m spending here and I want it to work.” Do you feel like that influence happens or am I overstating it?

Sean: Oh, 100%. You've described exactly why streaming is growing faster than any other advertising medium today. In fact, more dollars will go into streaming in the future. The reason is you have the video experience on the big screen. There's still no better, more effective medium than that traditional 30-second spot on a big screen. Now, combine that with the ability to target it like in digital and then measure it like you do in digital. No question, that is one of the major driving forces behind why streaming is growing, and it's our measurement that's helping prove the effectiveness of that medium and closing the loop. Also, deduping it with traditional linear to understand how much incremental audience you are getting as you shift to streaming. These are some of the technologies that we've also brought to traditional linear.

In traditional linear, you can’t target it as precisely as you can in streaming. Think about streaming as sort of this holy grail of having the most effective and compelling medium in video on a big screen, plus the targeting and the ability to do the closed loop of digital. Quite honestly, there’s more demand right now for streaming than supply, and that’s why the CPMs on streaming are so high right now. It’s just there isn’t enough supply. As the industry shifts more to advertising-sponsored streaming, the supply will start catching up with the demand. This is why it’s so exciting. It’s such an exciting world right now in streaming for advertisers.

Len: So Sean, I think you all have done a masterful job of navigating your path to creating value in the advertising measurement and planning space. And you've done that in a way where you've managed not to kick sand in the face of the big gorilla in the market, which is Nielsen. Partly, that's because you have a different business model and a different product set, and now with streaming maybe a different opportunity. It would be interesting to hear your thoughts on how you've chosen to sequence your entry into different parts of the market, because I think you've done it in a way that's been strategic and hasn't drawn a big competitor to come after you.

Sean: We've built our business by focusing on advertisers and marketers. For those who don't know, Nielsen is the largest measurement company in the US, but Nielsen's revenue is almost entirely generated by the TV networks, or the publishers. The TV networks pay Nielsen to be the arbitrator, or what's known as the currency in our space, where they measure the programs and thereby the ads. That data sets the fees for the ads as they're traded between the buyers and the sellers, primarily between agencies and TV networks. That's Nielsen's entire focus. What we noticed coming into the space is that the advertiser's interests were not represented, so we set out to build measurement and software solutions that are specifically meant to help advertisers assess the effectiveness of their advertising. It turned out that that section of the market, which, by the way, funds the entire ecosystem and is the most important part of it, was not being served directly. Their needs were not being served, and that's the reason iSpot exists and why we grew so quickly.

Today, we have very powerful software across creative, audience, and outcome that serves that sector of the market. The other thing that's happening is that as the marketplace shifts to streaming, there is an opportunity to be the arbitrator, if you will, across the streaming landscape. That's a big area where iSpot is investing, and how things worked in traditional linear is not how things are going to work in streaming. For example, in traditional linear, you measure the audience for the program, and it's always the same ads that travel with the same programs.

By measuring the programs, you can basically create a currency for the ads. That doesn't work in streaming, because in streaming the ads are targeted. In linear, 90% of the money being spent on measurement was on arbitration, or the currency, and only 10% was being spent on optimization, planning, verification, and outcomes. You're going to see that change in streaming. It might be the opposite: 90% of the value is going to sit in all of these things that help optimize, place, verify, and measure the outcome. That's how we see the measurement landscape shifting as we get into a streaming-led world.

Len: Another area that has become a big focus for you all is agencies and networks. When iSpot got started, I think you were selling primarily to the advertisers themselves, but two other important constituencies are the agencies that those advertisers work with, number one, and number two, the networks. The networks, and especially the streaming networks, are increasingly using iSpot as a way to prove to their advertising customers that their ads are effective. I'm curious how you've gone after agencies and networks as part of your ecosystem and as allies over the last several years, because that feels like an important new initiative that you're driving aggressively.

Sean: Our core business is selling enterprise software solutions to brands and advertisers. We especially do well with larger advertisers, so that continues to account for the bulk of our revenue; I'd say probably 70% of our revenue is that. We also have a very large and fast-growing business with TV networks and publishers. We work with every TV network and every publisher to help them measure their audiences and then prove to advertisers the effectiveness and the outcome of their advertising on their platform. So that is a quickly growing business for us, especially on the emerging streaming side.

We have for a long time worked with agencies because they are an extension of the brand, but now we're getting a lot deeper with them. We're providing solutions that are specific to agencies, and we're providing business models specific to agencies, including the ability to buy our measurement services on a CPM basis. We view this as an ecosystem play. The fastest-growing piece of our business is what we call the emerging business, which is a lot of capabilities in streaming, and those capabilities are being adopted by advertisers, by publishers, and by agencies at a fairly rapid pace.

Len: One topic where I probably will have to eat some crow, because I'm historically, as Sean knows, pretty cynical, is acquisitions. I have not seen that many technology acquisitions work well. I've been a pretty outspoken person, at least on a few board calls, where I've poo-pooed the idea of acquiring companies. I've been wrong in the case of iSpot in at least two cases that I can think of. Sean, you guys successfully acquired Ace Metrix, and you also acquired 605. There are probably a couple of others that maybe were a little smaller, but you all have done a really good job.

It’d be great for you to explain those two companies, why you acquired them, how they fit into your strategy, and what you’ve done to successfully integrate them into iSpot. A lot of founders struggle with this same question. They feel like it’s a good idea to potentially acquire other companies, but getting the acquisitions right and then executing well on them afterward is a lot easier said than done. And you guys have done it well.

Sean: Buying companies is a lot easier than integrating the companies and making an acquisition successful. It starts with very careful vetting and thinking about an acquisition. For me, it always starts with our core mission to help advertisers measure the effectiveness of their marketing, specifically in video. Again, like I said earlier, it's very, very simple. The first question becomes: does this help advertisers measure the effectiveness of their investments? Is it synergistic with what we do? Is the business model synergistic? Is the culture of the company synergistic? Is this something that we believe we can take to our clients, and will our clients buy the service and find it valuable? In the case of Ace Metrix, which checked all the boxes, they were already working with a lot of brands, a lot of our own customers, and our own customers raved about the capability.

They had a great product. It was creative, the area of measuring creative, which we didn't have at the time. We had the audience and outcome pieces, and we said, "Wow, wouldn't it be powerful if we could go to marketers and bring them the full end-to-end solution of what they really care about: the creative, the audience, and the outcome?" We believed that we could do that in the case of Ace. So, long story short, Ace Metrix has been a runaway success for us. We've nearly tripled the revenue of the company in the two and a half years or so since we acquired them.

More recently, we acquired 605. What we really liked about 605 was that they had a very talented team that we thought could be very additive to us. They were good at outcome measurement for the KPIs that iSpot didn't measure. iSpot was good at measuring outcomes to websites and digital. 605 is very good at measuring outcomes to sales: offline sales, CPG sales, auto sales, foot traffic, credit card transaction data. It really was very accretive and synergistic, and it's important for what marketers want in terms of measuring marketing effectiveness. Not to mention that they also had the Charter data. We now have Charter, LG, Vizio, and Roku. We have over 100 million devices that we can measure off of. 605 is a more recent acquisition, so I think the jury's still out on our execution and integration. Again, it's much easier to buy the company than it is to integrate it. If most of the effort is going into the actual acquisition and not enough into the integration part, that's a problem.

Len: You guys have done a great job, and there are a lot of good lessons in there for other founders. One last category of questions is around innovation in general. We've known each other a long time, and one of the things I've been most impressed with by your team, and you in particular, is that you out-innovate yourselves. I don't think I've been in a board meeting where I've heard you say, "Well, company X is building this, and we need to build this too because that's our competitor, and so we're going to build it."

It's a challenge for a CEO to listen to customers but not look in the customer rearview mirror as it relates to what to build next, because you have to have the vision for where markets, platforms, ecosystems, and consumers are going. How do you, at a process level, ideate? You've cooked up a lot of really cool ideas over the 11 years I've been involved, and I know there are more in the works. I'm curious what process you and your team have used to innovate, because it's impressive and maybe there are things to be learned for other founders.

Sean: It does start with customers — not as much with prospects. Prospects will tell you all sorts of things: "I would buy it if you had X, Y, and Z." So listening to customers is fundamental, but that's the easier part. That's basic blocking and tackling: having an organization that listens to customers and collects feedback by meeting often and discussing it. The harder part is understanding where the trends are headed. For me, I do a lot of listening and talking to people in the industry about trends in general. The problem is you sort of have to become obsessive about it. I obsessively think about what I hear, but then I also have to block out the noise. There is so much noise, especially from the media.

A lot of what the media talks about is drama and hype. You also have to obsessively think about where it's really headed. Who do you trust? Who do you listen to, and what do you not listen to? That is the trick. It's so hard. At some point, you have to think about where your own core competency is and not get drawn into directions where the company doesn't have core competencies.

This is the tricky part. Even when we started iSpot, it was all over the media: "TV is dead, TV is dead." One would ask, "Well, why are you starting a company in TV? TV is dead." But you have to obsessively think through that. What does it mean that TV is dead? Are people going to rip this device off of their wall? No, I don't believe that. I don't believe anyone's going to do that. Who doesn't love a big screen on their wall? I mean, come on.

I spent a lot of my time listening, thinking, and doing a lot of obsessing. Sometimes, it takes courage to ignore noise and things that people are saying when you know deep down you don’t believe it. Here’s the problem. There isn’t a playbook for this one. This is what makes this job harder. You can’t pull up a playbook. If I was running a sales organization, I would have a playbook, but making decisions about the future is a bit trickier.

Len: One last question, kind of a crystal ball question. If you and I are here talking in four and a half more years, I'm just curious, if you had to bet, what types of things will you and I be talking about then?

Sean: One thing that's probably not going to change over time is the investment in advertising. We know advertising works and is effective, but advertising can become a lot more effective than it is today. We were talking earlier about this area of planning and moving upstream; a lot of the planning is going to move to being AI-driven, both on the creative side and on the media side, where you're going to be able to auto-generate storyboards, imagery, and creative, have a lot more variation, and then also do the testing for those in a very quick, AI-driven fashion. These are a lot of the technologies that iSpot is working on, has available, and is deploying. And the same on the media side. You're going to be able to predict, before you place an ad, what kind of impact it's going to have, especially in the streaming world where you can target it one-to-one.

Imagine a creative: instead of one variation, we now have 20 variations that are cut by different demographics or interests, and maybe there's now the ability to geographically insert something that's more relevant. There are just going to be a lot more dynamic elements in both the creative and the placement of the media. As effective as advertising is overall, it's way under-optimized. There's a lot of upside there today and a huge opportunity in advertising for personalization, targeting, and optimization. It's all going to be driven by real-time technology and machine learning and AI.

Len: Super exciting. What a pleasure. It’s always great, Sean, to get to see your view of the future and remind me of the best stories from the past. It’s been a real pleasure for Madrona to get to be a partner of yours along this journey. Thank you for taking the time, and I’ll let you get back to the business of running a fast-growing company. But thank you so much, Sean, for spending time with us.

Sean: Thank you, Len. And for the viewers, Len and I talk more often than every three to four years, in case somebody might get that impression. We talk quite often. And thank you, Len. You and Madrona have been awesome partners, and continue to be, and it's an exciting time. So, thank you.

 

Building Predibase: AI, Open-Source, and Commercialization with Dev Rishi and Travis Addair

Listen to this Predibase episode on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Partner Vivek Ramaswami hosts Dev Rishi and Travis Addair, co-founders of 2023 IA40 Rising Star Predibase. Predibase is a platform for developers to easily fine-tune and serve any open-source model. They recently released LoRA Land, a collection of 25 highly performant open-source fine-tuned models, and launched the open-source LoRA eXchange, or LoRAX, which allows users to pack hundreds of task-specific models onto a single GPU.

In this episode, Dev and Travis share their perspective on what it takes to commercialize open-source projects, how to build a product your customers actually want, and how to differentiate in a very noisy AI infrastructure market. This is a must-listen for anyone building in AI.

This transcript was automatically generated and edited for clarity.

Vivek: To kick us off, maybe we can just start off by telling us a little bit about the founding story at Predibase. Dev, you were a PM at Google in the Bay for about five years. Travis, you were a senior software engineer at Uber in Seattle for about four years. How did you meet and co-found Predibase in 2021?

Travis: The company originally started out of Uber. Our co-founder, Piero, was working at the time as an AI researcher on Uber's AI team, on a project called Ludwig. He and I worked together. He came to me in March of 2020 saying that he thought there was an interesting opportunity to productize a lot of the ideas from Ludwig around making deep learning accessible, and to think about bringing that to other industries, other organizations, and other personas beyond data scientists, folks like analysts and engineers as well.

It started getting a lot more serious in the summer of 2020. We started saying, "We definitely need someone who's not just a technical engineer. Let's try to find someone who knows how to build good products and can help think about some of the go-to-market issues." That's how we ultimately came to meet Dev. Every other conversation I'd had up to that point was very bidirectional, like, "What are you guys doing? What do you think the opportunity is?" It was a lot of selling on our side. Dev came in with a presentation like, "Here's how I think you can turn this into a business." I was like, "Okay, this guy's pretty serious."

Dev: On my side, I had spent time previous to Predibase as PM at Google, and worked on a number of different teams. I saw how Google did machine learning internally. For a while I was jaded on this idea of making deep learning and machine learning a lot easier. I’d seen it and we had tried to do it at Google, and it hadn’t really gone as well as we wanted. I originally got in touch with Piero and Travis through a mutual contact because I got introduced as a skeptic on the space. I think the thing that I had said was, “I’ve seen a lot of people die on this hill of trying to make machine learning a lot more accessible.”

I remember the first meeting I had with Piero and Travis. I didn't fully understand what the vision was at first, but then I tried the projects they had open sourced, and they walked me through a presentation of how they were thinking about it. That's when it clicked, and I knew there was something here that could be a more differentiated approach. We all got to know each other over the summer of 2020, and we officially started the company at the beginning of 2021.

Vivek: It's funny how different the deep learning and machine learning space was three or four years ago compared to where we are today. It's almost hard for people to think, "Oh, people were jaded about it," when you see all of the optimism today. I'm curious: what was the aha moment when things really clicked? You mentioned, Travis, that the original vision was around Ludwig and these open-source projects. Often, there's a lot of difficulty in going from a cool open-source project to "Is this a business?" So what was that aha moment? When did you start to see things really click?

Travis: What convinced me about the idea of making deep learning accessible was an extrapolation of what I perceived to be a trend happening in the industry, on two different fronts. One was the consolidation of data infrastructure, from data swamps, data lakes, and unstructured data toward more canonical systems of record. Data was getting better and more organized. The other was on the modeling side: model architectures were consolidating as well, toward transformer-based architectures. There were only a handful of models that people were using in production and fine-tuning, as opposed to having to build novel model architectures from scratch with low-level tools like PyTorch.

My bet was that eventually these things were going to converge, and the idea of training or fine-tuning models on proprietary data would become a one-click type operation in a lot of cases, where you're not having to think very critically about how many layers the model has or do a lot of intense manual data engineering. It was going to be much more about the business use case, the level of the problem that someone like an analyst or an engineer would work at, as opposed to a data scientist.

I definitely believe this has been proven out to some extent over time. Of course, I'm sure we'll get into it, but with large language models, it happened in a bit more of a lightning flash as opposed to a slow burn. But that trend is definitely continuing.

Vivek: It's interesting: Predibase was founded after you all started talking in the summer of 2020 and officially formed the company. When we think about the start of this most recent AI moment or AI era, it's really when ChatGPT was released at the end of 2022, and everyone started talking about large language models and AI. It's so interesting talking to all these companies that were founded before that moment. How has the original vision changed, or not changed, since ChatGPT was released, given that you started in the pre-LLM era? What do you think has and hasn't changed at Predibase in terms of how you're thinking about the future and the vision for the company?

Dev: It's funny because I think a cheeky answer to this question could be that our vision hasn't necessarily changed that much, but our tactics have changed entirely. Our vision initially was a platform for anyone to be able to start building deep learning models. We initially started with data analysts as an audience in 2021. We had an entire interface built around a SQL-like engine that allowed them to use deep learning models immediately. That's where the name Predibase came from. Predibase is short for predictive database, and the idea was: let's take the same types of constructs that we've built for database systems and bring them to deep learning and machine learning.

At the end of that year, we found that analysts weren't our niche. The people who wanted to use us were more like developers and engineers, the types of people who say, "Hey, just give me some good technical documentation and I'll figure this out." That's where we started to shift from an audience standpoint. Our vision became: how do we make deep learning accessible to this type of audience? When we started pitching deep learning to a lot of organizations in 2021, we came up with the value propositions of deep learning as, "Hey, you can start to work with unstructured data like text and images. You get better performance at scale. And oh yeah, you can use some of these models pre-trained, and that way you don't need as much data to be able to get your initial results."

The biggest change is that the last value proposition, which was third on our list, has become the most important thing that people care about. If I think about what was popular in our platform in 2022, people's eyes sort of glazed over when we said deep learning is cool, but what they liked was this dropdown menu where you could select different text models, and one of them was a heavyweight model called BERT. You could start to use it pre-trained. They didn't need as much data to get started. They loved the idea that they could actually fine-tune it inside of the platform. At the time, it was just one feature among many things we had done on the platform, among many other value props.

In 2023, when large language models arrived in a large way, we started to think about what we wanted our platform to be. One of our very first takes, and maybe my very first take specifically, was that LLMs are just another dropdown item in the menu. You had BERT and all these other deep learning models in the menu, and now we're going to add Llama, as an example. We needed to recognize that the market had changed how it thought about machine learning. It was no longer thinking about training models first and getting results after. It was thinking about prompting, fine-tuning, and then re-prompting a fine-tuned model. Our tactics significantly shifted. We did what we considered a product pivot in 2023 to better support large language models. Funnily enough, it's still in service of the vision of how we make deep learning accessible for developers.

Vivek: It’s almost as if the vision has stayed the same, but the market has come to you in some ways like, “Hey, we were talking about deep learning and machine learning before it was “cool” in the current context.” Large language models have opened up a whole new sphere of who you can market to.

Dev: We definitely got lucky there. We had to meet the market halfway. We had to make sure that we were also responsive and not trying to meet the market using our old form factors or our old tactics. The market essentially helped us figure out, "Hey, there are three value propositions you mentioned. One or two of these really matter. How do you center your offering around that?" That's been one of the most helpful things for the business. As a startup, I find one of the biggest challenges is getting people to care. How do you get anyone, from another startup to a large enterprise, to spend 45 minutes with you? One of the nice things that's happened over the last year and a half is we no longer have to explain why deep learning matters. We frame it as LLMs and being able to fine-tune them.

Vivek: You both talked about the big thing for startups, and a lot of the founders who listen to this podcast will attest to the same: you have this great idea in your head, and you see the tech maybe before anybody else does, but then there's the question of, well, why should we care? How do you go from "This is a cool open-source project" to "This is something we can commercialize, and people and businesses are willing to pay for it"?

Dev: A lot of open-source projects are very popular on GitHub, and I do think a subset of those are probably best as open-source software. They're a framework that makes something easy but doesn't necessarily need the full depth of a commercial or enterprise-ready product around it. No one knows more about the challenges of actually taking some of these open-source frameworks and running the infrastructure than Travis. He's been working directly on it, both for Predibase and with users who have tried to use it independently. Travis can talk about the challenges of translating open-source frameworks into a commercial offering, and why we thought there was a real commercial business to be built around these frameworks.

Travis: When it comes to open source, and particularly open-core models, I think the easiest argument to make is that at Uber we had a team of 50 to 100 engineers working on building infrastructure for training and serving models. The cost of that is quite significant, even for a company like Uber. For companies that don't consider this part of their core business, maybe they consider it core infrastructure but it's not differentiated IP for them, you could invest in building an entire team around it, or you could just pay a company like Predibase to help solve those challenges. With our most recent project, LoRAX, for example, there's a good open-source piece of software that can be productized and productionized and used in situations where you need high availability, low latency, and all those sorts of things. That's sort of the layer on top. We have internal software that's been running on top of Kubernetes and running across multiple data centers to optimize the availability, latency, and throughput of the system beyond what's in the open source.

That's inevitable when you're talking about something that's going to be used day in and day out, thousands of times a day: what starts off seeming like long-tail issues, like this request failed or this service was down for some period of time, becomes mission-critical at certain points. That's where there's a good opportunity to appeal to organizations that need that. There's a good synergy, I think, where they need those particular levels of service, we're able to offer it, and that's something they're willing to pay for at that point.

Vivek: It sounds like, unlike many other open-source projects where you start the project and then say, "Hey, let's see if people are willing to pay or not," this is a case where, at the very start, you knew there was a willingness to pay, given your time at Uber and having seen this at scale. Let's start with this open-source project, get people to try it, and there's definitely a willingness to pay. So it's a different angle. With many open-source projects out there, you wait for a lot of people to use it, then ask, "Hey, are people willing to pay or not?" and it's a different debate you have at that point.

Travis: Dev actually has a good analogy about the front of the kitchen, back of the kitchen when it comes to this sort of thing. I think that serving definitely is this very front-facing thing where you have to get every detail right, and those minor differences make a huge difference in terms of the overall value of what you’re offering. So yeah, I’m not sure, Dev, if you wanted to maybe speak more to that.

Dev: I have two analogies that I'm going to throw out. The first analogy I think about with open-source projects is: is there a commercially viable business around this? The way I think about it is, for us, Ludwig and LoRAX are sort of the engine, and what we're trying to do is sell the car. There are some people, maybe advanced auto manufacturers, who just want an engine, and they want to put it in their tractor or some other kind of setup. Most people want to buy the fully functioning car, something that's going to let them unlock the doors and have a steering wheel and other things along those lines, which in our world is the ability to connect to enterprise data sources, deploy into a virtual private cloud, and get observability into deployments that you don't necessarily get if you just run the open-source projects directly, and finally connect that engine to a gas line, which in our world is the underlying GPUs and cloud infrastructure that this is going to run on.

The second analogy is for how we think about the product. The other piece you always have to figure out is how much the core problem the open-source product solves really matters to that end customer, and what the visibility around it is. That's where I think about things that can be done in the back of the house of a kitchen, where maybe someone has an internal pipeline for doing something. It doesn't need to be pretty, doesn't need to be production-ready. It could be so much better if they had a commercial product built on top of the open source, but it's not necessarily mission-critical, and they're not going to lose customers and users because it doesn't work flawlessly. Think of these as especially those internal pipelines.

I think about the front of the house, which to me is taking, for example, fine-tuned models and serving them very well, things that are going to go in front of customers and user traffic. This is the part of the restaurant where you want to make sure you're serving folks really, really well. So, I'll need to figure out how to combine the car analogy and the restaurant analogy, but to me, the car analogy is how we figure out the commercial viability around the open-source projects, and the restaurant analogy is a bit of how you think about whether an open-source project is going to be important enough to justify some of that commercial viability.

Vivek: I love it. Don't be surprised if we steal both of those analogies for some of our own companies, because it's a really important distinction: selling the car versus the engine, and the front of the house versus the back of the house. At the end of the day, all of these things roll up to what's most important for customers.

One of the things that I love when we talk about the front of the house, or even what people see, is that on your website you have a great tagline: "Bigger isn't always better." With the explosion of GPT and everything we've been hearing, we've been hearing about these models with billions of parameters. For a while, it was bigger is better: we've got to create the best model, and how many parameters is GPT-5 going to have? In your view, why is bigger not always better? Specifically related to models and the customers you serve, where do you find that bigger is not always better?

Dev: My favorite customer quote is, "Generalized intelligence is great, but I don't need my point-of-sale system to recite French poetry." I think that customers have this intuition that bigger isn't better, and we don't always even have to convince them that much. They sort of hate the idea that they're paying for this general-purpose, high-capacity model to do something rote. I want it to classify my calls and tell me whether a follow-up was requested or not. It's a very common type of task that people might do using GPT-4. Today, they have a model that can do everything from that to French poetry to writing code. There's this intuition that when you're using a large model like that, you're paying for all that excess capacity, both in literal dollars and in latency, reliability, and deferred ownership.

When I talk to customers, they're very enamored with this idea of smaller task-specific models that they can own and deploy, that are right-sized for their task. What they need is a good solution for something very, very narrow. The question customers have is, well, can those small models do as well as the big models? It's very fair: if you've played around with some of these open-source models, especially some of the base model versions, you have that intuition that they don't do as well as the big models as soon as you start to prompt them. That's where we've spent a lot of our time investing in research to figure out what actually allows a small model to punch above its weight and be as good as a large model.

To us, what we've unlocked might not be a massive secret, but it's been around data and fine-tuning. What we found is that if you fine-tune a much smaller model, a seven-billion-parameter model or a two-billion-parameter model, probably one or two orders of magnitude smaller than some of the bigger models people are using, you can get to parity with or even outperform the larger models, and you can do it in a much more cost-effective way and a lot faster, so you don't have to wait for that spinner that you often see with some of these larger models.
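
To make that concrete, here is a minimal sketch of LoRA fine-tuning a small open-source model with Hugging Face transformers, datasets, and peft. The base model name, dataset file, and hyperparameters below are illustrative assumptions, not Predibase's actual pipeline.

```python
# Minimal LoRA fine-tuning sketch (illustrative; not Predibase's pipeline).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"            # assumed small base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters instead of updating all 7B parameters.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()            # typically well under 1% of the weights

# Hypothetical task-specific dataset with a "text" column.
data = load_dataset("json", data_files="call_classification.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/adapter")          # saves only the small LoRA adapter
```

Because only the low-rank adapter weights are trained and saved, the resulting artifact is small, which is what makes it cheap to store and, as Travis describes later, to serve many adapters side by side.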

Travis: A big aspect of this that every organization should think about is the type of tasks they're trying to do with the model. If you're primarily interested in very open-ended tasks, like a chat application where you want to be able to ask anything from "generate French poetry" to "solve this math problem," you do need a lot of capacity. That's why ChatGPT is as successful as it is: when a user comes in and uses it, you don't know a priori what type of question the user is going to have in mind. You do need something very general purpose.

When you productionize something behind an API, it's just an endpoint: you're calling it, "Classify this document," and you're going to call that over and over again thousands of times. You don't need all those extra parameters at the baseline. Past that point, the capacity of the model you need depends on how complex your task is. The less capacity you need, the smaller the model you can use, and the lower the latency. People should be evaluating these things on a task-by-task basis.

Vivek: I feel like we are now at the moment in time where we are seeing the balance swing back to, "I might not need this really, really massive model with a trillion parameters and all of that. I need something that works for me, that works for my use case." Dev, you mentioned customers there and what your customers have been saying to you. Take us to the first customer. How did your first customer come through the door? How did you land them?

Dev: I wish there was a very repeatable lesson for founders here, but sometimes your first customer is a little bit of luck mixed with a little bit of internal network and elbow grease. I remember we started the company in March 2021. I had no idea how we were supposed to get our first customer. What people told us is the common advice, and I think it's correct: your first few customers are probably in your network. Looking at my network initially, I didn't really know where to start digging. We ended up getting an inbound from a large healthcare company based here in the US. They were curious to know more; they had seen Ludwig out there. They weren't even active users at the time, but they had seen Ludwig and they'd seen it solve a very specific use case that Uber had published a case study around, which was customer service automation. They wanted to know if there was something that could be applied with Ludwig inside their organization.

I'll never forget that the very first customer meeting we had was with this organization. We started in March; this customer meeting was in April. It was an hour-long meeting where we walked them through what Ludwig had been for a few minutes, but also what our vision was and what we were building with Predibase. The meeting ended with, "If you guys have a design partner signup sheet, just put our name on that list." That was the end of the very first customer meeting we had. It came in because of that open-source credibility. I walked away thinking, "Are they all that easy?" They're not, just as a very quick recap, but the first one for us did come in a way through the network, and really from organic open-source inbound.

From there, what we've found helpful, and the repeatable lesson, is content that attacks a use case somebody cares about and that lets them come to us. That's something we still see as a pretty effective channel now as we're landing our next sets of customers as well.

Vivek: See, folks, it's just that easy: just start the company and a month later someone's going to ping you. Having that channel, as you mentioned, is one of the great benefits of open source: people can start playing with it. Someone may find there's a lot of intrinsic value and say, "Hey, how do we go from where we are today to doing even more with this?"

Zooming out a little bit, to the outside observer, at least today compared to a few years ago, the AI infrastructure space has become very, very crowded. It seems like there are a lot of infra companies building at the inference layer, at the training layer, and doing things around fine-tuning. Often, to the outside observer and even sometimes to the inside observer, it's hard to tell what's real, what's working, what the differences are, and whether we need all these products.

In some ways, it’s really healthy because it gives people a lot of options, and I think when we’re early in the AI era as we are right now, you need a lot of these options, and there’s a lot of space to build. How do you both think about it? One, there’s a day-to-day of just maybe there’s hand-to-hand combat against some product, some more than others maybe, and then there’s just the long term of how do we resonate and stay above the noise and build for the long term.

Dev: I think this market is extremely competitive and there is a lot of noise that gets introduced into the system. In terms of staying above the noise: especially when you don't have that hour with a customer, you need to build a brand where people are going to look at you for maybe a few seconds or a minute and make an assessment of, "Is this worth my time?" The only way to stand out as an organization that we've seen work is to do work that advances the ecosystem yourself.

What I'm saying is I think you need to do something that can be a narrow slice but a somewhat novel or differentiated take. There are two ways we've thought about doing this. The first is that people have always liked this idea that small, fine-tuned, task-specific models will be able to dominate these larger models. I spoke with a customer who said something that stuck with me; he said, "I want to believe that these small task-specific models actually will be the way my organization goes, and I want to use open source, but I just don't know if it actually works."

The world out there today is a lot of anecdotal experiments and memes on Twitter and elsewhere. One of the first things we did was to start benchmarking and put out the results. We put out a launch in February called LoRA Land, a mix of La La Land and a play on LoRA fine-tuning, which is how we did the process, where we took 29 datasets and fine-tuned Mistral-7B against those datasets. We initially wanted to compare how much fine-tuning helps against the base model. What we actually found was that fine-tuned Mistral-7B is at parity with or, in many cases, actually outperforms GPT-4, even when we do some prompt engineering and try to find the prompts that work best for both of them.
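
The mechanics of a benchmark like that reduce to a simple loop: run each model over the same labeled test set and compare scores. The toy harness below shows the shape of that comparison; the predictors are trivial stand-ins, not the LoRA Land models or GPT-4.

```python
# Toy benchmarking harness: same test set, multiple predictors, one leaderboard.
from typing import Callable, List, Tuple

TestSet = List[Tuple[str, str]]  # (prompt, expected label)

def accuracy(predict: Callable[[str], str], test_set: TestSet) -> float:
    hits = sum(1 for prompt, label in test_set if predict(prompt).strip() == label)
    return hits / len(test_set)

# Tiny illustrative task: did the caller request a follow-up?
test_set: TestSet = [
    ("Please call me back tomorrow about my order.", "follow_up"),
    ("Thanks, that answered everything I needed.", "no_follow_up"),
]

# In a real benchmark these would wrap the base model, the fine-tuned adapter,
# and a commercial API; here they are trivial stand-ins.
predictors = {
    "keyword-baseline": lambda p: "follow_up" if "call me back" in p else "no_follow_up",
    "always-follow-up": lambda p: "follow_up",
}

for name, fn in sorted(predictors.items(), key=lambda kv: -accuracy(kv[1], test_set)):
    print(f"{name:20s} accuracy={accuracy(fn, test_set):.2f}")
```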

That became a moment where we went semi-viral. We were at the top of Reddit for a little while. We had a partnership with Hugging Face and Mistral to re-share it; Yann at Meta also, I think, re-shared it. It was a way for us to start to put some data into the industry. We also released all these models as open source. We even built a little interactive playground where people could play around with these models directly and see firsthand what the model performance would actually look like.

From there, we've scaled this out 10 times. We've not only now trained 27 models, we've trained over 270, because we stopped benchmarking just Mistral. People would say, "What about Llama 2? What about Microsoft's Phi?" We've added a number of different models, Google's Gemma among them, to this list, and we've started to build out our own internal benchmarking to understand what the fine-tuning leaderboard looks like. We've put this content out there, we've put these models out there, and there's going to be more on that very soon. That was one way that we thought about advancing the ecosystem.

The second way, I would say, is honestly building novel frameworks that didn't exist before. The best example of this is LoRAX. I'll let Travis speak to LoRAX as the lead author and creator of the framework, but one of the things that made it very popular was that we attacked a problem in a way that no one else had been thinking about, and that really helped us cut above the noise.

Travis: To Dev's point, attacking a narrow slice of the market is the only way I've found to stay above the noise. The reality is, we talked about how, pre-LLMs, so much of our focus was on getting people to care, or even to understand what the value proposition was, and now everyone cares. Therefore, there are tons of people competing for the market's attention, and many of them are much better capitalized than we are. We're talking about companies that have hundreds, or in some cases thousands, of employees working on this stuff.

The challenge on the product and engineering side was to think about ways we could attack something where, while they were technically capable of doing it, we could do it better than them just by sheer focus and execution. We saw an opportunity with this multi-LoRA inference in the second half of last year. It was definitely on people's radars; there were some early blog posts about it and some research happening at institutions like the University of Washington and UC Berkeley, but no one had productized something in this space. We launched LoRAX in November of 2023 and really tried to make it clear that this was a paradigm shift for organizations: instead of thinking about productionizing one general-purpose model, you could productionize hundreds or thousands of very narrow, task-specific models, while solving the essential question, which was, how do you do that at scale in a way that doesn't break the bank? The previous conventional wisdom was that every model needs a GPU at a bare minimum. If you have hundreds of models and hundreds of GPUs, you're paying tens of thousands of dollars per month.
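
The serving pattern Travis describes can be pictured as one deployed base model with a per-request adapter ID selecting the task-specific fine-tune. The snippet below sketches that idea against a hypothetical local LoRAX-style text-generation endpoint; the URL, path, field names, and adapter IDs are assumptions for illustration, not a verified API reference.

```python
# Sketch of multi-LoRA serving: one base model, adapter chosen per request.
import requests

LORAX_URL = "http://localhost:8080/generate"  # assumed local deployment

def classify(document: str, adapter_id: str) -> str:
    """Route the same GPU-resident base model through a task-specific adapter."""
    payload = {
        "inputs": f"Classify the following document:\n{document}\nLabel:",
        "parameters": {"adapter_id": adapter_id, "max_new_tokens": 8},
    }
    resp = requests.post(LORAX_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["generated_text"]  # assumed response field

# Hundreds of adapters can share the one base model; each call names the one it wants.
print(classify("Customer asked for a refund on order #123", "acme/support-intent-lora"))
print(classify("Q3 revenue grew 12% year over year", "acme/finance-topic-lora"))
```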

Breaking down that conventional wisdom was the first way we saw to attack this problem, the goal, of course, being to establish ourselves as a thought leader in that particular space of building these very task-specific models in a way that's cost-effective and scalable. LoRA Land was a way of building on top of that, saying, "Now that we have this foundational layer with LoRAX, here's what you can do with it." That demo of being able to swap between all these different adapters that were better than GPT-4 at the specific task they were doing, and to do it with sub-second latency, started to prove to people that there's actually something real here. Not to diminish research, but it wasn't just research; it's something you can be using in your organization today.

Vivek: I love reading the Predibase benchmarking reports and seeing how these various models do. There's almost a sense of fun every time a model comes out: "How's it going to do? How well does it perform relative to all these benchmarks and relative to all these other open-source models out there?" And because you guys are so close to this and have a great perspective: Llama 3 just launched. Meta is crushing the open-source model game right now, and obviously, they're spending a lot of money, resources, and time behind this. It seems to be resonating really well initially. I am curious: how do you see the open-source model world playing out over time? Does it feel like we're going to have a handful of providers of large open-source models, like Llama, Mistral, and Google, or do you think there's going to be a world where we see a long tail of developers and many different types of open-source models and providers?

Travis: My opinion on this is that it will break down a little bit by model class. These larger foundational models, particularly the ones that people are open sourcing for general applicability as opposed to building internally on proprietary data as IP in some way, are a little bit constrained by the resources of these larger organizations. It's not something that's generally accessible to a small group of enthusiasts today. That might change in the future, but right now, the two big barriers are that you need lots and lots of compute and you need lots and lots of data, and both of those things are difficult to come by.

I do think that, at least for the foreseeable future, there's going to be a requirement to lean on some of these larger organizations, the Metas of the world, to provide those foundational models. Where I do have a lot of optimism on the ability of the less well-capitalized, the GPU-poor, to make inroads is definitely on the fine-tuning side. I think that as fine-tuning matures from a research point of view and from a data-efficiency point of view, there is an opportunity to create much better tailored models for specific tasks that have general applicability and can potentially be valuable to lots of organizations beyond just the individual that made them.

Once we start seeing that become true, there's a whole new space of creators, similar to how content creators create art or videos or music, being able to create fine-tunes that attack very specific problems and have that be something people consume at scale. That's a very interesting opportunity that I believe is coming on the horizon. We've seen it to some extent in the computer vision space with diffusion models, where diffusion LoRAs for style transfer are starting to become mainstream and communities are forming around finding different LoRAs that help adjust the way the models generate images. I definitely think that moment is coming for large language models as well, where this sort of work moves beyond being restricted to individual organizations that have lots of data to something that can be even more open, transparent, and community-driven.

Vivek: Well, this leads me to my next question, which is for both of you, what do you think is over-hyped and under-hyped in AI today?

Dev: Over-hyped today is chatbots. A lot of organizations started to see value in GenAI actually post GPT-3. The very first thought that organizations had was, “I need ChatGPT for my enterprise.” We were talking to some companies about a year ago and they were like, “I need ChatGPT for my enterprise.” I was like, “Great, what does that mean to you?” They said, “Don’t know. I need to be able to ask the same ChatGPT-style questions but over my internal corpus.”

A lot of the early AI use cases have looked at how to build a chatbot that I can ask questions over documents. And one of the main reasons for that is that the interface that went viral was ChatGPT. It was the ability to do this in a consumer setting. The way I think about GenAI models is you essentially have an unlimited army of high school-trained humans that can do different workflows. If you had this kind of unlimited army of knowledge workers, is the most interesting thing you’d really apply them to just better Q&A and better chat? I struggle to think that’s the case. Instead, I think a lot of the value is going to be in these automation workflows. We’ve used the back-of-kitchen analogy, but there are also the back-of-office tasks, ones that are repetitive and mundane; how do I automate document processing? How do I automate replying to these items? Now, we’ve started to see this become more of a thing.

That’s where a lot of the future for AI is going to go. The over-hyped sentiment is that all of these organizations saying, “I want ChatGPT for my enterprise,” probably want to start thinking a lot more about, “How do I consider the fact that I have access to this large, essentially unlimited talent pool that has general-purpose knowledge and can then be fine-tuned to do a particular task very well?” I think about it like a college specialization. I can take this high school-level agent, give it my college specialization in how Predibase does customer support, for example, and put it to work. That’s the biggest delta that I see between what might be hyped in the market today and where I think a lot of the production workloads are going to be going over the next 12 to 24 months.

Travis: I liken it to saying that it’s the boring AI that’s really under-hyped right now, but that’s where most of the value’s going to come from. I think that in any hype cycle, there’s this very overly optimistic view that we’re going to get 1,000X productivity improvements because we’re going to replace every knowledge worker with an AI system of some sort. Already, the reality we’re starting to see unfold is that, oh, it’s not that easy. It’s never that easy by nature of these things and the 80/20 law: getting the little details right ends up being where the majority of the effort is spent, but those things matter.

We’re still quite a way from generalized intelligence and chat interfaces being able to do everything, like replace coders, but certainly, I think it’s very real that we can get material productivity and efficiency improvements on the order of 20% here, 50% there on very specific parts of the business. It’s going to be through these very narrow applications, to Dev’s point of saying, “We have a system here that requires humans to manually read all these documents. What if we can automate that into something that just turns it into JSON or turns it into a SQL table for them, and they can run a few quick sanity checks on it and then send it downstream to the next system?” Those are the sorts of things I can see having a very meaningful impact on the bottom line of businesses, and those are the things that are actually attainable.
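
As a concrete illustration of that kind of “boring” automation, here is a minimal sketch of turning a free-form document into a structured record that downstream systems can consume. The `llm_complete` function is a hypothetical stand-in for whichever hosted or fine-tuned model you call, and the field names are assumptions made up for the example.

```python
# Sketch: extract structured fields from a document so a human only has to
# sanity-check the result instead of reading the whole thing.
import json

REQUIRED_FIELDS = ["invoice_number", "vendor", "total_amount", "due_date"]  # assumed schema


def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to your model provider of choice."""
    raise NotImplementedError("wire this up to your hosted or fine-tuned model")


def extract_record(document_text: str) -> dict:
    prompt = (
        "Extract the following fields from the document and reply with JSON only: "
        + ", ".join(REQUIRED_FIELDS)
        + "\n\nDocument:\n"
        + document_text
    )
    raw = llm_complete(prompt)
    record = json.loads(raw)  # fails loudly if the model did not return JSON

    missing = [f for f in REQUIRED_FIELDS if f not in record]
    if missing:
        raise ValueError(f"Model response missing fields: {missing}")
    return record  # ready to insert into a SQL table or queue for human review
```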

Vivek: That’s part of the fun of the hype cycle. Now that the initial euphoria has worn off, what are the really interesting things to build from here, right? Let’s get into the nitty-gritty and figure out something beyond the initial, okay, let’s go build this chatbot on top of GPT. There’s so much more you can do with all of this. Guys, let’s end with this: you both came from iconic companies, as did Piero, and you’ve obviously seen a lot of really interesting things and been around some great people, and you’re all first-time founders. If you had to give one tip to a first-time founder, what do you think it would be? Maybe Travis, we’ll start with you, and Dev, we’ll end with you.

Travis: I think the biggest learning for me is you’re always a bit too ambitious when you start out with what you think you can do as an individual or as an organization. Oftentimes, particularly if you spent most of your time being an individual contributor, you have this idea that “If I’m a good engineer and then we hire 10 good engineers, then we’ll be 10X more productive, and we can do all these amazing things.” The reality of doing something and then doing something well enough that people want to buy it and rely on it in production every day is quite a big gap. Definitely being very narrow in terms of the type of problem that you want to tackle early on and say, “Let’s do something very highly specific that maybe doesn’t have a very big TAM in and of itself, but get that working perfectly and then start to think about where we go from there,” that’s definitely been the biggest learning for me I’d say.

Dev: One tip I would give is to be wary anytime someone suggests that you should pick something that’s strategically important to your business, like pick your go-to-market motion or pick what you want your differentiator to be. A real risk for first-time founders is that you sit on a couch and say, “Hey, what can we do that would be really interesting?” That’s an easy trap to fall into because it doesn’t take in the customer lens of what customers actually care about. A lot of first-time founders are very smart. They worked at iconic tech companies, and they’re like, “I saw this happen at, let’s say, a Google, or I saw this happen at an Uber, and so the right way for the future is X, Y, Z.”

That’s a really good starting point, but it needs to be grounded in what you’re hearing directly from customers 100% of the time. It’s very easy to pick something that you think would be interesting, cool, and differentiated that customers don’t care about. The reality is the thing that you’re suggesting might be the right idea, but it’s a different framing, a different form factor that you need, and you won’t know whether or not you’re right until somebody is willing to pay an invoice and send you money for it. Make sure you hold that as your primary objective function.

The last bit of advice I liked a lot is that everyone who’s done startups has emphasized the importance of velocity. It’s very easy to mix up velocity with, “I need to go pull 16-hour days and write a lot of code.” To me, velocity is building highly iteratively. How do you get feedback as soon as possible? The easiest way to do that is, to Travis’s point, cutting scope. One of my favorite bits of advice that I’ve gotten, which is a bit controversial, is that nothing takes longer than a week. The reason I’ve liked that advice is that it forces you to think, “How am I going to take whatever I’m building this week and make sure that, at the end of the week, I understand whether it’s actually worth doing, whether it’s putting me in the right direction, whether it’s delivering customer value?”

Both are about baking in feedback and listening to customers, but understanding that you want to optimize for that feedback cycle. That’s the only way you’ll get to where you’re going.

Vivek: Great advice from both of you. And it’s super exciting to see everything going on at Predibase, at least from the outside. I’m sure it’s even 10 times more exciting and incredible from the inside. Congrats to both of you on all the momentum and really excited to see where things go. Thanks, Dev and Travis.

 

Read AI’s David Shim on Making Meetings More Efficient With Intelligent Agents

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Managing Director Matt McIlwain hosts David Shim, founder and CEO of Read AI. These two have known each other for over two decades, having worked together at Farecast and then Placed, which David founded in 2011. They came together again in 2021, when David started Read AI, which has raised $32 million to date, including its $21 million round led by Goodwater Capital in 2024.

Read AI is in the business of productivity AI. They deliver AI solutions that make meetings, emails, and messages more valuable to teams with AI-generated summaries, transcripts, and highlights. In this episode, David and Matt examine how leveraging emerging technologies, getting engagement from multiple members of your ecosystem in the early days, and learning your way into a business model have been three themes across David’s startup journeys. They explore the challenges and success of implementing AI in various categories, the benefits of using AI model cocktails for more accurate and intelligent applications, and where they see the opportunity for founders well beyond productivity AI. It’s a great discussion you won’t want to miss.

This transcript was automatically generated and edited for clarity.

Matt: I’m just delighted to be back again with David Shim. Welcome back, David.

David: Excited to be here. It’s been a while.

Matt: It’s been a while, and you’ve been up to some good stuff. Let’s talk about Read AI. Why don’t you take us to the founding and the genesis?

The Genesis of Read AI

David: I was CEO of Foursquare for about two years, and I left the company. During that time when I was leaving, I was in a lot of meetings where I wasn’t making a lot of decisions at the end because I wanted the team to make those decisions because they were going to inherit that for the next 12 months. I took a step back, and I just started to watch and saw, hey, there are a lot of people in this meeting. What are they doing — and half of them are camera off and on mute. I made this term up called “Ghost Mode,” where there’s no sound, there’s no video — you’re there, but you’re not actually there. I started to notice that as I left Foursquare and took a couple of months off, bummed around different beaches in Mexico. I started to have a lot of video conferencing meetings, and I started to think about this problem again to say — this is not a good use of time to have this many people in a meeting.

A lot of the time, we know within the first two or three minutes if this is going to be a valuable meeting for me or not. I started to say, can I notice people who are and aren’t paying attention when I’m bored? I looked at one person’s camera and realized they had glasses on and the colors in the lenses looked very similar to what I had on my screen — and it was ESPN. So I looked closer to the glass reflection and they were on ESPN. So I was like, there’s got to be a model or machine learning model that you’re able to go in and say, can I identify when someone is paying attention using visual cues versus just audio cues? And then, if I combine those two things together, is that something that’s differentiated in market? And what I found out there was no one was doing it.

There were people doing it for different use cases, but not for video conferencing. That was a really interesting thing to dive into. Then I reached out to someone that I know who heads up Zoom, specifically Eric Yuan. I said, “Hey Eric, real quick question.” We know each other from emails every once in a while, but I was like, “Hey, I’m really interested in this concept of engagement and video conferencing meetings. Is this something that you’re working on? And if it is, is this something that I should be thinking about as well?” And his response back was, “We thought about it before Covid, it was one of the features that we were going to invest in, but with Covid, priorities changed. I think you should absolutely go into this category and into this space.” That checked the box to say, “Okay, now the platform is bought into saying this is something that is valuable, and this is something potentially that they might support.”

Matt: There are a couple of things that, if we ground them in Placed, will apply to Read AI. Let’s start with this. What were the emerging technologies at that time where it wasn’t obvious how they were going to be useful for building a next-generation company? How did you approach those emerging technologies?

Leveraging Emerging Technologies: The Journey of Placed and Read AI

David: When we started Placed in 2011, smartphones were just starting to get into the mainstream. iPhone 2 had just released, Android hadn’t been released yet, or the dev version of Android had been released. People didn’t know what exactly to do other than, hey, there’s Fruit Ninja, there’s games you can play, there’s calculators, there’s flashlights, and those are all great, but that was version one where people saw some novelty with the applications that were available, but people didn’t know what to do next with that.

Where I started with Placed was on the thesis of: can you actually get some unique data out of the smartphone that you couldn’t get out of a computer previously? It was this concept of, could you measure the physical world in the same way that you do the digital world, and is the smartphone going to be that persistent cookie? For the longest time, it wasn’t the case. I had engineers going to the iPhone and Android conferences and asking, “Can you get GPS-level location data in the background with software?” The answer for a while was, “No, that isn’t possible.” It was only around 2011 that all those things fell into place and that became possible.

Matt: You had the mobile phone that was becoming increasingly common. You had location services and GPS data. I remember one of the challenges early on was being able to ping the app, the Placed app that you had often enough that you could get that data without burning up the battery life of the phone. That was a fun adventure, huh?

David: A hundred percent. I had my cousin, I think he was 19 or 20 at the time, use 12 different phones with different models that we had bought for battery life optimization, walk around downtown Seattle, go to different places inside and outside so that we could actually infer what is the best model to last throughout the day where the user isn’t impacted, and also get enough fidelity or context to actually infer did they go into the store or did they walk by the store.

Matt: Let’s talk about Read AI. There were some interesting technological and even societal “why nows” there. This was early, but in the Covid era. Video conferencing exploded. The cloud had come a long way, along with applied machine learning and all the different kinds of models. Talk us through some of the technologies that enabled Read AI that were, at a minimum, super-powered versions of things you were using a decade before to start Placed, and maybe some others too.

David: I think the bottoms-up approach to video conferencing was something that we hadn’t seen until Covid came along, and that accelerated things. You’ll see all the research reports now that say five years of adoption was pulled forward within the first two or three quarters of Covid because everybody went remote. You started to see that people were using the best solution available to them. It wasn’t a top-down approach of you have to use this solution, like BlueJeans. A lot of us used to have the equipment for BlueJeans and video conferencing. Now it was: you’re remote, what is the easiest solution to use? It was Zoom. Then you saw Google Meet come in, and Microsoft Teams come out.

Now you saw this bottoms-up approach where people were adopting video conferencing as the default way you interact with someone. You don’t go in and say, “Hey, I’m going to fly out to see you.” Now the default is, “Hey, let’s set up a call.” And if you say, “Let’s set up a call,” it’s not a conference call number. I can’t remember the last time I had a conference call. It always defaults to video conferencing, but that adoption was pulled forward by five years. That brought up the demand in the market where people were used to saying, “I’ve now chosen a platform. I am able to use this platform on my own for essentially every single meeting that I have.”

That’s very much like the iPhone and the Android devices that came out, where, at first, people were like, okay, this is kind of interesting. Then the apps started to catch on, and people started to adopt them, and the businesses weren’t ready; the enterprises weren’t ready for the smartphone. You started seeing people install apps, connect their email, connect their calendar, and companies didn’t have policies in place. That was actually a good thing for driving adoption because there was nothing blocking it. The flood of usage was so strong, and I believe we’re seeing the same thing with Read AI and with AI in general. We’re seeing the same level of adoption as we did for smartphones. I’d even say more so from a mainstream perspective, because the cost is so minimal. It’s no longer sign a one-year or two-year subscription to get a smartphone and then sign up for a data plan. It’s sign up where it’s completely free, or maybe $15 or $20 a month.

Matt: Let’s go all the way back to Placed. You had to learn your way into the business model. I think you had a first assumption, and then you evolved because you listened to the market and you listened to the customers. Tell us about that evolution.

The Evolution of Business Models at Placed & Read AI

David: That was a hard one. Madrona was great for this one. I believed that location analytics was going to be a multi-billion dollar industry back in 2011-2012. I think, ultimately, it did become that; it just took a little bit longer. But the use case that I had was not the right use case. The use case I stuck with for about 12 months was I could pick any business in the United States and give you a trend of foot traffic across time. You could start to see trends like Black Friday, where did people go shop and where did they go shop next. Really cool data.

We got to the point where we were in the Wall Street Journal and the New York Times; it was not a problem getting press. It was also not a problem getting big C-level or VP-level meetings because they had never seen the data before. They’re like, “Oh, you could tell me where people come after they visit my competitor? Okay, this is really interesting. I want to look at that data.” Or, do you get gas first or groceries first when you go on a trip? We were able to answer those questions, but the downside was there wasn’t a use case for that data. They would come in and say, “This is a great meeting, we love it. Can you give me a little bit more data?” We’d send the data over and they’re like, “All right, thanks.” And we’re like, “Hey, do you want to buy anything?” And the answer was like, “No. We’re out, peace out, we’re gone.” What do we use this for?

The use case ultimately came from the customer coming to us, and the customer wasn’t the end consumer. It wasn’t the enterprise clients that we were directly talking with; surprisingly, it was the mobile ad networks and the mobile publishers. They had come to us and said, “Hey, installing games is the ecosystem when it comes to mobile apps today, but we’re trying to get more dollars from brick-and-mortar retailers because we believe that people are in the physical world and you want to be able to close that loop.” They said no one trusts our data because we’re already selling them the data or selling them the advertising. You don’t trust the person that sells you the advertising to tell you that it works, generally, in market. That’s changed a little bit today.

But they said, “We know that you have the cleanest data out there. Can you intersect our ads with your panel’s store visits and actually attribute: did someone hear an ad for Target, and then did they actually go to the brick-and-mortar Target location three days later?” And for the longest time, I said, “No, I believe that we’re a pure-play analytics company and we’re not going to do any advertising.” Then you and Scott and the Madrona team were very much like, for the first six to 12 months, “If they’re willing to pay you money for this, maybe you should try it.”

Matt: Maybe you should see what customers who are willing to pay you money would actually be willing to do. The rest is history. It’s a very, very well-built, successful company. Let’s talk about Read AI. You’ve got these technology changes, these societal changes, and then you had to get engagement. How did you think about that? Ultimately, getting alignment with different parties, not just the consumer, but even making it work reasonably well with Zoom and these other platforms.

Engagement and Distribution: Partnering with Platforms

David: On the engagement side, we took the approach of: Work with the platforms. They have the control at the end of the day. They’re the ones that can also get you distribution. And I think with a lot of startups, distribution is a problem where you can have a great product, but if people can’t find it, if people can’t install it, that becomes a problem. And so what we did was we took the approach of working with the platforms. We had great partners at Zoom that said, “Hey, we’re launching this program called Essential Apps. And what Essential Apps does is we’ll put it in front of 10 million users on a daily basis where they will see an app on their right-hand sidebar.” So that was an incredible opportunity and we’re like, “Absolutely, let’s get it done.”

And the same thing came along this past year with Google Add-Ons, where Google said, “We are going to introduce apps or add-ons into Google Meet. We would like you to be one of six apps that are featured in that app store.” We’ve been featured over the last three or four months, and that’s driven significant adoption, and Teams has been similar in terms of the promotion that they’ve given us.

Those platforms and that discovery have enabled us to get a lot of traction. The thing I would say is I made a very similar decision as I did with Placed, but I made it a lot faster. With Placed, I was wrong the first time. With location analytics to understand where people go in the physical world, not combined with anything else, I said that standalone was the use case, and that was not the case. The use case was attribution. With engagement, the use case was, “Hey, in a real-time call, when I’m talking with Matt, and he’s a venture capitalist, I want to know when he’s disengaged because when he’s disengaged I can try to recover in that meeting and say, ‘I know this slide’s not very interesting. Let me go to the next one, Matt.'”

The problem was, once people started to use it, they didn’t know what to do with it. They saw this chart, and it would go up and down. As it went down, they started to get more nervous: “Well, what’s going on? How do I actually recover from this?” And there wasn’t a knowledge base to pull from to say, “Oh, when engagement drops, you should do this.” So, there was a lot of education involved in that process. We found a lot of users; there were certain use cases that we did really well for, especially large groups and presentations. But the stickiness wasn’t there. Where we found the stickiness was to go in and say, what can we combine with our ability to measure engagement and sentiment in real time based on a multimodal approach, audio and video?

How can we take that really unique data? These are proprietary models that we’ve built with tons of training data. How can we apply that to something else to make it even better? What it came down to was transcripts. There were companies that had been doing transcripts for the last 10 years; some charged per minute, some had bots join the calls, and some were built into the platform. What you started to see was they were starting to do some summarization, partially due to OpenAI and partially due to their own in-house models, because people were asking for it: “I don’t read a twelve-page transcript after a call, but I would love to see a summary, and I would love to share that summary with other folks.”

We took that approach of, this is interesting, but everyone can do this. This is a wrapper. This is a commodity at the end of the day where I could take a whole transcript, write a couple prompts and get a summary. And that wasn’t interesting. So we said, “What do we do that is different?” And we applied the scores that we had. So when Matt says this, David reacted this way. When David said this, Matt reacted that way. We created this narration layer that wasn’t available in any transcript, and we played around with it, and we started to see that, “Okay, this is incredibly valuable. This materially changes the content of a summary because you’re taking into account that reaction from the audience.”

Matt: I think, moving from trying to give me the assessment of the engagement and the sentiment, you then created what some call an instant meeting recap, but not just a superficial one, quite a robust one, because you were actually using a variety of types of data, video, voice, and a variety of models. How did you think about which models you were going to take off the shelf? Which models were you going to customize? How were you going to mix these models together with the data that you have to ultimately produce this output of an instant meeting recap?

The Power of Model Cocktails: Enhancing AI Applications

David: Yeah, that’s a good question. Where we really focused was: we are the best when it comes to measuring engagement and sentiment, and then we are the best when it comes to layering that engagement and sentiment on topics, on sentences, on content, on video. Those things were really strong. We then went in and said, “Okay, what is a commodity?” At the end of the day, you’ve got OpenAI, who are great partners, you’ve got Llama, you’ve got Google, and you’ve got a bunch of other open-source solutions in the market. That is a very hard problem to solve, but it is kind of like the S3 bucket. At the end of the day, it is the commodity that everybody will be using at some point, and you’ll choose between the licensed models that you prefer or the open-source models.

We said, “Hey, 90% is in-house models. For that last 10%, we’ve identified the 14 sentences that do the best job of recapping what that meeting was about, here are the action items, and we think this content had 90% engagement while it was being discussed. If we can then load that in and summarize it down to four sentences, that’s what we’re going to use the third-party solution for.” It wasn’t about figuring out how they can analyze engagement. It was more how do we bring our secret sauce into their system? That really did result in differentiated results. On the Coke challenge, we’ve been winning more and more over the last 12 to 18 months, and we’re seeing more traction in terms of market share and adoption. I think the best compliment we’re seeing is that people are starting to copy our features; the legacy incumbents are starting to copy our features.
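
To make the “90% in-house, 10% third-party” split concrete, here is a rough sketch of that hand-off: proprietary scoring picks the high-engagement sentences, and only that distilled context goes to a general-purpose model for the final wording. The `summarize_with_llm` callable and the engagement scores are hypothetical placeholders, not Read AI’s actual pipeline.

```python
# Sketch of a "model cocktail" hand-off: in-house scores choose what matters,
# and a general-purpose LLM only rewrites that small slice into a short recap.
from typing import Callable, List


def top_sentences(sentences: List[str], engagement: List[float], k: int = 14) -> List[str]:
    """Keep the k sentences with the highest in-house engagement scores."""
    ranked = sorted(zip(engagement, sentences), key=lambda pair: pair[0], reverse=True)
    chosen = {s for _, s in ranked[:k]}
    # Preserve meeting order so the condensed context still reads coherently.
    return [s for s in sentences if s in chosen]


def recap(
    sentences: List[str],
    engagement: List[float],
    summarize_with_llm: Callable[[str], str],  # hypothetical third-party call
) -> str:
    context = "\n".join(top_sentences(sentences, engagement))
    prompt = (
        "Summarize the following high-engagement meeting moments in four "
        "sentences and list any action items:\n" + context
    )
    return summarize_with_llm(prompt)
```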

Matt: Always a great source of compliments when people are copying you. Let’s take us back maybe 12-14 months. You’ve learned a bunch of things. You’ve got this product and this ability to deliver these instant meeting recaps. It’s what I like to call an intelligent application. How do you start to get momentum around customers? At the time, I think you had very little revenue, and today, you’ve got tens of thousands of paying customers — incredible momentum in the business. How did you get from essentially zero revenue at the beginning of last year to the great growth you’ve had over the last 12-15 months?

The Momentum of Read AI

David: That was a bit of a journey. In 2022, we had very little traction, like I already talked about. We had a lot of interest, we had a lot of users, but ultimately it wasn’t what we’ve seen in the last 12 to 15 months. And that was an iterative process, to be upfront. We had the summaries that launched, and we got some traction there, but then people started to come in and say, “Hey, can you do video?” At first, we were like, “Ah, we don’t know if we want to do video.” We did the video, and then we started to tune the video where we’ve got three different concepts. The full video recording, everybody has that, but that’s table stakes. Then we went in, and we said, “Hey, highlight reel.” Think of it like ESPN for your meeting, where we can identify the most interesting moments by looking at the reaction.

If you think about it this way: if you’ve watched an episode of Seinfeld, you can go to the laugh track, and if you look at the 30 seconds before the laugh track, that actually does a really good job of showing what people are interested in and what people find funny. We started to build this highlight reel, but then we also took into account content. Now you’ve got the content plus the reaction, and that creates a robust highlight reel. We did something very similar to create a 30-second trailer. The idea here was customers were asking us for video, so we enabled that. The funny thing that we didn’t do that others did do was we didn’t roll out transcriptions until the summer. We said that is table stakes. Everybody has transcription. That is a commodity service at the end of the day. Yes, you could do it better than other folks, but everyone has it. It’s available in the platforms where you hit a button.
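
The laugh-track analogy maps naturally to a sliding-window search over a per-second reaction signal: find the peaks, then keep the roughly 30 seconds that led up to each one. The sketch below is a generic illustration of that idea with made-up parameters, not Read AI’s actual highlight model.

```python
# Sketch: pick highlight clips by finding peaks in a per-second reaction score
# and keeping the ~30 seconds that led up to each peak.
from typing import List, Tuple


def highlight_windows(
    reaction: List[float],   # one reaction/engagement score per second of the meeting
    lead_seconds: int = 30,  # how much context to keep before each peak
    threshold: float = 0.8,  # what counts as a "laugh track" moment
    max_clips: int = 5,
) -> List[Tuple[int, int]]:
    peaks = [t for t, score in enumerate(reaction) if score >= threshold]
    clips: List[Tuple[int, int]] = []
    for t in peaks:
        start = max(0, t - lead_seconds)
        # Merge overlapping clips so the highlight reel doesn't stutter.
        if clips and start <= clips[-1][1]:
            clips[-1] = (clips[-1][0], t)
        else:
            clips.append((start, t))
    # Keep the clips whose peaks were strongest, then return them in meeting order.
    clips.sort(key=lambda c: max(reaction[c[0]:c[1] + 1]), reverse=True)
    return sorted(clips[:max_clips])
```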

We said, “We don’t want to deliver another copycat product. What we want to do is be the best when it comes to meeting summaries, the intelligent recaps, the key questions, the action items, the highlight reels.” And then we started to say, “Okay, we’ve got all this, this is great, but how do we activate against it?” That’s a little bit of the advertising and attribution background that I have: how do you activate against it? Interesting data is interesting data, but if you’re able to activate it, it becomes valuable.

So we started to test things like email distribution and Slack distribution where we pushed out the summaries to where people consume the information. We didn’t need to be that central hub for reading the reports. We’re going to send it to wherever you’re used to reading it, and that actually started to gain more traction where people said, “Oh, this is great. After the meeting is done, I get a recap.” Or, “Hey, this is a recurring meeting and you’re going to send me the notes an hour before, so now I can actually review the notes because I forgot why we were going to even do this call.”

Matt: I love that feature. Going back to what I think was your original inspiration here, it’s really about trying to make me personally more productive and the teams that I meet with collectively more productive. It’s this awesome combination of productivity and action, and I think something you like to call connected intelligence. Talk a little bit more about this concept of connected intelligence.

Read AI: The Future of AI Productivity & Intelligent Agents

David: It really plays in with intelligent apps. With intelligent apps, you’ve got that marketplace set up, but when you go to connected intelligence, that’s going in and saying, how do I connect those individual apps so they talk with one another? If I have a meeting about a new product launch, well, that meeting will generate a report today, and it’ll send it out to everyone, and that’s great, and there’s some actionability there. But what if that meeting then talked to an email that was sent out that had follow-ups that said, here’s the deck associated with that, here’s some proposed wording for the specific product launch and timelines? Now, if those two can talk together, it creates a better summary, it creates a better list of action items, and now the action items are updated, where it could go in and say, hey, David was going to send this deck over to the PR team.

Did David actually send it over? Well, the email’s going to tell the meeting; yes, David did send that over; check that off of the list. So that is a completed deliverable. Now, the follow-up in the email is, Hey, the PR team is going to provide edits associated with this, and it hasn’t been delivered yet because we’re connected to email. These entities, at the end of the day, are able to talk with one another, and they act as your team. We tried the concept of team really early on where it’s like, it joins your meetings, it does this. That was the early version.

What we’re seeing now with the prevalence of AI is you can make each one of these things an entity, and these entities can independently talk with one another and deliver content just in time. You’ve got a meeting coming up, and you had an action item for the pre-read, but now we’ll look at your Slack messages and your Gmail messages and say, “Hey, these things haven’t been delivered yet, or the client’s going to ask you these three questions. Keep this in mind.” That’s going to create a much more productive interaction at the end of the day. A shorter interaction, a more productive interaction.

Matt: No, I love this. It’s kind of going both to some of the things that you’re already doing and also some of the vision of where you’re going with the company. There are all these business processes and all these workflows, and they’re increasingly digitally based. As you point out, it is interconnected between email or Slack or Zoom calls or whatever it might be. I like to think of them as sort of these crossflows. They’re workflows and processes that cut across different applications that I live in. And effectively all those things are different forms of applications. So maybe say a little bit more about where you see this world going and Read AI’s role in it around this vision of connected intelligence.

David: We’ve got two similar visions, and we expect to get there in the next year. One is, let’s say you’re a project manager, and you have a number of different meetings that occur. You have a number of interactions that happen via email. You’ve got internal follow-ups within Slack and Teams, and then you’re updating Asana, Jira, and updating tickets. All of those things today are manual. You have to go in and connect the dots. Now I’m going to look at the meeting report and look at what was delivered to the client. Now I’m going to think about whether this file actually got delivered, and then I’m going to go into Asana, I’m going to check some things off, and I’m going to go to the Jira ticket. Those are a lot of steps that take place, and they’re steps that don’t require critical thinking. All the information is there. That connected intelligence is there.

Where Read AI is going is we’re going to update all of that for you. If you’re a product or project manager, all of that mundane work, that busy work of moving papers around, is taken care of; you don’t need to worry about it, and the ticket is automatically updated. Then Jira is going to send an update to the people that are looking at that ticket, and then it’s going to say, “Hey, this got completed. This is the next step because that was discussed in one of those entities, one of those connected apps.” You actually inform what the status is.
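
A bare-bones way to picture that reconciliation is a loop that checks each open action item against evidence from the connected systems (email, chat, tickets) and marks it done when a match shows up. The data shapes and the simple keyword-matching rule below are simplified assumptions for illustration, not how Read AI actually does it.

```python
# Sketch: mark meeting action items complete when a connected system (email,
# chat, ticket tracker) shows evidence that the work actually happened.
from dataclasses import dataclass
from typing import List


@dataclass
class ActionItem:
    owner: str
    description: str     # e.g. "send the launch deck to the PR team"
    keywords: List[str]  # assumed: simple keyword evidence for the example
    done: bool = False


@dataclass
class Event:
    source: str  # "email", "slack", "jira", ...
    actor: str
    text: str


def reconcile(items: List[ActionItem], events: List[Event]) -> List[ActionItem]:
    for item in items:
        for event in events:
            same_person = event.actor == item.owner
            mentions_work = all(k.lower() in event.text.lower() for k in item.keywords)
            if same_person and mentions_work:
                item.done = True  # a real system would also update the ticket itself
                break
    return items
```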

A bigger problem that comes up is with sellers and Salesforce and HubSpot. I remember leading revenue at Foursquare; I could not get my sellers to update Salesforce. I would threaten them and say, “Hey, you’re not going to get paid unless you update it.”

At the end of the day, it didn’t matter how many times I said it, and I was CEO at the time; people were busy. They don’t have time to do it. They’re going to prioritize it internally to say, I’m going to close deals. I’m not going to update the books. Where Read AI is going to come in is we’re going to go in and update that Salesforce opportunity. We’re going to go in and increase the probability of close based on the information that’s available. By doing that, you’re enabling your sellers to do what they do best and go into market. That’s what Read AI does on the back end, is to make sure that everything is up to date, and it’s talking with one another that says, “Hey, seller, you might want to go ping that client because it’s been three weeks and normally your cadence is to ping every three weeks and we see this is missing.”

Exploring Product-Led-Growth and Market Expansion

Matt: I love those prompts and nudges that allow the individual to be more personally productive. I think that’s been one of the great attributes of Read AI. It’s really been a bottoms-up, product-led growth kind of motion. What’s cool, of course, is that I’m using it all the time, and people are like, “Oh, what’s that?” And I get to tell them about it. There’s a neat embedded virality, but how have you thought about PLG, and what have you been learning about how to be successful as a product-led growth company?

David: The approach that we took is a little bit outside the norm. I think when you’re a startup, you want to focus in on one segment. When we started Read AI, we took the approach, with support from Madrona, and Matt was the one to say, “Hey, we want to go broader.” We wanted to go in and be mass market when it comes to engagement and sentiment and wherever we apply it, because we don’t know. This is a new technology, and we don’t know where it’s going to be used. It took us a while to figure out that product-market fit, but by being broad, we were able to see use cases that we would’ve never gotten the ability to experience or get feedback on.

I’ll give one example that’s a little bit more individual and less of a big market, but really impactful. We have someone who has dementia, and they reached out to us and said, “This has changed my life because now when I meet with my family, Read AI joins the calls, it takes the notes, and before I meet with my family again online in Zoom, or in Google Meet, I can actually look at the notes and remember what we discussed.” They actually had us connect with their caregiver to say, “Hey, this is how you make sure this is paid for. This account is up to date and make sure all these things are set up.” Because he wasn’t able to do that, but he said, “This has changed my interaction with my family.” That was awesome. That wasn’t a market that we were going after, but that was great.

We see off-label use cases too. Off-label use cases would be, for example, government agencies, state agencies, and treasury departments in certain countries that are using Read AI. When they use Read AI, a lot of times it’s bottoms up. They just saw it, and they’re like, oh, this would help me out. What we found is that the bottoms-up approach finds new use cases. For this one agency that I won’t name, they have clients, which we’ll call patients, and they’ll go see these patients out in the field.

The old way to do that was they would go to the meeting, they would have a tape recorder, they would record that meeting, they would take notes, and they would interact with that person, and then they would go to the next client, and the next, and the next. A lot of the time, they would spend about one day a week writing the notes and putting them into their patient and client management solution. That was a lot of work. Well, we introduced a feature where you could upload your audio and video, and they started to use it. They started to record on their phone, upload the audio from the interaction with the client, generate these meeting notes and summaries and action items and key questions, and then just cut and paste and upload that into their patient management solution.

They loved it. One person said, “I do not know how to write a report. I did not learn this in college. This is not what I specialized in. Now that I can use Read AI, I can interact with the clients, which is what I want to do. This is my job, and Read AI will take care of it and upload all that information.” They said this is phenomenal. Then they started to use our tagging feature and say, “Hey, we’re going to tag individual clients, and now we can see how things are progressing across time because we don’t just summarize a single meeting, but a series of meetings. So, hey, are things improving? Did we answer this question that came up last week? They wanted to know what was going on with this; did we actually deliver an update on that?” Those are things where, a lot of times, we get lost in the noise with the amount of work that we have. Read AI is able to make sure no one’s lost in the noise and none of those action items are lost.

Matt: That’s fantastic. I remember back to some of the earlier days of cloud collaboration and how Smartsheet, one of the companies that we backed, gosh, 16-17 years ago, started out in a very horizontal set of use cases like you’re describing. I think it’s important to have a big disruption, whether it was cloud back then or, now, all these capabilities and all these different kinds of models I can use in applied AI to build connected intelligence. And I think that’s part of why you can start more horizontally at this point in time and let the market teach you and tell you about different kinds of use cases.

Horizontal v. Vertical

David: The level of understanding is key, especially in this early market because if we had gone too narrow, we would’ve missed out on these opportunities. I can tell you that 30% of our traffic is international. Outside of the US, that traffic is predominantly centralized in South America and Africa. If you said when I first started Read AI, would 30% of my traffic come from South America and Africa, I’d say, “No, that’s not the market that I would expect to adopt AI very quickly and go in and use it in their day-to-day.” What we’re finding is the adoption has been phenomenal where we’re covering 30%, 40% of a university student base where they’re starting to use it and adopt it.

We’re starting to see peaks; a couple of weeks ago, 2% of the population in South Africa was using Read AI, not necessarily as the host of the meeting, but Read AI was on a call that had participants in that meeting. Those things get me really jazzed up to say, “Wow, this is something where AI is not just about the enterprise.” There is a clear enterprise opportunity, but it’s how do you help the people from a PLG perspective? How do you actually deliver ROI to the oil driller in Nigeria who has to write a report and send it back to China, which is an actual use case — and they’re using it.

Matt: Wow, amazing set of use cases there. Then sort of embedded in that is just the ability to do this across languages and there’s all kinds of exciting things that you’ve done and you’re working on. One of my wrap-up questions here is what are the challenges for what I’ll call a gen-native company like yours, and in particular relative to the incumbents, the companies that are trying to enhance their existing software applications with generative AI capabilities, how do you think about native versus gen enhanced?

Gen-native v. Gen-enhanced

David: On the gen-enhanced side, if I was going to name competitors for Read AI, a lot of people would say, is it Copilot? Is it Zoom AI Companion? Is it Google Duet AI? And for us, it’s not really the case. They’re educating the market. I’ve been in a market where I had to educate everyone, and that is a very expensive thing to do. These incumbents are educating the market about the value proposition. People are using it. The free users are going to go in, and 80% are going to say, “This is great, this is good enough.” Then there is the audience that’s a little older, like me. If you remember Microsoft Works, that was the cheap version of Microsoft Office. Microsoft Works was $99. Office was $300. A lot of people used Works, and they started to use it, and they’re like, “Oh, this is actually pretty good. But when I need to do a pivot table, okay, I need to upgrade to that next version.”

What we’re seeing is there’s this whole new base of users that understand AI and the value, and they’re going in and saying, “I need more. I need specialization. I need cross-platform compatibility where half of our users use Zoom and some other solution, or use Google and some other solution, or Teams and some other solution.” From that standpoint, it has been great to actually get the incumbents to adopt this technology and evangelize it.

What you’re going to see is the Smartsheets of the world come in when it comes to AI. You’re going to see the Tableaus of the world, and there’s an incredibly large market to be had there. I think it’s just the start, and this is where the consumer and horizontal play is actually really big: we are seeing that AI provides value even without an enterprise involved. If you can take that level of education, accelerate it, and show the value of one step above for $15-$20-$30 a month, that’s a slam dunk. We’re seeing that level of adoption today.

Importance of Model Cocktails

Matt: You’ve got this whole set of cross-platform capabilities. I’ve also been really impressed with the way that you’ve used different kinds of models, some things that you’ve fine-tuned yourself, and others that you leveraged something like an OpenAI and how you’ve brought those things together to get the transcription you were talking about before, to get the very best out of, I like to call it model cocktails, where you’ve mixed a bunch of models together to create these amazing instant meeting recaps and now increasingly these kinds of connected intelligence crossflows.

David: That will be key because if you only rely on one single model, you become a prompt-engineering company at the end of the day. We’ve seen some competitors, great competitors (of course, use our solution, but competitors are good), and they’re going deep into, “Hey, do you want to pay a little bit more for ChatGPT-4 versus 3.5 versus 3?” For us, that just highlights that you’re too dependent on that solution. You’re not differentiating, you’re not adding enough value, and you’re just going to show the underlying technology that you’re utilizing. It’s been really important to go and say, “Let’s use a mix of models.”

Take transcription, from a language model perspective: you could use only one single model, and there are some really good ones out there, open source as well as paid. But if you’re able to layer two on top of each other, you can go in and say, “How do I stop some of the hallucination that comes up where certain words are totally incorrect? If we’ve got a score between model one and model two and the variance can’t be more than X, you can start to identify points where it starts to hallucinate a little bit or goes off the rails.” Those are the checks and balances that you get when you have multiple models running, and then, when you bring in your own proprietary model on top of that, it’s, “Okay, what other secret sauce can I put into that mix?” I think that is where the market is going to go, more than a standalone model.
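
One simple way to picture that two-model check is to align the two transcripts and flag the stretches where they diverge beyond a tolerance; those are the spans worth re-checking with a stronger or proprietary model. The sketch below uses Python’s standard difflib for the alignment and is only an illustration of the idea, not Read AI’s implementation.

```python
# Sketch: run two transcription models over the same audio, align their outputs,
# and flag spans where they disagree (likely hallucination or noisy audio).
from difflib import SequenceMatcher
from typing import List, Tuple


def disagreement_spans(
    transcript_a: str,
    transcript_b: str,
    min_words: int = 3,  # ignore tiny one-word differences
) -> List[Tuple[str, str]]:
    words_a, words_b = transcript_a.split(), transcript_b.split()
    matcher = SequenceMatcher(a=words_a, b=words_b)
    flagged = []
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op != "equal" and max(a1 - a0, b1 - b0) >= min_words:
            flagged.append((" ".join(words_a[a0:a1]), " ".join(words_b[b0:b1])))
    return flagged


def agreement_score(transcript_a: str, transcript_b: str) -> float:
    """Overall similarity; a low score means the models diverged and the
    segment deserves a second pass before it feeds the summary."""
    return SequenceMatcher(a=transcript_a.split(), b=transcript_b.split()).ratio()
```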

Matt: I totally agree with you. Even more generally, apart from your specific area of personal and team productivity, this is going to be a big year around these intelligent applications and applied AI. Where do you see some of those big areas of opportunity outside of your particular category? What are things that you’re excited about in this world of intelligent applications?

Opportunities for Founders

David: From an education standpoint, I think there’s a lot of opportunity. I’ve been talking with a few teachers at different grade levels, and some of them don’t even know OpenAI exists. Some of them are starting to say, “Hey, I’ve heard about this. I think my kids are probably using this, but I don’t have a POV there.” I think there’s an ability to provide personalized, scalable education that’s customized to the student. I’m excited about that as an opportunity, especially as an uncle, to go in and say, “Hey, where you’re strong, we can make adjustments, and where you’re not as strong, we can provide a little bit more focus and the hand-holding that the school system might not be able to provide at any given point in time.” That’s really interesting for me.

When it comes to productivity AI, which falls in our space, I think there are some really interesting things around the things we do every single day, like email. Email could be so much better. There’s the concept of a context window, and these context windows are getting larger and larger. If you have intelligent apps that have connected intelligence, those context windows aren’t limited to just email; you can start to bring in other things. The ability to bring in different data sets is going to surface some interesting learnings.

Matt: I love both of those points. In the education domain, there are so many opportunities to be helpful to the teachers and more personalized to the students. As you point out, we just had Llama 3 announced, and no doubt GPT-5 is quickly around the corner. As you say, things like context windows are going to make the capabilities even more robust. It’s going to be an exciting time ahead. You’ve been just an awesome founder and CEO to work with. You’ve got an amazing team, and I’m looking forward to the journey to build Read AI into realizing its fullest potential. Thanks for joining me here today.

David: Absolutely. Matt, you and the team at Madrona have been phenomenal champions, especially for Read AI and for Placed when we were just an idea and the market was just starting to form. If you’re listening here, it’s a great time to actually build a company. It’s never been a better time. It’s never been faster and easier to build and scale.

Matt: Well, let’s go build. Thanks very much, David. Enjoyed having you.

David: All right. Thanks, Matt. Appreciate it.

 

Transforming Corporate Travel: A Conversation With Steve Singh and Christal Bemont

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Digital Editor Coral Garnick Ducken chats with Madrona Managing Director Steve Singh and Christal Bemont, the CEO of Direct Travel, Madrona’s newest portfolio company. Over the last few years, Steve has been seeking out companies that are transforming the corporate travel ecosystem with the goal of delivering a dramatically better value proposition to business travelers, the companies they work for, and the travel providers that serve them. The acquisition of corporate travel management company Direct Travel on April 2nd is the fourth pillar in Steve’s vision, and that’s what these three dive into today.

This transcript was automatically generated and edited for clarity.

Coral: So, to kick us off, Christal, why don’t you walk us through how this new venture is going to work?

Direct Travel: Transforming Corporate Travel Management

Christal: Sure, I’d love to. I am absolutely thrilled to be partnering with you again, Steve. We spent many years at Concur, and you’re also a very dear friend, so thank you for this opportunity. As you mentioned, we recently announced that there was an acquisition of Direct Travel, and Steve, you were a critical part of that, as I know Madrona was as well, along with a number of other renowned investors. I’m very excited about serving as the CEO and having you as the chairman on this. This is an exciting adventure that we’re headed into. For anyone who doesn’t have a lot of background in travel management companies, I’ll get into Direct Travel. A travel management company works with corporate travel businesses and focuses on companies with a managed travel program.

What that means is that there’s usually a group of individuals who sit within a company and have oversight of the safety of their travelers and the spending of the program, and who make sure they optimize the travel experience for their employees.

This managed travel space is something that’s been around for a very long time.

Direct Travel has been around for many years. They are the fifth-largest corporate travel management company in North America, focusing on mid-market and enterprise customers. Let me give you a couple of examples. Cirque du Soleil just announced that they are going to move forward with Direct Travel. Topgolf is another example, and Chick-fil-A here in Atlanta, where I’m based.

If there is something I can say stands out as most unique about Direct Travel, it is our customer service. That comes down to the people at Direct Travel, who are working with customers who, in some cases, have been customers for 30 years.

So, what are we here to do at Direct Travel? There is a $1.4 trillion business travel market. We see Direct Travel as a critical component of doing a few things: providing incredible value to business travelers, making sure that those business travelers can support the companies that they’re working for, and making sure that we support and take care of the suppliers that take care of that ecosystem.

We feel that there’s a great opportunity, and even responsibility, to make sure that we show up with the incredible service that we’ve always shown up with and that we bring some of the new technology, which I know Steve is going to talk about in just a moment, to the forefront.

Coral: Steve, before we dive into Direct Travel and how it fits into this tech stack that you’re envisioning, perhaps you can detail the challenges business travel has today and what you thought needed to be tackled.

The Challenges of Modern Business Travel

Steve: Before I delve into that, I want to touch on a few things Christal said. There are times in life when you get to work with amazing people, and while it’s always wonderful to do great things in life and create great businesses, it is all made incredible by the people that you work with. And Christal, I feel you have no idea how happy I am to work with you again. And obviously, the other members of the team as well: Todd, Scott, Christine, and John Coffman. These are incredible human beings, and that’s what makes life just a blast.

When you think about the travel industry, this is a multi-trillion-dollar industry. There isn’t a business person who isn’t touched by it or who doesn’t take business trips. And this is a very antiquated, fragmented technology stack that serves this multi-trillion-dollar industry, whether you’re talking about the distribution of content, which is the GDS layer or global distribution systems, or online booking tools, or mid-office tools that are predominantly provided by Concur but also by other companies such as Neko and Dean, or the integration into the back office. All of this is an aging infrastructure. More than that, these are closed systems, which basically means that you’ve got fragmented data and disjointed travel experiences, and that’s not a great recipe for delivering delightful travel experiences.

So, even today, basic things such as integrating into a traveler’s calendar, providing access to all the travel inventory that we want to consume, predicting needs, providing proactive and intelligent responses to travel disruptions, or, frankly, just handling simple change requests, things like checking into the hotel at the ideal time, seamlessly integrating ground transportation into the travel experience, or, ideally, eliminating the concept of the expense report: these are just ideas. We’re sitting here in 2024, and they’re not a reality. And you have to ask yourself, well, why aren’t they a reality? And by the way, that’s normal business travel. The problem is even bigger when you think about group travel, where, by the way, almost every element is manual. It’s typically Excel spreadsheets and the like, but it’s manual.

I’ll just add one more thing about the legacy systems. What makes it even worse is that because they’re closed, the level of innovation you can drive on top of these systems is, at best, very limited. To take it one step further, advanced concepts like transparency at the point of purchase between the traveler and the suppliers that serve them — that’s completely non-existent and can’t even be enabled on the legacy infrastructure in the industry today. This is a big set of challenges. Now, we’re fortunate that these can be solved with modern cloud-native architectures. Frankly, it’s not just technology; an open platform mindset is also critical. Anybody can build technology, but you have to take the mindset that we’re going to be open, that we’ll allow others to innovate, and that all of us can benefit from the work of others.

Coral: So now that Steve has set the scene a little bit. Christal, I’m sure you remember Steve coined the term “The Perfect Trip” while you guys were together at Concur. Why don’t you tell us how that concept came about, and what that concept really meant for you guys?

The Perfect Trip: A Vision for Seamless Travel

Christal: It is something that I’ll never forget, and I really mean that. When it came up many years ago, it was a feeling and an experience that just made so much sense. Maybe the best way to frame it, in my opinion, is that it’s something you never stop the quest for. There’s never going to be a perfect trip because there will be things outside of your ability to control. But even as we were thinking about it back then, and as Steve presented it, it really was about everything that happens from the minute you start thinking about taking a business trip to the point where you’re back home. What is that connected experience? How do you take care of the travelers who are such key employees? What are all those gaps or potential disruptions that we can anticipate, and how do we get ahead of them?

Look, we all know that travel is a massive grind. It’s probably the least enjoyable thing. People think travel is fun, with all these great places you go, but you just groan at the idea of all the things you might encounter. So, back when that first came up, it was an exciting proposition about looking at the complete picture. At the time, I, and many other employees, were looking at it in the narrow lane of a few critical things we could do. Then Steve took those blinders off and said, “Hey, this is really about us showing up from the moment someone thinks about a trip and every step of the way until the moment they return home.” There were limitations back then in terms of what we could do, and that’s why you hear this deep passion from me about wanting to be on this journey again. I feel like we’re in a much different place now and have a much bigger opportunity to solve some of those things. And I know, Steve, you just mentioned a few of them, but I don’t think you ever stop on the quest for the perfect trip. It’s a responsibility we have. And when you’re in this industry, you just see so much opportunity, and I feel like we’re at a perfect time to embrace it.

Coral: So, I know, Steve, from talking to you before that there was a time when you decided, okay, this perfect trip is not going to happen. You’d have to reimagine all these foundational elements of these closed systems that you just explained to us. Why don’t you tell us about that first meeting with Sarosh at Spotnana, when you realized, “Oh wait, maybe we can still do this. Maybe transforming corporate travel is possible.”

Spotnana: Building the Next-Gen Travel Platform

Steve: This speaks to the mindset comment I made earlier. First of all, I also believe in karma, by the way. I had not prioritized meeting Sarosh even though he had reached out to me multiple times. When I did spend time with him and listened to what he was trying to do, I kicked myself for not taking that meeting earlier. He’s just a wonderful human being who also has incredible experience in the travel industry, and, frankly, he has a great technology vision. The part that spoke to me in that meeting was that he understood that there are lots of different ways to build a next-generation travel company. One is to do a better job than the last one, like Concur, and go build something that is Concur 2.0. To me, that’s not interesting. Because all that happens is there’s incremental improvement in the experience. Maybe there is a slightly better user interface.

What Sarosh was talking about displayed a level of understanding of why some of our travel experiences are disconnected or disjointed, and why the customer service experience is so poor. He said, “Look, the thing I want to focus on is fixing the plumbing of the travel industry.” And we spent a bunch of time defining what that is. What does that mean? What is the plumbing of the travel industry? What he was really talking about was that we have to have a data model. It is a little bit geeky, but the data model has to be broad enough to encompass a modern definition of what business travel really is.

We have to have an open system that anybody can build on and that anyone can use to extend that data model. We also need to allow the supplier to know who the buyer is at the point of purchase so that the two parties who are the most critical elements of the trip can collaborate in a way that ends up being a better outcome for both of them. As we aligned on that vision, I quickly changed my mind about wanting to invest in the travel industry again, and I said, “Look, when you find people like that, you want to support them, and you want to help them deliver on their vision.”

And we’ll talk about Naveen and Dennis shortly, but the same thing is true with Christal. You invest behind incredible human beings who are also solving some very, very big problems that make the experience dramatically better for you and me as consumers of those experiences. So, that meeting with Sarosh led to us investing in Spotnana, which was building out this open platform where a client could consume whatever services it wanted to use. It was built on the idea that there’s transparency between the traveler and the supplier.

And that was back in late 2019. We went through COVID-19, and that was a difficult time for all of us, for every member of the travel community. But here we are now in 2024, and look at where Spotnana sits. Some of the biggest customers in the world have moved to the Spotnana platform because they want all the content. They want serviceability on NDC; they want open, extensible platforms they can build on, that others can build on, and where they can benefit from that innovation. But it’s not just customers. Suppliers — American Airlines, Copa Airlines, Qantas, and others (these are some incredible names in the travel industry, and you’re going to hear more in the not-too-distant future) — have decided they want to partner with Spotnana. They are partnering with Spotnana, right now, on this concept of NDC. I want to spend just two seconds on NDC (New Distribution Capability) because most people are going to say, “Well, geez, NDC has been around for a long, long time.” And it has. But just because it’s been around doesn’t mean it’s been useful. It hasn’t been used because buying the content is one thing; you also want to provide service on that content should the traveler ever need it. What Spotnana figured out was how, in this new technology stack, you take content from the GDS and content directly from the supplier and provide the technology capability to service that content regardless of where it came from. That was a game-changer. That’s what allowed Spotnana to be incredibly valuable as a next-generation platform.

So, great customer adoption and great supplier adoption, but there’s also another piece that they really executed well on, and that is a testament to the fact that they’re an open platform. They’ve gotten other leaders on board, companies like Center, which delivers an expense management platform. Center is integrating at the travel policy and travel profile level, so you don’t have to replicate those if you happen to use both services. Troop is extending Spotnana’s services into the group meetings and events arena. Direct Travel, as we just heard, is integrating all three of these services in combination with their own, and all the data they have on their customers, to deliver a highly personalized travel experience.

So you’re seeing Spotnana become this open platform that’s being adopted aggressively across the industry. And I think that’s going to lead to not only better experiences for the traveler and for the supplier but, frankly, a new generation of technology companies in the travel industry that will define what this industry looks like in the decades ahead.

Coral: So Spotnana is resolving this poor plumbing issue that you’ve talked about. And then, of course, you just mentioned Naveen and Dennis, that is, Center and Troop. So why don’t you talk briefly about each of those solutions and how they fit into this travel stack that you envision for transforming corporate travel?

Integrating Solutions: Center & Troop’s Role in the Travel Ecosystem

Steve: It was critical to get the travel infrastructure right, and that was the Spotnana platform. But it was also just as important that the other core services required to deliver a delightful travel experience were reimagined and reinvented. One of them was the expense report. Christal and I certainly know a lot about this market segment, and I would argue that Concur did a lot to move us on from the concept of paper-based expense reports. I don’t know if most people would remember this, but it wasn’t more than 20 years ago that all expense reports were done on Avery forms. Well, now we should be asking, why does the concept of the expense report even exist? Why can’t it just go away? Why can’t it just be created automatically in the course of my business travel?

And it turns out that’s possible, but you have to reimagine the technology landscape to do it. And so Center really said, “Well, look, we can build a tech stack that integrates all the way down to the card processing layer.” So, at the point of swipe, we can pick up all the information we need to process that expense report. In fact, through a range of AI services that are also built into the stack, it can actually go from swipe through approval, through audit, and then integration into the GL, typically in three seconds or less. Literally, the concept of an expense report is now just a swipe. As you’re using your card, you’re actually creating the expense entry. And that fundamentally changes the user experience. But more than that, they innovated further and said, “Look, we’re going to integrate this with financial products like the corporate card.” So even the economics of what an expense reporting company looks like are fundamentally different.
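
To make the swipe-to-GL flow concrete, here is a minimal, purely illustrative sketch of that kind of event-driven expense pipeline in Python. This is not Center’s actual code or API; every type, function, and account code below is a hypothetical stand-in for the stages Steve describes (capture at swipe, AI categorization, approval, and posting to the general ledger).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SwipeEvent:
    """Hypothetical card-swipe payload captured at the card processing layer."""
    card_id: str
    merchant: str
    amount: float
    currency: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def categorize(event: SwipeEvent) -> str:
    """Stand-in for an AI categorization service (e.g., lodging vs. airfare)."""
    merchant = event.merchant.lower()
    if "hotel" in merchant:
        return "Lodging"
    if "air" in merchant:
        return "Airfare"
    return "Other travel"

def auto_approve(event: SwipeEvent, category: str) -> bool:
    """Stand-in policy check; a real system would evaluate per-company rules."""
    return event.amount < 500 or category in {"Airfare", "Lodging"}

def post_to_gl(event: SwipeEvent, category: str) -> dict:
    """Stand-in for the general-ledger integration step; account codes are made up."""
    gl_account = {"Airfare": "7001", "Lodging": "7002"}.get(category, "7099")
    return {"gl_account": gl_account, "amount": event.amount,
            "currency": event.currency, "posted_at": event.timestamp.isoformat()}

def process_swipe(event: SwipeEvent) -> dict:
    """Swipe -> categorize -> approve -> GL posting, in a single pass."""
    category = categorize(event)
    entry = {"merchant": event.merchant, "category": category,
             "approved": auto_approve(event, category)}
    if entry["approved"]:
        entry["gl_entry"] = post_to_gl(event, category)
    return entry

if __name__ == "__main__":
    print(process_swipe(SwipeEvent("card-123", "Hilton Hotel Atlanta", 289.50, "USD")))
```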

So now, let me move over to Troop. I met Dennis, the Troop CEO, a number of years ago. We had this view at Madrona that there has to be a better way of planning, booking, and expensing group travel. To be clear, so that everyone understands, group travel is about half of the corporate travel industry. This $1.4 trillion number that you sometimes hear — about half of that is group travel. All of that group travel is manual. And our view was that we could bring a level of automation and a better client experience to those business processes, the same way that Spotnana is driving a better experience in individual travel and Center is driving a delightful experience in expense reporting. And so we invested in Troop. Much like Spotnana many years ago, Troop has spent the last few years really building out the technology infrastructure to solve for the planning process, the booking process, and the expense reporting process.

There are a couple of things within that that, I think, are a further testament to this idea of an open platform. You can plan group travel within Troop, and it will manage your itinerary as a group itinerary, by the way, so you can see when your colleagues are arriving and what the group itinerary for the entire trip looks like. But not surprisingly, when you book, there are API calls into Spotnana to do the booking, and you don’t even know it. It’s just completely seamless. When you file the expense report, not surprisingly, it’s Center that’s doing all the expense reporting. It’s all seamlessly done and integrated into the process. And to me, this is the modern example — this is how modern applications will be built. You’ll consume services from the best-in-class providers of those services. In the case of expense reporting, it’s Center. In the case of group travel, it’s Troop. In the case of core travel infrastructure and individual business travel, it’s Spotnana. All are seamlessly integrated. And then, obviously, we’ll talk about how that’s integrated into Direct Travel, but these are incredible companies that just expand the value proposition.
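
As a rough illustration of the service composition Steve describes (plan in Troop, book through Spotnana, expense through Center), here is a hedged Python sketch. The client classes, endpoints, and payloads are hypothetical; they are not the vendors’ actual APIs, only the shape of an orchestration in which each best-in-class service is consumed behind the scenes.

```python
import requests

class SpotnanaClient:
    """Hypothetical booking client; base URL, path, and payload are assumptions."""
    def __init__(self, base_url: str, token: str):
        self.base_url, self.token = base_url, token

    def book(self, traveler_id: str, itinerary: dict) -> dict:
        resp = requests.post(f"{self.base_url}/bookings",
                             headers={"Authorization": f"Bearer {self.token}"},
                             json={"traveler_id": traveler_id, "itinerary": itinerary},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()

class CenterClient:
    """Hypothetical expense client; base URL, path, and payload are assumptions."""
    def __init__(self, base_url: str, token: str):
        self.base_url, self.token = base_url, token

    def record_expense(self, traveler_id: str, booking: dict) -> dict:
        resp = requests.post(f"{self.base_url}/expenses",
                             headers={"Authorization": f"Bearer {self.token}"},
                             json={"traveler_id": traveler_id,
                                   "amount": booking.get("total"),
                                   "booking_reference": booking.get("id")},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()

def book_group_trip(travelers: list[str], itinerary: dict,
                    booking_api: SpotnanaClient, expense_api: CenterClient) -> list[dict]:
    """Plan once, then book and expense each traveler via the underlying services."""
    results = []
    for traveler in travelers:
        booking = booking_api.book(traveler, itinerary)          # booking handled by the travel platform
        expense = expense_api.record_expense(traveler, booking)  # expense entry created automatically
        results.append({"traveler": traveler, "booking": booking, "expense": expense})
    return results
```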

Coral: Perfect segue, Steve. So we have Spotnana, that layer that everybody can build on top of now. Troop is going to manage the group travel side of things, and Center is going to take away these pesky expense reports that we all just love doing. So then, Christal, why don’t you tell us how Direct Travel fits into this vision of transforming corporate travel and what your vision is now that you’re CEO.

Transforming Corporate Travel: Direct Travel’s Strategic Vision

Christal: It might be helpful to give a little bit of the landscape of the different types of travel management companies. It’s kind of split down the middle. There’s a smaller group that approaches it from a service-first, very agent-heavy perspective. It’s built on technology that’s somewhat antiquated and closed — very much the kind of aging travel infrastructure Steve just described. These are the traditional travel agencies you might be familiar with that very much lead with service while working off this older, aging technology, which is a disadvantage for them and their customers. And then you have another group of companies that really lead with technology and, in many cases, have a technology-only view, with servicing a very distant second.

So, if you look at what exists out there in the ecosystem today, you really have two very different types of approaches. I think this goes back to the point I was making earlier about why this acquisition of Direct Travel is pivotal to the way forward — it’s not just about the technology. It is also about the service; you have to have exceptional technology and service combined. It’s the perfect coupling of the curated set of technologies that Steve mentioned before, and it’s the very first time that the three technologies will come together seamlessly, be serviced just as seamlessly, and, from Direct Travel’s standpoint, allow us to really show up for our customers. These are big changes for customers. These are things that really evolve what it means to partner with a travel management company like Direct Travel.

Being able to bring the technology is certainly part of it, but being able to service the technology as seamlessly and as carefully as we have in the past, now on this new tech stack, is really, really important. So that’s No. 1 — bringing that together in a way that’s very fluid and very seamless for our customers. Steve also touched on something that’s really important, and that’s data. It’s about being able to provide insights to our customers. What managed travel means today is likely to change in the future in terms of the way people think about the programs they have set up. We will be incredible partners in leveraging the data and insights we have to help our customers along the way, making sure their programs evolve as new opportunities emerge.

But the same thing goes for suppliers. Being able to really work with our suppliers to make sure we provide personalized information is the reason why NDC is so important right now: being able to get suppliers’ personalized offerings to travelers and connecting those things. And the last one we’ve already talked about, but I think it’s important to reiterate. It’s about the open platform and continuous innovation — it’s not just about the innovation that we will certainly pursue with AI and some of the other things we’ll be doing building on top of the application stack that Steve just talked about. It’s also about being able to provide value to our customers by having an open platform that others can innovate on top of as well.

We feel like when people decide who they want to partner with from a travel management perspective, it’s going to be a partner that evolves with them and leads the changes in the market. We can not only provide this best-in-class, open-architecture technology that brings all the benefits Steve just talked about, but we can also work with them on service, continue to evolve as their needs evolve, and even lead some of those changes in the future. Going all the way back to the perfect trip, each one of these components is critical to us on this quest of really trying to fully realize it. It is about putting them all together very carefully and very thoughtfully so that our customers and their travelers can benefit.

Coral: So as we’ve talked about all of these different pieces and how it all is going to fit together, Steve, when are we all going to be able to live this life of the perfect trip?

Realizing the Perfect Trip

Steve: Well, first of all, you can see why I’m so excited to partner with Christal again. I think that the mindset and the team that tend to follow Christal are really what will allow these great ideas to become a reality. So, I’m very excited about the next five years. Now that said, let’s bring it back to today. What I know Christal and the team are working on is standing up this new stack of Spotnana, Troop, and Center on the Direct Travel platform: everything from the core systems that run our business to how we provide service to our customers. We expect to be done with that in the summertime. We plan to showcase integrated travel and expense offerings, plus what our group travel offering might look like, at GBTA in Atlanta. We are looking forward to any customers who want to stop in and see the products we’re building. We’d love to have you join us.

We think that sometime in late summer to early fall, we’ll be shipping our first set of products. Now that said, one of the things that I love about Christal and the team she is working with is that this is just version one. There’s an ongoing innovation cycle and an innovation mindset that makes this incredible. So, literally every single month, you’re going to see new functionality being delivered and new services being made available to our customers. We are very, very excited about what we can do together in not just delivering a perfect trip but, frankly, reinventing the business travel industry to be far more customer-focused, far more supplier-focused, and just more streamlined.

Coral: Well, I know that all of us business travelers are going to be eagerly awaiting this, and I’m sure we could keep this conversation between the two of you going for hours. But why don’t we go ahead and stop there? I want to thank you both so much for joining me today.

Christal: Thank you very much.

Steve: Thanks for having us, Coral.

Building a Modern Database: Nikita Shamgunov on Postgres and Beyond

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Palak Goel hosts Nikita Shamgunov, co-founder and CEO of Neon, a 2023 IA40 winner. Nikita is an engineer at heart, with early stints at both Microsoft and Facebook. He previously co-founded SingleStore, which he grew to over $100 million in revenue. Neon is powering the next generation of AI applications with its fully managed serverless Postgres. In this episode, Nikita gives his perspective on the modern database market, building a killer developer experience, and how both are speeding up the creation and adoption of intelligent applications. Congratulations to Neon for recently hitting GA. And if you’re watching on YouTube, make sure to hit that subscribe button. With that, here’s Nikita.

This transcript was automatically generated and edited for clarity.

Palak: Nikita, thank you so much for joining me. Why don’t you share a bit about your background and what your vision is for Neon?

Nikita: I’m a second-time founder, and my core background is systems. After finishing my Ph.D. in computer science, my first real job, even though I was doing a lot of moonlighting jobs while studying, was at Microsoft, where I worked on the SQL Server engine. From there, I started a database company called SingleStore. It used to be called MemSQL, and I started as CTO. I wrote the first line of code. I had the experience of building a startup from scratch, went through Y Combinator, and lived in the office for a year until I got a place; we would wake up and code until we went to sleep. Then, I took over as CEO in 2017. The company had about $7 million in run rate, and I scaled it — as a CEO, that’s how you measure yourself — to about $40 million, after which I joined Khosla Ventures as an investing partner. Walking in, I told a famous venture capitalist, Vinod Khosla, that there was another database idea that I was really itching to build, which I couldn’t act on while at SingleStore because you can only have one engine in a company. I already had the idea because I had been thinking about it for three years. And he’s like, “Why don’t you just incubate it here at Khosla Ventures?” Which I did. It took off really quickly. Now that’s the company. Neon is three years old, has raised over $100 million, and we’re running 700,000 databases under management.

Palak: That’s awesome and super impressive. You’ve worn a number of different hats. One of the things you said that caught my attention was that you had the idea for Neon for three years. When did you feel it was the right time to build it?

When To Launch Your Startup Idea

Nikita: I think I’m late. That’s what I think. This idea should have started when AWS introduced Aurora, and it became clear that this is the right approach to building and running database systems in the cloud: the separation of storage and compute. It was just such a good idea for the cloud in general. It doesn’t matter what stateful system you’re running. Once that became obvious — and it became apparent in 2015, so I might be seven years late — I was trying to convince some of my former colleagues at SQL Server to go and build it while I was still running SingleStore. I said, “Every successful cloud service inside a cloud could be unbundled, and every successful SaaS service deserves an open-source alternative.” Those are the clear openings for building an alternative to AWS Aurora, which I already knew was a juggernaut ramping revenue really, really quickly.

That allows you to counter-position against Aurora and have very clear messaging because people already understand what that is. I was sitting on that for a while, unable to act on it. I also thought somebody else would build it, and no one did. I saw all these database attempts at building shared-nothing systems, such as CockroachDB, YugabyteDB, and CitusDB. None of them were gigantically successful, but Aurora was, and I was like, “This is the right approach.” Because I had been thinking about this for a while, the map in my head was already planned out. The downside is that timing is everything. The opening is narrower, but our execution was very good because the plan was well thought out.

Palak: Technology leaders at companies have a number of different databases to choose from. You mentioned CockroachDB, YugabyteDB, Aurora, Postgres, and even SingleStore. Where does Neon stand out?

Neon’s Bet on Postgres in the Modern Database Market

Nikita: It’s important to map the modern database market. It’s roughly split in half between analytics and operational systems. The company that still reigns over operational systems, or OLTP, is Oracle, but nobody is excited to start new projects on Oracle. If you take somebody graduating college who wants to build a new system or app, they’ll never choose Oracle. Because of the commoditization pressure, they will choose one of the open-source technologies: MySQL, Postgres, or Mongo. People won’t choose YugabyteDB or SingleStore, which breaks my heart, and people won’t choose from a long tail of niche databases.

Fundamentally, there are two major use cases in that very large modern database market. One is “I want to run my analytics,” a data warehouse or big data analytics use case. And then there is “I want to run my app,” your operational use case. On the operational side, it’s clear that Postgres is becoming the dominant force, and workloads are shifting into the cloud.

Now the question is, who will own Postgres in the cloud, and who will own the share of Postgres as a service? Right now, the king is Amazon, which is why we’re not inventing a new database. We are saying we’re Postgres, and we’re riding the trend lines of Postgres becoming the most important operational database on the planet. All the trend lines are there. We want to be on the right side of history. The question is, how do you become the default Postgres offering? That’s the question we ask ourselves every day. The answer is differentiation: what differentiation do you bring to this world? We think that differentiation is cycle speed for developers. Operational databases exist in the service of their users, and their users are developers.

Palak: That makes a lot of sense. You even touched on a consumerization of developer experience. I’d love to get your thoughts on how you build that 10X better developer experience.

The Importance of Developer Experience

Nikita: When we think about our ICP today, it’s a small team of developers who don’t have the skills to build their own infrastructure. That small team wants to deploy daily and optimizes for cycle time and overall speed. Over time, every team is going to be like this. What you want to do is zoom out first and look at the set of tools developers use today. Some of them have complete permanence; they’ve become standard. Something like GitHub has a ton of permanence, and GitHub is not going anywhere. Every day, GitHub entrenches itself more and more in developers’ minds.

Standing up VMs and running Node servers in those VMs will go away. Then you keep zooming in, and you ask, “What do developers do every day when they build a feature?” That comes down to the developer workflow. In that developer workflow, people create a branch, sync with Git, spin up their developer environments, and send a pull request. Modern tools plug into that workflow very, very well. If you take something like Vercel, it allows you to generate previews. Every pull request gets a Vercel preview: a copy of your web app created for just that pull request before you push it into production. Guess what doesn’t fit that workflow? The database.

The database is the centralized piece. You would have to give all your previews and developers access to that centralized piece, but you run this thing in production. “Whoa, this is scary.” Right? What if some developer writes a query that destroys the data or does something to the database? So now you protect the thing, and databases don’t support that branching-and-preview workflow. We’re changing that. At Neon, you can always branch the database, so that becomes an enabler. Of course, we are prescriptive with developers that use Neon; we’re telling them how to go about building features and what the role of the database is as they build them.

We introduced the notion of branching. We understand migrations, but what I mean by that is schema migrations. We let developers instantly create those sandbox environments with full protection. If there’s PII data, you can mask the PII data and whatnot. Their local and staging environments are fully functional and have access to a clone of their production database. And you can create those environments easily with an API call or a push of a button. This is an example of zooming into the developer workflow and ensuring developers feel like this is built for them. It follows their day.
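
As a concrete sketch of “create a sandbox environment with an API call,” here is roughly what branch creation looks like against Neon’s HTTP API in Python. The base URL, endpoint path, request fields, and response shape below follow Neon’s public API as I understand it, but treat them as assumptions and check the current documentation before relying on them.

```python
import os
import requests

NEON_API = "https://console.neon.tech/api/v2"   # assumed base URL; verify against the docs
API_KEY = os.environ["NEON_API_KEY"]            # Neon API key
PROJECT_ID = os.environ["NEON_PROJECT_ID"]

def create_preview_branch(name: str) -> dict:
    """Create a copy-on-write branch of the project's default branch.

    The branch starts as a clone of production data, so a pull-request preview
    can run against realistic data without touching production.
    """
    resp = requests.post(
        f"{NEON_API}/projects/{PROJECT_ID}/branches",
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        json={
            "branch": {"name": name},
            # Also provision a compute endpoint so the branch is immediately queryable
            # (field names assumed; see Neon's API reference for the exact schema).
            "endpoints": [{"type": "read_write"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # For example, one branch per pull request, mirroring the Vercel preview workflow.
    branch = create_preview_branch("preview/pr-1234")
    print(branch.get("branch", {}).get("id"))
```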

Palak: As you think about building, especially as a modern database company, there’s pressure to have good performance and reliability. How do you build the culture of the company? Or, as advice for founders who are serving the developer market: how do you keep developer experience a first-class citizen in the culture of the company, the people you hire, and the customers you bring on? The reason I ask is that I think Neon has done an incredible job of that.

Building a Culture of Reliability and Performance

Nikita: There are a couple of things. One is developer experience, and the other is uptime, performance, and reliability. The latter is incredibly important for a modern database company. If you don’t have those, you will never succeed. The first one, developer experience, is your differentiation. If you have it, you can win, because the latter — performance, reliability, uptime — are requirements. They’re necessary, but they don’t guarantee that you will be able to compete with Amazon, because Amazon has them too. Let’s be honest about it. RDS and Aurora are good, reliable services. You can build a business on them. You need both. So how do you get both? Reliability is a function of a few things. Do you have a robust architecture and a good team to develop that architecture? Once you have that in place, it’s a function of time.

You won’t be reliable on day zero. That’s why we took more than a year in preview, and we started by being extremely open and transparent, with ourselves and the world, about where our reliability was, showing our status page and feeling a little naked when people can see how reliable your system is. For a modern database company, not being reliable attracts a good amount of criticism, and we got it. A good amount of that came in the fall, when usage went up and we had this hockey-stick growth. We were onboarding, call it, a hundred databases a day at the beginning of 2023, and we were pushing 3,000 a day by the end of 2023. And so all hell broke loose. Then it comes down to setting the priorities right: you write postmortems, you have a high-quality team, and you slow down feature development when you need to in the name of reliability.

The architecture is key. If the architecture is off, you’ll never get there. Then, over time, you will get incrementally better and better, and eventually, reliability will be solved. Now that we’re GA, we feel good about the history of our reliability and the systems in place, but we can’t, with a straight face, say we are a hundred percent reliable. I don’t think there is such a thing when you run a service, but you can get incrementally better, and at some point, you have enough history to take the calculated risk of calling yourselves GA. When it comes down to developer experience, there are a few things that are important to get right. One is the team. It’s like Jiro Dreams of Sushi: you must eat good food to make good food.

The tools that we use internally should be top-notch. The tools that we praise out in the world should be top-notch. We love GitHub, we love Vercel, we love Linear, we love super well-crafted, razor-sharp tools that get a lot of developer love. I’m going to call out a competitor, which is like a big no-no, but I think Supabase is doing a great job in terms of developer experience. We’re not shy about talking about this internally to level up and potentially exceed the developer experience that they provide. I guess it’s about emphasizing what’s important, which is relentlessly investing in your team, having good architecture, understanding what good looks like, and getting better every day.

Palak: You have a long history of working on SQL Server and SingleStore, focusing on reliability and winning those upmarket accounts and workloads. How do you think about when it’s the right time to prioritize that at Neon versus focusing on net new workloads and a really good developer experience?

Product-Led Growth v. Sales Teams

Nikita: Heroku is an $800 million run-rate business. Half of that is Postgres, and half of that is PLG Postgres: product-led growth, small teams coming into Heroku and using Postgres. The other half is enterprise. You can make a lot of money by providing the best Postgres in the world for small teams, but you don’t want to get stuck in the same place as DigitalOcean. By the way, there is no shame in that; DigitalOcean built a public company and is still growing healthily, but the stock does not reward it at this point in time. DigitalOcean didn’t go upmarket. We want to go upmarket, but we gate it. We gate it on the signal we’re getting from our own users who are coming through the side door and just signing up.

We have a good number of enterprise customers, companies like EQT and Zimmer Biomet. There’s a fairly long list of small use cases inside large enterprise customers that will obviously keep expanding, but we don’t want to spread too thin and over-focus on that. So what does that mean specifically? We don’t have a sales team. We don’t have a sales team, and all of our growth — we grew 25% month over month in revenue last month — is PLG. That’s what focus looks like. We do have a partnerships team, and it helps us with strategic partners such as Vercel, Retool, and Replit, making sure we give them the best quality of service possible. We’re also now in the AWS Marketplace and working with some strategic partners there.

Standing up a sales team is not a strategic priority today. We have a very good and thoughtful board that is completely aligned with us on that strategy. So how do we gate it? We look at the signal of people coming onto the platform and converting and what kinds of teams those are. We also look at the partner pool, and these are the things that will eventually tell us, “Okay, now is the time.” A book that made a very good impression on me was “Amp It Up” by Frank Slootman. When he talked about Data Domain, they were first in technology development, and then they proved their go-to-market motion in the enterprise, because going upmarket, obviously, means enterprise.

They scaled their sales team very, very quickly. I want to live in the PLG world, and that $200 million of Heroku PLG Postgres tells me there are plenty of dollars to be captured in this world, but I track very closely what’s happening with larger customers, driven either by them coming directly to the platform or by partners. Once that’s happening, I’m going to scale the sales team very, very quickly. What I want to avoid is a prolonged process of having a sales team and then scratching my head about how expensive it is and how effective it is. I think having a very tiny team early on to prove everything, and then scaling very, very quickly, is the right way to go.

Palak: It’s 2024, and everyone’s talking about AI. You mentioned you’re in two unique roles. Not only are you CEO of a company like Neon that’s raised $100 million, but you’re also a venture partner at Khosla. You’re uniquely able to weigh in. How big do you think this wave is going to be? Is it all hype, or is it something that is here to last for a while?

Nikita’s Hot Takes on AI

Nikita: So I’m a true believer that this will change the world. It’ll change the world of software development, and it will change the world of living. The question is how long this is going to take and whether we’re going to have a trough of disillusionment sometime in the future. I don’t know the answer to that question, but I’m as bullish as can be on this. I even think that we’ll live in a much more beautiful place — as in our planet — because AI will eventually drop the price of beauty so much that it will just make sense to have it. You know how in the seventies we had all this Brutalist architecture, which is super-duper ugly.

I was born in the Soviet Union and grew up in Russia, where Moscow was very pretty. A lot of cookie-cutter houses were built, mostly in the sixties and seventies. The reason they’re so ugly is that they were cheap to build. I think we will arrive at a place where the cost of building things, whether software or even physical things built by robots (with all those models living in their robotic brains), drops so much that we can live in a much more beautiful world. The cost of design is going down, the cost of software engineering is going down, and the cost of construction will go down. We’ll kind of re-terraform the planet through this.

Palak: One of the things that every enterprise is figuring out is that every AI strategy requires a data strategy. How are you seeing this impact at Neon, especially focused on net new workloads and net new projects?

Nikita: We’re potentially moving into a world where OLTP is more important than analytics. What I mean by that is we just went from zero cloud spend on analytics to, I think, 20 billion at Microsoft. It could be 10, it could be 20; you can cross-check.

Palak: It’s a lot.

Nikita: That’s your big data; that’s your data warehousing; that’s your data lakes; that’s training models and stuff like that: training old-school models, the ML and AI workflows of the past. We’re going to have more apps. Apps need databases: operational databases, modern databases. That’s not going to go away; we’re going to have more apps, and therefore we’re going to have more databases. The whole inference thing does not belong to an operational database. It’s a new thing altogether. It’ll be triggered by both. So that’s an additional line of spend.

Some days, I’m bullish on data warehouses, and other days, I think they’ll become older technology because it feels like we’re babying those data warehouses way too much. I observe this by looking at the data team at Neon, where we’re obsessed with the semantic model, how clean the data is, and how well it represents the business. Data quality is very important because garbage in means garbage out. If you want the data in your data warehouse to reflect the state of your business, you have to be obsessed with it. Today, people are working on that. I think tomorrow, that whole thing might be a lot more simplified, where AI can deal with data being a little dirty because it understands where it’s dirty.

So, you won’t need to make a picture-perfect schema representing your business. I think the center of gravity might shift a little bit toward people calling into AI. I’m going to call out my friend Ankur Goyal of Braintrust Data and his tweet, where he talked about the changes we’re going to see in data warehousing. He’s obviously looking at it from his standpoint, and he thinks evals will start pulling data closer to the model and start replacing observability, maybe even some product analytics at a minimum, and then, over time, data warehousing. I don’t know. It feels archaic to collect data through ETL jobs and run SQL reports on it, but the alternative doesn’t quite exist today, so it’s still best in class.

But that whole AI thing should, one way or another, disrupt those things, to the point where the quality and the full history of that data will matter slightly less.

Palak: Totally. Referencing Ankur’s tweet, it is almost like a real-time learning system where the app can update itself using AI versus doing it offline, the human way: looking at the warehouse, looking at how users are reacting, et cetera.

Nikita: Yes, and the engine for crunching the historical data should still exist. I still want to know how my business is performing. I still want to analyze the clicks, traffic, and product analytics, but it is so tedious today to set all those things up, and that tedium should somehow go away. A self-learning kind of AI system that ingests the data like a human brain and becomes the driver for running those aggregations may change the center of gravity. Today, the center of gravity is firmly in the data warehouse or data lake; that is where the centerpiece of enterprise data is. We’re super eager to partner with all of them because that’s where the data is. You want to be close to that. Is it going to stay there or not? I don’t know.

Every App Needs A Modern Database

Palak: Do you see there being tailwinds on the transactional side or how do you see Neon fitting into some of these new modern AI architectures?

Nikita: Look at who’s doing AI stuff today versus yesterday. In the past, it was data scientists. I don’t know if we need that many data scientists anymore. Today, it’s app developers doing prompt engineering and building RAG systems. All of those are real-time operational systems. Every app still needs a database. So we’re not replacing operational systems; we are augmenting them. The people who are driving AI value in a company are mostly product engineers running stuff in TypeScript.

The whole Devin demo broke the internet, and there was an incredible amount of excitement about Devin and how it can add print statements, debug your code, and run full tasks. We’re actually using Devin already at Neon. We gave Devin a task to convert our CLI tool from TypeScript to Go. It didn’t complete it, but it made enough progress for us to see that maybe it’s not quite there yet, but it’s almost as good as an intern. And tomorrow, well, tomorrow it’ll just do the task. There are so many tedious things you want done. Say you look at the design and it should be picture perfect: change this button from red to gray. We just ran something like this today, and a human had to do it, but humans shouldn’t be the ones doing this kind of thing.

Palak: What intelligent application are you most excited about and why?

Nikita: I’m mostly excited about Devin-like systems. In the whole AI world, we’re replacing humans, not giving humans sharper tools. Software engineering is an elite job. Being able to replace a software engineer, obviously starting from a very junior one, represents, over time, an incredible amount of economic value because most companies are bottlenecked on the amount of engineering they can ship. That’s probably where my excitement is highest. And we’re certainly doing work internally on that front.

Palak: Beyond AI, what trends, technical or non-technical, are you excited about and why?

Nikita: Obviously, developer experience is very on-brand for us. I think we’re entering a world where there’s just so much technology, and the way to stand out is the incredible degree of craftsmanship that goes into creating a new product or a new tool. I’m both excited about that and in fierce competition with Supabase. They get it, too; we get it, and we will see. It’s the Jiro Dreams of Sushi argument again.

Palak: This isn’t a question, but on one hand, you have AI that’s potentially replacing the junior developers, and then you also have killer developer experience that’s making software engineering more accessible.

Nikita: Correct. By the way, if you make software engineering more accessible for humans, you are also making it more accessible to agents.

Palak: What unexpected challenge have you had where something didn’t go as expected, and how did you deal with it?

Nikita: Neon was born as a systems project. Then the developer experience was layered on top, and we couldn’t really move on to anything developer-experience-related until the foundation of that storage was built. Those things were not in the core DNA of the founding team; it took us a couple of iterations to get there, and we’re not quite there yet, but we’re getting better and better, and we see material progress. To a systems engineer, it may look like, “Oh, of course, that’s a lot easier than doing the hard stuff like the storage engine.” But it turned out to be quite hard to get right, and I appreciate the skill sets of the people who go and work on those things.

Palak: Any other parting advice for founders who are just starting the founder journey?

Nikita: I think the one piece of advice I would give is: don’t wait. I waited on this idea. I should have started it earlier. If you feel like you are ready — and maybe you feel like you’re not quite ready, but close enough — just go and do it. Then go on the lifelong journey of learning and being passionate about your users and the technology that goes into making delightful experiences for those users. And never look back.

Palak: Awesome. Well, thank you so much, Nikita. I really appreciate you taking the time.

Nikita: Absolutely. Thank you so much.

How Bobsled is Revolutionizing Cross-Cloud Data Sharing

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Investor Sabrina Wu hosts Bobsled Co-founder and CEO Jake Graham for the latest episode of Founded & Funded. Jake had stints at Neo4j and Intel before joining Microsoft, where he worked on Azure Data Exchange. Jake is revolutionizing data sharing across platforms, enabling customers to get to analysis faster directly in the platforms where they work. Madrona co-led Bobsled’s $17 million series A last year, which put the company at an $87 million valuation.

In this episode, Jake provides his perspective on why enabling cross-cloud data sharing is often cumbersome yet so important in the age of AI. He also shares why you can’t PLG the enterprise, how to convince customers to adopt new technologies in a post-zero interest rate environment, and what it takes to land and partner with the hyperscalers.

This transcript was automatically generated and edited for clarity.

Sabrina: Jake, it’s so great to have you with us today. Maybe to kick things off for us, why don’t you start by telling us where the inspiration came from to start Bobsled?

Jake: Absolutely. I’ve always wanted to start a company because I love the competitive aspect of business, the idea of trying to go out and win and bring something into the market. Over the first 15 years of my career, I found that you can get that in pockets in larger organizations, but it’s hard to get that feeling that it’s only the efforts of you and your team that stand between you and success.

I’ve always been on the lookout, but I’ve also always had two tough requirements before deciding to jump in. First, I wanted to believe in the idea. I wanted to feel, especially in my space of data infrastructure focused on the enterprise, that I understood the problem and that a product in that space deserved the right to exist. Second, it was critical to me to have at least one strong co-founder: for me, especially, a CTO I deeply believed in and thought I could win with.

As far as how Bobsled itself came about, I sometimes joke that life has three certainties: death, taxes, and reorgs. And it was a reorg that made Bobsled exist. I’d been at Microsoft Azure in a role that I enjoyed. I spent about 18 months building the strategy and plan to create an Azure Data Exchange and an Azure data ecosystem to make data significantly easier to share across Azure consumers. We finally got a significant budget to hire in earnest and start building.

When that happened, a significant reorganization resulted in the birth of Microsoft Fabric. Microsoft Fabric was a fantastic decision. It was and is a product I’m excited about, but it made what I was building not make sense at the time. It’s finally starting to make sense now that Fabric has gone GA. But I remember going for a run and realizing that I’d finally uncovered a problem; in this case, it was around not just data sharing but changing how data is accessed for analytics and evolving data integration to be cloud-native rather than API-native. I was motivated to solve that problem. I’d finally found something that deserved the right to exist as a successful business.

It would also be totally unfair for me not to mention that my wife had been pushing me to start something for a while. She realized that — I wouldn’t say that I would be happier founding, because this is a really hard job — I would be a lot more fulfilled. So, my wife, Juliana, was the secret co-founder who pushed Bobsled to exist.

I spent a couple of days maturing the idea. I started thinking, who do I know that I can start to bounce this off of? I called the best engineer I’ve ever worked with, a gentleman I’d worked with at Neo4j. We hadn’t spoken in probably a couple of years. I gave him this idea, and he said, “Jake, are you asking me to be your co-founder?” And I said, “I mean, I’m starting to think about this.” And his response was, “I’ll quit tomorrow. Let’s do this. How long do you need? I think we should get going quickly.” I was really taken aback, and I said, “Tomorrow feels a little fast, but we can talk again tomorrow.” At this point, I sat back and realized I was starting a company.

Sabrina: I love that, and I love the co-founding story. It’s definitely a testament to you as a founder to be able to attract great talent and build Bobsled. Can you help us set the stage more and explain what exactly Bobsled does? What traditionally were companies doing before Bobsled to access and share data?

Jake: Bobsled is a data-sharing platform that allows our customers to make their data accessible and available in any cloud or data platform where their customers work, without having to manage any of the infrastructure, accounts, pipelines, or permissions themselves. Fundamentally, the product is pretty straightforward. You grant Bobsled read access to wherever you store your data, whether that’s files in an S3 bucket or data within Databricks, BigQuery, Snowflake, et cetera. You reach out to our API to say, “This data needs to be consumed by this individual, in this cloud, on this data platform, and be updated in this way.” We then manage intelligently and efficiently replicating that data to where it will be consumed, on white-label infrastructure. Whoever’s consuming that data feels like they’re getting a share from ZoomInfo or CoreLogic, or any of our other customers, but the producer didn’t have to build any of those pipelines. It allows producers to move from putting all the work on their consumers to making it easy, without suddenly having to manage an infrastructure footprint in every platform where their customers work.
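
To make “reach out to our API and say this data needs to be consumed by this person, in this cloud, on this platform” concrete, here is a purely hypothetical Python sketch. It is not Bobsled’s actual API; the endpoint, field names, and platform identifiers are stand-ins for the shape of request Jake describes, assuming read access to the source has already been granted.

```python
import os
import requests

def create_share(api_base: str, api_key: str) -> dict:
    """Hypothetical request: deliver a dataset to a consumer's platform of choice."""
    payload = {
        # Where the producer's data already lives (read access assumed to be granted).
        "source": {"type": "s3", "bucket": "producer-data", "prefix": "daily-extracts/"},
        # Where the consumer wants to work with it.
        "destination": {"platform": "snowflake", "cloud": "aws", "region": "us-east-1"},
        "consumer": {"account": "consumer-account-id", "contact": "data-team@example.com"},
        # How the share should stay up to date.
        "delivery": {"mode": "incremental", "schedule": "daily"},
    }
    resp = requests.post(f"{api_base}/shares",
                         headers={"Authorization": f"Bearer {api_key}"},
                         json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    share = create_share("https://api.example-sharing-platform.com/v1",
                         os.environ["SHARING_API_KEY"])
    print(share)
```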

It’s still crazy to me that the volume of data used for analytics, data science, and machine learning has grown a couple of orders of magnitude over the last decade. The actual mechanisms for that data to be accessed are almost exclusively the same as ten and even 20 years ago. The overwhelming majority of data that’s used to drive any form of data pipeline is either pulled out of an API or an SFTP server. That doesn’t make sense in a world where so much of the value being generated by modern enterprises is in data and in which you need that data to be consumed by others, whether in your organization or others, to extract that value. We needed to see the cloud-native data exchange mechanism take off.

The reality is that the right way to solve this problem is generally by the actual data platform itself. The best way to access data within Snowflake is Snowflake Data Sharing. They have built a fantastic experience where you don’t have to go through and ETL data from another platform. You can query live, well-structured, up-to-date data as long as it’s already in the same cloud region and in Snowflake.
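
For readers less familiar with what Snowflake Data Sharing looks like in practice, here is a minimal producer-side sketch in Python using the snowflake-connector-python package. The SQL follows Snowflake’s standard share commands, but the database, schema, table, and account identifiers are placeholders, and the exact grants and identifier formats will depend on your own setup, so treat this as an illustration rather than a recipe.

```python
import snowflake.connector

# Placeholder credentials and object names; replace with your own.
conn = snowflake.connector.connect(
    account="my_org-my_account",
    user="DATA_PRODUCER",
    password="...",
    role="ACCOUNTADMIN",  # creating shares typically requires elevated privileges
)

statements = [
    "CREATE SHARE sales_share",
    "GRANT USAGE ON DATABASE sales_db TO SHARE sales_share",
    "GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share",
    "GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share",
    # Make the share visible to a specific consumer account in the same region.
    "ALTER SHARE sales_share ADD ACCOUNTS = consumer_org.consumer_account",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()

# On the consumer side, the shared data is then queried live, with no ETL, e.g.:
#   CREATE DATABASE shared_sales FROM SHARE producer_org.producer_account.sales_share;
#   SELECT * FROM shared_sales.public.orders;
```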

That sharing mechanism was pioneering, and every other major platform followed it. The problem is that it puts a significant infrastructure burden on the actual producer. We want to move away from a world in which every data consumer has to ETL the data out of whatever platform it’s in. The issue you get with that is that sharing protocols aren’t connectors; it’s not just the traditional model of tossing data over the wall. They provide a better consumer access experience because you have to bring the data to where it will be consumed, structure it in a way that is ready for analytics, and then grant access to it.

I believe data sharing would be how data is accessed for modern analytics and ML pipelines. But in order to make that happen, the data producers needed a way to interact with all of these systems without having to do it all themselves. That’s fundamentally what Bobsled is.

Sabrina: As you alluded to, data sharing is a critically important part of the stack. I love how easy Bobsled has made it for data providers to actually share the data and how, to your point, you’re also agnostic across the different cloud platforms. If you’re on Snowflake and the other party is on Azure, maybe it isn’t easy natively, but you’re allowing companies to share across different cloud platforms.

You’re flipping this idea of ETL on its head, which is one of the parts I love the most about what Bobsled can do. I’m curious, though, to this point about different cloud providers, what role does cross-cloud data sharing play in this new age of AI and the new way that companies are starting to build?

Jake: Someone recently told me, “I really hope this ML wave sticks this time.” I’ve been working in ML for over a decade, and it gets bigger every year. This just seems kind of like a natural evolution of what we’re doing.

Something you talk about a lot is the idea of intelligent applications. That term applies to an enormous amount of where the market is going, though I’m not sure there’s one strong definition around it. The way I think of them is that they’re applications that leverage data in order to better automate whatever workflow they solve for, and that also generate more valuable data as a part of that. I think that’s true whether you’re leveraging an LLM, leveraging more traditional machine learning, or building on more human-in-the-loop analytics. In order to continue to move toward this more data-driven and now AI-focused age, data has to be able to move between the systems and organizations where it’s generated.

One of the things people are starting to realize is that often, in any application, a lot of the value it provides is actually in the data generated by running that application. There’s an enormous amount you can do in that workflow to use that data to improve it and continue to automate it. Another thing we’ve learned about data over the past decade is that it becomes more valuable when it’s blended with other data sets. Within data, almost always, the whole is greater than the sum of its parts. If you want to think intelligently and predict, and even more so if you want to do that in an automated fashion using LLMs, you have to be able to bring in the data that represents different aspects of a problem, and that data is never sitting in one system.

We saw a push led by Salesforce around the Salesforce Data Cloud: well, if we can get everyone to bring all of their data into our application, we can solve all the problems. And the answer is no, you can’t. You might be able to answer many questions, but in reality, data is being generated across the enterprise, its partners, and other vendors. It needs to flow into the systems where it’s going to generate insight. I fundamentally believe that data sharing will be the mechanism to do that.

How I think this shift enables the move toward the age of AI is that we’re going to allow every company to create data products and have them be accessible wherever they need to work, without, again, having to manage an infrastructure footprint and an army of people who understand how clustering works differently in Databricks versus Snowflake, how the permission protocol works differently for an S3 endpoint versus an Azure Blob store, and how to replicate this data. There are better things for people to be working on. Bobsled is going to be a lot of the plumbing for how the world becomes AI-driven.
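For a sense of how differently a simple “let this consumer read my data” grant looks on just two of those storage layers, here is a minimal sketch. Bucket, container, and account names and the key are placeholders, and this is not Bobsled code, just the kind of per-platform plumbing being described.

```python
# A minimal sketch of why "grant read access" is a different exercise on each
# platform. Bucket, container, account names, and keys are placeholders.
import json
from datetime import datetime, timedelta, timezone

import boto3  # pip install boto3
from azure.storage.blob import BlobSasPermissions, generate_blob_sas  # pip install azure-storage-blob

# S3: access is granted with an IAM-style bucket policy attached to the bucket.
s3 = boto3.client("s3")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # consumer account (placeholder)
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::producer-bucket/exports/*",
    }],
}
s3.put_bucket_policy(Bucket="producer-bucket", Policy=json.dumps(policy))

# Azure Blob Storage: a common pattern is a time-limited SAS token signed with
# the storage account key, handed to the consumer as a URL.
sas = generate_blob_sas(
    account_name="produceraccount",
    container_name="exports",
    blob_name="orders.parquet",
    account_key="<storage-account-key>",  # placeholder
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
consumer_url = f"https://produceraccount.blob.core.windows.net/exports/orders.parquet?{sas}"
```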

Sabrina: Data sharing is what everyone has referred to for years as a critical part of the modern data stack. You’ve hinted that you think the modern data stack is at a turning point. We’ve talked about this a little bit, but I’d love you to walk me through your thought process. What is this turning point? How do you think it might impact the tech sector and startups overall?

Jake: My general feeling about the modern data stack is that it’s no longer a valuable term because it won. There was a time when the modern data stack described a few companies and categories that were making analytical infrastructure cloud-native. That was a relatively exclusive set of companies, so it didn’t include any of the previous vendors who weren’t cloud-native, whether that was Informatica or, in a lot of ways at that time, even Microsoft. And it also didn’t include any part of data that wasn’t pure analytics. There was a separate data science and ML stack, and there’s been a separate operational stack.

I think what has happened is that those incumbents have caught up and are now cloud-native. The companies that weren’t purely in the analytics stack have started to move in there. The more successful companies in the modern data stack are branching out beyond it. The discrete categories within the modern data stack are blending.

There’s been a lot of unnecessary angst around the death of all the companies in the modern data stack. I view it as a victory. The modern data stack is just now a key part of technology. We’re moving from a purely software-centric technology market to a data-centric one. That’s the idea for me of intelligent applications, or if you want to call it the age of AI. The software is incredibly important, but it needs the data. It’s no longer enough to build for a very specific set of users in a very specific category. We now have a much larger field to play in, but also, it’s a much more competitive field.

I think that’s a little bit of what people have been shying away from in the modern data stack. It is like: But wait, this company we thought was going to be a unicorn isn’t going to get there just by solving this slice of the problem.

I go back to the first thing I said: I love the competitive nature of this. I love the fact that every day you can wake up and figure out, okay, how do we execute our way to win, and how do we make sure that we’re solving real problems and that we can bring those solutions to market, and that we can get that feedback loop going? A lot of modern data stack companies are going to be incredibly successful. I don’t think that term is valuable anymore.

Sabrina: Do you think there’s going to be more consolidation of the players within the data stack? Do you think people are going to start to bleed into the different swimming lanes?

Jake: It’s going to happen, and for good reasons, because the most successful companies are going to want to keep growing. Again, this is a competition. You have to win more every day. Those companies have earned the right to expand beyond their existing categories. I think that’s really good.

I use a joke term: the enterprise strikes back. The vast majority of software spending in the United States is done by the Fortune 1000, and that’s true globally as well. The way in which enterprises buy technology is fundamentally different. The idea of fully adopter-led buying, tying together many different solutions, is just not working.

Benn Stancil is probably the best writer in our space. I don’t want to parrot his words and take credit for them myself, but he wrote a really great piece on this using the example of Alteryx. A lot of startups looked at Alteryx doing all of these things and said they could do just one of them, best in class. It became easy to say, “We’re going to attack this part of it.” Until you get in and realize Alteryx is selling to the enterprise, and that complexity is not for nothing. It’s because they built it to meet their customers’ needs. And that breadth is, in and of itself, a feature.

We’re seeing a little bit of that enterprise strikes back. There’s still a lot of value in the way software was built and brought to market for a long time. We need to learn to blend some of the PLG and purer product-led means of software and product development with the realities of the enterprise, because that’s what is going to get you to the promised land. If you can’t add value to the largest companies, you’re putting a pretty low ceiling on yourself.

Sabrina: One phrase that I’ve heard you say before, and we’ve talked many times before, is that “You can’t PLG the enterprise.” Maybe you could talk a little bit about what that means and how you view that statement.

Jake: It means a couple of big things to me. Holistically, as an industry, we’ve lost respect for the enterprise sales part of enterprise software. The pendulum has swung a little too far toward “the product should sell itself.” In some ways, that’s for really positive and great reasons. It has pushed us to think about product design and user experience, and a lot of it has been driven by individuals within organizations being much more empowered to adopt technology. There are hundreds of millions or billions of dollars in truly product-led growth revenue happening every day. I’m not saying that’s not real, but it doesn’t consider how large enterprises make decisions around technology.

Think about a few things. Often, your buyer and your user are not the same person. Generally, if you’re building something of strategic value, you’re not starting small; you need to be attached to a strategic initiative in which there are multiple decision-makers, not just the person who will be using your product. That means creating a sales motion that allows you to get in front of those people, understand their requirements and goals, navigate their organization, and transfer your excitement about your product to them. That’s a big part of what people have missed: the art, the craft, and the need for actual enterprise selling.

The second part is that you also need a product development process that feeds into that. Every company, but really every startup, lives and dies based on its feedback loop. One interesting thing is that, as an industry, we’ve all internalized the idea that our product ideas are not that good. Focus on a problem, not your solution; ship quickly, get feedback, and iterate. That is awesome in a PLG world where the cost of getting that feedback is incredibly low. It’s challenging in the enterprise space because there is a gap between your buyer and your user. It’s often easier to get time with executives than with the users who are going to implement, so you’re not getting perfect information there. If you are building something entirely net new (there is no direct equivalent to Bobsled), even your user will think they understand how they’re going to use your product, and they’re going to be somewhat wrong once you actually get into production.

The biggest part of it is that the gap between “I’m interested in this product,” “I’m evaluating this product,” and actually using it in production is longer and more unpredictable. If you don’t build a product development process that views your sales process as a key component of feedback, you end up going back to building in a vacuum. You really need to get to a place where, one, you trust and listen to your sales team, or your go-to-market team in general. Two, you’re much more actively asking your customers questions. Three, you’re much more willing to ship even more iteratively. What I talk about with the team a lot is that we win by fixing the problems our customers identify with our product faster than they think would even be possible.

If you take the existing agile processes we’ve all built over the past five to 10 years and try to apply them to this more challenging feedback loop, I don’t think you’ll figure out how to interpret the signals. When I say that you can’t PLG the enterprise, it’s not just that you can’t put a trial on your product and hope people come in, and it’s not just that you have to figure out how to navigate the enterprise and get to real decision-makers, budget, and stakeholders. If you build your product in a way that doesn’t take into account how feedback from enterprises is generated in your development cycle, you’re not going to build the right product.

If you’re a product like Bobsled, where larger customers experience the problem more acutely (that’s where we’re starting), you won’t build a product they can adopt. You’ll also run into the other thing many companies, especially from the modern data stack age, hit: we know we solve a real problem, but the product and the go-to-market don’t seem to quite fit where the actual money is. How do we get over that? I really think the answer is to fall in love with enterprise sales again. Really care about it.

Sabrina: Many companies that we talk to are struggling with this because PLG has become so favorable and fashionable, to say the least. But in reality, it sometimes comes back to basics. That’s what you’re alluding to here: How do you get that feedback loop going? How do you listen to your customer, especially when there’s a disconnect between the user and the buyer?

One critical component for Bobsled is being friendly with all the hyperscalers and data platforms, and you partner with many of them, including Snowflake, Databricks, GCP, Azure, et cetera. How have you gone about managing these partnerships? Especially as an early-stage company, I think it can be very challenging. These companies are very large. Do you have any advice for entrepreneurs who might be navigating some of these relationships?

Jake: I’ve always been inspired by Apple’s 1984 commercial, where they made clear they had an enemy, which in that case was IBM, to try to motivate the team. I assumed it would be one or a few of these platforms, because they were building walled gardens, and we were building a product to bring those walls down and connect these different platforms. I was shocked when none of those platforms oriented to Bobsled that way. They all reacted to Bobsled with, “Oh, wow, this solves a real problem for our customers, and it’s one that we don’t want to solve ourselves.” It goes back to our specific problem: it involves managing infrastructure across all these different platforms, and that’s a hard thing for them to do themselves, although there are plenty of examples of them doing it.

We were pleasantly surprised. There was a warm reception from executives early on to partner. I was fortunate to have been on both sides of partnership motions, at Microsoft and at Neo4j, from both the hyperscaler and the startup. I think these partnerships are super dangerous for early-stage startups in that they can suck an enormous amount of your time, energy, and attention into them in ways that will eventually pay off, but not in the timeframe that you need.

My advice for the early stage would be to focus on partnering with individuals at hyperscalers. All of these companies have effective machines that move billions of dollars in revenue for partners, and almost none of those billions come from early-stage startups. You need to get to a place where you’re at scale, and then they can help you scale more effectively. At that point, you need to be relevant to the individual AEs across every platform, with an incredibly repeatable sales process, pricing motion, and mature integration. I look forward to that day, but it’s not Series A.

The way that we’ve approached it, which has been effective, is that we’re now starting to clip into the machine a bit more and start training sales teams at some of our partners like Snowflake and Databricks. The first two years have been about finding executive champions willing to help us navigate, take calls with prospects and propose Bobsled, and individually send us leads. I can think of a few of our first enterprise customers where my head of sales was on his cell phone directly with an executive from, in this case, Databricks. Or we were having joint meetings with executives at Snowflake and having them really reinforce the story, not just that the problem we’re solving is real, but that we are the company to help solve it.

You’re not winning by convincing 5,000 companies at once or by getting 50,000 Microsoft sellers to sell your product. You’re winning one by one. Day by day, you go out and convince these customers that your product is worth betting on. Focus on those individuals, get those wins, and that will earn you the right to scale.

Sabrina: As you thought about all these different partnerships, were there ways you prioritized them? You said it can take up a lot of time, so as an early-stage startup, how do you think about who might be the most valuable partner to you? How do you stack rank as you’re thinking about building? That’s a critically important part of the process for founders, so I’m curious how you thought about it.

Jake: Part of that goes back to the last piece of advice: at that stage, you’re partnering with individuals, not with organizations. Where you find individuals who really want to lean in and who have influence in their organization, and specifically in the part of the organization you want, you should be opportunistic more than anything else.

For us personally, despite my coming from Azure and having a lot of close contacts at AWS and GCP, where we are partnered, we found that the breadth of their portfolios made it much harder. So for us, it has been Snowflake and Databricks, because everything that we do helps them achieve their goals. We’re in one of the few positions where, by working with Bobsled, a customer does two things immediately. First, it enables them to share their data on every one of these platforms, which directly drives consumption for each of those platforms; getting data shared in is a good thing.

The other thing it does is it allows you to centralize more and more of your data in the platform of your choice because you are now able to access these other platforms, and you don’t need to worry about lock-in. So we’re in a bit of a unique position with most of these platforms where they all win when we win. It’s one of the few times where we could go to an enterprise and say, “Hey, Databricks would like to talk to you with us because they’re going to win if we win. And Snowflake would like you to talk to us because they’re going to win if we win.”

Anything you can do to focus on where you are driving value they care about helps; it’s similar to enterprise sales in that way. If you’re trying to sell something, you must attach it to a strategic initiative that people care about. You’re not going to close your first half-million-dollar deal on a science project; it needs to tie to something. It’s the same with a partner. Find an individual you can attach to those strategic priorities. As much as you can, find the actual part of the business, or ideally the entire company, where you’re helping to drive their strategic priorities. You’ll find that pays a lot more early dividends than thinking, “If I could just get in front of GCP’s 30,000 sellers, I wouldn’t have to hire my own sales team.”

Sabrina: That’s great advice and I’m sure founders listening today will find that to be very helpful.

As we wrap up here, Jake, just a couple of questions left for you. One is I’m curious, what’s an unexpected challenge? Maybe one of those oh my gosh moments where something just didn’t work out the way that you thought it would, and how did you deal with it?

Jake: I think the one that’s coming to mind right now is that I believe in executive coaching, so everybody at Bobsled has a coach, and I have two. One of the challenges of being a CEO is figuring out the right timeframe for you to focus your energy on. You are responsible for the strategic vision. Especially if you’re building a VC-backed business, you can’t just be focused on, well, if we just win these customers. It needs to be a part of a larger master plan. Moments where I drove the least clarity were when my mind was focused on what it would look like for Bobsled to win two years out or even a year out and not focused on what exactly we needed to do to win right now.

I have an amazing leadership team that is often much more experienced than I am in the roles they bring. I can’t just say, “You figure out now, and you go figure out where we go next.” That’s not how this works. Make sure you’re actually defining what winning looks like for your team and talking constantly about how to win right now.

Make sure you’re still carving out the space for yourself to evaluate: Are we going in the right direction? Are we building ourselves into a corner? Is there enough total addressable market here? When is the right time for me to start thinking about expanding our TAM? When is the right time to think about bringing on that next leader? But don’t get caught up in constantly building for and thinking about the future. Make sure you’re focused on your actual win condition today.

Sabrina: I love all that advice. It’s critically important to stay focused on what’s happening in the moment but also have that broader vision, knowing that the 10-year plan is potentially to get some other big wins, maybe that IPO down the road, or whatever else you’re looking forward to.

Jake: You’ve got to be able to convince yourself that there’s a path for you to get there. Otherwise, don’t take this path.

When we had a customer win, I must’ve had at least two minutes of real excitement around it before I thought, okay, we need 100 and some odd more of those to IPO. You can’t celebrate the touchdowns for long. That’s the reality of VC. Madrona didn’t invest in Bobsled because they thought we could get to the Series B; you’re investing in Bobsled because you think we can go significantly further. That makes the decision-making harder. I’m building for an IPO and I need to validate that, but if we don’t do what we need to do to get to the Series B, it’s all for nothing. How do you constantly live in that time shift? It’s a mental challenge I hadn’t thought about before, and I now spend a lot of time thinking about it.

Sabrina: Well, it has been a pleasure and honor working with you, Jake, and the rest of the Bobsled team. We’re excited about what you guys are building and know that the best is yet to come. So thanks for allowing us to be part of the journey. And thanks for joining us today. Really appreciate it.

How Writer CEO May Habib Is Making GenAI Work For the Enterprise

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Vivek Ramaswami hosts May Habib, co-founder and CEO of Writer, a 2023 IA40 winner. Writer is a full-stack generative AI platform helping enterprises unlock their critical business data to deliver optimized results tailored to each individual organization. In this episode, May shares her perspective on founding a GenAI company before the ChatGPT craze, building an enterprise-grade product and go-to-market motion with 200% NRR, and whether RAG and vector DBs even have a role in the enterprise. This is a must-listen for anyone out there building in AI.

This transcript was automatically generated and edited for clarity.

Vivek: So, May, thank you so much for joining us. To kick us off, we’d love to hear a little bit more about the founding story and your founding journey behind Writer. You founded Writer before the current LLM hype, in what we like to call the pre-ChatGPT era, and so much has changed since then. So what is Writer today? What is the product? What are the main value propositions for the customers that you’re serving? We’ll get into the enterprise customers, this great list that you have, but maybe just talk a little bit about what Writer is actually doing and how you’re solving these problems for the customers you’re working with today.

May: Writer is a full-stack generative AI platform. Most of the market sells folks either API access to raw large language models or productivity assistants. And there’s lots of room inside companies to use both kinds of solutions for different use cases. But for team productivity and shared workflows, there’s a real need to hook up LLMs to unstructured data, and to do it in ways where there are real guardrails around the accuracy risks, the hallucination risks, the brand risks, and the compliance risks in regulated industries. And to get to value quickly when you’re building things like synthesis apps, digital assistant apps, and insight apps, there’s a real need to capture end-user feedback.

By putting all of that in a single solution, we have drastically sped up the time to market for folks spinning up really accurate, ready-to-scale applications, and we do it in a secure way: we can sit inside of a customer’s virtual private cloud. To be able to do that, we’ve had to own every layer of the stack. We built our own large language models. They’re not fine-tuned open source; we’ve built them from scratch. They’re GPT-4 quality, because you’ve got to be state-of-the-art to be competitive. And we’ve hooked up those LLMs to the tooling that companies need to be able to ship production-ready stuff quickly.

The importance of building your own AI models

Vivek: You talk about how Writer has built its own models, and this has been a big part of the moat that Writer has built. I think this is something that we’ll touch on, but something we continue to see as so important is being able to own your stack, given how many different technologies we see competing on either end. In terms of building your own models, what did that entail for how you thought about building this company from day one? And how would you describe what Writer has done around building models and this movement from large models to small models?

May: It’s really hard to control an end-user experience in generative AI if you don’t have control of the model. And given that we had chosen the enterprise lane, uptime matters, inference matters, cost at scale matters, and accuracy really matters. We deemed those things early on pretty hard to do if we were going to be dependent on other folks’ models. So we made the strategic decision to focus on text, meaning multimodal ingestion and text production. We can read a chart and tell you what somebody’s, I don’t know, blood sedimentation rate is because we read it somewhere on the chart, and we can analyze an ad and tell you whether it’s compliant with brand guidelines, but we’re not producing imagery. With that focus on multimodal ingestion and text and insight production, we made a strategic call almost a couple of years ago to continue to invest in remaining state-of-the-art. Today, our models range from the Palmyra-X general model, to our financial services model, to our medical model, to our GPT-4 zero-shot equivalent.

When you pair that with folks’ data, it’s pretty magnificent. Unlike other powerful, state-of-the-art models, this is a 72-billion-parameter model that can actually sit inside somebody’s private cloud and not require a ton of resources. A whole host of things have allowed us to be state-of-the-art and still relatively small. That’s still a massive, internet-scale model, but everything from the number of tokens the models have seen to just how we have trained them has helped us be super-efficient.

Those are all decisions that stem from that first strategic one: the really important problems are going to have to be connected to data that folks don’t want to leave their cloud. To do that, we’d have to be in there, and it would have to be a model that could be efficient, so we weren’t going to cram a bunch of different skills into one. That’s why we’ve got 18 different models, trained on similar types of data, not too dissimilar, but the skills they are built for are different.

The role of vector databases and RAG in enterprise AI

Vivek: One point you made here makes me think of a LinkedIn post you recently wrote, which was illuminating in many ways. You talked about unstructured data and where Writer can go using its models. You sit inside an enterprise and take advantage of the enterprise’s data, which is so important. This is something we hear a lot from companies: they need to be able to use their own data securely and efficiently when feeding it into these models. We’re hearing a lot about RAG, Retrieval-Augmented Generation, and a lot about vector databases, and a number of them have caught fire. We’re seeing a lot get funded. And I’m sure a number of the founders who listen to this podcast have either used or played with a vector DB. You have an interesting perspective on RAG and vector DBs, especially from the enterprise perspective. Please share a little bit about the post you wrote and the perspective that you have on this tech.

May: I don’t want to be the anti-vector-DB founder. What we do is an output of the experiences that we’ve had. If embeddings plus a vector DB were the right approach for dynamic, messy, really scaled unstructured data in the enterprise, we’d be doing that, but at scale it didn’t lead to outcomes that our customers thought were any good. For a 50-document application, say a digital assistant where folks are querying across 100 or 200 pages across a couple of things, the vector DB and embedding approach is fine. But that’s not what most folks’ data looks like. If you’re building a digital assistant for nurses who are accessing a decade-long medical history for a specific patient, against policies, against best practice, against government regulation on treatment, against what the pharmaceutical company is saying about the list of drugs that patient is on, you just don’t get the right answers when you are trying to chunk passages and pass them through a prompt into a model.

When you’re able to take a graph-based approach, you get so much more detail. Folks associate words like ontologies with old-school approaches to knowledge management, but in the industries we focus on, regulated markets like healthcare and financial services, those have really served organizations well in the age of generative AI, because they’ve been huge sources of data that let us parse through content much more efficiently and help folks get good answers. When people don’t have knowledge graphs built already, we’ve trained a separate LLM, one that’s seen billions of tokens, a skilled LLM that actually builds up those relationships for them.
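For readers who haven’t built one, here is a minimal sketch of the chunk-embed-retrieve baseline being described above. The model name, chunk size, and sample documents are arbitrary choices for illustration; this is the generic pattern, not Writer’s pipeline.

```python
# Minimal illustration of the "chunk + embed + vector search" baseline.
# Generic sketch only; model name, chunk size, and documents are placeholders.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 500) -> list[str]:
    """Split a document into fixed-size character chunks (the naive approach)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

documents = {
    "policy.txt": "Patients on anticoagulants require a medication review every 90 days...",
    "history.txt": "2014-2024 medical history: diagnoses, medications, allergies...",
}

# Index: embed every chunk from every document.
chunks = [(doc, c) for doc, text in documents.items() for c in chunk(text)]
vectors = model.encode([c for _, c in chunks], normalize_embeddings=True)

def retrieve(query: str, k: int = 3):
    """Return the k chunks whose embeddings are closest to the query (cosine)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q
    top = np.argsort(-scores)[:k]
    return [(chunks[i][0], chunks[i][1], float(scores[i])) for i in top]

# The retrieved passages get pasted into the prompt. The argument above is that
# at enterprise scale, nearest-neighbor passages alone miss the relationships
# (patient -> policy -> regulation) that a knowledge graph makes explicit.
print(retrieve("Which review schedule applies to this patient?"))
```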

Vivek: You were saying you don’t want to be the anti-vector-DB founder, and I don’t think this is anti-vector-DB; I think it’s that chunking and vector DBs work for specific use cases. What was interesting about your post was the point that, from the enterprise perspective, you need a lot more context than chunking can provide. This is helpful because many of the founders or companies working in narrow areas don’t often see the enterprise perspective, where all of this context matters versus some chunking. You probably got some interesting responses.

May: Both customers and folks were like, “Thank you so much. I sent that to my boss. Thank you, God.” I’m a non-technical person, so when I explain things to myself, I try to share them with other non-technical folks so that they can also understand them, and that actually helps technical people explain things to other technical people.

We got a lot of folks reaching out to say thanks. Now, of course, our customers know this already. Still, we’re in a phase of market maturation where the underlying techniques, algorithms, and technologies matter to people because they seek to understand. In a couple of years, when this is a much more mature market, people will be talking solution language. Nobody buys Salesforce and asks, so what DB is under there? What are you using? Can I bring my own? But we’re not there yet in generative AI. And I think that’s a good thing because you go way deeper into the conversation.

People are much better at understanding natural limitations. Nobody is signing up for things that just aren’t possible. The other side to this conversation being so technical is there are people who don’t know as much as they would like to and are now afraid to ask questions. We’re seeing that a little bit, especially in the professional services market, where folks need to come across as experts because they’ve got AI in all their marketing now. Still, it’s much harder to have real, grounded conversations.

Navigating the challenges of enterprise sales in AI

Vivek: The commercial side is interesting because there are so many startups in AI, and so many technical products, technical founders, and companies, but not many of them have actually broken into commercial sales. Even fewer have broken into enterprise sales. I know Writer has customers like L’Oreal and Deloitte and a number of others, some of which we don’t really associate with being first movers in tech, and especially first movers in AI. So maybe you can take us through a little bit of how Writer approaches the commercial aspect of things in terms of getting all of these AI solutions into the hands of enterprise users. Take us through the first few customers that Writer was able to get and how you broke into this. What was the story behind that?

May: Our first company sold to big companies. In the machine translation and localization era, Waseem and I sold to large enterprises. We started off selling to startups, and then, I can’t remember how, someone introduced us to Visa, and we were like, oh, that’s an interesting set of problems. That was probably Qordoba circa early 2016. For three solid years, we penetrated enterprises with a machine translation solution that hooked up to GitHub repos, and it was great. We learned a ton about how companies work, and it really gave us a cross-functional bird’s-eye view of a bunch of processes, because when a company goes into a new market, it takes a whole slice of its business cross-functionally, and that now has to operate in that language. And once you’re in kind of $100 million cost-takeout land, it is really hard to go back to anything else.

Our mid-market deals are six figures, and it’s because of the impact that you can have. Now, it does mean that it’s harder to hire, so yes, we’re under 200 people. I’d love to be 400 people. But we’re interviewing so many folks, dozens and dozens for every open role because you really have to have a beginner’s mindset and just this real curiosity to understand the customer’s business. No customer right now in generative AI wants to have somebody who’s learning on the job on the other side of the phone. And the thing is, in generative AI, we’re all learning on the job because this is such a dynamic space, technology is moving so fast, the capabilities are coming so fast. Even we were surprised at how quickly we got to just real high quality. We launched 32 languages in December, GA in Jan, and it was like, whoa, I really thought it would be a year before we were at that level of quality.

All to say, we need people who can go really deep. Enterprise sales requires everybody in the company to speak customer, and not generic customer: if you’re talking to CPG, it’s a different conversation than retail, and a different conversation than insurance. It means really understanding how our customers see success. And it’s not this use case or that use case; that’s a real underutilization of AI when you think about it that way. It’s about the business outcomes they’re trying to achieve, and not just tying to them to get the deal done, but actually making them happen faster than they would without you. That’s what the whole company has to be committed to.

Hiring for GenAI

Vivek: How do you find that profile? Technology is moving so fast that we’re not experts, and many of us are learning on the job as things come through. At the same time, you have to find a terrific sales leader or an AE or someone who not only understands AI and the product but also understands and can speak to enterprises. So hiring is difficult, but how do you find that person? Are there certain qualities or experiences you look for that you think are the best fit for the sales culture and group that you’re building at Writer?

May: I would start with the customer success culture, because it was hard to get to the right profile there. We believe in hiring incredibly smart, curious, fast-moving, high-clock-speed people. And we’re all going to learn what we need to learn together. So there’s no, oh, that was the wrong profile, fire everybody, and let’s hire the new profile. We don’t do that. What I mean by profile is what we need folks to do. And, of course, over time, you refine the hiring profile so you don’t have to interview so many people to get to the right set of experiences and characteristics. On the post-sales side, we’re looking for end-to-end owners. In customer success, it can be easy for folks to be happy that they got their renewal, or that we’re over the DAU/MAU ratio we need to be at, just going through a check-the-box exercise. We have a 200% NRR business, and it’s going to be a 400% NRR business soon.

And that doesn’t happen unless you are maniacally focused on business outcomes. This is a no-excuse culture, and it’s necessary in generative AI because the roadblocks come up all over the place. Matrixed organizations are the enemy of generative AI, because how do you scale anything? The whole point of this transformation is that intelligence will now scale, and most organizations are not set up for that. As a CSM, you have to navigate that with our set of champions and our set of technical owners inside the customer, and that requires real insistence, persistence, business acumen, and a relationship builder who’s also a generative AI expert. So it’s a lot. And then on the sales side, it’s definitely the generative AI expertise, combined with the swagger of hyperscaler salespeople. And yet we don’t hire from the hyperscalers.

We interviewed a bunch of folks, but selling at a hyperscaler means a guaranteed budget item and a guaranteed seven-figure salary for those sales folks. Obviously, the brands are humongous, and the events budgets are humongous, so it just hasn’t been the right profile. We have loved the swagger. When you can talk digital transformation and you’re not stuttering over those words, there’s a confidence that comes across. Interviewing lots of different profiles has helped us come up with ours: growth mindset, humility, but real curiosity that ends up in a place of true expertise and knowledge about the customer’s business and the verticals we have picked, a verticalization that’s going to extend all the way into the product as well soon. My guess is that most folks building go-to-market orgs in generative AI companies are doing more in a year than other companies do in five, because our buyer is changing and the process is changing. It’s a lot of work streams.

Vivek: It’s a lot. And I think I heard you drop the number 200% NRR or something in there. I want to make sure I heard that correctly because that’s really incredible. And so hats off to the team that’s-

May: 209.

Vivek: That’s basically the top 1% of the 1%. It’s interesting to contrast that with other AI companies that we’ve seen in the last 12 or 18 months. We’ve all heard stories of others, probably not enterprise-focused GenAI products, where the term GenAI wrapper has been thrown around; a lot of them have focused on more consumer use cases. They’ve had incredible early growth, and then in the last six to 12 months, we’ve also seen a pretty rapid decrease or a lot of churn. Is that something that you all had thought about early on in terms of the focus of Writer? Did you think about that early as a founder trying to see what would work?

Creating a Generative AI Platform

May: Around ChatGPT time, there was a fundamental realization among our team, and we wrote a memo about it to everybody and sent it to our investors: real high-quality, consumer-grade multimodal was going to be free. It was going to go scorched earth. That was clear, and it has come to pass. The other truths we thought would manifest, which we wrote about 15 months ago, were that every system of record would come up with AI adjacencies and that the personal productivity market would be eaten up by Microsoft. So for us, what that really meant was, how do we build a moat that lasts years while we deepen and expand the capabilities of the generative AI platform? What was already happening in the product, multifunctional usage right after somebody came on board, we were able to use to really position ourselves as horizontal from the get-go.

That got us into a much more senior conversation, and we worked to buttress the ability of our generative AI platform to be hybrid cloud. We’ve got this concept of headless AI, where the apps that you build can be in any cloud. The LLM can be anywhere. The whole thing can be in single-tenant or customer-managed clouds, which has taken 18 months to come together. We will double down on enterprise, security, and state-of-the-art models. That’s what we’re going to do, and we’re going to do it in a way that makes it possible for folks to host the solution. I think even those companies have reinvented themselves, and I have a lot of respect for them. But the difference is that in a world of hyperscalers and scorched earth, where all the great things OpenAI is doing are super innovative and every other startup is trying to get a piece, the bar for differentiation went way up 15 months ago for everybody.

Vivek: Hats off on navigating the last 15 to 18 months the way that you and the team have, because it’s incredible to see compared to a lot of the other companies, both on the big side and the small side, incumbents and startups that are all challenging different parts of the stack. Two questions for you that are more founder-related. Let’s start with the first one: what is a challenge that came up unexpectedly, call it in the last six months, that you had to navigate, and how did you navigate it?

May: More than half the company today wasn’t here six months ago; six months ago, we had just closed the Series B. In the last few months, it’s been this transition from running the company to leading the company, if that makes sense. That has meant working with the entire executive team to name the behaviors, the inclinations, and the types of decisions we wanted everybody to be making and to be empowered to make, and then running a really transparent org where information gets to everybody pretty quickly.

We’ve got a very competitive market that’s really dynamic, that is also new. Signal-to-noise has to be super high or else everybody would end up spending 80% of their day reading the news and being on Twitter. We needed folks to make the right decisions based on the latest insights, the latest things customers and prospects were telling us, the latest things we were hearing, latest things product was working on, and all those loops had to be super tight.

Vivek: Execution and speed matter, especially in this space.

May: Yes, and execution while learning. It’s easier if you’re like, all right, Q1 OKRs, awesome, see you at the end of Q1. But this is a really dynamic space, and the hyperscalers are militant. This is really competitive.

Vivek: All right, last one for you, what’s the worst advice you’ve ever gotten as a founder?

May: The worst advice that I have taken was early in the Qordoba days: hiring VPs before we were ready. It felt like a constant state of rebuilding some function or other on the executive team, and that’s such a drain. We have an amazing executive team; we’ve got strengths, we’ve got weaknesses. We’re going to learn together. This is the team. And it’s why we take so long now to fill every gap. We’ve got a head of international, a CFO, a CMO. We’re going to take our time and find the right fit. But those were hard-won lessons. The advice that we got recently that we didn’t take was to not build our own models. And I’m really glad we didn’t take that advice.

Vivek: I was wondering if that might be something that came up because you’re right; we see so much activity around saying, hey, build on top of GPT, build on top of this open-source model. It works for some sets of companies, but as you say, thinking about moats early on in technology and IP moats from the very beginning is only going to help you in the long run. Well, thank you so much, May. Congrats on all of the success at Writer so far. I’m sure the journey’s only beginning for you, and we’re excited to see where this goes.

May: Thank you so much, Vivek. For folks listening, we’re hiring for literally everything you might imagine. So I’m May@writer, if this is interesting.

Vivek: Perfect. Thanks so much.

Dust Founders Bet Foundation Models Will Change How Companies Work

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Madrona Partner Jon Turow hosts Gabriel Hubert and Stan Polu, the co-founders of a 2023 IA40 winner, Dust. Dust enables information workers to create LLM-based agents that help them do their own work more efficiently. Gabe and Stan go way back, having started a data analytics company that was bought by Stripe in 2015, where they both stayed for about five years. After stints at a couple of other places, like OpenAI for Stan, they decided to come back together in 2023 to work on Dust.

In this episode, Jon talks to Gabe and Stan about the broader AI/LLM ecosystem and the classic adoption curve of deploying models within an enterprise. They talk about Dust’s bet on horizontal versus vertical, the builders of tomorrow, limitations of AI assistants, and so much more.

This transcript was automatically generated and edited for clarity.

Jon: You have made a fundamental product choice that I’d like to study a little bit, which is to build something horizontal and composable. Depending on which choice you had made, I imagine we’d be having one conversation or the other. Having gone horizontal and composable, you’re going to say, “There are so many use cases. Flexibility is important, and every different workload is a special snowflake.”

If you had decided instead to build something vertical, it doesn’t have to be AGI, but you’d be saying, “To really nail one use case, you need to go all the way to the metal.” You need to describe the workflow with software in a very, very subtle and optimized way. Both of these would be valid. How do you think about the trade-off between going vertical and going horizontal? How have you landed on that choice that you made?

Stan: Going horizontal comes with complexities and risks. At the same time, if you assume that the product experience will evolve very fast, it’s also the best place to do that kind of exploration; it’s the playground for product exploration, sitting at the intersection of all the use cases and all the data. In the current age of GenAI, where the interface is a conversational agent, we think we are making the right bet. It means we are on par on UX, but we have access to much more data and much more flexibility.

Empirically, we have checked that hypothesis. Some of our users have been confronted with verticalized solutions, and we beat the verticalized solution in many cases because we provide more customizability. Customizability comes in two flavors. First, providing instructions that better match what the user wants to do in their company, depending on the culture or existing workflows. Second, being able to access multiple sources of information.

On some use cases, having two sources of information, one of them being the one associated with the verticalized use case, makes you much better off, because you also have access to all the information that lives in Slack and all the information that lives in Notion related to that use case. Many of these could be sales and customer support use cases. That enables people to create assistants that are much better, by tapping into information that the verticalized SaaS products will never be able to tap into. The verticalized product will be either the incumbent or somebody building on an incumbent: somebody building on a customer support platform, somebody building on the sales data platform, whatever it is.

In that interim, where the UX is being defined, we have an incredible advantage in being horizontal. That may close in the future, because as we figure out what it means to use models efficiently in a particular use case, it’s not going to be about getting text back; it’s going to be about getting an action ready to be shipped. It’s about being able to send an email to prospects automatically from the product, and so on. There, we might have a steeper curve to climb with respect to the verticalized solutions, because they’ll be more deeply integrated and will probably have the ready-to-go action built before us. It’s still an open-ended question. As it stands today, being horizontal creates a tremendous advantage.

We’re still in the process of exploring what it means to deploy that technology within companies. So far, that bet has been good, but it’s the most important product choice we’ve made. We’re checking it every week, every month, almost every day. Was that the right choice? We can comfortably say it is the right choice today. We also realize we’re inside an ecosystem that is on moving ground. That’s something that has to be revisited every day.

Gabe: Some of the conviction that data access is a horizontal problem rather than a vertical one when you distribute into a company has helped with that choice. You can convince a CISO of the validity of your security measures and practices, and it’s just as hard to do whether you need one sliver of that data set or all of it. Your bang for the buck is better when you can play with more concrete use cases that draw on different sources of information. To take a fairly simple example: imagine you had access to a company’s production databases. You could generate very smart queries on those production databases any day of the week. That’s a product you can see today with these SQL editor interfaces that everybody on the team is supposed to use.

But where in that product is the half-page document describing what an active user is? Or what a large account is? Or what we consider to be a low score on customer support? Those documents live in a different division as the company grows, a division that doesn’t even know a code version of those definitions exists. They’re updated in meetings, in a very business-driven setting. That constitution of what an active user is lives in a separate set of data.

For somebody within the company to be able, at low cost, to ask an agent a question about the number of active users in Ireland, or the number of active and satisfied users, you have to cross those data sets. That’s almost systematically the case. A lot of the underperformance you’d find if you audited companies today comes from these siloed conversations. We’ve had the experience of companies that have seen traction, that have grown fast, and that have burnt out incredible individual contributors who start seeing the overhead, the slowness, and how tricky it is to just get a simple decision pushed through because nobody has the lens to see through all these divisional silos. Being excited to start another company and build an ambitious product company, it seemed most exciting to build a solution to that problem. When we pitch it to people who’ve seen and felt that pain, that’s the spark that goes off.

It is a risk. But the people who are excited about that type of software existing in their team are, I’d argue, the people we’re excited to build for in the years to come. Come back in five years, and let’s see if we were right about it.

Jon: Oh, come on. It’s going to be one year.

Gabe: Yeah. That’s true. Fair enough. Fair enough.

Jon: It’s so tempting to project some Stripe influence on the horizontal strategy that you guys have taken. Before I hypothesize about what that would be, can you tell me what you see? What is the influence of how Stripe is put together on the way you’re building Dust?

Stan: In terms of how Stripe was operating, and how that influenced us in defining what we were going to build with Dust, there was a lot of creating a company OS where people have the power to do their job at their desk, which means having access to information and a lot of trust. Some of that has carried over into our product decisions. We’ve built a product that’s not a great fit for a company that wants granular access control over who has access to that one very small folder inside that one Google Drive, with people added only manually on a case-by-case basis.

We are optimistic that the companies that will work the best in the future are the ones that make their internal information available. That is, at the same time, a great fit for building AI-based assistants.

Gabe: Regarding the product, there’s a ton of inspiration from what Stripe did for developers. It gave them the tools to have payments live in the app before the CFO managed to meet with an acquiring bank to discuss payments. It was as if the developer came to the meeting and said, “It’s live. We’ve already generated a hundred bucks in revenue. What are the other questions?”

I think if we can build a platform that puts to bed some of the internal discussions about which frontier model provider to go and trust for the next two years, then a builder internally can say, “I just built this on Dust, and we can switch the model in the back end if we feel like it’s going to improve over the next months.”

That’s a scenario where the aggregation position is a good one. It requires you to be incredibly trusted. It requires composability. It does mean a set of technical decisions that are more challenging locally. But optimistically, to Stan’s point, it enables some of the smarter, more driven people to have the right toolkit. That’s something that we take from Stripe. Stripe was not a fit for some slower companies when we were there, and that ended up okay.

Jon: When we think about the mission you folks are going after, there’s so much innovation happening at the model layer. One thing that we’ve spoken about before is there’s a lot you can accomplish today. When we start talking about what it is that you can accomplish with Dust, can you talk about the requirements that you have for the underlying tech?

Gabe: One of the core beliefs we had in starting the company, which essentially dates back to Stan’s time at OpenAI and his front-row seat to the progression of the various GPT models, is that the models were pretty good at some tasks and surprisingly great at others, and that we would face a deployment problem in getting that type of performance to reach the corners of the online world. The opportunity was to focus on the product and application layer to accelerate that deployment. Even if models don’t get significantly more intelligent in the short term, and if you’re a little hesitant about calling an AGI timeline in the years to come, there’s an opportunity to increase the level of effectiveness people can have in a job that involves computers with the current technology.

In terms of requirements for the technology, for us it’s: take the models that are available today, whether they’re available via API as commercial models or because they’re open source and people decide to host them and make them available via API, et cetera, and package them so that smart, fast-moving teams can access their potential in concrete day-to-day scenarios that are going to help them see value.

Stan: The interesting tidbit here is that the models will get better as research progresses. On their own, they’re not enough for deployment and for acceptance by workers. At the same time, the kind of use cases that we can cover depends on the capability of the models. That’s where it’s different from a classical SaaS exercise, because you’re operating in an environment that moves fast. The nature of your product itself changes as the nature of the models it relies on evolves.

It’s something that you have to accept when you work in that space: the hypotheses you make about a model might change or evolve with time, and that will probably require changing or evolving your own product as it happens. You have to be very flexible and able to react quite rapidly.

Jon: There are two vectors we spoke about before when we’ve discussed this in the past. One is that if you stop progress today with the underlying models, there are years of progress that we can make. The other is that if we go one or two clicks back in history to say mobile apps, we saw that there were years of flashlight apps before the really special things started to arrive.

Where would you put us on that two by two of early models versus advanced and how much it matters versus not?

Gabe: What’s interesting is to talk about the people who’ve been on the early adopter side of the curve, who’ve been curious, who’ve been trying models out, and who’ve probably been following the evolution of ChatGPT as a very consumer-facing and popular interface. You get this first moment of complete awe when the model responds with something particularly coherent. I’m asking it questions, and the answers are just amazing. Then, as I repeat use cases and try to generate outputs that fit a certain scenario, I’m sometimes surprised and sometimes disappointed.

The stochastic nature of the models’ output is not something people think about at the very beginning. They attribute all of the value to pure magic. Then, as they get a little more comfortable, they realize that it might still feel like magic, or they might be unable to explain it technically, but the model isn’t always behaving in a way that’s predictable enough to become a tool.

We’re early in understanding what it means to work with stochastic software. We’re on the left side of the quadrant. In terms of applications, the cutting-edge applications are already pretty impressive. By impressive, I mean that they fairly systematically deliver, at a fraction of the cost or at a multiple of the usual speed, an outcome that is comparable with or on par with human performance.

Those use cases exist already. You can ask GPT-4 or a similarly sized, performant model to analyze a 20-page PDF, and in seconds you will get a several-point summary that no human could compete with. You can ask for a drill-down analysis or a rewrite of a specific paragraph at a fraction of the cost of what you’d pay on a Fiverr or Upwork marketplace for some tasks. We already have that, like 10x faster, 10x better.
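As a concrete illustration of the kind of task described here, a minimal sketch that extracts the text of a PDF and asks a hosted model for a summary might look like the following. The file name and model are placeholder choices, and it assumes an OpenAI API key is set in the environment.

```python
# A minimal sketch of the "analyze a 20-page PDF" task mentioned above.
# File name and model name are placeholder choices for the example;
# it assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI  # pip install openai
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("report.pdf")  # placeholder file
text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You summarize documents for busy readers."},
        {"role": "user", "content": f"Summarize the key points of this document:\n\n{text}"},
    ],
)
print(response.choices[0].message.content)
```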

In terms of broad adoption, especially by companies, if you take ChatGPT with a few hundred million users, that still leaves, by some estimates, 5.4 billion adults who have never touched it and don’t even know what it is. If you go into the workplace, there are very few companies employing generative artificial intelligence at scale in production that were not created around the premise of generative artificial intelligence being available.

Some companies have been created in the last few years and do, but most companies that existed before that timeline are exploring. They're exploring use cases. They're rebuilding their risk appetite and policies around the upside and downside opportunities and risks a stochastic tool might bring. We're still very early in the deployment of those products.

One indication is that the conversational interface is still the default that most people are using and interacting with when it’s likely that it shouldn’t just be conversational interfaces that provide generative artificial intelligence-powered value in the workplace. Many things could be workflows; many things could be CronJobs. Many applications of this technology could be non-chat-like interfaces, but we’re still in a world where most of the tests and explorations are happening in chat interfaces.

We still want the humans to be in the loop. We still want to be able to check or correct the course. It’s still early.

Stan: One analogy I like to use, to echo what Gabriel just said on the conversational interface: it really feels like we are in the age of Pong for these models. You're facing the model. You're going back and forth with it. We haven't yet started scratching the multiplayer aspect of it, or interacting with the model in new and more efficient ways.

You have to ask yourself, what will be the Civilization for LLMs? What's going to be the Counter-Strike of LLMs? That is equally important to dig into compared to model performance. The mission we set for ourselves is to be the ones who dig in that direction for our users and try to invent what's going to be the Counter-Strike of interacting with models in the workplace.

Jon: Can you talk about the mission for Dust in the context of the organization that’s going to use it?

Stan: We want to be the platform that companies choose as a bet to augment their teams. We want it to be a no-brainer that you must use Dust to deploy the technology within your organization. Do it at scale. Do it efficiently. This is where we're spending cycles. It's funny to realize that every company is an early adopter today. The board talks about AI, the founders talk about AI, and so the company itself is an early adopter. But once you get inside the company, you face a classical adoption curve. That's where product matters, because that's where the product can help carry companies across that chasm of innovation inside their teams. We want to be the engine of that.

We’re not focusing on the models; we’re trying to be agnostic of the models, getting the best models where they are for the needed use cases. Still, we want to be that product engine that makes deploying GenAI within companies faster, better, safer, and more efficient.

Gabe: One of the verbs that we use quite a lot that is important is augmenting. We see a tremendous upside scenario for teams with humans in them. We don’t need to spend a lot of our time working for companies that are trying to aggressively replace the humans on their teams with generative artificial intelligence because that’s shaving a few percentage points of operational expenditure. There’s a bigger story, a play here, which is if you gave all of the smartest members of your team an exoskeleton, an Iron Man costume now, how would they spend their day, their week, their quarter? What could happen a few quarters from now if that opportunity compounds?

When we decide at Dust which avenues to prioritize, one factor that's consistently there is whether we are augmenting or replacing. With replacing, there's a high likelihood that we're just providing faster horses to people who see the future as an extrapolation of the present. It's like, "I need to replace my customer service team with robots because robots are cheaper," when the entire concept of support and tickets as an interface between users and a company — a way to discuss how a product is misbehaving — may itself be challenged in the future.

It's a tension for us because there are some quick wins or deployment scenarios that many companies are considering. That tension helps us explore and spend time on the cars instead of the faster-horses scenarios dawning upon us.

Jon: I think it has implications not just for the individual workers but, to your point, Stan, and to your point, Gabriel, for how employees interact with one another. Let me put it to you one way. If I'm deciding whether to join your company, and you tell me, "You should because I have Dust," what would be the rest of that explanation?

Gabe: It's a great point. That's an example we sometimes use to describe what an ambitious outcome, or an ambitious state of the world, would be for our company in a few years' time. Take a developer today — a senior developer getting offers from a number of companies — who, in the interview process, gets to ask questions about how each company runs its code deployment pipeline. I can ask how easy it is to push code into production, how often the team pushes code into production, and what the review process looks like.

I can read a lot about the culture of that company and how it respects and tries to provide a great platform for developers. Today, developers are at the forefront of being able to read, in the stack the company has chosen, how their experience is prioritized. If you do not have version control software that allows for pull requests and reviews, and cloud distribution that works and is fast, I don't think you're very serious about pushing code.

We think that in the future, more of the roles within a company will have that kind of associated software. You could argue that, to a degree, Slack has created that before-and-after effect. If you're applying at a company today and you ask how employees check in with each other informally to get a quick answer on something they're blocked on, and the employer says, "We have a vacuum tube system where you can write a memo, pipe it into one of the vacuum tubes at the end of the floor, and you'll get a response within two days," that should help you decide.

You're like, "Okay, great. I don't think real-time collaboration is prioritized here." We think there's a continuum of those scenarios that can be built. We would love to be able to imagine a future where employees say, "Hey, we run on Dust," and for that to be synonymous with, "Hey, we don't prioritize or incentivize busy work." Everything that computers can do for you — which, really, computers should have been doing for you decades ago — we've invested in technology that makes happen. We've built technology that helps burn through the overhead and time sinks of finding information and where it lives, understanding when two pieces of information within a company are contradictory, and getting a fast resolution as to which one should be trusted. The OS of that smart, fast-growing team is something we hope to be a part of the strategy for.

Jon: That’s such an evocative image of the vacuum tube. I actually bet if there were a physical one, people would like that as long as there was also Dust.

Gabe: It could be a fun gadget that people actually use to send employee kudos notes at the end of the week, or team praise updates.

Jon: What we're talking about, though, is the metaphor of the agent, which, in our 2023 and 2024 brains, we think of as another member of the team — maybe a junior member of the team at the moment. I think it was something you said, Gabriel, that the binary question of whether it's good enough or not is actually not useful. But rather, how good is it? How should I think about that?

Gabe: Yeah. I stole it from the CIO of Bridgewater, whose communication around GPT-4 compared it to a median associate or analyst — I can't remember the exact name of the role. They believed it performed slightly higher than a junior-level analyst on those tasks. Bridgewater is a specific company that has an opportunity to segment a lot of its tasks in a way that maybe not all companies are able to do.

As soon as you’ve said that, a number of logical decisions can be made around that. We often get asked about specific interactions that people have had with Dust assistants. Like, “Hey, why did it get that answer wrong?” I was like, “Assistants are getting a tough life in those companies because a lot of your colleagues get a lot of stuff wrong a lot of the time, and they never get specifically called out for that one mistake.”

That's part of the adoption curve that we're going through. At the level of an interaction, you're looking at a model that might be, on average, quite good and sometimes quite off. Instead of turning your brain off, you should probably turn your brain on and double-check. At the level of the organization, you're getting performance that is, in specific scenarios, potentially higher than the median — higher than if the task had been pushed to another team member.

As models get better, as the structural definition of the tasks we give them gets clearer, and as the interfaces that support feedback mechanisms get more and more use, those scenarios will progress — the number of times you feel like the answer is good enough, better than what you would've come up with on your own, or better than what you could have expected if you had asked a colleague.

One of the things we systematically underestimate here is latency. Ask a good colleague to help you with something — if they're out for lunch, they're out for lunch. That's two hours. General Patton is the one who said, "A good plan violently executed today beats a perfect plan executed tomorrow." If, as a company, you can rely on and compound that edge in execution speed, the outcomes will be great.

Jon: What we're talking about is assessing agents not by whether they're right or wrong but by their percentile of performance relative to other people. Yet there's another thing that you both have spoken about: the failure modes will be different. It's easy for a skeptical human, especially, to say that one weird mistake means this thing must be written off. I don't think it would be the first time in history that we've mis-evaluated what a technology would bring us by judging it on some anecdotal dimension.

Stan: Something interesting, to jump back on your 2023 brain and how we might not be foreseeing this correctly: there's a massive difference between an intern-level or junior-level assistant who is a human — where you want to leave the task to them entirely and give them some agency, and where the shape of the tasks you can give them is defined by their capability and the fact that they're junior — and assistants where the agency is kept on the side of the person who has the skills. There's a massive difference between what you can do with junior-level assistants where you keep the agency versus just adding junior assistants for the humans.

It will be interesting to see how that automation and augmentation of humans plays out. It might be very different from adding 10 interns to a human versus adding 10 AI-based assistants to a human. It may well be the case that 10 AI assistants augment a human much more than 10 interns would. There's an interesting landscape to discover.

Jon: Depending on how you frame this, I’m either going forward or backward in history from unlimited manual labor to spreadsheets. A Dust agent reminds me in many ways of a spreadsheet.

Gabe: In terms of capability and the level of abstraction versus the diversity of the tasks, that’s not a bad analogy. It’s unclear if the primitives that currently exist on Dust are sufficient to describe the layer and space that we really intend on being a part of. If we are successful, Dust will probably retain some properties of malleability, the ability to be composable, programmable, and interpretable by the various humans that are using it, which does remind me of spreadsheets, too.

Jon: One thing that you see in your product today is a distinction between the author and the consumer of a Dust agent. It's reasonable to expect there's going to be a power-law distribution, with more people consuming these things than creating them. If there were some way to really measure spreadsheet adoption, I'm quite sure we'd see the same: a handful of spreadsheets, especially the most important ones, get created by a small number of us and then consumed by many more of us.

These things are infinitely malleable, and many people can create a silly one that is used once and thrown away.

Gabe: We see that today with some of our customers. I had a product manager admit that they had created a silly assistant mixing company OKRs and astrology to give people a one-paragraph answer on how they should expect to be doing in the quarter to come. They admitted it was a distribution mechanism for them. It's like, "I just want people to know how easy it is to create a Dust assistant, how easy it is to interact with it, and how non-lethal it is to interact with it, too." There's always that initial fear around use cases and usage scenarios.

The reason we believe the builders are not going to be developers is that the interface has become natural language in many cases; you're essentially just looking at the raw desire of some humans to bend the software around them to their will. I think the builders of tomorrow with this type of software have more in common with very practical people around the house who are always fixing things, who won't let a leak go unattended for two weeks, who'll just fix the leak with some random piece of pipe and an old tire. It just works, and it's great. That is seeing opportunity and connecting the Lego bricks of life.

One of the big challenges for companies like us is how to identify them. How do you let them self-identify? How do you quickly empower them such that the rest of the team sees value rapidly? One of the limitations of assistants like Dust within a company is access to the data that the company has provided to Dust. The number of people controlling access to those data gates is, in some cases, even smaller than the number of people who can build and experiment with assistants. How can a builder reach out to somebody at the company with the keys to a specific data set and say, "Hey, I have this really good use case. This is why I feel we should try it out. How could I get access to the data that allows me to run this assistant on it?" Those are all product loops.

They have nothing to do with model performance. They have everything to do with generating trust internally about the company, the way the product works, the technology, and where the data goes, all these things that are product problems.

Jon: If I move half a click forward in history, you start to think about data engineering and how empowering it was for analysts to get tools like dbt and other things that allowed them to spin up and manage their own data pipelines without waiting for engineers to turn the key for them. That created a whole wave of new jobs, a whole wave of innovation that wasn’t possible before. To the point that now, it’s impossible to hire enough data engineers.

There's this empowering effect that you get from democratizing something within a company that was previously secured — even if for a really good reason. I'm connecting that to the point that you made, Gabe: the data itself that feeds the agents is often the key ingredient, and it has been locked down until today. Based on the use cases that you're seeing, this is going to be a fun lightning round. My meta question is, has the killer agent arrived? Maybe you can talk about some of the successes and maybe even some of the fun things that aren't quite working that your customers have tried.

Gabe: I think the killer agent would be a product-marketable concept you could slap on your website and have 90% of the people who visit upgrade, regardless of their stage of company, what they're developing, et cetera. I don't think we're there yet. People ask questions that Dust — let alone an LLM without a connection to the data — would have no chance of answering.

Those are some interesting cases where I think we’re failing locally and tactically because the answer is not satisfying. Where I’m seeing weak signals of success is that people are going to Dust to ask the question in the first place.

On some of the use cases that we're incredibly excited about, it's almost the same situation, but with satisfactory answers — people asking surprisingly tough questions that require crisscrossing documents from very different data sources and getting an answer that they unmistakably judge as being way better than what they would've gotten by going to the native search of one SaaS player or by asking a team member, et cetera.

In some cases, the number of assistants that midsize companies generate on Dust is high. Do you see that as a success or a failure? Does it mean that you've been able to give the tools to a very fragmented set of humans to build what they need, and you interpret it as success? Or have we essentially capped the upside they can get from these too-specific assistants? That's still one of the questions we're spending a lot of time on today.

Jon: If we go back to our trusty spreadsheet metaphor, there are many spreadsheets. They’re not all created equal.

Gabe: Yeah, it’s fine. Maybe it’s fine. Maybe yeah, not all spreadsheets need to be equal.

Jon: Thank you so much for talking about Dust and your customers. I think customers are going to love it.

Gabe: Awesome. Thank you very much for having us.

Stan: Thank you so much.

Coral: Thank you for listening to this IA40 Spotlight Episode of Founded & Funded. Please rate and review us wherever you get your podcasts. If you’re interested in learning more about Dust, visit www.Dust.tt. If you’re interested in learning more about the IA40, visit www.IA40.com. Thanks again for listening, and tune in in a couple of weeks for the next episode of Founded & Funded.

 

From Creating Kubernetes to Founding Stacklok: Open-Source and Security with Craig McLuckie

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Managing Director Tim Porter chatted with Craig McLuckie, who's best known as one of the creators of Kubernetes at Google. Madrona recently backed Craig's company Stacklok, which is actually the second of Craig's companies that Madrona has backed (the first was Heptio in 2016).

Stacklok is a software supply chain security company that helps developers and open-source communities build more secure software and keep that software safe. Tim and Craig discuss Stacklok’s developer-centric approach to security, the role of open source in startups, the importance of open source and security, and they also hit on some important lessons in company building — like camels vs. unicorns – and where Craig sees opportunities for founders. It’s a must-listen for entrepreneurs out there. But, before I hand it over to Tim to take it away, don’t forget to subscribe wherever you’re listening.

This transcript was automatically generated and edited for clarity.

Tim: I'm very excited to be here today with Craig McLuckie. It's the second time I've had the privilege of working with Craig on a startup. So, Craig, it's awesome to be with you here today, my friend. How are you?

Craig McLuckie: Oh, I’m doing great. Thanks for having me on.

Tim: Absolutely, it's our pleasure. Well, I didn't do it justice. Tell us a little bit about Stacklok and what you're building now and a bit of the founding story, and then we'll double back and talk more about how some of your experiences in Kubernetes and Heptio, et cetera, led you to build this company now.

Stacklok & Securing Open Source

Craig McLuckie: Stacklok is a little company — a Series A company, Madrona-backed — that was started by myself and my friend Luke Hinds, who was the founder of a project called Sigstore. The story behind Stacklok goes back several years. I've been known for my work in the cloud native computing space, and I had some success with open-source efforts like Kubernetes and many other projects that we built on the back end of Kubernetes to operationalize it and make it more accessible to enterprises. Open source has served me incredibly well as a professional, and I've spent a lot of time in open source building open-source communities and navigating those landscapes.

One of the things that occurred to me is that it seems obvious, but open-source is important. It is the foundational substrate for what is driving a substantial portion of human innovation right now. We spend a lot of time talking about generative AI, and you look at something like ChatGPT, and we dig into what’s behind the scenes, and there’s Linux, there’s Kubernetes, there’s Python, there’s PyTorch, there’s TensorFlow. All of those were precursor technologies before many ChatGPT-specific IPs even lit up. That’s a significant stack, and it’s all open-source technology.

The question that went through my mind and continues to echo with me is: we're so contingent on open-source, but how do we reason about whether open-source is safe to consume, as a fundamental building block for almost everything we consume? Historically, the currency of security has been the CVE — there's a critical vulnerability associated with a piece of technology. However, it has been increasingly challenging for organizations to deal with this. My interest in this space predates things like the SolarWinds incident, which got people to think about supply chain security. It also predates the Log4J incident, which continues to plague enterprise organizations. It comes down to this: we are contingent on these technologies, but we don't necessarily have a strong sense of the security or sustainability of the organizations producing them. We're not necessarily creating the right amount of recognition for organizations going above and beyond to produce more secure open-source software.

What we’re doing with Stacklok is two things. One is we’re building a set of intelligence that helps organizations reason about open-source technology, not just in terms of whether it looks safe, whether it has a low number of vulnerabilities or static analysis being run against it, et cetera, but also increasingly whether the community that’s producing it is operating in a sustainable way or not. So, producing that signal is a good starting point. Still, we also want to make sure that on the consumption side, when organizations are looking to bring that technology in-house and continue to invest in it and build around it, we have the apparatus to drive sustainability to create control loops that enable folks that are both producing open-source and consuming open-source to institute policies to make sure they stay within the lines of safe consumption. It’s about bringing those two things together in a set of technologies backed by open-source sensibilities to help organizations navigate this incredibly critical landscape.

Tim: Stacklok is built on open-source, or at least around Sigstore, for open-source — as companies and developers ingest lots of open-source, put that together to build their product, and then have to keep it updated over time as the open-source projects that fed it continue to be updated themselves. So you get to track that whole chain. Talk a little bit more about how that works and how Sigstore plays into the strategy here.

Knowing Where Open-Source Tech Comes From

Craig McLuckie: One of the most important things to know when consuming an open-source piece of technology is where it came from. It's less obvious than people might think. If you go to a package repository and download a package, and then you install and run the package, you'll look at the package repository's metadata for where that package was published — hey, it was built in this repo by this organization, et cetera. But how do you know that that's the case? Most of that information is self-reported; it's not being generated. And often folks aren't even publishing the signatures, or the public keys associated with those signatures.

You need something that can deterministically tie a piece of software back to its origins. This is what Sigstore has done. It’s effectively a bridge between an artifact and the community that produced it, and it’s a set of tools that make it easy for a community that’s producing something to sign that thing and say, hey, we produced it, here, and this is what it contains.

Now, Sigstore is one little piece of the story. It ties the software to its origins but doesn't tell the origin story. It does not necessarily give you insight into whether the community is behaving responsibly or show what the transitive dependency set will look like. We've created this technology called Minder, which helps communities producing software, or consumers of software, operate in more intrinsically sustainable ways. Let's say, hey, I want all of these policies applied to all of the repos associated with producing this piece of technology, and make sure that those repos stay in conformance.

When a new merge request comes along with something sketchy, let’s block that merge request and recommend an alternative instead. If someone misconfigures one of your repos so that branch protections are off, let’s catch that in the moment and make sure that that can be remedied. In so doing, you’re producing valuable information about how that community produced that piece of software, which then feeds the machine so that all that information then becomes a context that can get written into a transaction log like Sigstore so that the next person who is coming along to consume that piece of software now has that intelligence. It can make informed decisions about the consumption of that software. It’s turtles all the way down because when you look at a piece of software, it’s composed of other pieces of software that are composed of other pieces of software. Enabling these organizations that are in the process of composing things together and enhancing them to document their practices and write them into a record that someone can then consume subsequently is an incredibly powerful metaphor.
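To make the signing idea concrete, here is a minimal, hypothetical Python sketch — not Sigstore's actual implementation or Stacklok's code — of the core primitive Craig describes: a producer signs the digest of an artifact, and a consumer verifies that signature against the producer's public key before trusting the package. The artifact bytes and key handling here are placeholders for illustration.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Producer side: hash the artifact and sign the digest.
artifact = b"contents of the released package tarball"  # placeholder artifact bytes
digest = hashlib.sha256(artifact).digest()

producer_key = Ed25519PrivateKey.generate()   # in practice, a managed signing identity
signature = producer_key.sign(digest)         # published alongside the artifact
public_key = producer_key.public_key()        # published so consumers can verify provenance


# Consumer side: recompute the digest and verify the signature before trusting the package.
def is_trusted(artifact_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(artifact_bytes).digest())
        return True
    except InvalidSignature:
        return False


print(is_trusted(artifact, signature))                 # True: artifact matches what was signed
print(is_trusted(artifact + b"tampered", signature))   # False: any modification breaks the link
```

Sigstore's real design goes further — keyless signing tied to OIDC identities and an append-only transparency log — but signature verification of this kind is the basic primitive it builds on.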

Tim: You mentioned supply chain security and talked about some of the major exploits that have occurred, like SolarWinds. For the audience, put Stacklok in the context of the broader supply chain security market — if it's even a market; it's an umbrella term that gets used by a lot of different companies, and it might be a little confusing for a lot of people and for customers, too. Can you help frame this?

Craig McLuckie: First, let's look at the security landscape and consider how the world has worked. Historically, it was a world where hackers were effectively like burglars sneaking around your neighborhood, and they would look for an unlocked window and then open the window and sneak into your house and steal your silverware. That was the world that existed.

With the SolarWinds incident, we’ve seen that it’s not enough to lock your windows and turn on the alarm system with some of these active security practices. The burglars are now getting jobs at window manufacturing companies or are breaking into window manufacturing companies and tampering with the apparatus that produced the latches on windows so that the windows that are being produced and installed are intrinsically insecure, so when everyone’s out at a town banquet, they can go and clean out the entire neighborhood. That’s the sea change that we’re seeing in the security landscape.

It's not enough to look at a piece of software today and say, hey, this thing has no known vulnerabilities, it's good. No known vulnerabilities doesn't necessarily mean anything. It could be that it's good, or it could be perfectly malicious — produced by someone sufficiently sophisticated to make it look exactly like what you want to consume, but with a little naughty something added that will ruin your day when it gets deployed into your production system. This idea is about understanding and knowing where your software is from and its origins — understanding the farm-to-table journey of your software.

In terms of our positioning, where does this start? It starts with your developers. It's not enough to say we will insert this control into the production environment because, by the time you put something as a control in a production environment, your organization is incredibly invested. If failing a quality scan in a production environment is the first time you discover a Log4J instance in something you're trying to deploy, that's very painful, because you must go back to the development team. You have to figure out who produced that thing and go back to them and say, hey, this is not okay. They then have to go and reproduce it, revalidate it, redeploy it, and get it to you, and it takes an inordinately long time to deal with.

You want to intercept developers not just at the time when they’re submitting a merge request but when they type that import statement and give them signals saying, yeah, what you’re doing is good, or, what you’re doing is probably going to get you hurt later down the pipeline. Next, instituting further controls, starting with the Git repository, moving into the pipelines, the container registry, and the production environment. You have these controls along the entire journey, and you can look back on the farm-to-table story of a piece of software you’re looking to deploy.

The Role of Developers in Security

Tim: You mentioned being focused on the developer. Stacklok is a very developer-first, developer-centric company and product you're building. A lot of times, you think of security software as something that's more top-down; it's a necessary evil being imposed on you from above by management. Talk more about why it's important to start developer-first and how security can become something a developer actually wants to embrace and that ultimately creates a better experience for their users.

Craig McLuckie: One of the things that is true about developers is they generally want to do the right thing. It’s not like developers are sitting there saying, you know what? I want to produce insecure code, or you know what? I want to mess with my ops team’s day and produce something insecure. They don’t want to go back and forth with the operations team, but it’s also worth recognizing that, at the end of the day, they’re primarily being metric-ed on producing software that creates business outcomes. The thing that’s going through their head is that I need to create customer delight through my code, and in creating customer delight, I’m going to create money for the company I’m working for. If you’re starting to produce capabilities that inhibit that, they may still want to do the right thing, but they will find ways to work around you.

The later you leave the discovery of something in the left-to-right flow of a developer’s workflow, the more intrinsically painful it’s going to be, and frankly, the less likely people are going to be to accept what you’re doing. That’s from the pragmatic side of getting this technology adopted. This is a constant challenge for the CISO, which is, hey, I want to introduce this, but the minute I do, my business counterparts yell at me because I’m slowing down their production world. So starting with technology that’s developer-friendly and developer-engaging is a good story.

Now, it's important not to confuse developers with buying centers. Developers don't have budgets, by and large, but increasingly, when you look at the IT landscape, developers are a disproportionately significant cohort — not because they're necessarily going to buy your technology directly, but because all of the people who do buy your technology care about them, want to enable them, and are going to see you as a more intrinsically valuable capability if you appeal to the developer.

To Open Source or Not

Tim: A lot of the listeners are thinking about, hey, I’m building a new company. Should I open-source? Should I not? How do I capture developers’ attention, hearts, minds, and community? I’ll back up just a little bit. So Craig, you co-created Kubernetes at Google and left a while later and started Heptio, which was all around helping enterprises adopt Kubernetes. Kubernetes became arguably the most popular open-source project in the world. There was a point where only Linux had more daily interactions than Kubernetes. Of course, then at Heptio, you built it for two years, sold it for 600 million to VMware, and it continues to live on there. I know there’s a lot here and a lot of founders come and talk to you about it, but what are some of the principles that you thought about in building out Kubernetes and now are thinking about Stacklok or other companies that are focused on developers and using open-source to build this adoption flywheel and community?

Craig McLuckie: There's a hard truth here, and this is important for the listeners to embrace: open-source is effectively mortgaging your business future for lower immediate engagement costs. You get lower activation energy. It's much easier to get the flywheel turning with open-source, particularly if you're a small company. It's going to create a virtuous cycle with a lot of the individuals that you want to engage. They may even contribute directly to the project. They'll certainly make it their own. They'll give you feedback in near real time; they'll build the project with you. It's a wonderful way to build technology. It reduces your barriers to entry in enterprise engagements, particularly if you're a rinky-dink little startup. If you have an Apache-licensed open-source project, particularly if the IP is homed in a foundation, you're far more likely to get through procurement because, at the end of the day, they know they can just run it themselves if things get weird. So your path is easier.

When I was thinking about Kubernetes, what distinguished us at Google was that I didn't need to commercialize Kubernetes. I just needed it to exist because I had something to commercialize it with: Google Compute Engine, which had decent margins. I needed to disrupt my competition with technology that ran better on Google's infrastructure, which motivated us to drive the Kubernetes outcome. Heptio came out of the success of that project — this idea of enabling enterprise organizations to achieve self-sufficiency, to bridge the gap between them and the community, and to fill some of the operating gaps associated with consuming the open-source technology — and that worked out well.

When I look at what I’m doing with Stacklok, I recognize that over time, if I’m successful with the Minder technology, I will have to accept a lower value extraction narrative than if it were proprietary technology. But realistically, the probability of me succeeding and getting something that’s consumable out there is far higher if I can embrace the community. You have to have a plan. What is your plan for commercialization?

In the case of Stacklok, I'll be very open with our customers and the community. Our plan is to create incredibly high-value contextual data, which we're manifesting as Trusty right now, to support the policy enforcement that Minder does. That represents a differentiated thing — hopefully something that our customers will value over time as we bring more and more richness to that data set. It's differentiated from the community open-source technology. That's my broad plan. I'm very open; I wear my heart on my sleeve. I don't plan to change the licensing model. I have to stand by my commitments to my customers and to the community. But the point is, I do have a plan for commercialization. It's not just, I'm going to be the RedHat of this thing, because it turns out RedHat is a pretty singular creature.

There is another story here, and we see a lot of this, which is that people are happy to pay you to operationalize something. If you have built a system that's reasonably complex, you're able to operationalize it better than other people, and you're able to set the boundaries of where commercial starts and ends and where open-source starts and ends, you can navigate building a SaaS business around a single-vendor open-source technology. We've seen great companies emerge in that sort of context.

Tim: You built Heptio over a couple of years and it was bought, great journey, faster than probably anyone anticipated when you started. What were some of the things you learned? It could have been about some of these open-source threads we were just talking about, or there’s a whole host of other just general building startups successfully. What things did you take from that experience that you’re making sure to bake into Stacklok? Were there any things that you didn’t want to repeat having done it once before?

The Importance of Culture in a Startup (Camel v. Unicorn)

Craig McLuckie: I enjoyed the Heptio journey, and it’s hard to complain about the outcome. It’s also worth recognizing that we were riding on the momentum of Kubernetes. It was a very luckily timed undertaking. I don’t claim to be able to create that kind of luck for myself again. We need to approach this from a bottoms-up perspective.

What is different with Stacklok is that I definitely have a bit more of the unicorn-versus-camel mindset. A common narrative around my leadership circles is: no, that's a unicorn thing; that's not a camel thing. We're camels, not unicorns. We're building something that is incredibly lean and purposeful, that's going across the desert to an oasis, and if we don't make it, we die. That's how we think. We know we have to get this far on this much money, and there are no alternatives.

Opportunistic hiring — that's a unicorn thing, not a camel thing. Getting crazy with swag — that's a unicorn thing, not a camel thing. I think one essential delta is just the reality of the environment that we're in. You have to have a plan for commercialization. The days of being able to raise on promises and a winning smile are over; you'd better have a business plan. We're more thoughtful in terms of our approach. We're much smaller than I was with Heptio at this point. We're twenty-one engineers, but they are crazy efficient and really hardworking. I'm very proud of our two products in nine months. So that's one difference.

The second difference is that I've approached this with what I think of as a hardwood growth model. Heptio was more like a linear growth model, which resulted in a lot of fog of war. We struggled with folks feeling disconnected because we were growing so quickly. What we've instituted with Stacklok is grow, then execute: run the team really hard, produce an outcome, then grow; run the team really hard, produce a result, then grow. I haven't seen that referenced or talked about much, but I'm finding it to be incredibly beneficial to the culture and to the organization, because you establish the team and the norms, and then you use those norms to direct your next wave of hiring.

The other important thing is that Heptio was a very culture-forward company. I'm retaining that culture-forward perspective with Stacklok, but I'm now very purposeful that the culture isn't the mission — the mission's the mission, and the culture supports the mission — and very diligent in our hiring practices about not just hiring the kind of people we want to work with, but hiring people that have that sense of purpose, that sense of mission, that willingness to engage in the way that we need to. That camel mindset is very powerful.

Tim: I love the camel-versus-unicorn mindset. It's absolutely essential in today's market. The days of the unicorn are distant memories in many respects. You're talking about culture and how you're really intentionally and thoughtfully building it at Stacklok. You wrote this great blog early on in the company called The Operating System of the Company. I like your point about how culture supports the mission but is different from the mission. You have these great company tenets that you sort of alluded to. Maybe just say more about how you're building culture and how you thought about those tenets. This is the most common thread across all startups and all founders: you're building this culture, building it in a way that's authentic to you and your team, and how it carries forward for the long run is an absolutely essential foundation for the company.

Three Jobs of a Founder

Craig McLuckie: Yeah, I heard a founder say this, and I've adopted it as my own story, which is: I have three jobs, right? I have to set the strategy for the company, I have to execute the strategy for the company, and I have to define the culture of the company. And of those, the third is probably the least understood and the most important. Culture for us is our operating system. It's not just the warm and fluffy things you write on the whiteboard to make you feel good about yourself. It defines how we hire and how we make decisions, and it's indistinguishable from things like our brand. Our culture is our brand. Our brand is our culture. It defines us in many ways.

My previous company I built with Joe Beda, and Joe and I are old friends. We'd worked together for ten years and done impossible things together. You would not find any air between us. You could present us with the same data, and we'd always come to the same conclusion, which made it easy. So when Joe and I started Heptio, we first sat down to write the culture and then went off.

Now, with Luke — an amazing human being whom I've come to respect in the most fundamental ways possible — we were relatively new. We had spent many hours together and gone hiking, but we had yet to work together. So, we both did this exercise and wrote down what we believed, just drawing on our lived experience: these are the things that define us and how we make decisions. Luke has these little statements, like, "I always have short toes. You can't step on my toes because I have short toes. You can say anything, and I'll just take it at face value. I'm not ever going to get defensive." That's an example of something he came up with.

We both wrote those down, and then I did a day-long exercise where I built an affinity map and went through all of the tenets, everything that Luke and I felt. I tried to distill it into a set of five things I could articulate that would represent the way we operate, because you can't fake culture. The minute you try to fake culture, at some point a hard decision is going to come up, you're going to make a decision that's not aligned with your culture, and then hypocrisy is going to creep in, your culture's dead, and you have to start from the beginning.

I wrote down the five tenets. One: We're a team. No one gets lifted by pushing someone down. The organization is invested in community-centricity. Community is an asymmetric advantage. Two: We're mission-oriented. We're a startup, and we're a camel-based startup. There will be some long, dry patches, and you'd better have the will to make it to that oasis. The only thing that will get you there is if you believe in the mission. I want people that feel that burn, that really want to engage in this mission with us. Three: We're a startup, so we have to be data-centric and find truth in data. We cannot gloss over a hard truth staring us in the face because we refuse to engage with and believe the data. Four: We have to be humble but determined. The camel is a humble but very determined creature. Some of the best leaders I've ever worked with have embodied this. Five: We're a security company, so we have to stand vigilant. Then it's about bringing that all together and operationalizing it. When discussing hiring candidates, guess what exists in our candidate management system? These five cultural virtues. Our interviewers provide feedback on those five things. We talk about how to assess this in the people we're talking to. When we're talking about promoting individuals, we reference back to the cultural elements. Whenever you're having a complicated conversation and you have to make a decision, you point back to these tenets. These are the things informing our decisions. That's how I've approached building a culture.

Tim: Fantastic articulation. It's energizing to hear you talk about it, Craig. Of course, you referenced your fabulous co-founder, Luke Hinds — a long-time senior engineer at RedHat, co-creator of the Sigstore project that Stacklok draws on significantly, and also based in the UK. So you also have a bit of a geographic and time-zone spread that you're working through, and in this world where companies are at least to some degree hybrid, this cultural blueprint and operationalizing it is that much more critical compared to when everyone can just be together in the same office.

Craig McLuckie: In this kind of remote first world that we find ourselves living in, canonizing that and expressing it and being very deliberate in your communications is so critical as a startup founder.

The Future of Stacklok and the Cloud-Native Ecosystem

Tim: Awesome. Let's come back one more time to Stacklok. You formed the company early last year and have built at an incredible pace — this great team of predominantly engineers that you referenced, and the first two products you mentioned, Trusty and Minder. What's up for 2024? Maybe without giving away anything confidential, what are you looking forward to from a company standpoint this year? And if there's a call for customers — who should be interested? What type of developer or enterprise should be looking to Stacklok for help in the areas we were talking about earlier?

Craig McLuckie: There are a couple of things that we're looking to bring to market. One is Minder, which is a system that helps you mind your software development practice. This year, we want to help organizations embrace and engage with Minder. We want to make it free to use from an open-source community perspective. We want to support and engage open-source communities that are looking to improve their own security posture by providing this technology. They should feel like, hey, Stacklok's bringing us something that's really useful. If you are an open-source project and you have, say, 500 repos, and you're worried about licensing taint showing up somewhere, having something that can run reconciliation loops across those 500 repos, reason about the transitive dependency set, and make sure that nothing's showing up in your world that shouldn't show up is useful. We want to engage with communities to help support their use of this technology.
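As an illustration of the kind of reconciliation loop Craig describes — not Minder's actual implementation — here is a hedged Python sketch that walks a set of repos, resolves a toy, in-memory transitive dependency set, and flags any dependency whose license falls outside an allowlist. All repo names, dependencies, and licenses here are hypothetical.

```python
# Hypothetical, in-memory stand-ins for real repo metadata and dependency resolution.
REPO_DEPENDENCIES = {
    "org/repo-api":    ["libfoo"],
    "org/repo-worker": ["libbar", "libbaz"],
}
DEPENDENCY_GRAPH = {           # direct -> transitive dependencies
    "libfoo": ["libqux"],
    "libbar": [],
    "libbaz": ["libqux"],
    "libqux": [],
}
LICENSES = {
    "libfoo": "Apache-2.0",
    "libbar": "MIT",
    "libbaz": "AGPL-3.0",      # outside the allowlist below
    "libqux": "BSD-3-Clause",
}
ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}


def transitive_deps(direct: list[str]) -> set[str]:
    """Walk the dependency graph to collect the full transitive set."""
    seen: set[str] = set()
    stack = list(direct)
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPENDENCY_GRAPH.get(dep, []))
    return seen


def reconcile() -> dict[str, list[str]]:
    """Return, per repo, the dependencies whose licenses violate the policy."""
    violations: dict[str, list[str]] = {}
    for repo, direct in REPO_DEPENDENCIES.items():
        bad = [d for d in transitive_deps(direct) if LICENSES.get(d) not in ALLOWED_LICENSES]
        if bad:
            violations[repo] = sorted(bad)
    return violations


if __name__ == "__main__":
    for repo, bad_deps in reconcile().items():
        print(f"{repo}: license policy violation in {bad_deps}")
```

A real system would pull repo metadata and dependency graphs from the source forge and package registries rather than from in-memory dictionaries, and would run the loop continuously so drift is caught as it happens.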

We think of this as a way to give to these communities to enable them to start operating securely but also to be able to show their work so that they can produce Sigstore-based signatures and generate the attestation signal that the people who are looking to use their projects are starting to expect. We’ve focused almost exclusively on GitHub integrations, but through the year, we’ll add other critical integration points, such as Bitbucket, GitLab, Kubernetes, and pipeline integration.

On the other hand, we wanted to get the flywheel going with some high-value information. So, while Sigstore is gathering a head of steam and these communities are taking time to start producing consumable attestation information, we wanted to precede that with some very high-value intelligence. We started by doing some statistical analysis using Trusty, which is data science against open-source packages, looking for signals that tend to indicate both vulnerabilities and health. You can expect us to continue to enrich that. I'm not ready to announce anything yet, but keep watching this space. We'll start to introduce some very sophisticated and cool ways of thinking about open-source technology that complement a lot of what's already out there in the ecosystem. We'll make that all well integrated into the Minder capabilities so that you can start to define and enforce policies based on those signals.

Tim: Awesome, looking forward to this year and beyond. A lot of people wonder about your point of view on Kubernetes and the Cloud Native ecosystem. You're firmly focused on building the security company now. Of course, lots of interrelated work with development that takes place around Kubernetes, but what's your view on the state of that community, Craig? Is it at a maturity phase? All the big companies have their hosted and managed services. Do you still see room for more innovation for startups broadly across the Cloud Native ecosystem? Are there any pain points that you're continuing to hear about? Any advice for founders who are looking to continue to build in your old world?

Opportunities for Founders

Craig McLuckie: There’s tremendous opportunity in that space. Call me crazy. I don’t want to be that ’80s band with that one hit song, like singing at corporate events for money in my 70s. I want to branch out and try some new things, and the supply chain stuff has been something I’m very passionate about.

Honestly, one of the things I've observed is that bringing platform sensibilities to the security space introduces a very novel way of thinking. Look at the security ecosystem — why isn't everything just running reconciliation loops, when they work so damn well in the platform world? Why is this not just a Kubernetes-esque integration platform? Why doesn't this exist? We should just build it.

I think there's a lot of work to be done. I mean, obviously, generative AI is hot, and from firsthand experience of building and running the large language models that we use behind the scenes to produce some of the value that Trusty offers, there's a lot of fragility and brittleness there. I think there's an almost unfettered demand for capabilities to simply operationalize and deliver a service-like experience for some of these community-centric large language models. In the Gold Rush of generative AI, the operationalization pickaxes are going to sell very, very well. That's something where I'd certainly be interested to see where it goes. I would certainly be looking to consume that myself because right now, we just find that operationalizing those models is brittle and can be a bit of a challenge.

I think there's still unfinished business in the gap between the platform as a service and the Kubernetes layer. The gap between the Herokus, Google App Engines, and Pivotal Cloud Foundries of the world and the world of Kubernetes still exists. We haven't seen a lot of really great turnkey experiences. I think the work that we started doing with the Tanzu portfolio was a nod in the right direction, but I definitely think there's a wonderful opportunity to continue to explore and play with the idea of the service abstraction and to look at service dependency injection for modern workloads.

I’m also acutely interested in the way that WebAssembly is going to start to shape up and represent ways to bring new atoms of computational resources into new form factors using and borrowing a lot of the distributed system sensibilities that Kubernetes created. I think there’s tons of opportunity. If I hadn’t met Luke Hinds and fallen in love with supply chain security, I can think of three or four great startups that I’d be happy to do tomorrow, but I’m very happy with the one that we’re doing.

Tim: We are as well. Thanks so much, Craig, great insights. We could talk about these things for days, but I really want to thank you for spending time and sharing these insights. It's just a great pleasure and fun to be able to work together here and do a little bit to help you and Luke and the team as you build Stacklok. So, thank you.

Craig McLuckie: Thanks, Tim, really appreciate it.

 

The Evolution of Enterprise AI: A Conversation with Arvind Jain

Madrona investor Palak Goel has the pleasure of chatting with Glean founder and CEO Arvind Jain on the evolution of enterprise AI

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today Madrona investor Palak Goel has the pleasure of chatting with Glean founder and CEO Arvind Jain. Glean provides an AI-powered work assistant that breaks down traditional data silos, making info from across a company more accessible. It should be no surprise that Glean is touted as the search engine for the enterprise because Arvind spent over a decade at Google as a distinguished engineer where he led teams in Google search. Glean has raised about $155 million since launching in 2019 and was named a 2023 IA40 winner. Palak and Arvind talk about Glean’s journey and the transformative power of enterprise AI on workflows, the challenges of building AI products, how AI should not be thought of as a product but rather as a building block to create a great product, the need for better AI models and tooling, and advice for founders in the AI space, including the importance of customer collaboration in product development, the need for entrepreneurs to be persistent – and so much more!

This transcript was automatically generated and edited for clarity.

Palak: Arvind, thank you so much for coming on and taking the time to talk about Glean.

Arvind: Thank you for having me. Excited to be here.

Palak: To kick things off, in 2023, VCs have invested over a billion dollars into generative AI companies, but I can probably count on one hand how many AI products have actually made it into production. As someone who’s been building these kinds of products for decades, why do you think that is and what should builders be doing better?

Arvind: People are doing a great job building products. I've seen a lot of really good ideas out there leveraging the power of large language models. Still, it takes time to build an enterprise-grade product that can be reliably used to solve specific business problems. When AI technology allowed people to create fantastic demos that seemed to magically solve problems, expectations in the market went up very quickly: here's this magical technology, and it will solve all of our problems. That was one of the reasons we went through this phase of disappointment later on — it turns out that AI technology, while super powerful, is extremely hard to make work in your business.

One of the big things for enterprise AI to work is that you connect your enterprise knowledge and data with the power of large language models. It is hard, clunky, and takes effort, so I think we need more time. It’s great to see this investment in this space because you’re going to see fundamentally new types of products that are going to get built and which are going to add significant value to the enterprise. I expect we will see a lot of success in 2024.

Palak: What are some of those products or needs that you feel are extremely compelling or products you expect to see? I’m curious about the Glean platform. From that perspective, how are you enabling some of those applications to be built?

Arvind: If you think about enterprise AI applications, a common theme across all of them is that you connect your enterprise data and knowledge with the power of LLMs. Given a task that you’re trying to solve, you have to assemble the right pieces of information that live within your company and provide it to the LLMs so that the LLM can then do some work on it and solve the problem in the form of either providing you with an answer or creating some artifact that you’re looking for.

From a Glean perspective, that's the value that we are adding. We make it easy to connect your enterprise data with the power of large language model technology. We want to take on all the work of building data ETL pipelines, figuring out how to keep this data fresh and up to date, and setting up a retrieval system where you can put that data so that you can retrieve it at the time a user wants to do something.
We want to remove all of that technical complexity of building an AI application from you and instead give you a no-code platform that you can use and focus more on your business or application logic and not worry about how to build a search engine or a data ETL pipeline. We will enable developers inside the enterprise to build amazing AI applications on top of our platform.
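
To make the pattern Arvind describes concrete, here is a minimal sketch of the retrieve-then-generate flow: pull the most relevant enterprise documents for a question, assemble them into a prompt, and hand that to a model. This is an illustration under stated assumptions, not Glean’s implementation; the toy corpus, the keyword scoring, and the call_llm placeholder are all hypothetical stand-ins for a real search index and model API.

```python
# A minimal, hypothetical sketch of retrieve-then-generate (not Glean's code).
# The corpus, the scoring, and call_llm are placeholder stand-ins.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

CORPUS = [
    Document("Expense policy", "Employees may expense travel booked through the approved portal."),
    Document("Onboarding guide", "New hires receive laptops on day one and complete security training."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap; a real system would use a search index."""
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(terms & set(d.text.lower().split())))[:k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Assemble the retrieved context and the user's question into one prompt."""
    context = "\n\n".join(f"[{d.title}]\n{d.text}" for d in docs)
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API the application would actually call."""
    return f"(model response would go here; prompt was {len(prompt)} characters)"

if __name__ == "__main__":
    question = "How do I expense travel?"
    print(call_llm(build_prompt(question, retrieve(question, CORPUS))))
```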

The use cases I always hear about from enterprises start with developers: you already know how much value code copilots add. Statistics show that about 30% of all code is now written by copilots, so you’re starting to see real productivity wins from AI in software development. You can also use these models for test-case generation, and there are a lot of opportunities there. But writing code is only a small slice of what developers do.
Developers are not spending most of their time writing code. Most of the time, they’re figuring out how to solve a problem or designing the solution. Today’s AI tools focus on the 10 or 20% of a developer’s time spent actually coding and bring some efficiencies there.

Next year, we will see many more sophisticated tools to increase productivity in that entire developer lifecycle. Similarly, for support and customer care teams, AI is starting to play a significant role in speeding up customer ticket resolution and answering tickets that customers have. These are some of the big areas in which we see a lot of traction today.

Palak: As developers move from prototyping to production, do you think it’s a lack of sophistication in the tooling around some of these models or is it the models themselves that need to get better?

Arvind: It’s both. AI models are smart, but everybody has seen how they can hallucinate. They can make things up because, fundamentally, they are probabilistic models. Whether it’s open models or closed models, all the model companies are making incredibly fast progress to make these models better, more predictable, and more accurate. At the same time, a lot of work is happening on the tooling side. For example, if I’m building an AI application, how do I ensure it’s doing a good job? When we built an application earlier this year, all the evaluation of that system was manual. Our developers were spending the majority of their time trying to evaluate whether the system was doing well because there was no way to automate that process. Now, you’re seeing a lot of development in AI evaluation systems.
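
As a concrete illustration of moving from manual spot checks to automated evaluation, here is a minimal sketch of an eval harness. The my_app stand-in and the two test cases are hypothetical; real evaluation systems use richer metrics, larger datasets, and often model-graded scoring.

```python
# A minimal, hypothetical eval harness: score an AI application against
# expected answers instead of checking outputs by hand. my_app and the
# cases below are placeholders, not a real product or dataset.

def my_app(question: str) -> str:
    """Stand-in for the AI application under test."""
    canned = {"what is the capital of france?": "The capital of France is Paris."}
    return canned.get(question.lower(), "I don't know.")

EVAL_CASES = [
    {"question": "What is the capital of France?", "must_contain": "Paris"},
    {"question": "What is the capital of Spain?", "must_contain": "Madrid"},
]

def run_eval(app, cases) -> float:
    """Return the fraction of cases whose answer contains the expected string."""
    passed = 0
    for case in cases:
        answer = app(case["question"])
        ok = case["must_contain"].lower() in answer.lower()
        print(f"{'PASS' if ok else 'FAIL'}: {case['question']} -> {answer}")
        passed += ok
    return passed / len(cases)

if __name__ == "__main__":
    print(f"score: {run_eval(my_app, EVAL_CASES):.0%}")
```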

Similarly, there is infrastructure being built around the models to make sure you’re not putting sensitive information into them or taking sensitive output and showing it back to the user. That privacy and security filtering, that plumbing layer, is getting built in the industry right now.

There’s also a lot of work on post-processing of AI responses: when the models are unpredictable, how can you take their output and apply technology to detect when something has gone wrong? If the model hallucinates, you suppress those responses. The entire toolkit is undergoing a lot of development. You can’t build a product in three or six months and expect it to solve real enterprise problems, which was an expectation in the market. You have to spend time on it. Our product has worked because we have been on this journey for the last five years; we didn’t start building in November of 2022 after ChatGPT and suddenly expect it to work for enterprises. This technology takes time to build.
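
A simple way to picture that post-processing step is a groundedness gate: before an answer is shown, check it against the retrieved sources and suppress it if it does not appear to be supported. The word-overlap heuristic below is a deliberately crude, hypothetical stand-in for real groundedness or fact-checking models.

```python
# A minimal, hypothetical post-processing gate: suppress answers that are not
# grounded in the retrieved sources. The overlap heuristic is a crude stand-in
# for real hallucination-detection models.

def grounded(answer: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Treat the answer as grounded if enough of its words appear in the sources."""
    answer_words = {w.strip(".,").lower() for w in answer.split()}
    source_words = {w.strip(".,").lower() for s in sources for w in s.split()}
    if not answer_words:
        return False
    return len(answer_words & source_words) / len(answer_words) >= threshold

def postprocess(answer: str, sources: list[str]) -> str:
    """Return the answer if it passes the groundedness check; otherwise suppress it."""
    if grounded(answer, sources):
        return answer
    return "I couldn't find a reliable answer in your company's documents."

if __name__ == "__main__":
    sources = ["The Q3 sales kickoff is scheduled for October 12 in Seattle."]
    print(postprocess("The kickoff is on October 12 in Seattle.", sources))       # shown
    print(postprocess("The kickoff was cancelled due to budget cuts.", sources))  # suppressed
```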

Palak: As somebody who has gone on that journey from prototype to production, what advice do you have for founders that are starting similar journeys and are looking to be part of these conversations with big enterprise customers?

Arvind: A generic piece of advice for folks doing this for the first time is that building a product is always hard. There are a lot of times when you’ll feel that maybe this is a bad idea and you should not pursue it, or it’s too difficult, or there are a lot of other people doing it and they may do it better than you. I constantly remind people that a startup journey is hard; you’ll keep having these thoughts and just have to persist. The thing to remember is that it’s hard for anybody to go and build a great product. It takes a lot of effort and time, and you also have that time. If you persist, you’re going to do a great job. That’s my number one piece of advice to any startup or founder out there.

The second piece of advice, concerning AI, is that if you start to think of AI as your product, you will fail. We don’t see AI as fundamentally any different from other technologies we use. For example, we use infrastructure technologies from cloud providers, and that’s a big enabler for us. I have built products in the pre-cloud era, and I know how the cloud fundamentally changed the quality of the products we built and how easy it is to build scalable products.
AI should be no more than one of the building blocks you use, and you still have to innovate outside of the models. You have to innovate to make workflows better for people, rather than stopping at, hey, I can do something better with AI, and therefore I’m going to build a product.

Palak: Yeah, I think that makes a lot of sense and I love the customer centricity there, really figuring out what their needs are and building a product to best serve those needs rather than taking more of a technology-first approach and taking a hammer and sort of looking for nails.

To keep on this AI trend a little bit more, I think every Fortune 500 CEO this year was asked by their board, what is your AI strategy? And we’ve seen companies spin up AI skunkworks projects and evaluate a lot of early offerings. Naturally, a lot of founders and builders want to be a part of those conversations. I’m curious how you approach that at Glean and if you have more advice for founders looking to be a part of those conversations.

Arvind: In that sense, 2023 has been an excellent year for enterprise AI startups because you have this incredible push coming from the boardrooms, where CIOs and leaders of these enterprises are motivated to experiment with AI technologies and see what they can do for their businesses. We have found it very helpful because it allows us to bring out our technology. There’s more acceptance and urgency for somebody to try Glean and see what it does for them.
I’ve heard from enterprises that many of these experiments have yet to work out. A lot of POCs have failed, and so my advice to founders is to have an honest conversation with your customers, with the leaders that you’re trying to sell the product to. If you create an amazing demo, which is easy to create, and sell something you don’t have, you lose the opportunity and the credibility. It’s hard to bounce back from it. Even the enterprise leaders understand that, hey, this technology is new, it’s going to take time to mature, and they’re just looking for partners to work with who they feel have integrity and who have the ability to be on this journey with them and build products over time. That’s my advice to folks: be honest and share the vision, share the roadmap, and show some promise right now, and that’s enough. You don’t need to over-promise and under-deliver.

Palak: That will become the conversation in 2024: what’s the ROI on our AI strategy? For an enterprise leader who had mixed results trying out a POC, do they double down and stick with the effort? How do you think about that, and how are you seeing it from the Glean perspective?

Arvind: AI is a wave that you have to ride. For example, when cloud technology was just emerging, there was a lot of skepticism about it: “I don’t trust that my data is going into a data center that I don’t even hold the key to; I’m not going to do it.” Some companies were early adopters, and some companies adopted it late. Overall, the earlier you adopt these new technologies, the better you do as a company.

AI is even bigger than the cloud in terms of the transformative impact it will have on enterprises and the industry as a whole. It’s an undeniable trend, and the technology is powerful. You have to invest in it, and as an enterprise leader, you have to be willing to experiment. You don’t have to commit to spending a lot of money, but you have to see what’s out there and put in the effort to embrace and adopt this technology.

Palak: Absolutely. I’d love to double-click on that a little bit more, on AI being a bigger opportunity than cloud. I’d love to get a sense of where you think that opportunity lies and what some of these amazing experiences beyond our imagination are that you think will result from this new wave of technology.

Arvind: A decade ago, overall technology spend was about $350 billion. It’s probably double that now. The cloud alone is worth $400 billion, which is more than all the tech spending that used to happen 10 or 15 years ago. AI impacts more than software; it impacts how services are delivered across the industry. For example, think about the legal industry and the kind of services you get from it: what impact can AI have on those services? How can it make them better? How can it make them more automated?

If you start to think about the overall scope of it, it feels much larger. It will fundamentally change how we work, and that’s our vision at Glean. The way we look at it today, take any enterprise: the executives get the luxury of having a personal assistant who helps do half of their work. I have that luxury, too. I get to tell somebody to do work for me; they look at my calendar, they look at my email, and they help me process all of it. I feel like it’s unfair that I have that help and nobody else in our company does. But AI is going to change all of that.
Five years from now, each one of us, regardless of what role we play in the company, how early we are in our career, we’re going to have a really smart personal assistant that’s going to help us do most of our work, most of the work that we do manually today. That’s our vision with Glean, that’s what we’re building with the Glean assistant.
Imagine a person in your company who’s been there from day one and has read every single document that has ever been written inside this company. They’ve been part of every conversation and meeting, and they remember everything, and then they’re ready for you 24/7. Whenever you have a question, they can answer using all of their company knowledge, and that’s the capability that computers have now. Of course, computers are always good at processing large amounts of information, but now they can also understand and synthesize it.

The impact of AI is going to be a lot more than what all of us are envisioning right now. We tend to overestimate the impact in the next year, but we underestimate the impact the technology will have over the next 10 or 20 years.

Palak: Just so you know, Arvind, when you were starting Glean, I was working on the enterprise search product at Microsoft, and I remember you were cold messaging people on LinkedIn to try out Glean. One of the people you happened to cold message was my dad, who also went to IIT. So it’s just a funny story.

Arvind: If I’m trying to solve a problem, I want to talk to the product’s users as much as possible. For example, even at Glean, I was SDR No. 1. I spoke to hundreds and hundreds of people, whoever I could find, whoever had 10 minutes to spare for me, and asked them: hey, I’m trying to build something like this, a search product for companies. Does it make sense? Is it going to be useful to you?

The reason it’s so powerful to do that exercise yourself, and to not stop after you hire a couple of people in sales but keep going, is that it generates immense conviction in your mission. As we talked about earlier, the journey is often hard, and you start to question yourself on a bad day. But if you have done that research and talked to lots and lots of users, you can always go back to it and remember: no, I’ve talked to many people. This is the product they want. This is a problem that needs to be solved. That’s what I always find very helpful.

Palak: Arvind, you’ve been in the technology industry for a very long time and have been a part of nearly every wave of technical disruption. What have you learned from each of these waves and how have you applied those learnings to AI?

Arvind: With each of these big technology advances over the last three decades, we’ve seen how it fundamentally creates opportunities for new companies to come in and build products that are better than what was possible before, when that technology wasn’t available to anybody. That’s one thing I’ve always kept in mind. Whenever a big new technology wave comes, that’s the opportunity. Whether you are starting a new company or already running a startup, you have to figure out how that wave is going to change things, how it gives you opportunities to build much better products than were possible before, and then go work on it. My approach has always been to see these big technology advances as opportunities rather than thinking of them as disruptions.

Palak: I’d love to get a sense of your personal journey as you’ve gone through each of those different waves. What are some of the products that you’ve innovated on and what are you so excited about building with Glean?

Arvind: I remember the first wave; I was in my second job, and we were starting to see the impact of the Internet on the tech industry and the business sector. I got to work on building video on the web. It was incredible to allow people to watch video content directly on their laptops over the internet. We started with videos that were tiny because there wasn’t enough bandwidth on the internet for us to provide a full-screen experience. Regardless, it was still fundamental; there was no concept before that of, hey, I can watch a video whenever I want.

Then the next big thing we saw was mobile, with the advent of smartphones and the iPhone, and it fundamentally changed things again. At that time, I was working at Google, and it changed our ability to personalize results, personalize Google search, and personalize maps for our users at a level that wasn’t possible before, because now we knew where they were and what they were doing. Are they traveling, or are they staying in one place? Are they in a restaurant? You can use that context to make products so much better.

We’re in the middle of this AI wave now, and our product is a search product for companies. AI technology, especially large language models, has given us the opportunity to make our product 10 times better. I think back to when somebody asked Glean a question: we could point them to the most relevant documents inside the company that they could go read to get the answer to whatever question they had. Now we can use LLM technology to take those documents, understand them, and quickly provide AI-generated answers to our users so they can save a lot more time.
It’s been really exciting to have that opportunity to use these big technology advances and quickly incorporate them in our products.

Palak: Yeah, I think that’s one thing that’s always really impressed me about Glean. As far back as 2019, even before ChatGPT, Glean was probably everybody’s favorite AI product. I’m curious, how has Glean’s technical roadmap evolved alongside this rapid pace of innovation over the last 12 months?

Arvind: We started in 2019, and in fact, we were using large language models in our product in 2019. Large language model technology was, in some ways, invented at Google, and the purpose of working on it was to make search better. So when we started, we had really good open-domain LLMs that we could retrain on our customers’ enterprise corpuses to create a really good semantic search experience. But those models weren’t good enough that you could put their output directly in front of users. That only started to happen in the last year, when the models finally became good enough to take their output and put it right in front of users.
So this has allowed us to completely evolve our product. What used to feel like a really good search engine, something like Google inside your company, has become a lot more conversational. It’s become an assistant that you can go talk to and do a lot of things with. You can give it complicated instructions, because we can follow complicated instructions using the power of large language model technology and then solve the problem the user wants to solve. We can parse and comprehend knowledge at a level we never could before, and we can go beyond answering questions and actually do work for you.

One thing that we realized is that AI technology is so powerful, and there are so many internal business processes that it can make much more efficient and better, that we as a company are not going to be able to go solve all of those AI problems ourselves. Our job has become: how can we use all of the company knowledge we have at our disposal and give tools to our customers so that they can bring AI into every single business workflow and generate efficiencies that have never been seen before?

Palak: I think that’s one of the things that, from an outsider’s perspective, has made Glean such a great company: just how big the vision is and how you’re starting with the customer and working backward. I’m curious how you think about internalizing some of those philosophies within your company, how that has evolved, and how the product is evolving toward this bigger, broader platform vision that you just alluded to.

Arvind: In an enterprise business, it’s very important for us to spend a lot of time with prospective customers, understanding the different use cases and business problems they’re trying to solve. All businesses are very different in some ways, and a big part of building products that can help a wide range of customers is spending time at the forefront working with our customers, really understanding their scenarios and their data, extracting common patterns and common needs, and then driving our product roadmap based on that.

Our product team spends a lot of time on this process. Every quarter, we look at the top things we’re hearing from our customers, and of course, we also have our own vision of where we think the world is headed with all these new AI technologies. We combine those two things to come up with our quarterly roadmap and then execute on it.

The reason we started working on exposing our underlying technologies as a platform that businesses can build applications on was exactly that. As we talked to all these enterprises, one would show us, hey, I want to bring efficiency to my order workflow process, and this is how it works. Somebody else would tell us, hey, I’m getting a lot of requests for my HR service team, and I want to help people build a self-serve bot for all the employees in the company whenever they have HR questions. As we listened to all of them, we realized we couldn’t possibly anticipate everything they want to solve.
Our job then became: can we give them the building blocks that make it easy to take this platform and build the value they’re looking for? They do a little bit of work on top of the product and platform we provide to solve those specific business use cases.

We started building our AI platform for that reason, because AI is so broadly applicable across so many different businesses, so many different use cases, and it became very clear to us that we need that collaboration from our customers to really get full value.

Palak: Awesome, Arvind. So I have a few rapid fire questions as we wrap up. The first is, aside from your own, what intelligent application are you most excited about and why?

Arvind: GitHub Copilot is one of them. I see how our developers are able to use it, and there is a clear signal from them that this is a tool that’s truly improving their productivity.

Palak: Beyond AI, what trends, technical or non-technical, are you most excited about?

Arvind: I’m really excited about how the nature of work is evolving rapidly: distributed work, the ability for people to work from wherever they are, and the technologies that are helping us become more and more effective working from our homes. That’s the thing I’m really excited about, and we’re going to see a lot more of it. Hopefully we’ll see some advances in telepresence that make working from anywhere the same as working from the office.

Palak: Yeah, that’s a good one. Arvind, thank you so much for taking the time. It was really a pleasure to have you on.

Arvind: Thank you so much for having me.

Coral: Thank you for listening to this week’s IA40 spotlight episode of Founded & Funded. We’d love to have you rate and review us wherever you listen to your podcasts. If you’re interested in learning more about Glean, visit www.Glean.com. If you’re interested in learning more about the IA40, visit www.ia40.com. Thanks again for listening, and tune in in a couple of weeks for our next episode of Founded & Funded.

 

How To Not End Up In A Board Governance Situation Like OpenAI

Managing Director S. Somasegar and General Counsel Joanna Black discuss startup board governance and the role of a board in a startup's growth.

In this episode of Founded & Funded, Madrona Managing Director S. Somasegar and General Counsel Joanna Black discuss the fundamental role of a board in a startup’s growth and development with Madrona Digital Editor Coral Garnick Ducken. They touch upon the importance of aligning strategic views with board members, managing disagreements, and effective board governance to ensure that the organization is run efficiently. The duo offers numerous insights into the intricacies of board structure at different stages of a startup lifecycle, drawing parallels from recent events at OpenAI. The conversation covers the need for transparency, both in sharing good and bad news, and the necessity for a functional board reflecting a functional culture.

This transcript was automatically generated and edited for clarity.

Coral: What we’re trying to do here is set the stage before we dive into all the advice on startup board governance and structure and everything else we’re going to talk about. So, to kick it off for us, Joanna, can you give us the quick and dirty on the problems surrounding the board structure that we saw at OpenAI not that long ago?

Differences between Nonprofit and For-profit Boards

Joanna: I think what’s interesting about OpenAI is that it is really a nonprofit, and the nonprofit owns a for-profit subsidiary that is fully controlled by the OpenAI nonprofit. Most people don’t realize that a nonprofit board is very different from a for-profit board. We all understand that a for-profit board tries to maximize shareholder interests. A nonprofit board, however, is guided by a stated mission, which is a very different objective. The nonprofit board has the same duty of care and duty of loyalty, but it also has a separate duty that’s not found in a for-profit organization: a duty of obedience, an obedience to the laws and the purposes of the organization.

Coral: Before we unpack a lot of that and then apply it to startups and founders more broadly, why don’t we just define the term governance a little bit? I know that in a lot of the media reports and other things that we’ve all been hearing and reading, it’s gotten thrown around a lot, and I don’t know that people fully understand what that means. Okay, governance, yeah, okay, we all get it, but I don’t think a lot of people do.

Defining Governance and its Importance

Joanna: Governance, from a legal standpoint, really refers to how an organization is managed or governed: by whom and how. For most entities, governance starts with their governing documents. These are the foundational rules for the entity, typically a charter and the bylaws. The charter is where you will find the rules that everyone must follow when it comes to governing the company. The charter is also typically what sets up a board of directors, or whoever is going to manage and govern that organization. The board has the ultimate management authority over the organization, and the authority of the leadership team, the CEO specifically, derives from the authority of the board.

Coral: So then, Soma, if we pivot to you a little bit, you’re obviously the guy with all of the board experience here. I’ve heard you say before that a board is basically half company building and half governance, especially in the early stages. So why don’t you break that down for us: what’s the purpose of the board, and what are some of the experiences you’ve had in early-stage board building?

The Role of a Board in Early-Stage Companies

Soma: In general, I think a board is all about how you provide the right level of checks and balances to ensure that an organization is being run and managed in the most appropriate way. You follow the laws. You ensure that the right things are happening. You also keep in mind the fiduciary responsibility that the board has in terms of putting the shareholders’ interests first. As much as I say that governance is important right from day one, what usually happens is that in a very early-stage company, the responsibility of the board is twofold.

One is focusing on ensuring that the right startup board governance is in place. The other is being a trusted partner to the CEO and the founding team and helping with what I call company building, whether that’s building the team, helping with product building, product strategy, or go-to-market plans and strategy.

There are a bunch of things that need to happen in early-stage companies, where everybody is learning as they go along. It’s the board’s responsibility to be a trusted adviser and a trusted partner to the founding team and the CEO. In the early days, probably more time is spent on company building. As the company and the business scale, more of the governance comes into play. By the time a company reaches a level of scale and becomes a public company, the board is predominantly about governance and what I call strategic alignment.

Coral: You made a good point there. Startups are growing — it is basically their purpose. And the board needs to grow along with them, while making sure the mission and everything else stay aligned, which obviously is where things got into a little bit of trouble with the OpenAI situation.

So why don’t we talk through the board at those different stages, when you bring in independent board members and that sort of thing? I think both of you can tackle that a little bit. Joanna, why don’t you start by setting the stage and telling us how things evolve as more funding comes in.

Board Evolution with Company Growth

Joanna: I do think, exactly as Soma is saying, that although the ultimate role of the board is oversight, that oversight changes over time — just like when you’re a parent. Your parenting duty over a toddler is very different from your parenting duty over, say, a teenager or even an adult child.

I think very much that, at the end of the day, the board’s role is oversight. They oversee strategy. They monitor risks. When you launch a startup, you might have a sole director on the board. When you don’t even have any investors, you literally have a board of maybe one or two people, maybe both of the founders if there are two founders. At this stage, it’s literally the founders’ company, and the board, directors, and officers are often exactly the same people. Once you raise some money, the board is often comprised of three people. One is the CEO. One is what we’d call the preferred director, usually the representative from the lead investor at that Seed stage. The third really depends: it could be an independent member, another founder, or another investor representative. Something to note here, Coral, and I think most people know this, but since decisions are made on a majority basis for the most part, it is good practice to have an odd number of board members. Having five members at this stage is a bit unwieldy, so we usually don’t go up to five. As a company raises more money from more investors, those investors may ask for a board seat. We might want to change the composition as the company grows and tasks change. So the number of board members you have might increase, but we usually keep it odd-numbered and balanced.

Coral: And Soma, what should founders and CEOs keep a lookout for? Any sort of markers or moments that, okay, maybe it’s time to reevaluate what the board looks like. Do we need independent directors? Are there moments that you look for specifically when it might be time for some of that?

Soma: I think it’s twofold, Coral. It’s a balance, right? You don’t want a nine-member board for a company that has two employees. You also don’t want to have a thousand people in a company and like two members on the board.

But having said that, you should focus initially on getting people on the board who are really, really aligned with you, both from a strategic perspective and from a value-creation perspective. The reason the firm that leads your Seed round of financing wants a board seat is that they are putting their money where their mouth is, so to speak. And they are responsible; they have fiduciary responsibilities to their LPs. They want to make sure that they have the right level of oversight, governance, and visibility to ensure that all the right things happen.

So every founder has to think about which venture capital firm they’re taking money from and which partner is going to be on their board. Is that partner somebody they’re excited to work with? Because there needs to be a meeting of the minds, so to speak.

The closest analogy I can give you, Coral, is the following. Let’s say you’re sitting in Seattle, you’re the board member, and you have a CEO who wants to take a road trip to San Diego. Both of you need to be aligned that you want to go from Seattle to San Diego. Then you can argue over whether to take I-5 South, or 101, or some other road. But if the CEO wants to go to San Diego and the board member wants to go on I-90 East, then you’ve got a problem.

Coral: So, how do you navigate that? Obviously, you do your best to pick people who are aligned with you, those best partners. But if at some point you get to a place where you’re not aligned anymore, like we saw with OpenAI, what’s the actual route that should be taken to handle that sort of conflict?

Choosing the Right Board Members

Soma: That’s why it is important for you to know who you’re going to be working with and to know, hey, this is a person I’m ready to work with, because it’s not like you’re always going to be in agreement. There are going to be disagreements. There are going to be what I call debates. There are going to be some ferocious arguments along the way. But as long as you know that you are aligned on doing the right things, and you accept that people have different perspectives and different points of view, then I think you can work it out. Sometimes a board and the CEO might just decide, “This is one thing where we disagree.” But they have enough trust in the other person that they are going to disagree and commit.

That, I think, is a good way to solve things sometimes, because it’s not like everybody’s going to say yes to everything all the time. That’s not what a board is all about, and that’s not what a CEO should be expecting. If a board expects a CEO to behave that way, then they’re probably not the right CEO in the first place anyway, right? So having arguments and having differing points of view is okay as long as you can work through them and get to a common understanding about how you’re going to move forward. But if you realize that the relationship is so strained, for whatever reason, that you can’t navigate through it, then some change needs to happen.

Coral: Is there a process by which that change has to happen whether in terms of what the rules are that have been put in place, Joanna, or otherwise?

Navigating Disagreements and Conflicts within the Board

Joanna: There are usually rules in the charter and the bylaws about how you would remove a board member, add a board member, or switch out board members. But again, all of those rules, like almost all legal documents, are there as a fail-safe for when you have to figure things out. Ultimately, it’s about the relationship, like Soma is saying. You have to work things through. A very functional board, I think, reflects a very functional culture. The startup does best when there’s a functional board that knows how to work with its management. Ultimately, it involves both sides figuring that out.

Something I have seen in my past that hasn’t worked out great is when the CEO or the management don’t want to bring things to the board. They don’t think the board will understand. They don’t think the board will consider it. Oftentimes that can cause issues when the board does find out. It’s really helpful for the founders and the board to have a good, trusting relationship, and vice versa. The board having honest, open conversations with the leadership and the management goes a long way toward good startup board governance and toward avoiding sudden decisions that come out of nowhere and surprise either the leadership or the board.

The Importance of Transparency and Communication in Startup Board Governance

Soma: The thing I would add is that it is a two-way street. As much as you could argue that the CEO reports into the board, the CEO and the board should really think of themselves as partners. A partnership works when both sides operate with the same level of what I call integrity, transparency, communication, and willingness to work together. The other thing is that, as a CEO, sometimes you have good news to share, sometimes bad news, sometimes lousy news. Rather than trying to be super nuanced about it, ensure that communication happens as close to real time as possible, whether the news is good or bad. If you want to build a trusted partnership, it’s really important that you communicate as and when something happens that you think the board needs to know. That said, good news and bad news warrant different velocities. In some sense, I would say the velocity for bad news should be higher than the velocity for good news.

Coral: Never wait to share news; keep it open and transparent, with all lines of communication open, always.

Well, I think that this will be really helpful for people, you know, thinking about starting a company or those sitting in companies right now. And I thank you both so much for joining me today.

Joanna: Thank you.

Soma: Absolutely, Coral. Thanks for doing this. I think this year has been particularly interesting with what happened at FTX and what happened at OpenAI, and all these conversations about how we ensure the right startup board governance is in place before we realize it’s too late. So this is a good reminder for every founder and every CEO, whether they are at day one, looking to go public tomorrow, or a one-year-old company: governance and oversight are really, really important. Have the right level of energy, focus, and attention on that. Don’t go overboard, and don’t underinvest in it.

Coral: Perfect. Thank you guys both so much.

Karan Mehandru and Anna Baird on Navigating Sales, Growth, and Leadership

They dive into the evolution of the CRO role since COVID, building high-performance sales teams, go-to-market strategy, and so much more.

This week, Madrona Managing Director Karan Mehandru hosts Madrona Operating Partner Anna Baird. The operating partner role is a new one for Madrona, and we’re excited to have her on the team to advise our founders. Before transitioning to board positions in the last couple of years, Anna was most recently the CRO at Outreach. Before that, she was the COO at Outreach, and before that, she was the CFO at several companies. That trifecta of C-Suite experience gives Anna a unique perspective to help founders navigate any blind spots they may have. Anna and Karan, who is on the board of Outreach, dive into how the sales profession has changed, how the role of the CRO has changed since COVID, how to talk to customers, how to build high-performance sales teams, go-to-market strategy, AI’s role in sales, and so much more. These two tackle it all, and it’s a must-listen.

TL;DR

  • Maintain focused consistency: When running a business, especially in the sales domain, avoid changing strategies too frequently. Find a core focus and maintain it for sustainable growth.
  • Adapt and stay close to customers: The market might change, but always understand the pain points your product solves. Continuously evolve by staying close to the customers and their changing needs.
  • Balance growth and profitability: Understand the balance between growth and profitability. It’s crucial to stay focused, not just on expanding but also on maintaining a profitable approach.
  • Hire for traits, not just experience: When hiring for sales teams, seek traits like intellectual curiosity, a desire to win, and collaborative skills. These traits often matter more than specific experiences when navigating new or challenging markets.

This transcript was automatically generated and edited for clarity.

Karan: Hello everyone. My name is Karan Mehandru. I’m one of the managing directors here at Madrona out of the California office. It is my pleasure and privilege to welcome Anna Baird to this podcast, and Anna and I go back a long way. I’ve had the pleasure of calling her my friend for almost a decade, and I’ve had the privilege of working with her for almost five years out of the 10 at Outreach. So, super excited to have you with us, Anna. Thank you so much for making time.

Anna: Hey, Karan. Super glad to be here and part of the Madrona family.

Karan: Awesome. We’re excited to have you. We have a lot of great topics to cover with you. Maybe we start with how your journey and your career took off and how you got into tech in the first place.

Anna: I’ve been in tech since I started. I was an accounting major in college and got into one of the big accounting firms, KPMG, but I wanted to do tech. I wanted to do the accounting side and help tech companies. So, I moved to Silicon Valley, to San Jose at the time, and was with KPMG for 17-and-a-half years.

I started by helping startups go public. I worked with Google pre-IPO, and then Intuit, and a bunch of others for the next seven years. There’s probably not a boardroom in Silicon Valley I haven’t sat in at one point or another after 17-and-a-half years. I moved to the consulting side partway through that and really loved helping set those foundations to operate effectively.

I left and became a senior vice president at McAfee, running finance, governance, and risk. When McAfee was bought by Intel, I decided to become a CFO. I like to do a lot of different things. I became a CFO, then a COO, and then a CRO. Those are all the C’s I’m covering. No more. I’m done. That was it.

Karan: Great. Not many people we encounter have held the CFO, COO, and CRO roles, which makes you unique in and of itself. Tell me why you transitioned from being a CFO to wanting to be a CRO. Did you just wake up one day and say, well, I’m done counting the numbers, and now I’m going to start driving the numbers? How did that mental switch come about in your head?

Getting into tech from accounting

Anna: Well, it’s funny. People don’t realize this, but as a partner at KPMG, I had a $6 million quota a year, so I was already selling.

Karan: We’re all salespeople.

Anna: That is so true. As a CFO, you’re trying to get investors. You’re selling the company. I loved that aspect. I loved customers. I loved understanding what the customer pain was. My favorite thing, even when I was in accounting, was understanding what product a company was building and how they were solving the customer pain.

I loved understanding those things. I think it’s what made me a good consultant and accountant at that time. I think it’s also what made me a good CFO, but even as a CFO, I kept ending up taking on other things because I loved the operational side. Everybody’s like, oh, Anna, take this, and Anna, take that. Then I said, you know what? I’m going to do the COO side. It gives me more flexibility and breadth.

Then, of course, in 2019, you, Manny, and others asked if I would take on the CRO role. When you are building a company, and it’s all hands on deck, and you’re growing like crazy, and there’s a change that you need to make sure stabilizes, it’s the right thing to do, right? Sometimes, there are things that are just the right thing to do for the business.

One thing I’ve been pretty good at in my career is always putting the business first. I was helping a lot with sales anyway as a COO, so it wasn’t some crazy transition, but I loved the team, and I knew we needed to stabilize and make some different changes, so I took that on from 2019 until 2022.

Karan: We’re so glad you did. I remember when you came in, we were still under $20 million, and you took Outreach all the way to $250 million, so it was a wonderful journey. I’d love to ask you about that, but just to clarify for listeners, the decision for us as the board was simple because we just got you to ask for the budget, approve the budget, and then drive the numbers, so we didn’t have to do anything. So that was awesome.

Anna: I was the COO and the CRO for too long. It took us a while to find a CFO. That was the easiest job I ever had. I was like, is this a good idea? Yes, it is. Let’s go do that.

Karan: That’s awesome. Let’s talk about how the sales profession has changed over the last decade. I mean, there’s so much that has happened, new tools in tech. There are so many changes in the actual way things are sold, whether it’s product-led growth or sales-led growth. Then, things changed even more when we went through COVID.

I’d love to talk to you about your experience in this profession. You’ve been around CROs, you’ve been a CRO, you’ve been a CFO, you’ve been a COO. So, from your purview, what are some of the biggest changes that you’ve seen play out in the market in the sales profession, and then how did that get either accelerated or exacerbated when COVID hit?

Changes in the sales profession since Covid

Anna: It’s been a fascinating journey, especially these last few years, right? Everybody says, oh, how did you lead through a pandemic? Well, no one’s ever done it, so there’s no playbook. We were all figuring it out as we went and trying to make the best decisions.

I think one of the things that changed, and I’ll start at the top, is the CRO role in general. It expanded significantly in terms of what you need to understand and the skill set you need to have. I always explain that the CRO became part CHRO from a human resources side, part CIO, and part CFO.

Part CHRO to deal with mental health and remote work: how do you keep your team focused and effective and address issues quickly so that everybody is healthy and in a good place? Part CIO to understand the technology available for working with remote employees, what tech was out there, and how to think about utilizing that tech for your go-to-market strategy.

Then part CFO for the data. Data is so key. Once you have the technology, you have to understand what data you need to run the business, how you are viewing it, and what the rhythm of your reporting and operating is, so that you can make data-centric decisions, because you couldn’t see everything anymore.

That changed for CROs, obviously, but it also changed for the teams in their understanding of customer engagement, what was happening, and how customers wanted to be educated. When you think about the world being so remote when we first hit COVID, and still quite a bit remote today, there’s so much self-education happening with customers.

We’ve talked about how the number of touchpoints needed to get a deal approved went from 10 to 17 to 21. You know what I mean? It was insane. They’re all working in different locations, and they’re not sitting in some conference room talking about your deal and your product every day. So, how do you educate that group? How do you create central locations where they can access business cases, videos on the product, and those sorts of things?

If they’re already a customer, how do you do some product-led growth by showing them what’s available? Those things became way more critical than they used to be. For the account executives and the go-to-market teams, people didn’t want to wait for three calls to get their question answered anymore either. They didn’t have the time. They were back-to-back with scheduled meetings because that’s how we’ve had to operate and still do.

So, they can’t sit there and go, yeah, let me get back to you. Let me get back to you on that question. Let me come back. We used to do things like that, I think, with go-to-market strategy, and it was okay. It was accepted as part of the ecosystem, and it’s not anymore. How can you educate a customer before they show up on a call?

Can you send a video to give them some visibility into the product and say, “Hey, here are the four things we hear from leaders like you”? You show that you understand their role and who they are: here are some things we hear from leaders like you, we’re experts in the industry, how could we help you, and how can we make this next call the most effective? Here’s a video of what our product does. Those kinds of things are game-changing now.

Karan: It’s interesting. As I listen to you, it makes total sense; it sounds coherent and logical. That said, put it in the context of companies that are starting out and the founders and their experiences. So many of the founders we work with these days are product founders or tech founders, and a lot of times go-to-market isn’t a natural skill or experience they’ve had.

A lot of times, they don’t have any go-to-market people around them. So help us understand: when you advise founders now, in this role as an operating partner at Madrona and on the boards of companies, and you’re looking at early-stage founders who just got their product out, how do you advise them to interact with their go-to-market teams?

How do you think about advising them on go-to-market strategy? At what point should they start thinking about it? “If you build it, they will come” is usually the model that a lot of them operate on. So, help us understand what you advise when you’re looking at a new founder or a founder of an early-stage company that’s starting to think about go-to-market strategy.

Go-to-market strategy

Anna: I think there are a couple of things. You have to build your go-to-market organization on a foundation, a process that is consistent across the teams. You’ve got to build it from that foundation up. Part of it is making sure you let them focus. Don’t change strategy every quarter. You cannot have new messaging and a new strategy every quarter. You need to get the message out to the market and watch how the market responds, and that can be pretty fast, but don’t change strategy every quarter. Especially when you are a product founder, you can sometimes change the product quickly, and nobody knows; they don’t see what’s happened in the background.

When you’re changing your messaging all the time, you definitely see it, right? Each of those go-to-market people is the best marketer you have. They’re the voice on the street every day, talking about who you are, what you stand for, and what pain you solve.

I think that is so critical because, especially as product founders, you get so engaged in the engineering and the tech, and you want to talk about the features that you built and the functionality that you have, and your customers don’t care about that. They care about the pain you solve for them, the problem that you make less painful. So, how do you make sure that you are creating the language and keeping that consistent?

Here’s why we exist as a company, here’s the pain we solve for you, and here’s why we know your pain. We’re totally empathetic because we see other leaders like you all the time because that’s who we talk to every day, and we know that these are the types of issues that you run into. Then you get that whole empathy, I’m in your shoes, you’re selling with them, not to them, because they’re trying to solve a problem in their organization, not buy a feature or functionality.

Karan: Love it. I love that framework — why we exist, what we do, and how we do it — which is a great framework for all of us to remember, and more so, I guess, in a time when there’s massive change, just to remind everybody to rinse and repeat that message. You used the word focus, and I want to come back to that.

Obviously, scarcity does breed focus, but then companies grow up, and then they raise a lot of money, then they become multi-product, and then they hire a lot of people, then they have multiple locations, and then they have the tyranny of choices in front of them.

We went from scarcity to abundance in the market, and now we’re going back to a little bit of scarcity because the market is correcting in front of us. One of the things I’d love to understand from you is, you manage the sales team through scarcity, then to abundance, then back into scarcity.

You’ve been a CRO, a COO, and a CFO, so you understand the difference between growth and profitability more than most CROs, who are just thinking about growth. It would be great to hear some of your anecdotes or lessons learned as you managed that massive growth engine at Outreach and Livongo and all these companies. How do you balance growth and profitability? How do you breed focus when there are so many choices in front of you?

Balancing growth and profitability

Anna: We say it all the time: it’s about what you say no to, right? You do have to say no to things. You can’t have 20 key priorities for the business, for a go-to-market team, or for engineering. There have to be three things that you are trying to accomplish in the next 12 to 18 months, whatever your timeframe is, and you have to be maniacal about that focus.

That’s why you can’t be changing every quarter, because it takes time to build. If you’re looking at building competitive moats and solving that customer pain, whether the market is in abundance or scarcity, you control what you can control, which is: why do you exist? It’s back to that again, right? Why do you exist as a company? Because you solved something, and don’t ever forget that.

Even when the market is abundant, you must stay focused on solving a pain. How do we solve it even better? How do we make it more effective? How do we make it faster? How do we solve the next piece of the pain, and build that roadmap of understanding? We talk about our product roadmaps and really needing a view 12, 24, and 36 months out of what we would do next. If we said this is the customer’s pain, and here’s what we’re seeing, what would we take on next?

Always stay close to the customers, always stay close to understanding what’s happening in the market, and make sure you are controlling what you control, which is what your company does, and what you build, what you put in the marketplace. That is always a recipe for success. What happens is people get scared, and I get it. When you watch your child suffer, it is hard to be objective.

You must also surround yourself with leaders who will help you. It’s too personal. So, when you have leaders around you who can help you go, let’s all breathe for a minute together and let’s talk this through. What will be the most impactful thing for our business and our team?

Always remember that the most impactful thing for your customers is going to be the right strategy to go after, based, obviously, on the tech you’ve built and where you have the assets to take those major next steps.

Karan: That’s great advice. I want to talk about culture, and as part of that, hiring in particular. You mentioned that one of the first things you can do is build leaders around you who embody the cultural values that you and the company espouse.

When you think about building a high-performance sales team and when you think about hiring your next-generation leaders, maybe even the first rep or the first five reps, what are some of the organizing principles or traits that you look for when you’re lighting up a sales team?

Building a high-performance sales team

Anna: That’s a great question. There’s something that a lot of people don’t do. Everybody’s like, I just hired a salesperson, and they were great, and they came from this other company and they did great there, so they’re going to do great here. Anybody who’s been in this long enough, you know that’s not true.

I still remember when I first came to Outreach. We hired some of the top salespeople from companies that had been crazy successful, but they’d never introduced a new category. They’d never done an educated sell, because their company had a great product that people came to them for, or that they just needed to talk about. When you’re trying to educate the market about something that never existed before, that is a whole different selling strategy.

So, understanding the business you have and what your salespeople need to do in the marketplace is the first step in defining the core attributes you’re looking for. What are the traits that you need? I’ll say three traits are always critical when you think about the skill sets you’re hiring for.

The first is intellectual curiosity. You want them to understand the customer’s pain. That’s critical. Understand the customer’s strategy. How do we align our product so that we’ll win every time? The second is a desire to win, which comes from a lot of different places: overcoming adversity in their background, for example. Big families are a good one. They’re one of seven children; they had to compete to stay ahead.

Not just athletics, but competitive arenas in general, like chess, and lots of different areas where people say, I really like winning. That adrenaline is the desire to overcome and get to that outcome. The third is somebody who knows how to quarterback but is an incredible collaborator, because they know how to bring the power of a team to a customer, not just themselves.

If they’re all about being the hero and winning every time on their own, that works okay in some of your early days sometimes. That is not a strategy for long-term success. When you bring the power of the team, you win faster; you win more efficiently; you win bigger.

So, somebody who knows how to get out of their own way and they don’t have to be the star. They know how to take a step back, bring the right people in, and prepare them for a great customer conversation. I think those three.

Karan: I remember you telling us all at one of our sales offsites at Outreach, when you got up on stage and said, "If you want to go fast, go alone. If you want to go far, go together." That’s the African proverb you reminded everybody of, and I still remember it.

Anna: Yeah, and it’s totally true. Just like history, it repeats itself over and over again. We see that. You forget sometimes. You have to remember the basics.

Karan: So, Anna, now let’s fast-forward to today. You are our first operating partner at Madrona. You’ve collected a wealth of experience, lived every role that exists in a company from early to late, and worked across different functions and industries, from sales to healthcare. I would love to hear your reasoning for why you joined Madrona and why you’re an operating partner here today. And then secondly, to the next Anna who is just graduating from college, potentially getting out of KPMG, and aspires to be a CEO, CRO, or COO, what have you: what would you tell the 22- or 23-year-old Anna who’s graduating today?

Why Madrona

Anna: Oh, great questions. I joined Madrona because I’m a big believer in this, and it fit this stage of my career. I think it’s important, and you see this a lot at Madrona: how do we help fill in the skill sets and the gaps that some of our founders might have? They’re not all coming from 10 different backgrounds, so how do we bring skills and capabilities to the table to make them even more successful?

When you’re a founder, especially at those Series A, B, and C rounds, you need a partner who will bring things to the table that help you figure out the corners you can’t see around, because you’ve never been around them.

This stage of my career was about giving back. It was about teaching all the things I learned in all those roles you mentioned: how to avoid the mistakes and take advantage of the opportunities faster, because you do learn a lot of that. It’s like getting a mortgage. By the time you get good at it, you don’t do it anymore. So, how do I make sure I help other people avoid those mistakes?

In this market in particular, when you are looking for a great partner, firms like Madrona are bringing former operating executives onto their teams, and we have quite a few at Madrona. Those people have your back. They’re one phone call away when you need to say, "Hey, can I talk through X or Y?"

At one of our CEO all-hands the other day, I talked about this maniacal focus on focus. Stop looking at the market and focus on the pain you solve, what you’re building, and how you’re taking that to market. If you do that, you will get through this. Getting distracted by all the noise is one of the challenges you’ll deal with, and we’re all learning that lesson right now. That’s really critical.

One of the other things that is key, and you hit on this earlier, is the culture you build. We were talking about performance culture, and I just want to hit on it for a second. One of the reasons I joined Madrona was that it had an incredible backbone of ethics and culture, and incredible skills on this team.

That is important as you think about the partners you’ll work with, because part of what your board and your investors will do for you is act as culture bearers and think about what’s going to make your company successful. When I look at the founders I’ve worked with, I ask: what are the things that help lay some of that foundation?

It’s okay to say, "I don’t know." As a leader, it’s okay to say "I don’t know," but it is not okay not to try, not to come with a perspective, and not to come with a point of view. When you start to create that openness, you get diversity of thought. I always try to make sure I say it myself: "I don’t know the answer to this, but let’s talk it through. Let’s work this out."

As a founder, when you do that, it’s even more powerful, because you are the one who created this product and this company, so opening the floor to diversity of thought gives people the freedom to do the same. Fear is a tactic, not a strategy. When you put fear in play, it’s because you are trying to emphasize how dire a situation might be.

Sometimes that’s okay, but you use it at a point in time, not as a strategy every day. People who respect you, want to please you, and want to work hard for you will work 10 times harder without that fear, because fear makes them do things that are not thoughtful, not strategic, and the opposite of what you really want.

When they admire you, they are going to work that much harder. It’s a foundation you have to build from the early days: who you want to be and what you want your company to be known for. When you have those things, you also recruit incredible talent. That is something I see with some of our founders. And what would I tell the 22-year-old Anna?

Karan: Yeah, 23-year-old Anna. I mean, that’s only five years ago, so you should have no problem remembering.

Advice: don’t try to make failure look pretty

Anna: I wish it were only five years ago. I think I’ve been pretty good at this, but being an executive and a founder is about taking risks. It’s about saying no to things, but it’s also about what you say yes to. And don’t try to make failure look pretty. Call it a failure and move on.

It is one of the most critical things we can all do, and it is easy to say it wasn’t that bad, it was okay, if we just do this with it or that with it, or maybe we just need to try harder. Sometimes it is just failure, and that’s okay. You learn so much from it, but only if you pivot and take another direction quickly. If I’d known that earlier, it would’ve been super helpful.

The other thing I’ll hit on is, what pain do you solve in the marketplace? It’s why you exist. Don’t ever, ever forget that. I say this to our founders, too. You have to sit in front of customers, no matter what leadership role you’re in. I’ve heard CROs say, "Oh, when you get to be a CRO, you don’t have to do customer meetings anymore." I was like, that’s insane. Everybody should be doing customer meetings. Everybody.

Heads of product, heads of engineering, heads of marketing, and obviously CEOs and CROs every day, right? That is how you make sure you are staying in touch with what you’re solving in the marketplace and whether the market is changing on you, because if you miss those cues, it will cost you. It’s going to cost you time. It’s going to cost you money. So stay close to that pain, and don’t make failure look pretty.

Karan: I love it. Talking about failure reminds me of a wonderful speech that J.K. Rowling gave at the Harvard commencement a few years ago about the power of failure, and I would encourage everybody to listen to it as well.

Anna: You are the king of commencement speeches. I know that is one of your things. It’s funny, I just read something the other day. Apple was talking about one of the things they look for in all their interviews, and I was like, okay, I’ll click on this. What do they look for in every interview? It’s for somebody to say, I don’t know.

Karan: That’s great. Awesome. Well, I have two other questions that’ll hopefully be short. I don’t think I can do a podcast in 2023 if I don’t talk about AI. So, is AI going to take all the sales jobs away, Anna?

Is AI going to take all the sales jobs away, Anna?

Anna: No. Hopefully, it just really helps. We’ve talked so much about how do I enable you to be faster, better, and smarter. That’s what customers want. They want you to be faster, better, and smarter, so how do we do that with AI? AI is going to be game-changing in improving time-to-value for customers and for go-to-market teams as well, right?

On both sides of that, it’s a win-win. Will there be changes? Absolutely, and you see it already, but I think there’s so much there that will be positive. AI is not as sophisticated as we’d like it to be yet. We all want it to be the be-all and end-all.

As you all know, you have to tell it the 10 things to get to the answer you want, and it’s only as smart as what it can find out there. That will obviously improve. It will get better. It will learn, and so will we. We are going to get to a place where go-to-market strategy is a lot more effective than it used to be.

Karan: That’s great. Well, I’m sure many people will breathe a sigh of relief after hearing that. Are there any books you’ve read recently that you would recommend to our listeners?

Book recommendation: “Simplicity”

Anna: Oh, wow. The one I just started is called “Simplicity.” It’s by Edward de Bono, and it’s about how you boil things back down again. How do you get back to simplicity? It resonated with me, and I’m just starting it. Somebody recommended it to me, so I’m going to recommend it onward. It’s about the world being so complicated: we try to do 25 things to have an impact, and sometimes it’s not 25 things, sometimes it’s three.

We over-engineer ourselves. We over-engineer the problem sometimes. I loved this concept because, as I step back and look at my career, which has been a lot longer than five years, there are three key lessons I come back to. I come back to the same core, the same principles.

The problem is we forget to focus on them and to weave that thread through everything we do, and that’s what creates more of the challenges. We over-complicate the environment instead of focusing on what’s really critical for us to win, to be successful, and to move our company to the next stage.

Karan: That’s great. I love it. Well, on that note, I want to thank you, Anna. This has been awesome. Obviously, I was sad when you left Outreach, and I figured we’d never get a chance to work together again, but I’m so glad you chose to come to Madrona as an operating partner. Our founders are lucky to have you as an advisor.

We are lucky as investors to have your insight and experience guide us in our investment decision-making, and I’m just so happy that we get to have lunch right after this in the office. So, thank you again for your time. Thank you for sharing all these pearls of wisdom with our audience today.

Anna: It has been such a pleasure. I love working with this team. Thank you.

Coral: Thank you for listening to this week’s episode of Founded & Funded. We’d love for you to rate and review us wherever you get your podcasts. Thanks again for listening, and tune in in a couple of weeks for our next episode of Founded & Funded.