Data Visionary Bob Muglia on the Modern Data Stack and Lessons from Snowflake

In this week's episode, which is leading up to our Intelligent Applications Summit on November 2nd, Soma speaks with Bob Muglia. Bob has thought deeply about the Modern Data Stack, and they speak about it here — what is needed in the data stack to enable Intelligent Applications (or data-driven apps, as Bob calls them) and the opportunities for new companies to innovate. Bob is also well known as the CEO who took Snowflake from a promising application for the public cloud to success by focusing on the problem of scaling a data warehouse in the cloud and building product and sales teams that could win the hearts and minds of loyal customers. Bob talks here about the early days after he joined Snowflake and what he did to get a product to market, and how partnering with the big public cloud providers worked, and had its challenging moments. It's a great view into how both Soma and Bob are thinking about the future of enterprise data and intelligent applications.

Note: Bob is on the board of IA40 company Fivetran, which was the focus of a recent podcast, and he is chairman of the board of FaunaDB and RelationalAI, both Madrona portfolio companies. Madrona holds shares in Snowflake.

This transcript was automatically generated and edited for clarity.

Soma: Bob, good afternoon. It's fantastic to have you here with us. I'm very excited to talk to you about the future of data and data-driven and intelligent applications.

Bob: Good to be here with you, Soma.

Soma: Absolutely Bob. And as you know, we at Madrona have been longtime believers in ML/AI and, more importantly, how do we apply ML/AI to different enterprise use cases and to different scenarios to be able to build what we refer to as next-generation intelligent applications.

And I was thinking about this, and as I was getting ready for the session, I couldn’t think of a better person to have this conversation with — and let me tell you why I say that. First of all, let me introduce you. Here with me is Bob Muglia, the former CEO of Snowflake, and prior to that, a long-term senior executive at Microsoft. He has done a variety of incredible things in his career — a lot of it data-driven, and this is where I come back to why I think you are the best guy for this conversation. Bob. Ever since I’ve known you, and I’ve known you for almost 30 years now, I think about you as a data guy first and foremost.

You go back to when you started your career at Microsoft, you were a product manager on SQL Server. And through the following decades, I've seen you do something or other with data in one way, shape, or form. After leaving Microsoft, you decided to take the reins at Snowflake when it was a pre-product, pre-revenue company. You spent over six years at Snowflake, growing it literally from zero to hundreds of millions of dollars of revenue. And I think you laid a lot of the foundation for Snowflake to be the leader that it is today in the cloud data platform world. After your stint at Snowflake, you've been working with half a dozen or more companies, startups, I should say, private companies, as an investor, adviser, and board director.

The one common thread among all these companies is they all are doing something or other with data. I just look at the body of work behind you, and I say, “What a fantastic opportunity for us, and by extension, our audience to be able to hear from you about the future of data and how you see the world of intelligent applications evolving.” With that as a backdrop, I thought let’s just dive into some questions to kickstart this conversation. Let’s first go back to your days at Snowflake. As I just finished mentioning, when you started at Snowflake the team was still working on a product.

The product wasn’t in the market, and you went through this sort of what I call the “growing pains” of birthing a product and bringing it to market, thinking about the business model and getting it to scale. But along the way, I’m sure there were a handful of what I call “defining moments.”

Where you had to make a decision, or you had to think about something, that literally laid the foundation for why Snowflake is what it is today. Can you think of a couple of those defining moments and just share with us what they were and how you navigated through them?

Bob: Sure. There were a couple of things that happened in the early days of Snowflake. You've got to go back to the period we're talking about, 2014 and 2015, which was the early days of the cloud. Really, AWS was the most viable cloud at the time. Azure was still very early, and GCP was in some ways even earlier. It was a very different time. And a lot of the focus of Snowflake was really about changing that. But a big part of it was also getting the product to market, because we were fortunate in the sense that we could scale to data of basically any size and as many users as you wanted to throw on it, and you only had one copy of data for the whole organization instead of having copies scattered hither, thither, and yon, which was the default at the time. So it was a revolutionary product, but it still had to come to market.

And it was funny, because when I started at Snowflake, the founders said to me that their plan was to make the product generally available to enterprises in about six months. That was in June of 2014. I knew that was somewhat unlikely, frankly, from all of our experience. You're smiling, Soma, so you know what this is like in terms of developers in the early days. And I watched them for a period as they went through a couple of these two-month milestones they were doing. And I had this observation that during those two months, they said they were going to do a bunch of things, and basically none of them got finished during that period of time.

Other things did get done. They were certainly working hard, but it wasn't like they were really working toward some well-defined goals. One of the things I focused on was trying to help bring some rigor and discipline to what it means to be an enterprise-class product. Over the next year or so, a little bit less than that, we went through a process whereby we really defined what general availability meant, and we focused on getting those tasks done.

I literally turned the weekly team meetings into a project review. The only thing the salespeople cared about was the status of the product, and really everybody cared about it. We went through that focused effort and got the product shipped in the middle of June. And that was really the beginning of the Snowflake experience.

The other thing: there are always these sorts of things that happen to companies in their early days, and they survive them or they don't. One of the more challenging things we went through early on was that inside Snowflake, the transactional heart of the product is a technology called FoundationDB.

And FoundationDB at the time, in 2014, was a company. It was actually a sister company of one of our VCs, Sutter Hill, so we knew the company well. I was able to negotiate an agreement that if anything happened to that technology, if it went off the market, we had access to it through a code escrow. Of course, we hoped that would never happen, but it turns out it did.

Seven or eight months later, Apple bought FoundationDB and immediately took the product off the market and made it unavailable, which was our worst nightmare. Fortunately, we were already running it, we're a bunch of database people, and the source code escrow actually worked. We got the product through that, and we had to do what it took to learn how to fix bugs in FoundationDB ourselves.

We pretty much had to do that all from scratch, and that was a very big deal. Had we not had access to FoundationDB, there really was no other good choice. Not at that time. Products like Fauna and Cockroach didn't exist. And I don't know what we would've done, to be honest with you.

I honestly don't know what we would've done. But we survived that, and now, fortunately, FoundationDB is open sourced and it's very healthy. Snowflake is actually a major contributor to it. So it's actually a really good story, but it was a tough one.

So things like that happen along the way. I would say the other thing is just customers. I focused all the time on being successful with customers. And we basically didn't lose customers, because we didn't take on things we couldn't do. At times we would turn people down because we couldn't do the work they wanted us to do.

We really focused on the success of working with customers.

Soma: That's super helpful, Bob. I also remember that there was a lot of talk during the initial days of Snowflake about, hey, we should think about separating out compute and storage, and that could enable us to get to the level of scale and economics, right?

That would be good for our customers and hence for us. Any color on how that came about?

Bob: I think architecturally, separating compute and storage was always part of the design. The architecture at Snowflake has something called Global Services, which manages the metadata and does the query planning.

And then there's an execution processor, the virtual warehouse, that runs the actual SQL jobs. Now I believe it's running Python jobs too, and other languages, so it's become multi-language really. The evolution of that whole thing changed dramatically over time, how we stored the metadata and everything, and that separation of the metadata was a fundamental component of Snowflake.

But the way we did it certainly changed over time, and I think we were able to stay ahead from a scale perspective. I always said it was interesting because we were just ahead of our customers. In the early days, they were chasing our tail on scale in a variety of ways, and we were always working hard to stay ahead of customers so customers had a great experience.
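
For a concrete picture of the separation Bob describes, here is a minimal, hypothetical sketch in Python (a toy illustration, not Snowflake's actual architecture or code): one shared storage layer, a metadata service that plans which data to scan, and independently created "virtual warehouse" compute workers that all read the same single copy of the data.

```python
# Toy illustration of compute/storage separation (hypothetical, not Snowflake code).
# One shared, durable storage layer; a metadata/"global services" layer that knows
# where table data lives; and any number of independent compute "warehouses"
# that can be created or dropped without moving the data.

shared_storage = {}  # stands in for cloud object storage: key -> list of rows


class GlobalServices:
    """Owns metadata: which storage keys make up each table."""
    def __init__(self):
        self.table_partitions = {}  # table name -> list of storage keys

    def register(self, table, key, rows):
        shared_storage[key] = rows
        self.table_partitions.setdefault(table, []).append(key)

    def plan(self, table):
        # "Query planning" here is just returning the partitions to scan.
        return self.table_partitions.get(table, [])


class VirtualWarehouse:
    """Independent compute: sized separately from storage, reads the shared data."""
    def __init__(self, name, services):
        self.name = name
        self.services = services

    def count_rows(self, table):
        keys = self.services.plan(table)
        return sum(len(shared_storage[k]) for k in keys)


gs = GlobalServices()
gs.register("orders", "orders/part-0", [{"id": 1}, {"id": 2}])
gs.register("orders", "orders/part-1", [{"id": 3}])

# Two independent warehouses query the same single copy of the data.
etl_wh = VirtualWarehouse("etl_wh", gs)
bi_wh = VirtualWarehouse("bi_wh", gs)
print(etl_wh.count_rows("orders"), bi_wh.count_rows("orders"))  # 3 3
```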

Soma: Bob, I do want to take this opportunity to say thank you for allowing both Madrona and me to invest in Snowflake and be a part of the journey along with you and see it come to scale.

Bob: …you guys were super helpful too. At the time we were opening the Bellevue office, which was, I think, a very pivotal office for Snowflake. And of course Madrona has such strength in the Seattle area.

Soma: I'm glad it all worked out well. But when I got involved in Snowflake, one of the things I heard a fair bit was that you've got all these big cloud platform providers, whether it's AWS or Azure or GCP, all wanting to have their own solution in the space, and how they're going to crush Snowflake, and how a fledgling startup just can't compete with any of these at-scale, massive cloud providers. But somehow Snowflake navigated through that and reached a level of scale and success and is literally a leader in the cloud data platform world today.

There are two questions that I want to ask you. In that context, how did you feel when everybody was probably telling you, or you kept hearing, "Hey, all these big guys are there, they've got their own data warehousing solution in the cloud"? How did you feel confident that Snowflake was going to be able to navigate through it?

But the second part, which I want to focus on, is the partnership that Snowflake had with all the cloud providers. Because on the one hand, you could argue that if there is a customer that goes with Snowflake on Azure, it is still a win for Azure. On the other hand, if you think about Snowflake running on AWS, Snowflake is competing with Redshift on AWS, right?

So you've got this, what I call, co-opetition: you are partnering with the platform, but you're competing with the service. How did that whole landscape work out for you?

Bob: Yeah. So, in terms of how we competed with the big cloud vendors, we had a better product. It was really that simple. As I've said many times, if Snowflake had been 10% or 20% better than Redshift, Snowflake wouldn't have gained any material share, but it was many times better.

It worked in situations where Redshift didn't work. And Redshift is a very good product. It paved the road for the cloud data warehouse, and for that I'm eternally grateful.

Amazon brought Redshift out very early in the marketplace. The thing is, it was an on-premises product brought to the cloud, so it didn't really take advantage of the cloud. What it was, was cheaper. It was definitely cheaper than anything you could buy on premises.

But it didn't ultimately scale.

People in the cloud world particularly wanted it to scale. In fact, one of my earliest salespeople, Vince Trada, I remember vividly, I was on the subway with him in New York in February or so of 2015, and we were seeing customers who were talking about adopting Redshift.

And Vince said to me, "Bob, don't worry about this, because every one of those customers that adopts Redshift is going to come to us in the next 18 months when they run out of gas." And he was right. That's essentially what happened. A lot of Snowflake's early business was Redshift conversions, as well as working with semi-structured data, which we did a good job on and nobody else did.

Certainly we were better than Hadoop, which is what people were using at the time. And so that was the major part of the success. We were just better and, frankly, particularly on AWS, we had a much better product. We were lucky with the timeframe, and that's the other thing: it was the right time. That was the time for establishing a position in the data space in the cloud, because it was all pretty new.

In terms of our relationships with the vendors, they were challenging, to say the least. We certainly had many challenging times with Amazon, who we were competing with in Redshift. What I would say, first of all, is that Amazon did an incredibly good job of supporting Snowflake at all times. They were great at support, and

AWS is a great product to build on top of. But they fought brutally against us in the business marketplace in the early days, and it was pretty challenging at times, but we were winning. The thing is, we won those challenges partially because, again, we had a better product, and frankly we had a much better trained sales team.

Our sales team was able to outsell Amazon's. So those were the early days with Amazon. Then, when we moved to Azure and established Azure as our second cloud, in part because of my relationships with Microsoft people, we were able to build good partner relationships there and actually had some amazing, very positive go-to-market motions with Microsoft in the early days, where they did a bunch of joint selling with us, and we really discovered a whole different business.

What we discovered was that on Azure there was a whole set of customers we'd just never seen before. It's almost a whole different market, really different customers, and we always said this: choose your cloud first and then choose your data warehouse.

And Snowflake ran on all of them, which makes it a little easier. But at the time we were just running on AWS and then Azure, so it was positive. It was a win-win situation in some senses for Microsoft and Snowflake to go together. I think that about the time I left Snowflake in 2019, Snowflake was probably becoming more competitive in a number of ways.

And in some senses, the strength of the partnerships at Snowflake flipped, really. They had a rough time with the Azure folks for a while, and they actually built some very strong relationships with AWS.

There are a lot of good things happening there. I think Google is still tough, if I'm not mistaken. Google is, generally speaking, not the most partner-centric company on the planet. And I know that's been a little bit more challenging for Snowflake, in part because they really love BigQuery, and they have the same feelings about BigQuery that the Amazon folks used to have about Redshift.

Only time will tell. These relationships are challenging because they're definitely both complementary and competitive.

Soma: Yeah, the thing that was interesting for me to watch is that there would be a time when you would think, hey, this particular cloud provider is the best partner. And then things would change, and then things would change back. The partnerships shifted as Snowflake went from strength to strength and depending on where the other cloud providers were. It was just fascinating to see what a very interesting and ever-changing landscape it was.

Bob: It just proves my first rule of partnership, Soma: partnerships are tactical. When it's win-win, they work. When it's not, they start to falter a bit.

Soma: But I think Snowflake could be a good, what should I say, inspiration or role model or case study for a lot of the other new startups that are coming up and asking, "Hey, am I competing with the cloud providers or am I partnering? How do I navigate this tough thing?"

And depending on what space they are in and what the cloud providers' aspirations are, many companies could be in a similar situation. That's why I wanted to make sure that we talked about it a little bit…

Bob: That's very true, in fact. I have this conversation with a number of the companies I talk to about their potential conflicts with cloud vendors. A lot of the stuff people are working on these days is complementary. It's new things that I don't think have the same kind of conflicts that we had with Snowflake.

I do think in general, though, that Snowflake is a good role model for building a partner-centric company. In addition to really working with the strategic cloud vendors and spending a lot of energy there, we spent a massive amount of time working to build an ecosystem and working with partners all around, whether they be BI partners, ML partners, ETL partners, whatever it might be, of all sizes, and I feel very good about what Snowflake has done in that space.

I definitely felt like I had something to do with that. And our shared history together at Microsoft, those are the lessons that I learned from.

Soma: Great, Bob. I thought it'd be good to take a step back now. As I mentioned, you've been working in data in one way, shape, or form for the last 30 years. How have you seen the…

Over 30 years,

Bob: Been over 30 years. It was Windows NT Summit. It was Windows NT.

Soma: Over 30 years. But during this period of time, Bob, how have you seen the world of data evolve? New platforms, new computing paradigms, new devices, new everything has happened. But the importance of data seems to have only gone from strength to strength and has gone up exponentially in the last 10 to 15 years.

Now, I wanted to get your quick thoughts on “Hey, where do you see data today and where do you see data moving forward?”

Bob: It was just over 30 years ago that Bill Gates gave his "Information at Your Fingertips" talk. I was a program manager on SQL Server when I started at Microsoft, and I had been working on database things, really building applications inside a company, before I joined Microsoft.

So I'd been focusing on data pretty much my whole career. And while I've been focused on SQL and the business side, I still feel in some senses like it all began with information at your fingertips and all the focus we had on information of all kinds at Microsoft, building out businesses and enabling people to work with data.

In the early days, I was involved in SQL Server from very early on. And then I watched as other folks at Microsoft built SQL Server into the business that it really became. I watched these kinds of data systems, together with the applications that sit on top of them, transform businesses of all sizes.

And Microsoft's contribution was the "of all sizes" part, really. If you were a big company, you could buy a big, expensive set of systems from IBM, Digital, or Sun, but Microsoft made servers that were quite inexpensive and brought computing to literally millions of small businesses around the world, maybe tens of millions, that never had it before.

Data was a central part of that. We've since gone through the internet era, and the evolution of that has been new types of data becoming important, in particular semi-structured data that's generated in large quantities by machines.

In some ways it's some of the most important data we analyze today. We're now living in a cloud-centric world, which allows us to do things that we never could do before. I am a big believer that data is generated everywhere, but you need to centralize it to a certain extent to do analysis around it, to bring different types of data together so that you can look at the relationships between them and perform the kinds of reporting and dashboarding that people want to do, as well as deeper analysis with machine learning. So things have changed dramatically from a fairly simple environment where people literally worked with pencil and paper.

Excel was a massive step forward, and Lotus 1-2-3 before it was a massive step forward, in dealing with information, and now we have this world of the cloud where we have this vast amount of data available to us. It's pretty amazing really.

Soma: You just summed it up, Bob. It's pretty amazing, actually, how far we've come. But for all the progress we've made, I feel like there is still a ton more waiting to happen. And the rate of innovation is only getting faster as opposed to slower as we move forward here.

Today you can't have a conversation about data without talking about the modern data stack. It's a buzzword or a new concept or however you want to think about it, but everybody talks about the modern data stack. In your mind, how do you define the modern data stack?

Bob: People have been trying to work with data in a variety of ways, and fundamentally the cloud, and the ability for companies to work together to provide a complete solution for organizations on the cloud, has never been as strong as it is today. And that's really what the modern data stack is about.

It's really about enabling the industry to work together to provide solutions to companies, and those solutions take on a certain shape in the modern data stack. There are three defining characteristics that I think exemplify it. The modern data stack is really about building data analytics.

First and foremost, it's delivered as a SaaS cloud service, which means that rather than building these components, you're purchasing them from third parties that provide the service for you, and a lot of things are taken care of for the customer. So the first thing is that it's a SaaS service.

The second thing is that it runs in the cloud and takes advantage of the scalability that the cloud provides, so you can work with all of your users and any kind of data. I mentioned earlier that data is both structured data that people work with in SQL and semi-structured data that comes out of machine-generated systems.

But more and more it's also other types of data that are quite rich in terms of content. People sometimes refer to this as unstructured data. I really tend to think of it as complex data. Data types such as video, audio, and photos turn out to be rich sources of complex data.

All of these things that exist in business in the form of documents of all kinds and recordings of all kinds are essentially sources of data for the modern data stack. And with the cloud, it needs to scale to work with as much data and as many users as you want. So the final point is that when you're doing the analysis, the data is modeled for a SQL database.

That, I think, is a distinguishing element of the modern data stack. When the data comes into the system, there are multiple techniques for how you actually transform it, so let's put that aside. But the target environment you're modeling it for is a SQL database, and you use relational commands.

Relational algebra, basically, to operate against that data in relational form. So, three things: data analytics as a service, which leverages the cloud for scale and models data for SQL.
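
As a small, hypothetical illustration of the "model the data for SQL" point (the event shape and field names are invented), semi-structured, machine-generated JSON can be flattened into a relational table and then queried with ordinary relational operations, using only the Python standard library:

```python
import json
import sqlite3

# Hypothetical machine-generated events, as they might arrive in semi-structured form.
raw_events = [
    '{"user": {"id": 1, "plan": "pro"},  "event": "login", "ms": 120}',
    '{"user": {"id": 2, "plan": "free"}, "event": "query", "ms": 900}',
    '{"user": {"id": 1, "plan": "pro"},  "event": "query", "ms": 450}',
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, plan TEXT, event TEXT, ms INTEGER)")

# Model the data for SQL: flatten the nested structure into relational columns.
for line in raw_events:
    e = json.loads(line)
    conn.execute(
        "INSERT INTO events VALUES (?, ?, ?, ?)",
        (e["user"]["id"], e["user"]["plan"], e["event"], e["ms"]),
    )

# Once it is relational, ordinary relational operations (filter, group, aggregate) apply.
for row in conn.execute(
    "SELECT plan, COUNT(*), AVG(ms) FROM events WHERE event = 'query' GROUP BY plan"
):
    print(row)
```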

Soma: That's great. And today, Bob, if you look at it, and I don't know whether I would've predicted this, maybe in hindsight it's easy to say I thought about it this way, but today you could say that there are a few key or big technology vendors that are providing vast parts of the modern data stack.

You've got the three cloud platform guys in Microsoft, Amazon, and Google. And then you've got Snowflake and Databricks. And the fact that Snowflake and Databricks have literally come out of nowhere in the last, let's say, eight years or so is fantastic, because it shows that you can innovate, you can get to scale, you can get to a level of success even outside the biggest platform guys.

And that, I think, is just goodness for the whole innovation ecosystem. Do you feel like five is too many? Do you feel like five is going to become eight? Any thoughts on that?

Bob: It's about right. The database market has always been somewhat fragmented. It's never been winner take all. Oracle has classically been the largest winner in the database market, but even they're only at something like 40%. It is a market that has a number of vendors, and I think that'll change.

My guess is you'll actually see some more vendors appear. We see some smaller players coming in, trying to take on these five vendors in a variety of ways. I think it's hard. We may see six or seven having some small percentage share in some more niche markets, but I think these are the big five.

There is a big dogfight happening between Snowflake and Databricks, and we'll watch that get fought out for the next year or two. Meanwhile, the cloud vendors will just do what the cloud vendors do, and their products will all get better. I do believe that the cloud vendor products, while clearly behind things like Snowflake, are getting better.

Google is probably the furthest along. And I know a fair amount of what the Microsoft team is doing. There's actually a lot of great work happening there. I see some good stuff coming in the future.

Soma: That's excellent. Beyond these five players, there are a whole host of other companies that are saying, "Hey, I'm building this for the modern data stack." As you see what is happening in the world today, do you see any big gaps in terms of what needs to happen in the modern data stack to make it more complete and more robust for the next set of applications?

Bob: Yeah, I think there are a number of really major gaps. I'm fairly sure that the platforms people are using for machine learning are fairly nascent and will evolve. I mean, I'm fairly sure that's true. Spark has a lot of adoption, but I don't think it's the end answer to every problem,

and I think we'll see evolution in that space. There are data types and problem characteristics that are very poorly solved today, like graph. Graph problems are really situations where you have a lot of relationships between things, and if you look at the data model, it's a very large number of relationships that need to be managed, more than a SQL database can handle.

And in general, the graph problem is poorly solved by today's products. Meanwhile, there are other things that are critical to business logic, like reasoning, which are still done pretty separately from the modern data stack. You have bits and pieces of code all over the place, and I think that'll converge into more model-based things over time.

I think a lot of the future is really around the evolution of model-based development, and I think we're in the early stages of that.

Soma: You talked about SQL systems and you talked about graph databases. Bob, my perspective, and I'll share it with you and you tell me if it makes sense or not, is that historically, and even today, the world is bifurcated: you can go deal with relational database systems,

or you can go deal with knowledge graph systems. Those two worlds are what I call two silos. They really haven't come together.

Bob: You mean relational systems versus procedural systems today?

Soma: Or procedural systems. Yeah.

Bob: Yeah, like you're writing code in Python on the one side and then SQL on the other side. Is that what you mean?

Soma: Do you think they'll come together? Should they come together? Do you think there is an opportunity?

Bob: I do. And I think that's, as you said, what a knowledge graph really can do. The idea behind a knowledge graph is that you can encode the attributes of the business into the database, the logic associated with the business.

The idea then is that it becomes a complete model of the organization that is actually executable, where the model is the code itself. And in a way this has been a dream of computer science since I was a kid. When I was not far out of school, I did work in early model-based things where you modeled stuff with diagrams and they spit out COBOL code at the bottom, which of course was unmaintainable, didn't really work, and had all kinds of issues.

And because it never worked, those sorts of ideas about modeling became more of a whiteboard effort. I will argue that people always model the business. Whenever you're working on anything, you're modeling. But in today's world, we do it implicitly. We might write a model of something that's relatively well thought through on a whiteboard, but then that gets implemented as bits and pieces of code all over the place, implicitly within the systems.

And I think we'll move to a world that's much more explicit in what we're defining. That will happen when the knowledge graph comes about. And when we think about implementing a knowledge graph, I'm pretty clear that they will be relational and they will leverage relational algebra and relational mathematics.

Partially because the industry has moved forward significantly in the last 10 years in terms of understanding algorithms. There are new algorithms that allow you to work with large numbers of relationships efficiently and actually do things that you could never do previously with a SQL database, because we just didn't have the sophistication of the algorithms that are now appearing. So it's pretty exciting, actually. But it's also early.
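
To make the "relational knowledge graph" idea concrete, here is a tiny, hypothetical sketch (invented data, not RelationalAI's or any other vendor's implementation): graph edges stored as an ordinary relation, with a multi-hop relationship computed relationally through a recursive join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A graph stored as a plain relation: who reports to whom (made-up data).
conn.execute("CREATE TABLE reports_to (employee TEXT, manager TEXT)")
conn.executemany(
    "INSERT INTO reports_to VALUES (?, ?)",
    [("ana", "bo"), ("bo", "cam"), ("cam", "dee"), ("eli", "cam")],
)

# A multi-hop graph question ("everyone in dee's org, at any depth") answered with
# relational machinery alone: a recursive join over the edge relation.
query = """
WITH RECURSIVE org(person) AS (
    SELECT employee FROM reports_to WHERE manager = 'dee'
    UNION
    SELECT r.employee FROM reports_to r JOIN org ON r.manager = org.person
)
SELECT person FROM org ORDER BY person
"""
print([row[0] for row in conn.execute(query)])  # ['ana', 'bo', 'cam', 'eli']
```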

Soma: That's true. But as data becomes, what I call, more democratized, one thing is, when you talk to enterprise CIOs, they'll tell you, "Hey, we are really putting a lot of energy and effort into consolidating and standardizing our data infrastructure."

But along with all these huge volumes of data, and the question of what you can do meaningfully with the data, one issue that keeps coming up in pretty much every customer conversation today is data governance. This is also an area where, particularly in the last two years I would say, a ton of new startups have been emerging.

All addressing one part of data governance or another. How do you broadly view the space of data governance and the kinds of companies that are coming up? Are there any specific companies that particularly catch your attention in the space?

Bob: Not really. There are some good companies, and they're solving pieces of the problem. But when I think about the issue with the modern data stack, governance is a very real problem. It was always going to emerge as a major issue: when we took data that was scattered everywhere and put it together, it creates a certain risk profile, which makes access control to that data very important.

And in particular, that's the aspect. There are many aspects of governance, data modeling, data observability, many things, but the one that I think is at the top of people's list is access control. And while there are products in the market that address some elements of that, I don't think we've really reached the pinnacle of where we need to be.

I don't feel like we're well served, that our customers are well served here. There are different ways to solve the problem, and perhaps there are some shortcuts that people can take, but I think in the long run, the right way to solve it is by establishing a semantic model that understands what the business is, which is essentially a knowledge graph.

And then from that you can derive the rules for the policies that you want to establish on your data, very much a policy-based approach that's based on the business data itself. And I think we're still some way from having a standardized platform to enable that. And that's what we really need.

You know, one of the challenges we have, and I think one of the reasons why we're not seeing as much success in governance in the modern data stack as customers might want, is that all of these tools that are coming out don't use the modern data stack as their database. And the reason they don't is because they can't. It doesn't solve the problem for them.

So they all use some sort of operational database of their own. They take different approaches, but none of them interoperate. I think what we need is a common platform for a semantic model that will become the basis for modern data stack governance. I believe that platform will be a relational knowledge graph.

It's still early, but that's where I think it'll go. In the meantime, I hope we can get customers some answers out there that help to solve their problems.
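
As a toy, hypothetical illustration of the policy-based approach Bob describes (the concepts, tags, and roles below are all invented for the example), access decisions can be derived from a small semantic model rather than hard-coded per table or per column:

```python
# Hypothetical sketch: policies derived from a semantic model, not hard-coded per column.

# A miniature semantic model: business concepts mapped to physical columns, with tags.
semantic_model = {
    "customer.email":  {"column": "customers.email",  "tags": {"pii"}},
    "customer.region": {"column": "customers.region", "tags": set()},
    "order.total":     {"column": "orders.total",     "tags": {"financial"}},
}

# Policies are written once, against business meaning (the tags).
policies = {
    "pii":       {"allowed_roles": {"support", "compliance"}},
    "financial": {"allowed_roles": {"finance", "compliance"}},
}


def can_read(role, concept):
    """Derive the access decision from the semantic model instead of per-column rules."""
    tags = semantic_model[concept]["tags"]
    return all(role in policies[t]["allowed_roles"] for t in tags)


print(can_read("analyst", "customer.region"))  # True  (untagged data is open here)
print(can_read("analyst", "customer.email"))   # False (pii policy applies)
print(can_read("compliance", "order.total"))   # True
```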

Soma: True. Let's move up the stack a little bit. You've seen OpenAI do some great work in the last many years on large-scale machine learning models. You've got all kinds of face recognition and other kinds of machine learning models that are coming up at scale.

What do you think the situation is today in terms of these machine learning models? Do you feel the right amount of innovation is happening there? How do you think these models are going to evolve over time? Any perspective on where we are with models and where we are going?

Bob: It's very exciting, I have to say. We've seen incredible progress in the last five years even. I would say it's accelerating progress. I recently had a conversation with Xuedong Huang, who runs the machine learning team at Microsoft and is working with OpenAI and working on foundation models, and they're doing a lot of work on combinatorial foundation models where they bring multiple different types of data together into one large model.

These foundation models, let's talk about them for just a second. Sometimes they're called large language models, which is fine, but that only speaks to one domain, the language-oriented ones, because some foundation models also apply to photos and other domains besides written language.

What they really are is world-scale trained machine learning models, models trained on a corpus that approaches global scale. And what they become, essentially, is incredible concentrators of human knowledge into a model. Now, the models are statistically driven.

They're not perfect. There are still advances that need to be made. But this idea of using machine learning to take the expertise of a given domain in the world and distill it into a model is fairly incredible, in my opinion.

I can't think of a domain that it won't affect. Honestly, I think it affects everything. I think it affects every single element of everywhere we go. So I think that's a very exciting element of what's happening. We see some incredible stuff. This DALL-E stuff is interesting. Now people are doing videos against it.

The model that came out of OpenAI, one of the early code-writing models, Codex, has done some amazing things. GitHub Copilot has been an incredible success for Microsoft and is really doing dramatic things to improve developer productivity.

And I'm seeing people use that for different purposes. You can take it and do things around writing and running SQL, as an example, and very powerful ideas can come from that. On the other end of the spectrum, I think there's an opportunity where you're trying to use machine learning and AI to improve the business workflow in a given organization, where the domain is actually the terminology of that organization itself.

It's much smaller, and there's no global model to look at. There's some local set of content you can look at. And in that case, the interesting thing is how you inform the model more and more about the business. I think what we're going to see is user-assisted, interactive training models appearing, which really look like applications, applications working with a given domain and then leveraging machine learning to really improve business process.

The company I've been involved in that's been working on this, that I'm pretty excited about, is Docugami. Our friend Jean Paoli, formerly of Microsoft, is CEO. They're really focusing on taking business documents and other high-value documents and turning them into data that can be processed by data systems.

In order to do that, you really need to understand the semantics of that document, and that requires user assistance. That's why this interactive development is important. So there's a real UI kind of experience in that. Those are two different ends of the spectrum in some senses, but both examples are pretty interesting to me.

Soma: Bob, you've heard me talk about this quite a bit in the last many years. We are absolutely convinced that every application that gets built today and moving forward is what we refer to as an intelligent application. You like to call them data-driven applications, but it's basically, hey, taking a corpus of data that is available to the application,

being able to build a continuous learning system using some machine learning models, and then continuing to get better and better. You deliver a better service, you get more data, and the process iteratively makes the application and the service better. That's the world we see happening today and as we move forward. (A) Do you agree with that viewpoint? And (B), what are some of the core things that you think are happening that are going to drive the world toward getting there?

Bob: If I didn't agree with that viewpoint, I wouldn't be continuing to do this work. Let's face it, that's the reason I continue to do the work. Look, my whole purpose, basically, in my business career has been to build infrastructure components that make people's lives easier in business in one sense or another.

And data has been a huge part of that. But I worked on Windows Server for a lot of years, and I built System Center and helped with Visual Studio and all those sorts of things that were not databases. They were all about making it easier for people to build

systems to help them be more effective in their lives. And in particular in business, I've mostly focused on the business side, not the consumer side. It's interesting because, when we think about this world today, we're seeing a world where machine learning is transforming pretty much every application category. I talked about foundation models essentially distilling the world's knowledge into a model in some sense or another.

It's not perfect. And even though I would say it is a great learned model, it probably isn't what one would call fully intelligent. It doesn't reach the point of saying this is intelligent. However, it is an incredible source of information and can be used as a base for many things.

But there are things that are missing from machine learning, things that a fuller intelligence would have, and in particular that's things like reasoning: the ability to reason over something and say, this is true because I know that this other thing is true. And these systems, these models, have a very hard time with that today.

They have a very hard time with that. Sometimes when they go off into wacko things, it's because they haven't got the ability to add reasoning to it. Now, I'm sure that'll change. I'm very confident that we'll see reasoning get added to these models in a variety of ways.

I think of this problem as: what's the infrastructure that would actually solve this problem in a more generalized sense? I mean, give somebody a Python compiler or a C compiler and a hundred nodes and they can do an awful lot. But to me, it's what sort of infrastructure can you build to make these systems more available to a larger number of companies.

And that's why I think it all ultimately consolidates into a relational database that will take the form of a knowledge graph. And I do think ultimately these things will come together, where you can take all of the components of intelligence, and let me somewhat define that: a program that can sense.

It can reason, it can plan, act, and adapt. And we see these components coming together in different parts of intelligent systems today. But the idea of them coming together in a cohesive platform, we're still some distance away from that. And to some extent, that's where I'm thinking: lots of smarter people than me are going to build these models that do these amazing things. But to me, I'd like to figure out how I can help facilitate the creation of platforms that enable all these things to be created cohesively by mere mortals, not just the smartest minds on the planet.

Soma: That's awesome, Bob. That's great to hear. And I'm so glad you're continuing to be fully focused on that mission, because I think the world needs that kind of infrastructure and the kinds of innovation that the infrastructure can provide. Like you say, it makes building an intelligent application something that every developer can do, and not just the rocket scientists of the world.

I'm a big believer in democratizing access for all developers, so kudos to everybody who's working on the infrastructure that's going to enable that to happen. I know that we are coming up on time, but before we wrap up, there is one thing that keeps coming up quite a bit.

When I talk to all these, what I call, modern data companies, startups, right, there are two things that they ask about. One is, "Hey, how should I think about open source?" and the other is, "How should I think about product-led growth?" These are two things that every startup founder or CEO is thinking about: hey, when does it make sense for me to think about open source?

When do I not think about it? Particularly given your experience with a variety of proprietary and open-source work, and product-led growth versus enterprise sales, are there any parting words of wisdom that you want to share with the next generation of entrepreneurs?

Bob: What I would say is, the biggest advantage of open source is the potential path to rapid adoption, particularly for a developer-focused technology, and the ability to get more end users and developers using it more quickly. It's appropriate if the component runs in the infrastructure of the customer.

And I would say today it may even be essential if you expect a component to run as a core, integral part of an infrastructure. Kafka is a good example of this, right? Kafka is a perfect example of something like this, where that thing is going to be sitting all over the place inside customers' infrastructure, and they just want it to be open source for their ability to choose vendors and all sorts of stuff.

Those are good reasons to do open source. The challenge with open source is that you essentially have to abandon it to build a business. I'm not going to say it's a ruse, but you've got to do an extended focus at the very least, where you've got open source and then you have something that's commercial, because that's the only way to monetize.

In the old days, people monetized open source with services, and that was Red Hat's business model. That's gone away with the cloud. The cloud doesn't help that. You can't take what you just put in open source classically and just run it in the cloud, because the cloud vendors can do the same thing, and they have infinite distribution.

And their COGS are lower than yours, so you're screwed from day one. But if you differentiate, and start with an open-source integral component and then build on top of it, in some ways it can be very successful. There are certainly examples of that. But again, a lot of these companies are going off and innovating in non-open-source ways right now.

Soma: Bob, fantastic to chat with you as always, fun conversation and really appreciate you taking the time to be with us here and do this podcast. Thank you.

Bob: Great. It’s good to talk to you again, Soma. Thanks.

Coral: Thanks for joining us for this week's episode of Founded & Funded, and don't forget to check out our Intelligent Application Summit event page if you're interested in these types of discussions. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded & Funded with dbt Labs Founder and CEO Tristan Handy.

Fivetran CEO George Fraser on Data Integration and Connectors

 

This week, Investor Sabrina Wu talks with Fivetran Co-founder and CEO George Fraser. Fivetran is fundamentally a data replication company. That's how George explains it. But the company actually started out trying to solve a completely different problem when it was founded in 2012 and pivoted to strictly data integration in 2015 after multiple customers started asking for it. But in attacking that problem differently than it had ever been done before — by only focusing on replicating people's data into the desired destination without getting sucked into any of the workflows that the user intended to do on the other side — they've come out as a leader in the space. Fivetran landed a $565 million Series D last year and made two acquisitions, and it was named one of our top 40 Intelligent Applications. In this IA40 spotlight episode, Sabrina and George not only dive into the story behind Fivetran and how it has taken time and patience to get to where they are now, but Sabrina also gets some hot takes on the modern data stack, reverse ETL, query federation, and why Fivetran doesn't open source. George also has some great advice about coming up with a company name, but you'll have to listen to get it.

This transcript was automatically generated and edited for clarity.

Sabrina: Hi everybody, I'm Sabrina Wu, and I'm an investor at Madrona. I'm excited to be here today with Fivetran CEO and Co-founder George Fraser. It should be no surprise that Fivetran was selected as a top 40 Intelligent Application in 2021. Intelligent applications leverage artificial intelligence and machine learning to continuously become better at solving business problems. However, the data ingestion, preparation, and management challenges that enable intelligent applications are extremely difficult to solve, which is why intelligent applications require enabling layers, such as Fivetran. Fivetran has built fully automated data connectors that enable businesses to extract, load, and transform data from different cloud storage and database sources into one central data storage system. I'm excited to dive into more of this with George today.

George, thanks so much for being here with us.

George: Very nice to be with you. That was a great overview. And applications, of course, are our original bread and butter: things like Salesforce, NetSuite, stuff like that.

Sabrina: That's a great transition. Let's go back to the beginning: how did Fivetran go from being a data analysis tool to becoming the leading provider of automated data integration? Was there some light-bulb moment for you guys that made you want to pivot the business?

George: Well, it took about two years. Basically, we figured out one thing in those two years, which is that we would find product-market fit by cutting out everything except the data integration. So yes, we built a vertically integrated tool that had connectors as part of it, that stored data behind the scenes in Redshift, and that provided you a user interface to interact with that data. And it was not great. It was a vertically integrated BI tool built by three guys, and we worked hard at it, but it had a long way to go. What happened was we started to get this request from people we were talking to, which was, "Hey, I've got my own Redshift cluster that I've just set up," and this was 2014, right? So, Redshift was this phenomenon in data warehousing at the time. It was the first data warehouse that was fast and cheap. It was available in the AWS console. And at the time, it was the fastest-growing product in AWS's history. So, we were talking to all these people, and more and more started to say, "I've got my own Redshift cluster that I've just set up. It's empty, and the problem I have is just getting data into it. And I like the sound of these connectors that you've built as part of this system. Can I just get those into my own Redshift cluster? And then I have my own plan of what I wanna do with the data." And eventually, we said yes to one of them, and then another, and then another, and that became the company.

Sabrina: And was it obvious at the time that these automated connectors would be so valuable to customers? And obviously, the database world has exploded since then. I’m curious if the problem has been much bigger than you had initially anticipated?

George: It has indeed. It’s turned out to be more valuable and more widely popular than I think anyone anticipated. It’s funny because this is not a new problem. Data integration — taking data from a bunch of systems of record, whether they be databases or applications, and putting it into a central data warehouse — has been around since the 1980s, so it’s not a new problem. There have been tools for doing this for years and years. But the tools were not very good at the time — in my humble opinion. They didn’t really solve the problem. They were like a pile of building blocks upon which you would build your own integrations, and you were responsible for the correctness and reliability of those connections. If it broke, that was your fault. You had to figure out why. So, most people were just writing code to do data integration. I had actually done a lot of that at my previous job in a very different context in biotech. But this is a problem that had been around for a long time. It was everywhere. You talk to 10 software engineers, five of them have written some code to do data integration inside their company at some point. There had been companies founded trying to solve this problem. None of them had ever really taken off. They were a fraction of the size of the data warehousing companies, even though logically, both always have to exist in every account. So why was this such small potatoes? Nobody had ever really found a good solution to this problem.

And we completely ignored the way it had been done before. There was a lot of conventional wisdom about the right way to do data integration, and we just totally ignored it. And we said, "Okay, what seems like the right way to us?" We should just replicate everything. You know, the data starts in the place that it lives. Replicate everything into a normalized schema. The destination is a database, so people can do whatever they want with it from there. As long as you get it all there in a correct replica, you've solved a big chunk of the problem. And the nice thing about that is it's like a bounded problem. You don't get sucked into all the workflows that the user intends to do on the other side. What is your definition of revenue? You don't get pulled into that. You're like, "Look, I'm going to replicate the amount field from the opportunity table in Salesforce, and I'm going to do that correctly — that's my job. How that maps to your revenue recognition rules, that's for you to figure out." So, there was a nice, clear separation of concerns between what our job was and then the problems that remained to be solved by the user. And the part of the problem that we were taking on was a big chunk of the problem. It's not the whole problem. And that was kind of the conventional wisdom we were walking away from, that you had to tackle those downstream elements. We said, "No, we don't. We're going to solve the replication portion of this, which is actually a pretty big chunk of the problem. That's where we're going to focus our efforts — correct replication." And the very first connector was for Salesforce, and then we did Zendesk and Stripe, and then we started doing databases and a bunch of other things. But we always took that philosophy that Fivetran is fundamentally a replication tool, and you are going to work out the semantics of that data in the destination after it arrives.
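
A tiny, hypothetical sketch of that philosophy (invented field names and a stand-in source payload, not Fivetran's code): the connector's only job is to land a faithful copy of the source fields in the destination; business definitions, such as what counts as revenue, are applied by the customer afterward, on top of the replica.

```python
import sqlite3

# A hypothetical page of records, as a source API might return them.
source_rows = [
    {"Id": "006A", "StageName": "Closed Won",  "Amount": 12000.0},
    {"Id": "006B", "StageName": "Prospecting", "Amount": 3000.0},
]

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE opportunity (id TEXT PRIMARY KEY, stage_name TEXT, amount REAL)")

# The replication job: copy the fields faithfully into the destination, nothing more.
for r in source_rows:
    dest.execute(
        "INSERT OR REPLACE INTO opportunity VALUES (?, ?, ?)",
        (r["Id"], r["StageName"], r["Amount"]),
    )

# The customer's job, downstream of the replica: their own definition of revenue.
total = dest.execute(
    "SELECT SUM(amount) FROM opportunity WHERE stage_name = 'Closed Won'"
).fetchone()[0]
print(total)  # 12000.0
```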

Sabrina: I think one of the reasons customers really love Fivetran is because the product itself is very simple to use, but in reality, you are solving a very complicated and difficult problem — something that is very cumbersome and tedious … engineers spend a lot of time building this. You started to talk about it, but I'd love to understand Fivetran's approach to ELT. So, extract, load, transform, which shifts, as you were saying, the transform step to the end of the process. Can you talk a bit about why Fivetran created this new approach and the evolution from ETL (extract, transform, load) to ELT (extract, load, transform)?

George: It’s interesting. You know, ELT is a little bit of a lie. It’s a lie that reveals an essential truth. But the reality is that Fivetran does a ton of transformation. So, it’s really ETLT. The way the data comes out of the source, people always imagine what the source gives us is like a nice little list of inserts, updates and deletes, and then we just take it over to the destination and load it in. If it worked like that, we’d have lots of good competitors by now, and we don’t. So, the reality is the sources are sort of crazy in what they give you, so we have to develop these extremely convoluted rules for reconstructing the set of what has changed from the source. And it’s different for every source. But, if you make your goal to produce a correct replica, then that problem is the same for every user. So it’s very complicated. There’s a ton of complicated transformation we’re doing under the hood, but it’s always the same, whether it’s your Stripe account or someone else’s Stripe account or whatever the case may be. Stripe is actually not as hard as some of the other ones. That’s not a great example. Stripe gives you the data in a pretty good format, but if it’s your HANA database versus someone else’s HANA database, the rules of stitching together changes are going to be the same. They’re very complicated, but they’re always the same. And so, we do that level of transformation, but then we abdicate this other job, of like, how do you map all this data onto the concepts of your business. There’s another layer of transformation that is different for every customer. And this is the problem we very intentionally chose not to tackle. We said, “You’ve got to solve that after we deliver the data, and there are multiple ways to solve that problem.” But by making this choice, it sort of gives us a problem to solve and the customer a problem to solve. And it’s a very nice, clean handoff. And so you get that very simple user experience. Fivetran is doing very complicated things under the hood, but you don’t need to be involved in that. You don’t need to know how that works because it’s fundamentally the same for every user, so we can just automate it behind the scenes.
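
Here is a minimal, hypothetical sketch of that "ETLT" idea (the change formats are invented): each source reports changes in its own messy way, a per-source adapter reconstructs a uniform stream of upserts and deletes, and the same generic code then keeps the destination replica correct for every source.

```python
# Hypothetical sketch: per-source adapters normalize messy change feeds into one
# uniform stream of (op, primary_key, row) changes; applying that stream to the
# destination is then identical for every source and every customer.

def adapt_source_a(feed):
    # Source A marks deletions with a flag on the record.
    for rec in feed:
        if rec.get("deleted"):
            yield ("delete", rec["id"], None)
        else:
            yield ("upsert", rec["id"], {"id": rec["id"], "name": rec["name"]})


def adapt_source_b(feed):
    # Source B sends explicit before/after pairs.
    for rec in feed:
        if rec["after"] is None:
            yield ("delete", rec["before"]["id"], None)
        else:
            yield ("upsert", rec["after"]["id"], rec["after"])


def apply_changes(replica, changes):
    """Maintain a correct replica; no business semantics involved."""
    for op, key, row in changes:
        if op == "delete":
            replica.pop(key, None)
        else:
            replica[key] = row
    return replica


replica = {}
apply_changes(replica, adapt_source_a([{"id": 1, "name": "ada"},
                                        {"id": 2, "name": "bo", "deleted": True}]))
apply_changes(replica, adapt_source_b([{"before": None, "after": {"id": 3, "name": "cy"}}]))
print(replica)  # {1: {'id': 1, 'name': 'ada'}, 3: {'id': 3, 'name': 'cy'}}
```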

Sabrina: That’s a good point. And I think that what you’re talking about here is also Fivetran’s big investments in change data capture, so this whole process of CDC as well as replicating and allowing customers to move really large amounts of volume with very low impact. I know Fivetran recently also acquired HVR to further strengthen this. And so, I’d love to understand why data replication is increasingly important for customers.

George: Well, I don’t know if data replication is increasingly important. It’s just always been very important. Maybe it’s increasingly important for smaller companies. You know, one thing we have noticed is that historically data warehouses were something that were only really created by the largest companies. And we have an astonishing number of 50-person and under companies building data warehouses, which historically was not really a thing. But the problem has become so much easier at every layer of the stack that it’s become feasible for much smaller companies to pull this off. Then they can realize the benefits of knowing what’s going on in their business, which is the primary benefit of a good data warehouse.

Sabrina: When you think about CDC as a whole — why is this problem so difficult to solve?

George: It’s just a maze of incidental complexity. There’s nothing really fundamentally hard about it in the way that building a globally distributed transactional database is just like a very fundamentally hard problem, and people write Ph.D. theses about it. Nobody writes a Ph.D. thesis about data integration. It’s just a huge forest of incidental complexity, and the way we’ve been able to tackle it is with a lot of grit, a lot of hard work, and a lot of time. If you look at the history of Fivetran, we talked about the original pivot to just data integration in 2015, but then for the rest of 2015 and a lot of 2016, we didn’t grow that fast because it still didn’t work that well. Part of it just took time to figure out all these little rules that create that easy user experience where you just push the button, and your data appears and keeps itself up to date, and the details of how that happened are unknown to you. So, a lot of it has just been time. As the years have gone by, we have studied ourselves, and we have started to learn how to solve some of these problems more systematically and, in fact, automate the process of developing connectors internally at Fivetran. And you will notice if you look at our release notes that the rate of new connectors being released in the last couple of quarters has increased dramatically. Our CTO Meel Velliste and a small group of engineers spent a bunch of time last year really studying basically all of these quirks that we had observed in every connector we had ever built. And they sort of mapped it out and determined that data sources do a lot of crazy things, but there is a finite number of crazy things. You can kind of catalog them all — all the weird stuff that we’ve encountered over the years. And then they developed a new way of building connectors, like a configuration-driven connector where you just have to describe, within this vocabulary, how any particular data source works. And you don’t have to implement all the procedural rules to correctly deal with all of those quirks. And so that’s a pretty cool thing — it took years to figure that out and to do that right. But you’re seeing the benefits of that now with the acceleration of how many new connectors we’re able to build each month.
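To give a flavor of what "configuration-driven" might mean in practice (a guess at the shape of such a system, not Fivetran's actual vocabulary or code), the connector describes a source's quirks declaratively, and a shared engine interprets that description instead of each connector re-implementing the procedural rules:

```python
# Hypothetical declarative description of one source's quirks.
SOURCE_SPEC = {
    "table": "invoices",
    "primary_key": ["invoice_id"],
    "cursor_field": "modified_at",     # how to ask the source "what changed since X?"
    "soft_delete_flag": "is_deleted",  # source marks deletes instead of removing rows
    "pagination": {"style": "offset", "page_size": 500},
}

def plan_incremental_sync(spec: dict, last_cursor: str) -> dict:
    """A generic engine turns the declarative spec into a sync plan, so common quirks
    need no per-connector procedural code."""
    return {
        "query_filter": f"{spec['cursor_field']} > '{last_cursor}'",
        "delete_handling": "tombstone" if spec.get("soft_delete_flag") else "full_rescan",
        "page_size": spec["pagination"]["page_size"],
        "dedupe_on": spec["primary_key"],
    }
```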

And we do hope to get to the point in the next couple of years where we have thousands of connectors. Because that’s what we ultimately need to have. There are so many data sources out there. And it is extremely valuable to customers, especially larger customers, if Fivetran can cover all their data sources — if they can say, “I can adopt Fivetran, and then my data integration problem is solved.” Whatever the source is, Fivetran’s going to have a connector to it.

Sabrina: That’s a really good point. And I’m curious — you also have decided not to open source, to my understanding, any of your connectors.

George: We never seriously considered open sourcing our connectors. The way we looked at it is our users are not software engineers. They’re analysts and data engineers — and most of them aren’t data engineers, by the way. So they don’t really have the ability or the desire to contribute to these connectors. Open source works really well when your users are software engineers who have the ability and the interest in contributing back to things like Postgres, for example, or Linux. For us, that’s mostly not the case. And the other problem is — when people build connectors for themselves, they tend to take shortcuts that work for their particular situation but won’t work for others. They rely on behaviors of how they use the underlying system. Like, oh, you know, when I update rows in this table, I always update this column named “updated at,” so I can use that in order to get changes from that table. Well, that might be true for you. But it’s not true for others. And so that trick is not going to work. And so, for all these reasons, we didn’t really see a community of contributors being a good way to get high-quality connectors. And I think that hypothesis has mostly been borne out over the last few years. Fivetran continues to have a big advantage in terms of the quality of our connectors, by which I mean, are they working? And is the data in the destination actually a correct copy of the source? That is really the hardest part.
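The "updated at" shortcut George describes looks roughly like the sketch below (a generic illustration, not any particular connector); the docstring spells out why it works for the team that built the table but fails as a general replication strategy:

```python
from typing import Any, Dict, List

def pull_changes_since(rows: List[Dict[str, Any]], last_seen: str) -> List[Dict[str, Any]]:
    """Incremental pull based on an 'updated_at' column.
    Why this is a shortcut rather than correct replication:
      - hard deletes never show up, so deleted rows linger in the replica
      - any writer that forgets to bump 'updated_at' silently loses changes
      - it assumes every table even has such a column, which is often false
    """
    return [r for r in rows if r["updated_at"] > last_seen]
```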

Sabrina: Got it. In my mind, Fivetran targets a lot of people and adds a lot of value to multiple people in the organization. So it could be the data engineer, the data analyst, the business analyst, right? And so I’m just curious, how are you building the product with these people in mind? You touched on it a little bit there, but I’m just curious if there are certain use cases where it’s better suited for collecting and analyzing data. How do you think about this for data analysis versus ML workloads?

George: You know, it’s funny, the answer to that is it really doesn’t actually matter to us. This is something that was always very frustrating to VCs and often to salespeople — this is not really a persona-driven problem or product. We deliver data. The data needs to be correct. It needs to be up to date. And you can do a lot of different things with that data. And our users do a lot of different things with that data, and oftentimes, the proximate user of Fivetran, like the person who we’re talking to at the company, they don’t even know what all people are doing with this data. There’s this great mantra, “Data attracts workloads.” You know, once you have a good central store of all your company’s data, people will just come up with all kinds of stuff to do with it.

Sabrina: That’s funny. I think people are always kind of asking who’s the right persona. Who’s the right user, right? But in reality, when you’re thinking about some of the plumbing work that Fivetran is working on, it really is impactful to the whole organization. Now I’d love to shift gears a little bit and get your thoughts on the modern data stack. In today’s world, customers are really patching together a lot of different best-in-breed tools and solutions to host, store, transform, and eventually analyze and understand the data. That’s the end goal that customers have. And so, as the number of tools has proliferated, there’s been more complexity than ever before. And I’m just curious how does Fivetran continue to differentiate in a very competitive and complicated data stack?

George: Yeah. You know, people make these like market maps, and they put all these logos on them. By the way, if you look closely at those market maps, the same logos will appear multiple times in different categories. So it makes it look a lot more complicated than it is. And the other thing is, you don’t have to use all these tools. I mean, the first few Fivetran customers, most of them, used Redshift, Fivetran, and Looker. Which was at that time a relatively new BI tool, but it had this key strength, which was this modeling layer called LookML, which allowed you to do that post-Fivetran transformation of data that we required you to do because we had abdicated that job. And LookML has largely now been replaced in that role by dbt, which just does that. But the point I’m making is that stack is still available. You can still buy that today. It still works just as well as it ever did. You don’t actually have to adopt all of these other things. I don’t think it really has gotten more complicated. I actually think it’s gotten quite a bit simpler. You know, if you look at people who are running Fivetran, a good cloud data warehouse, like Snowflake, Big Query, Redshift, Databricks, and dbt to manage their transformations, life is pretty good for them. A small number of people can manage a lot of data and a lot of complexity with that set of tools, and it’s a pretty good stack to adopt.

Sabrina: Yeah, I tend to agree with that. Do you think that there’s going to be more consolidation here in the years to come?

George: There’s always consolidation happening. So, the short answer is yes. And how things consolidate, and what key strategic workflows they consolidate around, is always a big question. I think we’ve seen some of that happen. As you mentioned earlier, Fivetran acquired HVR. And the reason for that was very simple. HVR was really good at replicating the biggest, most difficult enterprise databases at low latency and the highest volumes. And Fivetran was really good at everything else. And so, it was a perfect marriage. And for users, now you have one vendor that can do both. And we’ll probably see other things like that. It’s something we do think about a decent amount at Fivetran — how much are acquisitions going to be a part of our future … probably a medium amount. So yeah, I do think there will be consolidation. And I do think that Fivetran is going to be one of those key poles that consolidation happens around. Because we’re at the top of the funnel, right? We’re getting the data from everywhere. We control the rules of how the schemas work and how the updates happen, and when things happen. And for that reason, there are a lot of other secondary workflows that make sense to pull into that ecosystem.

Sabrina: At what point would you consider acquiring a company versus maybe just developing stronger partnerships with those companies? Integrating more into the workflow?

George: We’ve only done it a couple of times. You know, there was HVR, and there was a much smaller company called Teleport, which we acquired for an algorithm, also for replicating databases, under very different circumstances. We are mostly a partnership-driven company. Always have been. Our second customer was a partner referral, so most of the time, the answer is partnerships. And you see that right now with our approach to metadata. Fivetran has all this metadata about the data that’s sitting in your data warehouse: where did it come from? When was it last updated? And we are working to expose all that data. The only times when acquisitions have come into the picture is when, you know, it’s something that’s in that core focus that we felt we were not good at and needed to get better at, and it was going to take a long time to do it ourselves. But, as I said earlier, we’ve only done that twice. So how that philosophy will evolve in the future, I can’t say.

Sabrina: Well, it might be a good time to be on the offensive and get some acquisitions under your guys’ belt, given a lot of what’s going on in the macro environment today. But I think that your point about partnerships is really, really important. I think Fivetran is one of the companies that has partnerships with almost all of the key players in the modern data stack ecosystem, Snowflake, as you mentioned, dbt, Collibra, Hightouch, and some of the governance players now with the release of Metadata API. And so, I’m just curious, you know, how have you been able to maintain these partnerships? And it seems very core to the Fivetran strategy, and I would just love to touch a little bit more upon this partnership strategy that you guys have built over the years.

George: Yeah, it’s, well, we have a partnerships team that works very hard. Part of it is just intrinsic to what we do. We do something that all the partners need. None of these tools do anything without data in them, and that’s what Fivetran does — it feeds the data. There is an element of like, we’ve become partner driven just because we do something that all these partners really care about and need us to do. We’ve always tried to just be very straightforward in our strategy and in the way we communicate that to partners.

You know, like I said, the core of Fivetran is connectors and always will be — we’re fundamentally a data movement company, and that’s not going to change. So, we’re very predictable in terms of what we’re going to do next. And then in the details of that, we’re very transparent with our partners about like, “Hey, this is what’s happening next quarter. This is what we think is going to happen next year,” and I think that helps.

Sabrina: There have been some of these newer concepts thrown around. As I’m sure you know, reverse ETL emerged a couple of years ago, and so initially, data warehouses were introduced to essentially eliminate the data silos, right? But for many companies, data warehouses have now become a data silo, you could argue. And so, there’s this concept in the modern data stack known as reverse ETL, which aims to remove the barriers between the data warehouse and all the end applications such that teams can operationalize data into the systems themselves. What is your thinking around reverse ETL?

George: Reverse ETL is such a funny name — it’s a good name because you immediately know what it is. But then, after you think about it for 15 seconds, you’re like, wait, this is also ETL. It’s not reversed in any way. It was originally a Fivetran feature request. Our very first customer asked us to do that within weeks of going to production. And they were not the only ones. And the problem was forwards ETL is hard enough. We had to just focus on that. And we talked very openly about it with other players in the ecosystem and with friends, including the founders of Census, actually. I’m not taking credit or anything, but we did tell them about how we kept getting this request before they started Census, and that was probably one of many pieces of evidence that contributed to their decision to attack that problem. So, it’s a need that is out there. It hasn’t gotten as big as we have just yet, and time will tell how big the demand for that actually is. I think that’s the key question, is this idea of using the data warehouse as a data bus that you can then send data onward to other places. It’s definitely a thing people are doing. Is that something everyone is going to do in a couple of years or is that going to be something some people do? Time will tell.
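Mechanically, reverse ETL is just more data movement pointed the other way. A toy sketch (the endpoint, payload, and field names are placeholders, not any real product's API) might read warehouse-derived attributes and push them into an operational tool:

```python
import requests  # third-party HTTP client

def push_segment_to_crm(warehouse_rows, api_url="https://example-crm.invalid/api/contacts"):
    """Send warehouse-derived attributes back into an operational tool, row by row."""
    for row in warehouse_rows:
        payload = {"email": row["email"], "lifetime_value": row["ltv"]}
        requests.post(api_url, json=payload, timeout=10).raise_for_status()
```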

Sabrina: Yeah. Census and Hightouch, right? Those are the two going after it in the reverse ETL world. There are also some newer approaches around query federation that suggest that engineers and developers may no longer need ETL altogether because you can query data regardless of where it lives. So, you’re removing this step of ETL altogether. Do you think ETL will continue in light of the federation trend?

George: What trend? Query federation has been around for decades. And it has been a stupid idea the entire time. Query federation makes for an incredible demo. You have live access to data instantly. You don’t have to wait for historical backfills or anything. And as soon as you go into production, it will fall over because you will discover that the data sources are simply too slow to give you data fast enough to support realistic queries. There are these little optimizations you can do, like predicate push down, that will sometimes speed it up. But as soon as you go to production, you’re going to discover you have a lot of queries that are not subject to those optimizations, and you have to move the data. There is an exception, which is if you’re reading data from object storage like S3, that actually does have enough bandwidth to make query federation work. But other than that, it’s hopeless. You can’t read the data fast enough to support a real data warehouse workload using query federation. It’s like a beautiful dream. And I will say that in some ways, you know what Fivetran did, where we said we’re going to move the data, but we’re going to treat this as a replication problem. In many ways, we are creating the same user experience as query federation. You know, you look at your data warehouse, and you just see exactly the same schema as exists in all the data sources, except the way it is actually implemented is using data movement because it’s impossible to do anything else. So those are my thoughts on query federation.
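For readers unfamiliar with the term, predicate pushdown means sending the filter to the source so only matching rows cross the wire. The schematic sketch below (the `source.run` interface is invented for illustration) shows why it rescues some federated queries and not others:

```python
def federated_count_recent(source, cutoff):
    """Pushdown-friendly: the filter travels to the source, so only a tiny result returns."""
    return source.run("SELECT COUNT(*) FROM orders WHERE created_at > :cutoff", cutoff=cutoff)

def federated_join_everything(source_a, source_b):
    """No pushdown helps here: a join over full history drags whole tables across a
    slow link, which is where federation tends to fall over in production."""
    orders = list(source_a.run("SELECT * FROM orders"))                        # full scan over the wire
    customers = {c["id"]: c for c in source_b.run("SELECT * FROM customers")}  # another full scan
    return [{**o, **customers.get(o["customer_id"], {})} for o in orders]
```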

Sabrina: I love the hot takes. It’s always great to hear, you know, differing perspectives around what’s “in” and what’s not, and I’m curious if there are any other approaches that keep you up at night or anything else that has been a concern — something that you are just thinking through at Fivetran. Like, “Hey, there are all these different approaches now to moving the data and to being able to analyze that data.”

George: Yeah, I think that there are two limitations of, let’s call it, the Fivetran-centered data stack. And they’re really limitations of the destinations of the data warehouses we deliver to, and opportunities for those data warehouses to fix these limitations. And they’re working on it. One is latency. It’s pretty hard to do any better than 15-minute latency the way things currently work. And there is a set of use cases where that’s not good enough, but it is really hard right now to do better. So the world needs a low latency data warehouse, and I think that will happen. I think the great data warehouses that exist today are going to make progress on this. They are making progress on this. And we may see another player emerge where the primary value proposition is they just have much lower latency. Then the other problem is, you know, the usage of the data warehouse as a data bus where you centralize all this data in your data warehouse, but then not everyone wants to run SQL queries. Some people want to just pull the data out and send it on to some other system that does something with it, and it’s just very costly to do that by running SQL queries against a data warehouse. And so, we’re seeing a couple of different ideas emerge about how to solve that problem. So, some people try to adopt something like Kafka as a central data bus to solve this. Now you have a whole different set of problems. And I think that’s unworkable for very complicated reasons that have to do with the way the data comes out of the sources. People don’t realize the sources produce very unclean data. And if you’re not sending the data to a relational database that supports updates and things like that, the data you’re going to be looking at is going to be very ugly.

That’s one vector people are pushing on to try to attack this problem. Another vector is like data lakes. The problem with data lakes is that the latency is even worse. BigQuery has done something interesting here. They have this thing called the BigQuery Storage API, where you can, basically cheaply, directly read data out of the underlying storage layer of BigQuery. So, if you want to do something that isn’t a SQL query with the data, you can use BigQuery as a data bus. But no one is using this. That’s kind of a big problem that I’m thinking a lot about right now — how do we attack those twin problems of latency, and how do I support workflows other than SQL queries at reasonable cost and performance? None of the solutions that exist right now look great.

Sabrina: Yeah, I like how you frame these two problems. I think they’re definitely key pain points that we’re also seeing, and I think as analytics are moving more towards the edge, especially for use cases around things like streaming and data apps, and as existing workloads grow, it’ll be interesting to really see how some of these new technologies play out in the fullness of time.

So just to wrap up here a little bit. It’s really impressive how fast Fivetran has grown over the last two years, and as you said before, it hasn’t always been an easy journey. It took two years of solving a different problem before pivoting, and then it took some time to get some of those first customers. I’d love it if you could share some of your key learnings for founders.

George: It’s funny, once it was past a hundred people, it looked the same the whole time. It helped that our growth was extremely steady over the last, you know, five years. So, if you’re growing, you know, 100% a year, that’s hard. But it’s really hard if you grow 200% one year and then 20% the next. It helps if things are steady. Again, that’s something that is as much a property of the market as your company, so you don’t exactly have control over that. So, it’s not maybe very actionable advice. But that has definitely helped. Another thing that has helped is a large percentage of the leaders of the company are people who have been here for many years. We’ve had a very stable leadership team. I think a lot of startups will replace their key leaders of all the key functions every time the company gets twice as big. And if you have to do that, you have to do that. But, through a mix of luck, of, I guess, hiring the right people early, and of really giving people a chance to learn and improve themselves, we’ve been able to maintain a more stable leadership team than I think most startups who have gone through such rapid growth. And that has some big benefits in terms of like controlling the amount of drama as you grow, you know, having these people who have been here for a long time, who have seen it grow up, they can manage certain categories of problems that are harder when it’s a revolving door.

Sabrina: Okay, so I wanted to wrap today with a round of lightning questions. We ask these of all of our guests on the Intelligent Applications podcast series, and so — aside from your own, what startup company are you most excited about in the intelligent application space and why?

George: Probably Materialize. I should disclose that I am a very small angel investor in Materialize. The reason that they are so exciting is that they have started with the hardest problem — the hardest unsolved problem, let’s say, in database management systems, which is materialized views. And then worked outward from there. Whereas everyone else starts with all the things that you already know how to do, and then they try to do materialized views at the end, and they never are able to make a very good implementation. If you can solve that problem, there are some really dramatic implications.
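For context on why materialized views are considered the hard problem: the challenge is keeping a precomputed result incrementally up to date as the underlying data changes, rather than recomputing it from scratch. A toy illustration for a single aggregate follows; systems like Materialize aim to do this for arbitrary SQL, which is what makes the general problem hard.

```python
from collections import defaultdict
from typing import Optional

class RevenueByRegion:
    """A toy 'materialized view' of SUM(amount) GROUP BY region, maintained incrementally."""

    def __init__(self) -> None:
        self.totals = defaultdict(float)

    def apply(self, region: str, old_amount: Optional[float], new_amount: Optional[float]) -> None:
        # Each change adjusts the precomputed result instead of re-scanning all rows:
        # insert -> old_amount is None, delete -> new_amount is None, update -> both present.
        if old_amount is not None:
            self.totals[region] -= old_amount
        if new_amount is not None:
            self.totals[region] += new_amount
```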

Sabrina: Outside of enabling and applying artificial intelligence to solve real-world challenges, what do you believe will be the greatest source of technological disruption in innovation over the next five years?

George: I think that artificial intelligence is incredibly significant. I have a car that basically drives itself, a Tesla, and it’s amazing. But I don’t think it’s going to be a horizontal technology that changes every space. I think what people do with data is mostly going to be the same and is not going to be impacted by AI in the context of businesses managing their data. I think, actually, it’s going to have a lot less impact than people think.

Sabrina: And what is the most important lesson, likely from something you wish you did better, that you have learned over your startup journey?

George: When we started Fivetran, we were too early for what we would ultimately do, and we solved that problem by waiting, so timing is incredibly important, and you don’t control it. That is a big challenge for every startup. Are you too early, or are you too late? And if so, what are you going to do about it?

Sabrina: And then last, just a fun question: why the company name Fivetran? Maybe it’s a play on Fortran, but I’m just curious why you chose the name.

George: Fivetran is a pun on Fortran. That goes all the way back to the original, original idea for Fivetran, which did not last very long at all, which was to make a vertically integrated data analysis tool for scientists. You know, way back, Fortran was how most scientists would write their data analysis code. So, it was a short-lived idea, but that’s where the name comes from. And, you know, we changed the idea and never changed the name, but it has turned out to be a very good name. It is reasonably memorable, and it is unique. If someone types Fivetran into Google, then they’re looking for us. And that has allowed us to measure the growth and awareness of our product using Google Trends, which has been incredibly helpful. And so, I always recommend it to founders. If you’re doing a B2B company where the name maybe isn’t that important, make sure you have something that registers a zero in Google Trends.

Sabrina: Yeah, it’s incredibly difficult these days with all the startup companies — people have gotten pretty creative with company names. And it’s challenging when you pick a name that’s an adjective or noun or something that is already a word. I’ve seen some interesting ones.

George: You want to know something really funny, and not a lot of people know this, but in the old days of Y Combinator, when Paul Graham was still there, he was the master of coming up with names. And anytime a company had a name that was a problem, and they couldn’t get the domain they needed, or they discovered it conflicted with someone else’s name, they would send them to him. And then they would sit there in his office and figure out a new name. And he was so good at it. It’s like his secret talent. It was not needed for us, but there were some other companies in our batch that were marched off to go figure out a new name with PG.

Sabrina: That’s awesome. Coming up with that company name is one of the most challenging things. Well, thanks George for being with us here today. It’s been a lot of fun, and I appreciate you joining us and spending some time with us today.

George: Nice to be with you.

Coral: Thank you for listening to this week’s IA40 Spotlight episode of Founded & Funded. To learn more about Fivetran, visit Fivetran.com. That’s F-I-V-E-T-R-A-N.com. Thank you again for listening, and tune in in a couple of weeks for our next episode of Founded & Funded with Snowflake’s Bob Muglia.

Stack Co-Founder Will Rush on Teen Crypto Investing

This week, Investor Aseem Datar is talking with Will Rush, co-founder and CEO of Stack, a teen crypto platform to educate a young crypto-curious generation — and their parents — about responsible investing. After working at a company building a banking app for teens, Will took his decade-long experience in securities and finance and applied it to crypto and Web3. He saw the excitement and curiosity that teens had for crypto and knew he could apply his experience to create a world of responsible teen crypto investing.

Stack has managed to balance education and access for both parents and their teens in a way that the younger generation is actually appreciating and connecting with. And Will is maniacally focused on the education portion. Not only because he is a soon-to-be parent himself, but because in what he calls the Wild West of Web3 and crypto, he says somebody needs to be the good guy. To help continue that project, Stack just landed $2.7 million in funding to further develop its platform. And Will’s advice for every founder — “You need to have slime on your face” — a reference most of us millennials will understand, but you’ll have to listen to see how Double Dare relates to launching a startup.

This transcript was automatically generated and edited for clarity.

Aseem: Hi, everybody. My name is Aseem Datar, I’m a partner at Madrona Ventures, and I’m excited to have Will Rush, the CEO of TryStack.io. Will, welcome to the show, and I know that congratulations are in order.

Will: Thanks so much for having me, Aseem. I’m really excited to be here. It’s been amazing working with you guys and the Madrona Venture Labs team as well. We’re very fortunate to have you in our corner.

Aseem: I’m excited to talk a little bit more about Stack and make people aware of what you’re building. I’m very excited, and it’s amazing to partner with founders like yourselves who have a big, bold vision. But before we get into Stack, I wanted to understand a little bit about your background and where you are coming from. Tell us a little bit about your story.

Will: You know, I’ve been in securities and finance at some level for over a decade. In the early part of my career, I worked with companies like Charles Schwab, E-Trade, TD Ameritrade, and a lot of the old brokerages and traditional finance guard. And then I reached that point in my career that I think a lot of people do where I’d been doing one thing for a long time, and I wanted to lean a little bit more into the early side of things and startups. So, I went back to school. I got a master’s degree up here at UW. And while I was getting my master’s degree, I worked part-time at another venture firm now called Fuse Venture Partners — at the time, it was still Ignition Partners. And that was a really fun company and environment to work with — and all of their portfolio companies and founders were a great way to get my feet wet with this startup world. And then, at some point, I met the person who would become one of my favorite mentors ever in my professional life, Eddie Behringer. He was coming off of a recent win with Snap! Raise, which had just done its private equity exit. And he had just started Copper Banking — I think they had three or four team members at that point. It was himself and his co-founder from Snap! Raise and one or two others. But he was also thinking, at that time, about what it would be like to build a brokerage and stock product for teenagers. The thesis of Copper was to build the finance and banking app for teenagers that was built for them instead of their parents or somebody else. And so, it was really fun. It was a great product to be a part of, and I learned so much working directly under him. And then eventually discovered this thesis for Stack.

Aseem: You know, this leads us to the obvious next question, which is the inception of the idea around Stack and really trying to understand what patterns you saw while at Copper while being at the VC firm. I want to dig deep into understanding how that idea came to you. What are the macros in the world that you saw? I want to dig into “your why,” as they call it. Why did you build Stack?

Will: Oh, man. I mean, I think it really started from understanding the last 10 years in finance and FinTech. We saw companies like Robinhood come out of the 2008 financial crisis and do an amazing job of democratizing access. And they did that by allowing you to trade $1 of any stock you wanted with fractional shares and do it commission free. And so, of course, that introduced a lot of new people to the stock market. But with democratized access, I think there was a huge lag in education. And so, being at Copper, being able to impact young people who were at the very start of their financial journey, the impact that you can make and the habits that people form at that age, you know, they absorb things like a sponge, and it was cool to see and be a part of those stories. While I was at Copper and I was researching our stock product, it became abundantly clear to me just through some of the A/B tests that we would do, where we would literally change one word, “stock” to “crypto,” and all of a sudden, our engagement on things like Instagram would just fly off the charts. And so, I think there was a story there. There was a story about this zeitgeist behind crypto, this excitement, this curiosity with the youngest generation. And could you be the good guy in that world?

That was really where the thesis for Stack started and how we got to do what we’re doing now. I think there’s so much to unpack, which is how you educate successfully. I think a lot of parents hear teen crypto, and their first thought is, “Whoa! My teen doesn’t even have a stock account.” Or, “I don’t know if I want to get my teenager involved in Web3 or crypto.” But the reality is that a lot of the teenagers we’ve found are hacking their way into crypto. They’re using their parents’ driver’s licenses to get a Coinbase account. They’re going around the law in a lot of ways to do this. And if we could create a bumper lane, if you will, that to us was a really compelling solution.

Aseem: Three things that you said stood out for me. One is this whole notion of democratizing access, right? If you go back a few decades, not everybody was trading in securities. Not everybody understood what it meant to be, for lack of a better term, an owner in a company and bet on the growth of that company or that enterprise. And that’s changed over, I would say, even the past couple of decades. I think the second thing that you really mentioned, which was powerful, was this wave of learning and educated investing and guidance. And in the world of crypto, I think it’s more true than ever that people are just trying to figure this out. And especially teens, who want to latch onto this wave, but at the same time, kind of don’t know where to go. They’re discovering it as most people are. It’s fascinating that you are the on-ramp or the guardrail, like you mentioned, to educated investing. Because this is going to happen with or without, I would say, guides or mentors. And the third thing that you mentioned, which is fascinating to me, is even parents are learning. This is where I love the theory and the thesis behind what you guys are building, which is educated access, educated investing, and providing a friction-free on-ramp to growth. So, tell us a little about what’s on offer today with Stack and where people can go to try it out.

Will: Well said, on so many fronts. There is this huge opportunity with what we’re building to, I think, tie together the wants of our two users. There’s the parent user, and the want is education. The other user is a teenager, and the want is access. And one is hacking their way in, and the parent who wants both themselves and their teenager to be educated really doesn’t have a lot of places to go right now. Because there really just isn’t that much education in crypto in general. We’re really living on the edge of the most cutting-edge area of the financial sector. The offering that we have now — our app is live on Google Play — is set up so that a teenager opens a UTMA account, under the Uniform Transfers to Minors Act. That requires a parent to be a cosigner on the account. And it allows teenagers to have legal access, because if they access Coinbase or Robinhood or any of those other platforms that are out there right now, they’re actually going over the wall, since those platforms only allow users who are 18 or older. But we allow legal access for teens. And then there are a lot of other really cool things about the app. So, for one, a parent can set approval limits. That means they can say, “I want to review everything,” “I want to review trades that are above $50 or $100,” or “I want to give my teen the keys, and I trust them to go make good decisions.”

The second thing that we do a lot of is, like we’ve talked about a lot — education. How do you balance the education element for a teen who is on TikTok, where a lot of the trends that they see evolve in a matter of minutes or hours? How do you get their attention? How do you actually win in this education world that obviously very few people in finance have been successful with? And I think the way that we have chosen to win, and really what we’ve learned about teenagers over the last 18 months, is — it has to be bite-sized. It has to be tied to just-in-time learning. So that means, if you’re about to buy Bitcoin for the first time, you want to know about Bitcoin right then, and how can we win that moment? And then there’s the third piece of it that has proven successful in a few other startups, which is the learn-to-earn model. And that means you actually get benefits, and for us, you get fractional amounts of crypto if you teach yourself about it.

To a parent user, I think that third element is really important because we are doubling down on education from day one — by saying, we will give you free benefits, rewards, whatever you want to call it, via crypto, just for educating yourselves. There is a new offering that we’re going to release later this fall that I think will be by far the most exciting offering that we have to date. And it really plays on both a parent wanting education and a teen wanting access, and really how to balance the two of those as thoughtfully as possible. It also draws on a lot of the early engagement statistics that we’ve seen from our user base on our app.

Aseem: I think one of the things that stands out for me is the UTMA license you guys were able to get, and that’s really the foundation of how you’re approaching responsible investing. That’s awesome because it’s all about providing that guided education path for somebody to take control of their own destiny in a responsible way. I was curious what are you hearing from the teens — or, as I call it, the entrepreneurial generation — what are you hearing as they’re using your platform?

Will: We’ve learned an immense amount from teens working with them. I mean, we literally work side by side with them and in a lot of capacities — we’ve physically shown up to high schools. We’ve physically sponsored events. We’ve digitally sponsored events. What they’ve taught us over that period is you’re only going to command a very short period of time, and that includes education. That includes even if you reward them through something. And so, how can you win small moments that have a big impact? So, we’ve built that into our education content. We expect our teen users to spend between 30 seconds and 90 seconds on the educational module of our app. But if we can do that consistently a few days a week or even every day of the week, I think that’s where you win.

The second thing that we’ve learned from teens is, in this world of technology, they’re highly capable. And I think many products built for kids and teens infantilize them a little bit, and this generation is different. As you said, they’re the entrepreneurial generation. They identify as entrepreneurs. A lot of them have side hustles. High school is different, and we need to recognize that, and we need to lean into that because they’re highly capable. And if you give them the right tools and the right guardrails, I think they’ll shock the heck out of you with what they’re capable of doing.

Aseem: Which is amazing. I think there’s so much to learn with this new way of consuming content, consuming education. You’re kind of on the bleeding edge of constant evolution from the product side of things. Shifting gears a little bit, Will, tell us a little bit about the recent economic changes and the impacts that the markets are having. Have you seen any changes in terms of how people are interacting with Stack, and how they’re thinking about it? I mean, you guys just launched, but I was curious about how that’s playing into the minds of your customers?

Will: Well, I’ll say this: we’re incredibly fortunate to have launched when we did, because we launched after the down cycle. If you believe in the thesis that crypto is going to go back up, which I think, you know, anybody that’s getting involved in it likely does, there’s a good chance that our user base is getting in closer to the bottom of the market. I mean, I look at somebody like Robinhood, which was born in the shadow of 2008. And what Robinhood did well was they built with a diehard user base. And then, when the market started to catch fire again, that’s when you saw their user base take off. Not saying that we’re waiting around for the market to catch fire again. I think there will be another one of those natural upswing moments when there’s a lot of curiosity, crypto’s back in the news all the time, and people are talking about new coins, and they’re talking about Bitcoin or Ethereum and everything else. But in between now and then, we are really living and breathing the customers that want to be doing crypto right now. And they want to do crypto for, I think, a few reasons. Number one, they know that not only is there obviously crypto the investing product, but there’s the blockchain, and there’s the technology behind crypto. And there are so many other use cases, so many other industries and companies that are building on the blockchain as a technology that can be learned on our platform and can be learned through crypto. Because crypto is the currency of this technology. So, I think we’re really leaning into that. That user base tends to be gamers or highly tech-oriented. They certainly have a certain customer profile that we found over the past few months, but that’s really kind of where we’re living and breathing right now.

Aseem: Right. That’s so awesome on so many levels. Wanted to ask you a little bit more about the product itself. So, what can teens do with Stack today that no other platform offers them?

Will: So, number one is just legal and safe access for teens and their parents. I mean, we are one of the very few investing platforms — even if you open this up to stock products as well, it’s a small niche, the under-18 finance and mobile app game. And we were the first ones to open in this under-18 arena specific to crypto and Web3. Number two, something that you get out of it just indirectly through a UTMA account is tax advantages. You can earn up to $2,300 tax-free on our platform because you have a UTMA account. Whereas if you go earn on Coinbase under your parent’s account, you’re going to get charged 30% or whatever their tax rate is. And then third, it comes down to a product built specifically for teen crypto rather than a Coinbase or a Robinhood. What we think about there is we limit the coins on our platform. We do a lot of vetting of what currencies are on there. We vet based on market capitalization, we vet based on trading volume, we vet based on a lot of characteristics and nefarious activity. And so, we’re making sure that the top-of-the-funnel decisions we make about our product and app make it safer to trade. And then lastly, we also don’t do crypto on- and off-ramps, and I think that is an essential decision for us. Crypto on- and off-ramps are where all the fraud and scams happen. It is somebody tricking you into giving up your wallet password, or hacking it, and transferring assets on an immutable ledger that you’ll never get back. So just by disabling that, I mean, there’s no reason for our users to be sending crypto assets off platform right now in our use cases. And so, we can just guarantee safety there. I think that’s obviously important.

Aseem: I mean, I remember the days when you first got a bank account, when you first got a credit card, and the world opened up, right? And with that comes a lot of nefarious activity and risks. What’s amazing, as I’ve learned working with you, is you guys are trying to minimize all the risk that’s out there from a fraudulent activity perspective. Now, yes, there’s always risk in trading, but at the same time, you’re giving them the tools and the platform to do it in a responsible manner.

Will: Totally. It is the Wild West, but it’s in need of a good guy. And we hope to be one of those good guys that can create a safe place for you to trade. And for you to learn.

Aseem: So, shifting gears a little bit, tell us a little bit about how you’re reaching teens. I mean, so much of what teens do is driven by social proof, right? I mean, yeah, if my friends do it, then I’m going to latch onto it. I’m going to learn how to do it. How are you approaching the community angle of your focus — is there a community angle?

Will: Oh my gosh. It is essential; it’s everything that we do. We have an ambassador program, which you’ll see right on our website. And that ambassador program now has somewhere between 200 and 250 teens, representing just as many high schools. Most often, they are the president of their finance or investing club at their school. And what they do is they work directly with our team, and there are two aspects to it. One is we come in and do a financial literacy workshop with them, which is incredibly powerful for us, whose mission is to educate this next generation of kids. Meanwhile, the curriculum in their own high school certainly cannot keep up with crypto and digital assets in this world.

And so, we can do a lot of cool education there through this extracurricular environment. And then the second thing that we do with them is give them opportunities to earn money on the app. And they can do it in a lot of ways. So, they create some of our TikTok content. We did a challenge where we sponsored a group of teens’ side hustles for the summer. And so, we had teenagers that created designs for iPhone cases and sold them on eBay. And we had teenagers who, very entrepreneurially, collected golf balls, you know, they lived on a golf course, and packaged them and sold them. And so, there were just a lot of cool stories that came out of that. And they video blogged the entire thing. Just creating those powerful stories of this teen community so that teens can see their peers, see the amazing things they’re doing, and understand that Stack, as a brand and a platform, is a place that is celebrating that. And I think that’s fundamental to who we are.

Aseem: Yeah, that’s so cool on so many levels. This ambassador program can just be the beachhead of what you’re trying to achieve. And I think with that comes social proof, and with that comes interest and awareness. It’s so cool on so many levels. You know, as a parent of a 9-year-old soon to be entering the teenage years, it’s almost scary to think of. So tell us what’s on the mind of every parent. Tell us more about what your journey has been and how you’re discovering that. And frankly, is that going into how you’re thinking about the product?

Will: I’ll admit right now that we have a lot of work to do with parents. We are starting to be good listeners of parents. I’m actually entering the parenting arena in a few weeks myself.

Aseem: Yeah, congratulations!

Will: Thank you. Thank you. I’m constantly thinking about what type of world I want to create for my own children. And I know that’s on every parent’s mind. So being the good guys and sharing the message that we are the good guys, that we’ve taken a lot of steps to create the best place for a teen and a parent to be a part of this kind of crazy world of crypto and Web3, I think is a huge message to carry. We have started doing some more bespoke programs similar to our ambassador program but for parents. And then the second place where we’re spending a lot of time, which I think is just as important as parents, is teachers carrying the baton. Educators can be, I think, a really interesting part of our strategy.

I’ll say this, we didn’t really hit this light bulb until recently, but we had probably, I’m going to say, 10 to 12 educators reach out specifically through our website’s email about us educating on crypto, whether they could get their class involved in our platform, and wanting to know how it worked. And from that, we’ve realized that there really is this cool opportunity to share stories with parents through the educators of the world, because who do parents trust the most? They trust teachers. And so, if we can find a way to create some really cool storytelling of teachers being a part of our platform and using it to educate kids in a place that curriculum just can’t, I think that could be a powerful way to celebrate parents and their needs and wants, which obviously fall into education.

Aseem: Yeah, I really like it. I mean, I really like the use of the word curriculum. Because at the end of the day, you’re kind of giving them a very curated set of learnings that you can then apply to real-world scenarios and start trading. And, you know, you’re bang on when you talk about teachers, and in some ways, you know, parents are teachers, but in this scenario and in this new world, parents are also on the educational journey, and they’re learning alongside. Talking about learning: you’re a first-time founder, you’ve had amazing experiences at a different company, you’ve been at a VC firm — what’s been different so far as an entrepreneur? What have you learned? And I love asking this question — what has been the “aha moment” — if there is one?

Will: Oh my gosh. I mean, the founder journey. I tell friends and even early founders, as they’re starting their own journeys, that I think being a founder either spits you right back out or lets you fall into this ocean of loving every single day of what you do. And it really takes wanting a certain set of things out of the founder journey to really embrace it and love the job. It has made my career come alive in so many ways. The six months where I both created every strategy and then executed that strategy, I think, is one of those unique moments in your career, because we all don’t get enough time to just spend on ourselves, especially when you work at bigger companies and you’re part of the machine in a lot of ways. And so, that six months, for me, to really go back and be honest with myself and not tell myself any stories, to say, “What are you great at?” and “What do you need other team members to come and do a really great job of?” It teaches you lessons about yourself, you know, just in that honesty, that I think are beneficial, not only to your career, obviously, but to you as a person. I find myself grateful for a lot of those lessons early on and celebrating a lot of those things, the bigger we get and the more people we add to the team and all of those good things.

And shout out to my two co-founders, who round out my skill set incredibly well. We had a discussion earlier today, in fact, about some areas that each of us feels particularly strong in and how we can continue to celebrate those in all of the strategies that we’re running.

Aseem: We’ve had the fortune of working with, you know, both Natalie and Angela as well, and I think we can see the team coming together and the sum of parts making the whole so much better. Right? Because at the end of the day, I think you’re complementing each other well. You know, you’ve got your strengths, and the team has theirs, and I think it’s so amazing to see the humbleness, the drive, and frankly, the passion for the space all get married with your skills that then take you along the journey of achieving something really, really great. The common knowledge around talking to founders is that there’s always one thing on their mind that they’re maniacally focused on — what is it for you when it comes to Stack?

Will: The number one thing, which I’ll just reiterate like a thousand times probably, is how can we more meaningfully educate? And so, we’re constantly obsessed with that. I mean, where my skill set falls a lot of the time is actually in our product: designing it thoughtfully, listening to the data, listening to users, and really celebrating those users.

We had a fun conversation about our marketing approach right now. And we talked about how we need to create the Stack feed to mirror the exact profile of our user. And how do we make the Stack feed look like a 16-year-old that’s curious about finance and has certain behaviors and patterns, so we can really live and breathe the content that they’re seeing and create meaningful educational content, create meaningful marketing content, create meaningful product content, all of that stuff. I think just celebrating a lot of that in high-quality content, where I think a lot of finance apps send you to a blog or send you, you know, off app to something else. We can do a lot of good stuff there. So the one laser focus that I have is winning that specific space of our app.

Aseem: And so much of what you’re seeing is actually reflected in the approach that you’re taking. The way different people consume content has changed so much over the years. Like we’ve gone from news feeds to RSS feeds to blogs to articles to Twitter to TikTok now, and I think it’s amazing, you know, how meeting people where they are is the approach that you’ve taken. It’s the natural way of how teens will learn. It’s exciting to have seen the work the team has done and, in working closely with you, to see how much we’ve learned as a firm in terms of the approach you’ve got to take and how you’ve got to change. It’s both enlightening and fascinating at the same time.

So, kudos to you and your team for always being hungry and always being on the lookout for what’s around the corner and how you can make it better and how you can strive to achieve that curated approach in a way that is very natural.

Will: I appreciate that. It means so much.

Aseem: Yeah. Advice for young founders, Will. A lot of people who are in big companies have ideas, you know, going after a bold vision, thinking about solving it in a meaningful way. What advice do you have for them, having just walked that journey, and still walking that journey?

Will: Well, number one, I would say, is the metaphor I’ve used from time to time. It’s from an old show I watched in the ‘90s called “Double Dare 3000,” and there’s a part of that show where you are going through this obstacle course. You’re given no instructions. And in each obstacle, you have to find a flag, and it’s like stuck in slime sometimes, or it’s in the middle of a slide, or it can be anywhere.

Aseem: It sounds very teenager focused.

Will: I mean, it was a great show. I loved watching it when I was a kid. And why I think it’s the perfect metaphor for early founders of a company is because you’re given very little instruction as a founder because you’re your own boss, all of a sudden. And you don’t have a lot of structure. And all you’re trying to do is find the flag at every obstacle. And that changes every day. Your job changes every day. One day, you’re trying to find money. The next day, you’re trying to find product-market fit. The next day, you’re trying to interview customers. The next day, you’re trying to keep your vendor expenses down. It is a new and different challenge every day that you enter the game. For founders who love it, that’s what they love about the job. But you need to be finding a way to just find the flag. The quote that I throw out there is, “You need to have slime on your face.” You just need to go for it. You’ll be told a thousand times that you’re not going to make it. You’ll get told “No” a thousand times. And to be resilient enough to get through that and to find it in yourself, to get to the next obstacle. I think that’s the biggest thing.

Aseem: I will tell you this, Will — in the context of “slime on your face,” we were enamored by your vision. We are enamored by your tenacity, your passion, and I don’t think that there was ever a question of “No” from our side. You know, without further ado, any fun fact or any fun story that you want to share with our listeners as we bring this to a close?

Will: Oh, man. I mean, there are so many things. I think there was a moment in the second month of being a founder — and this is when I had taken the plunge, this is my full-time job, you know, I was fully locked in — and I had one of those days where I got a lot of “no,” you know, I was getting a lot of doors closed in my face. And I had sat in my chair for a long enough time that my screen had gone black, which meant it had been 15 or 20 minutes or something. And I was staring at my own reflection in my computer, and I just looked at myself, and I said, “Oh my God, what are you doing?”

Aseem: Yeah.

Will: And I think you have to pick yourself up from those moments, which every founder has. And, you know, one of my favorite books is “The Hard Thing About Hard Things” because it talks a lot about those moments. He talks about the struggle in that book and how every founder has been through the struggle. I think there’s so much power in just knowing that others have gone through it. Talk to a good founder, have a coffee with a founder that’s a step ahead of you, and the amount of knowledge that you’ll learn and the empowerment that you’ll get is huge. I mean, every time I have one of those conversations, I rip off the warmups, and I get back in the game. And so, I think that’s incredibly powerful, and I’m happy to be that for any entrepreneurs out there listening.

Aseem: That’s awesome. Hey, Will, I really want to congratulate you on the announcement and on the raises. It’s fascinating to be able to back someone like yourself and partner with you and the team in really going after a big, bold vision. Because I think this is truly transformative. Think of it 20 years down the line, looking back and saying, “Gosh, we wrote the first line of code and had the first product on the market that actually helped teens invest in a responsible way. And that created not just good habits, it created futures.” I think it’s just fascinating that there is so much out there that we could be enabling in a responsible way. And we are fortunate at Madrona to be on this journey alongside you and to continue building and laying the foundational elements of this. So, thank you for your partnership, and I’m even more excited about doubling down and getting to work, because the real work starts now in trying to scale the product and, you know, deliver a stellar experience.

So, thanks for being on the show and good luck in the future. I’m sure we’ll be looking back and saying, wow, that first line of code, that first moment staring at the screen — it was surreal.

Will: I love it. Thank you so much for having me, Aseem. I’m humbled and honored to be a part of the amazing Madrona portfolio. And I know my team is incredibly fired up to go solve this problem. And, as you said, the hard work starts now. We’re really excited to be a part of it.

Aseem: Awesome. Thanks. We’ll talk to you soon.

Will: Thanks, Aseem.

Coral: Thank you for listening to this week’s episode of Founded and Funded. If you’re interested in learning more about Stack, please visit TryStack.io. Please tune in in a couple of weeks for our next episode of Founded and Funded with Fivetran CEO George Fraser.

Battlesnake Founder Brad Van Vugt on Creating Community Through Programming Competitions

Welcome to Founded and Funded. My name is Coral Garnick Ducken, and I’m the Digital Editor here at Madrona Venture Group. This week, Investor Maria Gilfoyle talks to Brad Van Vugt, founder and CEO of Battlesnake, a multiplayer programming game for experienced web developers where your code is your controller. Battlesnake builds games for programmers that encourage self-directed learning and are challenging to master. In this episode, Maria and Brad dive into where the idea for the game came from, how they get the developer community so involved, and how they’re able to combine aspects of gaming, eSports, and traditional sports to create an engaging experience not just for developers, but for anyone to watch and enjoy. You won’t want to miss Brad’s stories and advice in this one, so with that – I’ll hand it over to Maria to kick it off.

This transcript was automatically generated and edited for clarity.

Maria: Hi everyone. I’m Maria. I’m an investor at Madrona. I am honored to be here today with Brad, the founder and CEO of Battlesnake, one of our portfolio companies. Brad, I’m excited to be here with you today.

Brad: Hi Maria. It’s good to be here.

Maria: So, I thought we could start with the story of how Battlesnake got started. You have this incredibly engaged community with over 800 developers in a Discord channel, actively playing and coding games on Battlesnake. What was the initial vision, and how has it evolved to where it is today?

Brad: It’s a good question. It’s an interesting story. Battlesnake started a few years back primarily as a developer recruiting event. I had co-founded another tech startup, and we were at a stage where we needed to hire lots of developers. We were like, okay, let’s do something interesting, specifically for intermediate and senior developers, who are historically challenging to recruit and challenging to retain. We thought, let’s do something really interesting — let’s see what we can do to maybe make some buzz locally.

And so, we wanted to hold a developer event, and we wanted to do something different than what you’d normally see. And at the time, hackathons were really cool. It was getting all your friends together, staying awake for 48 hours, programming on your laptops, drinking lots of coffee and energy drinks, and seeing what you could make. We thought that hackathons were fun, but they’re mainly accessible to students or younger developers or folks who have free time on their hands. Hackathons become inaccessible when you start thinking about anyone with kids, anyone with full-time jobs, and other commitments outside of just coding. And so, we wanted to do something different from that and hold a developer event that felt different. We wanted to accomplish two goals with it. The first one was to have everyone work on the same problem. At hackathons, generally, everyone goes off in their own little silos and corners and builds something, and then everyone presents on stage, and nobody cares what’s going on. But what if everyone worked on the same problem? What does that look like? And does that encourage collaboration? But then also, can we make it really fun for non-participating folks to show up and watch? And so, can we hold some sort of tournament, show, or challenge at the end that would be engaging and fun to watch? And we came up with the idea of Battlesnake. The core premise was to spend some time building a web server that plays a game autonomously on your behalf, and then we’ll hold a challenge or a competition at the end and see who wins, and we’ll celebrate this. And it took off in an interesting way — to a point where we had thousands of developers show up and try to participate and win first place.

We had hundreds of parents, kids, grandparents, and colleagues show up just to watch the tournament. And that triggered something for us where we were like, okay, this is really interesting. We haven’t seen anything like this before. We started to get inbound interest from much larger tech companies that were struggling to recruit developer audiences that were more senior and more experienced. And that pushed us over the edge of, okay, let’s see what we can do in the space. Let’s see if we can grow this and see what that would look like. That was a decision we made in 2020, and we started working on Battlesnake full-time. And now, we consider ourselves a global game platform. We have developers all over the world. We have more than 20,000 developers that have played Battlesnake at this point. We host monthly competitions and challenges, and we do a bunch of different stuff with a bunch of different developers. But those were the humble beginnings — let’s try to do something that’s better for more experienced developers specifically.
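To make the “your code is your controller” premise Brad describes concrete, here is a minimal sketch in Python of the kind of web server a participant might write: the game engine calls the server each turn, and the JSON response decides the snake’s next move. The endpoint names, payload fields, and strategy here are illustrative assumptions, not necessarily the exact Battlesnake API.

# Minimal sketch of a "code is your controller" game server (field names are assumptions).
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/", methods=["GET"])
def info():
    # Metadata the engine might request about your snake (illustrative fields).
    return jsonify({"apiversion": "1", "author": "example", "color": "#3366ff"})

@app.route("/move", methods=["POST"])
def move():
    state = request.get_json()       # board, snakes, food, etc. (assumed shape)
    head = state["you"]["head"]      # e.g. {"x": 3, "y": 4}
    board_width = state["board"]["width"]

    # Trivial strategy: move right until the wall, then move up.
    choice = "right" if head["x"] < board_width - 1 else "up"
    return jsonify({"move": choice})

if __name__ == "__main__":
    app.run(port=8000)

In a real entry, the interesting work happens inside the move handler — pathfinding, opponent prediction, and so on — which is exactly the open-ended space developers use to explore new languages and stacks.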

Maria: You have this incredible story of being the founder of another company, seeing a problem internally and then trying to develop the solution, which became your next startup, Battlesnake, which is such a cool story. Is there anything you learned from being a founder of a prior company that you’ve taken with you as a second-time founder?

Brad: Yeah, there are a couple of things. So, we were a B2B SaaS company. We required an API integration, so a lot of our product engineering went into API design and being thoughtful around how the API was presented and documented. So, we learned a lot about how to speak to developers, how to engage more senior developers, and how to build an interesting developer experience. But also, no developer really wants to integrate with a SaaS product, so it was like, okay, how do we make this as quick as possible? Those learnings led to, okay, how do we grow Battlesnake in the early stages? How do we build an interesting developer experience? How do we get developers to the fun fast, to use the cliché term?

But there are also a number of interesting things that didn’t transfer to the developer gaming space. Fixed roadmaps are a big one. When you’re developing for B2B SaaS, there is a problem you’re solving, you have phenomenal ideas for how to solve it, and that’s what you are going to do. That’s what you’re going to build. But for consumer or gaming spaces, you have to be much more open to your audience and the community. And this is something that we learned early on, which was, okay, it’s not just talking to the developers, it’s listening to their feedback. It’s engaging with them directly. What ideas do you have? What would make this more interesting for you? How could we take the platform to the next level? Where do you see Battlesnake in the next five years? And how do we help you get there? That was — I don’t want to say painful — but it was a bumpy shift in mindset from enterprise SaaS to a developer-facing consumer gaming product. The other thing that helped is that I partnered with a really strong CTO who has done lots of stuff in consumer gaming. He brings that background, that previous knowledge and experience, to balance out how we need to think about engagement on a daily or weekly basis. And he helps a lot in that regard.

Maria: You’ve been incredible at listening to your customer and keeping them engaged. But before we dive into that, I would love to dive into how you think about the journey of the user in Battlesnake. So, you started with this new version of a hackathon, and it’s now evolved into this platform where anyone from anywhere in the world can log on and participate in Battlesnake. You’ve thought a lot about how you should focus on building an open-source community with extreme flexibility. So, anyone can participate with any programming language, any cloud provider. How do you think about that flexibility and user experience from the first time someone goes to the Battlesnake website to participating in a tournament?

Brad: I think it’s really challenging. And it’s something that we’ve worked very hard to improve. And I think we’ve done some great things in that regard, but there’s always more space for improvement. Battlesnake as a platform is all about exploring technology on your own terms — whatever that means to you is good for us, and we want to support you in that. A lot of developers that get involved in Battlesnake, especially more experienced developers, are coming to Battlesnake already with an idea in their mind of something they want to do — they read a blog post six months ago, they heard a podcast about this new stack or this new technology or this new cloud platform. And it’s something that’s been in the back of their mind as, “Oh, I’d really love to learn Rust,” for example. I use Rust as the example because a lot of our players are using Battlesnake right now as an on-ramp to learn Rust specifically. But if you speak to most experienced developers, they have a laundry list of things that they would like to explore more and things that they would like to try. And really, that’s what we’ve learned to latch on to. When you come to Battlesnake, we’re not going to teach you AI. We’re not going to walk you through AI. It’s not an ML competition. That’s not what this is. This is about giving you an interesting way to finally learn that thing you’ve wanted to learn forever. And most developers, especially experienced developers, have that list already. So, it’s about tapping into that and speaking to that directly.

The other interesting thing is — and this is something that we observed prior to making the jump to work on Battlesnake full time — there’s not a lot of opportunity for developers to go deep on specific technologies. If you are working professionally as a developer, the scope of things you get to work on is very small. You might be able to go deep on something that’s very specific, very siloed, but you don’t have a lot of say in how you explore outside of that or beyond that. What attracts senior developers to Battlesnake specifically is the ability to explore that over long periods of time. We’ve had developers play Battlesnake for multiple years now, and they’ve used it as a jumping-off platform to learn new languages, tech stacks, and cloud platforms. We’ll have an elite competitor do well in one particular tournament and then show up at the next tournament with a snake that’s completely rewritten in a different tech stack — “Hey, I really wanted to try it this way this time and see what that was like.” And that started to happen early on without our involvement, and that’s the core piece of growth, and that’s what keeps developers coming back.

Maria: That’s awesome. And how would you describe the profile of a super user on Battlesnake — the person you’re trying to target? You’re open to a lot of different profiles, but who ends up being the most engaged?

Brad: The most engaged tend to be senior developers with ideas about things they want to learn, but who don’t have the time for a side project or are sick and tired of being told to do side projects. A lot of developers get told, “You want to learn React, or you want to learn Rust, or you want to learn this new technology — go do a side project on your own or go join open source.” And both of those things are lonely and intimidating, with huge barriers to entry. Our long-term core users right now — our power users — are folks that have these lists of technologies and things that they want to learn and explore, and they just haven’t found a venue to do that yet. And those are the ones that stick around for a long time, and they help the community, and they get involved and continue playing for quite a while.

Maria: So, you’ve built this engaged community, and you’re great at listening to your users, and there are people from all over the world that are tuning into Battlesnake tournaments. How do you think about managing that community and growing it, and how the concept of community contributes to the overall growth of Battlesnake as a company?

Brad: I think it’s challenging. It’s hard to do, and we’re constantly learning. We’re constantly listening, we’re constantly learning, and we’re pretty good at acknowledging what we don’t know. But also, more recently, we’re learning that multiple channels are incredibly important — like highly engaged, multiple channels. So, it’s not enough to have a Discord. It’s not enough to have a GitHub discussion board. It’s not enough to be on Twitch. It’s all of those and more. One of the things that has worked out well for us is being on Twitch regularly. It puts real people behind Battlesnake. You can see me, or you can see someone on the team talking about how we think about the game, how we want to extend the game, the problems that we see, and what we think we’re hearing the community say. But then it also gives incredibly real-time and incredibly interactive opportunities to community members.

It’s a lower barrier for someone to show up on a Twitch stream and ask a question in real time. And I can answer right now. And that has been incredibly valuable early on and continues to be valuable even as we expand across other channels. I think that the idea of developer brands doing more live streaming or live engagement — it’s incredibly undervalued in the industry just in general. And it’s been one of our core advantages in growing and managing the community early on.

Maria: Is there anything that surprised you about growing and managing this community besides the importance of Twitch that you didn’t realize at the beginning? Or you’re continuing to explore as you build the community out.

Brad: Yeah. Developers as a community are going to do what they want, whether you give them permission or not. That’s the nature of this. Because we do so much open source, many developers will, if they have an idea, just act on it. And they’ll self-organize, and they’ll start building communities around this. A good example of this is that we didn’t have a Discord early on. We just hadn’t thought of it. I think we had a public Slack or something. And it wasn’t working, and we didn’t pay attention to it, and nobody joined it. And then what happened was someone tipped us off that a Battlesnake Discord had been started. And we joined, and there were like a hundred people in it, and they were talking about Battlesnake all the time. They just did it, they just self-organized, and they just did it.

Another good example — this one is more technically specific — is that we didn’t have a CLI. There was no Battlesnake CLI to run games or run your own commands in a shell. And someone just built it and showed up on Discord one day and said, “Hey, I built the CLI.” And suddenly, hundreds of other developers started using it. We had no plans to build a CLI. We didn’t intend for that to happen. We didn’t think that was a thing we needed to work on, but the community just did it. And so now it’s like, do we fight that, or do we encourage that and engage with that? And obviously, the answer is the latter. And that’s surprising, but we’re working with makers, we’re working with builders, and it’s going to happen. And there are a lot of lessons we’ve learned around that — rather than show a roadmap of where we want to go, or build a feature and decide this is the thing that needs to exist, we recognize our community’s going to build for Battlesnake whether we like it or not. So how do we encourage that? How do we leverage that and use that going forward? And we’ve got a good sense of that now, but we’re still very much learning how to treat the community like a living organism as we grow.

Maria: That’s incredible. It’s almost like you built this base platform and product, but then your community decided the product roadmap versus you as the founder, which I think is often a big difference between consumer and enterprise companies — you have a bigger focus on listening to your community and then building the product with them. And so that leads me to your focus on building in public. So Battlesnake is great at building in public. What does open-source development contribute to the growth of Battlesnake? And how do you think about the focus on open source?

Brad: Open source is really important to us. We made a very conscious decision early on to do as much in public as we possibly can. The entire game engine is open source. The visualizations we show during streams and that you’re seeing during competitions are all open source so that people can extend those. And also, we provide a lot of onboarding content through open source. For example, if someone wants to learn Rust or AWS SageMaker, we provide some code to get them started. And we can provide a lot of jumping-off points. But it initially served two purposes for us. One was that the game itself is technical, and the mechanics are interesting, so we had a lot of community members that had interesting and great questions around how the core game mechanics worked. Like, why did I lose in this particular situation? Or how does turn resolution work in this case? Or I thought of this edge case; what happens here? And so, we found ourselves fielding these questions regularly and thought, what if we just open sourced the entire game engine? And then the conversation becomes not “here’s our justification for it behaving this way,” but “here’s the code that runs your games. Here’s the code that runs your turn resolution; go check it out and explore it.” And that became part of the early onboarding experience for most developers. It’s not just playing the game; it’s exploring how the engine works. It’s exploring how it’s deployed. It’s exploring how timeouts are calculated. It’s exploring how turn resolution happens. And you’re doing that through open-source projects on GitHub. You’re browsing the code. You’re compiling the code. You’re making changes. The other major way that open source has given us an advantage is obviously contributions back. And I want to be very clear because it’s interesting how this has evolved. It’s not fixing bugs. We’re not saying, “Oh, can you please fix this bug?” or “Can you improve the documentation on this repo?” It really is: how can we extend the game to make it more interesting? How can we do more things with Battlesnake? My earlier CLI example is a good illustration of this, right? Because everything is open source, someone was able to develop this whole other way to engage with Battlesnake through a terminal that didn’t exist before. And that was because the game engine was open source. And now we’re starting to see developers build modifications and extensions and their own game maps on top of the engine itself. We see open-source contributions as the pinnacle of our power users being engaged. They’re going to get involved. They’re going to have ideas. If we’re doing this right, they’re going to want to give back to the community. And they’re starting to do that now through open source specifically. So that’s something we’ve really had to figure out how to lean into and encourage more. And there’s a lot of stuff we can continue to do there as well.
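As a rough illustration of the kind of engine internals Brad says players dig into once the code is open, here is a hypothetical, heavily simplified sketch of turn resolution for a snake-style game, written in Python. It is not the actual open-source Battlesnake engine; the rules, names, and omissions (for example, head-to-head collisions) are assumptions made purely for illustration.

# Hypothetical, heavily simplified turn resolution for a snake-style game.
# This is NOT the real Battlesnake engine; rules and names are illustrative only.

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def resolve_turn(snakes, food, width, height):
    """Apply one turn: move every snake, handle food, then eliminate losers."""
    # 1. Advance each snake's head in its chosen direction; drop the tail
    #    unless it just ate, so that eating makes the snake grow.
    for s in snakes:
        dx, dy = MOVES[s["move"]]
        head = (s["body"][0][0] + dx, s["body"][0][1] + dy)
        s["body"].insert(0, head)
        if head in food:
            food.remove(head)   # grew by one segment this turn
        else:
            s["body"].pop()     # normal move: tail follows the head

    # 2. Eliminate snakes that left the board or ran into a snake body.
    #    (Head-to-head collisions are deliberately omitted in this sketch.)
    occupied = {cell for s in snakes for cell in s["body"][1:]}
    survivors = []
    for s in snakes:
        x, y = s["body"][0]
        out_of_bounds = not (0 <= x < width and 0 <= y < height)
        collided = (x, y) in occupied
        if not (out_of_bounds or collided):
            survivors.append(s)
    return survivors, food

Even a toy version like this shows why open-sourcing the engine answers the community questions Brad mentions (“why did I lose here?”, “how is this edge case resolved?”) by pointing at a few lines of code rather than writing prose justifications.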

Maria: And as developers use the open-source opportunity to build game engines on top of Battlesnake, how does that impact your product roadmap, and where do you see Battlesnake evolving in the future?

Brad: We’ve learned that our role is not necessarily to build for the players. Our role is to build for the developers and to encourage them to contribute and be engaged. Our players want that. Our players want to give back. They want to build modifications; they want to build extensions. And so rather than us saying, okay, this game mechanic needs to exist, it’s let’s build an engine for players to create and release their own game mechanics, right? That’s the mental shift that’s gone on, and that’s the impact that open-source development and the community have had on how we think about building the game and building the product. And I’m not sure of many other products or games that are built that way — where our core goal as a dev team, especially long term, is enablement. Rather than “what can I deploy today that’s cool,” it’s “what can I deploy today that will let someone else make something cool six months from now.” And that requires a very concerted effort to make happen. But that’s where we’re at right now. And I think that’s incredibly compelling.

Maria: You’ve been so great as a founder about listening to your community of developers that are participating in Battlesnake. But you have a whole other customer that we haven’t even talked about yet, which is your partnerships with companies like New Relic and AWS, who partner with you to engage the developers at their companies but also to recruit developers into those companies. How do you think about these partnerships and satisfying those customers as well?

Brad: I think that right now, our core focus as a company, as a team, is our player base. It’s experienced developers. It’s getting them engaged. It’s keeping them engaged. It’s doing things that are fun and interesting — things that allow them to include their friends and their family, and their colleagues. On the partner side, we’re specifically working with large brands that understand that — that understand what developer marketing looks like, what developer relations look like, and what it means to recruit long term. If you’re DigitalOcean, you might be trying to make some hires this month, but really what DigitalOcean is trying to do is increase platform usage, increase its customer base, and make hires for the next three years. And that’s a much different play than, say, traditional recruiting, so we work with them because they understand our longer-term vision. They understand how we’re able to attract and retain players and more experienced developers. And they’re willing to work with us to figure out how to make that scale. One of the challenges, when you’re working with developers specifically, is you have to think long term. This works for recruiting but also for product usage. If you want someone to adopt your technology, they’re not going to show up one day and then just start using your API. You have to think long term — it has to be, you know, a 12-month onboarding experience or more — and you also have to be authentic. Developers are incredibly good at seeing through marketing tactics. That doesn’t mean you can’t market to them. It just means you have to be really authentic in how you do it. We’re able to work with a handful of really great tech brands that understand the long-term value of that and are willing to work with us as we build the developer-facing side of this game.

Maria: So, if we think about this category, you’re building at the intersection of a lot of different spaces. You’re building at the intersection of ed tech, in that there’s an education component to it; gaming and eSports, because of the community around it and the way you engage Twitch creators; and you’re also building at the intersection of dev tools, and you’re defining this new category. It’s a dev recruiting tool for companies like DigitalOcean, but at the same time, you’re helping developers explore new cloud providers and coding languages that they haven’t explored before. And you’re also targeting not just new developers but experienced developers. How do you think about this category and how you’re defining it, and how this kind of niche between these different spaces will continue to evolve?

Brad: I think we’re very early in this space. And my best go-to example of this is that Twitch just created a software development category this year. And there are thousands of folks that are live coding and building games and interacting with developer audiences on Twitch, but it’s just starting to get large enough now that folks are starting to notice. In terms of how we think about this intersection, we think about it from a bunch of different angles. We think about it from the eSports angle, from the traditional sports angle as well, but also from the gaming angle. Something that’s core to our mission and beliefs as we approach this space is making something that is innately incredibly technical and challenging but also incredibly accessible to most people. And I don’t know if that comes through, but an example is, say you’re watching competitive StarCraft, StarCraft being a very large eSport. Most folks watching a StarCraft game, especially one played at a very high level, have absolutely no idea what’s going on. You’re not going to casually watch competitive StarCraft. You have to be a player. You have to understand the strategy that happens at a high level, and you have to be available at that time in order to view it. And I think that needs to change. Gaming is figuring out different ways to do that by adding different spectator modes and different on-ramps to different things.

But if you look at more traditional sports — most people can watch professional basketball. And they might not understand the strategy that’s going on, but they can go home, or they can go outside, and play basketball. They can try it, and they can understand what’s going on. And it gives them a sense of, when that shot is made, or that play is made, that was really hard to do. And that was impressive. And that was fun to watch as a result. And what we’re trying to do is ask: how do we bring that spectator angle? How do we bring that accessibility to something that is more developer-focused or more gaming- and eSports-focused? And this really leans into the game itself. The way that we’re building the game is universally recognizable to most people. I can spend time building an incredibly complex, incredibly competitive Battlesnake, and then when I go to play competitively, my parents can watch, and they can understand it’s just a game of snake — my options are to move around the board and try to outmaneuver my opponents. Or my kids can watch with me, and they can cheer, and they can understand what’s going on. If I’m live coding or if I’m playing StarCraft at a high level, no one’s watching me. I’m not bringing anyone along for the ride on that.

I think that the sort of intersection of the accessibility of traditional sports and the deep complexity of eSports — we work hard to combine those in an interesting way. And I think that’s where this industry is going for sure.

Maria: Let’s dive further into this focus on accessibility. How did you arrive at your creative vision in terms of the visual parts of Battlesnake and this concept of using a snake? People know “battle” and “game,” and snakes are understandable to any user. How do you think about the visual components — what the snake should look like, what the user experience should look like — that make Battlesnake so fun, visually appealing, and accessible at the same time?

Brad: With part of the choices around snake, especially the visual aspect of snake, we got lucky early on. We didn’t set out to build competitive multiplayer snake — that wasn’t the goal. The goal was to do something for developers that was fun. And the core game mechanic was just chosen at random. And then we learned that it was universally recognizable. The way that we learned that was — like I mentioned, early on, Battlesnake started as this recruiting event. But people started to bring their kids. We have photos early on of 2,500 people watching this live Shoutcast Battlesnake game, and there are kids in the crowd that are cheering, and there are parents that are standing in the back, and they’re clapping, and they don’t understand how these things are built, but they understand that someone is doing well. And that’s impressive because they’re watching a game they’ve played before being played at a much higher level than they’re able to play it. And that really tipped the scale for us. You know, we sort of have a third user that we pay attention to, which is the spectator. Someone who is not necessarily going to build a snake, or isn’t interested, or isn’t capable of building an incredibly competitive snake, but still has a lot of fun watching and wants to follow the storylines and wants to cheer for their favorite.

I guess the other angle of that is we look to more traditional sportscasts, and we also bring an element of analysis and Shoutcasting and live entertainment that you wouldn’t necessarily see at a developer event. And in the way that we approach speaking about the game — at a very technical level and at a high level — we try to be very accessible. Our target audience is roughly a 16-year-old who might be watching and might be interested in what’s happening. And we make a very focused effort to make this more interesting and to tell storylines that overarch the tech itself.

Maria: How do you think about converting that spectator into someone that ends up coding in Battlesnake?

Brad: Oh, this is awesome. And this is happening already. I’ll go back to the basketball example. So, let’s say you’ve never heard of basketball. You’ve never watched the game. You watch a game for the first time, and then your friends are like, “Hey, you want to go try that?” It’s incredibly accessible for you to just go play. How do we enable that for programming? Like, how do we get to a point where you can watch this game? You can understand that someone is doing well. You can understand what it means to have an interesting strategy. How do we capture that interest and be like, try it, go home and try it, just try it right now and see what we can do? And like enter a beginner league or enter a beginner ladder and see how well you do. Our goal isn’t to build you into a competitive Battlesnake player. Our goal is to introduce you to programming — introduce you to the core concepts of what’s happening. And again, sort of marrying these two concepts of like traditional sports has this incredible accessibility to it that programming typically doesn’t, and so how do we bring those two things together?

Maria: I love the sports analogy because Battlesnake captures two things that are more native to traditional sports like basketball than to coding: it’s accessible — you can watch it, and you can then try it. In Battlesnake, someone could watch and then decide to try learning to code and code their own snake. But you could also become a professional basketball player and play at a very high level — and Battlesnake is also great and appealing for someone who is an experienced developer but wants to continue to improve their skills. And so, there’s this broad range of users and ways that people can engage with Battlesnake.

Brad: Yeah. And I think it’s important that we acknowledge and serve both of those personas as we build. Our focus right now is on experienced developers. Like, how do we get highly complex, strategically interesting games that we can watch and analyze? Because that gives us content to show on Twitch, to have on YouTube, to do live casts around, to do live analysis around — that can then expose many more folks down the road to what we’re doing and get them involved in the game itself. But it very much is — let’s tell the stories of these programmers and make it really compelling for anyone to watch.

Maria: So, you’ve built these different levels within Battlesnake, but you’ve also mentioned potentially another game is on the horizon. So, what are you most excited about in Battlesnake’s future?

Brad: I think the thing that we’re most excited about is the community contributions back to the core engine. Going back to what we were speaking about before and our primary avenue for growth, we see the pinnacle of power-user engagement as actually building your own game or game mode or game engine on top of Battlesnake itself. Snake is one implementation of what we built, but what else could we do with it? I see our role in that as not necessarily becoming a game studio and saying, here’s the next iteration of Battlesnake. It’s more like, how do we enable the community to do interesting things and build their own renditions and build their own modifications and build their own communities around that? We look a lot to Minecraft. We look a lot to Roblox and these sorts of larger, contribution-based communities. When I was growing up, I was heavily involved in the early Counter-Strike scene, and I produced mods and got communities built and working behind those. I love that natural extension of gaming in general — you know, great, we’ve built a community; rather than let’s go build a second community, how can we inspire folks to start their own communities?

How do we show core values around accessibility? How do we show core values around learning and collaboration and open source? How do we push that out through the communities rather than try to have a heavy hand and just dictate from the top? Let’s build a series of sub-communities and enable developers to do their own things and spread these values organically.

Maria: That leads me to the next question, which is, as a former founder — going into building Battlesnake, continuing to build Battlesnake, and seeing where the future may go — what has inspired you, and what is guiding where you want the product to evolve in the future?

Brad: We look to a couple of different places for inspiration. But the largest one, above and beyond, is the community. It’s listening to their ideas. Being active, being engaged, giving feedback, working on enablement rather than features. We also look to traditional sports. We look to things like basketball, or Formula 1 is another place that we look to regularly. Also, we look at other communities that are doing this in a more niche sense. Kaggle is an example of a very AI-centric, very ML-centric, very monetarily driven developer community. But I think they’re doing a lot of interesting things in terms of community growth. And we look at what they do and ask, how do we do that bigger? How do we do that at a much broader scale across a wider spectrum of technologies? We also look at products like DeepRacer. They have a very shoutcast-style, live-analysis, live-show angle to them. And then we also look at eSports — League of Legends and Overwatch League and MTGA (Magic: The Gathering Arena) and all that sort of stuff.

Maria: We have other early-stage founders that listen to this podcast for inspiration. So as a founder of an early-stage company, what do you wish you knew sooner or had implemented at an earlier date?

Brad: Oh, geez. We should have made our own Discord a lot sooner. We should have started streaming on Twitch a lot sooner. I talked about this before, but I cannot emphasize this enough. It’s not about live streaming. It’s not about content production. It’s about live interaction with your users — in our case, our players. Finding interesting ways to engage with them live and put a face behind it. Like it’s way different to have, you know, me write a blog post about where I think we’re headed versus see me answer questions live in real time on stream and be like, “Oh, that’s Brad” or “That’s the other member of the Battlesnake team. And this is what they think about this. And they’re a real person, and they’re incredibly accessible.” I say Twitch, but it doesn’t have to be Twitch. Obviously, go where your players are, or your users are. But I think live engagement is incredibly underutilized and undervalued by most early-stage companies, especially with developers.

Maria: Yeah. And I think that can transfer to any consumer company — the importance of interacting with your users, like face to face, as early as possible, and seeing where they are, learning from them, connecting with them. But also this focus on — maybe it’s not creating a Discord channel right away, but just testing the different platforms where users may be engaging and getting involved, creating a channel, and seeing how people interact within it. Is it a place that you should buckle down and prioritize, or is another space, like a Slack channel, better for your customer?

Brad: I think that’s a really good insight and a good clarification. It’s not about creating a Discord channel and being like, “All engagement will now happen on Discord. Please do so in real time with your real name attached, such that we can have an engaged community.” It’s about finding natural feedback channels, natural congregation points for your community, especially in consumer-facing spaces, obviously. And also being open to the solutions being things you haven’t thought of. Again, using Discord as a very small or specific example of that. We had a Slack, and it turns out more people were talking about Battlesnake on Discord behind our backs than we realized. We could’ve just said, “Please come to Slack.” But instead it was, okay, how do we make this work? How do we adopt this? How do we engage with it? We still don’t own the Discord server. It’s some community member that admins it and monitors it.

Maria: And so, speaking to that community engagement, as we wrap up — one of my favorite things about your email updates to investors is that you always have these community highlights: someone talked about Battlesnake during their job interview with Google, or someone got a job through participating in a Battlesnake tournament. Do you have any favorite community stories you want to highlight?

Brad: Oh yeah. Oh, so many. Okay. So, I think this happened earlier this year in the competitive spring play. We had — I’m not going to say their name because I wouldn’t want to draw unwanted attention to this particular player — but they had done incredibly well. It was their first-ever competition, and they were just figuring things out, and they’d realized a unique strategy. And their partner baked them a Battlesnake-shaped cake. The cake was in their customized Battlesnake color, with their head and their tail. And there were little cupcakes that were the food on the grid. And they had this viewing party to celebrate the success that this one developer had in this competition. And again, that speaks to the accessibility of it and the fun behind it and that third spectator persona. But when you see that kind of thing, it just blows our minds. That is awesome. How do we do more of that? How do we encourage more of that? We want more of that to happen. That’s incredible. I could go on with community examples for hours.

Maria: That’s an incredible example of just how much your users love you — to the point that they’re baking a cake around it, and they’re getting their friends and family involved, and it’s this significant part of their lives. And people are making friends through it, getting jobs as a result of Battlesnake, and developing new skills. There are so many great outcomes. Brad, this has been so awesome. As we wrap up, where can people find Battlesnake, and how can they get involved in a tournament?

Brad: Everything you need to know is at Play.Battlesnake.com. You can also get involved on Twitter. You can get involved on GitHub. You can join our Discord channel, and multiple people will be happy to help you and answer any questions you might have. If you are a developer looking to get started, check it out. If you’re interested in just watching some Battlesnake games and seeing what this is about, our YouTube channel and our Twitch channel have a lot of great videos of what competitive Battlesnake looks like and the fun we have during the live tournaments.

Maria: Awesome. Thank you so much for being here today.

Brad: Yeah. Thank you for having me.

Coral: Thank you for listening to Founded & Funded. Like Brad said, if you’re interested in learning more, visit play.battlesnake.com. Thank you again for listening, and tune in for our next episode in a couple of weeks with Stack Co-founder Will Rush.

Tesorio Founders Carlos Vega and Fabio Fleitas on Automating Accounts Receivable


In this IA40 spotlight episode, Managing Director Hope Cochran talks with Tesorio Co-founders Carlos Vega and Fabio Fleitas about getting finance teams out of the world of manual, error-prone and inefficient spreadsheets and equipping them with intelligent applications more common in other parts of a company. That’s where Tesorio comes in — they offer automation solutions designed to help companies manage accounts receivable — in other words, they help companies turn their revenue into cash. The platform replaces tedious and manual collections processes with accurate, real-time predictions, optimized workflows, and actionable insights based on behavioral trends. Tesorio raised a $17M Series B in July to expand its go-to-market efforts and last year landed on Madrona’s list of the country’s top 40 intelligent applications, a list we launched to recognize companies applying machine learning to solve business problems better than ever before. We think these intelligent applications will define the next generation of innovation and shift the SaaS technology landscape into application intelligence.

In this episode, Hope, Carlos, and Fabio dive into where the idea for Tesorio came from, how the company pivoted from the original idea, how these two met, the hard conversations any co-founders should have before deciding to launch a company together, and so much more.

This transcript was automatically generated and edited for clarity.

Hope: Hello, everyone. I am Hope Cochran, and I’m a Managing Director at Madrona Venture Group. I am excited today to be with Carlos Vega, the CEO and co-founder of Tesorio, and Fabio Fleitas, the co-founder and CTO of Tesorio. You know, when I think back to when I met these two fun friends, I was excited by the fact that they were all about solving the problem of forecasting cash flow. I am a recovering CFO, for those of you that don’t know me — I have spent many, many hours of my life in spreadsheets trying to forecast cash flow, often getting it wrong, and it was often an area where there were a lot of errors. There are so many pieces that go into forecasting cash flow, whether it be the AR, the working capital, the AP, that all amount to what your ending cash balance is going to be. So when I met these two, and that was the vision they set out to solve, I immediately identified with the problem, and we were off to the races. So we’ve been on quite a journey together, and I just welcome you two to this podcast today. Thank you, Carlos and Fabio.

Carlos: Yeah, thank you so much, Hope. I’m really excited to be here.

Fabio: Thank you.

Hope: I know the moment I met you all is when I got to be a part of your journey, but let’s start with where you came up with the idea for Tesorio and solving this problem.

Carlos: Yeah, thanks. This is Carlos — thanks so much, Hope. Before starting Tesorio, I spent about a decade working in finance. Most recently, I spent a couple of years at Lazard doing investment banking in Latin America, and while I was there, I actually co-founded a factoring business. Factoring, for those who are not aware, is the purchasing of receivables at a discount so that people can get their cash flow now instead of waiting to be paid. You know, what we were trying to do is say, “Hey, look, let us help you with your cash flow and model it out, and then wean you off a factor and get you a proper line of credit at a bank.” But folks weren’t really biting. The way it ended up feeling is almost like payday lending for business — we were providing those discounts, providing people the money up front, but they weren’t really taking advantage of a better way of doing things.
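As a quick worked illustration of the factoring trade-off Carlos describes — getting cash now in exchange for a discount on the receivable — here is a tiny Python sketch. The invoice amount and discount rate are hypothetical numbers chosen only to show the arithmetic.

# Hypothetical factoring example: sell a receivable at a discount to get cash today.
invoice_amount = 100_000   # the customer owes this, say, 60 days from now
discount_rate = 0.03       # the factor buys the invoice at a 3% discount

cash_today = invoice_amount * (1 - discount_rate)
cost_of_liquidity = invoice_amount - cash_today

print(cash_today)          # 97000.0 received immediately
print(cost_of_liquidity)   # 3000.0 given up for getting paid early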

When I went to business school, I knew there had to be a better way, and that’s what I was out to solve. So, we started with accounts receivable because, given that experience with factoring and other things along the way, what we realized was that what people really needed around cash flow was predictability. And the most volatile part of cash flow that you don’t control is your cash inflows, because any money going out the door, you have agency over, right? You can choose not to mail a check. You can choose to slow down your hiring. But you can’t really force someone to pay you. And one of our customers said it best — Steven Odell, who was at Slack at the time, said, “Revenue’s not real until you get paid.” So, when we start thinking about that, it’s like, all right, what does that mean? At the end of the day, it means that you grow the business. You have this revenue, but in order to generate value, you have to have the cash to pay for your expenses, provide returns to your shareholders, and fund future investments. When we started looking at what we could do, we said, look, can we provide software that helps folks manage the day-to-day so that they’re not stuck in spreadsheets? And can we also give them that predictability they need so they can plan for future growth? At the end of the day, you know, that’s what we have here with Tesorio. Funding growth out of cash flow is an important problem for every business. And it was a problem we discovered a lot of folks had that we could solve.

Hope: I love that concept because, really, it’s so basic and yet so true that revenue is not real until you get paid. Truly, we know that we need the cash in our bank to pay the expenses — the most basic concept out there. And yet, somehow, it’s such a challenge to not only forecast when it’s going to come in, but to know it’s going to come in, to have that certainty. And so, you all do a fabulous job of looking at patterns and ensuring that we can approach those payers at the right time and ensure those payments, and also help with that predictability. So again, “Cash is king” is the mantra here, and how can we help businesses maximize that?

Carlos: Right. And you know what’s a funny part about it? How many times have you gone out with some friends and spotted someone a couple of bucks, and you’re like, “You know what, don’t worry about it.” Because it’s kind of awkward to tell your friend, “Hey, remember I spotted you 20 bucks last time? Can you give me 20 bucks back?” You just forget it, and you go on. It’s an awkward conversation, even personally. And in business, if you think about it, a lot of things happen, and you close a contract, you sign it. That’s perfect. But then going to have that conversation or remind someone, “Hey, remember we closed this contract, and we’ve been doing business for the last three months. You still owe me some money.” It’s not comfortable for any of us. Yet, folks were still doing this in spreadsheets. And what would you do with a BDR that took your entire database and just blasted the same message to everyone? What would you do with a sales team that didn’t even have a CRM, if you’re looking at investing in this company? Well, that’s actually what finance teams are living with. A lot of companies, we’re finding, are sending the same message to everyone. And then they’re not using that information in the right ways, as you were saying, as inputs to forecast cash. Every little bit of information helps: who is trustworthy when they say, “I’m going to pay you by next Friday”? Who says they’re going to do it but then doesn’t pay? Who consistently pays late or has a certain pattern? All these things can be discovered with algorithms. And then you can use that to make your cash flow more predictable, as you’re saying — not only to anticipate when you’re going to get paid but also to know when you should follow up with someone, and do that the right way. It’s kind of interesting — everyone was doing this in spreadsheets when we came along.
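To make the “discovered with algorithms” point concrete, here is a minimal sketch, in Python, of one naive way historical payment behavior can be turned into a predicted payment date for an open invoice. It is only an illustration of the idea Carlos describes, not Tesorio’s actual models; the customers, invoices, and dates are made up.

# Naive illustration: predict when an invoice will be paid from past behavior.
# Not Tesorio's model — customers, invoices, and dates below are hypothetical.
from datetime import date, timedelta
from statistics import mean

# Historical invoices: (customer, due_date, paid_date)
history = [
    ("Acme",   date(2022, 1, 15), date(2022, 1, 25)),  # paid 10 days late
    ("Acme",   date(2022, 3, 1),  date(2022, 3, 9)),   # paid 8 days late
    ("Globex", date(2022, 2, 1),  date(2022, 1, 30)),  # paid 2 days early
]

def avg_days_late(customer):
    """Average gap between due date and actual payment for one customer."""
    delays = [(paid - due).days for c, due, paid in history if c == customer]
    return mean(delays) if delays else 0

def predicted_payment_date(customer, due_date):
    """Shift the due date by the customer's historical average delay."""
    return due_date + timedelta(days=round(avg_days_late(customer)))

# An open Acme invoice due April 30 is predicted to be paid about 9 days late.
print(predicted_payment_date("Acme", date(2022, 4, 30)))  # 2022-05-09

A production system would obviously use far richer signals — promised-to-pay dates, seasonality, invoice size — but even this per-customer average captures the “who consistently pays late” pattern Carlos mentions.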

Hope: I can attest to that. Everyone did this in spreadsheets before Tesorio. I did it myself, and I experienced it many times. I think many people are still amazed by how messy and manual the back office is. You made the comment that the BDRs and the sales team have lots of CRM tools or different tools to help them in the sales world, as people are focused on getting the revenue in the door. But then the finance team that’s trying to get the cash in the door is really working with messy workflows and complicated spreadsheets that often break. I often bring up with my friends and peers how messy the back office still is, and they’re always surprised. But I lived it, and I know it, and I have the scars to show for it. And I love the fact that you all are bringing that solution to bear.

One of the things that I want to explore with the two of you, since I’m so fortunate to have both of you on this with me, is when starting a company, finding that partner or co-founder is difficult and challenging and yet so vital. I often say that the co-founder relationship or the founder relationship is harder than marriage. There are so many nuances, your lives are so intertwined. You guys have been at this now for a while, and I’ve really enjoyed working with both of you and your partnership. I’d love to hear how you found each other, what you feel was complementary in skills, etc.

Fabio: So, Carlos and I actually met while we were both in college. I happened to be an undergrad in computer science at the University of Pennsylvania, and Carlos was in business school over at Wharton. We were both independently involved in tech entrepreneurship at Philly / Penn. I had at the time co-founded a fellowship program that got funded by the city of Philadelphia to get students from all around the world to come work at Philly-based tech startups. And so, I was getting really involved with the tech community there. Carlos was running a group at Penn called Founders Club that was helping other like-minded students who were interested in entrepreneurship or founding a company to work with each other. How we ended up meeting was because of a professor. He was starting a brand new class that was focused on tech entrepreneurship, and he kind of recruited Carlos and me independently to get students to join his class. He wanted both business students and computer science students at the time, which was a very unusual mixture in school. And from there, I still remember meeting Carlos in one of the sessions, talking with the professor — he had applied to Wharton with the idea for Tesorio. So, we met there and started working together. Carlos is originally from Panama; I’m originally from Cuba. We’re both native Spanish speakers — we both immigrated to the U.S. as kids. So, we had that connection between us. And then, before actually making it an official co-founder relationship, Carlos and I worked together for about a year, and then spent another nine months or so before getting into Y Combinator. So, for us, it was actually very important to work together first. And Hope, as you said, it’s a little bit like marriage. The time we spent together was basically like dating before we decided, “Hey, we have a good relationship, we can work together long term.” And at that point, obviously, it becomes a very serious thing. And so we had the opportunity to work together before making it official.

Hope: I love that — definitely understanding each other’s working styles and knowing what each other’s strengths and weaknesses are is so important going into this. And just having really honest, and I want to say hard, conversations — I don’t know if you both did that. But I really encourage my founders to have those: What does your life look like? What are the things that work in your life and don’t work? I’m curious. Did you all explore that together so you would know that you had the same values going into starting a company?

Carlos: Yeah, totally. I would say that you’re spot on, Hope. Making sure that you have the value alignment just in life generally is so critical just because when you are starting a company, your core values are so key to the success of the culture, and culture is everything, especially once you get 50 people or so. And that, I’d say, is one of the key things we aligned on. And then the other one — talking about hard conversations, which we had a couple of times. But figuring out how you are going to split the work is something founders sometimes avoid in the early days, but you have to have that conversation. And then also, “How are we going to split the equity?” That’s a really basic conversation that could lead to a lot of awkward situations down the line if you don’t have it early on. To anyone who is getting into a co-founder relationship, you can’t avoid those.

Fabio: To add to that, I love to say that Carlos and I have a really great working relationship. I think we respect one another. We know what we are good at and what we’re bad at. And with that, we can support one another and go through this whole journey of building a company together. And I think, again, having a great co-founder is arguably one of the most important things for the success of a company. And although it is difficult, it is one of the most important early decisions you’ll make. And finding that right person that you can gel with, that you share the right values with, and that you can work with and see a long-term relationship and success with is really important.

Hope: Yeah. You mentioned knowing your strengths and weaknesses. I often find that we’re attracted to people that are similar to us, and yet that’s not what you want as a co-founder. You want someone who’s good at the things you’re bad at, and that’s a little bit of a stretch — sometimes it shouldn’t be your best friend who is exactly like you. Can you guys talk a little bit about some of the ways you complement each other?

Carlos: That’s a great point. Right? And it was one of the reasons, like Fabio mentioned, this professor was putting together people from the business school and the engineering school and the design school. Because most of what was happening at the time on Penn’s campus was, business school students would go try and start something together and engineers would go and start something together. And people weren’t really mixing, and Fabio and I were very fortunate to meet. From a skill perspective, Fabio is an engineer and I’m not. I can’t do what he does. And I’ve got the finance background, and I love product and sales. Where we overlap is that he likes product, but he can actually build product. Then also, just more broadly, it’s interesting from a personality standpoint, it’s good to have a balance, right? Like I’m more passionate, more excitable, you know, I have more highs and lows, and I live with the passion and energy of everything that’s going on. And Fabio’s just even and steady the whole way. And those types of things are really valuable as well. Because it’s good to have both and have that complementarity. And then, from a pure skills perspective, I really admire Fabio’s ability to just work on a process and be extremely organized. He can’t live without his to-do lists. We all joke at the company that if Todoist were to go down, I think Fabio would stop breathing or something because everything goes on a Todoist.

Fabio: It’s pretty true.

Hope: Well, let's jump into what Tesorio does. And I always like to look at companies from the lens of the customers. You guys have a tremendous list of customers — really amazing logos and names. To name a few, we've got Twilio and Slack and Couchbase, Smartsheet, and many, many others. What do your customers tell you they love about Tesorio?

Carlos: Accounts receivable automation is the core of how we started. That's the workflow bit. For those who aren't as familiar with the back office, as you were mentioning, Hope, everyone knows there are big sales teams out there closing a lot of business, but then in the back office in finance, you've got maybe a handful of people. At a company as big as Veeva, you're talking about 15 people. Slack, when they got acquired for $27 billion, only had two or three people in the back office doing this. And what these folks are doing — when I say "doing this" — is following up with your customers to make sure you get paid. So, what customers tell us all the time is, "Hey, your product feels like it was built by a collector — like one of us built it." And that's pretty exciting to see. And then when you look at the stats — our DAU/MAU, the daily active user over monthly active user ratio, is actually almost 20% higher than Slack's. So, if you use Slack a lot, which we all do every day, or some of us use Teams — imagine something that gets used 20% more by the teams that use it. Literally doing your job means using this tool. Again, what people say they love is, "Hey, this feels like it was built by one of us," and then the stats back that up because it's what they need to do their job.

Hope: One of the things I loved when I met you both was the fact that you talk in DAUs over MAUs — how many daily active users you have versus monthly active users. That is not a common KPI for a SaaS company, and yet it shows the engagement and the amount of time that your users are in your application. I think I've also heard you say, Carlos, that you want it to be the coffee mug experience.

Carlos: What we strive for is that this is the first thing you look at in the morning if you're in finance. And for a lot of our customers, I kid you not, it's one of their default tabs in Chrome. So, every time they open up Chrome, Tesorio — next to Salesforce, next to their ERP — is one of the things that opens up. The team started calling it the coffee mug experience because every morning you get your cup of coffee, you sit down, and you look to see what's going on with the business. How much cash do I have? What's going to happen? What's going to come in? What's going to go out? If you can have that in one place, that changes your day, instead of having to look everything up in Excel. And that was really critical. I said earlier that people use the product daily and that they feel it was created by a collector. But that comes from a lesson we had — we pivoted to what we do today. We knew that there was a problem with cash flow. But we pivoted to what we do today, focusing on accounts receivable, because we learned two things. First off, if you want to do something a little bit more strategic, you have to earn the right to do that. By that, I mean originally we started focusing on supply chain financing, given my factoring background. But what we realized is that what people really needed was a solution to their hair-on-fire problems. The strategic day-two problem of how I'm going to finance things takes a back seat if you don't even know when you're going to get paid, how you're going to get paid, or how to follow up with your customers. And so, when we were building Tesorio, I literally worked out of Veeva's office for three months next to their AR team to see how they did their job and what they needed to solve their day-one problems. And then, after that, now we're getting to the place, right, Hope — that's part of the vision — where we can start forecasting your cash flow. Part of the longer-term vision is that once you're managing your cash flow day to day with this tool and forecasting what's going to happen, you can get back to where we started, which is: how do I finance it? How do I do things down the road that are going to allow me to grow the business with a longer-term lens? So, interesting stuff — a journey that sometimes feels like we've come full circle. It's been exciting.

Hope: I love that — you packed a lot of good points in there. Number one, you guys have been maniacally customer focused, and that shows in your NPS scores, which are extraordinarily high for a SaaS company — very impressive. The other thing is, you mentioned, Carlos, how you need to solve the burning issue in order to get in, and then you move up the chain to the broader strategic issue. And I think that burning issue is also where you're getting that strong engagement — the DAU vs. MAU — they're loving your product, and they're using it every day. One of the things that I observe in both of you is that while you are maniacally customer focused, you're also really good at stepping back and not just responding to exactly what one customer is telling you they want — different customizations — but looking across all of your customers and deciding what would be useful for all of them, so you're not just focused on one. Those are hard choices. Can you guys talk a little bit about moments in time where you've had to really focus your product roadmap on things that apply to all versus responding to one request here and there?

Carlos: Yeah, definitely. Gary Wiessinger, who's the head of product at NetSuite, once told me, "You can measure a product manager by how many times they say 'No' more than how many times they say 'Yes.'" And that ratio is really important. That's hard as a founder because you're so passionate about building things, and you want to get your vision out there. And when you see people responding to it, you almost want to say "Yes," "Yes," "Yes," all the time. There have been a couple of things. In the early days, one of our very earliest customers was Spotify, and they were very large. And they wanted a bunch of features that would've brought us toward almost building a treasury management system. One of the key things for us from the beginning — again, a learning from having pivoted and from the journey — was that we simplified it as "Insights to action." One of the core things we're trying to do from an intelligent applications perspective is tie the insights that you get to the actions that you actually take every single day. So, building a TMS, or treasury management system, would've required a whole bunch of work that's almost purely data aggregation: let me pull data from all of these different bank accounts, put it in one place, and then forecast out my cash flow. But what was really important for us was aggregating the transactional data before it happens. That means going to the finance systems of record — your ERPs, your CRM, your procure-to-pay tools — where you can see that cash is going to flow, and it's going to be because this customer's going to pay me or I'm going to pay this bill. And so, for us, building integrations to those systems was more important than building a bunch of integrations to banks and FX platforms and that sort of thing. And so that was one where it was really hard, as you can imagine, to walk away from our second customer — who we flew out to Sweden to spend time with, and we're still really close with their whole team to this day — and say, "We're not going to do this," because customer number two might want this, but customer number one and then 3, 4, 5, 6, 7, 8, 9, 10 don't, and it doesn't align with our vision. I still think back to how painful it was. True story — we were curling with the treasurer of Spotify in Sweden the day they went public. We were very close with them. But to say no to them was really difficult, and it's hard. Similar to your point about hard conversations between founders, there are just a lot of hard choices you have to make along the journey. That's probably one of the toughest — sticking to your vision in the right ways.

Fabio: And to add on to that, another thing we had to do alongside making those tough product decisions was build the product in a way that would allow configurability within it. One core product philosophy we have is configuration over customization, especially when we're dealing with the types of large businesses that we're working with. Being able to support them and their slightly different workflows and slightly different use cases in a way that is not just a bunch of customization has been something we had to do from very early on, and it allows us to adapt to these mid- to large-size companies. That was another key decision we had to make pretty early on — that we needed to support this type of configuration and continue to expand that support as we grew.

Hope: Such good examples. And you guys are just reminding me that part of being leaders in a company and founders is just a series of difficult decisions. I do want to come back a little bit to the fact that here we are on the intelligent applications podcast, and what is an intelligent application? It's something that enables a workflow using AI or machine learning and really helps make things in the back office more efficient as well as smarter. That's been something you guys have tackled from early on in your solution. And so, I'd love to hear about how machine learning and AI are incorporated into your offering.

Fabio: Definitely. As we've talked about here, historically, the entire accounts receivable process has been driven by tribal knowledge and spreadsheets within the accounting team. They know, based on personal experience, that a certain customer always pays 10 days late or five days late, or that another customer often promises to pay by a certain day but can't be relied on, while others can. Right? All that information is tribal. It's all just within the accounting team, maybe even just within an individual. So, if someone leaves the company or is on vacation, effectively, that data is lost. What we're able to do is take all that data and actually incorporate it into the product, so it's not just siloed within individual people or individual departments. Our models are able to look across all of the customer's payment history and determine, without human biases, when they will pay and also what they will do. Our models are trained across over 50 data points of anonymized invoice history, covering billions in transactional volume, to further refine and improve that forecasting accuracy. The product is able to learn from all these data inputs, both from a financial perspective and a behavioral perspective. We consolidate that all into our product and — tying back to our integration strategy, which is that our system can connect to various data sources, from ERPs to billing systems, CRMs, emails, and everything like that — we're able to consolidate all of that, bring all of the actionability and insights into one single place, learn from all of that data, and then let users actually take action. Because one of the core things about our product is that it's not just providing insights — it's always tying them back to action: What can they do about it? What can they do if their cash flow isn't the way they expect it to be, or if a customer isn't paying them the way they usually tend to pay? What can they do to affect their workflows and their cash flows? That's something we noticed pretty early on was a problem for our customers, and we went out to solve it.
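To make the idea concrete, here is a minimal, hypothetical sketch of the kind of payment-behavior model Fabio describes — predicting how many days late an invoice will be paid from a customer's history. The column names, data, and model choice are illustrative assumptions, not Tesorio's actual implementation.

```python
# Hypothetical sketch: predict how many days late an invoice will be paid,
# using features derived from a customer's historical payment behavior.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

invoices = pd.DataFrame({
    "amount": [12000, 4500, 30000, 800, 15000, 2200],
    "terms_days": [30, 30, 60, 30, 45, 30],                # net payment terms
    "customer_avg_days_late": [9, 2, 14, 0, 6, 3],         # historical lateness
    "customer_broken_promises": [1, 0, 3, 0, 1, 0],        # promised-to-pay dates missed
    "days_late": [11, 1, 18, 0, 7, 2],                     # target: actual lateness
})

X = invoices.drop(columns="days_late")
y = invoices["days_late"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(model.predict(X_test))  # predicted days late for held-out invoices
```

In practice, a production system would draw on far richer features — the 50-plus data points of anonymized invoice history Fabio mentions — and would retrain as new payment data flows in from the connected ERPs, CRMs, and billing systems.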

Hope: I love this phrase that Carlos talked about earlier, which is "Insights to actions." So often in systems, you're either a system of record — where it's the system where you've got the data, and you're doing the thing — or you've got the analytics of that system of record. Very rarely do you find an application or software that actually allows you to do both — where you can get not only the insights but also take the action there as well. And I think that's really unique to your solution.

Carlos: That's a good observation. Even databases are either transactional or analytical. So, from a technology standpoint, that's been one of the fun challenges — figuring out how we have both the analytical aspect and the transactional, day-to-day aspect in our tech stack.

And then, interestingly, just before we move off the subject — if I were a founder listening here, a little piece of advice I'd share is that when you're building intelligent applications, there's no blanket way to do it. I think it's really important to have the context of the culture of the customer base you're planning to address and the cost of being wrong. That's something that's really interesting. For us, we're proud of our analytical skills, right? We're finance people. Fabio jokes that in every single meeting — I think in 95% of meetings — I'll open up Excel and start doing something, right? And people joke, but it's the way we like it. So, if we're building an intelligent application for people who like analytics, you have to take that into account, and you can't get cash wrong. So, for us, when we think about machine learning, AI, and all of that, we think about it, in our world at least, more as the supporting actor. The lead, the hero, is the user. And so, for us, that leads us to think: more than "here is your cash flow forecast" or "here is how much we're going to collect," it's "how do we pre-process the data so that you can go and make a decision?" The really key thing is making sure that we expose enough levers for the customer so that they have ownership of the output.

And that's something we even got wrong at one point. We thought we should forecast out the cash flow using machine learning, but it felt too black box. That was a really key learning for us — having that context, because the culture of your customer base and the cost of being wrong are going to influence how you apply machine learning and AI tools.

Hope: So, you took the next question right out of my mouth, which is great. That was exactly where I was going to go. Us finance people like to think we're good at forecasting, and we like to be able to explain our forecasting — what the levers are and what's impacting it. With machine learning, the truth is it's probably more accurate than us, and yet we don't really want to admit it, and we want to be able to explain it. We don't want it to be a black box. So, you do have a hard customer to sell this type of technology to. And you and I have had that conversation, which is why we both led to this topic, and you've really addressed that well with your customer base by giving them more transparency and allowing them to feel like they are the heroes. I love that phrasing.

Carlos: Yeah. It’s interesting. When you look at cash in the bank, we joke that if it already hit your bank ledger, cash already flowed. So, that’s why we say we have to get to the things before it leaves or comes in the door. And that’s why all those integrations we talked about earlier are so important. And that’s what allows you to give them the ownership, right? If they’re addressing the cash before it goes out the door and they make the decision on how it’s going to go out and how it’s going to come in, then they can own the forecast and have an influence on it.

Hope: I want to turn a little bit to the macro environment. I know that we’re all watching the capital markets and the macro environment and how it impacts us, whether it be the funding environment or how does it impact our revenue line and our customers. And so, as you think about your solution set, has it become more important to your customers? Is it something they’re cutting back on? How do you think this period of time is impacting the demand for what you’re offering?

Carlos: I guess I'd summarize it as cautiously optimistic, because I think we're seeing some tailwinds rather than headwinds. Which, if you think about this from a purely analytical perspective, makes sense. With inflation the way it is, every 90 days is more than 2% of devaluation. And where does cash typically get trapped on a balance sheet? As we talked about before, you control when cash leaves the door; you don't control when it comes in. Receivables is actually one of the biggest places where cash gets trapped.

And so, every 90 days your receivables are late, that's cash you didn't put to work that just devalued by 2% in real terms. On top of that, it's a double whammy with the cost of capital going up. As interest rates go up and the cost of capital goes up, your hurdle rate — the rate of return your investors are looking for — goes up as well. So that cash you're not putting to work: if you're not getting it in the door faster and deploying it, the opportunity cost of that is also going up. And then the market in general, as we all know, is no longer favoring growth at all costs — they want sustainable growth. For a long time there, it was so easy to go get cash from other sources, and now that's not quite the case. So, what we're seeing is that this is more of a painkiller than a vitamin in a lot of ways, and folks also need solutions fast. That's been a really critical thing, at least for us — back to the theme of intelligent applications — how can you get these products deployed quickly? Because an intelligent application that's going to take a year or two to deploy might have great ROI someday in some spreadsheet, but it's not going to help me achieve my goals now. And so, a solution like ours that can get in there quickly — on average, 30 days to get live and 60 days to show ROI — that's something folks are really valuing today. Folks need these solutions before their next earnings call, not by next year.
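A back-of-the-envelope version of the math Carlos is walking through — the real-terms loss plus the opportunity cost of late receivables — might look like this. The dollar amount and hurdle rate are illustrative assumptions, not Tesorio figures; only the roughly 2%-per-90-days inflation figure comes from Carlos's example.

```python
# Illustrative math: what a quarter of late receivables can cost.
receivable = 1_000_000          # cash stuck in late receivables (hypothetical)
quarterly_inflation = 0.02      # ~2% devaluation every 90 days, per Carlos's example
annual_hurdle_rate = 0.12       # return investors expect on deployed cash (assumed)

purchasing_power_lost = receivable * quarterly_inflation           # $20,000 per quarter
opportunity_cost = receivable * annual_hurdle_rate * (90 / 365)    # roughly $29,600 per quarter

print(purchasing_power_lost, round(opportunity_cost))
```

The point of the sketch is the double whammy: the same trapped dollars lose purchasing power to inflation and forgo the return they would have earned if collected and deployed sooner.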

Hope: Yeah, I think we go back to something we said earlier in our conversation, which is cash is king. And when the economy gets tight and tough, cash becomes even more important. And therefore, the solution that you're offering is paramount. I want to step back a little bit and talk about broader technologies in this space. As you think about this space, what is a company or a startup or an interesting technology out there that you two are excited about?

Carlos: I'll share one that changed my day-to-day. I think Clockwise is really cool because you just plug it in, and it automatically starts allotting focus time. And you set rules around meetings and time between meetings and that sort of thing. And it changed my life from just feeling exhausted at the end of a day of back-to-back-to-back Zoom meetings, because we're a distributed company, to actually having breathing room in between to get in the work that I need to get done.

Fabio, I know we’ve talked about another one. I’ll let you take the next one because it’s definitely more of a tech nerd out.

Fabio: Yeah. Yeah. Obviously, a lot of the IA companies out there are very CTO friendly, and the one I admire and think is doing really cool work is Fivetran. Every company, as we already know, has a problem with getting data into a single place — it's ETL, and it's especially hard if you don't have engineering resources. I found Fivetran is a really cool product that allows companies to actually connect all of their data streams and consolidate them into one place. And why I also think this company is cool is because it's a little bit of a reflection of our product as well. We consolidate data from all these different financial, CRM, billing, and email data sources for our customers to use, and Fivetran is doing something similar for their customers so that they can get their internal data together. It's a really cool piece of tech and a pretty exciting company.

Hope: I love it. So those are existing companies — as you think five years out, what are some technologies coming down the pike that you really think will disrupt how we do work today?

Fabio: You know, on my end, augmented reality, I think, is one of the biggest sources of the kind of technological disruption and innovation that’s going to come up soon. I’ve always felt that the same way smartphones changed the way we live and work, augmented reality is going to be that next big leap that’s going to really change the way we work and live.

Hope: So, we’ll end on lessons learned. And I know I certainly have a lot of them in my past history. But as we indicated before, a clear observation is that being co-founders, being CEO and leaders of a company is just hard. And I think it requires more persistence than anyone ever realizes — that ability to climb over any wall that’s put in front of you. I’d love if either one of you would be vulnerable enough to share some lessons learned with other founders and words of wisdom.

Carlos: Yeah, I think for me, the most important lesson, just looking back, is realizing that you're never done. As a founder, you feel this extreme urgency. In the early days, someone once told me, if you're asking yourself whether you're moving fast enough, you're not. But at the sacrifice of what, right? Before being founders, we're people. We've got families, or we're in relationships. And for a very long time — most of the early parts of the journey, or until recently — I kind of worked like there was no tomorrow, working every hour of the day and at all times of night and giving up sleep and all of that. And then, eventually, it clicked. You never really de-risk the company; you just attain new levels of relatively higher stakes. So, even though you think you're going to do this one thing and then be all good and be done — I'm not going to go to the extreme of saying that you're ever going to have good work-life balance as a founder; I don't think that's part of the job description at all — realizing that there's always going to be more to keep doing tomorrow is very critical. So, at the end of the day, I guess what I'm getting at is: take care of your health, take care of your personal relationships, and remember them, because it's a long journey.

Hope: Yeah, it's a marathon, not a sprint. And I think people go into startups thinking it's going to be this sprint-like event. When really, when you look at the companies that have gotten to scale, it takes 10 years. It truly is a long road and a marathon. And you have to manage that in a healthy way. Fabio?

Fabio: The most important lesson I've learned throughout this entire startup journey — maybe it sounds a bit cheesy, to be honest, but I think it's very true — is that the people you work with matter the most. It obviously starts with your co-founder and the relationship you have, but it quickly grows. It grows to everyone in the company, the board, the VCs you're working with, everyone. And who you work with, I feel, has one of the biggest impacts on the success and outcomes of your startup. I believe that finding the right set of people — the early employees, your early investors, and your co-founder — will have such a large impact on how things go. And I have truly felt very fortunate to have this opportunity to work with such brilliant people. It's ultimately what excites me most about the entire startup journey.

Hope: Those are great words to live by and great words to take to heart. Carlos and Fabio, thanks for spending this time with me today. I always enjoy it when I get to have a conversation with both of you, and this was right up there, so thank you so much.

Fabio: Thank you.

Carlos: Thank you, Hope.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you'd like to learn more about Tesorio, they can be found at Tesorio.com — that's T-E-S-O-R-I-O.com. To learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded and Funded with Battlesnake Founder Brad Van Vugt. We'll be spotlighting another IA40 winner next month.

Cresta Co-founder Zayd Enam on Using AI to Empower People to be More Productive


In this episode of Founded & Funded, Investor Ishani Ummat talks with Zayd Enam, Co-founder and CEO of Cresta AI, one of our 2021 IA40 winners. The two dive into the important topic of how AI can be used to help empower people to be more productive, specifically in the context of call centers — or now more commonly referred to as contact centers — because they include much more than phones.

We hear the story of why Zayd dropped out of his Ph.D. program to pursue launching his own company based on what he explains as the “schlep blindness” of contact centers. He also discusses the unique way he landed his first customer, which happens to be Intuit, one of the largest financial software companies in the country, and the benefits of taking a modular approach versus a full rip and replace of a customer’s entire system.

This episode may make you crave a Costco hot dog, but you’ll have to listen to find out why.

This transcript was automatically generated and edited for clarity.

Ishani: Hi, everyone. I’m delighted to be here with Zayd Enam today, the CEO of Cresta AI. Cresta is a productivity suite for the modern contact center that leverages artificial intelligence to drive a better customer experience in real time. We’ve all struggled with contact center experiences and so we’re really excited to have Zayd here to talk about the unique angle that Cresta takes.

Cresta was selected as a top 40 intelligent application by over 50 judges across 40 venture capital firms in 2021. A quick moment on that. At Madrona, we define intelligent applications as the next generation of applications that harness the power of machine and artificial intelligence to create a continuously improving experience for the end user and solve a business problem better than ever before.

Zayd, we’re super delighted to have you with us today.

Zayd: Awesome. Thanks so much for having me, Ishani.

Ishani: Why don't we start back at the beginning? Cresta was formed out of Stanford, but I believe you dropped out of your Ph.D. program. Tell us a little about the work you were doing and how that led to the founding of Cresta.

Zayd: In the Ph.D., I was working on how artificial intelligence can be used to help empower and augment people and make them more effective in their day-to-day work. Originally, I was looking at applications of artificial intelligence for things like helping graders and teachers grade assignments more effectively and give better feedback to students, based on the most effective feedback and on common mistakes. It was really a thread of a lot of the work that was done in the early '80s at the Stanford AI Lab, where the concept of intelligence augmentation was pioneered by a lot of the folks that started that lab. And it's a continued thread of work in terms of understanding how computers can be bicycles for the mind and extensions of the mind. What ended up happening is I built software for teachers and grading assistants, and I built software for email, and both directions of the project ultimately didn't lead to success. With email, what happened is that six months after I built it and got a bunch of people in the Stanford building using it, Google released their Smart Reply project.

The day Google released the project, like 20 people messaged me that link because they had this big PR announcement. It was clear that Google had a data advantage there in the sense that they have access to billions of emails to train these kinds of models on. So, then I pivoted to graders and teachers, and there, the tricky thing is that a lot of universities and schools aren't ready to adopt software. It's just a slower market to try to get traction in. I ended up working with a bunch of offices in the Bay Area. I'd go sit down with someone and just observe the kind of work they do on a day-to-day basis. The goal was to build small tools to help augment or automate basic things in the workflow. I started working with these sales and support teams, and, just as a grad student, I could sit down with a sales and support team and basically build these systems that would understand the effective way to resolve an inquiry or the effective way to have a conversation and provide these real-time prompts that guide someone through the conversation. At the first company I worked with while I was a grad student, within a few weeks, we were generating $100,000 more of sales per month — that's more than $1 million per year. It was clear that there was an opportunity here from a business perspective. And so, through a whole process, I decided to drop out. Because there's something core to this, where there's a lot of value to be delivered, it turned into this overall vision of using artificial intelligence to help empower people to be more productive. And it felt like a great place to start. And that's ultimately how I got there.

Ishani: So, in many ways you were able to do the super early stages of customer discovery while you were in your Ph.D. program, spending time with different kinds of customers and mitigating a couple of these issues that we see. You know, you and I were talking about GPT-3 earlier — lots of people have good applications of big models like that, that feel like solutions in search of a problem. And the way that you got to spend your time doing this early customer discovery and building smaller tools for them was starting to figure out, okay, where is there a problem where I can really apply a solution that I know how to build, and I can build in a small way and create a wedge. So, a couple of trends there where you need to figure out 1) what’s a real problem to go solve. And then 2) where do I have a unique advantage to go do that.

Zayd: Right. And that's a philosophy of my co-founder and Ph.D. adviser at Stanford — a gentleman named Sebastian Thrun. He started the Google X project and Udacity and won the DARPA Grand Challenge in 2005 with the first self-driving car. His philosophy, and what he teaches in his labs, is basically that in order to go build a self-driving car, you don't sit in the lab and research the future of computer vision — you go out to the Mojave Desert, figure out how to get those kinds of systems to work, and then come back to the lab. And you'll actually realize that you've made some fundamental breakthroughs in the technology and the engineering systems associated with it. That's the same thing that happened with Cresta: working with companies, identifying how we can make this really work for the customer, and then coming back and understanding what fundamental technological breakthroughs happened in making it really work for the customer.

Ishani: It feels related to this concept of customer obsession. Madrona was one of the earliest institutional backers of Amazon. I know you think and talk a lot about customer obsession. Take us along the journey. You leave the lab. You start Cresta. What drove the decision to focus on contact centers? Contact centers are amazing. Notoriously bad experiences, long wait times, low NPS. You’ve shifted from call center to contact center, and that means that you have digital as a new modality and email and all these other components, but still, lots of friction. And everyone interacts with a contact center, right? The vast majority of businesses have them. The vast majority of customers are talking to them and interacting with them in some capacity, but it’s not obvious, necessarily, that augmenting humans who work at contact centers is a business to then go build. So how did you zero in on that as a customer segment? How did you obsess over that? And then talk to us a little bit about that first customer journey you had.

Zayd: I mean, the thing is, they're not particularly sexy, right? So, it's not like a Stanford Ph.D. says, "Hey, I'm going to go focus on contact centers and understand how these technologies apply and drive tremendous value or transformation there." There are a few secular trends happening in the last couple of years where folks are rearchitecting their systems from on-premise to cloud-based systems. And you have folks adopting these multi-channel experiences, whether it's phone, chat, or email — it's a space that's going through rapid transformation right now. NLP and deep learning have transformed so much and are progressing so quickly that what's possible in software now is just dramatically different than what was possible just a few years ago. So, you have a lot of change happening in the space, and you have a fundamentally new solution set with deep learning. There is just an opportunity there. That's the analytical answer. I think the more emotional answer is that Paul Graham has this essay from Y Combinator about schlep blindness. He talks about how, when Patrick and John started Stripe, everyone was building these niche social networks and niche travel websites, and no one was solving the payments problem on the internet because it felt like a lot of schlepping around. To do it, you'd have to go deal with legacy APIs of banks and figure out how to do partnerships with banks and these kinds of things. But every website in the world needs some way to take payments if they're doing business on the internet, and the experience was terrible. And I felt the same way about contact centers. It felt like a lot of schlepping around. You've got to figure out how you integrate into all these systems. It's not exactly the most exciting part of the business for a lot of people. But the thing is, every company in the world has some way to interact with their customers, and the experiences right now are terrible, right? The average employee NPS within a contact center is less than zero, and average customer experiences with contact centers are very poor. So, everyone has something. No one's happy with it. And I had something where I could, within a few weeks, have an impact on a team. Even though it wasn't the dream of the Stanford Ph.D. to go work in the contact center, it felt like a lot of schlep blindness in why people were avoiding the problem. And so, I was really excited to lean in and solve that problem.

Ishani: Yeah. And I mean, by the same token of Paul Graham and the Y Combinator reference, you know, every incubator will tell you to stay away from the enterprise. Right? No one will tell you to go and drop out of your Ph.D., start a company, and then go sell to the big enterprises at the get-go. How did you even begin that early customer journey of getting someone to sign on?

Zayd: Yeah. There's another piece of Y Combinator advice, which is, don't do big deals. But I think one of the most important skill sets is figuring out what advice applies to your situation and what advice does not. That one, I don't think, applies. For Cresta, it makes a lot of sense to start with the enterprise because that's where you have these teams of contact center agents with more than 250 people. And you're able to use artificial intelligence in a way that can capture repetitive patterns across the conversations that multiple people have — that's where you can provide the most value, and it's the largest segment of the market on which to make an impact.

Usually, it's hard to go to the enterprise because they have a bunch of requirements to engage in that business and have an impact there. So, people often start in the mid-market or SMB segments of the market. This is where I was slightly unconventional — I tried cold calling and prospecting to get our first customer, and it wasn't getting much traction. But then I cold-emailed Scott Cook after a presentation, and through him, I got a meeting with the CIO of Intuit. I went down there to present Cresta and the results we had from my Ph.D. work and asked him, "Hey, I'm starting a company in this space, and I'd love to work with you as our first client." He said, "This really ties into the Intuit strategy. This is what we're trying to do from an AI perspective, and this is a really great project, but we can't work with you because you're a one-person company, and we're Intuit, the nation's largest financial software company. We can't really work with a company like yourself. But if you want to sign up as an intern to my group, you can sign up as an intern for the summer." I took him up on the offer. Once I got in, I got access to their data systems, basically worked with them on the technology, and deployed the first software to their group in Tempe, Arizona. The first person to use it was actually one of the top salespeople there, and he loved it so much that he got the rest of the team to start using it. The whole team's performance doubled in a few weeks. And so that's when they were looking at it and saying, okay, now we want to take that team and expand it to this other, bigger team in Virginia. That's when I went back to them and said, "Hey, this is now becoming a really serious project, and you probably want some sort of enterprise agreement around this." We came to a standard SaaS agreement, the only addendum being a clause that hereby terminated Zayd Enam's internship. Then I had to sign as the CEO of Cresta and then as a former intern at Intuit, which is a fun reminder every time that contract comes up again. But yeah, that's how we got our first customer. And then, once we had that case study, we were able to go to more enterprises with that credibility.

Ishani: That’s an incredible story. And I think YC needs to write that into their playbook around how you go build a first enterprise customer. First of all, most startups don’t go to enterprise, but if you’re going to go to enterprise, follow the playbook that Cresta lays out.

Zayd: Yeah, I think you learn a lot, and I think you do things that don’t scale, right? So, take an internship at a company — that’s something that doesn’t scale.

Ishani: No, definitely not. But I think it really does take customer obsession to a whole other level — around being able to go in and understand and see how these data are collected and see how people are using them. And then how they’re able to use something that you can build in the duration of an internship, even to increase performance and double it in this case, then actually take that out into the real world and say, okay, that’s great validation. Now I have the comfort to go build a company around it. And the proof points, right? And then, by the way, you also start with a great contract.

Zayd: Yeah.

Ishani: One of the things you mentioned is this concept of building on existing infrastructure versus designing an entirely new user interface. And what I mean by that is you talk about integrating with all these systems that already exist in contact centers, right? They have agent-facing software. It seems like you’ve taken the approach of building on top and integrating with that rather than having contact center agents focus on learning something new. From a product perspective and from a utility perspective, tell us about that decision and how it’s worked out so far.

Zayd: Right. Our approach is to peacefully coexist with existing systems and infrastructure in the contact center. And that strategy really comes from the fact that there are a lot of different underlying systems and topologies in the contact center — a lot of different CCaaS platforms, a lot of different CRMs and knowledge bases and these kinds of things. Some folks take the approach of a full rip and replace, where they recommend that they come in and replace your entire system. Our belief is that's likely not the right approach. What we have is a modular approach where we can come in, integrate with your existing systems, and take you to the future state of the contact center piece by piece, one module at a time. When you adopt another module, you'll get more value from the second module because they amplify each other. But you don't need to rip and replace your entire system to do it. That's a lot of pain and a lot of implementation work. You have to build more integrations. You have to spend more time on edge cases and these kinds of things. It's more effective and efficient to start with one piece, see the value from that, and then expand over time.

Ishani: Especially when you think about going to the enterprise, right? It's unlikely that you get a Fortune 100 company that's going to rip and replace their entire existing system for you if you're even a 50-person company. I really love how you lay that out, saying, "Hey, actually, this is worth a lot of investment from your end because there's clear ROI." And in your case, doubling performance and being able to demonstrate that feels like a very clear path to showing an enterprise buyer that there's a specific return for buying Cresta.

You’ve alluded to this a couple of times in terms of different modules of the product, maybe just contextualize. Cresta started as this sort of augmentative tool for contact center agents. Where’s the product today? If I’m an end user at a big enterprise, how have I adopted it? And what exactly does Cresta do for me?

Zayd: Yeah, so we started as real-time agent coaching and assist for chat — sort of, how do we help chat agents more effectively and more efficiently handle sales and support conversations? Over time, we built a full platform. It really pieces together all the different components of what an intelligence layer and an intelligence suite on a contact center should do. The way Cresta works is we have an insights product that understands your conversations and what makes things effective. Why are your top performers performing better? What behaviors do they exhibit that make them effective? And that gives you insights on how that's changing over time. Then, we are able to take that and drive a set of actions through real-time coaching and post-call coaching that help democratize those behaviors across your entire team. So, we figure out what makes your top performers really good, and then we're coaching everyone across the team to help them be as good as the best. And then, we take those top performers and we're able to make them superhuman, because we have automation and efficiencies like summarization — features that drive these kinds of superhuman efficiencies for the team, because all of a sudden you can summarize a call, or automate a call or a workflow, in less than a second when it used to take a long time. That's the core loop of Cresta. And then, we go back to insights. We identify what's working, what's not working, how customer sentiment is trending, how the market's changing. And then we go back through the loop again in terms of democratizing the behaviors that lead to better results and driving automation that leads to superhuman efficiencies.

And it's that virtuous cycle that comes to the contact center, and it's a platform where each piece can be adopted in a modular way. But then, as you adopt each piece, it adds value to the other pieces. And when you put all the pieces together, it becomes a powerful engine that drives compounding benefit for the business.

Ishani: Absolutely. We would call that a flywheel effect — the more you use Cresta, the better it gets. And in many ways, this is perfect for our core concept of what an intelligent application is: the idea that you're building a core machine learning-based product that gets better the more you use it. A lot of companies out there today are building AI and ML, but they're kind of retrofitting it onto a product and using AI and ML to optimize it overall and over time. Whereas starting with a core machine learning-based, intelligent product that enables your users to get better, but then also actually becomes better as a product because of that feedback loop and iterative cycle — I think that's how the next generation of companies are going to have to be built.

Zayd: It's certainly not easy, but it's something that compounds over time, and then it becomes more and more powerful for a company. The folks that adopt it end up outcompeting the folks that don't. You'll see that teams and individuals empowered with these kinds of systems are going to produce dramatically more, and they're going to outcompete teams that aren't. Over time, we're going to see this play out — teams that do this versus teams that don't — and it's a competition that's already getting started.

Ishani: That's a good insight, actually, Zayd — not just that the next generation of companies will be intelligent applications rather than predictable SaaS applications, but also that customers using intelligent applications have a wedge and a differentiation component over their counterparts that are just using SaaS tools.

Zayd: Yeah, that’s what makes it exciting. I think it’s a fun time for the industry. It’s a fun time for technology. And it’s one of the things that just makes it fun to build at this time.

Ishani: Agreed. Let's get a little more specific about how you build this intelligence layer. What's the approach to data, for example? There are lots of customer conversations available to you once you're in a customer, and you get maybe a transcript or a set of data they have on existing customer conversations. How do you start? How do you continuously train that model and learn from each of those conversations? And how do you actually make sure you're delivering an accurate, great result to the end customer and surfacing the right level of insight into what good looks like?

Zayd: One piece of that is basically we get the conversation transcripts and the audio, and then we tie specific outcomes to each conversation — was this a positive outcome? If it's a sales use case, was it closed-won, or what was the order value? What was the upsell rate? These kinds of things. And if it's support, was it a first-call resolution? What was the average handle time? Was it high CSAT, high transactional NPS? So, depending on the outcome the business is looking to optimize for, we're looking at those outcomes and tying them to those transcripts. And then we're training models. And that gets to one of the core differentiators of Cresta, which is that we're building infrastructure to make it possible to train tens of thousands of custom models for many enterprise customers. Our vision is that Cresta provides the Costco hot dog — a factory to build these custom models for many customers, with the internal infrastructure and tooling around conversation designers, labelers, and machine learning engineers that produces high-quality models trained for each customer and keeps them up to date regularly. That's a hard challenge, but it's something that we are specifically investing in from a tooling perspective. And we're seeing the results of that pay off, because we're able to deliver a model that understands a customer's conversations. So, we can get to the specificity of why a customer does or doesn't buy from, say, Verizon in terms of having an effective conversation and upsell, versus something different for another customer. We're able to train a specific model that helps that specific company become more effective in their sales conversations.
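A simplified sketch of the data-preparation step Zayd describes — tying business outcomes such as closed-won, handle time, and CSAT back to each conversation transcript before any model training — might look like this. The table and column names are hypothetical, not Cresta's schema.

```python
# Hypothetical sketch: join conversation transcripts with their business outcomes
# to produce one labeled row per conversation for model training.
import pandas as pd

transcripts = pd.DataFrame({
    "conversation_id": [101, 102, 103],
    "transcript": ["...agent greets, probes need, offers upgrade...",
                   "...customer asks about refund, long hold...",
                   "...agent confirms order, upsells accessory..."],
})

outcomes = pd.DataFrame({
    "conversation_id": [101, 102, 103],
    "closed_won": [1, 0, 1],             # sales outcome
    "handle_time_sec": [540, 1260, 480],  # how long the conversation took
    "csat": [5, 2, 4],                    # post-conversation satisfaction score
})

# The merged table is the raw material for outcome-prediction and behavior models.
training_set = transcripts.merge(outcomes, on="conversation_id")
print(training_set.head())
```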

Ishani: Super interesting. How do you view the role of these increasingly powerful pre-trained models when you go to do that? For example, I could imagine you taking a pre-trained model — maybe a more rules-based type of engine — when you go into a customer initially, and then creating all these derivative models on top of that, maybe for specific customers, but also maybe even for specific use cases within customers.

Zayd: So, pre-trained models are powerful, and we've known that for a long time — probably more than a decade now, I'd say. The thing, though, is that when you take a pre-trained model and fine-tune it for a particular application, what ends up mattering more than the amount of data you fine-tune on is the quality of that data. Are you training on high-quality examples? And are you training on high-quality labels that truly get to the root of what you're trying to train the model on? That's where you need the right infrastructure and tooling to label and design effectively and make sure you're encoding best practices into the model.

So, pre-trained models give a boost across everything, really, but by themselves, I think they're useful as toys — and toys soon become very serious business applications. But to really leverage them as business applications, you need to have a set of infrastructure around them to control and fine-tune them with the specifics of what you're trying to do. And that requires high-quality and effective data.
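As an illustration of that quality-over-quantity point, here is a minimal sketch of fine-tuning a pre-trained model on a small, well-labeled set, filtering out examples where labelers disagreed. It uses the open-source Hugging Face libraries; the data, model choice, and agreement threshold are assumptions for illustration, not Cresta's pipeline.

```python
# Illustrative fine-tuning sketch: curate for label quality, then fine-tune a
# pre-trained encoder on the surviving examples.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical labeled conversation snippets with an inter-labeler agreement score.
examples = [
    {"text": "Agent resolved the billing issue and confirmed next steps.",
     "label": 1, "labeler_agreement": 0.95},
    {"text": "Customer asked to cancel; agent offered no alternatives.",
     "label": 0, "labeler_agreement": 0.90},
    {"text": "Outcome unclear; labelers disagreed on whether it was resolved.",
     "label": 1, "labeler_agreement": 0.40},
]

# Quality over quantity: keep only examples the labelers strongly agree on.
clean = [e for e in examples if e["labeler_agreement"] >= 0.8]

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = Dataset.from_list(clean).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-outcome-model",
                           num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=train_ds,
)
trainer.train()  # fine-tunes the pre-trained encoder on the curated examples
```

The interesting design choice is where the effort goes: the filtering step is trivial code, but producing reliable agreement scores is exactly the labeling and review tooling Zayd says they built in-house.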

Ishani: Yeah, you're giving an example of this concept of garbage in, garbage out, right? If you're training your model — in this case, Cresta's model — on bad customer interactions, then of course the guidance Cresta gives you is going to be bad. But if you figure out a way to scalably and effectively label the customer conversations that are really good and train Cresta's model on those, then Cresta gives you really good recommendations. So, it's a good way to think about it: you have to have a data strategy and the infrastructure and tooling, as you say, around it that is really robust. Even if you already have the machine learning components, the data really does matter as a differentiator here.

Zayd: Yeah. That becomes fundamental to this Costco hot dog approach.

Ishani: I think if I take away nothing else, there will be two analogies here. One is to go intern at a company to get a customer. And two is create a Costco hot dog.

Zayd: Yeah. Our head of engineering — a gentleman named Ping Wu — is very passionate about Costco hot dogs and how they're inflation resistant. And so, that's the vision for Cresta.

Ishani: Real-world analogies help a lot.

Zayd: Yes.

Ishani: Tell me about this tooling and infrastructure component. What's out there that you've been able to leverage in terms of software and structures that already exist, versus some of the things you've had to create yourselves — the big hurdles to getting the data and the right insights for Cresta?

Zayd: I think right now we're still very nascent in terms of the whole stack for data and machine learning operations. There are no real best practices established, and it's tricky, right? Because it's not as mature as other industries in terms of "this is how you build this type of application." For us, we've built, I would say, almost everything in-house, just because of the specifics of our application and the relative immaturity of tooling and infrastructure in the ML space. That makes it an interesting problem. I think over time, it will settle, and the space will mature, but right now, there are gaps in open-source tooling. I mean, there's a lot of great stuff happening, but I think it's just a while before that stuff matures to the level where we can standardize on it.

Ishani: It’s a little bit surprising that you say that. Lots of companies I know are building off some of these open-source projects, but as you say, there’s gaps. And so, is the gap in stitching it all together or is it just that they don’t serve enough function? They’re emerging and nascent and exciting, but they’re features and not platforms.

Zayd: Yeah. Our data model in terms of conversations and outcomes and these kinds of things leads us to build in specific ways. In our stack, we leverage everything from the cloud providers to Kafka and Kubernetes and all these things. But the parts of our stack that touch the machine learning aspect — the process of labeling, training, regression testing, and automating machine learning delivery and production — those are things we've built in-house. I think over time, we'll see more and more of that get adopted across many companies, but that's the state today — we haven't quite found something that truly hits the nail on the head in terms of what we need.

Ishani: That's awesome. Maybe there's more opportunity, right? For all the folks listening who work on MLOps, who work on building the infrastructure to get to an end-to-end, iterative, machine learning-enabled application — it isn't quite there yet. The industry hasn't converged on something, so it feels like a good opportunity to go and understand, and obsess about, companies like Cresta. If the next generation of end customers may also be intelligent applications, then we should be building for those intelligent applications too.

Zayd: Yeah. I think the promise is there, and I think there’s a major opportunity here to standardize and build this part of the stack. I just think it takes a while to have code bases that become mature and solve the problem and the right approach to it. And so, whenever we’ve looked at these things, there’s always something missing. But I think the opportunity is there. There’s definitely a market opportunity there for sure.

Ishani: Well, and in the interim, you've been able to build the infrastructure internally and also create this company that is an intelligent application. But part of the core of what you talk about is enabling humans to work alongside technology really well while harnessing the power of automation. It seems like a deliberate choice by Cresta. People talk a lot about AI automating away jobs, but in this case, I think you view it as humans and technology working together side by side. Where does that come from? How much of that is the idea that we're just not quite there yet on the technology, on the conversational intelligence side — that it's just not quite good enough? And how much of that is a deliberate design choice, working with your end customers and those partners?

Zayd: I think it's a fundamental principle that goes all the way back to my Stanford thesis, which is the focus on intelligence augmentation and really the concept of the bicycle of the mind. To some extent, there's a way of looking at artificial intelligence that I sometimes characterize as lazy artificial intelligence, which is that you take an existing process and say, "I'm going to automate this process end to end, and I can use some kind of automation or some kind of AI to go do it." But that kind of overlooks what's possible when you approach it more creatively. In that sense, you look at AI as a building block — how does this capability, combined with humans, unlock things that just weren't possible before? That's the fundamental approach that we took. It's been our goal and our direction — this is what we believe is the right approach, and it ultimately results in larger upside and larger potential. And especially in our application — yes, we're not at human-level AI for conversations, but even then, there are opportunities for humans to have a continuous and big impact on the companies they work with.

Ishani: If you play it forward 10 years, how much more or less are humans involved in the contact center process?

Zayd: Our vision is that you get to this point where you have this concept of experts on day one. So, folks come into an environment, and within the first day, they gain the expertise of all the business information and all the knowledge and subtleties of their particular environment, and they're able to use a whole set of support and decisioning systems to get to the right decisions and the right effectiveness in their role. What they're bringing to the table is creativity in terms of understanding how to approach something that isn't encoded in the patterns of the data yet. They bring that creativity, and they have a whole support and decisioning system that's helping them be that expert on the first day. And as soon as they're able to encode and establish a best practice through their creativity, that becomes part of the system again. Then the human is just focused on the next thing and the next thing. And so, our approach with these kinds of augmentation systems is that we're constantly figuring out what's the best practice, what's really working, what's effective; building support systems and information systems that can deliver that at scale to many people; and then figuring out how the humans can continuously act, in some ways, like a mutation process that identifies what's new and what's the creative approach to the problem. You keep doing that, and you build a system that just gets smarter and smarter.

Ishani: A bit of a separate question. If I look at the investors around the table at Cresta, you’ve got folks like Greylock and Tiger Global that are more traditional institutional venture investors. That all makes good sense. You also have a coalition of investors from kind of legacy industry players. How has that experience played out? You know, having strategics involved can always be a bit of a double-edged sword in terms of having folks around the table. Tell us about your experience.

Zayd: We’re really fortunate to work with great partners from an investor perspective. So, in this last round, Genesys, Five9, and Zoom invested in Cresta. We have great partnerships with those folks in terms of leveraging Cresta on top of the platform to really see a big impact for their businesses. We’re seeing go-to-market acceleration through those partnerships, and what that really has done is mark our leadership in this space in terms of this real-time intelligence for the contact center.

Ishani: Super exciting. Yeah, I think investors can represent impactful connections, powerful networks that enable you to, again, just build more of a flywheel. Right? So, I love that concept of solidifying you as a winner in this space.

We typically end these podcasts with a lightning round of three questions. So, I’m going to shift into that. First, aside from your own, what startup or company are you most excited about in the intelligent application space and why?

Zayd: Oh, that’s a good question. I think Tesla is doing some very interesting things overall in terms of how they’re approaching autopilot systems. I’m not sure if it counts, but I think that vision is something that will take that company far, I believe, in terms of the way they have this loop for model predictions and data collection. I haven’t quite seen other startups operate at that level of data flywheel. And I think that’s the right approach.

Ishani: Love it. Question two, outside of enabling and applying AI to solve real-world challenges, what do you believe will be the greatest source of technological disruption over the next five years?

Zayd: Some of these contact centers are still working on these green screens with these old ’80s-style computers. And it feels surreal, but the cloud hasn’t actually fully happened yet. And a lot of companies are working in all kinds of different environments. I think that just better systems, better cloud systems, better UX, better integrations — that stuff has a big impact. We see with our customers as well that the AI has a lot of value, but they also get a lot of value from just better data integration and better UX to do their day-to-day workflow.

Ishani: Yep. The concept of customer obsession, of going on-site, visiting your customers, and being integrated with them, even at the intern level, really gives you a real-world perspective on how true that is, and on how much the installed base of technology takes a while, and a couple of cycles, to come up to where we think it should be, whether we’re in an academic lab, a startup CEO’s seat, or an investor’s seat.

Zayd: Right. Agreed.

Ishani: Final question. What is the most important lesson — you know, maybe something you wish you did better — that you’ve learned over your startup journey so far?

Zayd: So, I asked Scott Cook this question, and I think the biggest one is probably self-awareness. If you unlock self-awareness, then a lot of other things can happen in terms of development as a leader and development as a company. Understanding your own weaknesses, understanding what you need to get better at, and then approaching those things with a growth mindset, that becomes really fundamental. It sounds a little fluffy or psychobabble-y, but I think it’s true.

Ishani: Awesome. Zayd, thank you for talking about Costco hot dogs, interning at your company, growth mindset, and the next generation of machine learning ops. Super appreciate having you on the podcast and your time today.

Zayd: Awesome. Thanks so much, Ishani.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you’d like to learn more about Cresta, they can be found at Cresta.com — that is C-R-E-S-T-A.com. To learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded and Funded with Tesorio Founders Carlos Vega and Fabio Fleitas.

CommerceIQ’s Guru Hariharan on Hard-Learned Lessons From Successful Pivot to Unicorn

In this episode of Founded and Funded, Managing Director Scott Jacobson is talking with CommerceIQ CEO Guru Hariharan. CommerceIQ is a retail e-commerce management platform that automates and unifies category analytics, retail media management, and sales and operations all under one roof. The company secured $115 million in funding in March and just made its first acquisition — e.fundamentals — to expand into digital shelf analytics. The acquisition actually brings the company full circle in a sense. CommerceIQ was originally Boomerang Commerce, which made dynamic pricing software for multi-brand retail companies to better compete with the likes of Amazon. But Guru realized that despite the quality of that software and the A+ team he’d pulled together, he was trying to grow a business in an F market. He ended up pivoting the entire company to what is now CommerceIQ, selling the Boomerang business to Lowe’s. Scott and Guru dive into what went into navigating those turbulent times, what it takes to be a vertical SaaS company, and Guru’s realization that he couldn’t solve every problem as though it were a math problem.

This transcript was automatically generated and edited for clarity.

Scott: Hi, everybody, I’m Scott Jacobson, Managing Director at Madrona Venture Group, and it’s my pleasure to have Guru Hariharan, the Co-founder and CEO of CommerceIQ, here with me today.

Guru: Thanks, Scott. Thanks for having me. It’s a pleasure to be here.

Scott: Yeah, absolutely. So, for everyone who doesn’t know CommerceIQ, which is an ever-smaller part of the world, why don’t we start with what your product does? What problems does it solve for your customers? What’s the market opportunity you’re going after?

Guru: CommerceIQ is an AI platform that assists brands and agencies with their digital transformation to essentially help grow their market share and grow sales in a very profitable way. We call it a retail e-commerce management platform — REM is the category we are creating. We are essentially helping every brand on the planet move from analog to algorithms.

Scott: You and I have had the opportunity to work together for many years. Madrona is a very happy investor in CommerceIQ. And certainly, my experience as an investor is that most great startups have very non-linear paths to success. CommerceIQ has a particularly non-linear path to where you ended up today. I think it’d be fun to unpack that. So, you raised your first round of venture financing, and you were solving a different set of problems for a different customer. Maybe start with who is your original target market? Why did you start there?

Guru: What we ended up starting out with was Boomerang Commerce, which was going after this market for multi-brand retail. In fact, even rewinding back a little bit, this was my 18th iteration as a founder. For the first two years, I was iterating with a lot of ideas. I actually had a prototype for what is now Etsy. I had a prototype for what is now Veeva — a sales tool for pharmaceutical companies… things like that. I could go on and on. But eventually, I realized that one has to connect the dots, as they would say. There has to be a founder-market fit. And for me, it was e-commerce. And for me, it was machine learning. It was the intersection of those two things because academically, I’m a machine learner and professionally, I’m an e-commerce guy. So, I kind of went back to my roots. Having worked at Amazon for multiple years, I had actually seen the power of algorithms. How algorithms could beat any human-dominated system. And that’s what Amazon did. We did an amazing job, and we gained market share in almost every category by just putting algorithms to work. So, I kind of went back to those roots, and I said, “Wow, if Amazon could do something like that, there’s gotta be an opportunity to go create algorithmic software for retail in general.”

We looked at multi-brand retailers — companies like, say, Staples and Office Depot, or even Walmart and companies like that. And there was definitely a lot of room for improvement; the market was crying out for some sort of innovation. So, I left my day job, and I started the company. And after those 17 iterations, the 18th iteration was dynamic pricing software, and that hit the mark. And frankly, it hit the mark because I knew what I was doing and knew what the right answer was. We just coded up a great solution. And we took it to market. And we had amazing traction. We had companies like Staples, Office Depot, Home Depot, Lowe’s, Target, and Walmart — almost every multi-brand retailer that you know of today, or that you knew of before bankruptcies, was our customer.

Scott: One of the things Guru and I have in common is we both worked at Amazon, and I think we both had this similar insight that you could apply technology, in your case machine learning, to great effect in e-com, so why not build tech for other retailers? And you got to about $10 million of ARR in that business. You raised venture funding from a variety of funds. And then, we know the conclusion, which is you ended up exiting that business. And you’re in a different business today. Tell us a little bit about that journey. Why did you decide to exit that business — I think you foreshadowed it a little bit with your comment around bankruptcies. But even more important than making the decision, how do you navigate that decision with, you know, your team, your board, your customers who’ve bet on you?

Guru: Yeah. The journey was very intense in that period. It was actually harder for me to build that business, which was slow growing and so tough, than it is right now. In a way, it is actually harder to build a bad business — and that was a bad business because of the market that we bet on — than it is to build a good business.

We started off on this journey with multi-brand retail, right? There was an urgent problem we were solving. Product-market fit was amazing. The product was delivering great value to customers. Our net dollar retentions were off the charts. All that was great. The team was also solid. We had assembled an A+ team to work on it — an intersection of technology and retail and consulting. The problem was that we bet on the wrong market. And hindsight is 20/20 — I learned the hard way that one could put a B or even a C team in an A+ market and make some hay. But putting an A+ team in a C or F market, in our case, is not a recipe for success. That is a wave that a founder should never be fighting.

We actually got to $10 million in ARR, and then we saw churn, which was a company filing for bankruptcy, and it was market-driven churn. We went back to $8M, then we crossed $10M again. Then, the third time we tried to celebrate, we brought in this cake saying, “Hey guys, we crossed $10M,” and I could see on everybody’s face that nobody was there for a celebration. In fact, after that, one of my colleagues took me to a restaurant, and he said, “Guru, I’m here for you. I’m here to build this company — this is an amazing technology that you’re building. There’s a lot of opportunity here. But if you look at it, we are kind of a hamster on a wheel. We are going after that $10 million mark for the third time. And this is not a time for us to celebrate.” That was a conversation that hit me hard. Oftentimes as a founder, as a CEO, as an operator, it’s very hard to pull back and look at the broader picture.

I said, “Okay, well, why don’t you be a part of that solution? Why don’t you help us solve that?” And so essentially, we decided on that day to do a hard pivot. We started to explore what we could do next. And especially this time, we were going to bet on the right market — a growing market, an A+ market. We started to look at all sorts of different avenues. It was almost like a blank sheet of paper, with some money in the bank, some extremely smart colleagues, and, of course, a very driven board. What the hell can we do?

And so, there were a lot of conversations. Scott, you and I spent countless hours debating all these avenues, much to your frustration, going back and forth on some of these things. We could not afford to mess up on this one. So, I was definitely slow. I maybe took a year more than I should have. We did a lot of experiments. We talked to a lot of different types of companies in different types of industries, and it was also important for us to ask the right questions in terms of what the problem areas were, and things like that. And that was the time we got lucky that Amazon acquired Whole Foods. A huge rock was dropped into what was, at that time, a calm pond called e-commerce. There was a massive tsunami of e-commerce coming, and we could see it.

I remember getting a call from a CIO at a top-five CPG, someone we had sold to when she was at Walmart — she had since joined a CPG company. She called me and said, “My CEO just called me. We’re going to take a flight out to our headquarters. They’ve asked us to throw out our three-year vision and create a new three-year vision with Amazon in the center.” Not even e-commerce. She said Amazon in the center. So, this was the market crying out loud, and I was connecting the dots here, not just for myself, but also for the company. We knew how every SKU was performing. We knew the P&L of every SKU, but we were helping the retailers be successful with that. We said, why not turn around and help the same brands who are actually selling those products? At the end of the day, consumers are not going to stop buying Pampers diapers or Kleenex tissues. They may stop going to a brick-and-mortar store, but then they’ll probably buy from an e-commerce store, like Amazon or Instacart. That was the small seed that we ended up watering, and we did some more customer interviews.

And we ended up getting to the right answer, I would say. And that was CommerceIQ.

Scott: Maybe to pop up a level, you know, the belief was you could build technology to help very large enterprise players compete with Amazon. And it turned out that wasn’t enough. And so, when you talk about bankruptcies, it was these very large omnichannel retailers who sold both online and offline being beaten soundly in the market by Amazon. And so, in spite of the quality of the software, when you had to look at companies’ 10-Qs to figure out whether they could keep paying for it or not, I think that was the fundamental challenge. I remember a conversation, or maybe multiple conversations, we had as a board talking about, okay, what do we do about it? And you came to the board and said, “Hey, I think we should sell this business while we try to go figure out, you know, what’s next.” As a board member, in retrospect, that gives me a lot of joy and confidence in the outcome, but it was quite turbulent at the time. Maybe just walk us through that process and where you ended up on the other side.

Guru: At that point, there was a choice to be made. Either we could stand back and say, you know what, this was a fine run, let’s go and find a suitor for this and maybe roll it up into a larger company — could be Oracle, could be whatever large company you name, and there’s probably a space for good dynamic pricing software. We said we could do that, or we could keep going. And for some reason, I was not there yet. I think it was more personal than anything else that I was not there yet to call it a day. Maybe my biggest strength and my biggest weakness are the same: I never give up. I never, never give up until we get through. I would call board members and call some friends and just talk to them about what the right thing to do was here. And the feedback was always, what do you want to do? Where is your mind on this? And the fact that the board wasn’t pushing me toward a certain outcome, or to get out or something like that, gave me the license to go back to the drawing board and think big. And that’s where we said, you know what, let’s come up with a different idea. So, we set aside this small team, and we went through small product-market fit iterations. I still remember walking into this boardroom, and we were giving two updates. Update No. 1 was Boomerang Commerce. Update No. 2 was CommerceIQ. And one of the board members looked at me and said, “Guru, you’re trying to run a two-headed monster. It’s hard to build one company, and you’re trying to build two companies at once.”

And that was literally the board update. The board update was an update on two different companies, because when you’re going after two different markets, with two different sales motions and two different products, these are two different companies. And so, for us, it was a moment of reckoning. That was the point when we actually said, you know what, the energy, the gut feel, and the excitement are all around this small thing that we were bringing up, and the drudgery and the energy-sapping were around this other business where the market was actually dying. So, what is the point? Let’s give up our ego on this and take two steps back to take five steps forward. And it was a license the board gave me as a founder to be able to do something like that, where we might either sell the company, or we might bring this new thing up and potentially sell that as a business.

And it was a very personal decision for me that I wanted to sell the business and not the company, because I knew that we could take this big; there was a lot of disruption to be made in the retail market. Right around that time, we started to look for suitors, and one of our longstanding customers, Lowe’s, came in and said, “Look, we just had a CEO change, and the entire management team is new. We’re looking to build a technology core, and we looked at our technology stack and the vendors we work with, and you guys stood out. Are you guys looking to exit?”

And this was a conversation, by the way, I’d had six or seven months earlier. At that time, we were not doing it, but I actually called them back and said now is the time to talk, and there was just a strong mind meld. So, we ended up taking our business unit, which was Boomerang Commerce, and selling it to Lowe’s. At that time, absolutely, we could have exited, we could have done some distributions. In fact, I remember having a conversation with each of the investors on the board, and I laid it out saying, look, we don’t need all this money. We had raised only $20 million so far. We can certainly take 1X out and de-risk your position and things like that. And it was so great. Each one of the board members said, “I’m all in. I believe in you. I believe in your management team. I believe in the new product that you’re creating and the new market you’re going after. Let’s go for it.” So, we went for it.

Scott: So, you go build this team, you’re scaling up, and there are different capabilities you need when you have five customers and a million dollars of recurring revenue versus $10 million and a lot more customers. Now you do this sort of, okay — we’ve got a bunch of money, a couple of customers, and no revenue. But you had this history, and you had a management team who’d been through stuff. Were there any changes you needed to make in the team, having gone from zero to $10 million and then $10 million back to zero? It’s like, “Hey, who’s going to be coming along for the ride with me?”

Guru: Yeah, it was not so much about the zero to $10 million and $10 million to zero. It was more about people who believed in the new vision of CommerceIQ or people who did not. Because building a startup is freaking hard, and every day is a battle. Every day is a struggle. You need that core belief in the vision and the mission that you’re trying to solve for. But Boomerang Commerce was going after a certain market. CommerceIQ was not going after that market; it was going after something else — a completely new company. So, it was a moment where a lot of team members opted out, including management team members and C-level members. They said, “Look, I joined you to solve the mission of creating an operating system for multi-brand retail. Brands do not speak to me. They don’t sing to me. And so, it’s time for me to head out.” There were some where I could see that it was not the right space for them, and I had to have the conversation with them and really ask them, are you really in this? Because it’s okay if you’re not. And it doesn’t have to be today. We can find a good path out and hire your successor. So, there were all sorts of conversations that had to happen with the team.

I’ll say, Scott, one of the most profound changes that happened was not with the team; the most profound impact was on me as a human. I came from this Amazon school of thought. Everything for me was a math problem, where I thought that everything could be solved like an analytical problem. Give me a problem, whether it’s a relationship problem or a human resources problem or a market problem, and I could apply analytics 101 and logic 101 and solve it. But as I look back, what we had built as a company, and what I was as a founder, was a good combination of IQ and LQ: intelligence quotient and learning quotient. What was missing, what we were not true to, was the EQ element. This pivot gave us the EQ. It gave me the EQ as a human — the ability to lead from the heart. There were situations where it was a bad idea to lead from the brain and solve things like a logic problem. For instance, take somebody who had joined me for the vision. I had sold this person the vision of winning the multi-brand retail market, and now I was trying to logically explain why this was not a good idea anymore and why we had to go a different way. It was not a math problem. It was not a logical argument. What I needed to show was empathy. What I needed to say to him or her in that meeting was that I understood, and I empathized with them, and I was sorry that this ended up happening. I did not foresee this. A lot of smart people did not foresee this, but it is right for the company to take this different path.

When I look at it, as they say, growth solves every problem, and lack of growth magnifies every problem. There were lots of little problems that I had as a CEO, as a leader, as a human, and it just magnified the heck out of every one of those problems. It was a very humbling two years in my life, where I knew I wasn’t perfect, where I knew that there were more things wrong than right. And I had to address them. This was one of them. I had to really add that EQ element to my personality and to my leadership.

Scott: Yeah, I like that. It’s both the challenge of convincing people to come join you on a journey and the reality that that was the wrong journey and the self-reflection that it takes to embrace that and move on from that.

Guru: Now, Scott, we talked about my journey going through the pivot. I was actually curious, now that we are coming out on the green side, hindsight is 20/20, and we are in a good spot, to get your thoughts! What was going on in your mind and what was going on in the board’s mind? Outwardly you guys were doing a great job and giving me comfort, but what was going on in your mind when we came out and said, you know what, all the things that we raised money for? Forget about it. We’re going to do this new thing called CommerceIQ.

Scott: Yeah. I’m hopefully not suffering from revisionist or optimistic history. When you and I first got together to talk about Boomerang Commerce, which was the original name for the company, I was very predisposed to the idea. I felt somebody should be building the technology that helps Amazon win, but for the rest of the market. It’s like our former boss used to say, “Strong convictions, loosely held.” You can believe strongly in something, but if you have information that disproves it or gives you doubt about it, you shouldn’t believe in it so strongly that you don’t recognize the opportunity or what you should be doing.

And so, like you, obviously, I was super sad when, you know, a very large, $3-million-a-year revenue customer turned into zero. Those were certainly scary moments for the company. I actually think the question of whether we exit the market was non-controversial from my perspective or the board’s. One, the conceptual decision that we should not be doing two things, only one, is fairly straightforward and obvious at a company at that stage, even though it took somebody to say it. Two, let’s go hire a banker and see if we can get full value for this product. And then three, you having gone and developed those relationships ahead of that process resulted in what I would consider a one-in-a-million type of outcome: selling, as you said, that part of the business and not the whole company, and along the way, having done some research and some product-market-fit sort of work to say, “Hey, let’s go try this thing.” It’s like you said: what you want is the intersection of an A team with an A market. I felt like I had an A entrepreneur in a C market. I think A entrepreneurs are harder to find than A markets. And so, if we wanted to go after an A market, I’m sure we had plenty of good debate, but it was a very straightforward decision.

I did have questions from my partners like, “Hey, gosh, that’s a lot of money that you guys got. Maybe we should take some risk off the table.” And from my perspective, I felt like that was always an option. You and your team were good stewards of capital, which I think is the hallmark of a good management team. And so, whether we distributed cash the day the money hit the bank, or we did it a couple of months or years down the road because we didn’t see the opportunity, that was an option value that was always there. But the bigger opportunity was to figure out how to deploy it, to do something yet bigger. And so, the story’s not over yet, obviously, but I’m feeling pretty good about that decision.

Guru: That’s great. It’s good to understand the board dynamics at that time. Thanks for sharing.

Scott: Well, let’s fast forward to sort of the early innings of the CommerceIQ journey. You know, this very early idea and one customer turned into a bunch more customers and your first $10 million of ARR in that business, which is now many multiples of that. And I think somewhere in the neighborhood of $200 million of total capital has been raised between the original business and the new business. So, I mean, I think that’s just something fun to reflect on and feel great about. You can look at the last couple of years and say, well gosh, COVID was a big tailwind for e-commerce. In some ways, clearly your customers were beneficiaries of that, and CommerceIQ is a beneficiary of that, because your customers needed algorithms to help scale their business, to make their business more profitable on Amazon, Walmart, Instacart, and other places. And here we are in 2022, and there’s somewhat of a reversion to the mean in e-commerce, right? The growth rate had doubled, and now maybe it’s back to where it was, or maybe slightly elevated from that. And then you pile onto that the potential for a recession — if we’re not there already — higher interest rates, and that’s potentially having an impact on consumer spend. Obviously, the fundraising environment isn’t where it was a year ago. You didn’t have to navigate that in version one of the company. You’re having to navigate it now. Just tell us how you’re thinking about it. You’ve got plenty of capital in the bank, you’ve got plenty of runway, but the choppy waters may be here for other reasons.

Guru: Yeah. I think the future of CommerceIQ is very exciting. We are in a very solid spot as a company. Of course, there are going to be challenges, but one of the things that we definitely got right was the product-market fit on this one. And what we also got right was the quality of the market. Our customers are loving the product. Our net dollar retentions are off the charts, in the top decile of our industry. And we also have built a company with a great cash efficiency ratio. As we look forward in the business, one of the things that is very exciting is that this is a recession — and the recession is definitely going to happen, in my opinion, if we’re not in it already — where a certain set of markets will continue to thrive: low-cost groceries, healthcare, discount retailers, children’s goods, the pet industry. These things usually do a really good job in a recession. According to a study that I got the other day, we are serving eight out of the 10 markets that are expected to do well during a recession.

Scott: Yeah. I haven’t heard that stat before.

Guru: That’s a great place to be in for us as a company. We do have to slightly change how we talk about our value proposition. And in fact, even the focus of our value proposition. In a growth market in e-commerce, every brand was looking for three metrics: growth, growth, and growth.

And now, in a recessionary market, they’re looking for three metrics: growth, profitability, and cash flow. So, we are having to change our value proposition by, say, 45 degrees. And this is not just marketing messaging and all that. I’m talking about what our customer success teams are working on and focusing on, what our product team is focusing on, the product roadmap for the next few months, and things like that. We are definitely making a slight shift to navigate that dance we have to do with the economy and help our customers win in this market. In a lot of ways, I’m really looking forward to the next two years, whether it’s expansion or recession, because of the strong foundations we have built. And frankly, it gives us a golden ticket for the next two years, when new startups are probably not getting funded as fast as they were before. So, it’s a great opportunity for companies like us to deepen our moat, maybe pick up a few companies along the way and acquire them, and also hire some great talent in this market. So, it’s actually very exciting. All we need to do is stay true to our No. 1 leadership principle, which is customer obsession — ensuring that our customers are winning, they’re taken care of, and we are solving the right problems for them over the next two years.

Scott: I think that’s great. You know, you’ve got great dollar efficiency as a business and a very disciplined management team. I think you’ve got a great product roadmap, as you alluded to. You’ve got the balance sheet to go do some interesting things. You’ve got a very clear vision of where you want to take the product, and now you can go build a bunch of stuff or you can go buy some things as well. And you just closed your first acquisition. So, you know, you’ve gone from selling a piece of the business in the early days to adding somebody else’s business to the platform. How do you as a CEO think about building things versus buying?

Guru: This may not be the answer for every company or every CEO, but the way we think about our business is that we are a vertical SaaS company. We are not a horizontal SaaS company. Horizontal SaaS companies are companies like, say, New Relic or Apptio in the enterprise; Salesforce is a classic example. Anybody who has a sales team can use Salesforce software, and anybody that has developers can use New Relic. But for us, we can only sell into the retail market. We don’t sell into insurance, finance, or government; these are very large markets that we don’t touch at all. And that’s the definition of a vertical SaaS company.

And one of the things is that I, as a vertical SaaS CEO, or we, as a vertical SaaS company, have to go solve a broad range of problems for every single customer in our market. That’s one of the idiosyncrasies: we cannot take one specific problem, solve it really well, and go sell it to thousands if not millions of companies, which is what a horizontal SaaS company would do. Vertical SaaS is sort of taking a slice of the market and going and solving a broad range of problems. Now, we cannot be a small sliver of a small market; then you are just building a small company, right? If you are building a multibillion-dollar company, which we are, then you have to do one of the two, right?

We are certainly not doing horizontal; we’re going vertical. We’re going deep, deep vertical. And what does that mean? That means we are not just solving a digital shelf monitoring problem. We’re not just solving a supply chain problem. We’re not just solving the retail media problem. We’re solving all of the above. We have to solve everything for this market.

And another idiosyncrasy of a great vertical SaaS business is that it’s a winner-takes-all market where we have to hurry up and invest strategically and go take the market. One thing we don’t have is time. We are already at a pretty good level of market share. I want to get to a point where we have 60, 70% market share in our market. And frankly, being a product-first and engineering-first CEO, it is hard for me to have that come-to-Jesus moment — that realization of, “Okay, I should go buy a product as opposed to building it in-house.” But one of the things that we do look at is how we can be open-minded about other products that are best-in-class, where customers love them, and, frankly, whether we can convince that team to come in and join the common vision. And does it make financial sense for both parties to essentially go do that?

In our case, e.fundamentals was a great marriage from that perspective. The moment I talked to John Markman, who’s the CEO, both of us knew that there was something in this. And we were able to convince both our management teams very easily because we knew that there was value in putting one and one together. We were strong in North America; he was strong in Europe. We are strong in retail operations, supply chain, and retail media; he’s strong in digital shelf measurement, which is also budgeted software. So, it was complementary in almost everything that we do. I would definitely not have gone ahead if I was not convinced about the product or if I was not convinced about the team. But in this case, the product is world-class, the technology is world-class, and the team is world-class. That just gave us the confidence to move forward. It’s very exciting to have done this, and we are really looking forward to a fantastic journey together.

Scott: Thanks, that’s really cool. And maybe a good capstone to our conversation. You’ve gone from building a company, to selling part of the business, to rebooting and building a new business, to bringing in another company. I know you’ve got a long road ahead. I’ve just personally really enjoyed our partnership over the years, and I’m looking forward to many years to come. So, thanks for spending some time with us today. Your entrepreneurial journey is such an interesting one, and I think it’s really helpful to share it with others.

Guru: Thank you, Scott. Thanks for having me. And certainly, thank you more than that for a wonderful partnership so far.

Coral: Thanks for joining us for this week’s episode of Founded and Funded. If you’re interested in learning more about CommerceIQ, please visit CommerceIQ.ai. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded and Funded with Cresta Co-founder Zayd Enam.

Top Tier’s David York on Limited Partners and Expectations for the Market in 2022

In this episode of Founded and Funded, we stray a little from our typical format to answer some questions that often come up in discussions between founders and our investors. Madrona Managing Director Matt McIlwain talks with one of our Limited Partners — a partner that is not a day-to-day investor — Top Tier Capital Partners Founder and Managing Director David York. They explore different kinds of funding mechanisms and how David is “selfishly optimistic” about the current market environment. And they talk about the history between the two firms – more specifically, why this San Francisco-based firm decided to bet on Seattle and Madrona all those years ago!

This transcript was automatically generated and edited for clarity.

Matt: This is Matt McIlwain. I’m a Managing Director at the Madrona Venture Group, and I’m just delighted to have a longtime friend and special guest here today — David York. He’s somebody I’ve known now for over 20 years, and his is a really interesting journey, going back to what was once called Paul Capital, which had a focus on what’s called the secondary market, and how David, under his leadership and with his amazing team, has really transformed it into a much bigger set of platforms. David, can you take us back to Paul Capital and how that evolved into Top Tier?

David: Sure, our pedigree actually goes back to the 1980s with the Hillman family, which is also a co-investor with us in the Madrona funds. My co-founder Phil Paul from Paul Capital used to run that portfolio in the 1980s. At that time, the only investors in venture capital firms, or at least a lot of them, were either corporations like AT&T, which was one of the largest programs at the time, or families — endowments for the most part had not started to invest in venture capital and hadn’t really thought of that model. So, our legacy in this industry goes back some 40 years. I had the privilege in the ’90s of running a trading desk for a venture capital investment bank in San Francisco called Hambrecht & Quist, and one of my clients was Paul Capital. They, as you mentioned earlier, were focused on secondaries for most of the ’90s. But at the end of that decade, the firm attracted capital to invest in funds on a primary basis.

Matt: Maybe, David, when you say primary and secondary, let’s break those down. For now, let’s constrain it to funds, and then we can go to direct companies.

David: So, when a fund is getting formed, it’s typically a blind pool. And the investment into that blind pool is usually by an accredited investor or an institution, and they usually are really trusting the folks, like the partners at Madrona, to manage that money going forward over a period of time. That initial investment into that fund is called a primary investment in our industry. You can use that same terminology to be an initial investment in a private company as well, but for limited partners, primaries are buying funds for the first time without any assets in them.

If you look at the contracts that are used to create those funds, the provisions around defaulting on your commitment are very onerous and can be quite punitive as they relate to your investment — you can forfeit your assets, you can forfeit a lot of things. So, it’s not common for investors to default on their commitments if they want to get out of, say, a venture fund. But one of the ways they can remove themselves from a venture fund, if they want to invest that money someplace else, is to sell their interests. That sale transaction happens in what we today call the secondary market. Think of it as a resale market, like used cars. Well, this is used private equity. And that was one of the early inventions that Phil Paul created when he started Paul Capital — the firm was started around a very large secondary transaction, one of the very early secondary transactions ever done in our industry.

Secondaries today make up about 5 to 10% of the volume in the private equity industry. It’s a way to invest in private assets, typically companies, with a manager that you might know and trust, later in their life. So, there’s a lot of visibility into what’s in those portfolios, and if you have some insight, you have some visibility into how those companies might do, in a way that lets you actually generate pretty attractive returns. For the most part, you cannot generate the same type of return as if you had invested in that fund at the very beginning, but depending on the markets and the pricing at which you can acquire the secondary interest once it’s further along, you can generate returns that look pretty comparable.

Matt: So here is this organization, Paul Capital. They develop an expertise in buying these secondary stakes in venture capital funds. This is not yet at the level of secondary stakes in individual companies. Let’s pick up the story in the early 2000s.

David: Phil and I spent most of the early 2000s building what’s today our core fund-of-funds business. It’s 80-90% of what we do, and did then — investing in funds on a primary basis, when they get started. From time to time, we would get an opportunity to buy a secondary in one of our managers as we went along, and so we would do that. With the global financial crisis, we started to see more and more secondary volume as an active fund investor in our managers, primarily driven by institutions that were essentially desperate to generate liquidity because of what had happened in the crisis. That motivated us to look at what was for sale versus what people were worried about, and we realized that the worry was much greater than what they were selling warranted; the quality of what they were selling was quite good. So, we started aggressively buying fund interests in our managers at really very attractive prices.

I mean, think a 30, 40, 50% discount to the current NAV, and that motivated us to really start to build a capital base there. So today, about a third of our investment activity is focused on buying these secondaries. About 20% of what we do is invest with our managers in the later stages of their developing portfolio companies. Usually, that’s in the B to C rounds, and that’s been a very lucrative investment activity for us as well. Today, we own, indirectly and directly, about 12,000 private companies. And between our secondary purchases and our primary investments, we have invested in about 500 different venture funds across 100+ general partners, or managers. That gives us a great lens on the industry, and that asset base generates quite a bit of interesting deal flow.
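
To make the discount arithmetic David describes concrete, here is a small illustrative sketch. The figures are purely hypothetical, not actual Top Tier transactions: buying a fund interest at a 40% discount to NAV means paying $0.60 per $1.00 of reported NAV, so even a portfolio that is only realized at or near that NAV adds a meaningful multiple on top of whatever the underlying companies return.

```python
# Illustrative sketch of secondary pricing: buying an LP interest at a
# discount to net asset value (NAV). All numbers are hypothetical examples,
# not actual transactions from the episode.

def secondary_multiple(nav: float, discount: float, realized_fraction: float = 1.0) -> float:
    """Gross multiple on invested capital for a secondary buyer.

    nav               -- reported NAV of the fund interest being purchased
    discount          -- discount to NAV (0.40 means paying 60 cents per dollar)
    realized_fraction -- how much of the NAV is ultimately realized (1.0 = at par)
    """
    purchase_price = nav * (1.0 - discount)
    proceeds = nav * realized_fraction
    return proceeds / purchase_price

if __name__ == "__main__":
    for discount in (0.30, 0.40, 0.50):
        multiple = secondary_multiple(nav=1_000_000, discount=discount)
        print(f"{discount:.0%} discount to NAV -> {multiple:.2f}x if realized at par")
    # Even if the portfolio is realized at only 80% of NAV, a 50% discount
    # still returns 1.6x on the purchase price.
    print(f"{secondary_multiple(nav=1_000_000, discount=0.50, realized_fraction=0.8):.2f}x")
```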

Matt: A lot of our audience are entrepreneurs, and what’s interesting from their perspective is that here you are, a great long-time investor — one of our limited partners — buying primary into Madrona. But then you can see through to the performance of our companies and sometimes be a co-investor with us and a direct investor in them. And I know that you’ve done that both by making direct primary investments and, in some cases, direct secondary investments. In fact, I think you and your colleague Garth Timoll were champions of Smartsheet relatively early on. That turned out to be quite a successful investment for all of us, of course, but from our perspective, it was great to have trusted partners that were also interested in investing in our companies. And it’s a nice resource for those companies as well.

David: And we think that’s a perfect relationship with our managers. We want to be a source of capital and, we think, a good partner. So, we want to help you build companies with our primary investments in your funds, and then we want to help you with liquidity. If you have an investor that can’t follow on, or somebody’s getting a divorce, or an institution gets a new CIO and changes their strategy, we’ve enjoyed and had the opportunity to take advantage of those situations by buying fund interests from your limited partners. And then, you know, we do a handful of deals a year, so we’re not that active from the standpoint of co-investments, and we’re pretty picky, but we’ve enjoyed a great relationship with Madrona around some of the co-investments we’ve done, from Qumulo to Smartsheet to Remitly and some others. One of the other things we’ve been able to do in the spirit of being a good partner is help some of those companies provide liquidity to their employees through tenders, which are actually direct secondaries in those companies. We look at those as a mechanism to give our investors in our programs more ownership of a good company. And we think it’s additive, and based on the way capital markets are structured today — these companies are staying private longer — it gives CEOs and entrepreneurs the opportunity to help their employees get liquidity along the journey of the startup in a way that I think is constructive for everybody.

Matt: So now we’re onto secondaries directly in companies: buying shares from an employee, or, in theory, from a former employee or one of the venture investors, maybe an angel investor. This is an area where I think we both agree the mindset in the venture community has changed quite a bit in the last 15-20 years. Can you take us back to the early 2000s mindset and how that’s changed in terms of secondary sales, especially for current employees at companies?

David: Sure. First of all, if you go back to the late ’90s, the average pre-money for an IPO was roughly $300 to $400 million — I mean, if you look at Microsoft and Amazon in particular, that’s right in the range of where those companies were before they went public. Today that’s the average value of a Series C or a Series D round, depending on how fast the business is growing and how much money they’re trying to raise. And so, the old liquidity model for startups was that you got paid enough to want to work there, but you really made your upside with your equity, and you got that liquidity from the company going public earlier, and that’s kind of gone away. It’s taken the investor community, meaning the general partners and the limited partners, like you said, the last 15 years to get comfortable with the notion that those mechanisms are not going to change much as it relates to employee structure and whatnot. So, it’s up to the private market investor to try and help with that. And we’ve all gotten much more comfortable owning common. It used to be that we had to own the preferred to make ourselves feel comfortable, but as companies get big enough, which is usually when they’re doing these tenders, the preferred structures really are not necessary, because the companies are moving well beyond that in a way that common is actually the same security in some ways.

And that’s part of the reason we’re so comfortable with tenders: they give us more ownership and a better blended exposure. A lot of times, with the companies we’ve done tenders with, we’ve either invested on a direct basis or we’ll come back and do a direct round a little bit later on. What we’re looking for is exposure to great companies that we see in our managers’ portfolios, while trying to be a good partner along the way. We did one for an ad tech business down here in San Jose that never raised any money beyond the seed round. The management team wanted liquidity so they could spend another three years doing their thing, and then ultimately the employee base wanted the same thing. So, we did a couple of deals with them. Ultimately that company got acquired by Blackstone, and we all did very well by it, but there was no real liquidity in the marketplace unless you did that for the employees and the management team.

Matt: You know, my mind got changed in this area maybe a decade or so ago. The concern was that if people, especially founders and employees, were selling their shares, they were going to be a little less aligned with the company. But I think they became more aligned with the venture investors, because they were able to sell some, and that could give a little bit of a release valve.

David: Yeah, taking risk off the table, as we call it, so that they can lean in even further.

Matt: And be less tempted if, you know, a strategic or a financial buyer — a Blackstone or somebody — comes along and tries to buy you; you can say we’re going to play for the longer term, the bigger outcome. And I think that does align with the venture investors. I mean, you must look through a lot of venture funds, and I’d be willing to guess that in 90% of those venture funds, two, three, or at most four companies are the ones that really move the needle in terms of the returns on those funds.

David: Yeah, those probabilities haven’t changed, no matter what decade we’re in. What’s changed in the venture capital fund investment universe today is that the dollar losses a portfolio generates early on have really shrunk, and the reason for that is it just takes so much less capital to start a company. The number of companies that fail in a portfolio hasn’t really changed; it varies, but over time it ends up being somewhere between 20 and 50%. Of the ones that survive, half usually return the fund, and the other half are where the drivers are. Depending on how many companies you have in your portfolio, that’s usually in the neighborhood of two to three on the low end and maybe five or six on the high end. Every once in a while, you get a fund that has outliers in all those places, and they create really remarkable returns, but statistically, if you look at funds in China, funds in the U.S., funds in Europe, that all holds true. So, we’ve seen our loss ratios come down to a level where, on the one hand, they’re lower than the buyout industry’s. On the other hand, getting back to the old adage that you lose a lot of stuff: you do lose the companies, you just don’t lose as much money.

Matt: So, David, you’ve been great partners with us at Madrona now for almost 15 years. And you know, that goes back to when the cloud was just getting off the ground with AWS and then later Azure, which of course are both based here in Seattle. I’m curious what got you and Top Tier excited about the Pacific Northwest as a region for venture capital, and ultimately about Madrona.

David: It’s a great question. Two things. First of all, more and more of our portfolio companies, which early on were predominantly Silicon Valley-based firms, were coming out of the Northwest. It became evident to us that there were good things happening up north. And then we also had this strong belief that the cloud market in particular was going to be a big part of technology going forward. So, we wanted to double down on that by getting more exposure to what is essentially ground zero for cloud, which is Seattle, in our opinion.

Matt: Well, we agree with you.

David: Yeah – you agree, but in general, that wasn’t obvious 15 years ago, and that was really the motivation. Seattle has been kind of an interesting market for technology at a high level because the employment base has for so long been sucked up into a handful of really large companies. There used to be kind of an adage that if you really wanted to build a special business, you went to the big city. And then two things were happening. One was the cloud activity that we wanted incremental exposure to, if you will. The other was that we started to see, finally, that the entrepreneurial ecosystem being generated by those large technology companies was really starting to be self-generating. What usually happens, if you look at all the different regional markets in the U.S. or different places around the world, is that you end up with this sort of flywheel of entrepreneur and startup muscle memory, if you will, that allows the entrepreneurs to invest in the businesses, and the businesses essentially generate employee capital and employee growth that you can then build upon to build an ecosystem. And so, what’s truly happened in the last 15 years in the Pacific Northwest, and in Seattle specifically, is that that market is now comparable to any other regional market in the U.S.

It doesn’t attract as much capital, per the statistics, because you don’t have as many firms up there, but I can tell you that the people who are up there think the investment opportunities are just as good as in San Francisco.

Matt: Well, we’re going to try to keep spinning that flywheel.

David: Yeah. Well, you guys have done a great job. We spent a bunch of time and really felt very, very confident about where you and Tom and the rest of the partnership were headed at the time, as well as the relationships you had in the community, in a way that we thought would give us a disproportionate view of high-quality deal flow, in terms of what you were going to look at and what would ultimately make its way into our portfolios. And so that was the reason for originally investing. But the other thing that happened is that we had an opportunity to co-invest with you, which we’ve done in a number of companies, including Smartsheet, and then one of our local foundations down here decided to do some secondary sales, and that gave us another opportunity to partner with Madrona by buying a portfolio of fund interests in your funds, to the point where now I think we have more exposure with you than with anybody else, which is something we’re excited about. We’re big fans of the market. We think you’ve done a great job with the team and where you’re headed. And ironically, it’s probably four or five times better than when we started.

Matt: Well, thank you for all of that. And the feeling is very mutual on the partnership. And yes, you know, we don’t very often have situations where there’s a secondary in Madrona, but there was that circumstance where they changed the CIO at that group, and the new CIO wanted to do some different things. The great thing is that we were able to work with you all and ultimately own a piece of that ourselves. And so, I think everybody that was on the buy side of that trade was very, very happy in the end.

David: Yeah, it’s been a great transaction for everybody.

Matt: Certainly, the market conditions have changed a fair bit in the last set of months. You and I have talked about this before, and I’m curious as you kind of look down the road a little bit, we’ve got these different strategies — primary investing, secondary investments into funds, into companies directly. What’s your view on the macro just to start? And then we could talk a little bit about where you see more or less opportunity as a result of the macro environment.

David: Well, I’m selfishly optimistic. I think for the first time in probably 30 years, we have a rising rate environment. And I think the markets in general, especially the overpriced technology stocks, or the bigger momentum stories, are just being reset as to how you want to price growth when you have a rising rate environment. We’re going through this natural churn as we reprice things. What I think you ultimately have to rely on, at the end of the day, is fundamentals. And the fundamentals of our underlying technology companies, especially the ones that have gone public recently, are still very, very strong. So, I think those price declines, which have been pretty radical, will get rationalized at some value that’s different than where we were in March, for instance. But because the fundamentals are so strong, I don’t think we’re going to stop pricing growth in those businesses. It’s just going to be at a different multiple. And so, the market’s trying to figure out what’s a practical rationalization for growth and the value of growth.

Matt: And I think probably a lot of our listeners have seen the charts where the 5-year trend and 5-year average for SaaS companies, for example, was maybe eight times their forward 12-month revenue. It went as high as 17, 18 times even last fall. And now we’re below the 5-year average, you know, kind of in the six to seven times range, and yet a lot of these companies are continuing to grow. I think there’s been one other factor, and it’d be interesting to see if you’ve seen this as well, which is that there was almost a growth-at-any-cost mindset. You’re now seeing a pretty hard swing back to cash flow break-even or better — control your destiny with positive cash flow as the key criterion, even at the expense of some growth. People are okay with less growth, as long as you’re very close to, and have a clear path to, sustainable positive cash flow.

David: This is the notion of fundamentals. At the end of the day, growth at any cost ultimately gets outpriced in a way that you're going to have to reset. It happened in the late nineties. It's happened several times in the last two decades. You know, cash flow break-even is obviously the holy grail for these companies, especially the ones that have IPO'd in the last two years. It was always the anticipation when a company went on the road that that would happen in a reasonable period of time. I think during COVID, because of the demand for technology and, frankly, those companies' products, they kind of threw that aside and said the world's going to let me get by without cash flow break-even. So, bringing that back, I think, is very constructive. To me, it's where things should net out anyway. So, I'm not against it. And I think having a little bit of discipline in your management is not a bad thing. So, I think it's okay.

Matt: You all, you know, invest in primaries and secondaries all over the world. You also have, if I understand it correctly, a pretty geographically spread-out investor base of groups that invest into Top Tier. What are you seeing globally, if we zoom out from the United States, in terms of capital interest and availability and how folks are thinking about these changing market conditions?

David: So, globally, investors started allocating pretty regularly to venture capital about six or seven years ago. In the last five years, it's really cemented as part of a portfolio allocation strategy, whether you talk to consultants like Cambridge, to the endowments, or even to the big pension funds. And ironically, the benchmarking that all those places use has a mixture of assets that typically has venture in it. And because venture has performed so well, especially in the last five years, when it's outperformed every equity class there is, people are seeing their benchmarks start to beat them, so they're keeping that allocation and worrying about catching up. So, we don't see any real material slowdown in venture portfolio allocations. We do see pacing, as it relates to deployment into the asset class, ebbing and flowing with what is happening at the balance sheet level of the investor. So, think about it this way: I have a blended portfolio of listed stocks, fixed income, or whatnot. If those things go up or down, it changes my balance sheet; if venture goes up or down, it changes my balance sheet. But all of them have a certain weighting that the CIO typically wants the mix to be, and if they get out of line, then that might slow down or accelerate pacing, just depending on where you are.

Matt: Let's say I'm trying to have 25% of my investments be in private equity and venture, and those are all still basically at the same values, but my public stocks have gone way down. Just mathematically, my private company holdings are going to be higher as a percentage, and I might be out of whack in terms of the percentage allocations I'm looking to achieve as a pension fund or an endowment or a foundation. That has a whole bunch of implications, which, I guess, leads a little bit into why it might be a good cycle for the secondary markets again in the not-too-distant future.
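To put rough numbers on the denominator effect Matt describes, here is a small illustrative calculation; the dollar figures and the 25% target are invented for the example, not numbers from the conversation.

```python
# Hypothetical illustration of the "denominator problem" described above.
# An allocator targets 25% of the total portfolio in private equity and venture.
private_assets = 25.0   # $B, marked quarterly, so valuations lag public markets
public_assets = 75.0    # $B, marked daily

print(private_assets / (private_assets + public_assets))            # 0.25 -> on target

public_assets *= 0.70   # public stocks fall 30%; private marks haven't moved yet
print(round(private_assets / (private_assets + public_assets), 3))  # ~0.322 -> over target
# The allocator is now well above its 25% target, which is what pushes some
# institutions to rebalance by selling fund interests on the secondary market.
```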

David: Yeah, we're very selfishly bullish about that opportunity. What we do is eat and breathe venture capital and technology, and we're very excited about the cycle in front of us. What we see, especially among some of the more aggressive investors, is what you were describing, what we call the denominator problem, where the denominator is shrinking but their exposure to private assets hasn't really changed. Over time, what happens is they typically sell off exposure, and we think there are going to be a lot of very interesting opportunities for that, probably after the second quarter. And that's because of the way the markets have responded and, frankly, the way the private markets respond to corrections. It usually takes two or three quarters for private valuations to change, but in the meantime, the public markets have corrected overnight, and so you end up with this disproportionate overweighting in private assets. I think that will have investors starting to think about rebalancing later this year and potentially into next year. And so, we're very enthusiastic about what we see coming.

Matt: There's also an additional timing element there, because so many of these nonprofits, a lot of them educational endowments, have a fiscal year that ends at the end of June. And so, they get their final year-end numbers, and they have to face the board.

David: And the auditor.

Matt: And the auditor.

David: And the auditor is going to make them stay true to their charter. And we expect that market, in particular, to be rebalancing quite a bit in the second half of this year.

Matt: You talked a little bit about the thesis on cloud, and that one has been a very strong one. And Seattle, with Amazon and AWS, Microsoft Azure, and even a fairly substantial presence for the Google Cloud teams up here, is, for all kinds of reasons I suppose, the cloud capital of the world. But what other big themes or theses, either in the U.S. or abroad, are really on your radar screen today?

David: We think there's been a total sea change with COVID around the use and acceptance of technology across some very major slices of gross domestic product. Health care is one of them. Drug discovery has gotten better and better, but so has the use of technology in the hospital systems and the medical community. I don't know how many people went to the doctor's office during COVID, but we certainly had Zoom calls with doctors. We just think that whole ecosystem is ripe for innovation and change. Education is another space: all the kids learned how to go to school on their computers, and ultimately, I think that whole space is going to be ripe for change. Then there's transportation and logistics, the whole notion that you didn't have to own a car or, frankly, didn't have to go to the store because stuff was facilitated for you. I think that's becoming commonplace in a way that's going to materially impact that whole part of our gross domestic product. We don't as an economy really understand that yet, but I do think it's going to change things like insurance, how we ship things, and a bunch of other stuff. So, that's another slice of the GDP that's going to change. And then technology continues to cannibalize itself. Machine learning and artificial intelligence, which are a big part of your effort there, and especially your work with some of the institutions up there in the Seattle area, are becoming a more and more commonplace component of software. You can see the big companies like Google, Microsoft, and Salesforce, and a lot of startups, that are very interested in innovating or out-producing by using machine learning and artificial intelligence. I just feel that's going to be more and more standard, in a way that we're going to have another full rotation in the tech stack.

Matt: Well, as you know, I think we had dinner maybe five years ago, when we were still quite early in the era of applied machine learning, or what we like to call intelligent applications. And I think this is one of the neat elements of Top Tier: you all were curious, too, and wanted to have a deeper discussion. And we got a group of friends together and dug into that topic. And now we're seeing a lot of affirmation, whether it's in industries like insurance that you mentioned, or healthcare and increasingly life sciences. It's going to be very, very transformative in the years ahead. I think I've heard you say, "Technology is business for the future." When you go around the world and talk to the folks investing in your fund, do they see that too? Are they more and more buying into that as a core thesis of what they need to have in their portfolios?

David: Well, let's spend a minute on global investors. I talked earlier about venture allocations being a fixed part of portfolios; that allocation typically sits in the equity component of a portfolio in terms of where the piece sits on the balance sheet. Most of the pension assets in the world are really yield-oriented, primarily because they're trying to generate 6 to 8% returns to meet their actuarial targets, and historically fixed income could do that for them. Today that hasn't been the case, so they've slowly started to add more equity-like things such as real estate, private equity, or venture capital. In markets where you culturally had the ability to buy something and see it go up in value, like you do in the U.S., that's been a very large and growing component of portfolios. Venture capital has gone, on average, from 2 to 3% of a portfolio to now probably approaching 5%, and those are material moves in those programs. In the family office world, it's now running in the 20 to 30% range. If you go to places like Europe, where fixed income is such a big component of the investor base, it's slowly getting there. Some of our larger investors are in Europe, and their blended equity portfolio is now 50% of what they do, and their private equity is now 15%, including venture capital. Ten years ago, that equity number was more like 30, sometimes 20, and venture was a subset of that. So, we've seen that growth, but it's been slower, more measured, and more risk averse. Asia still relies heavily on its fixed income markets. If you think about the traditional pension markets, Japan is the second-largest pension market in the world, and they all follow the large sovereign pension funds there, which are, for the most part, 80 to 90% Japanese bonds. They're large pools of capital, trillion-dollar pools, but the ratios kind of dictate the trend, and if they want to do private equity, they struggle to do venture capital because they worry about losing money. When investing, I've started to think about risk and where it lies, and sometimes there's risk in missing the upside. I do think most of the Asian pension funds have missed a lot of upside.

Matt: I tend to agree with you there. I mean, there’s some exceptions, of course. And you know, I think the folks in Singapore have done a particularly good job of diversifying into some of these different areas.

David: The sovereign wealth funds have done a better job thinking about equity than the pension systems in those communities. Yeah.

Matt: So, to bring this back to the entrepreneur for a second: what are the implications for them? My macro takeaway is that there is a growing amount of capital from pension funds and wealth assets, and then you have the foundations and endowments that have done very well. This seems to be generally favorable to entrepreneurs and companies, because there's more capital looking to find its way into private companies. Is that fair to say from your end, or do you see it differently?

David: I think the capital will be abundant. But selection and actual fundamental application are still going to be the tricky bit. So that'll take skill, both on the entrepreneur's side and on the manager's side at firms like Madrona. What we see today is that there's an abundance of seed capital, right? So, you can get a company started, but really knowing how to build a business and make it successful is a whole other problem. There's an abundance of Series A through D capital, but picking, and making that stuff work, is still the really differentiated activity. You know, most people have networks, and most people have what I would call service activities, but being able to see around corners and pick the right stuff is the hardest bit as it relates to traditional venture capital. In the growth market, a lot of the valuations are driven off of public equity activity. A fair amount of the people doing that had public equity investment funds, like mutual funds and hedge funds, and things of that nature. I think that capital will ebb and flow with public market valuations, because those portfolios too have denominator problems as public stocks come down.

So, we expect that market to slow down. We expect valuations to come in at the early-stage and seed-stage level. Those valuations accreted, but I think for the right reason, which was that the fundamentals were in a pretty good spot for most of these seed companies. They had some sort of product, so you weren't really doing a true raw startup like you were maybe 10 or 15 years ago. So, to me, this is the time when the guys that roll up their sleeves and really do the good work are actually going to be the winners, whether that's an entrepreneur or an investor.

Matt: Well, I think this has been a really helpful conversation. You know, a lot of times the entrepreneurs we work with are very curious about how we get our capital and some of these other mechanisms — the primary and the secondary markets. I really want to thank you for taking the time to walk folks through some of that history. Some of the things that have changed, some of the things that are going on now and how these different parts of the kind of capital world exist and function. So, thanks very much, David, really appreciate it.

David: Matt, as always, it's a pleasure seeing you and a pleasure to spend time together.

Coral: Thanks for joining us for this week’s episode of Founded and Funded. If you’re interested in learning more about Top Tier, please visit TTCP.com. Thanks again for joining us and tune in, in a couple of weeks for our next episode of Founded and Funded.

Sila Co-founder and CEO Shamir Karkal on Crypto and Web3

In this episode of Founded and Funded, Madrona Partner Chris Picardo dives into the world of crypto and Web3 with Sila Co-Founder and CEO Shamir Karkal. Sila is a FinTech platform that provides payment infrastructure as a service, which is critical for all companies that need to integrate with the U.S. banking system and blockchain quickly and securely — while following all necessary regulations.

Sila has evolved since Madrona invested in its Seed Round in 2020, and it is now sitting at the intersection of crypto rails and traditional financial services infrastructure because as much as some people want to get away from the traditional financial system, crypto still needs to be able to plug into it. Shamir and Chris dive into the importance of this infrastructure, the ups and downs of the crypto market, the trends driving FinTech and crypto, and so much more today. So, I’ll hand it over to them to get started.

This transcript was automatically generated and edited for clarity.

Chris: My name is Chris Picardo. I’m a partner at Madrona, and we’re really excited today to have Shamir Karkal, who’s the founder and CEO of Sila. We are going to talk about all things FinTech, crypto and Web3, which I think is largely a first for the Madrona Founded and Funded podcast. Madrona has been invested in Sila since the Series A, and I think that Shamir and I have known each other for a bit longer than that and really excited to have him join. So, Shamir, welcome to the podcast, and I’d love to start off with just a little bit of background on yourself and your journey here. You’ve been in FinTech, I think since before it was called FinTech or had a name. And I’d love to hear a little bit about that journey and how it all started.

Shamir: Thank you for having me, Chris. It's a pleasure being here and being part of the Madrona portfolio. So, I used to be a software engineer. 15-20 years ago, I came to the U.S., went to business school, and became a consultant. That's really where I fell into financial services. I did a lot of work for banks, processors, and central banks. This was the '08 period, where I went from working on cross-sell strategies for North American banks to country bailouts in the Middle East. And then in '09, a friend of mine from business school sent me an email saying, let's start a retail bank. You'll see how crazy I am in that I thought that was a good idea in '09. Literally, my last engagement at McKinsey before that was best described as a trillion-dollar bank bankruptcy. So, I had way more experience shutting down banks than starting them.

But he had this vision of how a better financial world and a better bank could help people manage their finances, and I got totally excited about that, moved back to the U.S. from Europe, and started up Simple in 2009. Simple ended up being the first neobank ever. The word neobank didn't exist, and I think the word FinTech probably didn't exist either. We just called it financial services back in 2010. It took us three years to launch Simple because nobody had ever done anything like that before. And then in 2014, Simple was acquired by BBVA, which is a large Spanish bank. A $117 million acquisition seemed like a good outcome at the time, but in hindsight, we probably didn't quite realize how much potential there was.

I then got excited about building API platforms. I was like, "The world needs API platforms." I persuaded BBVA to build a couple of them, built and launched them, one in Europe and one in the U.S., and acquired some customers as well, but ultimately I just got frustrated with the big bank lifestyle and left in 2017. And then I started Sila in 2018. Sometimes it feels to me like I've spent the last 12-14 years doing the same thing, which is helping people (programmers, developers, innovators) program with money. A lot of that was internal. At Simple, we had to build all the infrastructure so that we could use it ourselves, because it didn't exist. I tried to build a type of bank platform at BBVA, and then that's what Sila does now. We help our customers program with money and build FinTech and crypto apps to do things like crypto on/off ramps, FinTech PFM apps, savings apps, credit apps, and NFT apps, all of it.

Chris: It’s an amazing journey. And I think since I’ve known you, I’ve always thought that you were out in front of all of these next big FinTech trends. I think obviously Simple was a good example of that. You’ve been talking about using crypto infrastructure in banking for a long time — I think before that really got particularly popular. If you think back on that, how did you see those pain points? What drove you to, in the case of Sila, say, “Hey, there is a new way to build this, and we should really be out in front of that”.

Shamir: I think it goes back to this fundamental thing — money is hugely important to people. I mean, it's what drives society across the world now. If you look at New Year's resolutions every year, they kind of fluctuate, but they all come back to one of two things. It's either get healthy, and that's usually when times are good that people are focused on getting healthier, especially in the last two years. And then when it's a recession or a depression, it's all about getting financially healthy. And the prescription for both is kind of weirdly similar. If you want to get healthier, exercise more and eat less. And if you want to get financially healthier, spend less and save more, and if you can, earn more.

But the financial system still operates with this pre-1990s mentality, I think, which is: we have a bunch of products, and we are going to sell them to you. I think FinTech is kind of the beginning of changing that, but especially in the crypto space, folks tend to flip that around and say, "Hey, I'm not trying to build a product and then sell it to customers. I'm trying to understand what customers want, and then I'm trying to build something that solves their problem." And maybe it does something with money on the back end. But that part doesn't need to be front and center, and it should just be plugged in on the back end. Your access to financial services should be like your access to water. You go turn on the tap, and it flows, and it's there when you need it. Today it tends to be a lot harder than that and a lot more complex. So, I think that's been the driving impetus for me for the last 12 to 14 years: we need to get the world to that place from where it is today.

And the differences I see between FinTech and crypto — at some level, it's all just infrastructure. The FinTech guys typically started off by building on top of the existing financial system, whereas the crypto guys are like, "Hey, we are going to build new financial systems, new payment systems around these blockchain economies, these blockchain networks." And one of the major differences I see there is that a lot of the FinTech folks are really very focused on the customer, the use case, and what problems you're solving, which I think is a very good thing. The crypto folks tend to be very community-first, and they're like, "Hey, we're going to build a community of like-minded people, and we're going to use this technology to excite and empower the community. And then the community as a whole is going to tackle and try to solve this problem." In FinTech, you still see that "Hey, there's us, and then there's our customers." But in crypto, it's like, who is a customer? Who is a builder? Those are just roles, which could be the same person.

Chris: I think that's a really interesting way to put it. And one thing that I've always really liked, since we first started talking about Sila, probably in 2018, is how you have this really good view of enabling your customers, who are generally developers or companies building new products, to build those new products for their own customers.

You started with instant ACH, and for the FinTech nerds who are listening to this, people who spend time in this space know that ACH has been quite a hassle for a long time. And one thing you decided to do early on, which I will admit I was very skeptical about when we started talking about it, was to use versions of, call it, early crypto architecture to enable that. Sila, at least for us, was certainly the first example of a company using some crypto infrastructure to enable use cases that may have nothing to do with what we tend to think of as crypto.

Shamir: Sometimes the way I like to think about this is in terms of financial networks. One of the jokes I make is that old financial networks, like payment systems, just never die. They literally never die. All the oldest payment systems, things like coins, cash, and checks — anything that was used by a large number of people across a few different geographic areas is still with us today. And so, what ends up happening is, a lot of times in the tech world, you build technology and then you build new technology, and you're like, you know what, we're just going to start using the new technology and ignore the old. And you see that a little bit with the internet. Email was one of the first killer apps of the internet in the '90s. Email did not need to integrate into the postal service. Email was its own whole thing, and it worked great, and it was awesome. And we still use the post; we just use it for completely different things. That doesn't work in financial services. Every payment system that gets built, gets built as a layer on top of a previous payment system, and then eventually all the users move to the new payment system. And so, when you're building these new financial worlds, I think a lot of the early crypto people didn't necessarily understand that they don't work until they plug into the old. You kind of have to build them on top of, and integrate them into, the old, and then build new use cases, new functionality, move the volume over, and then eventually the old will die. You cannot have them exist as two separate things — that doesn't work in financial services. And so, I think the hardest problem in the whole crypto space is sometimes really the infrastructure to connect it into the traditional financial system. Because guess what? The traditional financial system sucks in many ways, and crypto may be way better, but if you can't plug into the old, then you can't move the value and the volume and the money. And that's what we built Sila to do — to be that bridge between payment systems broadly, but especially between new payment rails, like blockchain ones, and the old ones. And to do that well, you have to do both of those well. You have to be plugged into the crypto ecosystem, and you also have to have a deep understanding of how to do things like ACH payments and returns and KYC and compliance, because that is what the old financial system is all about. So that's what we chose to take on first: to solve not just the pure on-chain problems, not just the pure off-chain problems, but the combination of on- and off-chain problems, which is the hardest thing to solve.
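As an editorial aside, here is a rough sketch of what that kind of bridge looks like in code. Every class and method name below is a hypothetical placeholder for illustration; this is not Sila's actual API.

```python
# Hypothetical sketch of the "bridge" Shamir describes: moving value from the
# traditional rails (KYC, a linked bank account, ACH) onto a blockchain and
# back. All names here are illustrative placeholders, not Sila's real API.

class PaymentBridge:
    def __init__(self, ach_client, chain_client):
        self.ach = ach_client      # the old rails: ACH, returns, KYC, compliance
        self.chain = chain_client  # the new rails: a blockchain network

    def onboard(self, user):
        # The regulated half: identity verification and bank-account linking.
        self.ach.verify_identity(user)
        self.ach.link_bank_account(user)

    def on_ramp(self, user, amount_usd, wallet_address):
        # Pull dollars over ACH, then represent that value on-chain.
        debit = self.ach.debit(user, amount_usd)   # ACH is slow and can be returned
        self.ach.wait_for_settlement(debit)        # handle returns and reversals
        self.chain.transfer(to=wallet_address, amount=amount_usd)

    def off_ramp(self, user, amount_usd, wallet_address):
        # The reverse path: collect on-chain value, then credit the bank account.
        self.chain.collect(source=wallet_address, amount=amount_usd)
        self.ach.credit(user, amount_usd)
```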

And so far, it's worked. I mean, we have quite a few crypto customers who appreciate that on-chain they can do lots of things and do them really well. But when it's like, "Hey, how do you build a system and scale it on ACH, and then move that money over onto a blockchain?" That's hard, and they come to us for that. And then we have a bunch of FinTech customers who are like, "Hey, we are a FinTech app, and we love being a FinTech app, and we are growing and scaling nicely, but we'd like to be able to potentially add the ability to buy or sell Bitcoin or Ether, or do NFTs, or maybe access DeFi yield somehow." All of those capabilities are interesting, and we see those customers as well.

Chris: I think Sila is a great example, as you pointed out, of sitting right at the intersection of, call it, the crypto rails and ecosystem and traditional financial services infrastructure. And, to your point, those haven't talked very nicely to each other, and people need the types of products that can reach into both sides of those ecosystems and say, "Hey, I can help you, for lack of a better term, build a bridge here."

I think one thing that's always helpful — I'd love for you to walk through a customer example, which can be hypothetical, of what you can enable with Sila with this kind of approach you've taken, that would just be really hard to do in some other way.

Shamir: There are a few, not all of whom I can talk about. So, I'll pick one of my favorite customers. They're not necessarily that large, but they're really cool and innovative, so I love talking about them. It's a company called Fabrica, and they do NFTs, but they do NFTs for land. So, if you go to www.Fabrica.Land (I think that's their URL), you can sign up, they'll verify your identity, you link your bank account, and then you can go buy a piece of land. They have a list of them, and you can go buy, like, two acres of land in Southern California, or in the middle of nowhere in New Mexico or Nevada or wherever. It's a large country, and there's a lot of land out there. And an acre in Nevada is, you know, $5 grand or $10 grand. You can buy on Fabrica legal ownership of a piece of land, which is sold to you as an NFT, which I find very cool, because there's been this whole explosion of NFTs in the last 24 months. But with most of them, when you look at it, you're like, "Hey, what does this legally give me ownership of?" And it's not very clear. Does it give you IP rights? Does it give you anything more than bragging rights to a piece of art, as an example? With Fabrica, it does give you legal rights to a piece of land.

They built the infrastructure on the back end, where they create a special purpose trust and move ownership of the land into that trust — the beneficiary of the trust is whoever is holding the NFT. They built all that infrastructure to do that. But on the flip side, when an NFT goes from John Smith to Jane Doe, and maybe Jane Doe goes and builds on the land or sells it onward (that's her choice), money has to go the other way, and they use us for that piece of it — for the onboarding, the identity verification, pulling the money out of somebody's bank account, transferring it to somebody else's bank account. And now they're doing more DeFi-ish sorts of things, where the seller of the land can actually finance it and say, "Hey, instead of paying me $20K, you can do $2,000 a month for 10 months." So, they built the on-chain capability to do that, but use our infrastructure on the back end to do all the money movement and the regulated functions behind it.

Chris: I think that's a really cool example. And I like that there's a product that in some ways is extremely crypto, in this kind of NFT format, and yet there's the realization that you need to use the existing financial infrastructure in an elegant way to be able to do what you want to do here. And Sila really fills that gap and can natively speak both languages. I want to switch gears a little bit here, because we've talked about Sila for a bit, but the other thing that you always have a pretty good beat on is the crypto and Web3 market in general. I'd love to understand how you think about the crypto market now, what we should make of the downturn and the noise that's gone on, and where you see the category and the overall market going from here.

Shamir: I think it's actually a very exciting time in crypto. If you look at all the large crypto companies that are out there now, the folks who have real traction, most of them actually got started in the 2017, 2018, early 2019 timeframe. That was the last bust, the crypto winter, which is when Sila got started as well. With crypto especially, you tend to see an almost hype-cycle pattern, right? Every three or four years you have a huge boom. And during the boom, a lot of projects end up getting funded — not necessarily by VCs. VCs actually are a late entrant into crypto and Web3 — a lot of crypto fundraising has historically been community-driven. But a lot of projects end up getting funded that don't look like very good ideas in hindsight, and maybe a lot of people even in foresight thought they weren't great ideas. And then there's a market crash, a lot of those ideas and projects end up failing, a lot of people lose money, and then a lot of the air, but also a lot of the fluff, gets taken out of the market.

So, I suspect the things that are getting started and being built now will drive the next boom three or four years from now, or maybe 12 months from now. I don't know when the next boom will be. I've never figured out how to time these things. But we're definitely in a crypto bust right now. I'm also totally convinced that this is just the cyclical nature of crypto. It is frustrating sometimes because it makes investment hard — you're always on a rollercoaster, and people are not used to rollercoasters. But the markets will be back, and a lot of good projects will end up getting built now that will drive the next wave. And I think that's true even in FinTech, the other space that we operate in heavily. People look at the crypto boom and bust, but I think the FinTech boom of last year was just as big, and maybe the bust is just as big. The same underlying driver applies to it all: global financial services is something like 20 to 25% of global GDP, and global GDP is like $100 trillion, a little bit more. So financial services is something like $20 trillion, and that's annual revenue. But out of that $20 trillion, only about 1 to 2% is in FinTech and crypto. I think we are going to see that 1% go to something like 10% in the next decade. Even in the world of the 2030s, Chase and BofA and BNY Mellon are not going to be gone. They'll still be giants. They'll still be doing a ton of business, but more and more of it will move to the new world. And that's the underlying secular trend that's driving FinTech, that's driving crypto and Web3, and driving whole new use cases, products, and industries that we couldn't even imagine a decade ago.
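For readers who want to follow the sizing argument, here is the back-of-the-envelope arithmetic using the round numbers from the conversation; the 1.5% and 10% shares are only the rough figures Shamir cites, not precise data.

```python
# Back-of-the-envelope version of the market-sizing argument above.
global_gdp = 100e12                      # ~$100 trillion
financial_services = 0.20 * global_gdp   # ~20% of GDP -> ~$20 trillion in annual revenue

fintech_crypto_today = 0.015 * financial_services  # roughly 1-2% of that today
fintech_crypto_later = 0.10 * financial_services   # the ~10% share he expects over a decade

print(f"${fintech_crypto_today / 1e12:.1f}T today vs. ${fintech_crypto_later / 1e12:.1f}T at 10%")
# -> roughly $0.3T today vs. $2.0T at a 10% share
```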

Chris: That’s a great segue because I was going to ask you on this theme of kind of crypto and Web3 overlapping with FinTech, maybe my personal thesis is: We need to see new and stickier and higher utility use cases emerge so there are good long-term reasons for users to use both of these rails and merge them. One way I’ve thought about it is, so far, the blockchain Web3 ecosystem has come up with one killer use case, which is cryptocurrency. Whether or not there’s volatility, we can say, “Hey, that’s a good use case”. The second use case, maybe it’s DeFi, maybe it’s something to do with NFTs, that’s still emerging.

You are sitting at a great spot where you get to see both sides of this world and your customers are the ones building those use cases. So if you look forward and you say, “Hey, here’s how this kind of crypto, FinTech overlap is going to emerge or going to continue to grow.” You know, what types of those use cases would you be most excited about?

Shamir: I think a lot of what's going to happen over the next cycle, and probably the next couple of cycles, is just going to be increasing adoption. So, there was Bitcoin and Ether, and then there was this whole ICO boom, which drove the last boom back in 2016, 2017 — that went bust. And most of those cryptocurrencies went nowhere. Now we can look back on it, and a few of them actually survived and built lasting ecosystems, whether it's BAND or Solana or whatever, but there were 2,000+ cryptocurrencies, of which maybe 10 to 20 turned out to really hold value.

I feel like a lot of that needs to happen in the DeFi and NFT space as well. When you look at an NFT, it's like, what is this thing? It's sort of a programmable token. It's just a standard — on Ethereum, ERC-721 is kind of the core of it. It is what you want to make of it. So, you have to really look at each NFT project or issuance and ask, what is this actually getting me? A lot of it early on has been around digital art, because that's, frankly, the easiest thing to move and sell online — you can send people a JPEG. It's not hard.
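As a concrete illustration of what that standard provides, here is a minimal sketch that reads the owner of a token using web3.py. The RPC endpoint, contract address, and token ID are placeholders for illustration, not Fabrica's actual contract.

```python
# Minimal sketch of what the ERC-721 standard provides: a tokenId -> owner
# mapping that any application can read. Uses web3.py; endpoint, address,
# and token ID below are placeholders.
from web3 import Web3

ERC721_OWNER_OF_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
]

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc.invalid"))  # placeholder endpoint
nft = w3.eth.contract(address="0x0000000000000000000000000000000000000000",  # placeholder address
                      abi=ERC721_OWNER_OF_ABI)

# Whoever this returns holds the token -- in a model like Fabrica's, that holder
# is the beneficiary of the trust that legally owns the underlying land.
owner = nft.functions.ownerOf(1).call()
print(owner)
```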

The question is around what actual ownership it gives me. What most people buy and sell in the real world all the time is not digital art. It's not even physical art; it's phones, cameras, cars, houses, and every other type of real physical asset and virtual asset, whether it's IP rights, music, or video, all of those things. What NFTs really give you is the ability to program with those on a blockchain and build new sorts of applications or uses for them. You could not trade a piece of music across the world a few years ago, right? I feel like it's those sorts of things that NFTs will get into, but the problem is, the more you get into the real world, the more you run into regulation. All of these markets are heavily regulated. The reason the internet revolutionized advertising and tech and email is that those were not regulated markets — it was easy.

All of this, whether it's Uber with transportation or OpenSea with NFTs or whatever, is in a regulated space, and if you want to get into it, you have to understand the complex mix of local, federal, and international regulations. Because customers just want to go online; they want to buy cool stuff and sell cool stuff. And the fact that the buyer is in Nevada and the seller is in Vietnam? They don't care. They're part of the same online community, so why can't they sell to each other? The mess of laws between them is the real problem. So I think more infrastructure will get built to solve specific problems, and that will enable more of these applications and use cases to take off. So, a lot of the future is just more adoption, but what I call real adoption of DeFi and NFTs. I think we'll see more interesting use cases combining this crypto and Web3 community-first approach with more and more real and virtual communities. Traditional finance wasn't really designed for that. I think crypto naturally is. I think we'll see a lot of that over the next, oh, probably 10 to 20 years.

Chris: I actually saw something (you would probably know the name of this; I'm forgetting it): there's a version of NBA Top Shot for Premier League cricket. And I actually went to try to buy some. I was like, that seems like a good idea, I'll own some of those NFTs. And I went to try to buy them, and they were all sold out, and it was like, "Hey, come back at some arbitrary time, and hopefully we'll have some more of them."

Shamir: Yeah, the cricket world tends to be heavily driven out of India, so you might be in the wrong time zone. You might need to buy those things at, like, 2:00 AM Pacific or something. And yeah, that's the thing, right? The cricket following is huge and global, but it's not really U.S.-centered at all. Who in the U.S. plays cricket or cares about cricket? But a lot of Indians, Sri Lankans, Bangladeshis, Australians, and Britishers — the old British Empire — do.

That's another thing: the U.S., as a country and as an economy, is heavily financialized, right? Well over 80% of Americans have bank accounts, and lots of people have credit cards. It's not like it's easy to get a mortgage, but you can get one, and it's not super hard either. It's just a painful, paperwork-driven process. You look at the rest of the world, and 80% of people in Africa or Asia have never had a bank account, and they're just getting their first smartphones now. A lot of these people have never bought a single stock, never gotten a single loan, and never had a single bank account. And they're probably going to go straight to the crypto solutions, because those crypto solutions are designed for them and the communities they operate in.

Chris: I think that's a really nice perspective. And I also think it's good to point out that that's a huge source of opportunity and can be a blind spot for us in the U.S., where we are so heavily financialized. Some of these tools are natively and easily at our disposal in the traditional world, but for other places, or for international things like cricket, that's not quite the case.

I’d love to wrap up a little bit back on Sila and just more on your journey, which is you’ve been an entrepreneur now for a long time. This is your second, maybe arguably third company, always kind of at the forefront and in these emerging spaces. I’d love to know, if you think back over that career to date, what do you think the biggest lesson that you’ve learned from company building is, or what’s the most interesting thing that you have come across in your journey?

Shamir: I think one of the things I've taken away is that market timing is impossible. I haven't figured out how to do it, at least. But persistence is massively important. You might be a little bit early to a market. You might be a little bit late to a market. Or you might time it perfectly. You don't know. You only know this in hindsight. Once you go public on the NASDAQ or whatever, you can look back at it in 10 years and say, "Yeah, I really should have started Simple in 2011, that would've been the right thing to do." I'm like, who knew, right? But if you're persistent and you keep building and shipping, you just increase the odds of success. And ultimately, it comes down to knowing who your customers are, what value you're delivering to them, and staying close to them. I try to do that even now at Sila. It's hugely important, because ultimately, that's what everybody's here for. We're here to serve our customers, and our customers typically are here to serve their customers. As long as you keep working with good people, doing good things, and keep pushing forward and staying persistent, you are increasing your chances of success. The hard part is, I know many people who did that and still failed, and many people who didn't necessarily do that well but still succeeded. There is a large amount of luck and a large amount of market stuff that drives outcomes in this space. That's the frustrating part. All you can do is the things that increase your odds. If you're around long enough, you'll get dealt some bad hands, but you'll get dealt some good hands too.

Chris: I love that. Be persistent, deliver customer value, and increase the odds of your success. I think that’s great advice and feels like a nice place to sort of wrap up. Shamir, I can’t thank you enough for joining the podcast today. It’s been a pleasure to be able to get to work with you for the last couple of years. And I keep looking forward to working together in the future.

Shamir: Same here, Chris. Thank you for having me.

Coral: Thanks for joining us for this week's episode of Founded and Funded. If you're interested in learning more about Sila, please visit SilaMoney.com. If you're interested in learning more about Madrona's take on crypto and Web3, head to Madrona.com/insights. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded and Funded with one of our Limited Partners — Top Tier's David York.

Snorkel’s Alex Ratner talks data-centric AI and ‘one of the most historic opportunities for growth in AI’


In this episode of Founded and Funded, we spotlight Intelligent Application 40 winner Snorkel AI. Managing Director Tim Porter not only talks with Snorkel Co-founder and CEO Alex Ratner all about data-centric AI and programmatic data labeling and development, but they also dive into the importance of culture, especially now, and how to take advantage of what Alex calls "one of the most historic opportunities for growth in AI."

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, Digital Editor here at Madrona Venture Group. Today, Managing Director Tim Porter talks to Snorkel Co-founder and CEO Alex Ratner all about data-centric AI and programmatic data labeling, the two core hypotheses Snorkel was founded around. The research behind Snorkel started out as what Alex calls an "afternoon project" in 2015, but it quickly became so much more than that and officially spun out of the lab in 2019. Since then, the company has raised a total of $135 million to continue its focus on easing the burden of labeling and managing the data necessary for AI and ML models to work, and to extend its Snorkel Flow platform into an entire data-centric programmatic workflow for enterprises. Machine learning models have never been as powerful, automated, or accessible as they are today, but we are still in the early innings of what they can do. IA40 companies are solving issues across the AI/ML stack, but it all starts with clean data. Snorkel has built an incredible platform that taps into human knowledge and dramatically speeds up the data labeling that is necessary for the rest of the pipeline to work. Alex says that even the largest organizations in the world are blocked from using AI when it takes someone, or a team of people, months of manual effort to label data every time a model needs to be built or updated. But that's where Snorkel and its Snorkel Flow platform come in. It should be no surprise that Snorkel was one of our 2021 Intelligent Application 40 winners, so I'll go ahead and hand it over to Tim to dive into all of this and so much more with Alex. Take it away, guys.

Tim: Well, it is a real pleasure to be here with Alex Ratner, professor of computer science at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, but even more relevantly, the Co-founder and CEO of Snorkel AI. Congratulations on being recognized as one of the top 40 most innovative and high-potential companies in the ML/AI space broadly, as voted by a large panel of VCs that are active in the space, none of whom could vote for their own portfolio companies. So, congratulations on that, Alex, and thank you so much for being here today.

Alex: Tim, thank you so much for having me and we’re obviously incredibly excited and more importantly, humbled by the honor. And I’ll note I’m not the professor yet. I’m an assistant professor, so it’s still a ways to go. But obviously, I’m very excited about the work that goes into both that on the academic side and Snorkel the company around what we call data-centric AI.

Tim: A VC once again, slightly over-promoting. I apologize for misspeaking on the title. But being based in Seattle and having a lot of connections with the University of Washington, we’re thrilled that, you know, over time that’ll be a home for you and you’re already doing a lot of things to impact the school there. But let’s talk about Snorkel. It’s been a company that I’ve followed for a long time. You and I have known each other for a number of years, Alex. But maybe we could start out by just telling our audience what exactly is Snorkel AI and what problems are you solving for customers?

Alex: We developed a platform called Snorkel Flow, and it's one of the first of what we call data-centric and programmatic development platforms for AI. I think a lot of people know that AI today involves data, and it has centrally for quite some time. But what we really do is support this new reality that a lot of the success or failure in building and deploying AI applications has to do with the data they learn from. And not just any data, but carefully labeled data, what's called training data, that teaches them to do something.

So, we work with all kinds of customers, from the top five U.S. banks processing everything from loan documents to customer complaints and conversations in their chatbots, all the way to medical images, network data, all sorts of stuff where users are basically trying to build machine learning models or AI applications that learn to predict or label something. And they do this by training on, or learning from, tons of data that's been labeled with the correct answer. If you look back 5+ years to when we started the project originally at the Stanford AI Lab, and you went out into the field and asked a practitioner, what are you spending your time on? What are you throwing team hours at? It would be all about the machine learning model or the AI application: building some bespoke machine learning model architecture to handle chest x-rays or loan documents or conversational intents. And the data was an afterthought, or it was something that someone else prepared and labeled. You downloaded it from something like Kaggle or ImageNet, and then you started your machine learning. This is what we now call model-centric development, where the data is exogenous to the process, happens beforehand, and comes from someone else, and you're just iterating on your model. Fast forward a couple of very exciting years in the ML/AI space, and a lot of that model development is now, for a staggeringly large range of problems, almost push-button, a couple of lines of code, thanks to some of the great companies and vendors and open-source contributions out there. It's also more powerful and more automated than ever before. But there's always a trade-off, and the trade-off is that these new approaches are much more data-hungry. So, the buck has shifted from model development to data labeling and development. And what Snorkel does is try to automate that data labeling and development process: make it more programmatic, like software development — writing code, pushing buttons to label and develop your data — and solve the thing that's often the bottleneck in AI today. This is complementary to the model development, which is often much more push-button, or a line or two of open-source code. And this is based on techniques for data-centric programmatic development that we've developed over the last six and a half, seven years at places including now UW, but also originally back at the Stanford AI Lab, and co-developed and deployed with lots of different tech companies, government agencies, healthcare systems, et cetera.

Tim: This move from model-centric to data-centric — I really like how you frame that. We see that across companies, and it ties back to this notion of intelligent applications: that data is really remaking how all applications are made, bringing new data to bear and providing new insights. I think most people also realize that ML is only as good as the training data that you bring to the problem, and this really helps speed up and improve that. How the heck do you do it, though? It sounds a little bit like magic: instead of having a person or a subject matter expert label this data, you're able to do it with code. Two pretty hot areas that are talked about in the field — weak supervision and generative models — are two important building blocks in how you make that happen. Maybe you could spend a minute just explaining the nuts and bolts of how you do this thing called programmatic labeling.

Alex: Yeah. I'll start with the first thing that you said, about magic. One of the things I like to anchor on first in demos and customer presentations is that this is decidedly not aiming to be push-button auto-magic. It's still a human-in-the-loop process, but it's one that we aim to make look more like software development than clicking one data point at a time to label it. Imagine you're trying to train a model to triage a customer complaint at a bank and maybe flag it as urgent or not urgent, or maybe flag it with a specific regulation that it should be reviewed against. The traditional legacy approach that a lot of machine learning progress is based on is that you'd have a bunch of people sit down and click through customer complaints one at a time, tagging each with the correct label. And that's what your model would learn from. In some ways, what we've been working on is as much about the gross inefficiencies in that process as it is about the clever algorithmic, theoretical, and systems work that we do.

One way to think of it from that perspective is: if you have a subject matter expert sitting there who knows about all of these regulations and is reading these customer complaints, they probably know certain things they're looking for. They have a bunch of domain expertise. They're looking for certain phrases, certain keywords, certain metadata patterns, etc. Why can't you just have them tell that to the model? In Snorkel Flow, they do exactly that. Some of it's through no-code UI techniques, some of it's through heavily auto-suggested, auto-generated techniques where they can even explain something they're looking for and automatically label data that way. So, there's lots of acceleration and automation, but at the core, it's a domain expert using domain knowledge, heuristics, and programs to label the data, versus clicking one data point at a time.
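To make that concrete, here is a minimal sketch of what such heuristics look like when written as labeling functions, using the open-source Snorkel library (Snorkel Flow, the commercial platform, wraps this kind of workflow in no-code tooling). The complaint-triage task and the keyword rules are invented for illustration.

```python
# Programmatic labeling sketch: a domain expert encodes heuristics as labeling
# functions instead of hand-labeling one complaint at a time (open-source Snorkel).
from snorkel.labeling import labeling_function

ABSTAIN, NOT_URGENT, URGENT = -1, 0, 1

@labeling_function()
def lf_urgent_keywords(x):
    # Expert heuristic: certain phrases strongly signal an urgent complaint.
    keywords = ("fraud", "unauthorized", "immediately")
    return URGENT if any(k in x.text.lower() for k in keywords) else ABSTAIN

@labeling_function()
def lf_routine_request(x):
    # Expert heuristic: routine account-servicing language is rarely urgent.
    return NOT_URGENT if "change my address" in x.text.lower() else ABSTAIN

lfs = [lf_urgent_keywords, lf_routine_request]  # each one labels or abstains per example
```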

We have customers who will take six months of manual labeling and collapse it into a couple of hours of this kind of programmatic labeling and development. The magnitude of the problem here is that a lot of these projects just don't get tackled. Take that example of customer complaints: you have data that's very private, data that requires specific expertise, and data that's always changing as input data changes and regulations change. We had a series of papers with Google and YouTube on how they were throwing away hundreds of thousands of labels a week before they started deploying Snorkel tech, because even for a company with those kinds of resources, it wasn't scalable to label and re-label every time something changed. So, we're used to a lot of the tip-of-the-iceberg problems, where we're using machine learning to solve very standard problems: is it a cat or a dog, a stop sign or a pedestrian, a positive or negative restaurant review? But there's this whole iceberg under the surface of enterprise and organizational problems that are much, much more difficult to label at scale in this old way.

Tim: It’s fascinating, and just to double click on this point that it’s not magic, and you still need humans and subject matter experts, is that you want to start out with a set of ground truth labels that maybe comes from a human, but then you can use your technology to extrapolate from that to a much larger set of data, much, much more rapidly. I think it’s also clever how you use other organizational signals to try to come up with those labels in an accurate way.

Alex: Yeah. I like the parallel to software development. You don't write down a bunch of zeros and ones every single time you want to compile a new program; you reuse assets, you use a higher level of abstraction, this higher-level knowledge. And similarly, that's what we're trying to do here. A lot of times, when people try to apply AI in some enterprise setting, the recommendation they get is — okay, great, you built all these things before, you have knowledge bases, you have legacy rules or heuristics, you have experts internally who have all this rich knowledge, you have models, you have all this other stuff — throw it out and start labeling data from scratch, every single time you want to train a new model in a new setting. And what we're saying instead is: no, use all that information. Use those organizational resources, whether it's in a subject matter expert's head (an underwriter, a legal analyst, a government analyst, a network technician, a clinician), and use other models, heuristics, and legacy systems to teach or bootstrap your machine learning model. And the cool thing is you can actually do this without any ground-truth labeled data, although it's often helpful to have a little bit of that.

You'd asked about weak supervision and generative models. A lot of the original work was diving into the algorithmic and theoretical aspects of this problem: okay, we shift the paradigm from labeling data points one by one and assuming they're perfectly accurate, that they're ground truth (which is, by the way, already a very faulty assumption), to now having some programs, which we call labeling functions, that are radically more efficient, more auditable, more reusable, more adaptable, but that are also going to be messier, because they're heuristics. They're not going to be perfectly accurate. They're not going to perfectly cover all the diversity of data out there. They might conflict with each other and have all kinds of other messy aspects. How do you, ideally with formal guarantees, clean and de-noise and integrate those into a training set you can use? That's, in fact, what we've spent half a decade working on: theoretically grounded techniques for using generative models and other approaches to figure out which of these labeling functions to up-weight or down-weight, and how to de-noise and clean them. So, you can take this much more efficient, direct, but somewhat messier input and use it to train high-performance machine learning models.
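Continuing the illustrative sketch from earlier, this is roughly how the open-source Snorkel label model aggregates and de-noises those conflicting labeling-function votes into probabilistic training labels; the two example complaints are made up.

```python
# Apply the labeling functions to unlabeled complaints, then let Snorkel's
# label model combine their noisy, conflicting votes into probabilistic labels.
import pandas as pd
from snorkel.labeling import PandasLFApplier
from snorkel.labeling.model import LabelModel

df_train = pd.DataFrame({"text": [
    "There is an unauthorized charge on my card, please fix this immediately",
    "I would like to change my address on file",
]})

applier = PandasLFApplier(lfs=lfs)     # lfs defined in the earlier sketch
L_train = applier.apply(df=df_train)   # label matrix: one row per example, one column per LF

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=123)

probs = label_model.predict_proba(L=L_train)  # probabilistic labels for training any end model
print(probs)
```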

Tim: Are there certain classes of problems or certain types of data or applications that this is best fit for?

Alex: So, we've applied it to everything from self-driving to genomics to machine reading and beyond, but I think some rules of thumb, in terms of our inbound filtering and our outbound targeting, are around where we think it will provide the biggest delta over other approaches.

One, first of all, is that we handle structured data, especially messy structured data, but we also have a lot of focus on unstructured data. You hear about all these advances in machine learning and AI, all these new deep-learning or representation-learning models that are super powerful but super data-hungry. A lot of them provide the biggest deltas on unstructured data — think text, image, video, network data, PDF, website data, etc. All this very messy, long-tail data. Bigger data, more data-hungry models, and more need for data-centric AI development. That's often where we sit.

Another rule of thumb is how expensive and difficult it is to actually label, relabel, and maintain these big training datasets. If you're talking about something like a stop sign versus a pedestrian, well, stop signs don't change much, so maybe you can get by with a legacy manual approach there. But if you look at that iceberg under the surface of problems we tackle, think about very private data and very expertise-intensive data: financial, insurance, medical, government, and most industries, honestly, anything with user data, network data, or technology. Also think about settings where your data is changing and your objectives are changing, so you have to be constantly relabeling. When you have these aspects, suddenly the cost of just throwing people at the problem to click, click, click, you know, a week or a month or longer at a time per model, becomes infeasible literally for the world's most well-resourced ML teams. And that's where we like to step in to ease that bottleneck.

Tim: You mentioned moving toward even more data-hungry, large deep-learning models. To extend that thought — these very large-scale transformer models, foundation models, have gotten a lot of coverage, and then there’s the ability they provide to maybe do one-shot or zero-shot learning on certain problems, or to refine them or train them with a smaller set of data for your specific use case. Do you see that as an overall large trend? And does that create more need for programmatic labeling?

Alex: I think it’s an extremely exciting trend, although it definitely will take a while to percolate into the enterprise for reasons we can go into — everything from efficient deployment to governance and auditability. But just talking about the tech trend for a second, it’s something that we’re very excited about. We had a recent webinar and a paper from my co-founder Chris’s lab at Stanford on combining these foundation or large language models with weaker programmatic supervision. And then we had another paper we just posted about using zero-shot learning on top of large language models to automate some of these data-centric labeling and development techniques. So, we’re very excited about a whole host of complementary intersection points. And we already support basic pre-trained models of this class and these large language models in our deployed platform. In a nutshell, how I paint it is this: a lot of us are now calling them foundation models because they serve as a great foundation for building or training or fine-tuning custom models or applications on top of them.

They do still fall into the general body of transfer learning techniques. And I’ll stick to very basic and old intuition: you get what you train on, right? So, these large language models — let’s say they’re trained on something like web data. When you actually want to use them to handle geological mining reports or clinical trial documents, or loan documents, or, you know, the list goes on, they don’t just magically work out of the box; the zero- or few-shot techniques don’t just suddenly solve the problem. You still need to label a bunch of data for the specific data and task at hand to get utility out of them. So, they’re very nicely complementary. But there are a lot of very cool, but somewhat cherry-picked, examples of what they do out there. They still need a lot of additional work to get them to production level for enterprise use cases. So, there’s a nice complement there we’re very excited about.
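[As an illustrative sketch — not Snorkel’s own implementation — one way a large pretrained model can complement programmatic labeling is to wrap an off-the-shelf zero-shot classifier from Hugging Face as just another noisy labeling source whose votes get reconciled with the other heuristics. The candidate labels and threshold below are assumptions.]

```python
# Illustrative only: an off-the-shelf zero-shot model used as one more noisy
# labeling source. The candidate labels and confidence threshold are made up.
from transformers import pipeline

zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

ABSTAIN, OTHER, LOAN_DOC = -1, 0, 1

def lf_zero_shot_loan(text, threshold=0.8):
    # Ask the pretrained model whether the document looks like a loan agreement;
    # abstain when it is not confident, and let downstream de-noising reconcile
    # this vote with the other labeling functions.
    result = zero_shot(text, candidate_labels=["loan agreement", "other"])
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_score < threshold:
        return ABSTAIN
    return LOAN_DOC if top_label == "loan agreement" else OTHER

print(lf_zero_shot_loan("This agreement sets the borrower's repayment schedule."))
```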

Tim: You get what you train on — it continues to be a truism. Hey, you mentioned the Stanford AI lab and your partnership with your co-founder Chris Ré — maybe take us back to the founding story here, Alex. You put a lot of time into this before launching a company. You know, I feel like over the last few years there’s been this rush to start the company — venture dollars are plentiful, so get while the getting’s good — and you definitely took a longer path to get to the point of saying, “Hey, we’re ready to commercialize this.”

Alex: Well, it all started with a massive con from my advisor and Co-founder Chris — he suggested this as an “afternoon project.”

Tim: I know there’s some lore about this started as like some math on a whiteboard, right?

Alex: Yeah, it was math on a whiteboard, and then there was a Jupyter notebook. We were teaching an intro course and had just refactored it over the summer to center around Jupyter notebooks, which was a really cool idea that also led to one of the most traumatic office hour sessions I’ve ever had, where everyone in some massive intro course at Stanford came asking how they could install Jupyter notebooks on every device imaginable. You know, “My Tesla screen doesn’t support Jupyter notebooks, please help me ASAP” — a five-alarm fire drill. I still remember that. It was a confluence of trends. One trend that was coming in was we had been working on all these systems for things that were more in the model-centric world — feature engineering, model development, joint inference at scale, all these things that are super cool — but we were seeing this trend kind of hitting us in the face from our users, who were, you know, biomedical data scientists, geologists, all kinds of data scientists saying, “Hey, this is all great, but we’re starting to use these deep learning models and these other models, and really our pain point is labeling the data. So could you help us with that?” Like everyone else back then — this is 2015 — we said, “That’s not our problem, that’s someone else’s problem. We do machine learning. We’ll keep helping you with the fancy models.” Eventually, after being smacked in the face with this enough times, we said, “Hey, there’s actually something here.” Everyone is getting stuck on the data and the data labeling and the data curation. Maybe we should look at that. So that was step one. Then we started thinking about more clever ways to do this, looking at old techniques that did some of it and at what our users were doing. Some of our users were getting very creative, hacking together ways to heuristically, ad hoc, label data. We said, okay, first of all, this is so painful that people are doing ungodly contortions just to hack together training sets. So, we said, “Okay, we’ve got to shift to this data-centric, versus model-centric, realm. Because there’s something here.”

And then number two, we started asking, okay, how can we support this a bit more? So, we had this idea that we would come up with this kind of Jupyter notebook where domain experts could quickly dump in some heuristics of how they were labeling data, and we would try to turn that into labeled training data. And that was the “afternoon project” that then spiraled wildly out of control, because it just led to all these interesting problems: okay, well, how do you solicit information from the subject matter expert? How do you then clean it? Because even a subject matter expert is still going to give you rules of thumb that are only somewhat accurate. So how do we clean that and model it — that’s this weak supervision idea — so that it’s clean enough to train a model? Then how do we build this broader iterative development loop that involves this kind of programming of data, and then training models, and then getting feedback on where to develop and debug next? That was how it all spun up. And then to your question of why we were so slow — which is a fair one.

Tim: It’s a hard problem.

Alex: Yeah, well, honestly, I mean, we were and are very invested in this problem and in the kind of pathway that we’ve been charting with this data-centric direction. And we’re always anchored on where’s the best place for us specifically to center this effort. And for many years — I’m super biased, but — we couldn’t ask for a better place than academia and the purview we had at Stanford. We had office hours weekly. We had everyone from major consulting companies to bioinformaticians to legal scholars coming by and trying to use these techniques. We had purview to work on getting the core theory and ideas right. And we started to put some ideas out there, some code out there, we started to get some pull, started to get a bunch of people who were trying to get me to drop out of the Ph.D. program with a pre-seed or seed round — I had no idea what the terms meant back then, so it was all nonsense to me. But we thought that we had core problems to work out.

And then, four and a half years in, we started looking at what our users were telling us. And they were telling us things like, “Hey, maybe instead of, working on another theorem, which is cool and all you could help us solve the UI problems or the platform problems or the data management problems or the deployment problems or the feedback and error analysis guidance problems.” When that started happening, you know, we started poking our heads up and, decided, hey, it’s entered the next phase. We’re actually moving to a vehicle where we can put together a different set of people with different sets of skill sets to really build a product and a platform and engage more deeply with customers here. That was the next phase. So that was when we finally spun out.

Tim: I love that you were pulled by customers and customer-centric and making those decisions. It seems like you nailed the timing for when the market was ready and started to need these solutions on a bigger scale. But there’s another piece that you just hit on that I wanted to ask you more about, you know, we’ve talked a lot about the labeling aspect and that’s certainly the core of the solution that you provide. Snorkel Flow is a broader framework. Maybe talk a little bit about how that whole loop is important for Snorkel Flow.

Alex: Our whole point, in both research and product, has always been that it can’t just be about labeling. When you think of labeling as this separate step in a vacuum, that’s where you get these very unscalable and impractical model-centric-only setups.

The idea of data-centric development is that labeling and developing your data — so not just labeling, but sampling, slicing, augmenting, all these things that people do as modern data operations — is your primary development tool, not just to get data ready for models, but to adapt and improve models over time. And so, you have to have that whole loop; otherwise you’re flying blind and you’re not really completing this idea of data-centric, guided development. So, our platform today starts with looking at your data — sampling, labeling more broadly, developing, slicing, augmenting, etc. — but then includes a full AutoML suite, mostly just to give very rapid feedback: where am I successfully training a model, and where do I need to go next to continue this data-centric development? And then you can export the model from our platform. We actually support broader multimodal applications. You can, if you want, just pull out the training data and train your own external models — we’re very open — but the core workflow has to include a model in the loop. And it’s more feasible than ever before to do that, given all the great modeling technology that’s out there in the open source these days.
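[A schematic, runnable toy of the loop Alex describes — not Snorkel Flow itself. The placeholder functions below are assumptions standing in for programmatic labeling, quick model training, and the error analysis that sends you back to the data rather than to more hand-labeling.]

```python
# Schematic only: the shape of the data-centric loop, with toy placeholders.

def label_programmatically(documents, labeling_functions):
    # Placeholder: apply heuristics and resolve them into training labels.
    return [(doc, any(lf(doc) for lf in labeling_functions)) for doc in documents]

def train_quick_model(training_set):
    # Placeholder: in practice, an AutoML suite or any off-the-shelf model.
    keywords = {w for doc, label in training_set if label for w in doc.split()}
    return lambda doc: bool(keywords & set(doc.split()))

def error_slices(model, labeled_holdout):
    # Placeholder: return the examples the current model gets wrong.
    return [(doc, y) for doc, y in labeled_holdout if model(doc) != y]

documents = ["refund requested", "great product", "item arrived broken"]
holdout = [("please refund me", True), ("works perfectly", False)]
labeling_functions = [lambda d: "refund" in d]

for iteration in range(3):
    model = train_quick_model(label_programmatically(documents, labeling_functions))
    errors = error_slices(model, holdout)
    print(f"iteration {iteration}: {len(errors)} holdout errors")
    if not errors:
        break
    # Data-centric step: write a new labeling function targeting the error
    # slice, instead of hand-labeling more data or tweaking the model.
    labeling_functions.append(lambda d: "broken" in d)
```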

Tim: One interesting observation about this space called MLOps now is I feel like, and sometimes joke, that companies that start out providing one important piece of functionality across this pipeline, for lack of a better term, whether it’s labeling or a feature store or deployment, you know, want to be end-to-end, and I think you just gave some good reasons why in this data-centric world, you need to be able to close the loop from watching how an application or a model is performing and tie that all the way back to iterating, to what’s happening with your label data. So that’s a good reason. But it also seems there’s a little bit of just, you know, startup imperialism that you want to be end-to-end and provide all these pieces.

On the other hand, you talk about plugging in other frameworks, other deployment mechanisms, other infrastructure management — it seems like you give customers the choice of, “Hey, you can use Snorkel end-to-end, or plug in your best-of-breed for different pieces.” Is that the way you talk to customers about it? And is there a common way that customers tend to engage, or is it really across the board?

Alex: So, maybe three comments that are Snorkel-specific, then I want to go back to that awesome phrase, startup imperialism. First of all, the core of what we’ve been working on very publicly for over half a decade is this idea of data-centric development, which involves labeling. That’s one of several key interfaces — it’s one of the ways you can program your model — but it’s part of this broader loop that involves a set of development activities and feedback from models, as you said. So that’s part of what we’ve always been supporting and aiming to support. A second thing that’s specific to us is that we’re often approaching, as I was talking about before, a lot of zero-to-one type settings where you didn’t have a very sophisticated modeling stack because you were blocked on the data. You’re not predicting, say, customer churn, where you already have the labels and you’re predicting column 20 from columns 1 through 19 — you have just a pile of documents or a pile of chest X-rays or a pile of network flows. And because it’s a zero-to-one setting, there’s often more of a pull to actually get to an end-to-end solution when you do go from zero to one.

And then the third point is just to touch on what you mentioned, which is giving customers optionality. Our goal is to support a workflow that we’ve been working to define over the last six or seven years. But how you integrate different pieces into that workflow is something that we’re extremely open to. We have a Python SDK that maps kind of one-to-one throughout the whole process to make that really easy. And I think that’s critical if you want to play in this space. On the one hand — and I’m super biased — I think the most exciting technologies and projects will have an opinion on a workflow that is more expansive than just one little layer. But I think that workflow has to integrate with the space that’s out there.

It’s an interesting question about startup imperialism and starting off with kind of one slice and then moving toward end-to-end. I think for a lot of folks in the space, there is also just a lot more pull to fill gaps than people may realize. If you just skim blog posts and academic papers, you would get a vastly different sense of AI maturity in the enterprise and the market than is actually the case. People think we have this very complex, blog-post-defined stack in every enterprise, but because of these problems — data is one of them, and others around deployment and risk management, etc. — we’re a lot earlier than many think. And so often companies get pulled because there’s actually a bigger gap to fill than people realize.

Tim: That’s a great point. And I want to talk more about that. So, you start helping a customer with problem X — maybe it’s the labeling issue here — and they’re actually asking you, “Hey, we’re using you for this. We don’t have good solutions for these other pieces. Now help us deploy, now help us monitor. Okay, now help us close the loop.” But that’s a customer pull piece more than it is a high-level architecture strategy decision.

Alex: Yeah. And there’s a lot more of this pull in enterprise AI than people realize because there’s a lot less maturity than people realize just because there’s just so much to do. I think that one of the big challenges from a design perspective is where you draw the line so that you can really focus on what you’re uniquely best at. And we try our best to navigate that. We expand to cover this data-centric loop. We often push customers off and try to help them with reference architectures or connectors for pieces that we don’t think we have a special sauce around or that we shouldn’t spread into.

Tim: So, on this level of enterprise maturity, we have a thesis that we’re really at the beginning of a major wave of ML in production. Over these last several years, we’re kind of coming out of a period of intense experimentation at enterprises. Lots of innovation groups working on ML, working on models, seeing where they can build insights, and trying to get their data pipelines together. Cutting-edge companies certainly have been doing ML for years, but in those sophisticated examples, maybe there’s been an exponential increase in the number of models in production. Not that they are just getting them in production, but kind of the net effect is we’re at the beginnings of a pretty big bow wave of ML in production, both for internal applications, as well as the external applications that might be your company’s products.

So, is that what you’re seeing? Like where are we in terms of the innings here?

Alex: I think we’re in the early innings, and I think it’s exciting because I don’t believe we’re in the early innings for lack of extreme concentrations of talent in the enterprise. There’s more access than ever before to a lot of the core machine learning techniques — the models are often out there in the open source. And so, you’ve got all the right ingredients: money has been put down, and there are extremely talented data science and AI/ML engineering teams. You’ve got a flood of open-source tooling, especially around the models, in the market. But you still have these significant blockers and headwinds that I think enterprises are really just starting to solve. Obviously, the one that we’re anchored around is the data. So for that reason, and for everything else that enterprises are very reasonably and responsibly trying to approach carefully — governance, auditability and interpretability, risk management, and deployment — we’re still in the early innings. And you see this kind of shift from the science-project phase to the real production phase happening. And it’s a really exciting time to be in this space.

Tim: What industry verticals are you having the most success or focusing on the most and does that sort of map to this maturity that you’re talking about?

Alex: Our technology and our platform support a very broad set of problems. If you look at our publications and literature doing anything from, you know, self-driving to genomics to machine reading to many other things, but we focus on templatizing around certain core, very horizontal applications. And then today we work with very, highly sophisticated data science teams, and often with the kind of subject matter experts that have the domain knowledge about the problem in large enterprises across all sorts of verticals. We have a lot of customers — top five, top 10 U.S. banks and others in finance, insurance, biotech and pharma, healthcare, telecom, the government side and a range of others. So, it’s really these cross-cutting applications, things like dealing with unstructured data and, classifying, extracting, and performing other modeling tasks over them that we then templatize and target per vertical, where there are these great highly sophisticated data science teams that are blocked on the data.

Tim: I’ve used the term MLOps a few times in this conversation to describe the space, and I noticed you have not. I wonder if you like that categorization. We had another recent podcast in this series where Clem from Hugging Face and Luis from OctoML hypothesized that in a few years there will be no such thing as MLOps — it’s just DevOps, and the problems that you have around machine learning deployment and management will be the same as any other application.

Do you like this category name MLOps, and do you think it has a future as its own thing or does it all converge?

Alex: I would’ve liked to be in that room for that debate. I don’t know if I’ll do justice to what that discussion covered. Okay, well, why haven’t I used MLOps? I think it’s growing to become a very expansive term, and I don’t have anything against it — I just try to keep what we do a little bit more curtailed. I think there are many ways in which MLOps will remain its own thing, and should. I mean, there’s a big difference between code that is directly defined versus code or programs that derive from large statistical aggregates over massive data sets. They’re just fundamentally different in terms of how you build them, how you audit them, how you govern them, how you think about them.

Even the academic methods are very different — think formal analysis versus something closer to statistical-physics types of analysis. So, I think there have to be parts that are different, but at the same time there have to be many ways in which MLOps becomes closer to traditional DevOps and traditional software development. Obviously, that’s part of what we’re trying to do with data. We’re not going to get rid of all of the messy, unique properties of large data sets, but we can at least treat the way they’re labeled and managed as more of a code asset and take a more DevOps stance versus this kind of manual activity. So, I guess in summary, I’m a big believer in pushing MLOps closer to DevOps — and we’re, in some sense, doing that at Snorkel — but I also think there are going to remain some aspects that just have to be unique and different, even as they get more standardized and commoditized and drift closer to DevOps.

Tim: Great points. Great framing. I completely agree with that. Let me switch gears a little bit. I was rereading your website, Alex, and on the About Us page you have your obligatory description of what the company does and then a “rooted in research” point — we covered that and the cool beginnings from Stanford. And then I was struck that the next big part was about culture. How would you describe the culture at Snorkel? Are you a completely distributed company at this point? How have you continued to build the culture over these last few years, which have been tumultuous, to say the least, with everything going on in the world?

Alex: So, by culture, do you mean what code linter do we use? What’s our favorite Slack emoji pack?

Tim: Um, among other things, yes.

Alex: I’m kidding. It’s obviously one of the most important questions — or the most important — even divorced from the very unique situation we’ve been in over the last couple of years. It’ll sound somewhat vague and cheesy, but one of the most important things, which starts with how we try to recruit and then goes into what we try to enforce and normalize, is this idea that you can have extremely kind, empathetic, friendly people who are also very hard-charging and type A and obsessive about what they build and do — and that you don’t need to have one or the other. I think you can find people to work with in any context who are very fun and very kind but maybe won’t push as aggressively as you need to in the startup world. Finding people who can do both is the special thing. We always try to look for that intersection.

Of course, there are other extremely important things about building an inclusive, constructive, and positive environment. A lot of it is, again, back to cheesy comments, but about the balance of trying to always be extremely positive and supportive, but also, normalizing criticism and editorial input as much as possible as a positive, not a negative.

Tim: Are you fully distributed at this point? Is there an office-centric part of this? I’m sure everyone’s hybrid to some degree — how does Snorkel work?

Alex: Yeah. So, we just soft reopened the Redwood City office — that’s for part of our team — and we have parts of our go-to-market team in New York and distributed. We’re trying to navigate that in a way that’s responsive to what people want to do. We do plan to have some hybrid component and some in-person component. This is kind of an amateur hypothesis, but just from observations over the last couple of years, I think you can do a really good job, and in some ways an even more efficient job, of maintaining one-on-one relationships and small pods over virtual. But you face headwinds for cross-functional interactions and the broader social fabric. It’s really hard to schedule a five-minute Zoom meeting on someone’s calendar to replace bumping into each other at the water cooler or walking by someone’s office and overhearing something. There’s a good essay that gets passed around to a lot of incoming grad students when you start a Ph.D. program, called “You and Your Research.” There was one statement I remember from it saying that the people who always left their doors closed seemed to be much more efficient, but never really got anything done. So, I think there’s some aspect of that: you can be much more efficient with everything as back-to-back Zoom calls, and we want to keep some aspects of that, but you also lose some of that creativity, cross-functional interaction, and of course, social interaction. So, we’re going to try our best to navigate a path where we can capture the best of both. And that will be some form of hybrid that we’re still figuring out with our team.

Tim: Makes sense. And by the way, we love cheesy comments. I think some of those that might seem cheesy are the things that stick with people. Is there one ritual that you’ve established over the last few years that just works well for Snorkel that’s worth sharing?

Alex: We started doing these things we call “Whatever You Want” — “WW” — at the beginning of all hands. We used to do it more than weekly at the beginning of the pandemic, but we do it weekly now. It’s just a retitling of “Show and Tell,” but it’s a couple of slides about any topic you want. And so, it’s a nice way to get to meet people you’re not getting to bump into in the hallway and hear a little bit about some aspect of their life — a hobby, where they’re from, a recent trip they went on. We did a series on failed past startups. So just little snippets, and it adds a little bit more of the other dimensions to people beyond the purely professional interaction. So that’s one thing that we’ve liked.

Tim: In this world of hybrid or remote, using the All Hands effectively, I think becomes really, really important. I did a panel at our CFO conference here with three chief people officers and the chief people officer from SeekOut had a different, but somewhat similar answer to what you just said. They said at their All Hands, they always kick it off with an opportunity for people to celebrate each other, which is something you said was core to your culture too, is to be, celebratory of each other, but still hard-charging. I think those little rituals mean a ton, especially in this world that we’ve been living in.

Alex: Puns are very important also. The thing I’m most excited about is that I recently had a second child, and I was informed by the team that I’m allowed to make two dad jokes per day now. So that’s been double the fun.

Tim: I have two kids also, and dad jokes and bad puns are right down my alley. So, there’s a lot happening in the technology markets, and the public markets have corrected or repriced. You raised this awesome $85 million round last August — that was great timing — and I’m sure you have a lot of cash in the bank. Your business also clearly is going well. What is the posture that you and your management team and board are talking about? Is it sort of, “Let’s keep accelerating here as fast as we can bear”? Is there a little bit of a, “Hey, things are good now, but we’re not sure about coming quarters, so maybe we don’t want to hire quite as quickly as we originally planned”? What’s your posture between sort of the gas pedal and the brake here as we go into the back half of this year? I know no one has a crystal ball, but that’s a top conversation with all the companies that I’m working with.

Alex: It’s certainly an interesting time. Seeing some of it as a return to sanity is obviously, I think, a positive for the space. Those of us who work in AI, especially, are always wary of over-hype leading to winters. For us in particular, as you mentioned, we had recently raised a round. I think once you raise a bunch of cash in succession, you can either kind of go off the deep end or you can instill good cultural habits and practices and grow up a little bit as a company. So, we were always planning to do the latter and grow up a little bit. Obviously, the most important thing is being responsive to our customers, and we see just the same level of demand — even more so for a lot of the projects that we try to anchor around with customers, the ones that are about increasing efficiencies and adding massive business value. And so, we’re still charging ahead at full speed. But we do think it’s a good reminder to be mature as a company and to value efficiency and have that kind of culture and cadence. And I think it’s also a good reminder for the AI space — again, this is a little biased because we’ve been trying to do this from the beginning — to focus on the business value rather than the science projects. We spend a lot of effort in our product and in building our go-to-market motion, trying to align with those teams and projects and budgets that are going to deliver a meaningful impact that’s robust. And so, I think it’s a good validation of that approach.

Tim: Very wise and very consistent with what we’re trying to counsel our companies — don’t stop being aggressive, but efficiency ultimately also matters. And really inspect the new investments that you’re making, because you may want to err on the side of making the runway last even longer.

Alex: Yeah. I mean, we don’t want to slow down during one of the most historic opportunities for growth in AI, but I think you can keep going aggressively forward while also taking a nice reminder about the importance of building good, scalable practices, culture, etc.

Tim: Hear, hear! So, I’d be remiss not to ask — is there a company or two that you think are particularly cool or innovative in the field of ML broadly? Whether it’s an enabling company or a finished application?

Alex: I may not sound too original because the names already came up, but we’re big fans of Hugging Face and OctoML as representatives of other areas of the ecosystem that are very exciting — what they’re doing, and just the evolution around models and around infrastructure. The fact that those companies exist and those technologies are at the stage of maturity they are is what makes data-centric AI development such a thing.

Tim: I’m sure we could do a whole separate podcast on learnings and tips and advice, but any tips for, maybe, the technical founder? Your best piece of advice that you’ve gotten on this journey — anything come to mind that you always think of first?

Alex: This is a little specific to data science and AI/ML, but gravitate toward real customer problems and real customer pain. Don’t obsess over fitting into the perfect stack diagram or matching the perfect paradigm of scalability right away — go to where there are real problems, real data, real use cases, and learn from that.

Tim: Terrific — being customer-obsessed, customer-focused, is the most important thing. So, I’ve got to tell you, maybe as we wrap up here: I’m sitting here — the audience can’t see — with my Snorkel T-shirt on. I’m a little bit of a Snorkel fanboy. A few years ago, some website or magazine interviewed me and asked, what is a company you’re not an investor in that you’re most excited about? And I said, Snorkel. And Alex rewarded me with a box of swag, so I have this T-shirt to show for it. But the other piece that you don’t know, Alex, is that there was a pair of socks that you sent me, with the very kind of fun Snorkel logo. And the socks were kind of too small for me. And my daughter saw them sitting on my desk at my home office, and they became her favorite pair of socks. She plays a lot of basketball. She’s in seventh grade. And I just want you to know that in the seventh-grade girls’ hoop leagues of Seattle, programmatic data labeling is being represented well with some flashy footwear.

So, thanks for that.

Alex: I think that’s going to be one of our biggest growth markets. We’re playing the long game here. So, I’m both incredibly humbled and incredibly appreciative because that’s going to be some great long-term value.

Tim: This is terrific. Thank you so much for your time. Congrats on everything you’re building at Snorkel. Thanks for the insights for other entrepreneurs and customers who are building in this world of machine learning and intelligent applications. And hopefully, we can do this again sometime.

Alex: Tim, thank you so much. And this was awesome.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you’d like to learn more about Snorkel, they can be found at Snorkel.ai. To learn more about IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded and Funded with Sila Founder Shamir Karkal.

Robotics Expert Sidd Srinivasa on Trends and What’s Ripe for Innovation

In this episode of Founded and Funded, Madrona investors Aseem Datar and Sabrina Wu sit down with robotics expert and University of Washington Professor Sidd Srinivasa to talk about the technology and sociological trends that are leading to innovation in the robotics space, where Sidd sees opportunities for founders, and why now is the time to pay attention to what’s happening in the space. Sidd also shares why he is what he calls an “accidental roboticist” and some of the hard-learned lessons from throughout his extensive career.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, digital editor here at Madrona Venture Group, and this week we are diving into a topic that I think we can agree everyone loves to talk about — robotics. George Devol created the first digitally operated and programmable robot back in 1954. And since then, we have been awed by the likes of C-3PO from “Star Wars,” Tipsy, the cocktail-serving robot in Las Vegas, and Scout — Amazon’s delivery robot in Snohomish County here in Washington. Robots are transforming productivity, efficiency, cost, output, and product quality for companies, and many trends are coming together to push the move to automate — from the pandemic, of course, which has pushed for a more touchless, remote-first way of operating, to an enduring labor shortage, to technological innovation in computing, AI, and machine learning, to infrastructure and data-quality advancements that mean the use of computer vision in real time is now possible. All of these trends come together to create almost endless opportunity for founders in the robotics space.

So, this week, investors Aseem Datar and Sabrina Wu are talking with robotics expert Sidd Srinivasa about all of this and so much more. Not only do we learn how Sidd is actually what he calls an accidental roboticist, but he also outlines the areas of robotics that he sees as ripe for innovation and some of the hard-learned lessons from throughout his extensive career. With that, I’ll hand it over to Aseem and Sabrina to dive in.

Aseem: Hello everyone. My name is Aseem Datar, and I’m happy to be here today with one of my fellow investors, Sabrina Wu, and our guest of honor, Professor Siddhartha Srinivasa, to talk about our favorite topic — robotics. Recently, there have been a whole bunch of technological advancements in the field of robotics, which means the world is primed for accelerated innovation and adoption, especially within sectors like industrial, manufacturing, logistics, and many, many more. At Madrona, we’re excited to see where entrepreneurs take it and the kinds of companies they build using this technological building block. We wanted to bring in one of the foremost experts in robotics to talk about some of these recent trends and why now is the time to pay attention to what’s happening in the space.

Sidd, thank you so much for joining us and welcome to this conversation.

Sidd: Thank you so much for having me, Aseem and Sabrina. It’s a pleasure to be here, and it’s a pleasure to chat about robots — one of my favorite things to talk about.

Sabrina: Yes, Sidd, thanks so much for being here. We’re really excited that you were able to join us today. Looking at your background, you were previously at Carnegie Mellon University for 18 years, many of those years at the Robotics Institute. Thankfully, we were able to steal you away from them and have you join the University of Washington, where you’re now an endowed professor focusing on human-robot interaction. You, of course, were also one of the first-wave founders of Berkshire Grey, now publicly traded after having revolutionized the use case of robotics and AI for fulfillment at scale. So, why don’t we start with how you really got interested in robotics in the first place? Was there a pivotal moment for you when you were growing up that got you interested in the field, or what was it, really?

Sidd: That’s a tough one. I wish I could say that there was some origin story — one day I had this revelation. But I’m actually a very accidental roboticist. It was in 1999. I was ready to go do a Ph.D. in mathematics at Caltech or in fluid mechanics at Cornell. The then-director of the Robotics Institute, Raj Reddy, visited IIT Madras, where I was doing my undergrad, and he happened to come home and was talking to us — my dad was a professor there as well. Then he asked me, what are you going to do with your life? And I said, “Oh yeah, I’m going to do one of these things.” He said, “Nope, you should do robotics and apply to this Robotics Institute place” — which, you know, back in 1999 was still fledgling. I said, “Why not?” I still remember, after I got my acceptance, my dad sat me down and said, “Son, you know what the future is? It’s turbines. It’s not robotics. Robotics is just a fad.” I still talk to him about that, about how turbines are doing compared to robotics. I’m sure they’re doing really well. But certainly, I’m glad that I pursued robotics. Ever since, it’s been such a pleasure waking up every morning, working on robots. I just continue to be flabbergasted that people pay me money to do something that I would do for free in a heartbeat.

Aseem: That’s awesome. I thought there was going to be some “I was watching ‘Small Wonder’” kind of story, but maybe not — and who knows, maybe you’re someday going to build robots that operate turbines, and you’ll bring the best of both worlds together. I think we have the most fun learning about backgrounds — these stories that don’t surface on LinkedIn. So, thank you for sharing that. As we at Madrona are thinking about robots, the one obvious question that always comes to mind as we think about the space and build a prepared-mind kind of framework is: why now? What’s changed in the world — robots have existed in some way, shape, or form for decades. Following on that question, what are some of the driving factors that you believe are leading toward the acceleration, the investment in the field, and ultimately toward adoption?

Sidd: It’s been a slow boil in robotics, I must say. It’s not that there’s been some step-function improvement. One of the things that has actually been hugely beneficial is Moore’s law. Computers are getting faster and faster day by day. Essentially the same algorithms that we used to run 20 years ago, when I started my Ph.D., now take seconds to run instead of tens of minutes. I think that’s a huge win, because one of the interesting things about robotics is that your clock is set by nature. It’s set by gravity, right? If you have a coffee mug that you’re trying to pick up and it starts dropping, you can’t slow down time so that your computation catches up to it. You just have to make it not fall. You have to grab it. I think the ability of our computing to finally catch up with nature, and potentially exceed nature, has been a huge tailwind for us. Additionally, there are a few other factors. One is hardware, particularly perception hardware, which has gotten much better and much cheaper.

Some of that has been driven by the self-driving car industry. You know, back when I started my Ph.D., you had to pay tens of thousands of dollars to get a FireWire camera and then buy a giant board that then you would attach to your computer and have to write like custom software to even be able to grab pixels out of a camera.

That’s no longer true — things are much cheaper now. And that’s super useful, not just to bring down the BOM cost of your product, but also to prototype things. It’s much faster and easier to prototype things when parts don’t cost tens of thousands of dollars. That means that now we can very speedily go through several iterations of a robot or a robotic system without necessarily having to think too much about, oh, what am I purchasing right now — so you don’t have to prematurely optimize just yet.

Aseem: Yeah, that’s so interesting and so relevant. I remember the time when I was writing code on embedded systems, and you would think about memory management, right? You would think about how much memory your algorithm is using. And now when you graduate from college, you’re just commissioning another VM. You’re just buying more compute at cents on the dollar, right? I think that’s just fascinating in terms of where the world has gone. Sidd, what about networks? What about latency? Is there something to unpack there in terms of 1) the time to making a decision getting faster, and 2) advances in hardware itself — in terms of precision arms, actuators, and so on? Is there something there that’s also, I would say, a light tailwind that’s pushing this forward?

Sidd: I think one of the things that we’re seeing recently is that there has been greater availability of compliant manipulators — things that can work with and around people. We call them human-safe, but essentially, they have the ability to feel forces and respond to them just like our arms do. And one of the advantages of that is that it transfers a lot of the complexity from the metal to the silicon. These robots, which are not industrial manipulators but compliant manipulators, are much more complicated to program and manipulate, but they are intrinsically safe and intrinsically more capable because they are able to feel forces and modulate their forces.

And I think our ability to wrangle this new piece of technology better is going to be a big unlock for the future. You’re already seeing how, if you look at even automotive, a majority of the manipulation or assembly is done by these giant industrial manipulators that just pick and place. But a lot of the relevant and important manipulation, particularly of flexible things like brake lining or seat cushions, needs forces and torques and very careful handling. And that, even now, is done by people. That is particularly challenging. A future that I can see is the ability for robots to perform those careful force-guided tasks that we humans do so effortlessly.

Aseem: I think that’s a great characterization of what things are coming together. You hinted a little bit at the industrial sectors and so I want to go down that path of how do you think about the market? What are areas that you see are ripe for robotics to play a huge role in? How do you think about industry focus? What are industries where robots are an obvious solution? And tell us a little bit about your thinking around the application of robots to those use cases.

Sidd: One thing I would say is that I have a bias to be a very full-stack roboticist. I like nails and I like to hammer them with whatever hammer is available. I think for me, there are a few criteria that are really important when trying to decide what the right nails are. One is how relevant is it? There are a lot of places where we may think robotics is relevant, but the technology that’s needed to do it is not there at all. Part of the reason for that is that we tend to anthropomorphize. We think, oh, this is easy for me so surely this must be easy for a robot and that’s sometimes true, but it’s more often not true. So, I think being able to find the intersection of something that robots are capable of doing and something that is of value to people is really interesting.

From a sort of vertical point of view, I think there are a few places where robotics has a lot of potential. And I think a lot of that is related to how complexity can be addressed via either changing the process path or changing how the work is done. One of the places that I am particularly excited about is being able to use robotics in farming or agriculture. I think that there’s tremendous potential in being able to merge the way food is produced, the science behind how food is produced, and the way food is harvested, and the way it’s packaged, and the way it’s sold. I think sometimes we assume that, and this is funny because we assume that strawberries have to grow in a particular way. But that’s not even true, right? Like we humans have manipulated the way strawberries grow and appear based on a lot of criteria that we care about. But you can imagine a world where we are optimizing those criteria, not just for our consumption, but also for the ability for robots to be able to pick them. The ability for robots to be able to identify them. The ability for robots to be able to package them. I think when you think about it holistically as my goal is to be able to produce really delicious food and to be able to automate its harvesting and delivery to a person, then you can really think of ways in which you can automate the entire process and think about how you can manipulate the entire process. So that’s certainly something that I’m interested in.

I think another piece that to me is really interesting, that I continue to be fascinated by, is the last mile. You look around, and outside any doorstep there are packages, and it’s interesting and challenging to understand how those packages can be delivered to you faster and better. Right now, it’s both labor-intensive and energetically inefficient. I don’t just mean packages, right? Even if you think about food delivery, I think of it as a full stack of how we would imagine the preparation and the combination of the food such that it continues to be delicious.

But also, something that can be automated and delivered to us on time. Some foods are actually very, very hard to deliver, as we all know. Getting fries delivered at home, or getting a nice Indian samosa delivered while it’s still crispy and not soggy, is super hard. But I think part of that is because of the way those food items are created — because they were never created to be packaged in a box and delivered to us. They were created to be eaten hot off the tava or the plate, straight into our mouths. So, thinking through how that entire process might work would be interesting and valuable.

Aseem: That’s so cool, because it’s complementary to the view we have at Madrona — there’s a strong wave, which COVID started, of a lot of systems and processes now moving toward more autonomous, touchless, contactless, as well as high-quality outcomes, right? Because the more systematic the approach you take, the more consistent the quality that comes out of it. An area that we’ve not talked about here but that’s interesting to us is also around the smart factory and autonomous vehicle assembly. All these things, coupled with the problem of an aging workforce and a shortage of labor, we believe are areas that are ripe for disruption, or I would say opportunity, from a robotics standpoint.

Sidd: Yeah, I completely agree with that. I also think that part of this might be to rethink how processes are engineered. As an example, say you wanted a robot that would do your laundry — this is everybody’s favorite robot. Building a robot that is in your home, loading your washer, pulling the laundry out, putting it into the dryer, taking it out, folding your clothes, might be incredibly challenging.

But you can imagine a world where some entity takes all of your dirty laundry, takes it to some centralized location where there’s a larger physical space, which does all the cleaning for you and delivers it back to you as quickly as possible. By changing the way things are processed and turning it from many small things to one aggregated larger thing. I think you can get potentially a lot of wins. That of course demands that we, as humans, change the way we want to live to some extent. But there’s a lot of evidence to that. Right? In that, like, we’re willing to change the way we work, and we live if it is longer term more convenient for us. We haven’t talked about consumer robotics — robots in the home. I find that to be the most challenging market and something that like I haven’t particularly thought about because building something boutique for everyone’s home is way, way, way harder than building something that sits in its own physical space that can be controlled and manipulated by you and everything goes to it and comes out of it.

Sabrina: We have this debate a lot at Madrona as well of just where is the best use case for robotics? Is it in the enterprise setting? Is it in the consumer setting? And I’m curious, you touched upon it a little bit about the different verticals in agriculture and other, but to be a little bit more specific, if you’re a future founder you know, listening to this podcast today, what opportunities are you seeing? What white spaces are you seeing for a founder to come in? Is it specifically within verticals or applications or do you see it more on the hardware or software side? Just curious what your thoughts are around that.

Sidd: I do think that there is potential everywhere. My own personal interest has always been in trying to find a vertical opportunity and then doing whatever it takes to solve that problem — and specifically looking at places where automation is not necessarily a must-have but can be a ramp-function value add. I think if you start off with, “Hey, if I don’t build Rosie the robot, then I don’t have a business,” well, then you’re in trouble. We want to make sure that there is a business case even with very limited automation. Even there, I would stair-step automation, as oftentimes quality assurance and prediction are much easier than actual physical manipulation. If you can have a value add that’s just about having sensors in your world that help you understand your process, or someone else’s process, better, such that you can make it more efficient — that’s already a big win. And every single motor that you add to your world is an order of magnitude greater complexity, because everything breaks when you interact with the physical world. So even there, when you’re starting to add automation, first ask the question: Can you add automation that doesn’t move but that is able to monitor and enhance your process path through AI, computer vision, and machine learning, and then subsequently use that to bootstrap how you might want to integrate physical automation in?

I think that’s a place where there’s a lot of potential, right? Even just thinking about quality assurance. The biggest challenge with inference and perception as a business is that you might get sharded across so many different applications. You know, someone has a light bulb that they want to assure, someone else has a PCB. Someone else has a salad, and they want to know whether any of the produce is old or not. Someone else may have bananas. Someone else may have other things. So the challenge is in making sure that there aren’t so many different verticals that you’re chasing that you end up doing a poor job of any one of them. The biggest challenge that I see in this particular space is that sometimes people either focus too much on a vertical and it’s too narrow — it’s one of the teeth in a comb and it’s too small — or they try to build infrastructure and that becomes too broad. Like, I don’t want a machine learning model. What I want is a managed service. I don’t want someone to hand me over a piece of code — I want someone to solve my problem. My problem might be, I want to be assured that the chickens I’m selling are all the right shape, or I want to be assured that the fries I’m selling are all counted — there are 37 fries in each bag that I’m selling. Being able to produce value while still not being sharded across too many teeth in the comb is interesting and challenging. I don’t think anyone’s cracked that yet, but I think there’s a lot of opportunity in that space.

Aseem: Yeah, you alluded to this, but I want to ask you this million-dollar question — or maybe it’s a millions-of-dollars question these days, with how companies are performing and creating value. Hardware robotics or software robotics? Let me qualify that a little bit. There’s generally a healthy tension: do I solve a problem using hardware smarts and precision and building more complex arms, or do I actually solve it using the power of software and intelligence and ML models and CV? How should one think about that?

Sidd: I think about this a lot, I must say. The way I think about it is so first of all, I don’t have an answer. I just have a thought about it. I think that the constraints of the built environment often tell us what’s possible and what’s not possible. So, if you look at automating your kitchen, for example, it’s very hard to put belts and pulleys and tubes in your kitchen that plop food on your plate. Just the natural constraints that you created because it’s a kitchen that you want to use — it’s a kitchen that has certain dimensions — makes certain hardware choices possible or not possible.

The fewer constraints you have, the easier it is to solve using only hardware. You can use off-the-shelf mechatronics to solve a lot of these problems. Our beer factories and our Frito-Lay factories are great examples of solving a very hard food manufacturing problem effortlessly, because we’ve removed a lot of the constraints that exist there. My personal taste is in looking at spaces where the constraints of the built environment make it nearly impossible to use off-the-shelf mechatronic solutions — spaces that compel us to use a combination of what we call robotics, whether it’s robot arms or more complicated actuators, and a lot of intelligence: computer vision, machine learning, nonlinear control.

I think those are the spaces that lie at the intersection of things that are very valuable because no one has a solution for it and things that are fundamentally going to get better. Our compute is always fundamentally going to get better. So, I think to answer your question of like hardware versus software, there are many problems that can be solved using just hardware. But I think I gravitate towards problems, which are much, much harder to solve, either constraint wise or from a value proposition point of view, with off-the-shelf mechatronic solutions.

Aseem: That’s very cool. A slightly related question. There’s always this concern around safety, robotic operation, like human in the loop. You know, what happens when a robotic system like Tesla goes off the road and what’s the correction mechanism. I know Sidd, last time we chatted, you had a really cool posture on how you think about humans in the loop. I remember distinctly your comment about these things will fail. We know that they would fail as we are building and getting better. How should you design for that?

Sidd: First of all, I do agree that safety is a requirement. It’s not a nice-to-have, it’s a must-have. I also think that we have to assume that robots will fail. I always believe that it’s not the happy path, not the YouTube video, that you should be looking at. You should be looking at all the times that the robot fails, right — the unhappy path. And I think that humans also have perceptions of robot capability based on the happy path that they see. As an analogy, if an alien being watched YouTube videos of 7- to 10-year-old children, they would think that they’re virtuoso pianists, incredible gymnasts, amazing singers, the best at math — able to recite thousands of digits of Pi — because they don’t see the unhappy path, which is that they’re running around kicking and screaming most of the time. I think it’s the same with robots, right? When people look at videos of robots, what they see is the happy path of robotics.

A lot of what I do is anticipate what the unhappy path will be and address it. This is actually hard because sometimes your robot doesn’t know when something goes wrong. This happens commonly, you know, the robot fails to grab something, and it doesn’t know that it’s failed to grab something.

So, there’s an observability question of we need to make sure that the robot knows that something has gone wrong. I think the second piece is around creating exception paths, such that you can gracefully fail. In most situations, you can gracefully fail. There are a lot of opportunities for correction, particularly if you own the full stack. A lot of the design engineering that is needed is to make sure that we are able to identify what the exception paths are and handle them. Actually, if you watch a high-speed video of yourself grabbing a coffee mug, you’ll notice that you’re just fumbling all the time. You’re failing and failing, and then grabbing the coffee mug. But all of that happens in less than 10 to 15 milliseconds. So being able to react to these in an elegant way is important.
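[A toy sketch of the “unhappy path” handling Sidd describes: verify that an action actually succeeded, and fall back through explicit exception paths instead of assuming the happy path. All names and numbers below are illustrative, not from any real robot stack.]

```python
# Illustrative only: verify outcomes instead of trusting commands, retry,
# and fail gracefully by escalating. Probabilities and names are made up.
import random

def attempt_grasp():
    # Stand-in for commanding the arm; pretend it succeeds 70% of the time.
    return random.random() < 0.7

def grasp_with_recovery(max_retries=3):
    for attempt in range(1, max_retries + 1):
        succeeded = attempt_grasp()
        # Observability: confirm the grasp (e.g., via force/contact sensing)
        # rather than assuming the command worked.
        if succeeded:
            print(f"grasp verified on attempt {attempt}")
            return True
        print(f"attempt {attempt} failed, re-planning and retrying")
    # Graceful failure: escalate instead of silently continuing.
    print("could not grasp object, flagging for human assistance")
    return False

grasp_with_recovery()
```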

In terms of human in the loop, one of the things that I believe in strongly is being able to leverage human feedback whenever and wherever possible. You always want to build systems where you can, either offline or even online, annotate data and annotate the robot, such that it’s able to learn from its experiences as well as from human supervision. I think that we have a lot of tools available now that help us do that. We have the ability to capture large amounts of data. We have the ability to send that data to annotators who are able to annotate it for us. To me, being able to build continual learning algorithms and formalize that is a way to capture human insight without necessarily having to rely fully on it.
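
As a small, hypothetical illustration of that capture-and-annotate loop — not how any particular robot stack actually does it — the sketch below logs each episode and flags failures for offline human annotation, so a continual-learning pipeline can pick them up later. All field names and the file format are assumptions.

```python
# Hypothetical sketch: log robot episodes and route failures to human annotators.
import json
import time

def log_episode(observations, actions, outcome, path="episodes.jsonl"):
    record = {
        "timestamp": time.time(),
        "observations": observations,               # e.g., downsampled sensor readings
        "actions": actions,                         # commands the robot executed
        "outcome": outcome,                         # "success" or "failure" as judged by the robot
        "needs_annotation": outcome == "failure",   # unhappy paths go to annotators
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```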

Sabrina: That’s fascinating. I’d love to pivot a little bit and have you tell us about your journey at Berkshire Grey? You were one of the first founders of the company, and now they are one of the leaders in providing robotic picking and packing technology used by companies like Target and FedEx. Can you tell us a little bit about how that came to fruition? What were the challenges you saw in the industry at the time? And I would love to learn a little bit more about your experience, scaling the business and ultimately making a bet on the future.

Sidd: I still have such warm feelings about my time at Berkshire. I really loved it. It coincided with my daughter being born, so it was a pretty epic time for us as a family. I’ve seen my daughter grow — she’s seven years old now. I can tell how old Berkshire Grey is based on how much Sameera has grown. Obviously, full credit goes to a lot of people. I’m just one of the people who was part of this journey.

But I think the central thesis was always this idea of being able to build a full robotic stack for automation. One of the things that we had observed was that there were some really amazing companies out there, but they were providing a Lego block that would attempt to fit itself into a giant jigsaw puzzle. Like saying, “Hey, I have a nice picking system, or I have a nice system that can move a tote from one place to another.” You realize very quickly that integrating a picking system with a very complicated warehouse management system that has so many inputs and so many outputs is much harder than building the picking system itself. Even if you have the best picking system in the world, your ability to integrate it with even one integrator is very hard — and then think about having to integrate with 10 or 20 of them, right? Those kinds of businesses were failing. Not because they didn’t have a beautiful, perfectly crafted Lego block, but because it didn’t fit in the house. It was too much work to make it fit in the house. You would have to take the house apart and put it back together.

The central piece of Berkshire Grey was: give us an empty space. As an input, trucks come in, and as an output, packages come out — we won’t tell you what’s in this empty space, and you don’t tell us how to control that empty space. It was a huge bet for us to think about automation that way, because we had to believe that people would give us this empty lot. It’s a huge investment for people to give us this empty lot, but the positive was that we could fill this empty lot with whatever we wanted — people, robots, anything — and we controlled the entire experience. That was what we really sought to do. I must say, initially a large part of it was not automated, but still, the input-output relationships were maintained. Over time, as more and more maturity came about — and obviously, since I left Berkshire Grey, they’ve become even more mature in everything that they’ve been doing — you fill out more and more pieces of this Lego house, but you control everything that happens in there.

So, I think that was a big learning for me. Another learning is that when we were four people, each one of us had to write code, talk to vendors, be a program manager, weld robots. I really enjoyed that because I just love building robots. As the company grew to 100 and then 200 people, we had to organize ourselves into various roles. That was a lot of fun too, but fun in a different way, and it potentially needed a different set of people. Obviously, I’ve done a few things since Berkshire Grey, and I realized that it’s almost like shedding skin. You have one skin, and then you molt, you shed that skin, and a new skin comes about. You have to accept that the people who were part of the original skin may not necessarily be the ones who are ready for the next one, or the one after that. Some people might grow into those roles and those opportunities. But I think just accepting that was valuable.

I think another lesson that I learned was that customers don’t want to tell you anything. This was incredibly frustrating for us because we just wanted to know what they actually wanted to solve.

If we knew what they wanted to solve, we could do it, but it took us a material amount of time to earn their trust before they would open the door more and more. I think that was really interesting for us.

Sabrina: That’s awesome. I hadn’t heard that story before. You know, from your experience at Berkshire Grey — and as you mentioned, you’ve now worked with a lot of earlier-stage companies and ideas since then — I’m curious to hear what mistakes you’ve seen people make along the way, and any advice that you have for new founders as they think about their journey in robotics.

Sidd: Oh boy, have I made a lot of mistakes. I think that in some ways the scars that we have are what help us not make those same mistakes again. That’s probably the only value that I provide — I’ve made more mistakes in robotics than other people, so I can at least tell you what not to do. I think it’s really important to carefully think about what your minimum lovable product is. I cannot stress how important that is. People fall in love with a certain way of doing something or with a certain piece of technology, and they forget that in the end it has to be valued and loved by your actual end customer. This was, frankly, a big struggle for me too, because I’ve been building robots for so long that I have a way of building robots. I have to unthink that sometimes, because I don’t want to be stuck in that same rut. I think the other thing is that a lot of people who want to build robots come from software or AI or machine learning and forget about, or at least don’t have enough scars from, the long lead times for getting anything done in hardware. I was actually just talking to somebody who was fascinated by how hard it is to do integration testing in robotics. They were telling me, “Oh, you know, with software, you just click this button and then you can run integration tests on everything. How do you do that with hardware?” I was like, “Nope. It can’t be done.” You have to actually have a QA team that goes out and does these tests for you. You have to pay them a fairly significant amount of money to do that, and it takes a significant amount of effort.

So, I think there are certain mental models from only building software that you need to rid yourself of. That said, there are other people who have only built hardware and want to build robots. They build amazing, beautiful hardware systems, and there, too, there’s a failing, because you believe that everything can be done with hardware ingenuity. Whereas I keep telling them, computers are free — instead of building a mechatronic way of, let’s say, isolating a part, “Hey, just put a camera there and then it’ll tell you where it is.”

So, I think that robotics is a funny space that requires you to know both hardware and software. My advice would be to make sure that you have enough people in the room who have enough scars from making mistakes in hardware and software, and who have the nuance to be able to lead them to do the right thing. I think that’s been the biggest learning for me.

Aseem: Yeah, very profound. It’s almost like: go hire the people who have made the mistakes so that the robots don’t make the mistakes. It’s amazing what we can take away from this conversation. Hey, Sidd, I know that the only thing between you and dinner is us, and ever since you mentioned samosas, I’m envisioning you going off to a room — a Bat Cave in your house — pressing a button, and the robot starting to fry a samosa.

Thank you so much for making time. There are a lot of aspiring founders that we’ve talked to who are deeply interested in and very passionate about this space, and I’m sure they will take a lot away from this conversation. So, thanks for spending the time, and thanks to those of you who tuned in.

Sidd: Thank you.

Coral: Thanks for joining us for this week’s episode of Founded and Funded. If you’re interested in learning more about Madrona’s investments in the robotics space, you can check out the show notes for Aseem and Sabrina’s contact information. Thanks again for joining us and tune in, in a couple of weeks, for our next episode of Founded and Funded with Snorkel’s Alex Ratner.

SeekOut CEO Anoop Gupta and VP of People Jenny Armstrong-Owen on AI-powered talent solutions, developing talent, and maintaining culture

SeekOut CEO Anoop Gupta and VP of People Jenny Armstrong-Owen

This week on Founded and Funded, we spotlight our next IA40 winner – SeekOut. Investor Ishani Ummat talks to SeekOut Co-founder and CEO Anoop Gupta and VP of People Jenny Armstrong-Owen about their AI-powered intelligence platform, the importance of not only finding and recruiting new hires but also developing and retaining employees within a company, and maintaining SeekOut’s own culture while seeing significant growth over the last year.

This transcript was automatically generated and edited for clarity.

Soma: Welcome to Founded and Funded. I’m Soma, Managing Director at Madrona Venture Group. And this week we are spotlighting one of our 2021 IA40 winners – SeekOut. Madrona Investor Ishani Ummat talks with CEO and Co-founder Anoop Gupta and their Head of People, Jenny Armstrong-Owen. SeekOut is one of our portfolio companies, and so we were very honored that our panel of more than 50 judges selected them for our inaugural group of IA40 winners. SeekOut provides an AI-powered talent 360 platform to source, hire, develop, and retain talent while focusing on diversity, technical expertise, and other hard-to-find skill sets.

We led SeekOut’s Series A round of financing and have worked with the team closely since before then as they fine-tuned their initial product offering. The company has had massive success. Earlier this year, they secured a $115 million Series C round to scale their go-to-market and build out their product roadmap, including powering solutions for internal talent mobility, employee retention, and the like — all topics that Anoop and Jenny will dive into with Ishani today. With that, let me hand it over to Ishani.

Ishani: Hi, everyone. I’m delighted to be here with Anoop Gupta, the CEO of SeekOut, and Jenny Armstrong-Owen, SeekOut’s head of people. SeekOut is building an AI-powered talent 360 platform for enterprise talent optimization and was selected as a top 40 intelligent application. We define intelligent applications as the next generation of applications that harness the power of machine intelligence to create a continuously improving experience for the end user and solve a business problem better than ever before. I’m so excited to dive in today with Anoop and Jenny — thank you both so much for being here.

Anoop: Hey, Ishani, it’s wonderful to be here. Thank you for making time for us.

Jenny: Agreed. Thank you so much. It’s great to be here.

Ishani: So, I’d love to start out by going way back. Anoop, you were a professor of computer science for over 10 years, co-founded the virtual classroom project that quickly got acquired by Microsoft. In 2015, you left Microsoft to start the precursor to SeekOut. Tell us about what led you to the core talent problem that SeekOut is solving today.

Anoop: So, Ishani, when we left Microsoft, we left because, you know, Microsoft was just an absolutely fantastic place to innovate, but what Microsoft legitimately wants you to do is to get on an 18-wheeler and discover some big island, and we wanted to be on a mountain bike exploring opportunities because it’s such an exciting world out there. Given my background of running Skype and Exchange, the first thing we actually settled on was Nextio, which was a messaging application. The whole notion was that today people hide their email address and phone number because once you give it out, people can spam you. We were not being so successful there, so we built an application called Career Insights. What Career Insights was about is you analyze all the resumes in the world, and if you do that, then we can say, “Hey, if you are a UI designer at Microsoft, what are the next possibilities? Where are your peers going?” And if they were going to Facebook, we could tell you where Facebook UI designers were leaving for and what they were doing next. So, it became Career Pathways inside of that. And we said, “Oh, this is so useful for recruiters and talent people,” so we pivoted there, and since then, our passion and our understanding of what is missing and what could be done better have led to the growth of SeekOut in talent acquisition and what we bring to the table.

Ishani: That’s so great. You sort of found your way to the recruiting market, to the recruiter as an end customer, but beginning with this problem of career pathing and pathways. It’s only been amplified over the course of the last decade, let’s call it, and it seems sort of prescient — looking at this moment in time, that was very acute foresight.

Jenny, I’d love your perspective. This talent environment has evolved so much in the last few years, in ways that even Anoop and SeekOut could not have predicted, with the pandemic and everything like that. We all see and feel the Great Resignation and the ongoing talent war in the tech world. You’ve been on talent teams for 20 years — what elements of this were predictable, and what has taken you by surprise?

Jenny: Well, definitely what is very predictable is that the tech world continues to explode and grow. I read a statistic in the New York Times that tech unemployment is 1.7%, which is basically negative unemployment. So, that’s not a surprise. What was not predictable was COVID — the ability for folks to literally work from their homes. It released the boundaries around what was possible for folks. And I think that’s one of the biggest challenges for organizations: if you didn’t snap and adapt to that, you were not going to be able to meet your hiring goals.

One of the things that I love about being here at SeekOut is going and finding people wherever they are. And so for us, we’re not restricted to Bellevue, Washington, or Seattle, Washington, and I think that’s one of the things, especially about our tool, that is so incredibly powerful. If you’re an organization that can embrace remote, that can actually make you so much better than restricting yourself geographically. That’s one of the things that I think has been a huge benefit for us. I think we’re embracing a new paradigm of relationships with employees, and it’s going to be a much more virtual relationship at times than it is a physical one.

Anoop: One of the things when we got into this is we said, “Hey, digital talent, technology talent, is really important,” and then COVID happened — Satya said “two years of transformation in two months,” right? So the accelerating rate of digital transformation, something we were already focusing on, really increased the value of what we’re doing. The second thing that’s happened over the last two years is the emphasis on diversity. A lot of young people are saying, “I don’t want to join a company if I don’t see that they are embracing diversity, inclusion, and belonging in a genuine, authentic way.” We believe a lot of talent exists. It begins with how you hire, how you understand what exists in talent pools, and then being able to find those people. The problem that leaders have — business leaders, talent leaders — is that they have good intentions, but translating those great intentions into concrete actions and results has been hard, and SeekOut really facilitates that.

Ishani: It’s such a good point about the market evolving — in some ways that you are able to control, or at least react to responsibly, and in other ways that are entirely out of your control, where sometimes tools like SeekOut can help you and sometimes you have to build it internally. It’s a culture thing. It’s an intangible. But let’s talk a little bit about the tool you’ve actually built. The way I think of SeekOut is as a product that’s evolved a lot, from a talent acquisition tool to a more 360-degree talent intelligence platform. But it didn’t start that way. Walk us through the journey from a talent acquisition tool to an intelligence platform.

Anoop: My Ph.D. thesis was on AI and systems. My co-founder Aravind came from building the Bing search engine. When you look at all of these areas, AI is just a core part of it. To use an analogy — when you go to Google and do a flight search for UA 236, it understands that you are doing a flight search, that UA is United Airlines, and that you’re probably looking for arrival or departure times, and therefore this is the relevant information. In a similar vein, SeekOut is a people search engine, so we need to understand a lot about people. When I search for Anoop Gupta, our search engine realizes that Anoop is a first name and Gupta is a last name — and that it is a common name in India, right? So, we can get a lot of information that helps us. Similarly, normalizing for universities and companies is really important.

SeekOut is very special in that it brings data from many, many different sources and combines it together. As we wanted to go after technical folks and technical talent — and I’m just using that as an example — you get GitHub, you see the profile on GitHub, and you ask how it matches to the LinkedIn profile and whether they are the same person. It takes AI to figure that out. Then you want to look at all the code and information that you find, and you ask: what is their coder score? How good a coder are they? Do they know Python? Do they know C++? So, we started bringing those things inside of it, and all of those are inferred. Security clearance is another example. People don’t mention security clearance often, so we look at job descriptions for the last many years, and we ask, did the job description say “This role requires security clearance and top secret,” or whatever? And then, if there are enough of these positions where that is required at that company, at that location, we say you likely have security clearance.

So, AI is fundamentally baked into the product, but we also take the approach that while AI is everywhere, it is designed as a complement to the human and not as a substitute for the human recruiter or sourcer. That is an important principle for us. The human is doing what they are best at, and all of the AI and logic are doing what they are good at, to help the human be more successful.
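
The clearance inference Anoop describes is essentially a counting heuristic over historical job postings. Here is a minimal, hypothetical Python sketch of that idea — this is not SeekOut’s actual code, and every name, field, and threshold is an assumption for illustration only.

```python
# Hypothetical sketch: infer likely security clearance from historical job postings
# at a person's company and location. Names, fields, and the threshold are illustrative.
CLEARANCE_TERMS = ("security clearance", "top secret", "ts/sci")

def likely_has_clearance(person, job_postings, min_share=0.6):
    relevant = [
        p for p in job_postings
        if p["company"] == person["company"] and p["location"] == person["location"]
    ]
    if not relevant:
        return False
    cleared = sum(
        any(term in p["description"].lower() for term in CLEARANCE_TERMS)
        for p in relevant
    )
    # If most roles at that company/location required a clearance, tag the person as likely cleared.
    return cleared / len(relevant) >= min_share
```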

Ishani: We talk a lot about intelligent applications having a data strategy in order to augment workflows and solve a business problem better than ever before. All of what you described is so well steeped in that philosophy of pulling in data from a host of public sources and then being able to drive a better product around it and surface insights that matter. One of the core features of SeekOut that customers love is the search functionality — sitting on top of all that data, the search just works. Can you talk a little bit about how you handle and process all of this data to make it work like magic for the user?

Anoop: So, one is, you’re very right — it’s actually a very hard problem when you have 800 million profiles and data coming from lots of sources, and the data is not static. People are changing jobs, people are changing things. It’s all dynamic data, so how does one make it work, and make it very performant? My co-founder, again, was one of the movers and shakers behind the Bing search engine, and because we come from that background — Googles and Bings have to handle very large amounts of data — we know how to construct the index structures and how to do the entity formation and combine it together. So that is core to what we do. Then, on top of all of that big data, when you say, “Can you clone Jenny and find us someone similar?” — now that is an impossible task, because the way she does the job, her humor, the nice person that she is, those parts are so hard to replicate — but you still have to do all of the matching, right? Or when you parse a PDF resume, how do you extract the skills? When you parse a PDF job description, how do you parse the requirements — what are the must-have requirements, and what are the nice-to-have requirements? So, there’s just an infinite number of problems, and we keep tackling them one at a time.
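
As a toy illustration of that last point — splitting job-description requirements into must-have versus nice-to-have — here is a deliberately simple, hypothetical sketch. A production system like the one Anoop describes would rely on trained NLP models rather than keyword rules; this only shows the shape of the problem.

```python
# Hypothetical toy sketch: classify requirement bullets as must-have or nice-to-have.
# Real systems would use trained NLP models, not keyword rules.
import re

NICE_MARKERS = re.compile(r"\b(nice to have|preferred|a plus|bonus)\b", re.IGNORECASE)
MUST_MARKERS = re.compile(r"\b(required|must have|minimum)\b", re.IGNORECASE)

def classify_requirements(bullets):
    must, nice = [], []
    for line in bullets:
        if NICE_MARKERS.search(line):
            nice.append(line)
        elif MUST_MARKERS.search(line):
            must.append(line)
        else:
            must.append(line)  # default conservatively to must-have
    return must, nice

must, nice = classify_requirements([
    "5+ years of Python required",
    "Experience with Kubernetes is a plus",
])
```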

Ishani: It seems like you also, though, have to be so semantically aware of the context, right? That’s exactly what you’re talking about with the job description. How do you parse out requirements versus any of the other components? And how do you parse out whether someone might have met those requirements? So much is evolving in this field of semantic awareness, semantic search, and natural language processing. What are the kinds of underlying models that you use? Have they really evolved in the last few years as we see some of the transformer models or CNNs start to make a step-change in technology?

Anoop: Our models are continuously evolving based on what the users are doing, how they’re using it, and what their needs are. We do a lot of building ourselves, but we also leverage third parties. We also have a notion of a power filter, for example. So, think about synonyms: you say people who know JavaScript are a short distance away from TypeScript, right? Or for people who know machine learning, there are so many different kinds of words that people use on GitHub, whether it’s Keras or TensorFlow or PyTorch — how do you find the equivalencies? You can find some things through correlations or other algorithms — what makes sense, what does not make sense. So, Ishani, there are just a lot of different things that we are continuously doing. There are different kinds of algorithms and networks that get used for different types of natural language parsing and what we do. But I’ve always said, from when we were at Microsoft, that eventually it is the data that you have, because everybody publishes their algorithms, and if you have the right data, you can do so much more. It is the data, and then the intelligence on top, that I think is really important. You’ve got to have the right data, and then, of course, the right people and the algorithms to get to that intelligence.
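
One common way to find that kind of equivalence — though not necessarily the way SeekOut does it — is to embed skill terms and treat nearby vectors as near-synonyms. Here is a minimal sketch using the open-source sentence-transformers library; the model name and similarity threshold are illustrative assumptions.

```python
# Hypothetical sketch: expand a skill query with embedding-based near-synonyms.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
skills = ["JavaScript", "TypeScript", "Keras", "TensorFlow", "PyTorch", "machine learning"]
skill_embeddings = model.encode(skills, convert_to_tensor=True)

def expand(query, threshold=0.5):
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, skill_embeddings)[0]
    return [skill for skill, score in zip(skills, scores) if score >= threshold]

print(expand("TensorFlow"))  # likely includes related frameworks such as Keras and PyTorch
```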

Ishani: So, it really goes back to this concept of having a data strategy early, and being able to be nimble in evolving the underlying technology and application intelligence. We always talk about garbage in, garbage out. So, it’s being able to really understand where your data is coming from, and semantically parse and structure it, to then give your end user what we call magic.

Anoop: Yes. Yes. The problem with data is that data is not clean. So, knowing how to efficiently clean up that data, and use ML models to say these are extreme exceptions and this is what to look at, becomes super important.

Ishani: So let’s zoom out a bit. We’ve talked about this briefly, but over the last two and a half years or so, work has changed so much. Hiring has become hard. Engaging with employees has never been more important than it is today. Retention is hard, and SeekOut is doing really well in part because of that macro tailwind. From a company growth perspective, how did you recognize and take advantage of that moment in time?

Anoop: Helping companies get a competitive advantage by recruiting hard-to-find and diverse talent was the model for us from the very beginning. Then all these things happened, and we’ve grown 30X in revenue over the last three years, our valuation is 50X what it was three years ago, and we have very high net retention and amazing customers. But we hadn’t thought of everything. We were focused on talent acquisition — that is, how do we bring in external people? Then with COVID, the great reshuffle, and the great resignation, many companies like Peloton stopped hiring externally, and we asked, what are the opportunities we can create for the people who are already inside? So, our more recent focus on retention is really big. Here’s the big story that we talk about: it is truly about the future of the enterprise. We believe winning companies are realizing that the growth of people and the growth of the organization are inextricably linked. So, our mission has broadened, and it has become to help great companies and their people dream bigger, perform better, and grow together. That’s the mission, and it’s a fundamental mission for every CEO and business leader, not just the HR leader. What we are doing is using technology to ensure that companies and talent are aligned and empowered and growing together. Or, put another way, we’re saying, “Hey, we’re going to help organizations thrive by helping them hire, retain, and develop great and diverse talent.”

Ishani: You know, SeekOut was really in the right place at the right time to take advantage of that and actually help people through that transition. But you must be experiencing this internally as well. You talked about 30X in terms of growth, but you’ve also tripled headcount in the last year, and I think you anticipate doing it again this year. How do you maintain culture — and Jenny, this is a question for you — in such a high-growth environment?

Jenny: It’s one of my favorite questions — I get it a lot in interviews. Culture has become probably the most important thing in a world where people are free agents, and they want to work at a place that aligns with their values and the way that they want to grow and develop with a company. So, I will share this. I was looking at a number of different companies, and I met Anoop, and our first conversation — Anoop, I don’t know if you remember this — was supposed to go for an hour. We went over 90 minutes, and in that moment, I knew that this was different. This was a different place. The culture here really does emanate from Anoop, Aravind, John, and Vikas — the folks who started this company. From my perspective, our job is to make sure we don’t have cultural drift, because we don’t have to fix our culture. Our culture is phenomenal. Candidates across the board tell us they’ve never had a candidate experience like this before. Everybody they meet with is super kind and helpful and collaborative. So for us, it’s really keeping our eye on these cultural anchors and making sure that we’re staying true to them.

So, in the hiring process, we make sure that for every single person who comes here, there’s a diversity interview where we talk about what is important to you in terms of diversity, belonging, equity, and inclusion. To Anoop’s point, people want to go where they feel like they’re going to belong. Then diversity can thrive, and equity can thrive, but you have to have that sense of belonging first. So for us, it’s very much staying focused on that, and everything that we do is around driving programs, opportunities, and conversations that reinforce it. In fact, I will admit, I suggested to Anoop early on that this was not going to scale as we grow — we’re 150 people today — but we start every Friday All Hands with 15 minutes of gratitude. I admit now that it is absolutely scalable, and we’re going to continue to do it, because it is by far the favorite meeting of the entire week — that moment we set aside to say nothing is more important for us right now than sharing our gratitude with each other. So I feel super fortunate to be at this intersection at a time when it is tough, right? Companies are struggling to keep their culture intact in a world in which everything is shifting so quickly.

Ishani: That’s such a good point that begins in the interview process and it continues in the onboarding process. Then it’s an everyday commitment to reinforcing your culture. I think people do have really good elements of each of those. But it’s rare that you find somebody so committed to all of them.

Jenny: It starts with Anoop.

Anoop: So, you know, Jenny said it so well — it comes from a deep belief that people are the most foundational element of our success. We truly believe that for ourselves. I’ll give you an example in a story. We were looking for, I think, the CRO. We had an executive search firm, and they said, “Anoop, you seem to be open to meeting a lot of people. Are you sure you have enough time?” And I said, “I’m always there when it’s a people question. People are so important.” We have four OKRs now — these are the company goals. Our main goal is that our people, culture, and execution are our competitive advantage. I truly believe in that. It is not our AI knowledge. It is not that we are smarter. It is that, as a company, who we bring in, how we think, how we execute, how we collaborate, how we disagree yet commit, how we hold each other accountable, and how we are nice to each other set us apart.

We want to be the ones to show that nice people can win — kind people, people with empathy, can win. You don’t have to be a jerk to get ahead. That is just a fundamental belief for us. And that has helped with our retention. That’s helped with our recruitment. That’s helped with the energy and the whole self that people bring to the company every day. And I think that’s a huge part of our success.

Ishani: The recruiting example of the CRO is so interesting because it really does delineate that there is a real and important place for tools, but there’s certainly a line where that stops — where you, Anoop, take the time. You know, it would be a little bit facetious, as a talent optimization platform, if you didn’t take the time to bring in your own talent and really make sure that they fit the organization’s culture and ethos and want to be where they are. So certainly, there’s good continuity between SeekOut’s mission, SeekOut’s product, and how you operate.

But also, there’s a role for the talent optimization platform that you use — and presumably you use SeekOut at SeekOut.

Anoop: So, you know, the other side of the story is this: every exec search firm that I talk to gives me some candidates, and sometimes they are diverse, sometimes they’re not. I say, well, let me find you some women candidates, let me find you some Black candidates. They exist — you just don’t know; you need a better tool.

Ishani: It’s very clear that there are roles here, and these tools are augmenting how people do their jobs in ways that haven’t ever happened before. But it is an augmentation — with learning, with intelligence, and with automation. There are still very clear roles for humans: how do you build, for example, a culture the way Jenny does, right? And how do you maintain that? It also speaks to one of the product focus areas of SeekOut, which is retention — really retaining your talent and looking internally. Jenny, talk to us a little bit about some of the strategies that you use, whether or not they’re related to SeekOut’s product, to maintain and retain talent.

Jenny: Yeah, and thanks. I think it’s actually one of the reasons why, when I met with Anoop and he cast the vision for what SeekOut was going to be, I got so excited. As someone who’s led people teams now for way too many years to admit, I think getting folks in the door, getting them hired, is absolutely critical and important. But growing, developing, and evolving as teams, with folks who are committed and engaged — that is the job, right? That is every day, all day, thinking about the people that we already have here. That’s one of the things about enterprise talent optimization — where we’re going, it’s going to revolutionize people teams. After so many years of not having really effective tools on people teams, we’re building a world in which the tools are going to be so complementary, and it’s going to free people teams and leaders up to do what they do best, which is really about developing people.

So, for example, yeah, we’re 150 people. Well, we’re going to be implementing a people success platform. We’re going to be making sure we’re touching base on the things that matter the most to people, which is all about skill development, acquisition, and growth. That’s fundamentally why folks leave, right? Especially in the tech world, because they want to do different things, or they want to be able to stretch and grow. One of the things that’s awesome about startups is you have infinite ability to grow your people in whatever direction they want, because the opportunity is here. It’s one of the reasons why I stayed at my first tech company for so long — I was able to do and grow and be so many things. And that’s one of the things we talk to people about in terms of our value prop when we’re interviewing them: “Hey, we are interested in you for this, but guess what? The world is your oyster at SeekOut, and wherever your passion wants to take you, we are going to support that passion.”

Ishani: What you’re saying around giving people the opportunity to grow is incredibly aligned with SeekOut and with the mission of the company, but also, again, with the product. It is also very hard to execute on. To say — we have a high-performing software engineer in our machine learning division who wants to go try out product management, right? What are the tools that you use at SeekOut, and how do you actually execute on that?

Jenny: Well, I think that we are still in our nascent stages. We started last year at 40 people. We’re now at 150 people. What I would say is building the capability in leaders to be aware, to be having these conversations, and to be free enough to think beyond the roadmap and the things that are getting done today. So, I think you have to hold both things tightly and loosely at the same time, if that makes any sense. And it requires a high level of change management and org development skills. We have to build whole-brained leaders who can look at our people with both things in mind: executing on the deliverables that we have today, but fundamentally making sure you’re having this other conversation, and driving it consistently so that there’s never any dissonance. I think that’s the challenge. Creating too much space between those conversations, or not having those conversations at all, creates the dissonance. Then that creates the drag and the drifting. So, for me, one of the things that we talk about a lot is, who do we have?

Anoop, I would love for you to give your enterprise talent optimization summary, because I think it is so compelling — the tools that we’re going to be able to provide. To your point, Ishani, I don’t have specific tools today. I mean, I can use my SeekOut tool, which is awesome, but we’re also small enough that we can kind of do a lot of this one-on-one. But Anoop, I would love for you to add onto that.

Anoop: You know, the cost when a great employee leaves is almost 2X their annual salary, because it takes so much for the new person to come in and get up to speed, and meanwhile, the products are delayed, along with whatever function they might have been driving. So that’s why it’s so critical, and that’s why people care about it a lot. One of the things I say is that companies are deluged with data. There’s data flowing out of everything, but when it comes to data about their people, companies don’t understand it — the data is siloed, or the data doesn’t exist. They may not have the external data. They may not have what people did before. And there is missing data. You know, your manager doesn’t know, in a large company like Microsoft or VMware or Salesforce, where the open jobs are. What are the matching jobs? What are the skills? What does it look like? So, the data about employees is missing, the data about opportunities is missing, and then how do you take opportunities and data and match them to people?

So, we can tell you about career paths. If you’re going from software development to product management, we can point you to people who made that transition — who might be from the same school, might be the same gender — and you don’t have to talk to the hiring manager; you can talk to people below and ask, what is the culture of the team? Basically, we bring amazing data from outside, but then we take data from inside the company — this may come from management hierarchies, this may come from Salesforce, this may come from your developer systems and GitHub — and give you the most comprehensive picture. Then we engage with people. We really have two audiences. One audience is the employee, who, in a private, secure way, is mapping out their career, their learning journeys, their growth and development journeys. The second is the HR and business leaders who are saying: we’ve got to deliver, there’s a strategy we want to execute. Do we have the right talent? How does my group compare to competitors? How does it grow across companies, and how do we optimize?

We are super excited about it, and in every conversation that we are having with CHROs and other leaders, there’s a lot of excitement about what’s possible and what SeekOut can do for them.

Ishani: So, SeekOut today is a really amazing example of an intelligent application for 360 talent optimization — not just the external component, but also the internal one. This speaks so much to both the environment and to you reacting and being nimble around how you create offerings that people need. Without revealing too much, give us a peek into what the future holds for SeekOut.

Anoop: Future-wise, Ishani, in each of these broad areas that I’m talking about, there is immense depth. As we go deeper into it, there is a lot of work involved. So, if you look three to five years out, just executing on even the components that we have talked about and becoming a star — I believe this is a new category. HR leaders don’t even realize what is possible in terms of data, the insights they can have, and what they can do for their employees. So, there’s always a market and a mind shift involved, and people are the slowest to change in some sense. So, I think our journey is just making it happen, and if we do it right and we are the leaders, this is more than a hundred-billion-dollar company, I believe. So there’s lots of growth and possibility in this, because talent is central to organizations and their success.

Ishani: Anoop and Jenny, we tend to end these podcasts with a lightning round of questions. So, we’ll go quickly through three questions that we ask every company that comes on this podcast. The first for both of you, aside from your own, what startup or company are you most excited about that is an intelligent application?

Anoop: So, for me, I would say a company like Gong — basically people who give you intelligence about how your salespeople are doing, how you can be better, and what those calls are about, doing the natural language analysis and all of that. It’s a hot topic, so there could be more, but that’s top of mind for me.

So let me just name that.

Jenny: I have an appreciation for Amperity and what they’ve been up to and what they’ve been doing. So that would be mine.

Ishani: Awesome. Both actually are also intelligent app top 40 companies. So, congratulations to Amperity and Gong. Outside of enabling and applying AI and ML to solve real-world challenges, what do you think will be the greatest source of technological innovation and disruption over the next five years?

Anoop: Certainly, machine learning and AI will have a huge impact. But I think it will also be coupled with the fact that it works on lots of data. We are instrumenting everything — how the washing machine is being used, how your toaster is being used, how you’re driving. So, I think it’s the data and the machine learning together, but with the caveat of us making sure that it is not biased. Every tool in humanity can be used for good, and it can be used for bad. But I think if we use these things intelligently, we can make a lot of good happen.

Jenny: Yeah, I would have to agree. I can’t say it any better than Anoop did. I think that making sure that technology is being inclusive as well. I think that’s a huge area of focus and concern.

Ishani: I couldn’t agree more. Final question. What is the most important lesson — likely something you wish you did better, but perhaps not — that you’ve learned over your startup journey?

Anoop: I will say, throughout my career, I always kind of knew people were important and culture was important. You know, people would talk about it. But my appreciation and conviction that people and culture are the fundamentals and foundations of success has been a realization. If you had asked me this question five years ago, I would not have answered it this way. You kind of take culture for granted — not granted in a bad sense, but in the sense that it is already baked for you in a larger organization. I think here, there was the opportunity to say you get to define it, and then it just made so much sense that this is the thing to focus on.

Jenny: That’s awesome, Anoop. I love that. I would say that, for me, learning that you can put people at the top of the pyramid and be very successful is something that makes me incredibly happy to be getting the chance to learn and experience.

Ishani: Anoop and Jenny, it’s been so great to talk to you today about SeekOut, but also about people and how important they are in the organization. SeekOut is a great tool that enables you to find, recruit, and hopefully retain the best people that are going to build your organization. Thank you so much for taking the time and it was a great chat.

Anoop: Thank you so much for having us — we really appreciate the time.

Thank you for listening to this week’s episode of Founded & Funded. Tune in in a couple of weeks for the next episode with UW’s robotics expert Sidd Srinivasa.

 

Hugging Face CEO Clem Delangue and OctoML CEO Luis Ceze on foundation models, open source, and transparency

Hugging Face CEO Clem Delangue and OctoML CEO Luis Ceze

This week on Founded and Funded, we spotlight our next IA40 winners – Hugging Face and OctoML. Managing Director Matt McIlwain talked to Hugging Face Co-founder and CEO Clem Delangue and OctoML Co-founder and CEO Luis Ceze all about foundation models, diving deep into the importance of detecting biases in the data being used to train models, as well as the importance of transparency and the ability for researchers to share their models. They discuss open source, business models, and the role of cloud providers, and they debate DevOps versus MLOps, something that Luis feels particularly passionate about. Clem even explains how large models are to machine learning what Formula 1 is to the car industry.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, Digital Editor here at Madrona Venture Group. And this week we’re spotlighting two 2021 IA40 winners. Today Madrona Managing Director Matt McIlwain is talking with Clem Delangue Co-founder and CEO of Hugging Face and Luis Ceze Co-founder and CEO of OctoML. Both of these companies were selected as a top 40 intelligent application by over 50 judges across 40 venture capital firms. Intelligent applications require enabling layers, and we’re delighted to have Clem and Luis on today to talk more about the enabling companies they co-founded, which can work in tandem and are both rooted in open source.

Hugging Face is an AI community and platform for ML models and datasets that was founded in 2016 and has raised $65 million, and OctoML is an ML model deployment platform that automatically optimizes and deploys models into production on any cloud or edge hardware. OctoML spun out of the University of Washington and is one of Madrona’s portfolio companies. Founded in 2019, Octo has raised $133 million to date.

I’ll hand it over to Matt to dive into foundation models, the importance of detecting biases in data being used to train models, as well as the importance of transparency and the ability for researchers to share their models. And of course, how large models are to machine learning what Formula 1 is to the car industry. But I’ll let Clem explain that one. So, I’ll hand it over to Matt.

Matt: Hello, this is Matt McIlwain. I’m one of the Managing Directors at Madrona Venture Group. So, let’s dive in with these two amazing founders and CEOs. I want to start with a topic that’s important not only historically in software but certainly relevant in some new and different ways in the context of intelligent applications, and that is open source. Luis, I know your company OctoML plays on top of the open-source work that you and your team built with TVM. How do you think about the distinction between OctoML’s role versus TVM’s?

Luis: Just to be clear, the OctoML platform is really an automation platform that takes machine learning models to production. That involves automating the engineering required to get your model tuned for the right hardware, making the right choices across runtimes and other pieces of the ecosystem, and then wrapping it up into a stable interface that you can go and deploy in the cloud and at the edge.

And TVM is a piece of that, but TVM is a very sophisticated tool that is usable by, I would say, machine learning engineers in general. So, the platform automates that and makes it accessible to a much broader set of skill sets, a much broader set of users, and then also pairs TVM with other components of the ecosystem. For example, when you should use a certain hardware-specific library is something that we automate as well. What we want in the end is to enable folks and teams deploying machine learning models to treat ML models as if they were any other piece of software — so you don’t have to worry about how you’re going to tune and package the model for a specific deployment scenario. You have to think about that very carefully today with ML deployment. We want to automate that away and make it fully transparent and automatic.

So why did we make Apache TVM open source? One of the things that TVM solves is what we call the matrix from hell. If you have a bunch of models and a bunch of hardware targets, and you are mapping any model onto any hardware, this requires dealing with a lot of diversity, right? What better way to deal with the diversity of these combinations of models and hardware than having a community that is incentivized to do it? For model creators and framework developers, using TVM gives them more reach to hardware. So, creating this incentive, with folks participating and putting all hands on deck to build this diverse infrastructure, is a perfect match for open source. So TVM is, and will always be, open source, and we’re very grateful for that.
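
For readers who haven’t used it, here is a minimal sketch of what crossing that model-by-hardware matrix looks like with Apache TVM’s Python API: the same imported model is compiled for different targets by changing one string. The model file, input name, and shapes are illustrative; a platform like OctoML’s automates these choices rather than requiring them by hand.

```python
# Minimal sketch: compile one ONNX model for two different hardware targets with Apache TVM.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("resnet50.onnx")                    # illustrative model file
shape_dict = {"data": (1, 3, 224, 224)}                    # illustrative input name/shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

for target, suffix in [("llvm", "x86"), ("cuda", "gpu")]:  # one row of the "matrix from hell"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    lib.export_library(f"model_{suffix}.so")
```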

Matt: Clem, frame a little bit for us how you thought about open source and how you’ve thought about it in the context of your marketplace.

Clem: Basically, at Hugging Face, we believe that machine learning is like the technology trend of the decade, that it’s becoming the default way of building technology. If you look at it like that, you realize that it’s not going to be the product of one single company, it’s really going to take collaboration of hundreds of different companies to achieve that. So that’s why we’ve always taken a very open source, collaborative, platform approach to machine learning.

A little bit like what GitHub did for software — becoming this repository of code, this place where software engineers collaborate, version their code, and share their code with the world. We’ve seen that there was value, thanks to the usage of our platform, in doing something similar but for machine learning artifacts — so for models and datasets. What we’ve seen is that by building a platform, by being community first, we’ve unlocked, for the now 10,000 companies using us, the ability to build machine learning better than they were doing before.

Matt: So, Clem, that’s really interesting. Maybe just to build on that last point. When people are trying to use these models, there is often some kind of underlying software that’s involved with the building, the training, the leveraging of the model. There’s also datasets — some that are open public data sets, some that are not. So, in that context, how do you all work with both the software and the data set elements that are more or less open in terms of leveraging your platform?

Clem: Yeah. So, something that we were pretty convinced about since we started working on this platform three years ago is that for it to work and really empower companies to build machine learning, it had to be extensible, modular, and open. We don’t believe in this idea of providing an off-the-shelf API for machine learning — like having one company doing machine learning while the rest of the world doesn’t. It can be useful for a subset of companies, but the truth is, at the end of the day, most companies out there will want to build machine learning. So, you need to give them tools that fit their use cases, that fit their existing infrastructure, and that can be integrated with the parts of the stack that they already have.

So, for example, on private versus public, what we’re seeing is that we give companies the choice of which parts they want to be private and which parts they want to be public — and what’s interesting is that it usually evolves over time in the machine learning life cycle. If you think of the beginning of a machine learning project, what you want to do is maybe train a new model on public datasets, because they’re already available and already formatted the right way for your task. That gets you to a minimum viable product model really fast. Then, once you’ve validated that it could be included in your product, you can maybe switch to private data sources and train a model that you keep only for your company and keep private. Maybe you use that for one year, two years, and then you’re like, okay, now I’ve used it a lot and we’re comfortable sharing it with the world, and then you move your model into the public domain just to contribute to the whole field. It’s really interesting to see the timeline on these things and how the lines between public and private are probably much blurrier than we might think looking at it from the outside.

Matt: That’s super interesting. At one level, that delineates between the public data sources that presumably people are free to use and the private data sources, which might have some proprietary usage rights and permissions. Maybe one other level in there is kind of the — I want to know what data was used in my model. So, this data lineage piece — how do you help people with that topic?

Clem: So, we have a bunch of tools. We have a tool that is called the data measurement tool that is very important and useful to try to detect biases in your data, which is a very important topic for us.

We have someone called Dr. Margaret Mitchell, who co-created and co-led the machine learning ethics team at Google in the past, and who created something called Model Cards, which are now adapted to data, too — a way to bring more transparency into the data. For me, this is actually more important on the data side than the model side, because if you look today at a lot of the NLP models — for example, if you look at BERT — they’re incredibly biased. Take a simple example: you ask the model to predict the word when you say “Clem’s job is” or “Sofia’s job is,” and you’ll see that the word that is predicted is very different depending on whether the first name is male or female. On the woman’s side, the first prediction of a BERT model can even be “prostitute,” which is incredibly offensive and incredibly biased. So, it’s really important in our field today that we acknowledge that, that we don’t try to sweep it under the rug, and that we build transparency tools and bias-mitigation tools, so we’re able to take it into account and make sure we use this technology the right way.
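
The kind of probe Clem is describing is easy to reproduce with Hugging Face’s own fill-mask pipeline. Below is a minimal sketch, assuming the bert-base-uncased checkpoint; the prompts are illustrative, and the exact completions you see will vary by model version — the point is simply to compare predictions across gendered prompts.

```python
# Minimal sketch: compare masked-word predictions for gendered prompts with a BERT model.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    predictions = unmasker(prompt, top_k=5)
    print(prompt, [p["token_str"] for p in predictions])
```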

Matt: Yeah, that’s incredibly powerful and helps illustrate that, beyond the first set of challenges of building machine learning models, there are these second- and third-order challenges that are going to be hard to tackle for a long time to come but are important, as you point out, to put on the table, acknowledge, and work on.

Luis, I’m curious — you referenced this data engineer as your initial customer. Can you tell us a little bit about what you’re learning about the state of these customers and who this data engineer is? Who else might be the key decision-makers — and let’s even put aside paying for your stuff — just wanting to use it?

Luis: I wouldn’t necessarily call them data engineers. It’s more like ML engineers or ML infra engineers. Those are folks who think about how to deploy machine learning models today. But what we want is for any software developer to be able to deploy machine learning models using their existing DevOps infrastructure and existing DevOps people, right? We are learning a bunch of things from them. First is that it’s just incredibly manual. There’s something we call the handoff problem: going from a model created by a data scientist, or the folks who create that model, to something that’s deployable today involves many steps that are done by humans.

For example, turning a model into code is one step that’s done by hand. Then, after that, just figuring out how you’re going to run it and where you’re going to run it is something that requires a lot of experience with system software tools. If you’re going to deploy on Nvidia, you have to use a certain set of tools. If you’re going to deploy on an Intel CPU, you’re going to have to use a different set of tools.

That’s done by different companies, and different customers have different names for this. Some of those are sophisticated DevOps engineers. Some companies call them machine learning infrastructure engineers, and as the maturity of ML deployment increases in these companies, I’m sure there will be a common name across them — but honestly, if you talk to 10 customers, you’re going to hear more than 10 names for those people.

Matt: Is this the same entry point for you, Clem?

Clem: Yeah. What’s interesting to me — the other day I was thinking about what it takes to make machine learning the default way of building technology, like software 2.0, in a way. It’s interesting to look at how software became democratized. If you think about software maybe 15, 20 years ago, and who was building it, you realize that software got adopted really fast, but if there was one thing that was limiting, it was how to train a software engineer. Because it’s hard — to take someone who was a consultant before, or was working in finance, and train them to become a software engineer is hard work. It’s not something that happens really fast. What’s beautiful with machine learning is that this wave of education of software engineers almost created the foundations to go much faster on machine learning, because turning a software engineer into someone who can do machine learning is much faster. For example, with the Hugging Face course, which takes a few hours to complete, we see software engineers starting the course and, at the end of it, being able to start building machine learning products, which is pretty amazing. So when you think about the future of machine learning and the rates of adoption, one of the reasons why I’m super optimistic is that I think it’s not crazy to think that, maybe in four or five years, we might have more people able to build machine learning than there are software engineers today. I don’t really know what we’re going to call them. Maybe they’re still going to be called software engineers. Maybe they’re going to be called machine learning engineers? Maybe they’ll have another name.

Luis: Maybe just application engineers, because if applications all have intelligent components, it should just be application engineers, right?

So, Matt, I have a bunch of questions for Clem too. Let me know when we can ask questions of each other here.

Matt: Let me ask one question of you, and then you can go. You've shared with me a few times that you think this whole construct of MLOps, which I guess arguably today is the cousin of DevOps, is just going to go away. Maybe this gets back to what we're going to call the people; it doesn't really matter, maybe they're all application engineers over time. Do you see MLOps and DevOps merging, or is MLOps just automated away? What's your vision around that, Luis?

Luis: To be very clear for the rest of the audience here: creating models, or arriving at a model that does something useful for you, is very distinct from how we've been writing software so far. To Clem's point, he put it very well. That part, I don't know what name it has, but I do not include it in MLOps. By MLOps, I mean: once you have a model, how do you put it in operation and manage it? When I look at that part closely today, it involves turning a machine learning model into a deployment artifact, integrating the model's deployment with the regular application life cycle, like CI/CD and so on, and even monitoring a machine learning model once it's in deployment. All of that is what people call MLOps. If we did it right and enabled a machine learning model to be treated like any other software module today, you should be able to use the existing CI/CD infrastructure. You should use the existing DevOps people. You should even use your existing ways of collecting data from things in deployment, like what Datadog does, and then put views and interpretation on top of that.

So, our view here is that if we do all of this right, then once you have a model, you turn it into an artifact that the existing DevOps infrastructure can deal with. In that view, I would say that MLOps shouldn't be called anything other than DevOps, because you have a model that you can treat as if it were any other piece of software. That's our vision.
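[Editor's note: a hedged sketch of what "MLOps is just DevOps" can look like in practice: the packaged model is treated as a build artifact and smoke-tested by the same CI pipeline as the rest of the application. File names and checks are hypothetical.]

```python
# Hypothetical CI test (e.g., run by pytest in an existing CI/CD pipeline): load the model
# artifact produced by the build step and sanity-check it like any other software module.
import numpy as np
import onnxruntime as ort

def test_model_artifact_loads_and_predicts():
    session = ort.InferenceSession("resnet18.onnx")          # artifact from the build step
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
    (logits,) = session.run(None, {"input": dummy})
    # Basic sanity checks only; the point is that ordinary CI tooling runs this,
    # not a separate MLOps stack.
    assert logits.shape == (1, 1000)
    assert np.isfinite(logits).all()
```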

Matt: Clem do you agree with this vision?

Clem: Yeah, yeah — I think it is very accurate.

Matt: Good. Luis, what were you going to ask Clem?

Luis: First, what makes some models wildly popular? Out of these tens of thousands of models, I'm sure there's a very bimodal distribution. Do you see any patterns in what makes models especially popular with the general audience?

Clem: It's a tough question. I think it varies wildly based on where the company is in its machine learning life cycle. When they start with machine learning, they're going to tend to use the most popular, more generic kinds of models. They're going to start with BERT, with DistilBERT, for example, for NLP. Then they move toward more sophisticated, sometimes more specialized models for their use cases, and sometimes even training their own models. So it's very much a mix of what problem it solves, how easily it solves the problem, how big the model is. Obviously a big chunk of your work at OctoML is, you know, to make the scaling of these models cheaper for companies to run billions of inferences. It's all that, plus one layer that I think we really created that wasn't there before, which is the sort of social or peer validation.

And that's what you find on GitHub. It would be hard to assess the quality of a repository if you didn't have things like the number of stars, the number of forks, the number of contributors. That's what we also provide at Hugging Face for models and data sets, where you can start to see: has this model been liked a lot? Who's contributing to this model? Is it evolving? Things like that. That also, I think, provides a critical way to pick models, right? Based on what your peers and the community have been using.
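[Editor's note: a hedged sketch of reading those peer-validation signals programmatically with the open-source huggingface_hub client; field names and availability vary by library version.]

```python
from huggingface_hub import list_models

# List the most-downloaded models on the Hub; downloads and likes are the
# "stars and forks"-style signals Clem mentions.
for m in list_models(sort="downloads", direction=-1, limit=5):
    print(getattr(m, "modelId", None), getattr(m, "downloads", None), getattr(m, "likes", None))
```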

Luis: Yeah, that makes sense. Peer validation is incredibly powerful. I want to touch on another topic quickly and then I'll pass the token back. You mentioned public data versus private data. There was a really interesting discussion there that I think parallels the trends in foundational models, where you can train a giant foundational model on public data and then go and refine it with private data. Of course, there's some risk of bias, and we need to manage that. But I'd love to hear your thoughts on where you see the trends in making the creation of, or even the access to, foundational models wide enough that many users can refine on top of them. We keep hearing about some of these models costing a crazy amount of money to train. Of course, folks are going to want to see a return on that.

Clem: Yeah. I mean, for us, transparency and the ability for researchers to share their work is incredibly important, for those researchers but also for the field in general. I think that's what powered the progress of the machine learning field in the past five years. And you're starting to see today some organizations deciding not to release models, which to me is something negative happening in our field, and something we should try to mitigate, because we do believe that some of these models are so powerful that they shouldn't be left only in the hands of a couple of very large organizations.

In the science field, there has always been this trend and this ability to release research for the whole field to have access to it and be able to, for example, mitigate biases, create counterweights, and limit the negative effects it can have. To me, it's incredibly important that researchers are still able to share their models and share their data sets publicly for the whole field to really benefit from them. Maybe just to complement that: with Hugging Face, we've led an initiative called BigScience, which is gathering almost a thousand researchers all over the world, some from the biggest organizations, some more academic, from more than 250 institutions, to train, ethically and publicly, the largest language model out there. It's really exciting because you can really follow the training in the open.

Luis: I've been following that. It's fantastic to see.

Clem: I like to joke sometimes that very large models are to machine learning what Formula 1 is to the car industry, in the sense that the two main things they do are: first, they're good branding. They're good PR, good marketing, the same way Formula 1 is. And second, they push the limits of what you're able to do so you get some learning. The truth is, you and I, when we go to work, we're not going to drive a Formula 1 car, because it's not practical. It's too expensive. So that's not what we're going to be using. And not all car manufacturers need to get into Formula 1. Tesla is not doing Formula 1.

Matt: I’m going to have to ask you about Charles Leclerc then. Because I have a feeling you might be a big fan.

Clem: Yeah, absolutely. But if you think about large language models that way, and if you realize that the biggest thing is the learning you get by pushing everything to the extremes, then it creates even more value in doing it in the open. And that's basically what BigScience is: doing this whole process of training a very large language model in the open so that everyone can take advantage of the learning from it. If you go on the website, if you check on GitHub, all the learning is there: oh, it failed because of this, it worked because of that, we tweaked this and completely changed the learning rate, and things like that. That's what's super exciting about it: it's building some sort of artifact for the whole science community, for the whole machine learning community, to learn from and get better at doing these things.

Luis: I like the parallel a lot. One of the parallels I like to think about as well is that training these giant models should be equivalent to building a large scientific instrument, say the Hubble Telescope. We spent a few billion dollars to put it in space, and a lot of people can use it. On the commercial side, you build a giant machine and give people time on it to go and do things. I see the parallel as any huge engineering effort that's done upfront to enable future uses. I think this is the computational equivalent of that, where you have a giant amount of computation whose result is an asset that should be shared. So, in a way, that makes sense.

Matt: What I'm trying to get my head around, not to extend this analogy too much, is that every team has to build their own car, and they don't tell you everything they're doing to make it the fastest car on the track. So, what's the right layer or layers of abstraction here? OpenAI with GPT-3: there are some things you can work with and play with, and you can do prompt engineering and all, but there are some things that are, let's call them, more of a black box. What has been additive about OpenAI's efforts? And maybe touch a little bit on what projects like BigScience do differently, and what is also needed, to put it that way.

Clem: I think different layers of abstraction are needed by different kinds of companies and solve different use cases. Providing an off-the-shelf API for machine learning is needed for companies that are not really able to do machine learning, who just need to call an API to get a prediction. It's almost the equivalent of a Wix or a Squarespace for technology, right? People who are not able to build software or write code are going to use a no-code interface to build their websites. And it's the same thing here, I think. Some use cases are better served by an off-the-shelf API and not doing any machine learning yourself. For others, you need to be able to see the layers of the model and be able to train things and understand things for it to work. So I think it really depends on the use case and the type of company you're talking to. For example, the largest open-source language models are on Hugging Face, like the models from EleutherAI, like the biggest T5 models. And they have some usage, but it's not massive, to be honest, even if they're a fraction of the size of the ones that are not public. So at the end of the day, again, it's Formula 1: there are a couple of cars that a couple of teams are building, but most of the things happening today are actually happening in much smaller models. From what I see, and I don't know if Luis is seeing the same thing, even Codex, for example: the one that is actually used in production is much, much smaller than the big number that's claimed in terms of model size. I don't know. Luis, are you seeing the same thing?

Luis: Yeah, I'm seeing a similar thing even with private companies. They develop their large models in private, they have their own foundational models, and they go and specialize them for specific use cases, deploying something that's typically much smaller and much more appropriate for broad deployment. I think it will be interesting to see, in the spirit of building communities around this and having people refine on top of large-scale models, whether we can create broader incentives for folks to actually go and pay the high computational cost of training these models. Once they make them available, is there a way for them to share in some of the upside that people get by refining those models for specific use cases? Again, to repeat what I said before, I see these giant piles of computation involved in training these models as producing an asset that can be used in a number of ways.

Matt: That's actually a great segue into business models. So, I take a pre-trained model that's on the Hugging Face hub, and I decide to use it and adapt it for my own purposes. How does that work from a business model perspective?

Clem: So, I think the business models of open source and platforms are always similar at a high level, in the sense that they're some sort of freemium model, where most of the companies using your product are not paying most of the time, and that creates your top of the funnel. For us, it's 10,000 companies using us for free. Then a smaller percentage of the companies using your platform pay for additional premium features or capabilities. What we've seen is that some companies were obviously very willing to pay because they had specific constraints. Think about enterprises, especially in regulated industries: banking, healthcare. Obviously, they have specific constraints that make them willing to pay for help with those constraints. So that's one way we monetize today. The other way is around infrastructure, because obviously infrastructure is important for machine learning. And what we're seeing at Hugging Face is that we're almost becoming some sort of gateway for it, in the sense that because companies are starting from the model hub, taking their models and then making decisions from there, we can act somehow as a gateway for compute and infrastructure. It's definitely very much early days, as most of our focus has really been on adoption, which I think is what's making us unique. But I think there is a growing consensus that, as machine learning becomes key for so many companies, machine learning tool providers are going to be able to build these big businesses, especially if they have a lot of usage.

Matt: And Luis, similarly, you’ve got a lot of demand and interest for your SaaS offering, as you call it. Maybe tell us a little bit more about that and what you’re seeing in terms of early usage and thoughts about business model.

Luis: Yeah, absolutely. We call it the OctoML platform. It's model in, deployable container out. It's a simple model: people pay to use it, and the pricing is a function of the number of model-hardware pairs and the size of the deployment. What customers are really paying for is, first, automation. We're often replacing what humans do when taking models to deployment, and turning it into either using our web interface or an API call. Imagine that instead of having an engineering team where the data scientists say, here's a model, and the deployment folks say, oh, give me a container to deploy, we put an API on that and run it automatically. It's a different motion than what Clem just described, because the open-source users of TVM are folks who are more sophisticated; they're using TVM directly. Some of them want to use the platform because they want more automation. For example, they don't want to have to set up a fleet of devices to do tuning on. They don't have to go and collect the data sets to feed TVM for it to do its machine-learning-based tuning; all of that is just turnkey. And we have what I call outer-loop automation, where you can give us a set of models and a set of hardware targets, and we solve that matrix from hell for them automatically. There's a huge difference between using TVM directly and the experience the platform provides, so in that case it's very clear, and the platform is a commercial product folks have to pay to use.
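[Editor's note: for context, a rough sketch of the do-it-yourself path Luis contrasts the platform with: compiling a model with open-source Apache TVM directly. Exact entry points vary by TVM version, and this omits the tuning and packaging steps the hosted platform automates.]

```python
# Illustrative only: compile an ONNX model with TVM's Relay API for a CPU target.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("resnet18.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)   # "llvm" = generic CPU target

lib.export_library("resnet18_cpu.so")   # the compiled artifact you would then containerize
# Repeat this for every model x hardware-target pair: the "matrix from hell" the platform solves.
```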

Clem: I'd be interested, Luis, to hear how you see your relationship with the cloud providers. Is it mostly as potential customers, partners, competitors? How do you see them?

Luis: Oh, great question. And it's a good segue here too. I see them as potential customers and partners, less so as competitors, and I'll elaborate, even though there are some specific points that might seem to contradict that. First of all, some cloud providers happen to have popular applications that they run on their own cloud, and these applications use machine learning. In that case, they're customers; I call that "sell to."

But the bigger opportunity I see here is "sell with." For all cloud vendors, what they care about is driving usage in their clouds, and the way you drive usage is to make it very easy for users to get machine learning models, which use a lot of computation, running there. What our service provides is turning models into highly optimized containers that can be moved around to different instances, and the cloud vendors like that because it drives up utilization in their cloud.

So, in that case, we're not seeing resistance. In fact, we're seeing a lot of encouragement in working with cloud vendors as partners. That covers selling to and selling with. Now, of course, one of these cloud vendors has a service that also builds on TVM: Amazon has something called SageMaker Neo, which is an early offering that uses TVM to compile models to run on Amazon's cloud. We see our service as differentiated in a number of ways. First, there's some technical differentiation in how we tune the model to make the most out of the hardware target by using our machine learning for machine learning magic. But more broadly, I would say the key reason there's no real competition here is that we support all cloud vendors, and the one place a cloud vendor can't be is on the other cloud vendors at the same time. So the fact that we sit on top of all of them is a huge selling point that I feel makes that competition not really relevant.

Matt: What I think is really interesting here is the question of what the right abstraction layers are going to be to deliver value in the future. What are the kinds of application areas that are most exciting to you both for the future?

Clem: What I'm super excited about, obviously, is that transformers are starting to make their way from NLP and text to all the other machine learning domains. If you look at computer vision, you're starting to see vision transformers. If you look at speech, you're seeing things like Wav2Vec. You're starting to see things in time series; Uber announced that they're now using transformers for the time series behind their ETAs, right? You're starting to see it in biology and chemistry, basically taking over all the science benchmarks. So it's really exciting, not just because I feel like the other fields are going to get accelerated as fast as the NLP field did, but also because I think you're going to start to be able to build much greater bridges between all these domains, which is going to be extremely impactful for final use cases. Let's say, for example, you think about fraud detection, which is a very important topic for a lot of companies, especially financial companies. Before, the domains were very siloed and separated, and you were doing it mostly with time series, so predictions on events and things like that. But now, if everything is powered by transformers, you can do a bit of time series but also NLP, because obviously fraud is also predicted by the kind of text that someone trying to commit fraud, or a system trying to commit fraud, is sending you. So you're starting to see these frontiers between domains getting blurrier and blurrier. In fact, I'm not even sure these different domains will really exist in a few years, or whether it's going to be all machine learning, all transformers, just with different inputs: a text input, an audio input, an image input, a video input, a numbers input. That's probably the most exciting thing I've seen in the past few months on Hugging Face. Now we're seeing a lot of adoption for computer vision models, speech models, time series models, recommender systems. So I'm super excited about that and the kind of use cases it's going to unlock.
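[Editor's note: a small sketch of the "same abstraction, different input" point, using the transformers pipeline API across modalities. The file names are hypothetical, and the default models and their quality vary.]

```python
from transformers import pipeline

text = pipeline("sentiment-analysis")
vision = pipeline("image-classification")            # vision transformers and similar models
speech = pipeline("automatic-speech-recognition")    # Wav2Vec2-style models

print(text("Transformers are showing up everywhere."))
print(vision("cat.jpg"))        # hypothetical local image file
print(speech("meeting.wav"))    # hypothetical local audio file
```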

Luis: I feel like it's pretty clear today that almost every single interesting application has multiple machine learning models in it as an integral part, and they're naturally multimodal as well. There are language models with computer vision models with time series models. I think the right abstraction here would be that you declare what your ensemble of models is and you give it to the infrastructure, and the infrastructure automatically decides where and what should run, and that includes mobile and cloud, right?

Almost every single application has something that's closer to the end user and a cloud counterpart, and even deciding what should run on the edge and what should run in the cloud should be done automatically by the infrastructure. For us to get there requires a level of automation that is not quite there yet; for example, taking a set of models and deciding that maybe a given model should be split in two, where part of it runs in the cloud and part of it runs on the edge. So that's where I think the abstraction should be. You should not worry about where things are running and how. That should be fully automatic.

Now, on what is an exciting application: this is going to be more personal, and Matt, it's probably not going to be a surprise to you. I think there are so many exciting applications in life sciences. It's inherently multimodal, from using commodity sensors in smartphones to make diagnostic decisions (there's a lot of interesting progress there, using microphones to measure lung capacity, for example, or using cameras for early skin cancer diagnosis and things like that) all the way to much larger-scale computation, like everything that's going on in deep genomics, applying modern machine learning models to giant genomic datasets. That's something I find extremely exciting, and not surprisingly, a lot of it uses transformers as well. I'm also very excited about what Clem said; it's fantastic to see what Hugging Face has been doing in showing the diversity of use cases transformer models apply to. But to bring it a little closer to the actual application, I feel like life sciences is the one that inherently puts everything together into a very high-value and meaningful application for human health.

Clem: Something I wanted to add, because it's easy to miss if you're not following closely: already today, if you think about your day, most of it is spent in machine learning. And that is something new you have to realize, because maybe two or three years ago there was some over-hype about AI, right? Everyone was talking about AI, but there weren't really a lot of final use cases. Today, that's not the case anymore. If you think about your day: you do a Google search, and it's machine learning-powered. You write an email, and autocomplete is machine learning-powered. You order an Uber, and your ETA is machine learning-powered. You go on Zoom or this podcast, and noise canceling and background removal are machine learning. You go on a social network, and your feed is machine learning-powered. So already today, you're spending most of your day in machine learning, which obviously is extremely exciting.

Matt: Yeah, it kind of leads to a question: what's the technology that's the greatest source of disruption and innovation you see in the next five to 10 years?

Clem: So, for me, it might not be a technology in itself, but I'm really excited about everything decentralized, and not just in the crypto or blockchain sense. For example, at Hugging Face, we're trying to build a very decentralized organization, in the sense that decision-making is done everywhere in the organization in a very bottom-up way rather than top-down. And I'm really excited about applying this notion of decentralization. I think it's going to fundamentally change the way we build technology.

Luis: For me, it is impacted by AI too, but it's molecular-level manipulation. It's just everywhere. You saw Nvidia's announcement of 4-nanometer transistor technology; soon we're going to see 2 nanometers. We're getting close to the molecular scale there. So this applies to manufacturing electronics, but then, going back to life sciences, our ability to design, synthesize, and read things at the molecular scale is something that's already here today. Just think about DNA sequencing. You can read individual pieces of DNA with extreme accuracy, in large part because of AI algorithms that decode very noisy data. So our ability to read individual molecules is there, and so is the ability to synthesize them.

So, I hope I'm not being confusing by putting these two things together. I think, in the end, being able to manipulate things at the molecular scale has a deep impact on how we build computers, because computers ultimately depend on how you put the right molecules together, and the same thing applies to living systems. In the end, we're all composed of molecules, and being able to engineer and synthesize the right ones has profound impacts on life. So that's my favorite one, yeah.

Matt: I don’t know how I can bring us back down after that. Basically, to synthesize it, the journey from atoms and physics to bits and computing, to bases and biology, you know, and the intersections of those worlds. And what’s going to happen in the future as a result.

I know you and I are both passionate about that, and no doubt, from what Clem is saying, he is too, bringing in this point about decentralization as well, and how that changes the way we can work and learn and discover together. Very exciting. Hey, is there a company in this intelligent application world, maybe more up at the application level as opposed to the enabling level where both of your companies are playing today, that you really admire and think a lot of? Maybe it's because of some of those cultural attributes around decentralization, Clem, or maybe because of the problem they're trying to solve, that you'd say, wow, that's one of the coolest private, innovative, intelligent application companies?

Clem: I recently talked to Patricia from Private AI, which to me is doing something really exciting, because initially it sounds like a boring topic in a way: PII detection, detecting personal information in, for example, your data or your data sets. But I think it's incredibly important to understand better what's in your data and what's in your model in terms of problems, right?

Is there personal information that you don't want to share? Are there biases? I think it's about being much more intentional about values and building technology with values, rather than thinking that you're just a tool that doesn't have values and that the harm comes from people using your tool. I think it's a very big technology shift that we're seeing happening now, with companies and organizations having to be very intentional about the product decisions they make, to make sure they reflect their values and the values they want to broadcast.

Luis: One company that I think is doing really cool intelligent applications is RunwayML. It's the ability to manipulate media in a very easy way using machine learning, which is really cool. For example, you can very easily edit videos in a pretty profound way, something that had been incredibly manual and hard in the past. Turning that into something that's point-and-click is pretty exciting. It also comes from the ability to train large models to generate visual content. So that's one of them.

Matt: Let me bring us to kind of a wrap-up with a question around your own entrepreneurial journeys. We have a lot of folks listening who are starting, or thinking about starting, companies. If you could share one or perhaps two of the most important lessons, things you've learned or wished you knew going into the entrepreneurship journey, that might be helpful for others, I think that would be tremendously valuable to our listeners.

Clem: It's a tough question, because I think the beauty of entrepreneurship is that you can really own your uniqueness and build a company that plays to your strengths and doesn't care about your weaknesses. So I think there are as many journeys as there are startups, right? But if I had to keep it very general, I would say, for me, the biggest learning was to take steps just one at a time. You don't really know what's going to happen in five years or in three years, so just deal with the now, take time to enjoy your journey, and enjoy where you are now. I don't know if it's the same for Luis, but when you look back at the first few years, at the time you felt like you were struggling, but at the end of the day it was fun. Then, obviously, trust yourself as a founder. You'll get millions of pieces of advice, usually conflicting. For me, it's been a good learning to trust myself and go with my gut, and usually it pays off.

Luis: It's hard to top that, but I will say, for me, personally, coming from academia, it's been fantastic to see a different form of impact. As a professor, you can have impact by writing papers that people read and that can change fields, or by training students who go and do their own thing and become professors and so on. But then I see building a company out of research that started in universities, and all the impact that comes from actually putting products in people's hands. Some of the lessons I've learned (and as you know, Matt, there's massive survivor bias here) are that just picking people you genuinely like to work with is incredibly important. Feeling supported, being able to count on the people around you, and having a very trusting relationship with the folks you work closely with is just essential in building a company. I'm sure it's true in many other things in life as well, but I'm extremely grateful to be surrounded by people I deeply trust. I have no worries about showing weaknesses or having to always be right. I think it's great when you can say, you know what, I did that wrong, I'm going to fix it. It's much better to admit you're wrong and fix it quickly than to insist on being right. But a funny thing I've learned, yet again, is that we overestimate what we can do in the short term and underestimate what we can do in the long run. When putting plans together, we all have these ambitious things we're going to get done in the next two months, and you almost always get that wrong because you overestimate. But when you think about a plan that's a few years out, a couple of years, you almost always undershoot, right?

I keep seeing this time and again, and it's something that I think affects how you think about building your company and putting plans together, especially when things are moving fast. It matters a lot. So put a lot of thought into plans, and write things down a lot.

Matt: Well, you've heard this from me before, Luis, but Clem, I love what you said too, because it's true: the customer and the founder are almost always right, and the VC is often wrong. But they're trying hard. We try hard! Well, gosh, I've just so enjoyed getting a chance to listen to both of you and ask a few questions, and I'm excited to see where this world of enabling technologies like Hugging Face and OctoML, and the underlying capabilities around them, goes in the future, and what that portends for the future of intelligent applications that bring it all together and really can, I think, transform the world. I think you're probably both right that in the future we're not going to think about DevOps and MLOps, and we're not going to think about apps and other apps; we're just going to have this notion of application engineering. But there are lots of problems to solve along that journey. So thank you so much for spending time with us. Congratulations again on being winners in the inaugural intelligent applications class.

And we look forward to seeing all the progress in the future for both your companies.

Clem: Thanks so much.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you'd like to learn more about Hugging Face, they can be found at HuggingFace.co. To learn more about OctoML, visit OctoML.ai. And, of course, to learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for Founded and Funded's next spotlight episode on another IA40 winner.

Starburst’s Justin Borgman on entrepreneurship, open source, and enabling intelligent applications

Starburst CEO Justin Borgman

This week on Founded and Funded, we spotlight our next IA40 winner – Starburst Data. Managing Director Matt McIlwain talks to co-founder and CEO Justin Borgman about how launching his first company was like getting a Ph.D. in entrepreneurship, and then they dive into the customer problem Justin saw that made him believe the time was right to launch his second — Starburst. The two discuss open-source alignment, why making use of cloud partnerships early, especially cloud marketplaces, can be so beneficial for startups, why Starburst had to change the name of its query engine from Presto to Trino, and Justin’s guidance for creating a future-proof architecture.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, Digital Editor here at Madrona Venture Group. And this week we’re spotlighting another 2021 IA40 winner. Today Madrona Managing Director Matt McIlwain is talking with Justin Borgman, founder and CEO of Starburst Data, which was selected as a Top 40 intelligent application by over 50 judges, across 40 venture capital firms. We define intelligent applications as the next generation of applications that harness the power of machine intelligence to create a continuously improving experience for the end-user and solve a business problem better than ever before.

These applications require enabling layers. And we're delighted to have Justin on today to talk more about the enabling company he co-founded in 2017. Justin walks us through how launching his first company, Hadapt, was basically like getting a Ph.D. in entrepreneurship, and then through the customer problem he saw that led to the launch of his second company, Starburst. Matt and Justin discuss why making use of cloud marketplaces early can be so beneficial for startups, why Starburst had to change the name of its query engine from Presto to Trino, and Justin's guidance for creating a future-proof architecture. But I don't want to give it all away. So, with that, I'll hand it over to Matt and Justin.

Matt: Well, hello everybody. I'm Matt McIlwain, a Managing Director here at Madrona Venture Group, and I'm just delighted to welcome Justin Borgman, Founder and CEO of Starburst Data. Starburst is really behind the popular Presto-based open-source project called Trino that helps customers carry out complex analytics on disparate, distributed data sources. We're going to talk all about that here with Justin. Starburst was selected as one of the top 40 intelligent applications as an enabling application, and as you'll see, Starburst is very much at the core of that. One of the things we're going to dig into today is at what layer of abstraction this next generation of data enablers actually lives. But before we get into all of that, Justin, welcome.

Justin: Thank you, Matt. You know, we’re honored to be selected, and it’s a pleasure to be here with you today.

Matt: I think it would be just great because prior to Starburst, you’ve done some really amazing things, and I think they kind of inform ultimately how you got energized and excited to create Starburst. Can you, for our audience, just walk us through the time before Starburst?

Justin: Yeah, sure. My journey, at least in big data and analytics, really begins back in 2010, 12 years ago, with the founding of my first company, which was called Hadapt. And that business was really based on some research by the folks who became my co-founders in that company, Daniel Abadi and Kamil Bajda-Pawlikowski, who were a professor and a Ph.D. student at Yale University and co-wrote a paper called HadoopDB. The basic idea they had back in 2010, and they really were pioneers with this paper, was: could we turn Hadoop, which was becoming the data lake (in fact, the term data lake was really created in the context of Hadoop back then), into a data warehouse? Could you actually run SQL analytics on data in Hadoop? Could you connect BI tools? Could you use this effectively as an open-source data warehouse? I was in business school at the time. I had a computer science degree before that, and I was a software engineer for the first few years of my career before going to business school. I read this paper, and I was like, this is the coolest thing ever. I walked over to the computer science department and talked those guys into starting Hadapt with me, which was really the commercialization of that research.

Ultimately, we built that business over four years and learned a tremendous amount in the process, both in terms of the market and as an entrepreneur, as a first-time CEO. Even though I was in business school, my Ph.D., I guess you could say, was going through that first startup, because there's so much that you learn through experience that you really can't read about and almost can't be taught without going through it. Some of the lessons of that startup became particularly evident to me when the company was acquired by Teradata in 2014 and I became a VP and GM there. Teradata is, by the way, the pioneer of the enterprise data warehouse. They've been around 40 years, and they kind of created this concept of a single source of truth: get all of your data into one place. And what I found was that, despite their success, none of their customers had gotten all of their data into one place. That was a really eye-opening moment for me, that centralization might not be possible. If the leading company for 40 years couldn't do it, why should we expect we can do it now? That got me thinking about the future of data warehousing in a more decentralized fashion. And that coincided with me meeting the creators of an open-source project at Facebook called Presto. We began to collaborate, Teradata and Facebook, which may seem like an unlikely pair. We started working on how we could make Presto an enterprise-grade solution, to really allow you to query data anywhere. That was what excited me about the technology. It was a query engine for anything.

Matt: Wow. Can't wait to dive more into that. It's interesting, your observation about Teradata, which really was a pioneer in data warehouses, and this point of how hard it is, almost more from a sociological perspective, to get all the data into one centralized place. Was there also, as you learned more about Teradata, a technological constraint? What did you find? And congratulations, by the way. It was incredible to build Hadapt and to be acquired by one of the truly great technology companies. But what were the constraints there, too?

Justin: Those are great questions. By the way, I want to put an exclamation point on the sociological piece. I think as technologists, we naturally underestimate that. A great engineer and leader gave me this advice maybe 10 or 12 years ago. He said, "There are no technical problems, only people problems." And that has stuck with me, because I think as technologists, we often underestimate the people side. But to your point on the technical side, and I would say this is maybe just a function of the business model of the day, Teradata sold their product as an appliance. And an appliance, for anyone listening who doesn't know what an appliance is, is just hardware and software combined.

And the goal of an appliance is well-intentioned: it's to provide simplicity to the customer. You just plug it in and go. But it also makes it very inflexible to the world that's evolving around you. So I think that was one of the challenges: you were buying basically a high-performance, almost supercomputer-like database, and you were paying a lot for it as a result. You really couldn't take advantage of increasingly low-cost commodity hardware, and even more so, you couldn't take advantage of the elasticity and the separation of storage and compute that the cloud provides. Incidentally, that was, I think, what really helped give rise to one of your portfolio companies, which is Snowflake, right?

Which really was the first to take advantage of that storage compute separation.

Matt: Yes. And then to effectively say, well, I'm going to let the cloud be the underlying resource on top of which I can build an abstraction layer, which in that case was a cloud-native data warehouse. But you have, in a sense, taken a different approach, complementary but different. Bringing us back to the story of the founding of Starburst, tell us a little bit about the Presto team, maybe build on the beginnings of that collaboration, and how that ultimately led to the formation of Starburst.

Justin: Absolutely. Presto was first created by Martin, Dain, David, and Eric. They're all here at Starburst today, of course, but they created it in 2012 at Facebook and then open-sourced it in 2013. One of their goals was to provide a much faster interactive query engine compared to Hive, which was the previous generation, also created at Facebook by the way. So Facebook was very much a pioneer in open data lake and data warehousing analytics. But Hive was not fast enough. Presto was designed to be much faster, and it had this really interesting abstraction where it was truly disconnected from storage, meaning it was agnostic to the data source. It wasn't just a SQL engine for Hadoop. It was a SQL engine for anything. You could query MySQL, you could query Postgres, you could query Kafka, you could query Teradata, you could query anything. That was what attracted me to it and began the collaboration. And you're absolutely right, I think this is one of the hidden secrets of the Presto/Trino history: Teradata played a really important role in those early days in terms of making it usable by companies outside of Silicon Valley, companies who need access controls, security, and enterprise features.

Matt: Enterprise abilities, and your insight to listen to the customer and understand that those abilities were going to be needed, especially when you're talking about data and accessing data. You know, it's a little before your time, but one of the very first companies I became familiar with at Madrona (it was an investment we'd already made when I joined in 2000) was a company called Nimble Technologies. And this was a precursor, and it didn't work, to be candid. Part of it was the sociological reasons, you know, who moved my cheese, who moved my data. It was trying to do it in a distributed way, like Presto and Starburst do, but there was so much concern about the abilities, the securability, the reliability, the availability, and at that point in time, I don't think the technologies were ready either, which created the challenges. What were the early use cases that you were seeing? I'm sure there were some, inspired by Facebook, where the pain was so great that people said, this is such a problem, I'm willing to take the risk on this new open-source project and on this company building a hardened layer on top of it.

Justin: Well, there are really two categories of use cases. The first is where the Silicon Valley internet companies at the time were using the technology, and still do today (the Airbnbs, Netflix, Lyft, LinkedIn, Twitter, Uber, Dropbox), effectively using this as a data warehouse alternative. Those companies deal with such a volume of data that they couldn't possibly fathom buying expensive appliances, let's say, to store and analyze all of it. So this became the way they ran all of their analytics. That was one category: essentially, I have my data in a data lake. In the early days, that was Hadoop. In more recent years, that's probably S3 on Amazon, or Azure Data Lake Storage, or Google Cloud Storage. So I've got really cheap storage, I can store my data in open data formats, and I can use different tools to interact with it. I can train a machine learning model using Spark, and I can query it with Starburst, or at the time Presto, which later became known as Trino. And the reality is, that has been a very core bread-and-butter use case. Some now call that use case a lakehouse: basically doing data warehousing in a data lake.

The other category, though, which I think you'll find interesting, Matt, and which was a big reason why we built a business around this, is that we were seeing Fortune 1000 global customers with a slightly different need that I think we could uniquely solve, which was the fact that they had data silos. They had data in a variety of different systems. If you're a big bank, a big retailer, or a big healthcare company, particularly in regulated industries, you have decentralization, and that's never going to change. Centralizing everything is truly impossible for those types of enterprises. So what we were able to do is essentially join tables in different systems and give you fast results.

So maybe you've got product data or customer behavior data in a data lake, and you've got billing data or finance data in a data warehouse, and you want to be able to join these two together to understand how customer behavior is driving profitability or revenue, or what have you. Those classically live in different data sources, and we can execute those queries effectively in real time, at query time, and give you fast results. Some people will say, well, that sounds a lot like the data virtualization of 10 or 15 years ago. The big difference here is that Trino and Starburst are actually an MPP execution engine. MPP just means massively parallel processing. It's running on a parallel cluster, not just one machine. And because of that, you can get performance and scale that you could never get with those previous generations.
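[Editor's note: a hedged sketch of the kind of federated query Justin describes, using the open-source trino Python client. The host, catalog, schema, and table names are hypothetical.]

```python
import trino

conn = trino.dbapi.connect(host="trino.example.com", port=8080, user="analyst")
cur = conn.cursor()

# One SQL statement joins behavior data in a data lake (hive catalog) with billing data
# in a warehouse (postgresql catalog); the engine pushes work down to each source and
# runs the join across its MPP cluster.
cur.execute("""
    SELECT b.customer_id,
           sum(b.amount)     AS revenue,
           count(e.event_id) AS events
    FROM postgresql.billing.invoices AS b
    JOIN hive.clickstream.events     AS e ON b.customer_id = e.customer_id
    GROUP BY b.customer_id
    ORDER BY revenue DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```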

Matt: And I think that was the technological limitation back in the day: you didn't have this MPP capability that has subsequently come along, or, for that matter, the network bandwidth, so that you could do it in a distributed way.

Justin: That's exactly right. People ask me, "Well, what's different now?" It is those two points. It's MPP, and it's network bandwidth. You're a hundred percent spot on.

Matt: And so what's interesting there is that this enables these big institutions to effectively create their own intelligent applications, or their own intelligent analytics platform. They may not turn it into an application. They may just choose to use it for some in-house continuous insights. Is that where you have found more of those types of use cases, in contrast to somebody using Trino and Starburst as a platform to build an intelligent application as a service?

Justin: So, in-house, I would say, was definitely where the business started. It really began with power users who understand the data that exists in the organization and just don't have the ability to access or query it. It really started with exploratory analysis: I've got an idea, and I want to go test my hypothesis. I need to run some ad hoc queries and get results. And my goodness, it's going to take me weeks if I have to go to the data engineering team to create pipelines, move data, and get it into our data warehouse, and I need to iterate at a much faster speed. So that time to insight was a real driver of early use cases. The other driver was a need for accuracy, or freshness I guess I should say, because we allow you to effectively skip ETL. And we try not to be too dogmatic about this. We're not saying that ETL is dead or that we're getting rid of ETL. It's just that we make it optional. There are going to be cases where it may be advantageous to just connect to your data source and query it rather than moving the data. And that gives you some really interesting optionality as you're doing your analysis.

Matt: And ETL, of course, meaning extract, transform, and load the data. It’s a set of preparations that make the data more queryable and more usable.

Justin: Exactly. So, with the classic data warehousing model pioneered by Teradata and of course Oracle and IBM, it was all about extracting, that’s the E of ETL, your data from the different data sources you have, doing some kind of transformation to normalize it or get it prepared and then loading it into this new enterprise data warehouse.

And that ETL process ends up taking a tremendous amount of time, particularly human time, in terms of creating those pipelines and maintaining them. Because you might add a new field in a source database, and now you need to go add that field in your data warehouse, and you've got to keep these in sync and so forth. That's part of the disruption, I guess you could say, that we're offering the market: the ability to skip that process where it makes sense and just query the data directly where it lives.

Matt: Say more about that, because I do think that's one of the transformative capabilities of Starburst. I mean, how do you do that?

Justin: At a technological level, the easiest way to think about our architecture is that we're a database without storage. That's the way I explain it to people. Database geeks will understand the full stack: there's a SQL parser, a cost-based optimizer, a query engine and execution engine, and often a storage engine where you're storing the data. It's the storage engine piece that we intentionally don't have. That's what gives us a different perspective on how we design and build the system, where we are intentionally reliant on the storage systems that you connect to. And so we connect to a catalog. That's either a universal catalog you already have (some companies have all their data in one central catalog, and we partner with Alation, Collibra, Glue Catalog on AWS, and so forth), or the catalogs of the individual source systems, like Teradata, Oracle, or the Hive Metastore in Hadoop, and that is effectively how we know where the data lives. Then our engine is going to execute that query, push the query processing down to where the data lives as much as possible to minimize traffic over the network, pull back what's necessary to complete the query, and execute the join in memory. And back to that point about MPP: that parallel processing is what's able to give it the performance and scale. Often I have these conversations with customers who maybe are hearing this for the first time, and they say, "This sounds too good to be true. How can you possibly do this?" It is that MPP aspect that makes this possible in a performant way.

Matt: And in that sense, how should I think about where the quote-unquote "file system" lives, or the data and metadata system? Even if I don't have to deal with the underlying storage, I still need to know the metadata about all the data that I'm trying to access so I can run a query.

Justin: Different customers have slightly different approaches here. Some leverage a third-party tool, like Alation or Collibra, which might be a solution. Others maybe are just joining between data lakes and might be leveraging the Hive Metastore. To me, the lasting legacy of Hadoop is really the Hive Metastore. That seems to continue to persist even in the cloud age, if you will. Or, if they're in an AWS stack, Glue Catalog is a great way of keeping all of your metadata across a variety of Amazon products in one place. We can leverage that, and we can collect statistics. Collecting statistics is really important because it allows us to optimize the way we execute the queries when we know how the data is laid out and where it lives.

Matt: That's great. Maybe also, for people who are not familiar with these things: is this a read-only capability, or is there a write-back capability? So, I do a query, I do some analytics, and I want to write something back to those underlying distributed data stores. Tell us about that.

Justin: That's a really important question. And for anyone in the audience, the reason that question is so important is that historically, if we go back to my first startup, in the land of Hadoop, if you will, the early data lakes, you really couldn't write data effectively. You couldn't do updates and deletes. It was really designed to be an append-only system. You just kept adding more data to it, but you couldn't modify the data that existed. And that was a real limiting factor for a lot of use cases. One of the most popular examples is probably GDPR or other data privacy rules that say, look, Matt wants himself out of our database. He doesn't want us to keep sending him emails. You have to go in and remove Matt from the database. That was very challenging to do in a data lake world, and that was one of the reasons, quite frankly, that you still had to have a data warehouse in your ecosystem. You couldn't just do everything in a data lake. Now that has changed in the last few years in a very important way, on two levels: on both the query level and the storage level. I'll explain what I mean by that.

So, first of all, on the storage level: there are new table formats that now allow you to make updates and deletes in a data lake. There are really three that are important today. There's one called Delta, which was created by Databricks and later open-sourced. There's one called Iceberg, which is definitely a fast mover, and I would say keep an eye on Iceberg. That was built at Netflix and is used by many of the internet companies today. And then there's a third one called Hudi, which came out of Uber. All three of these approaches effectively allow you to do updates and deletes. So this is no longer a limitation of the data lake model or the lakehouse model.

The other piece is on the query engine side, where over the last year or two we've added those capabilities as well. So now you can write data back. You can do updates and deletes in a data lake. You can even create tables in other data sources. We have some customers that use us as part of a cloud migration, where they're taking data out of a traditional on-prem data warehouse and moving it into a cloud data warehouse, and they're able to do that through a SQL query engine, effectively.
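[Editor's note: a hedged sketch of the write path Justin mentions: with an Iceberg (or Delta/Hudi) table in the lake, the query engine can create, update, and delete data. The names are hypothetical, and exact DDL options depend on the connector and version in use.]

```python
import trino

conn = trino.dbapi.connect(host="trino.example.com", port=8080, user="analyst")
cur = conn.cursor()

def run(sql):
    # Execute a statement and drain the result so the query is fully processed.
    cur.execute(sql)
    return cur.fetchall()

run("""
    CREATE TABLE IF NOT EXISTS iceberg.crm.contacts (
        customer_id BIGINT,
        email       VARCHAR,
        opted_out   BOOLEAN
    )
""")

# The GDPR-style example from the conversation: remove one person from the data lake...
run("DELETE FROM iceberg.crm.contacts WHERE email = 'matt@example.com'")
# ...or mark them as opted out instead of deleting the row.
run("UPDATE iceberg.crm.contacts SET opted_out = true WHERE email = 'matt@example.com'")
```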

Matt: I'm going to pop back up for a second to the open-source history here. So it starts out with Presto, and then I'm curious how it became Trino, and then how Starburst complements and works with the Trino ecosystem. And what are the types of things you've built for the commercial product that are complementary to the open source?

Justin: First of all, I'll just say that for me, as I was thinking about starting my second company, open source was an important criterion for the type of business I wanted to build, because I think there are some really inherent advantages, both for the company and for customers. The first is that you get the benefit of contributions from a wide audience. I think that really enriches the technology and allows it to grow and evolve at a faster rate than a single vendor pushing it forward could. What I mean by that is, for example, in the early days the geospatial functions were created by the ride-sharing companies. We didn't build those. Maybe we would've gotten to it eventually, I don't know, but they built that. So as a result, pretty much every single ride-sharing company in the world now uses this technology. The other benefit is that it gives you very broad distribution. It is open source and therefore it is free, and let's not forget the fact that it is free. Like anything that's free, people are going to download it, start using it, and use it on a global basis. So we've had customers in Asia Pacific, Europe, Africa, you know, everywhere, from the earliest days of the business because of that distribution.

That was one of the lessons, painful lessons for me, actually, that I learned in my first business, Hadapt. Although it ran on top of Hadoop, we were selling proprietary software, and when Cloudera introduced Impala, which was free, open source, and included with the distribution, that was really hard for us because we weren’t getting the same number of looks or evaluations, if you will. The last piece I’ll mention on why open source is that, for customers, I think it brings the benefit of not feeling locked in to a specific vendor. And I think at least in the data world that has been a historical pain point – where the Oracles and even Teradatas of the world effectively increased prices, became very, very expensive, and customers were held kind of captive by their vendors. The notion of an open-source project offers customers the freedom to potentially say, you know what, this vendor isn’t adding the value that I want, but I want to continue to use the technology. They have that flexibility. And this is another reason why I think open data formats are really good for customers, because then your data is not locked into a proprietary format either.

So that’s a little bit about the why of open source. Then you asked the question about Trino and Presto and how we interact with the community today. So, the original Presto was created at Facebook, as I mentioned, by my co-founders, and the creators effectively left Facebook, joined us, and, in the process, created PrestoSQL. And so, you actually had two Prestos — a lot of people didn’t know this, but there was PrestoDB and PrestoSQL. Unless you were really involved in the space, you know, potato/potahto, I guess, for a lot of folks back then.

Matt: Yeah.

Justin: But the reality was that the community effectively moved with PrestoSQL. That’s where we were investing. That’s where LinkedIn and the other large community players were investing. The name change was more recent. That was a little over a year ago, and it was driven by a trademark issue, because PrestoDB was the first name. It was created at Facebook, even though it was created by the folks here. It was created while they were employed there. And the way trademark law works, of course, is your employer owns the IP that you create when you’re employed. And so, basically, PrestoSQL had to change its name. So PrestoSQL became Trino a little over a year ago.

Matt: Got it. That’s super helpful. And I think also helpful for the audience. So now we have, you know, this open-source Trino, and maybe you can connect the dots between the underlying open-source capabilities and what Starburst is building on top of that.

Justin: First of all, I will say that the open-source aspect of this is still very core to what we do. And my co-founders are deeply involved in the open-source community. And there is a real, I would say philosophical aspect to wanting to make the open-source project a hundred-year project. I think we look at Postgres maybe as a good example of a database created many, many years ago that is still super relevant today. And in order to do that, you have to really have a vibrant community and you have to be making sure that you’re continuously improving it in a meaningful way.

So, the majority of the performance improvements, scalability improvements — those go right into the engine. The engine remains 100% open source. We build our product off of that open source. We do not have our own proprietary fork. Some open-source companies do things that way; we don’t. We build directly off of the open source. And what that means is that, effectively, when somebody adds a new feature or capability to the open source, our customers are able to pick it up right away because we’re building off of that. But it also means that we’re continuously invested in the success of the open-source project, because the stability of the underlying technology impacts our own stability for our own customers.

So, we invest a lot of time and energy in that and continue to do so both in terms of code quality and testing and code reviews and so forth.

Matt: And that’s a great mindset to have for both the longevity of the underlying Trino open-source movement, and I think it also serves your customers very well. I know this is a simplification — when I think about another company, Databricks is to Spark as Starburst is to Trino, right? And so, in the case of Databricks, they have done some things to supercharge performance, to create a managed service, and then to create a lot of integrations that make it easier to move things in and out of its managed service in the cloud. And then there are some of these commercial abilities that we’ve talked about that kind of wrap around all of that, which seem to be some of the core things that you get in Databricks that you wouldn’t get naturally in the underlying open-source Spark. Are those the kinds of things on which you differentiate Starburst from Trino, or complement Trino? Or how do you think about that?

Justin: In many ways, yes. I think there are probably some subtle differences in philosophy. My co-founders are very adamant that we not have different engines, like different core elements of the engine. We just don’t do that in the way that Databricks, I think, does in a few areas. So, you’re getting the same core engine in the open source and in Starburst. So, that’s maybe one difference. But I think there are a lot of common themes there. I mean, I think really what we’re trying to do is make the technology accessible, useful, and valuable to customers, both in terms of the enterprise features and capabilities they need around security or access controls or connectivity to various different data sources — performance as well. We have this notion of materialized views, which is pretty cool, as well as making it just easier to deploy.
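
As a brief aside for readers, a materialized view here is a precomputed query result that the engine stores and can refresh. The following is a minimal sketch using Trino-style SQL through the same kind of hypothetical Python client as above; exact syntax and support vary by connector and by Starburst version, and all names are made up.

```python
import trino

conn = trino.dbapi.connect(
    host="trino.example.com", port=8080, user="analyst",
    catalog="lake", schema="analytics",
)
cur = conn.cursor()

# Define a materialized view that precomputes a daily revenue rollup,
# then refresh it on whatever schedule the team chooses.
cur.execute("""
    CREATE MATERIALIZED VIEW lake.analytics.daily_revenue AS
    SELECT order_date, sum(amount) AS revenue
    FROM lake.warehouse.orders
    GROUP BY order_date
""")
cur.execute("REFRESH MATERIALIZED VIEW lake.analytics.daily_revenue")
```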

We started with a product called Starburst Enterprise that is self-managed, meaning customers have to run it and manage it. That’s been very successful, but we just introduced Starburst Galaxy, which is intended to be super easy. And the beauty here of two products, we debated this a lot. Like, are we just pivoting? Or is this two products? What does this mean? And it is intentionally two products with different criteria. And what I mean by that is Starburst Enterprise is and always will be intended to be maximally flexible to deploy in your environment, whatever you have. So you’re a big bank. You’ve got Kerberos, you’ve got LDAP, you’ve got Oracle and Db2, and you’ve got all these different things. We’re going to make sure that Enterprise works for you within your environment. Galaxy is optimized for ease of use and time to value. It’s kind of the difference between Linux and your Apple iPhone, right? The iPhone is meant to be useful to even your grandmother, hopefully, so that even she can get value out of it. Linux, of course, is infinitely flexible. And that’s the way we’ve approached those two products.

Matt: Just to make sure that I and our audience are understanding Galaxy, how similar is the analogy to Mongo Classic and Mongo Atlas, where Atlas is the cloud version — it’s a managed service with ease-of-use kinds of dimensions to it. Is that a good analogy or not?

Justin: It is. I think it’s probably one of the best analogies. I would say Mongo and maybe Confluent are probably our top two role models in terms of balancing a self-managed enterprise product and a cloud product that are similar and different in important ways. To the point about Mongo being a great role model for us, we’re lucky enough to have Carlos Delatorre, the former CRO of Mongo, as an angel investor from very early on. I’ve learned a lot from him over the years. And then, we just hired as our CRO a guy named Javier Molina, who ran sales for that Atlas product specifically. And one of the reasons we were so attracted to him was because he understands that go-to-market motion, and we think that’s going to be really big for us in terms of the market. Today we do very well in the large enterprise. We think that this technology could be applicable to thousands of customers, not just hundreds of customers.

Matt: That is a great hire, because that Atlas product, from a standing start four years ago, now represents more than half of all of Mongo’s sales. It’s just incredible to see what the team at Mongo has done in that way. But maybe take us a little bit into the decision to build and then launch Galaxy, and how that’s additive to both your existing customers and how it opens the door to some new customers.

Justin: I will preface by saying, and some of the audience may know this, we started Starburst as a bootstrapped business. We didn’t actually raise venture right away. And that’s important context because I loved that part of the company’s history, and I recommend it to any founder who’s able to get a business off the ground that way initially. The one drawback, of course, is you don’t have the capital to go make huge technology bets necessarily. Right? We were funded by revenue. We were a profitable, cash-flow-positive business. So the moment that we did raise venture, a couple of years into it, that’s when we said, “Okay, we’re going to build this SaaS solution.” So, one part was that it takes capital to build a SaaS solution, and that was an important trigger. The other motivator, though, which kind of gave us confidence that this would work out, is that we were very early in making our self-managed product available on AWS Marketplace. And the reason I mention AWS Marketplace is that it was a self-service way of buying and consuming our product.

Now, it’s not a SaaS solution per se, but it is a self-service way of transacting, deploying via a CFT, and using our technology. What was very interesting to us is that we launched that when we were, I don’t know, 20 people, a bootstrapped, tiny little company that nobody had ever heard of. And we did it mostly just because we thought the marketplace was interesting. It wasn’t necessarily any genius idea, although it maybe looks genius in retrospect. But what we saw with that was organic adoption. We didn’t market our marketplace offering. We didn’t push our marketplace offering. We weren’t doing any outbound back then, and we saw more and more people start to use it. What was really interesting about that was not only was it growing on its own without us really doing anything to it, but it also had a very long tail of customers. And that was what kind of told us: Okay, we’re obviously having a lot of success with the Fortune 1000, but there are companies using our stuff that I’ve never heard of before. And that’s super exciting.

Matt: Yeah, that’s awesome.

Justin: And so, for us, that was the signal that there was a market beyond what we were seeing at that point in time.

Matt: I would imagine that is, especially since it was a self-service offering, so, you know, somebody had to have some degree of technical acumen to kind of stand it up and run it. Were they most often then running it in the cloud? I guess, in theory, I could buy it in the marketplace and then operate it on my own desktop, too.

Justin: I think that’s true in theory, but you’re right that it required some heavy lifting on their part. It was a real effort A) to find us and B) to deploy this, to stand it up and manage it all on their own. To us, it was kind of like, imagine how many people might use it if we could make this easy. And that was the motivation for Galaxy.

Matt: Say a little bit more about how it’s been working with, you know, the big cloud service providers to go to market with Galaxy.

Justin: It is actually available on all three major public clouds. And we designed it that way from the start. But they’re great partners. And look, I’ll preface by saying of course there’s going to be some coopetition and overlap, because every cloud provider has an enormous portfolio of products. So there are overlapping points. But at the end of the day, the field organizations, the sellers, just care about driving consumption of those clouds.

And that’s what we do. You know, the more queries you run on Starburst, the more AWS compute or Azure compute or Google compute you’re consuming. So, they’ve been great to partner with that way. And the marketplaces, going back to that point, turn out to be a great transaction vehicle. I can’t stress this enough for any aspiring entrepreneur: get your Ph.D. in marketplaces. And by the way, there are a lot of ecosystem partners now that help you with that, like Tackle, for example.

Matt: Are you finding, I mean, I’m sure there are differences, that there is naturally better alignment with your products and the kind of customers you’re trying to reach between the different cloud service providers, or is it too early to tell?

Justin: Well, I think we partner with all of them. We enjoy working with all of them. If I was going to maybe single one out just a little bit, I would say that I think Google’s philosophy or approach to the market is interesting to me and well aligned to some of our own fundamental beliefs.

And what I mean by that is I think Google, as the challenger in the market, acknowledges, understands, and embraces that they’re never going to own all of the data in the world. And that’s important, at least important, I think, for me, and important for customers, because they’re willing to approach the market from the standpoint of not necessarily saying everything has to be in Google, creating more freedom for customers to basically do different things in different clouds. They’re much more, I guess I would just say, open to the fact that it’s a heterogeneous world, which is a very core aspect of what we believe.

Matt: And so, to that end, do you find, whether it’s in Google or otherwise, that when I deploy Galaxy in somebody’s cloud, and I’m running it in the cloud, I’m querying data sources that are back on-premises as part of the queries that I do? Or is it strictly the data that’s living in different data repositories or in a data lake in the cloud?

Justin: It can be either one. And that’s part of the power, I think, for customers: that flexibility, that optionality, that ability to modernize their architecture before they migrate. We’re not saying don’t migrate, but we’re saying we can give you access to everything you want today. And then you can migrate at your own pace, which I think is very powerful. And just to close on the Google point, we just announced a partnership that allows Google customers to leverage BigQuery to access data in different clouds, different data sources on-prem, etc., effectively extending beyond Google. And I think that’s an important thing to note as well.

Matt: I do think that this whole thing about data and really workload migrations, you referenced it a couple of times. You know, you and I have lived in the cloud and data world for decades now, and it seems like it’s still relatively early innings, but what are you seeing from a customer perspective, especially the enterprise customer, on their kind of cloud migration journey?

Justin: I will preface by saying it varies. Some are further along in that journey. Some are just getting started. I think one of the biggest things that I find interesting, and really try to drill into when I’m talking to customers, is to what degree they think they are going to consolidate all of their data into one place. Because what I have seen, and I think this is a risk, so if there are any potential customers listening to this, keep this in mind. Customers have a fantasy, and I can understand why you would like this fantasy, of saying, “Oh cool, we’re going to turn off all of these different databases that we have, this total mess that we have on-prem, and we’re going to just get it all into one cloud data warehouse.” And I’m not picking on Snowflake. I’ve heard the same story repeated with every one of the cloud data warehouses out there. My word of caution would be, we’ve seen that movie before over 30 or 40 years, and the more you do that, the more you’re beholden to a particular vendor, which is going to get expensive for you. What I like to remind people is, all these new companies are very charming and attractive today, but Larry Ellison was charming in 1979, and how many of you are still charmed by him today would be my question.

Right? So just be careful in that. Think from a long-term perspective. Create a future-proofed architecture — those would be just some of our pieces of advice.

Matt: That’s good advice. It might be one thing to say I’m going to retire your old employer’s, you know, Teradata data warehouse in favor of a more modern cloud-based data warehouse. But I do think it’s highly unlikely and ill-advised to think that you’d ever have all your data in one data store to rule them all, as it were, for all kinds of reasons. But that, I think, brings us to this data cloud alliance. I note that Google is a part of that, Databricks, Confluent, several others. What was the genesis? What are you trying to accomplish there in service of your customers?

Justin: It’s around trying to create openness, freedom for customers to be able to work in an interoperable fashion across the different clouds that they may participate in. This is another maybe fantasy that I’ll mention. A lot of companies, I think particularly those early in their journey, will say, no, no, no, we’re just doing one cloud. We’re not doing multicloud. We’re just doing one cloud. It’s all going in cloud X. And the reality is that changes very quickly. One of the fastest ways that changes is when you make an acquisition. You just bought a new company, and they’re on cloud Y, so now you’re multicloud, whether you want to be or not. We have a vested interest in trying to give customers choice and the freedom to operate across these different clouds. And I think Google is very forward-thinking in embracing that as well.

Matt: That leads to an interesting question. I mean, I like to think that infrastructure as a service, or kind of the core elements of cloud service providers, was an abstraction layer effectively on top of hardware, to kind of oversimplify it. But is there a new abstraction layer emerging that maybe we could think of as data lakes, data lakehouses, cloud-native data warehouses? Or how do you think about that layer of abstraction relative to infrastructure, and then relative to the applications on top of it?

Justin: Abstraction is such a powerful vehicle, I think, for application developers, for anyone building an architecture. Abstraction gives you a lot of freedom to change the components underneath, of course. For us, what we’re obviously most interested in is being that abstraction layer for SQL-based access to all of the different data sources that you have, so that you have the freedom to change those pieces. Maybe it’s Hadoop and Teradata today and tomorrow it’s S3 and Snowflake — great — so long as your applications, your BI tools, everything that speaks SQL, are pointing to Starburst. And then you have the ability to make those changes underneath around storage, and effectively commoditize storage, which is also very powerful for customers. And there is an emerging name, or a category, if you will, that we’re pretty excited about, which is this notion of a Data Mesh, which is really speaking to this idea of decentralized data and creating a mesh that works across it. Now, that comes back to one of the first things you said on this podcast — there’s a sociological component to it. In fact, the creator of this concept is a woman named Zhamak Dehghani. And if anyone’s interested, I encourage you to buy her book. Actually, we’re giving it away for free on our website. But she describes it as a socio-technical movement, if you will, which is to say it is people, process, and technology altogether. But we think we can be the technology to enable that. The people-and-process side is very interesting, because part of what that enables is the opportunity to decentralize not just access to data, but also decision-making and ownership of the data. So, this is kind of a way of putting more power in the hands of the data producers — the ones who are responsible for that data and know the data the best, to also participate in the creation of data as a product that can be shared and consumed by others in the organization. So, it’s a really interesting philosophy, one that we see certainly gaining a lot of attention, and I think it will gain more and more momentum over time.
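
For readers who want a concrete picture of that SQL abstraction layer, here is a minimal sketch (not from the conversation) of a single federated query that joins a table stored in a data lake with one living in an operational database, issued through a Trino-style engine via the trino Python client. Catalog, schema, and table names are hypothetical and depend entirely on how the connectors are configured.

```python
import trino

conn = trino.dbapi.connect(
    host="trino.example.com", port=8080, user="analyst",
    catalog="lake", schema="warehouse",
)
cur = conn.cursor()

# One SQL statement spans two catalogs: "lake" (e.g. Hive/Iceberg over S3)
# and "crm" (e.g. a PostgreSQL connector). Swapping either backend later
# only requires repointing the catalog, not rewriting the query.
cur.execute("""
    SELECT c.region, sum(o.amount) AS revenue
    FROM lake.warehouse.orders AS o
    JOIN crm.public.customers AS c ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""")
for region, revenue in cur.fetchall():
    print(region, revenue)
```

The design point being illustrated is that applications and BI tools keep pointing at one SQL endpoint while the storage underneath can change.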

Matt: We touched on some of the technological reasons around the why now. Is there evidence of the why now on sort of these more sociological dimensions, and how much has the fact that we all had to live in a digital-only world for a while, and we now believe, I think we all do, that we’re going to be living in a hybrid working world, been part of the why now? Sociologically, are people saying, “Hey, we just gotta change so we can do more of a decentralized approach,” or am I just kind of speculating here?

Justin: I think that’s right. I think the things driving that, in my view, are, first of all, just the complexity of data sources. We’ve got more data. Everything is collecting data, right? As we’ve digitally transformed, and the pandemic has only accelerated this, we now have more opportunities to analyze and understand and make data-driven decisions. But to do that, it’s just not scalable for everything to always run through one team, one person, one brain. And that’s where I think decentralization is a great way of giving you velocity by delegating and putting more power in the hands of individuals. And I think, consistent with that, we operate in an ever more competitive world and companies have to adapt quickly. The speed of adaptation genuinely impacts your top line and your bottom line. So, I think these are some of the things that are driving serious thought around it.

Matt: That’s well said. I have just a couple of fun questions as we wrap up here, but I just wanted to see if there’s anything else that we didn’t cover. That’s important about what Starburst is trying to accomplish.

Justin: I would just say, you know, at the end of the day, what we’re trying to do, and I hope this doesn’t sound cheesy, but we want to do the right thing for our customers. We want to be on the right side of history. And one of the things that motivated me to found Starburst in the first place was that in my time in the database industry, up to that point, I met a lot of customers who just felt very trapped, locked in. They weren’t making their own technology choices. Those choices had already been made, and they were stuck with them. They were living with them. Philosophically, this notion of freedom is just core to what we’re trying to do. I think you’ll continuously see that in all of our design decisions. We want to be able to support multiple data sources, multiple data formats, and be able to operate anywhere. We want customers to be in control, and we think that’s a slightly different perspective than many in the database world, at least, have historically had.

Matt: One other thing that I was curious about is use cases around taking that freedom and distributed, decentralized approach, and then using some of those data sources to help train models from a machine learning perspective. Are you seeing growth in those kinds of use cases that Starburst could help unlock?

Justin: Yeah, absolutely. And I always try to be clear that obviously we don’t do machine learning. We don’t train machine learning models, but I think we’re a very important partner in that process, because you need the data to train the model, and the more access to data you have, the better your model is going to be. And so, getting data is the first step to ML and AI. And we think we’re an important part of that.

Matt: We agree. And that’s why we were delighted; I mean, it was a very strong endorsement of you all being in this enabler bucket for the Intelligent Application 40, and we certainly see and know about those kinds of use cases. A fun question: outside of your company, what’s a startup that you’re most excited about that’s related to this broader world of intelligent applications?

Justin: That is a great question. I think Clari is a really interesting example of this. Clari is really the interface that I’m using to understand my business because it ties in all the important aspects of what we’re doing and provides not only a great summarized view, but also predictive analytics about where we’re going to end up. And particularly as you scale, being able to forecast is so critical, especially on the path to an IPO, which we hope we’ll be able to achieve in the next two to three years.

Matt: So, you’ve now been a successful founder, built two companies. Starburst is still a work in progress, but you’re doing incredible things. What’s a lesson or two for those in the audience who are either on their own startup journey or considering the startup journey, lessons that have been really valuable to you, whether they’re from your first-hand experience or advice from others or a combination?

Justin: Oh man. There’s a lot I could say there. I think, first of all, the advice that I give to any entrepreneur at any stage in the journey, particularly those who are just thinking about maybe being an entrepreneur: I think the single most important attribute is strictly perseverance. You have to have a high pain threshold and a willingness to push through that pain, because this is not for the faint of heart. It is not easy. I think some people are just built for that. They have the stubbornness, the drive, to push through it, and others get overwhelmed by it and bogged down. So, that’s kind of a look-inside-yourself type of thing to evaluate and consider. The piece of advice I will give is one that I heard myself. I actually asked a now-public-company CEO founder, “Does this ever get easier?” Because as you’re building, you always think, okay, at some point it’s just going to get easy, right? Like I’m going to be relaxing on the beach, this thing’s going to run itself. And he said, “No, it’s just different kinds of hard.” And that stuck with me, because particularly as you scale, every new chapter has been a new challenge in a totally different way. That’s part of what’s amazing about startups, I think, just from a personal growth perspective. You are always having to improve yourself, scale to the next level. And so, that really stuck with me. It never gets easier, just different kinds of hard.

Matt: Different kinds of hard. I love that. I don’t know if I’ve heard it phrased that way, so I really appreciate you sharing that with us, Justin, and yes, you’re always building new skills for the next phase of the journey, too, and having to let go of things that you used to do more of so that you can empower others and scale the organization. It has been an absolute pleasure, Justin, visiting with you, and it’s incredible what Starburst has accomplished and the role you play as an enabler of all kinds of data analytics, including those things that go into building machine learning models and intelligent applications. So, thank you very much for taking time with us today, and I look forward to seeing the continued success of Starburst.

Justin: Thank you, Matt. I sincerely appreciate it. It’s really been my pleasure.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you’d like to learn more about Starburst, they can be found at Starburst.io. To learn more about IA40, please visit IA40.com. Thanks again for joining us and tune in, in a couple of weeks for Founded and Funded’s next spotlight episode on another IA40 winner.

RunwayML Co-Founder Cristobal Valenzuela on the Intersection of Art and Technology

RunwayML, Cristóbal Valenzuela

In this episode of Founded and Funded, Madrona is launching a special series to highlight some of its IA40 winners, starting with RunwayML, which offers web-based video editing tools that utilize machine learning to automate what used to take video editors hours if not days to accomplish. Madrona Investor Ishani Ummat speaks with Co-founder and CEO Cristobal Valenzuela all about where the idea came from, how he decided to launch a company instead of joining Adobe – and even how TikTok fits into all of this. Listen now to hear all about it.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, and this week we are launching a special series to spotlight some of last year’s IA40 winners. Today, Madrona investor Ishani Ummat is talking to Cristobal Valenzuela about the web-based video editing tool RunwayML. It all started as a research project inside NYU using an algorithm to stylize and colorize images in Photoshop, but Cristobal now sees Runway as an opportunity to not simply improve how things have commonly been done, but rather leapfrog an entire industry. And the company secured a $35 million Series B in December to work toward that goal. With that, I’m going to just hand it over to Ishani and Cristobal to dive into it.

Ishani: Hi everyone. My name is Ishani, and I’m delighted to be here today with Cristobal Valenzuela, the CEO of RunwayML. RunwayML is building a web-based, real-time video editing tool with machine learning, and last year RunwayML was selected as a top 40 intelligent application by over 50 judges across 40 venture capital firms. We define intelligent applications as the next generation of applications that harness the power of machine intelligence to create a continuously improving experience for the end user and solve a business problem better than ever before. Runway is a story I love — re-imagining creativity with machine learning. And I can’t think of a more interesting conversation to kick off our IA40 spotlight.

Cris, thank you for joining us today.

Cris: Thank you for the invitation. I’m super happy to be here.

Ishani: I’d love to start off with your thesis project actually at NYU. That’s sort of the basis for this company. Take us back to that time. What led you to this idea? Why did you start working on it? And did you know you wanted to start a company?

Cris: So, the short story about Runway is — I’m from Chile, and I moved to New York five years ago. And the reason I moved was that, at the time, I was just fascinated with things that were coming up in the computer vision world. I’m coming from an econ background and had no experience building deep learning models before, but the things I was seeing, specifically around computer vision and generative models five, six years ago, just blew my mind, and it blew it so much that I decided to move to study this on a full-time basis at NYU.

So at NYU, I basically spent two years just doing a deep dive into how to really take what was happening, specifically after, I would say, ImageNet and AlexNet, when a bunch of really impactful and big milestones in the computer vision world started to emerge, and apply it inside creative and art domains. And the reason was, I think we’re just touching the surface of what it would really mean to deploy algorithms inside the creative practice. The reason I wanted to explore that was just, I knew something was happening. I knew something was about to happen, but yet no one was doing it.

So why not just do it yourself? Um, no, I didn’t know if I wanted to start a company, but by the time I was building the thesis, it became more of an organic direction that we took, when I realized that my research was way more impactful than I had originally thought. Specifically, when you’re doing research in an academic situation, you’re always constrained, and the bubble is always perfect. You have all the perfect conditions. But when I started applying some of the things I was doing inside school to the outside world, I immediately realized that industry experts, VFX people, film creators, artists, and designers were like, “Hey, I’m interested in this. I want to use it.” And so that kind of sparked the conversation of — “Oh, maybe we should think about this as a company.” And then yeah, it started from there.

Ishani: Was there an aha moment in that journey, as you’re talking to people and they say — “Oh yeah, interesting research, but I don’t actually know how to apply it”? Was there one moment that you can take us back to where you said, “Oh, wow. This is actually so significantly bigger, and it’s a company, not just a project”?

Cris: I mean, when we started the first research projects in school, they were more about taking image segmentation or image understanding models and video understanding models and applying them within creative domains. So how do you take someone who’s working in Photoshop and help them understand how the software could basically be a bit smarter in terms of understanding what the person is actually trying to do, what the intent of editing an image is, and see if you can have an algorithm or a system that assists you in that editing? So, we built a bunch of experiments and integrations in Photoshop and Premiere. And the ideas were very simple. Like, let’s see, for instance, if I can help you stylize or colorize or edit an image faster by using some very simple algorithms. And again, it was more of, let’s see if this is interesting for these creators. And when I realized there was definitely something here was when I posted a few tweets around, like, here’s a prototype, anyone interested in trying this? And I remember the amount of inbound interest I got from professional photographers, people working in film, people working in ad agencies, very organically just saying, “Hey, I’ve been struggling with this for years. Can you help me cut something that took me weeks of work down to 10 minutes? I want to learn more.” That’s when we were like, okay, there’s something definitely happening within the scope of creative domains, and so we should go deeper.

I guess there was one moment in particular where I really thought I should try to do it myself. When I was presenting Runway as my thesis at NYU, someone from Adobe was on the panel. And two weeks after my presentation, they basically offered me the chance to join Adobe, to build all the things that we were building at Runway as part of their new AI team. I was two years into New York as an immigrant, with the perfect dream company offering you the dream job with a visa and the perfect salary – it is just the dream. When I thought about it at that time, I remember my mom was visiting me, and she was asking me, “What else do you want? It’s perfect – everything makes sense, rationally. Why would you not take that? Everything you want is there.” But I couldn’t say yes. My gut, my intuition, was like, I can’t do it. If I am doing this, if I’m going to build this thing, I need to do it. And I want to have control of how it’s built. And so, there was the decision of having the offer and having the capacity to jump in and be like, “Hey, I’m going to take this. This is a safe solution.” Versus, no, I really want to try and build it on my own, and even if I fail, I fail, but at least I tried.

So for me, that was the moment where I was like, OK, something has to happen: either I go and build inside a company, or I try to build on my own, because I hadn’t raised any capital. I’ll try to see if I can sustain living in New York with no money for a couple of months until I figure this out. I think that motivation of, like, okay, I’m going to try to prove that I made the right decision by not taking it, by not going to Adobe, was something that I guess motivated us to do it.

Ishani: That’s an incredible story. Can you talk to us a little bit about the technology that underpins Runway? You know, many of the models that you reference and leverage weren’t even around five to seven years ago. We’ve all spent time editing, whether it’s home videos or in Final Cut Pro and the range in between. Getting that mug out of the background, or even being able to remove the background from an image, was such a huge feature in Microsoft PowerPoint for everyone out there who makes slides on a daily basis, and translating that to video seems like an order of magnitude more difficult. Tell us a little bit about the step change in technology that really enabled the core product of Runway to exist.

Cris: Totally. I think there are a bunch of megatrends on which Runway sits today. We’re seeing new video content platforms emerging over the last couple of years. And so, the need to create more video has become more obvious for creators, for ad agencies, but also for companies in general. Every company is becoming some sort of media company. They’re creating content all the time. Everyone’s producing their own podcast, their own YouTube shows. The way that software to create content has evolved and been developed over the last 10, 20 years is, I would say, still based on an old paradigm of how media works. Like, if you open Premiere, if you open Final Cut, those were software made to make ads for TV. And so the limitations and the constraints and the configurations are all set up for 10 years ago, right? But if you speak with anyone creating content today for YouTube, for TikTok, for Instagram, the volume and the quantity and the type of content are very different. And so that’s the first megatrend: How do you think about new tools for the next generation of creators? So, within that, where ML really comes in and where the differentiator of Runway is, we see a few things happening. First, the emergence of the web as a creative medium. I think Figma and Canva have proven this.

The web is such a collaborative space that you need to be able to build things on the web if you want to collaborate with more people, if you want to move really fast, if you want to not be constrained by limitations from hardware and desktop. I guess, to your question about ML in particular, we built it in a way that the video platform, the video rendering, the video encoding itself is entirely ML driven. By that, we mean that every single process in that media pipeline that is either tedious, time-consuming, or very expensive to do, we can automate via this kind of pipeline of algorithms. And so, things like you were saying, like removing an object from a background, have historically been very tedious to do in video making. It’s a process known as rotoscoping. And it’s been in film and video since as early as video existed. Yet, it’s extremely expensive. So, we thought about it. If that’s a primitive principle, for instance, of video-making, how do you make it so it’s accessible, extremely fast, on the web, and not a manual, tedious process but automatic, as fast as possible? So, we’ve built it taking those principles of what folks really want in video, simplifying to the core components, using these human-in-the-loop algorithms, and then basically helping you make video faster and better. And there are a lot of other components of video that we’re automating as well that basically help drive that motion forward to create more video as fast as possible.

Ishani: I love that you frame the company as being built off of megatrends but then focus on the specific use cases. There’s a broad range of use cases here that I hear you talk about, whether it’s an individual creator or, you know, a professional photographer. And so it seems quite widely applicable. When you think about some of the research work that you’re doing and the capabilities of making machine learning more accessible to each of those ranges of end users, how do you actually go about picking and choosing the sort of machine learning models that drive it?

Cris: I would say that, going back to 5, 6, 7 years ago, a lot of the computer vision and ML models started to become more relevant and commonplace. A bunch of things were also built around that time, like the infrastructure to deploy models. And we’ve seen the emergence of the MLOps community in general: tools and systems that monitor your training process, tools to deploy models to production, tools to optimize models for different devices. There’s a lot that has happened to basically help drive these models into production. And we’ve seen that in robotics and self-driving cars. Those algorithms are becoming more predominant than ever before, basically because we’ve invested, as a community of ML folks or ML companies, in that infrastructure. And so, for us, it’s the realization that we don’t have to build infrastructure ourselves. You can take off-the-shelf solutions to help you deploy the models into production environments, with millions of users, in real time, for instance. The core component, I would say, is not spending too much time on that infrastructure, given that it’s already been built. It’s more like, what’s the unique problem that you’re trying to solve here? If we think about that, there are two ways you can take that approach. One is just looking at open source.

The ML community in general has been built a lot on top of open source. And so there are a lot of ideas that are really interesting. You can borrow them, you can build on top, and you can contribute as well. We do it a lot. We publish. But when it comes to production, getting things to the level of perfection that your customers really want is a whole other beast. That requires a different mindset. For instance, going back to the rotoscoping example. Video segmentation is a task that has been approached in very different ways on the research side. But when you speak with someone doing video, even if it’s a professional VFX artist or filmmaker or some casual creator, the way they think about it is completely different. At the end of the day, as a creator, you don’t really care what model runs behind the scenes. I think a lot of people might want to overemphasize the need to show you how the algorithm works and demonstrate its capabilities. But if you just focus on the customer, people really just want to remove the objects from their backgrounds. And so with that in mind, a lot of it comes from automation: how do you build a robust segmentation model? How do you build it so it works really well within all of these constraints, but at the same time, how do you involve the user input in that process? So half of it is research on ML and the other half is a lot of user research. How are you doing this today? How are you actually doing a background removal process? Some people might use Photoshop or some very complicated-to-use tools. Some other people may use some sort of automation by building their own tools, and you’re trying to really understand what that actually means. So you build a solution that, specifically within creative domains, is never fully automated.

Cris: I’m a big believer that you’re never going to find a tool in the creative space that does everything for you. That’s just a dream. That’s a Utopia. Nothing in the creative world works like that. So any solution that says, just input here and do nothing because the machine will do it for you?

It’s just a complete mistake and totally would not work. So, for us, it is more about: you have a problem, you have an insight, you need something to be done. Here’s a system that we built on research that helps you, but we also understand what you require, how you work with the device, and how you work within that loop, as we call it.

Ishani: And, you know, you could argue that if a machine was doing all of it, it isn’t really creative, inherently. Do you lose that aspect, that sort of intangible aspect of creativity? So much to unpack here. You talked about the infrastructure layer. We call those enablers in intelligent applications, where there’s this whole system of the Databricks of the world, the DataRobots, and all these other companies out there — Grafana, Monte Carlo — that sit at the enabler level and create the ability for folks like you, RunwayML, to build endpoint applications much faster and better than before. Some of that’s in the open-source community, as you say. And some of that is actually company-based, but it removes the infrastructure layer from every intelligent application that has to be built. And being able to capitalize on that, I think, has made a huge impact on endpoint applications like Runway. And then you think about bringing that to the product. So much of what you talk about is around accessibility. You know, new technology adoption – so much of it is related to how accessible that technology has become. And so, in the academic sense, these machine learning models and development and rendering and all these sorts of technical terms don’t feel very accessible to creators, particularly the demographic that you’re targeting. But built into a low-code/no-code video editing tool, it really does.

So, the classic question is browser versus application, and you talked a little bit about why you’re in the browser and how it’s become so much more of a collaborative and creative space. What are the other decisions you’ve made along the way to make the Runway experience, specifically getting machine learning into the hands of creators at a product level, more accessible for new users, borrowing from things like the workflow of Final Cut Pro or some of the other tools that are out there? Tell us about those decisions that you’ve made along the way.

Cris: There are a lot of things that come into this conversation. The first one is, we’re always thinking, in terms of the company, about build versus buy. If I want to build and deploy models to millions of users, I don’t have to build a whole backend infrastructure and don’t have to own the instances. You just plug into the whole infrastructure that has already been built. And that’s so good because you can focus on the key differentiators of your company. What are the things that are unique to your product that will help your customers do more?

So, for our customers, what they want is just to create more video faster. And for that, we basically take existing primitives from the video space. We stay really close to professional software, to people who have been working in the industry for years, to try to understand what you’re actually trying to do in your workflow and how something like an automated system could help you, but also open the doors for other folks who would never have been able to do that thing before to do it as well. And when you think about that, you think, OK, we need to build on top of the infrastructure. We need to allow the new generation of creators to tap into what making video is. The web becomes such an important aspect of that, mostly because it democratizes access to complicated and sophisticated tools, like professional video, in a way that I don’t think we’ve seen before.

There are a few things that are really important. The first one is that the need for hardware gets reduced to zero. A lot of our users are on Chromebooks, on Windows laptops, on iPads. It’s really hard to edit video on any of those devices if you don’t have a powerful GPU machine. So, for a lot of people, that’s not a limitation if you have that compute capacity. But if you’re a small shop or a small business, or if you’re a small ad agency or even a big ad agency, you still have that limitation on hardware. The web just completely reduces it to zero. Basically, you’re connected to our cloud. You have that endpoint. And since we already have that GPU cluster running the models, you’re able to access not just one GPU machine, you’re able to access a lot. And so if you want to export hundreds of versions of your video, that’s possible. And I think the second really important aspect of the web, and why we decided to build on the web, again building on the accessibility point, is collaboration.

When you think about video creation today, you can think about people editing video, like video creators themselves, video editors. But video encompasses more than just the people doing the actual editing. It involves the managers. It involves the viewers and the designers. If you’re building a brand, and you have design assets and files, and someone is building a video, how you share those assets with that person, or with that team, really matters. So video becomes like a central hub of collaboration as well. And the web facilitates that at a rate that’s impossible in any other kind of environment. And so, for us, it’s considering those aspects as well when deciding how and where to build a platform. And aiming at and investing in the web has been a long-term goal for us. A lot of the things we’re doing right now in the video space on the web haven’t been done before, so we’re working really closely with the Chrome team, with the Google team, on some of the new standards that they’re developing to make sure editing 4K footage with 10 layers at the same time feels as native as possible. And I think Figma has already proven this in vector UI design. You can run things natively, or even better than native, on the web. And now we’re actually starting to see this in video as well, which is a bit more complex in terms of latency and interactions, but we’re definitely getting there.

Ishani: That’s awesome. You talk about cloud computing as a big enabler, and again this collaboration concept. Multiplayer on the web is this next generation of collaboration, and you’re right, Figma, Coda, Notion, and Canva have made collaboration and multiplayer inherent, and I think a lot of the applications that don’t have that multiplayer component are proving to be much more difficult to use, especially within teams and within the remote and hybrid kind of world that we’re entering. Figma and Canva — you mentioned them. They really, to me, started to pave the way to this multiplayer concept — web-based — but also this concept of low-code/no-code and being able to set the precedent for using machine learning, using technology, in a much more accessible way for a non-technical user.

Do you think of that as one of the big trends that’s enabled and paved the way for you and Runway?

Cris: So, when we think about no code for us on the ML side of things, we actually think a lot about how we take these models, these very complex pieces of software with hundreds of thousands of connections and systems, make them work really well and robustly, and turn them into really consumable, easy-to-digest, simple solutions as an interface. Making sure that you build interfaces that are programmable or accessible and customizable. I think, in a way, it becomes a commodity. It’s a system that you build, it’s proprietary, you develop it, but your customers are less concerned about the internal aspects of how it works and more concerned about the output, right? And so, when I think about Webflow, for instance, and I think about web design in general, like Squarespace or those kinds of companies building democratizing, no-code solutions for building websites, you really care about your customers just building really good websites. Right? How the CSS and the JavaScript endpoints work on the backend is not really useful to them, unless you’re helping them solve a business use case. And so, you don’t really expose those kinds of things.

Ishani: That’s great, framing it as exposure. I hadn’t quite thought of it that way before, but it does make sense. You’re masking sort of the code, and you can expose components of it where it matters and where it’s a variable that people want to influence. But where it’s not, and you learn a lot of this through user testing, you can mask it. Tell us a little bit about the process for that user testing. I mean, so much of what you’re talking about is really driven by your end user. And it seems like you’re really in touch with who that is and how you’ve learned a lot from them. What does the process for that look like? I think it’s so important as you iterate on early product and early builds. And when you launch a new feature, you know, in your case Green Screen, for example, what’s the process you go through for user iteration and feedback?

Cris: I love that question. I think a few things are important. The first one is a lot of times your users don’t actually know what they want.

Ishani: They just know they have a problem, but they don’t know how to solve it.

Cris: Exactly. So, if you ask them what they want, the answer will not necessarily be the best solution. That’s the realm of knowledge they have today. In a way, no one was ever asking for an automated rotoscoping solution, because no one thought that was possible. When you start developing technologies or delving deeper into things that haven’t been done before, it’s really hard to make comparisons to how this has been working before, because no one has done it before, so it’s really hard to have a benchmark.

And so, when you ask people what’s a pain for you in the video space, a lot of people will tell you, hey, rotoscoping, extremely painful. So, what do you want? Well, I want a better brush so I can do my mask five times faster. And so, I could be like, great, I’ve listened to you. I’ve built this thing, now you’re working two times faster. Do you like it? And it’s great. I like it. But the moment you mention, “Hey, I can actually automate the whole thing for you. Just literally type a word.” And this is true. We have this as a beta that we are going to release really soon, where you can type a word. Let’s say you have a shot of a car and a tree. You can type “car.” Then we have a model that understands the objects in that video, understands the car, creates the mask for you, and extracts the mask immediately. And so, you’re not editing anymore with frames, you’re editing with words, right?
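
To give readers a rough sense of the kind of building block behind “type a word, get a mask” (this is a generic illustration, not Runway’s actual model or pipeline), here is a minimal sketch that uses an off-the-shelf instance segmentation model from torchvision to pull a mask for a named object class out of a single video frame. The file name and the partial label mapping are hypothetical.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Partial COCO label mapping for illustration; the pretrained detector
# predicts integer class ids, which we match against the typed word.
COCO_NAMES = {1: "person", 3: "car", 8: "truck"}

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("frame_0001.png").convert("RGB"))
with torch.no_grad():
    pred = model([frame])[0]  # dict with 'labels', 'scores', 'masks'

wanted = "car"  # the word the user typed
keep = [
    i for i, label in enumerate(pred["labels"])
    if COCO_NAMES.get(int(label)) == wanted and float(pred["scores"][i]) > 0.5
]

# Union of all confident masks for the requested class, as a boolean H x W map.
mask = (pred["masks"][keep].sum(dim=0) > 0.5).squeeze(0) if keep else None
```

In a production system like the one described, the model, the word-to-class matching, and the frame-to-frame tracking would be far more sophisticated, run server-side on a GPU cluster, and keep the user in the loop to correct the mask.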

It’s really hard for a customer to tell you, “Hey, I want this thing.” But the moment you show it to them, they’re like, “Oh, it’s insane. I want this. It is not only helping me move twice as fast. It’s helping me move a hundred times faster.” So, a lot of the user research, in a way, is listening to your customers and listening to your users, but actually trying to really hear their pain. Okay, what are you actually trying to say when you’re saying these things? Is it that the tool itself is the problem, or is it more that the process is broken? If the process is broken, and you as a product person know the technology, know the skills of your team and what’s possible today, how do you build quick prototypes and solutions that can help you figure out if that’s actually something worth investing in and building?

Cris: So, we do a lot of that. We listen a lot. We understand our customers, and we understand people who have never used Runway before. We interview them a lot, and we try to distill, okay, what are the fundamental things happening here, and how would we build them with the set of technologies that we’ve been developing over the last couple of years?

Ishani: Right — and from the end-user standpoint, it’s just not in the realm of possibility to augment their workflow so much with automation; maybe incremental baby steps. But as you say, the 100X just doesn’t fall within the imagination of someone using a video tool, to take it all the way to, for example, text-based video editing. That’s in the realm of researchers at OpenAI doing GPT-3 work and DALL-E and all those image processing things. So it’s about being able to really distill down a pain point, but then using your imagination to come up with a solution.

Cris: And that’s a lot of prototyping as well. Basically, coming up with ideas and testing those ideas with your customers as quickly as possible before building really robust and technically complex solutions. So, I guess to your point, for instance, more on the generative side, something we’ve been spending a lot of time on is generative models, diffusion models, transformers applied to computer vision. The thesis there is that we’re probably going to start seeing more video content being entirely generated. So, think about stock footage or stock video, right? It used to be the case that you had to either shoot something or buy that footage from a platform like Getty Images, and that’s a really expensive process, both because the asset itself is super expensive to buy, but also because the asset might never actually be the perfect asset that you want. There are some things you want to change, the color isn’t right, I want that person but in a different position. It’s so complicated. And so, we’re approaching the point where you’re actually going to be able to generate those things, generate that stock footage, that footage in general. So, when you ask people how they want to create or work with assets, with templates, with custom content, they might ask you, “Hey, I want a better search for my stock footage library.” But the moment you have DALL-E or other models that are able to generate realistic content, the conversation completely changes. You’re not marginally improving a process. You’re leapfrogging a whole industry. You’re like, okay, this was the way people used to operate.

Now, this technology is enabling you to think in a completely different way. The questions you're asking yourself are so different. And so having that is something we've always had in mind, and we're also betting on that over the long term.

Ishani: Incredible. Yeah. That leap from video editing to transformer-model-augmented video editing is massive, right? Transformative from a technology perspective, but massive beyond it. Just, how do I make that leap? It requires the technology and the examples to say, oh, I can use transformer models in this process. We can talk about transformer models forever. Maybe take us to the moment where that started to make sense for you as a business.

Cris: I think the moment we started seeing this as an interesting research technique was the moment people understood that you can apply it not just to tokens, but to pixels themselves. We use some of these techniques for our models behind the scenes, but in general, I'm less of a fan of any specific technique, because techniques tend to move really fast and something else will happen. And so, I think it's important, when you see those trends coming up, to see how they can fit your product or your needs. But at the same time, don't fixate too much on a specific technique, because a new technique might come up that's better. And the ability to switch and learn from what's better will always pay off versus spending too much time developing something and then being unable to adjust when a new approach comes; then it's going to be hard. I mean, the space moves so fast. The ML space is moving so fast that something published just four months ago has already been superseded, so keeping track of that is, I think, the most impactful thing. On the research side of Runway, we use a lot of different approaches, from transformers to more generative work to more traditional computer vision as well. Again, always with the aim of: how do we help you make video faster?

Ishani: That's a really great insight: being nimble across a rapidly evolving technology field. And even if you just zoom in on transformer models, on how large these models have become and how many parameters they've been trained on, even over the course of the last 12 months, the chart is absurd, right? And to your point about building a business on top of some of these platform technologies, or what will evolve to be platform technologies, being nimble across methodologies is so key.

Cris: One hundred percent. Because at the same time, there are a lot of things happening where, as you mentioned, those models themselves are great research insights, but try to productionize a model that has 2 billion parameters for a million users. You either have a budget of a million dollars a second, or it's impossible to do. Right? So it's great. Fundamentally, it's moving the field in such an interesting way, and there are new techniques. But again, if you're thinking about how to put it into a product, that's a whole different conversation.

Cris: So always trying to balance those things for us is really important.

Ishani: So how do you straddle, then, the business side of Runway and the research side of Runway?

Cris: We don't see them as different worlds. It's part of the same thing. Research at Runway is just applied research for the product. As a researcher at Runway, you work really closely with the design team and the engineering team and everyone else to really figure out if there's something we can do. There's a cost, a literal cost in compute, that needs to be considered when you're developing something. There's the feasibility question: is this something we can actually build in a reasonable amount of time? There are performance trade-offs and all of these things that you have in mind when you're thinking about applying research to a product.

Perhaps if you're in a more formal academic context and you're just doing research, you're not constrained by those things. I mean, when OpenAI was building GPT-3, they were not thinking about deploying it for video domains with millions of visitors; they were thinking, this is an idea, let's see if it works. And then people start building on top of that. Now there's a lot of pruning and ideas that can come in to make it more efficient and faster. But still, if you look at OpenAI's pricing model for GPT-3 today, it's still very expensive to use. And it's a language model, so video is way more expensive. And so, we're less concerned about how we push the field so far that it opens all these doors for expression, and more about being pragmatic: research is the product. It's the same thing. Just make sure it works inside our environment, where users can actually get value out of it. That's how we want to think about it, I guess.

Ishani: I love that: research as product. Let's zoom out a little bit. When we look at the rhetoric around RunwayML, you've talked a little bit about this confluence of code and art, and it's not often that we see companies at this intersection. Talk a little bit about that conceptually, what it means to you and your customers. One question I'm curious about is, you know, did it make it harder to raise venture funding back in the 2019-2020 timeframe because you were sitting at that intersection with that framing?

Cris: One thing I will clarify is, when I came to NYU, I went to art school. I spent two years in an art school while also working and taking classes in computer science. It's a unique kind of arts program inside NYU called ITP. It's a program that's been running for 40 years, and it sits at the intersection of technology, arts, and design. You can think about it as a hacker space. You can just be there working on whatever you want, any kind of topic that involves technology and art and design, and take classes from any department in NYU. And so you're surrounded by really smart people from all sorts of backgrounds and ideas and skills, and you're building interesting creative projects, just building things because you're interested in exploring. When we started the company, we started doing the research inside this program. It was a way for us to just have fun. We just enjoyed doing this: experimenting with this technology, building our projects, and then showing them in galleries or in spaces or online. Seeing what was coming out of it, that's what drove us. When we started seeing that the interest was coming from more than just artists, but also companies and filmmakers and creators, it was like, hey, we should actually take this outside of an experimental art approach and productionize it to make sure that we can deliver on the promise of transforming how content is created.

Was it challenging to raise capital at that time with that kind of experimental art narrative? I don't know. It's difficult for me to benchmark because, again, I had only been in New York for two years, coming from a totally different country and culture. So, I didn't really know at that time what raising actually meant. I was more of, hey, we just need to start this company. A bunch of VCs and investors had already started to reach out, so we built a process, and it actually took us about four weeks to raise. It was really fast, I think. Thinking back to that time, we never had a deck. We just showed a demo, and everyone immediately understood how it worked. I guess my advice from that time would definitely be to build demos. Build things, more than just decks.

Cris: Now that I look at it, having raised a few more rounds after that, it was interesting to see that we come from a background and set of skills that are not common in, I would say, most venture-funded companies. Most of the members of our team have an art practice or a creative background. Our engineers are artists themselves; many studied art as their primary field and then became engineers after. And I think that drives a few things. First of all, culture. The culture of Runway is very creative-driven, very artist-driven, and that fits perfectly with the product we are building. We're really thinking about creativity, thinking about content, thinking about creative tools. And when you're an artist yourself, you're building in a way for yourself. You know, you understand this type of user.

Ishani: You frame it as the intersection of art and technology, art and code. As you're articulating, there's so much opportunity at the intersection of, you know, technology and X. I think that's where we're super excited about the next generation of applications, the ones that maybe we haven't all thought of yet. So, we're excited to see the success that you've had and all the continued progress. You know, building a culture of creativity in a technology company is inherently both easy and difficult.

And so being able to do that and then continuing to scale it is so exciting for us to see.

Cris: Yeah. And it's been a great way of attracting really great talent. The intersection of art and technology is something that has grown a lot over the last couple of years, and there are a lot of interesting and talented engineers and designers and people in general sitting at that intersection, wanting to really think about how to apply these technologies to art-making, to creative work. So, Runway has become that spot where you can just come and help us build that kind of reality, in a way. And yeah, I'm really excited to continue doing that.

Ishani: Cris, thanks so much for walking us through the business. We're going to end with three lightning round questions, as we do across this series of podcasts, that have a little bit less to do with your business specifically and more to do with where you sit in the ecosystem. So, aside from your own, what startup or company are you most excited about in the intelligent application space, and why?

Cris: That's a good question. I'm really excited about companies that are verticalizing ML in kind of niche domains. We started using this company called SeekOut for recruiting a couple of months ago, and it's been so transformative for us, specifically for finding talent. I'm excited about companies like Weights and Biases as well, on the research side: how do you make sure that, within your problem, you can help your team move faster by identifying what needs to be done and running experiments faster? So, any company that's thinking about long-tail use cases and about the optimizations that let you run some of these algorithms or platforms; those are the companies I'm excited about.

Ishani: Incredible. And what a great segue to the fact that SeekOut is going to be our next podcast. Okay. Question number two. Outside of artificial intelligence and machine learning to solve real-world challenges, where do you think the greatest source of technological disruption and innovation will be over the course of the next five years?

Cris: I guess I'm a bit biased about this, but I would say it will come from non-domain experts diving into domain-expert fields. The barriers to entry for a lot of technologies have been considerably lowered, and so you have people who are able to build in domains that perhaps are not their own domains of expertise and bring in insights and thoughts and ways of working and ways of thinking that are completely new. The misfits of those spaces, for me, are where a lot of the transformation will happen. For us, it was like, we're coming from an art background, from a creative perspective, and we're changing how video works in businesses, right? We have so many insights and so many ways of thinking about the product and the ecosystem that perhaps people in the industry today are not really thinking of. And that's just so unique and such a differentiator that I'm really excited to see more people jumping in between different kinds of domains and backgrounds.

Ishani: Right, this concept of accessibility begets innovation.

Cris: Yes, exactly.

Ishani: Question number three. What is the most important lesson, perhaps something you wish you did better, that you've learned over your startup journey so far?

Cris: Oh, well, a lot. Perhaps a good way of summarizing all the learning is that in order to build a great product and a great business, the rate of learning really matters. How fast you are learning as a company and as a team and as a product, how fast you are learning about your customers, how fast you are learning about the industry, about the competition, about the market, about technology. That rate of learning, how fast you can do something you've never done before, experiment with it, learn as much as possible, and adapt, is really, really important. And something I've seen a lot from other companies, and it has happened to us as well, is that it's easy to get stuck in something that you've realized, quote, "works." But then something happens and you're not able to adapt. And so, just have that mentality of always learning; learning never stops in every single domain of the company. Always keep on learning as much as possible, and then everything else will come.

Ishani: I love that. In the same way you're always launching your product, you're always learning about how to build a company.

Cris: Exactly. Always.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you'd like to learn more about Runway, you can find them at RunwayML.com. To learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for Founded and Funded's next spotlight episode on another IA40 winner.