Founded and Funded – Deploying ML Models in the Cloud, on Phones, in Devices with Luis Ceze and Jason Knight of OctoML

Photo: Luis Ceze

OctoML on Octomizing/Optimizing ML Models and Helping Chips, Servers and Devices Run Them Faster; Madrona doubles down on the Series A funding

Today OctoML announced the close of their $15 million Series A round led by Amplify Partners. Madrona led the seed (with Amplify participating) and we are excited to continue to work with this team that is building technology based on the Apache TVM open source project. Apache TVM is an open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators that the founders built several years ago. OctoML aims to take the difficulty out of optimizing and deploying ML models. Matt McIlwain sat down with Luis Ceze and Jason Knight on the eve of their Series A to talk about the challenges with machine learning and deep learning that OctoML helps manage. Listen below!

Transcript below:
Intro
Welcome to Founded and Funded. My name is Erika Shaffer and I work at Madrona Venture Group, and we are doing something a little different here. We’re here with Matt McIlwain, Luis Ceze, and Jason Knight to talk about OctoML. I’m going to turn it over to Matt, who has been leading this investment. We are all super excited about OctoML and about hearing what you guys are doing.

Matt McIlwain
Well, thanks very much, Erika. We are indeed super excited about OctoML. And it’s been great to get to know Luis and Jason over many years, as well as the whole founding team at OctoML. We’ll get to their story in just a second. The one reflection I wanted to offer is that this whole era of what we think of as intelligent applications has been building momentum over the past several years. We think back to companies we were involved with like Turi and Algorithmia and, more recently, Xnor, and now I think a lot of those pieces are coming together in the fullest way with what OctoML is doing. But rather than hear it from me, I think you’ll all enjoy hearing it from the founders. So I want to start off with a question going back, Luis, to the graduate school work that some of your PhD students were doing at the University of Washington. Tell us a little bit about the founding story of the technology and the Apache TVM open source project.

Luis
Yeah, absolutely. First of all, I would say that if you’re excited, we’re even more excited, and super excited about the work you’ve been doing with us. The technology came to be because of an observation that Carlos Guestrin and I had a few years ago, actually four years ago now: there were quite a few machine learning models becoming more popular and more useful, people tended to use them, and there was also a growing set of hardware targets one could map these models to. So when you have a great set of models and a growing set of hardware targets, the question was, “Well, what’s going to happen when people start optimizing models for different hardware and making the most out of their deployments?” That was the genesis of the TVM project. It essentially became what it is today: a fully automated flow that ingests models expressed in a variety of the popular machine learning frameworks, and then automatically optimizes them for chosen deployment targets. We couldn’t be more grateful to the big open source community that grew around it, too. The project started as open source from the beginning, and today it has over 200 contributors and is in active deployment in a variety of applications you probably use every day from Amazon, Microsoft and Facebook.

Matt
I think that our listeners always enjoy hearing about founding stories of companies and your founding team, and principally some of the graduate students that you and Carlos had been working with. Maybe tell a little bit about that and then it’d be great to have Jason join in since he joined up with all of you right at the beginning.

Luis
Absolutely, and that is a great way of looping Jason in at the right moment, too. As TVM started getting more and more traction, we held a conference at the end of 2018 and had well over 200 people come, and we were like, “Oh, wow, there’s something interesting happening here.” It was one of those moments where all the stars aligned: the key PhD students behind the project, including Tianqi Chen, Thierry Moreau, and Jared Roesch, were all close to graduation and thinking about what’s next, and I was also thinking about what to do next. Jason was at Intel at that time, was really interested in TVM, and was a champion of it on the Intel side. He then said, “Oh, it turns out that I’m also looking for opportunities.” So he came and visited us, we started talking more seriously, and the thing evolved super quickly from there. And now you can hear from Jason himself.

Jason
Yeah, my background is actually as a data scientist. Through a complicated backstory, I ended up at Intel via a silicon hardware startup acquisition. I was running a team of product managers looking at the software stack for deep learning and how a company like Intel was going to make inroads here and continue to impress and delight its huge customer base, and I was helping to fund some of the TVM work as a result. Despite my best efforts at Intel, pushing the big ship a few degrees at a time towards these new compiler approaches to supporting this type of new workload and new hardware targets, it was clear that the traction was already taking place with the open source TVM project, and that was where the action was happening. So it was natural timing and a natural opportunity for something to happen here, in terms of not only Intel’s efforts but, more broadly, the entire ecosystem needing a solution like this, and the pain points I’d seen over and over again at Intel of end users wanting to do more with the hardware they had available, the hardware that was coming to them, and what needed to happen to make that realistic. That was a natural genesis for you, me and Luis to talk about this and make something happen here.

Matt
That’s fantastic. And of course we had known Jason for a little while at Madrona, and we’re just delighted that all these pieces were coming together. Hey, Luis, can you say a little bit more? You had that first conference in December of 2018 and then a subsequent one in December of 2019. It seemed that not only was the open source community coming together, but folks from some of the big companies that might want to help somebody build and refine their models, or deploy their models, were coming together too. That’s kind of a magical combination when you get all those people working together in the same place.

Luis
Yes, absolutely. As I said, the conference that made us realize something big was going on was December 2018, and then a year later we ran another conference. By that time OctoML had already been formed – we formed the company in late July of 2019 – and by December we already had the demo of our initial product that Jason showed at the conference. At the December 2019 conference, we had pretty much all of the major players in machine learning present, both those that use it and those that build for it. We had, for example, several hardware vendors join us. Qualcomm has been fairly active in deploying hardware for accelerating machine learning on mobile devices, and they had Jeff Gehlhaar there, on the record saying that TVM is key to accessing their new hardware called Hexagon. We had Arm come and talk about their experience building a unified framework for machine learning support across CPUs, GPUs and their upcoming accelerators. We had Xilinx, a few others, and Intel come and talk about their experience in this space. What was also interesting during that conference was having companies like Facebook and Microsoft talk about how TVM was really helpful in reducing their deployment pains – optimizing models enough that they can scale in the cloud, and so that they can actually run well enough on mobile devices. This was very heartwarming for us because it confirms our thesis that a lot of the pain in using machine learning in modern applications is shifting from creating the model to deploying it and making the best use of it. That’s really our central effort right now: to make it super easy for anyone to get their models optimized and deployed, by offering our TVM-in-the-cloud flow. Maybe Jason can add a little bit to that from the product side.

Jason
Yeah, it’s great seeing the amount of activity and innovation happening in the TVM space at the TVM conference. But it’s clear that there’s still a long, long way to go in terms of better supporting the long tail of developers who maybe don’t have the experience that some of these TVM developers do, in terms of getting their model, optimizing it, and running it on a difficult target like a phone or an embedded platform. So yeah, we’re happy to talk more about that. We actually just put up a blog post detailing some of the things we talked about at the TVM conference, and we’ll be giving out more details soon.

Matt
Yeah, I think what’s interesting, if I think about it from a business perspective, is that on the one hand you have all kinds of folks, with different levels of skills and experience, building models, refining their models, and optimizing their models so that they can be deployed. And then you’ve got this whole fragmented group of not just chip makers, as you’re referencing, but also the hardware devices those chips go into, whether that’s a phone or a camera or other kinds of devices that can be anywhere in a consumer or commercial sense. What’s interesting to me, and what I like about the business, is that you guys are helping connect the dots between those worlds in a kind of simplified, end-to-end way. It would be interesting to spend a little more time on the Octomizer, your first product, specifically, but also more generally on what you’re trying to do in connecting those worlds.

Jason
Yeah, definitely. So one way to look at this is that we’ve seen a lot of great work from TensorFlow from Google and PyTorch from Facebook and others on the training side – creating deep learning models and training them from data sets – but when you look at the next step in the lifecycle of a machine learning model, there is a lot less hand-holding and far fewer tools available to get those models deployed into production devices, when you care about things like model size, computational efficiency, and portability across different hardware platforms. This sits right at one of the difficulties of the underlying infrastructure and how it’s built: the dependence on hardware kernel libraries. These are handwritten, hand-optimized kernel libraries built by each vendor, and they are somewhat holding the field back and making it more difficult for end users to get their models into production. TVM, and the Octomizer that we’re building on top of it, make it easier to just take a model, run it through the system, look at the optimized benchmark numbers for that model across a variety of hardware targets, and then get that model back in a packaged format that’s ready to go for production use – whether you’re writing a Python application, you need to wring out every bit of performance with a C shared library and a C API, or you want a Docker image with a gRPC wrapper for easy serverless access. That’s what we’re building with the Octomizer. It’s designed to be a kind of single pane of glass for your machine learning solutions across any hardware you care about. And then we build on top of that with things like quantization and compression and distillation as we move into the future.
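For readers who want a concrete picture of the flow Jason describes, here is a minimal sketch using the open-source Apache TVM Python API that the Octomizer builds on – a hand-rolled version of the optimize-and-package step, not the Octomizer itself. The model file, input shape and targets are illustrative.

```python
# Minimal Apache TVM flow: import a trained model, optimize it for a chosen
# hardware target, and export a deployable artifact. Names are illustrative.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("resnet50.onnx")  # any ONNX-exported model
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Pick a deployment target: an x86 server, a CUDA GPU, or an Arm phone, for example.
target = "llvm -mcpu=skylake-avx512"   # alternatives: "cuda", "llvm -mtriple=aarch64-linux-gnu"

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Package the optimized model as a shared library a Python or C application can load.
lib.export_library("resnet50_deploy.so")
```

The Octomizer automates this flow, plus the tuning and benchmarking across many targets, behind a hosted service, which is what makes it approachable for the long tail of developers Jason mentions.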

Luis
A couple more points to add to that. Those are definitely important, and they are the very first step we’re taking. One interesting thing to realize about what we’re doing here is that TVM really offers the opportunity to provide a unified foundation for machine learning software on top of a wide variety of hardware. By unifying the foundation one uses to deploy models, you also create the perfect wedge to add more machine learning Ops capabilities into the flow. People are starting to realize more and more that regular software development has enjoyed incredible progress in DevOps, but machine learning doesn’t have that yet. So we see the Octomizer as a platform: we start with model optimization and packaging, but it’s the perfect point for us to build on to offer things like instrumenting models to monitor how they’re doing during deployment, helping understand how models are performing, and essentially providing a complete solution of automated services for machine learning Ops.

Jason
One of those applications is also training on the edge, in the sense that training is no more than a set of additional operations. With a compiler-based approach, it’s quite easy to add that extra set of ops and deploy them to hardware. So getting to things like training on the edge is on target for us in the future as we look forward here.

Matt
That’s great. Well, I want to come back a little bit to the prospects side, but I’m super curious: we talked about the company name OctoML, and we talked about the product name Octomizer. How did this all come about? How did you guys come up with the name? It’s a lot of fun. I know the story, but for most of the folks here, what’s the story?

Luis
Okay, all right. I’m sure Jason and I can interleave here because there are multiple angles. It turns out that both Jason and I, and other folks in our group, have an interest in biology. Nature has been an incredible source of inspiration in building better systems, and nature has evolved incredible creatures. When you look around and think about a creature like an octopus, you see how incredibly smart they are. They have distributed brains, they’re incredibly adaptable, and they’re very, very smart – plus they’re happy, light-hearted, creative creatures. So this is something that resonated with everyone. It really stems from the octopus, and a lot of what we do now has a nautical theme. We have the Octomizer, and you’re going to hear more in the future about things called Aquarium and Kraken and Barnacles, which are all part of our daily communication, which keeps it super creative and light-hearted. All right, Jason, maybe I talked too much. It’s your turn now.

Jason
I guess one thing to point out is that we really applied a machine learning slant even to our name selection, because of the objective function, or set of regularizers, we applied to the name selection process itself: it needed to be relatively short, easy to spell, easy to pronounce, somewhat unique but not too unique, and then have all these other associations that Luis was mentioning. Those were definitely in the objective function as we worked through the process. It also rhymes with “optimal.” So yeah, it took us a while to get there, but we were happy with the result.

Matt
I think you guys did a great job. And I also like the visual notion that, even though they’ve got distributed brains, there is this central part of an octopus and then it can touch anything. So it’s kind of a build-once, run-many-places image that flows through – but maybe I’m stretching it too much now.

Luis
No, that is an excellent point. We do think about TVM, being at the core of our technology, really being a central place that can touch multiple targets in a very efficient, adaptable and automatic way. That’s definitely within the scope of how we’re thinking as well. So, great.

Jason
And octo is eight, as in eight bits in a byte, a core computational primitive and power of two.

Matt
Very good. Coming back to the open source community: you guys have, partly because of your academic backgrounds, been involved in the open source community in other ways as well. So how are things working within the Apache TVM community alongside OctoML? It’s a very important time in the life of both, and I’m curious to get your thoughts on that.

Jason
Yeah, we really see OctoML as doing a lot of the work that needs to be done in the open source community – eating our vegetables. We’re currently ramping up the team to put more of that vegetable-eating spirit into the TVM project, helping pitch in on documentation and packaging and all those things that need to be done but are difficult. Open source is known to attract people who scratch their own itch and solve their own problems, but these less glamorous tasks often go undone for long periods of time. So we hope to be a driving force in doing a lot of that, and of course to work with the community more broadly to connect the dots and help coordinate the larger structural decisions that need to be made for the project. All of this is being done under the Apache Foundation umbrella and governance process, so we’re working closely with the Apache folks and continuing to work smoothly under that umbrella.

Luis
Yeah, just to add a couple more thoughts here: we are contributing heavily to the Apache TVM project in multiple domains, as Jason said, and we think this is also very fortuitous for us. One could go and use TVM directly to do what they want, but as they start using it, they realize there are a lot of things a commercial offering could do – for example, make it much more automated, make it plug and play. TVM’s core idea from the start was a research idea – using machine learning for machine learning optimization – and that can be made much more effective with the right data, which we are helping to produce as well. So we couldn’t be happier with the synergy between the open source project, the open source community, and what we’re doing on the commercial side.

Jason
Also, one thing that’s been nice to see is that in talking to users, or soon-to-be users, of the TVM project, they’ll say, “Oh, it’s great to see you guys supporting TVM. We were hesitant to jump in because we didn’t want to jump in and then be lost without anyone to turn to for help. But having someone like you, knowing that you’re there for support, makes us feel better about putting those initial feet on the ground.” So that’s been really nice to see as well.

Matt
Now, that’s really interesting. We are recording this at a time when, in fact, we’re all in different places because we’re in the midst of the Covid-19 crisis. I’m curious on a couple of different levels: one is with the open source community, two is with some of the folks that are interested in becoming early customers, but thirdly, with your team – how are all those things going for you all, working in this environment? Certainly there are companies like GitLab and others that have had lots of success both working as distributed teams and working with their customers in a distributed way. What are some of the early learnings for you all on that front?

Jason
Well, since TVM started as an open source project, a lot of us have that distributed, collaborative blood flowing through our veins to begin with. Working remotely in a distributed, asynchronous capacity is part and parcel of working with an open source community. So luckily, both that community and us as a company have been relatively untouched on that front.

Luis
Oh, absolutely. When we started the company, we were heavily based in Seattle, but Jason is based in San Diego, and from the start we grew more distributed – we hired people in the Bay Area, and we had people in Oregon on the team. It’s working so well, it’s been so productive; we were very fortunate and lucky that we already started somewhat distributed to begin with, and now it’s serving us really well. We had great investors sticking with us, and funds arriving right at the moment when we need to continue growing. In fact, we are hiring people in a distributed way – just yesterday another person we really wanted to hire signed and joined our team. So we are fully operating in all capacities, including interviewing and hiring, in this distributed way, and I haven’t noticed any hit to productivity whatsoever. If anything, I think we’re probably even more productive and focused.

Jason
And on the customer side, I would say it’s been a mixed bag. There are customers that have some wiggle in their direction or roadmaps here and there, but there are also customers seeing orders-of-magnitude increases in product demand because they’re serving Voice over IP or something to that effect that’s heavily in demand in this time of need. So it just depends, and luckily there haven’t been any real negative shifts there.

Matt
Yeah – you guys, I’ve really been blown away by your ability to attract some incredible talent to the team in just a short period of, I don’t know, seven or eight months of really being a company, and I get the sense that momentum is just going to continue. So congratulations on that front. I’m curious on the customer front, to pick up on what you were saying, Jason: what are you finding in terms of customer readiness? I think back to even a few years ago – it seemed like it was almost still too early. There was a lot of tire kicking around applied machine learning and deep learning, and people were happy to have meetings, but they were more curiosity meetings. It seems like there’s a lot more doing going on now. But I’d be interested in your perspectives on the state of play.

Jason
Yeah, I would say it’s more than timing, it’s variance, in that we see a huge range. There are customers that have deep pain today, who needed the computational costs on their cloud bill down yesterday because they’re spending tons of GPU hours on every customer inference request that comes in. And then you have really large organizations with hundreds of data scientists trying to support a very complex set of deployments across half a dozen or dozens of different model and hardware endpoints. So there’s a lot of pain from a lot of different angles, and it’s spread across the set of value propositions that we have: performance, ease of use, and portability across hardware platforms. It’s been really nice to see – we were just talking to a large telecom company the other night, and there is just a huge amount of demand. It’s also really nice to have the open source ecosystem, because it’s a natural funnel for picking up on this activity: we see someone coming through using the open source project and talking about it on the forums, we go have a conversation with them, and there’s naturally already a need there, because otherwise they wouldn’t be looking at the open source project.

Luis
Yeah, and just one more thing that I think is interesting to observe: there is an indication that it’s early, but already big enough to have serious impact. For example, we hear companies wanting to move computation to the edge not only to save on cloud costs but to be more privacy conscious. Right now, as you can imagine, with a lot of people working from home, all of a sudden we see a huge spike in computational demands in the cloud, and we have some reason to believe that a lot of that involves running machine learning models in the cloud. Companies will have to reduce that cost and improve the performance, because otherwise there’s simply no way to scale as fast as they need to. So we’re seeing this spike in demand for cloud services as another source of opportunity for us.

Jason
Also, one thing I’m excited about is the embedded side of things. There’s pent-up demand there, but essentially there hasn’t been much activity in machine learning on the embedded side because there haven’t been solutions people can use to deploy machine learning models onto embedded processors. Being able to unlock that chicken-and-egg problem – crack the egg, essentially, have a chicken come out and start that cycle – and really unlock the embedded ML market is a really exciting proposition to me, as we get there through our cloud, mobile and embedded efforts.

Matt
And I think that’s what we saw too, having been fortunate to provide the seed capital with you guys last summer into the early fall, and to really be alongside you from day one on this journey. I’m interested in two things. One is that, in retrospect, you all made this decision in the early part of this year that there was enough visibility, enough evidence, that you were going to go ahead and raise a round. That’s looking like it was well timed now, but maybe talk a little bit about why you decided to do that. And then the second question is: what are you going to do with this $15 million that you’ve just raised? What’s the plan in terms of growing the business side of the TVM movement?

Luis
Yeah, absolutely. As I said, it was incredibly well timed, by luck and by good advice as well. At that time, what motivated us was that we had an opportunity to hire incredible people, and it was happening quite fast – we were actually more successful in hiring than we could have hoped for in the best case. So it was, why not: in this climate, when we have interesting, amazing people to hire, we just go and hire them, and we need resources for that. That was the first reason to do this early. And now, as Jason said, we’ve started to engage with more customers and get our technology into customers’ hands, and that immediately puts more pressure on us to hire more people to make sure our customer engagements are successful. So we’re going to staff that up and make sure we have the right resources to make them successful. Also, as we go to market and explore more theses on how we build a business around the Octomizer, that requires effort. That’s what we’re going to use the funds for: to grow our machine learning systems technology team and our platform team, because what we’re building here is essentially a cloud platform to automate all of this – a process that requires a significant amount of engineering. We’ve been very engineering-heavy so far, naturally, because we’re building the technology and we are very much technologists first. But now is the time to beef up our business development side as well, and that’s where a good chunk of our resources is going to go.

Jason
Also, one thing to point out is just where the TVM project sits in the stack. In terms of having the capability to support pretty much any hardware platform for machine learning, you’re talking about dozens of hardware vendors, silicon vendors, and then basically being able to cater to any machine learning and deep learning workload on top, whether it’s in the cloud, mobile or embedded. You’re talking about a huge space of opportunity, and that’s just the beginning – there are extensions upstream to training and downstream to post-deployment, and there’s classical ML and data science as well. Each one of these permutations is a huge effort in itself, so just trying to take even small chunks of this huge pie is a big engineering effort. That’s definitely where a lot of the money is going at this point.

Matt
Well, we’re really excited and honored to be continuing on this journey with both of you – and not only the founding team, but of course all the talented folks that you’ve hired. I think from a timing perspective the fundraise was well timed, but from a market perspective, the role that you all are trying to play and the real problems that you’re trying to solve are exceptionally well timed. So we’re looking forward to seeing how that develops in the months and years ahead.

And we’re excited to be here. Thanks, Matt.

We couldn’t be more excited. Thank you for everything.

 

Founded and Funded: Building an Open Source Business from Scratch with Eric Rudder and Joe Duffy of Pulumi

This week we are publishing a non-Covid-related podcast – recorded before Seattle was hit hard. We hope it provides some relief! We are also prepping some great podcasts that deal directly with how founders and people are managing through this time – and looking to the future. Stay tuned for those!

A few quarantine essentials: non-perishable food items, toilet paper and the Founded and Funded podcast. Founded and Funded is back with Episode 5 of Season 2. In this episode, Madrona Managing Director S. Somasegar sits down with the founders of Pulumi, Eric Rudder and Joe Duffy.

During their time at Microsoft, Eric and Joe found the most joy in building something from nothing in the form of 1.0 products. Along their journeys, they paid very close attention to industry inflection points, which helped them time the perfect jump. However, before they could take a bet on their idea, they had to take a bet on each other as co-founders.

Listen in on their conversation as they chat about the time they spent together working at Microsoft, the promise of open source technology, and their experience building a company that empowers both its employees and its customers. Looking for insight on how to time your leap into entrepreneurship? We have that too!

Full Transcript

Erika

Welcome to Founded and Funded. This is Erika Shaffer from Madrona Venture Group. Today we’re going to hear from Eric Rudder and Joe Duffy, who are the founders of Pulumi. Pulumi helps developers create, deploy and manage modern cloud infrastructure. They speak with Soma about their time at Microsoft, where they all worked together and knew each other, and how they are applying what they learned there to startup life. This was recorded before the onset of the Covid-19 crisis, so it has a little bit of a light-hearted tone. I hope you enjoy it. Listen on.

Soma

It’s really an exciting opportunity to chat with a couple of people that I’ve known for many, many years and have had the opportunity to work with over the years. I’m Soma Somasegar, a managing director here at Madrona Venture Group. Let me have Eric and Joe introduce themselves.

Joe

My name is Joe Duffy, I’m founder and CEO of Pulumi. Prior to this, I was at Microsoft, where I had the privilege of working with Soma and Eric for many years. I’m really fired up about developers – making developers productive – and really excited about what’s going on in the cloud, and that’s kind of why we started Pulumi. I’m excited to be here and tell the story of the journey.

Eric

And I’m Eric Rudder, founder and chairman of Pulumi. I’m equally excited. I think at Microsoft we used to say super excited. So I’m super excited to be here. And I think we’ll have some fun today.

Soma

Great. You guys have had very successful careers at Microsoft. You were there for many, many years – a couple of decades – and Joe, you were there for 12 or 13 years. Very accomplished, very successful. What made you guys decide to say, “Hey, I’m going to give up all that goodness and safety and become an entrepreneur,” and start the entrepreneurial journey with Pulumi?

Joe

That story – for me, I actually started my career when I was in high school. I started a little consulting business where I was helping companies get onto the internet, and that gave me exposure to some of the sales and marketing and customer relationship parts of a business; you really need to think about making a profit and making customers happy. So I actually looked at starting a company before coming to Microsoft, and then the opportunity to come to Microsoft arose and I knew, hey, I’m going to get to work with the best people in the industry, I’m going to learn so much. I figured I’d stay for three years and then go start a company, and every year I asked, is this the year? It turns out 12 years passed quicker than I could realize. Seeing things at scale, seeing innovation at a company like Microsoft, is just completely leagues beyond what you typically see at most companies – and frankly, I wouldn’t have met Soma, I wouldn’t have met Eric, if I hadn’t done that. But really, for me, I just wanted to get connected to the customer. A big company is great – there’s lots of funding to do really innovative, bold new bets – but the appeal for me of a business is that it’s a meritocracy: the idea succeeds because people pay money, because it delivers value to them. That economics aspect was always fascinating to me, and it’s been the biggest learning curve after leaving Microsoft as well – hey, you’re thrown into business and customers and sales and marketing and finance. And that’s been, for me, the best part of the journey so far.

Eric

I think my journey is probably similar to Joe’s. In high school, I was actually a manager in a hardware store, matching swatches of sofa fabric to custom paint colors and getting people the right size of lumber. So I guess you’d say both of us have helped people build things from a very early age. I went to Microsoft kind of on a lark, and it was a great career – I got to do lots of different things there. I got to work on developer tools, I got to work on research, I got to run business development at the company. It was a great experience with great people, and at its core Microsoft has always been a developer-led company in terms of its culture and audience and products. Then I kind of went through the typical kids-out-of-the-house, empty-nest thing, where you look at what you want to do with the next 20 years of your career. It’s always a challenge to build something from nothing, and at Microsoft I had the most fun and the most joy literally building 1.0 products, versus taking a product from 7.0 to 8.0. Not that there aren’t features I’d like to add to PowerPoint for its next version, but it’s always more exciting to establish a product or establish a category, and the opportunity was just too great. We started the company at a time when every application was becoming a cloud application. It was clear that a lot of the tooling was geared towards the previous generation of products, and we were just starting to get towards modern container infrastructure, modern computing infrastructure, multi-cloud, many clouds, connecting data sources. It was just clear we were at the right inflection point to go do something. I’m old enough that I’ve watched the inflection points – I’ve seen the inflection point from character to graphical, I saw the inflection point from client-based to client-server, I saw the inflection from client-server to internet. Once the industry closed in on cloud native, I said, okay, this time I’m going to lead from a different perspective rather than from Microsoft. Joe and I just started hacking away in his basement – literally what we call the winery, since one of Joe’s hobbies is making wine, so there’s a big wine barrel down there. We just started working together and it’s been a great ride ever since.

Soma

You know, one of the things that entrepreneurs always say is, hey, there are a lot of critical decisions that they have to make day in and day out, but one core, fundamental decision they have to make early on is who their co-founders are going to be. As much as you guys have had the opportunity to work together and get to know each other over the years, how did that decision come along in the journey, where you decided to take a bet on each other and say, “Hey, we are going to co-found Pulumi together”?

Joe

I worked with Eric at several points throughout the years, and when I look back, a lot of my fondest memories working at Microsoft were these 1.0 projects where we worked together to build something from nothing. So I had seen that happen before; I had had that experience. And honestly, we did hang out in my basement before starting the company to see, hey, can we build the thing? Can we figure this out? Can we figure out how one plus one equals three? I think that period really gave us the confidence that not only was it the right partnership, but also the right opportunity at the right time. And frankly, we were both ready to do it at the same time, which doesn’t happen very often, right? It was almost like the stars aligning that we were both ready to leave Microsoft, which frankly is kind of a scary thing. You’ve got an established career, you’ve got a network, you’ve got that financial safety – but both of us were at the point in our lives where we were ready to take that leap.

Eric

Yeah, I think after Microsoft we just literally decided, let’s work together and see how it goes. Let’s build something. We built little things that looked at Twitter streams and lit up light bulbs. Joe remembers that.

Joe

I do! During the Super Bowl one year, detecting who was talking about Beyoncé versus the actual game. Yeah.

Eric

We looked at doing some infrastructure products together, we looked at doing a martech stack together – and I think later we’ll probably talk about how we got to the idea for Pulumi, because the Pulumi idea wasn’t the first thing. We just thought, hey, we enjoy working with each other. We would start working in the morning, and we literally would take a break for lunch and cook lunch together – Joe and I are both kind of into cooking, though I won’t claim I’m quite the chef Joe is. But it is a way to see, do we enjoy social time together? Because startup life is sometimes intense and sometimes many hours during the day. So we enjoyed spending time with each other and were able to build the solutions we set out to build. I think that’s the most important thing, because it’s easy to just ideate and kick stuff around, but we were actually able to build stuff, get stuff done, encourage each other, trust each other. That was hopeful, and we were able to take the leap.

Soma

How would you describe what Pulumi is?

Eric

Yeah, I’ve said before that all applications are cloud applications, and Pulumi really enables developers, DevOps and DevSecOps teams to build better cloud applications faster. When we got started, as I said, we were literally incubating something different – we were working on a martech solution – and we got to the point where we said, okay, let’s stand this up in the cloud and see what the state of the art is for provisioning something, giving our customers the power to run it on the cloud they wanted or on-prem. And we found that there wasn’t any tooling to do this. So the oft-quoted “necessity is the mother of invention” really was the thing that got us going to fill the need – and there was absolutely a need. We talked to probably about 30 companies before we actually founded Pulumi or wrote our first line of Pulumi code, and there really wasn’t anything like it. Pulumi helps people provision, deploy, run and secure their infrastructure – we call it modern infrastructure as code – making sure that dev and DevOps can work together. It’s based on the idea of using real programming languages, so people don’t need to learn new special-purpose DSLs. They get to use their favorite editors, their favorite tools, their favorite integrations, their favorite package managers, their favorite versioning semantics. And we’re extending it to let people write rules about policy and security – all these things that people struggle to keep up with today, Pulumi makes easy for dev teams.
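To make Eric’s point about real programming languages concrete, here is a minimal sketch of what a Pulumi program can look like in Python. The cloud provider, resource and names are illustrative, not something taken from the conversation.

```python
# A tiny Pulumi program: cloud infrastructure declared in ordinary Python,
# using your normal editor, package manager and version control.
import pulumi
import pulumi_aws as aws  # illustrative provider; Pulumi supports many clouds

# An S3 bucket configured to serve a static website.
site_bucket = aws.s3.Bucket(
    "site-bucket",
    website={"index_document": "index.html"},
)

# Export outputs so teammates and other tools can consume them after `pulumi up`.
pulumi.export("bucket_name", site_bucket.id)
pulumi.export("website_endpoint", site_bucket.website_endpoint)
```

Running `pulumi up` previews and applies the change; the same style of program works in the other languages Pulumi supports, such as TypeScript and Go.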

Joe

Honestly, our backgrounds caused us to take a pretty radically different approach to the space. All those years of really obsessing over developer productivity, making developers happy, bringing them joy, giving them superpowers – that was the day job that got us out of bed every morning for decades. We brought that same attention to detail, that human care factor, of hey, we want this to be not only productive, safe and secure – we actually want to bring joy to the idea of building cloud software. That idea of really integrating the cloud development experience into the inner development loop is pretty novel, and Pulumi is a completely different approach than anybody else in the market today. It is open source as well, and that’s really important to us – we see a community, and community is really core to everything that we do with Pulumi.

Soma

You just referenced open source, Joe. You can’t think about developers without thinking about open source today, and most companies do something or other with open source in one of two ways. Either they start off saying, “Hey, I’m going to start an open source project,” and if the project becomes successful they think, “Let me think about building a company around that.” Or it’s the other way: “Hey, I’m building a company, I’m building a product, I’m building a service,” and then sometime down the road they decide to take a piece of what they’re doing and make it open source, for all the right reasons. But you guys decided early on, almost from day one, to take a different approach. You said, “Hey, we want to build a commercial entity here, and by the way, because we are focused on building something of huge value to developers, we want to think about our open source strategy – let’s keep both in mind from day one.” How did you guys come to that decision, and how has that journey been for you?

Joe

Yeah, we knew from day one it was really important that the core of the platform we’re building be open source, for a number of reasons. One, it encourages contributions. It encourages people who just want to understand how it works. It gives people confidence that if some part of the company changes, they’ve still got the project, they know the project, they’ve got the source code for the project. It’s table stakes for developers: the core technology they’re using, especially to author their own software, needs to be open source. So we knew that going in. The question was, we didn’t want to end up in the situation we see other folks struggling with, where they open sourced their business model – they didn’t think about how they were going to make money. They really focused on community and nothing else, assuming that eventually they’d figure out the monetization strategy, and we see time and time again that it’s not easy; a lot of these companies don’t figure it out. So we waited longer than we had to, to open source, until we had figured that out. And it’s interesting – there’s an Andreessen Horowitz article that talks about the three waves of open source monetization strategy: wave one is selling support, wave two is really the open core model, and wave three, it says – and this came out just a few months ago – is SaaS. We knew SaaS was going to be big, and there was a natural alignment with the open source technology. So what we did was say, hey, we’re going to open source the whole platform. It’s all going to be there; we’re not holding anything back. The thing that we’ll charge for is the SaaS. This is an increasingly popular approach, and it definitely comes with some challenges, but the benefit is that from day one we knew how the business would unfold and how the community interacted with those revenue opportunities.

Soma

Now that it’s been, what, two and a half years since we made that decision, are you still feeling great about it? Any learnings that you want to share from the process?

Eric

Yeah, I think we feel better about the decision every day. The other thing, just to build on Joe’s answer: we’re also living the lifestyle, right? As we were bootstrapping our company, we benefitted tremendously from open source – we’re a multi-platform, multi-language company, and quite frankly we could not have done it without leveraging the open source contributions of many. We often talk about standing on the shoulders of giants, and we definitely benefited from the people who came before us. It just seemed like the right thing to do – to contribute back fixes rather than fork, and to give credit where credit is due. That was the great thing for us. The other thing we wanted to do was make sure that we added value as a company immediately to people who were using the open source, so we decided to provide a service, and more importantly a free service. Even if you’re just using Pulumi casually from the open source distro, you can connect it to the Pulumi service for free, and we continue to give back to the community, nurture them along, and let them discover what’s unique about the commercial part of Pulumi. There’s no question it’s a tough boundary, deciding what’s in the free version and what’s in the premium version, and we’re absolutely trying to build a business with maturity. But we made a fundamental decision that the core value proposition of Pulumi is going to be open source, that we were going to continue to support the community, encourage community contributions, and be good members of the open source community at large. And that’s been a great founding decision, honestly.

Joe

I’d say the one other lesson learned to add is that it’s really important to be clear upfront with your community: here’s what we’re doing, and here’s why we’re doing it. I remember the day we launched our commercial edition – in the community Slack, we had heart emojis and celebration emojis. The community actually is rooting for you, provided you are open and honest about what you’re trying to do and why you’re doing it. The community wants you to have a sustainable business, because they enjoy the open source project you’re building and they want it to be funded for many, many years to come. So you’ll actually find that if you’re open about where that boundary is and why, people are super supportive.

Soma

Joe, you were part of some of the earlier open source efforts at Microsoft, and one of the big decisions that we made many years ago was for Microsoft to open source the .NET framework. Tell me a little bit about that.

Joe

You know, I was in the Windows organization, and one of the goals was to modernize the engineering systems and try to work a little bit more like open source projects do – being able to use components from open source. One of the challenges of proprietary source is that you’re always reinventing the wheel, always creating solutions to problems that people have already solved. So we were looking at ways to embrace that internally and actually share across the company as well; we had this notion we used to call internal open source, which was to unlock collaboration across all of Microsoft. No matter where you are, you can contribute, whether it’s a bug fix or a feature recommendation, or whether you’re just trying to apply security standards across the company – try to unlock that. So I had that mindset coming into the developer division, and someone asked me to start this project. To be honest, I was flabbergasted that we were going to do it, and super excited to start. The thing that was amazing was that it was more of a cultural transformation within the company than even a technology one. I remember my team’s PC renewal was coming up, so people had to get new laptops, and I said, hey, 50% of the people on the team are going to get MacBooks; they have to give up their Windows laptops. I can’t tell you the number of people that came to me and said, wow – do you realize that if you have a MacBook, almost everything we work on isn’t relevant to you? And most people doing development these days use MacBooks, which means most of what we’re working on doesn’t apply to most developers. So it really rejuvenated the team; there was a renaissance. And I see now, just years later, that the impact it had on the entire culture across all of Microsoft was super transformative. So the human part actually turned out to be the best part of that whole journey.

Soma

It’s been, what, about two and a half years since we started Pulumi, and every day is sort of a new day as an entrepreneur. What are some of the key learnings that you guys have had as part of the Pulumi journey in the last couple of years that you think other entrepreneurs would love to hear about from your experiences?

Joe

So one for me is that change is the one constant. My job changes month over month. We’re a little bit further along now, so I would say things have settled a little bit – the roles and responsibilities settle as you hire people into functions. But in the early days, you’re doing a little bit of finance, a little bit of product, a little bit of coding – and frankly, I was coding very well into the journey, even when we had customers – and doing marketing. I mean, you have to wear a lot of hats, and you have to be comfortable with the idea that how you spend your time next month is going to be very different from how you spend your time this month.

Eric

For me – I was fortunate to meet Colin Powell, and one of his favorite sayings is that a positive attitude is a force multiplier. You talk about good days and bad days; some things go your way and some things don’t, and it’s tough when they don’t. You’re a young company, no one’s ever heard of you. It’s hard to open a bank account, it’s hard to raise capital, it’s hard to get an internet connection at the right speed, it’s hard to find a place to work at a reasonable rate, it’s hard to hire versus big companies. Yet that core belief that, hey, there really is a market need for what we’re doing – we believe in it, we believe in the product, we believe in the community, we believe in the opportunity – that enthusiasm is infectious. There are times you just need to suspend disbelief, go forward and take that leap, and I give Joe a lot of credit for having that faith and leading us forward. I think that’s the thing you can never underestimate: just staying positive, and reflecting that one or two things may not have gone well, but look at all that we have accomplished. Both Joe and I, by personality, are not the greatest in the world at stopping and celebrating what we’ve accomplished in the week – I think both of us are much more likely, in our status reports, to write the lowlight section before we write the highlight section. And yet, don’t minimize what you accomplish: just the decision to take the leap, to leave the company, to start your own thing, to build the first version. All these famous quotes about how you should be ashamed of your first version – they’re all true. And yet, just having the belief that overall we’re going to do it, we’re going to succeed – people build off that, your customers feed off that, your partners feed off that, your employees feed off that. It’s a super important thing to do, and that’s a learning I think I’ve internalized by doing a startup.

Joe

We were talking about that one-plus-one-equals-three kind of thing. I mean, that is something that Eric does have to remind me about, and I’m super happy he does, because it’s really important. You have these people that believe in the vision, they believe in the company, and things aren’t always rosy. You have to figure out a way to celebrate those successes, no matter how small, to keep people pointing in the right direction and feeling inspired – feeling like, yeah, last week we had a few setbacks, but you know what, we’re going to do this thing and I still believe. It’s tough as a founder because you’re definitely shouldering a lot of these burdens – like the financing. You want to share, because employees want to know if things aren’t going well; you don’t want to hide that from people, but you have to frame it in an appropriate way. That’s definitely a challenge, but it’s very important.

Soma

If I’m looking at it from the lens of, hey, I would love to learn from any mistakes that Joe and Eric have made – are there things that you wish you had done differently?

Joe

Yeah, so one that stands out is a mistake that I frequently make and that Eric frequently corrects me on. There’s this notion, especially if you’re building a venture-backed business, that you need to be a quote-unquote hyperscale company, right? It has to be up and to the right all the time; you can never fall short of 10x-ing your growth, or whatever crazy growth metrics. And it’s really easy to get caught up in survivorship bias: well, this great company over here grew 10x year over year for the first three years, so if we don’t do that, we’re a failure. I think the real key is to be intentional and focus on what matters. There are a lot of vanity metrics that people get caught up in. Really, it’s about having that slow, steady growth and being responsible about how you’re funding that growth, so that you actually do measure return on investment. If you’re not 10x-ing your growth, it’s easy to think that money will solve the problem, right? Hire more people, do more PR, do more expensive events. The reality is that until you have that solid foundation, trying to scale too quickly just isn’t going to work – and it’s a quick way to run out of money, frankly.

Eric

I’d say – we talked about Joe and I working together in his basement for a while. We probably worked on and off in his basement and at my dining room table for about six months; I’d say we should have only needed about two months. We talked about taking the leap and how tough it is to take it. The opportunities out there are tremendous. Do I wish we had gotten started four months earlier? Only when I think about it, which is only every day – because you look at these growth models and they’re all compounding, right? If we had an extra four months in our pocket, you think about all the employees you could have hired. A lot of these things are timing-based, and we missed a few people; they decided to either re-up at their current companies or go to other startups or start their own companies. I think we knew early on that the opportunity was big, even if we weren’t sure exactly how we were going to pursue it, and we knew early on that we were able to work together, in part because we had a shared work history. I think we were lucky that we had friends on the outside encouraging us to take the leap.

Soma

You know, as entrepreneurs, when you think about the success of the company, in this case the success of Pulumi, there are all these sort of, what I call, financial and customer metrics that you can talk about. But for you guys personally, how would you define success? For you, Joe, and for you, Eric?

Joe

When we started the company, we knew we wanted to build a business, right? We want to build a great company, we want to build a business that, you know, could become profitable. The point of a business is, you know, you fund the business and eventually it actually makes money, right? And I think that, to me, is the outcome that we're seeking: we want to build a sustainable, long-term business. You know, a lot of tech companies especially build the company with the intent of selling it or having a quick exit, or you can optimize for different things. And for me, the most satisfying outcome is happy customers, happy end users, a sustainable business, because then, you know, we can branch out into other adjacencies that are in this fast-growing phase. Cloud is here to stay; this transition that's changing the whole industry is just getting started. I really want us to have a business that can be here now and solve some problems, but also solve the problems in five years and ten years that are going to be even bigger than they are today.

Eric

Yeah, I mean, it's kind of a big question, right? Even before we got started, we were actually talking about Clay Christensen just passing away, and he's actually got a great essay in Harvard Business Review on how you define success. Some of the measures are business metrics, some of them are personal metrics, you know, being a great family member. And then his last one is "how do I stay out of jail," which is really, you know, an ethical consideration. I think on the business side, we wanted to build something of enduring value that really made a contribution to the industry in terms of helping people write cloud applications. When you look at, you know, the opportunity still out there for the cloud applications that are going to be deployed, that are going to change our lives in fundamental ways, if we can be a small part of that, that's a tremendous contribution. You know, we wanted to create a company with a culture that people enjoyed working at, where people felt that their work was rewarded and that they were respected. One of the benefits of starting your own company is you get to create your own culture, and I think that was a tremendous opportunity. You get to pick the people that you hire, what rules you employ. I think we firmly believe in the no-assholes rule. You know, these are people that you're going to be spending lots of time with on a regular basis. I think we're off to a great start with Pulumi. We've got a great core of folks, we've got a great community, a great set of customers, a great set of values. And, you know, the opportunity before us is tremendous. That's the personal front, and on that one I can assure you we will stay out of jail. And then on the business side, it really is, you know, creating a great company that creates great products and fosters a great environment for employees.

Soma

If you think about, say, the next three years, or five years, or even ten years, what are your aspirations for Pulumi?

Joe

I mean, for me, I think of every developer on the planet being empowered to use the best of what the cloud has to offer to build more innovative software for their company, for whatever they're doing, really just supercharging their ability to innovate using all these great cloud capabilities that today, frankly, are off limits for many of them. So to me, in three years, five years, however long it takes to get to that level of scale, every developer, when they think "I'm building an application," thinks, "Hey, Pulumi is the way to do that," because it's the best way to build this application and achieve the intended outcome. And I think honestly, as more people become developers, you'll see more hobbyists becoming developers. Frankly, folks that are growing up now, a lot more of them are learning how to program very early, and so there are going to be a lot more developers; the developer segment is continuing to grow. Folks that don't consider themselves developers today will uplevel their skills and learn how to write code and become developers. And so really tapping into that, I think, is the opportunity for us.

Eric

Yeah, I think when I walk into one of our customers' shops, I want to see every cloud engineer in that environment with Pulumi open and running on their desktop, helping them get their job done faster, helping them build better applications, really helping our customers be more productive.

Soma

You had the choice to base the company, Pulumi, wherever you wanted, and you chose to build it in Seattle. How do you see Seattle emerging as a technology, innovation and startup ecosystem, both over the years and, more importantly, in the future?

Eric

Yeah, I mean, I think we chose Seattle mostly for the weather.

Eric

I think we’re on day 45 now of straight rain.

Joe

Great for productivity, by the way. You don't have the sun distracting you, calling you outside.

Eric

You know, the thing about Pulumi is I think we recognized the opportunity very early on for cloud applications in particular, and we sort of touch on cloud infrastructure in many ways. And Seattle has an incredible community of cloud talent. I mean, obviously, all of the big three cloud giants have strong representation here, in addition to the other tech companies. Seattle is a great place to live; it's a great quality of life. I think that's what brought us here in the first place. The community is, you know, continuing to grow in its maturity. It's one of these things in the industry where you sort of overestimate what will happen in one year and underestimate what will happen in three years or five years. When you look back on the tech community five or ten years ago, you might argue it was a little bit sleepy. It's vibrant now. You know, we're literally taping this in a tech incubator. There are bunches of interesting companies that we meet every day, whether at meetups or, you know, in our own headquarters, coaching them on how to use Pulumi. The talent pool is great, the people are great, the culture is great. And at the same time, I think it was important when we started Pulumi to embrace the modern work style. So we do have employees that work remotely. Even though our headquarters is here, we're very much a modern, internet-style company, with employees in California, Utah and New York, and we now have employees abroad, in Amsterdam, London and Bath. And so part of it is embracing the worldwide culture, even though Seattle is our home and hub. And I think we've really managed to get the best of the local community and the worldwide community and fuse those things together. But we love being in Seattle, we love the talent and the people in Seattle. The university anchor is fantastic, the big-company tech anchors are fantastic. It's kind of a built-in captive audience for us, customers that are easy to see, and for building the community of Pulumi users as well.

Soma

I really enjoyed this conversation. Thank you both for sharing your thoughts and experiences in this podcast. See you again.

Joe

Thanks Soma.

Eric

Thanks a lot.

Erika

Thanks for listening to Founded and Funded. If you liked this episode, please subscribe. We'll have some interesting episodes coming up with CEOs and experts talking about managing through this downturn and difficult adjustment period for many companies, businesses and employees. Stay tuned for those.

The Remaking of Enterprise Infrastructure – Investment Themes For Next Generation Cloud

Enterprise infrastructure has been one of the foundational investment themes here at Madrona since the inception of the firm. From the likes of Isilon to Qumulo, Igneous and Tier 3, and more recently Heptio, Snowflake and Datacoral, we have been fortunate to partner with world-class founders who have reinvented and redefined enterprise infrastructure.

For the past several years, with enterprises rapidly adopting cloud and open source software, we have primarily focused on cloud-native technologies and developer-focused services that have enabled the move to cloud. We invested in categories like containerization, orchestration, and CI/CD that have now considerably matured. Looking ahead, with cloud adoption entering the middle innings but with technologies such as Machine Learning truly coming into play and cloud native innovation continuing at a dizzying pace, we believe that enterprise infrastructure is going to get reinvented yet again. Infrastructure, as we know it today, will look very different in the next decade. It will become much more application-centric, abstracted – maybe even fully automated – with specialized hardware often available to address the needs of next-generation applications.

As we wrote in our recent post describing Madrona's overall investment themes for 2019, this continued evolution of next-generation cloud infrastructure remains the foundational layer of the innovation stack against which we primarily invest. In this piece, we go deeper into the categories in which we see ourselves spending the most time, energy and dollars over the next several years. While these categories are arranged primarily from a technology trend standpoint (as illustrated in the graphic above), they also align with where we anticipate the greatest customer needs for cost, performance, agility, simplification, usability, and enterprise-ready features.

Management of cloud-native applications across hybrid infrastructure

2018 was undeniably the year of “hybrid cloud.” AWS announced Outposts, Google released GKE On-Prem and Microsoft beefed up Azure Stack (first announced in late 2017). The top cloud providers officially recognized that not every workload will move to the cloud and that the cloud will need to go to those workloads. However, while not all computing will move to public clouds, we firmly believe that all computing will eventually follow a cloud model, offering automation, portability and reliability at scale across public clouds, on-prem and every hybrid variation in between.

In this "hybrid cloud forever" world, businesses want more than just the ability to move workloads between environments. They want consistent experiences so that they can develop their applications once and run them anywhere with complete visibility, security and reliability, and have a single playbook for all environments.

This leads to opportunities in the following areas:

  • Monitoring and observability: As more and more cloud-native applications are deployed in hybrid environments, enterprises will demand complete monitoring and observability to know exactly how their applications are running. The key will be to offer a “single pane of glass” (complete with management) across multiple clouds and hybrid environments, thereby building a moat against the “consoles” offered by each public cloud provider. More importantly, the next-generation monitoring tools will need to be intelligent in applying Machine Learning to monitor and detect – potentially even remediate – error conditions for applications running across complex, distributed and diverse infrastructures.
  • SRE for the masses: According to Joe Beda, the co-founder of Heptio, “DevOps is a cultural shift whereby developers are aware of how their applications are run in a production environment and the operations folks are aware and empowered to know how the application works so that they can actively play a part in making the application more reliable.” The “operations” side of the equation is best exemplified by Google’s highly trained (and compensated) Site Reliability Engineers (SRE’s). As cloud adoption further matures, we believe that other enterprises will begin to embrace the SRE model but will be unable to attract or retain Google SRE level talent. Thus, there will be a need for tools that simplify and automate this role and help enterprise IT teams become Google-like operators with the performance, scalability and availability demanded by enterprise applications.
  • Security, compliance and policy management: Cloud, where enterprises lose total control over the underlying infrastructure, places unique security demands on cloud-native applications. Security ceases to be an afterthought – it now must be designed into applications from the beginning, and applications must be operated with the security posture front and center. This has created a new category of cloud native security companies that are continuing to grow. Current examples include portfolio company, Tigera, which has become the leader in network security for Kubernetes environments, and container security companies like Aqua, StackRox and Twistlock. In addition, data management and compliance – not just for data at rest but also for data in motion between distributed services and infrastructures – create a major pain point for CIOs and CSOs. Integris addresses the significant associated privacy considerations, partly fueled by GDPR and its clones. The holy grail is to analyze data without compromising privacy. Technologies such as security enclaves and blockchains are also enabling interesting opportunities in this space and we expect to see more.
  • Microservices management and service mesh: With applications increasingly becoming distributed, open source projects such as Istio (Google) and Envoy (Lyft) have emerged to help address the great need to efficiently connect and discover microservices. While Envoy has seen relatively wide adoption, it has acted predominantly as an enabler for other services and businesses such as monitoring and security. With next-generation applications expected to leverage the best-in-class services, regardless of which cloud/on-prem/hybrid infrastructure they are run on, we see an opportunity to provide a uniform way to connect, secure, manage and discover microservices (run in a hybrid environment).
  • Streams processing: Customers are awash in data and events from across these hybrid environments including data from server logs, network wire data, sensors and IoT devices. Modern applications need to be able to handle the breadth and volume of data efficiently while delivering new real time capabilities. The area of streams processing is one of the most important areas of the application stack enabling developers to unlock the value in these sources of data in real time. We see fragmentation in the market across various approaches (Flink, Spark, Storm, Heron, etc.) and an opportunity for convergence. We will continue to watch this area to understand whether a differentiated company could be created.

Abstraction and automation of infrastructure

While containerization and all of the other CNCF projects promised simplification of dev and ops, the reality has turned out to be quite different. In order to develop, deploy and manage a distributed application today, both dev and ops teams need to be experts in a myriad of different tools, all the way from version control, orchestration systems, CI/CD tools, databases, to monitoring, security, etc. The increasingly crowded CNCF roadmap is a good reflection of that growing complexity. CNCF’s flagship conference, Kubecon, was hosted in Seattle in December and illustrated both the interest in cloud native technologies (attendees grew 8x since 2016 to over 8,000) as well as the need for increased usability, scalability, and help moving from experimentation to production. As a result, in the next few years, we anticipate that an opposite trend will take effect. We expect infrastructure to become far more “abstracted,” allowing developers to focus on code and letting the “machine” take care of all the nitty gritty of running infrastructure at scale. Specifically, we think opportunities are becoming available in the following areas:

  • Serverless becomes mainstream: For far too long, applications (and thereby developers) have remained captive to the legacy infrastructure stack, in which applications were designed to conform to the infrastructure and not the other way around. Serverless, first introduced by AWS Lambda, broke that mold. It allowed developers to run applications without having to worry about infrastructure and to combine their own code with best-in-class services from others. While this has created a different concern for enterprises – applications architected to use Lambda can be difficult to port elsewhere – the benefits of serverless, in particular rapid product experimentation and cost, will compel a significant portion of cloud workloads to adopt it. We firmly believe that we are at the very beginning of serverless adoption, and we expect to see a lot more opportunities in this space to further facilitate serverless apps across infrastructure, similar to Serverless.com (a toolkit for building serverless apps on any platform) and IOpipe (monitoring for serverless apps). A minimal sketch of the developer-side model follows this list.
  • Infrastructure backend as code: The complexity of building distributed applications often far exceeds the complexity of the app's core design and wastes valuable development time and budget. For every app a developer wants to build, s/he ends up writing the same low-level distributed systems code again and again. We believe that will change and that the distributed systems backend will be automatically created and optimized for each app. Companies like Pulumi and projects like Dark are already great examples of this need; see the Pulumi sketch after this list.
  • Fully autonomous infrastructure: Automating the management of systems has been the holy grail since the advent of enterprise computing. However, with the availability of "infinite" compute (in the cloud), telemetry data, and mature ML/AI technology, we anticipate significant progress towards the vision of fully autonomous infrastructure. Even in the case of cloud services, many complex configuration and management choices need to be made to optimize the performance and costs of several infrastructure categories. These choices range from capacity management in a broad range of workloads to more complex decisions in specific workloads such as databases. In databases, for example, there has been some very promising research on applying machine learning to everything from basic configuration to index maintenance. We believe there are exciting capabilities to be built and potentially new companies to be grown in this area.
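To make the serverless point above concrete, here is a minimal sketch of what the developer-side model looks like, assuming a TypeScript AWS Lambda function fronted by API Gateway. The handler and its greeting logic are purely illustrative and not taken from any company mentioned in this post.

```typescript
// Hypothetical HTTP-triggered function: the cloud provider provisions,
// scales and bills per invocation; the developer ships only this handler.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Read an optional ?name= query parameter and return a JSON response.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```

There is no server, container or autoscaling group to manage here; the trade-off, as noted above, is that the event shape and deployment model are provider-specific, which is where the portability concern comes from.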
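As a companion, here is a minimal sketch of the "infrastructure backend as code" idea using Pulumi's TypeScript SDK. The specific resources (a bucket and a queue) and their names are illustrative assumptions, not a prescription from this post.

```typescript
// Minimal Pulumi program: declare cloud resources as typed objects and let
// the engine compute and apply the necessary infrastructure changes.
import * as aws from "@pulumi/aws";

// Illustrative resources: a bucket for static assets and a queue for
// background jobs.
const assets = new aws.s3.Bucket("app-assets");
const jobs = new aws.sqs.Queue("background-jobs");

// Stack outputs that other tools or stacks can consume.
export const assetsBucketName = assets.id;
export const jobsQueueUrl = jobs.url;
```

Running `pulumi up` against a program like this previews and then applies the changes, which is the sense in which the low-level provisioning plumbing moves out of the application code and into the platform.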

Specialized infrastructure

Finally, we believe that specialized infrastructure will make a comeback to keep up with the demands of next-generation application workloads. We expect to see that in both hardware and software.

  • Specialized hardware: While ML workloads continue to proliferate and general-purpose CPUs (and even GPUs) struggle to keep up, new specialized hardware has arrived from Google’s TPUs to Amazon’s new Inferentia chips in the cloud. Microsoft Azure also now offers FPGA-based acceleration for ML workloads while AWS offers FPGA accelerators that other companies can build upon – a notable example being the FPGA-based genomics acceleration built by Edico Genome. While we are unlikely to invest in a pure hardware company, we do believe that the availability of specialized hardware in the cloud will enable a variety of new investable applications involving rich media, medical imaging, genomic information, etc. that were not possible until recently.
  • Hardware-optimized software: With ML coming to every edge device – sensors, cameras, cars, robots, etc. – we believe that there is an enormous opportunity to optimize and run models on hardware endpoints with constrained compute, power and/or bandwidth. Xnor.ai, for example, optimizes ML models to run on resource-constrained edge devices. More broadly, we envision opportunities for software-defined hardware and open source hardware designs (such as RISC-V) that enable hardware to be rapidly configured specifically for various applications.

Open Source Everywhere

For every trend in enterprise infrastructure, we believe that open source will continue to be the predominant delivery and license mechanism. The associated business model will most likely include a proprietary enterprise product built around an open core, or a hosted service where the provider runs the open source as a service and charges for usage.

Our own yardstick for investing in open source-based companies remains the same. We look for companies based around projects that can make a single developer look like a "hero" by making her/him successful at some important task. We expect the developer mindshare for a given open source project to be reflected in metrics such as GitHub stars, growth in monthly downloads, etc. (a small sketch of pulling these signals appears below). A successful business then can be created around that open source project to provide the capabilities that a team of developers, and eventually an enterprise, would need and pay for.
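For illustration only, here is a small sketch of how one might pull two of these public mindshare signals programmatically. The repository and package names in the example are placeholders, and the exact signals any investor weighs will of course vary.

```typescript
// Fetch two public signals of open source mindshare: GitHub stars and
// npm downloads over the last month. Requires a runtime with a global
// fetch, such as Node 18+ or a browser.
async function fetchMindshareSignals(repo: string, npmPackage: string) {
  // GitHub REST API: repository metadata includes stargazers_count.
  const repoRes = await fetch(`https://api.github.com/repos/${repo}`);
  const repoJson = (await repoRes.json()) as { stargazers_count: number };

  // npm downloads API: point value for the last month for a package.
  const dlRes = await fetch(
    `https://api.npmjs.org/downloads/point/last-month/${npmPackage}`
  );
  const dlJson = (await dlRes.json()) as { downloads: number };

  return {
    stars: repoJson.stargazers_count,
    monthlyDownloads: dlJson.downloads,
  };
}

// Example with placeholder names; tracking these values over time gives a
// rough growth signal for a project.
// fetchMindshareSignals("pulumi/pulumi", "@pulumi/pulumi").then(console.log);
```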

Conclusion

These categories are the “blueprints” we have in our minds as we look for the next billion-dollar business in the enterprise infrastructure category. Those blueprints, however, are by no means exhaustive. The best founders always surprise us by their ability to look ahead and predict where the world is going, before anyone else does. So, while this post describes some of the infrastructure themes we are interested in at Madrona, we are not exclusively thesis-driven. We are primarily founder driven; but we also believe that having a thoughtful point of view about the trends driving the industry – while being humble, curious and open-minded about opportunities we have not thought as deeply about – will enable us to partner with and help the next generation of successful entrepreneurs. So, if you have further thoughts on these themes, or especially are thinking about building a new company in any of these areas, please reach out to us!

Current or previous Madrona Venture Group portfolio companies mentioned in this blog post: Datacoral, Heptio, Igneous, Integris, IOpipe, Isilon, Pulumi, Qumulo, Snowflake, Tier 3, Tigera and Xnor.ai