Putting the Tech in Biotech: Why Now is the Time to Build Tech-Enabled Life Science Companies

A-Alpha Bio Co-founders David Younger and Randolph Lopez.

Over the last six years, Madrona has been investing in founders building companies at the intersections of life (biological and chemical) and data sciences – something we call the “Intersections of Innovation,” or “IoI” for short. Companies that integrate increasingly high-throughput, complex life science datasets with abundant and pervasive computational and data science resources are becoming a core part of Madrona’s investment focus. While we have written extensively about our “Intersections of Innovation” portfolio companies, it is time to share more about the overarching themes guiding our investments in this space.

Applied machine and deep learning and the life sciences are coming together to transform how the world understands and improves human life and health. We believe the intersection of these previously separate fields will define breakthroughs in therapeutic research, diagnostics, clinical processes, preventions, and cures. And we are partnering with world-class teams focused on building wet/dry lab platforms and novel datasets to disrupt traditional drug discovery processes and mindsets while helping to solve the next generation of challenges in biology across areas such as proteomics, gene editing, synthetic biology, and more.

Why now

Madrona has been at the vanguard of myriad technological paradigm shifts over the last 25+ years — from cloud computing to the explosion of open-source software to the emergence of intelligent applications. We believe we are on the cusp of such a shift in the life sciences. The status quo in life science is no longer enough. Today, drug discovery is a slow, painful process. Technologies to profile individual patients with unique “-omic signatures” can now deliver on the promise of personalized healthcare, but the data analysis and infrastructure to deliver personalized insights lag behind. On the therapeutic side, it can take $1-3 billion and more than 10 years to take a drug to market — and more than 90% of drug candidates fail along the way. That is not the way forward. Drug development can be accelerated through better target selection, low-latency and biologically relevant screening, targeted clinical trials, and pharmacogenomic drug selection.

In recent years, biology, chemistry, and data modeling advancements have begun enabling teams to discover, validate, and advance science faster than ever before, but the sheer scale of data generated has become a bottleneck. Key innovations in lab automation, microfluidics, and translational models, for example, have enabled much faster throughput for wet lab experimentation. Similarly, the technological revolutions in single-cell and spatial biology are generating complex datasets at an unprecedented rate. These wet lab advancements create acute pain points that data science and computational approaches are uniquely qualified to solve: too much data, too fast. Only by applying machine learning to these biological data problems will the industry be able to leverage the appropriate speed and scale to realize the promise of personalized medicine, yielding a new standard of care — one designed for an individual.

What we’re excited about

We are beginning to see the impact of sophisticated machine learning models on crucial biological processes that are revolutionizing drug design, but the stage is set for so much more. With digitized data flowing faster than ever, innovative teams can use modern cloud computing resources to process, store, and analyze that data. They can then use this data to train a machine learning model to start predicting behavior, such as a protein’s binding affinity to another molecule or its structure-function relationship. That then enables more targeted wet lab experimentation that, in turn, improves the results and the whole discovery process. This creates an iterative loop that improves both the traditionally manual wet lab process AND refines the ML model, accelerating drug discovery by orders of magnitude.
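The iterative loop described above can be sketched in highly simplified form as an active-learning cycle. Everything below is a toy stand-in invented for illustration: `wet_lab_assay` plays the role of a physical binding experiment and `AffinityModel` stands in for a learned predictor; a real platform would pair an actual assay with a far richer model.

```python
import random

random.seed(0)

def wet_lab_assay(x):
    """Hypothetical ground-truth 'binding affinity' for a candidate with feature x."""
    return 3.0 * x - 1.0

class AffinityModel:
    """Toy 1-D linear model fit by least squares on the candidates measured so far."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0

    def fit(self, xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs)
        self.w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
        self.b = my - self.w * mx

    def predict(self, x):
        return self.w * x + self.b

pool = [i / 10 for i in range(100)]   # the candidate library to screen
measured = random.sample(pool, 5)     # initial wet-lab batch
labels = [wet_lab_assay(x) for x in measured]

model = AffinityModel()
for _ in range(4):  # iterate: train model -> rank candidates -> run next assay batch
    model.fit(measured, labels)
    remaining = [x for x in pool if x not in measured]
    # spend the next wet-lab batch on the candidates the model currently ranks highest
    batch = sorted(remaining, key=model.predict, reverse=True)[:5]
    measured += batch
    labels += [wet_lab_assay(x) for x in batch]

print(round(model.w, 2), round(model.b, 2))  # the model recovers the assay's trend
```

The design point of the loop is that each round of (expensive, slow) wet lab work is targeted by the model, and each round of results retrains the model, so the experiments and the predictor improve each other.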

ML will transform the life sciences

As the fields of machine and deep learning improve, the once brittle and simplistic computational models of biological systems are becoming more sophisticated and biologically relevant. Running these processes at scale enables companies to generate novel datasets that rapidly improve the prioritization of which candidates against biological targets of interest are worth advancing to additional research or clinical trials.

For example, A-Alpha Bio, one of our portfolio companies, screens and analyzes hundreds of millions of protein interactions and then uses that data to computationally predict binding behavior. This process previously required scientists to screen two proteins in a single experiment and attempt to measure whether they bind to each other or not — possibly having to go through thousands of one-to-one experiments to find a match. A-Alpha’s ability to screen proteins at high throughput in silico (computationally) and in the wet lab has the potential to reduce the time, cost, and risks of the drug-discovery journey. This new, data-rich computational model paired with innovative wet lab screening can advance drug discovery in diverse fields ranging from antibodies to small-molecule molecular glues. Computational models are not limited to drug discovery. Opportunities abound to deliver on the promise of next-generation human health in clinical trials, companion diagnostics, consumer health, and more.

Why the Pacific Northwest?

The next generation of intersections of innovation companies need talent that spans both the life science and computer/data science worlds. As the home of world-class technical talent in both areas, the Pacific Northwest is an extremely exciting place for entrepreneurs and a place we’re proud to call home.

Seattle is home to incredible talent in the life sciences ecosystem. The University of Washington hosts the world-renowned Institute for Protein Design (IPD), the Institute of Stem Cell and Regenerative Medicine, and the Molecular Information Systems Lab. Additionally, the Fred Hutchinson Cancer Center, a research and clinical facility recognized as one of the best in the country, has more than 100 labs under its roof and is home to multiple cutting-edge research efforts in machine learning.

Scientists driving innovation in their wet labs is not new, but the speed of discoveries that data science techniques introduce means that, as David Baker, who leads the IPD, articulated recently, “brilliant students … used to all want to leave the lab and become professors…now, they all want to start companies.” The combination of science, data science, and personal ambition to turn academic work into something that touches lives creates opportunities to build companies – providing researchers an opportunity to be at the forefront of both discovery and implementation.

On the computational side, the Seattle area is home to Amazon and Microsoft and tech hubs for Google, Meta, and many others. We’re in the cloud capital of the world, which produces no shortage of new companies applying machine learning in innovative ways and no shortage of tech talent. The Allen Institute for AI also attracts technologists and scientists looking to build companies, providing them access to researchers in artificial intelligence working at the top of their fields.

We’re seeing more technologists and scientists working across disciplines to be a part of an early movement of applying various engineering disciplines to the life sciences. We’re seeing software engineers increasingly drawn beyond the traditional tech industry and into the life sciences by way of the real opportunity to make impactful improvements in human lives and health outcomes.

Why Madrona?

Madrona has invested in ten intersections of innovation companies since our first investment in 2016 in Accolade. Several years later, we invested in Nautilus Biotechnology and began actively leaning into the intersection of life and data sciences. Today, our portfolio companies like Nautilus, A-Alpha Bio, Ozette, Terray Therapeutics, Envisagenics, and others are paving the way for the discovery of new disease treatments through the intersections of these disciplines. But, in concert with the field, we are just getting started.

Madrona is in the unique position of living in both the tech and life science worlds. We’ve spent 25 years working with SaaS, cloud computing, and intelligent application companies. We understand machine learning and how it applies to cutting-edge science. We apply that experience to help life science companies innovatively combine wet and dry lab work and transform how the world understands and improves human life.

There is enormous potential in this space, and we are excited to back teams applying machine learning to the life sciences in innovative ways that unlock novel insights and solve problems across the life science and healthcare spectrum. We are always looking for eager entrepreneurs who embrace innovative new ideas – if that is you, please reach out to Matt McIlwain, Chris Picardo, or Joe Horsman.

Advice From 5 IA40 Leaders

When setting out to found a company, advice is never hard to come by. Mentors, advisers, professors, investors, Meetup groups, entrepreneur networks – there is no shortage of places to go to seek advice. The problem is that advice can be as varied as the types of companies one can found.

We polled the founders of five companies from our inaugural IA40 list to bring together some of the best advice they have from hard lessons they’ve learned throughout their journey or the best advice they received during that time.

Cristobal Valenzuela, CEO of Runway, one of the Intelligent Application 40 winners
Listen to our podcast with Cristobal here.

Cristobal Valenzuela is the Co-founder and CEO of Runway, which offers web-based video editing tools that utilize machine learning to automate what used to take video editors hours if not days to accomplish. He says he learned a lot when he was just starting out — spinning his company out of his thesis project at NYU. Still, he says the most important thing to always keep in mind is your rate of learning — entrepreneurs should never stop learning.

“How fast you are learning as a company, as a team and as a product, how fast you are learning about your customers, how fast you are learning about the industry, about the competition, about the market, about technology. That rate of learning and how fast you can do something you’ve never done before, experiment, learn as much as possible, and then adapt is really, really, really, really important. It’s easy to get stuck and not be able to adapt. So, just have that mentality that you’re always learning. And then everything else will come.”

Justin Borgman, CEO of Starburst Data, one of the Intelligent Application 40 winners
Listen to Justin’s IA40 podcast here.

Justin Borgman is the Co-founder and CEO of Starburst, which provides a query and analytics engine to unlock the value of distributed data. Justin says the advice he gives any entrepreneur at any stage in their journey, but particularly to those just starting out, is to look inside themselves and consider whether they have the perseverance required because that is the single most important attribute to being an entrepreneur.

“You have to have a high pain threshold and a willingness to push through that pain because it is not for the faint of heart. It is not easy. I think some people are just built for that. They have the stubbornness, the drive to push through that when others get overwhelmed by it and bogged down.

One piece of advice I will share that I heard myself — I actually asked a now public company CEO founder, ‘Does this ever get easier?’ Because as you’re building, you always think, ‘Okay, at some point, it’s just going to get easy, right? Like I’m going to be relaxing on the beach, this thing’s going to run itself.’ And he said, ‘No, it’s just different kinds of hard.’ And that stuck with me because particularly as you scale, every new chapter has been a new challenge and in a totally different way. That’s part of what’s amazing about startups, I think, just from a personal growth perspective. You are always having to improve yourself and scale to the next level. And so that really stuck with me. It never gets easier, just different kinds of hard.”

Anoop Gupta, CEO of SeekOut, one of the Intelligent Application 40 winners
Listen to Anoop’s podcast here.

Anoop Gupta is Co-founder and CEO of SeekOut, which provides the talent platform companies use to find, hire, grow, and retain talent. Anoop spent much of his career at Microsoft, but as he’s transitioned into the world of entrepreneurship and helping others evolve in their own careers, he said he’s started to better understand the importance of setting a company culture – and how it needs to be foundational for any entrepreneur.

“Throughout my career, I have worked with incredible people and was lucky enough to be at a place with a culture that really invested in people. In a larger organization, you kind of take culture for granted — in the sense that it is already baked in. In starting SeekOut, my appreciation and conviction that people and culture are paramount has grown. Having the right people and creating a culture of gratitude, humility, and empathy is foundational to success. My advice for others starting their own companies is to be proactive about defining your culture and to stay true to that culture as you grow.”

Clem Delangue, CEO of Hugging Face, one of the Intelligent Application 40 winners
Listen to our podcast with Clem here.

Clem Delangue is Co-founder and CEO of Hugging Face, an AI community and platform for ML models and datasets, which just landed $100 million in financing this year. He thinks the beauty of entrepreneurship is owning one’s own uniqueness and building a company that plays to each entrepreneur’s individual strengths. He shared that his biggest learning during his early days was to always take things one step at a time.

“You don’t really know what’s going to happen in three years or five years. So just deal with the now. Take time to enjoy your journey and enjoy where you are now because when you look back at the first few years, at the time you may have felt like you were struggling, but at the end of the day, it was fun. Also, trust yourself as a founder. You’ll get millions of pieces of advice, usually conflicting. For me, it’s been good to learn to trust myself, to go with my gut and usually it pays off.”

Luis Ceze, CEO of OctoML, one of the Intelligent Application 40 winners
Listen to our podcast with Luis here.

Luis Ceze is the Co-founder and CEO of OctoML, an ML model deployment platform that automatically optimizes and deploys models into production on any cloud or edge hardware. Luis is an entrepreneur and tenured professor at the University of Washington. He said that as a professor, you can have impact by writing papers that people read and act on, and you can directly impact your students – what they go on to learn and research, maybe even becoming professors themselves. But getting into company building, where you actually put a product into the hands of a consumer, has been a new and exciting experience for him. One of the most important lessons he’s learned, he said, has been to surround himself with people he genuinely likes to work with because it creates a more supportive, trusting environment.

“People who are supported can count on the people around them and feel like there is a very trusting relationship with the folks they work closely with. I have no worries about showing weaknesses or always having to be right. I think it’s great when you say, ‘You know what, I was wrong, I’m going to fix it.’ It’s much better to admit when you’re wrong and fix it quickly than to insist on being right.”

SeekOut CEO Anoop Gupta and VP of People Jenny Armstrong-Owen on AI-powered talent solutions, developing talent, and maintaining culture

This week on Founded and Funded, we spotlight our next IA40 winner – SeekOut. Investor Ishani Ummat talks to SeekOut Co-founder and CEO Anoop Gupta and VP of People Jenny Armstrong-Owen about their AI-powered intelligence platform, the importance of not only finding and recruiting new hires but also developing and retaining employees within a company, and maintaining SeekOut’s own culture while seeing significant growth over the last year.

This transcript was automatically generated and edited for clarity.

Soma: Welcome to Founded and Funded. I’m Soma, Managing Director at Madrona Venture Group. And this week we are spotlighting one of our 2021 IA40 winners – SeekOut. Madrona Investor Ishani Ummat talks with CEO and Co-founder Anoop Gupta and their Head of People, Jenny Armstrong-Owen. SeekOut is one of our portfolio companies, so we were very honored that our panel of more than 50 judges selected them for our inaugural group of IA40 winners. SeekOut provides an AI-powered talent 360 platform to source, hire, develop, and retain talent while focusing on diversity, technical expertise, and other hard-to-find skill sets.

We led SeekOut’s Series A round of financing and have worked closely with the team since before then as they fine-tuned their initial product offering. The company has had massive success, and earlier this year they secured a $115 million Series C round to scale their go-to-market and build out their product roadmap, including powering solutions for internal talent mobility, employee retention, and the like – all topics that Anoop and Jenny will dive into with Ishani today. With that, let me hand it over to Ishani.

Ishani: Hi, everyone. I’m delighted to be here with Anoop Gupta, the CEO of SeekOut, and Jenny Armstrong-Owen, SeekOut’s head of people. SeekOut is building an AI-powered talent 360 platform for enterprise talent optimization and was selected as a top 40 intelligent application. We define intelligent applications as the next generation of applications that harness the power of machine intelligence to create a continuously improving experience for the end user and solve a business problem better than ever before. I’m so excited to dive in today with Anoop and Jenny. Thank you both so much for being here.

Anoop: Hey, Ishani, it’s wonderful to be here. Thank you for making time for us.

Jenny: Agreed. Thank you so much. It’s great to be here.

Ishani: So, I’d love to start out by going way back. Anoop, you were a professor of computer science for over 10 years and co-founded the virtual classroom project that was quickly acquired by Microsoft. In 2015, you left Microsoft to start the precursor to SeekOut. Tell us about what led you to the core talent problem that SeekOut is solving today.

Anoop: So, Ishani, when we left Microsoft, we left because, you know, Microsoft was just an absolutely fantastic place to innovate, but what Microsoft legitimately wants you to do is to get on an 18-wheeler and discover some big island, and we wanted to be on a mountain bike exploring opportunities because it’s such an exciting world out there. Given my background of running Skype and Exchange, the first thing we settled on was actually Nextio, which was a messaging application. The whole notion was that today people hide their email address and phone number because once you give it out, people can spam them. We were not being so successful there, so we built an application called Career Insights. What Career Insights was about is you analyze all the resumes in the world, and if you do that, then we can say, “Hey, if you are a UI designer at Microsoft, what are the next possibilities? Where are your peers going?” And if they were going to Facebook, we could tell you where the Facebook UI designers were leaving for and doing next. So, it became Career Pathways inside that. And we said, “Oh, this is so useful for recruiters and talent people,” so we pivoted there, and since then, our passion and our understanding of what is missing and what could be done better have led to the growth of SeekOut in talent acquisition and what we bring to the table.

Ishani: That’s so great. You found your way to the recruiting market, to the recruiter as an end customer, by beginning with this problem of career pathing and pathways. It’s only been amplified over the course of the last decade, and looking at this moment in time, it seems like very acute foresight.

Jenny, I’d love your perspective. This talent environment has evolved so much in the last few years, in ways that even Anoop and SeekOut could not have predicted, with the pandemic and everything like that. We all see and feel the Great Resignation and the ongoing talent war in the tech world. You’ve been in talent teams for 20 years — what elements of this were predictable, and what has taken you by surprise?

Jenny: Well, definitely what is very predictable is that the tech world continues to explode and grow. I read a statistic in the New York Times that tech unemployment is 1.7%, which is basically negative unemployment. So, that’s not a surprise. What was not predictable was COVID and the ability for folks to literally work from their homes. It released the boundaries around what was possible for folks, and I think that’s one of the biggest challenges for organizations. If you didn’t snap and adapt to that, you were not going to be able to meet your hiring goals.

One of the things that I love about being here at SeekOut is going and finding people wherever they are. For us, we’re not restricted to Bellevue or Seattle, Washington, and I think that’s one of the things, especially about our tool, that is so incredibly powerful. If you’re an organization that can embrace remote, that can actually make you so much better than restricting yourself geographically. That’s one of the things that I think has been a huge benefit for us. I think we’re embracing a new paradigm of relationships with employees, and it’s going to be a much more virtual relationship at times than a physical one.

Anoop: One of the things when we got into this is we said, “Hey, digital talent, technology talent, is really important,” and then COVID hit — as Satya said, “two years of transformation in two months,” right? So the accelerating rate of digital transformation, something we were already focusing on, really increased the value of what we’re doing. The second thing that’s happened over the last two years is the emphasis on diversity. A lot of young people are saying, “I don’t want to join a company if I don’t see that they are embracing diversity, inclusion, and belonging in a genuine, authentic way.” We believe a lot of talent exists. It begins with how you hire, how you understand what exists in talent pools, and then being able to find those people. The problem that leaders have — business leaders, talent leaders — is that they have good intentions, but translating those great intentions into concrete actions and results has been hard, and SeekOut really facilitates that.

Ishani: It’s such a good point on the market evolving: in some ways you are able to control it or react to it responsibly, and in other ways it is so out of your control. Sometimes tools like SeekOut can help you with that, and sometimes you have to build it internally. It’s a culture thing. It’s an intangible. But let’s talk a little bit about the tool you’ve actually built. The way I think of SeekOut is as a product that’s evolved a lot from a talent acquisition tool to a more 360-degree talent intelligence platform. But it didn’t start that way. Walk us through this journey from a talent acquisition tool to an intelligence platform.

Anoop: My Ph.D. thesis was on AI and systems. My co-founder Aravind came from building the Bing search engine. When you look at all of these areas, AI is just a core part of it. So, to use an analogy: when you go to Google and do a flight search for UA 236, it understands that you are doing a flight search, that UA is United Airlines, and that you’re probably looking for arrival or departure times, and therefore this is the relevant information. In a similar vein, SeekOut is a people search engine, so we need to understand a lot about people. When I search for Anoop Gupta, our search engine realizes that Anoop is a first name and Gupta is a last name — and that it is a common name in India, right? So, we can get a lot of information that helps us. Similarly, normalizing for universities and companies is really important. SeekOut is very special in that it brings data from many, many different sources and combines it together. Say we want to go after technical folks and technical talent, and I’m just using that as an example. You get GitHub, you see the profile on GitHub, and how does it match to the profile they might have on LinkedIn? Are they the same person? It takes AI to figure that out. Then you want to look at all the code and information that you find, and you say, what is their coder score? How good a coder are they? Do they know Python? Do they know C++? So, we started bringing those things inside of it, and all of those are inferred things. Take security clearance as an example. People don’t mention security clearance often, so we look at job descriptions from the last many years and ask: did the job description say this role requires security clearance, or top secret, or whatever? And if there are enough of these positions where that is required at that company, at that location, then we say you likely have security clearance.
So, AI is fundamentally baked into the product, but we also take the approach that while AI is everywhere, it is designed as a complement to the human, not as a substitute for the human recruiter or sourcer. That is an important principle for us. The human does what they are best at, and the AI and logic do what they are good at to help the human be more successful.

Ishani: We talk a lot about intelligent applications having a data strategy in order to augment workflows and solve a business problem better than ever before. All of what you described is so well steeped in that philosophy of pulling in data from a host of public sources and then being able to drive a better product and surface insights that matter. Customers love the search functionality as one of the core features of SeekOut: sitting on top of all that data, the search just works. Can you talk a little bit about how you handle and process all of this data to make it work like magic for a consumer?

Anoop: So one is, you’re very right. It’s actually a very hard problem when you have 800 million profiles and data coming from lots of sources, and the data is not static — people are changing jobs, people are changing things. It’s all dynamic data. So, how does one make it work, and make it very performant? My co-founder, again, was one of the movers and shakers behind the Bing search engine, and we come from that background: Googles and Bings have to handle very large amounts of data. So how do you construct the index structures? How do you do the entity resolution and combine things together? That is core to what we do. And then, on top of all of that big data, when you say, “Can you clone Jenny and find us someone similar?”, now that is an impossible task, because the way she does the job, her humor, and how nice a person she is are so hard to replicate, and then you have to do all of that matching, right? Or when you parse a PDF resume, how do you extract the skills? When you parse a PDF job description, how do you parse the requirements, and which are the must-have requirements versus the nice-to-have requirements? There are just infinite amounts of problems, and we keep tackling them one at a time.

Ishani: It seems like you also, though, have to be so semantically aware of the context, right? That’s exactly what you’re talking about with the job description. How do you parse out requirements versus any of the other components? And how do you parse out whether someone might have met those requirements? So much is evolving in this field of semantic awareness, semantic search, and natural language processing. What are the kinds of underlying models that you use? Have they really evolved in the last few years as we see some of the transformer models or CNNs start to make a step-change in technology?

Anoop: Our models are continuously evolving based on what the users are doing, how they’re using it, and what their needs are. We do a lot of building ourselves, but we also leverage third parties. We also have a notion of a power filter. So, think about synonyms: people who know JavaScript are a short distance away from TypeScript, right? Or for people who know machine learning, there are so many different kinds of words people use on GitHub, whether it’s Keras or TensorFlow or PyTorch, so how do you find the equivalencies? You can find some things through correlations or other algorithms: what makes sense and what does not make sense. So, Ishani, there are just a lot of different things that we are continuously doing. There are different kinds of algorithms and networks that get used for different types of natural language parsing. But I’ve always said, from when we were at Microsoft, that eventually it is the data that you have, because everybody publishes their algorithms, and if you have the right data, you can do so much more. It is the data, and then the intelligence on top, that I think is really important. You’ve got to have the right data, and then, of course, the right people and the algorithms to get to that intelligence.

Ishani: So, it really goes back to this concept of having a data strategy early and being able to be nimble in evolving the underlying technology and application intelligence. We always talk about garbage in, garbage out. So, really understanding where your data’s coming from, and semantically parsing and structuring it, lets you give your end user what we call magic.

Anoop: Yes. Yes. The problem with data is that data is not clean. So, knowing how to efficiently clean up that data, and using ML models to flag the extremes and exceptions worth looking at, becomes super important.

Ishani: So let’s zoom out a bit. We’ve talked about this briefly, but over the last two and a half years or so, work has changed so much. Hiring has become hard. Engaging with employees has never been more important than it is today. Retention is hard, and SeekOut is doing really well in part because of that macro tailwind. From a company growth perspective, how did you recognize and take advantage of that moment in time?

Anoop: Helping companies get a competitive advantage recruiting hard-to-find and diverse talent was a model for us from the very beginning. Then all these things happened, and we’ve grown 30X in revenue over the last three years, our valuation is 50X where it was three years ago, and we have very high net retention and amazing customers. But we hadn’t thought of everything. We were focused on talent acquisition, that is, how do we bring in external people? Then with COVID, the great reshuffle, and the great resignation, many companies like Peloton stopped hiring externally, and we said, what are the opportunities we can create for the people that are inside? So, our more recent focus on retention is really big. So, here’s the big story that we talk about. It is truly about the future of the enterprise. We believe winning companies are realizing that the growth of people and the organization are inextricably linked. So, our mission has broadened: to help great companies and their people dream bigger, perform better, and grow together. That’s the mission, and it’s a fundamental mission for every CEO and business leader, not just the HR leader. Then what we are doing is using technology to ensure that companies and talent are aligned and empowered and growing together. Or, put another way, we’re saying, “Hey, we’re going to help organizations thrive by helping them hire, retain, and develop great and diverse talent.”

Ishani: You know, SeekOut was really in the right place at the right time to take advantage of that moment, and actually really help people through that transition. But you must be experiencing this internally as well. You talked about 30X in terms of growth, but you have also tripled headcount in the last year, and I think you anticipate doing it again this year. How do you maintain culture in such a high-growth environment? Jenny, this is a question for you.

Jenny: It’s one of my favorite questions; I get it a lot in interviews. Culture has become probably the most important thing in a world where people are free agents, and they want to work at a place that aligns with their values and the way that they want to grow and develop with a company. So, I will share this. I was looking at a number of different companies, and I met Anoop, and in our first conversation, Anoop, I don’t know if you remember this, it was supposed to go for an hour and we went over 90 minutes. In that moment, I knew that this was different. This was a different place. The culture here really does emanate from Anoop, Aravind, John, and Vikas — the folks that started this company. From my perspective, our job is to make sure we don’t have cultural drift, because we don’t have to fix our culture. Our culture is phenomenal. Candidates across the board tell us they’ve never had a candidate experience like this before. Everybody they meet with is super kind and helpful and collaborative. So for us, it’s really keeping our eye on these cultural anchors and making sure that we’re staying true to them.

So, in the hiring process, we make sure that every single person who comes here has a diversity interview, where we talk about what is important to you in terms of diversity, belonging, equity, and inclusion. To Anoop’s point, people want to go where they feel like they’re going to belong. Diversity can thrive, and equity can thrive, but you have to have that sense of belonging first. So for us, it’s very much staying focused on that, and everything that we do is around driving programs and opportunities and conversations that reinforce it. In fact, I will admit, I suggested to Anoop early on that starting every Friday all-hands with 15 minutes of gratitude was not going to scale as we grow. We’re 150 people today, and I now admit that it is absolutely scalable, and we’re going to continue to do it, because it is by far the favorite meeting of the entire week: that moment we set aside to say nothing is more important for us right now than sharing our gratitude with each other. So, for us, I feel super fortunate to be at this intersection at a time when it is tough, right? Companies are struggling to keep their culture intact in a world in which everything’s shifting so quickly.

Ishani: That’s such a good point. It begins in the interview process, it continues in the onboarding process, and then it’s an everyday commitment to reinforcing your culture. Many companies do one of those elements really well, but it’s rare that you find somebody so committed to all of them.

Jenny: It starts with Anoop.

Anoop: So, you know, Jenny said it so well. It comes from a deep belief that people are the most foundational element of our success. We truly believe that for ourselves. I’ll give you an example in a story. We were looking for, I think, the CRO. We had an executive search firm, and they said, “Anoop, you seem to be open to meeting a lot of people. Are you sure you have enough time?” And I said, “I’m always there when it’s a people question. People are so important.” We have four OKRs now; these are the company goals. Our main goal is that our people, culture, and execution are our competitive advantage. I truly believe in that. It is not our AI knowledge. It is not that we are smarter. It is, as a company, who we bring in, how we think, how we execute, how we collaborate, how we decide to disagree yet find commitment, how we hold each other accountable, and how we are nice to each other.

We want to be the ones to show that nice people can win. Kind people, people with empathy, can win. You don’t have to be a jerk to get ahead. That is just a fundamental belief for us, and it has helped with our retention, helped with our recruitment, and helped with the energy and the whole selves that people bring to the company every day. I think that’s a huge part of our success.

Ishani: The recruiting example of the CRO is so interesting because it really does delineate that there is a real and important place for tools, but there’s certainly a line where that stops. It would be a little bit facetious, as a talent optimization platform, if you, Anoop, didn’t take the time to bring in your own talent and really make sure they fit the organization’s culture and ethos, and that they want to be where they are. So certainly there’s good continuity between SeekOut’s mission, SeekOut’s product, and how you operate.

But there is also a role for the talent optimization platform that you use. And presumably, you use SeekOut at SeekOut.

Anoop: So, you know, the other side of the story is this. Every exec search firm that I talk to gives me some candidates, and sometimes they are diverse, sometimes they’re not. I say, well, let me find you some women candidates, let me find you some, you know, Black candidates. They exist — you just don’t know; you need a better tool.

Ishani: It’s very clear that these tools are augmenting how people do their jobs, in ways that have never happened before. But it is augmentation, with learning, with intelligence, and with automation, and there are still very clear roles for people: how do you build, for example, a culture like Jenny described, and how do you maintain it? It also speaks to one of the product focus areas of SeekOut, which is retention, really retaining your talent and looking internally. Jenny, talk to us a little bit about some of the strategies that you use, whether or not they’re related to SeekOut’s product, to retain talent.

Jenny: Yeah, and thanks. I think it’s actually one of the reasons why, when I met with Anoop and he cast the vision for what SeekOut was going to be, I got so excited. As someone who has led people teams now for way too many years to admit, I think getting folks in the door, getting them hired, is absolutely critical and important. But growing, developing, and evolving as teams, with folks who are committed and engaged, that is the job, right? That is every day, all day: thinking about the people that we already have here. That’s one of the things about enterprise talent optimization, where we’re going: it’s going to revolutionize people teams. After so many years of not having really effective tools on people teams, we’re building a world in which the tools are going to be so complementary, and they’re going to free people teams and leaders up to do what they do best, which is developing people.

So, for example, yeah, we’re 150 people, and we’re going to be implementing a people success platform. We’re going to be making sure we’re touching base on the things that matter most to people, which is all about skill development, acquisition, and growth. That’s fundamentally why folks leave, especially in the tech world: because they want to do different things, or they want to be able to stretch and grow. One of the things that’s awesome about startups is you have an infinite ability to grow your people in whatever direction they want, because the opportunity is here. It’s one of the reasons why I stayed at my first tech company for so long; I was able to do and grow and be so many things. And that’s part of the value prop we talk to people about when we’re interviewing them: “Hey, we are interested in you for this, but guess what? The world is your oyster at SeekOut, and wherever your passion wants to take you, we are going to support that passion.”

Ishani: What you’re saying around giving people, the opportunity to grow is incredibly aligned with SeekOut, with the mission of the company. But also again, the product. It is also very hard to execute on. To say — we have a high-performing software engineer in our machine learning division who wants to go try out product management. Right? What are the tools that you used at SeekOut, and how do you actually execute on that?

Jenny: Well, I think we are still in our nascent stages. We started last year at 40 people; we’re now at 150 people. What I would say is: build the capability in leaders to be aware, to be having these conversations, and to be free enough to think beyond the roadmap and the things that are getting done today. I think you have to hold both things tightly and loosely at the same time, if that makes any sense, and it requires a high level of change management and org development skill. We have to build whole-brained leaders who can look at our people with both things in mind: executing on the deliverables that we have today, but fundamentally making sure you’re having this other conversation and driving it consistently, so that there’s never any dissonance. I think that’s the challenge. Creating too much space between those conversations, or not having those conversations at all, creates the dissonance, and that creates the drag and the drifting. So, for me, one of the things that we talk about a lot is: who do we have?

Anoop, I would love for you to give your kind of ETO summary, because I think what we’re going to be able to provide is so compelling. To your point, Ishani, I don’t have specific tools today. I mean, I can use my SeekOut tool, which is awesome, but we’re also small enough that we can do a lot of this one-on-one. But Anoop, I would love for you to add onto that.

Anoop: You know, the cost when a great employee leaves is almost 2X their annual salary, because it takes so much for the new person to come in and get up to speed, and meanwhile products are delayed, along with whatever else that function was driving. So that’s why it’s so critical, and that’s why people care about it a lot. One of the things I say is that companies are deluged with data. There’s data flowing out of everything, but when it comes to data about their people, companies don’t understand it: the data is siloed, or the data doesn’t exist. They may not have the external data; they may not have what people did before. And there is missing data. In a large company like Microsoft or VMware or Salesforce, your manager doesn’t know: where are the open jobs? What are the matching jobs? What are the skills? What does the path look like? So, the data about employees is missing, the data about opportunities is missing, and then how do you take opportunities and data and match them to people? We can tell you about career paths. If you’re going from software development to product management, we can point you to people who made that transition, who might be from the same school or the same gender, and you don’t have to talk to the hiring manager; you can talk to people below and ask, what is the culture of the team? Basically, we bring amazing data from outside. But then we take data from inside the company — this may come from management hierarchies, from Salesforce, from your developer systems and GitHub — and give you the most comprehensive picture. Then we engage with people. We really have two audiences. One audience is the employee, who, in a private and secure way, is mapping out their career and their learning, growth, and development journeys.
The second is the HR and business leaders who are saying: we’ve got to deliver, and there’s a strategy we want to execute. Do we have the right talent? How does my group compare to competitors? How does it grow across companies, and how do we optimize?

So, we are super excited about it. In every conversation we are having with CHROs and other leaders, there’s a lot of excitement about what’s possible and what SeekOut can do for them.

Ishani: So, SeekOut today is a really amazing example of an intelligent application for 360-degree talent optimization, covering not just the external component but also the internal one. This speaks so much to the environment and to how you have reacted and stayed nimble in creating offerings that people need. Without revealing too much, give us a peek into what the future holds for SeekOut.

Anoop: Future-wise, Ishani, in each of these broad areas that I’m talking about, there is immense depth. As we go deeper into it, there is a lot of work involved. So, if you look three to five years out, it’s about executing on even the components that we have talked about and becoming a star. I believe this is a new category. HR doesn’t even realize what is possible in terms of data, the insights they can have, and what they can do for their employees. So, there’s always a market shift and a mind shift involved, and people are the slowest to change in some sense. I think our journey is just making that happen, and if we do it right and we are the leaders, this is more than a hundred-billion-dollar company, I believe. So there’s lots of growth and possibility in this, because talent is central to organizations and their success.

Ishani: Anoop and Jenny, we tend to end these podcasts with a lightning round of questions. So, we’ll go quickly through three questions that we ask every company that comes on this podcast. The first for both of you, aside from your own, what startup or company are you most excited about that is an intelligent application?

Anoop: For me, I would say a company like Gong: basically, companies that give you intelligence about how your salespeople are doing, how you can be better, and what those calls contain, doing the natural language analysis and all of that. It’s a hot topic, so there could be more, but that’s top of mind for me, so let me just name that.

Jenny: I have an appreciation for Amperity and what they’ve been up to and what they’ve been doing. So that would be mine.

Ishani: Awesome. Both actually are also intelligent app top 40 companies. So, congratulations to Amperity and Gong. Outside of enabling and applying AI and ML to solve real-world challenges, what do you think will be the greatest source of technological innovation and disruption over the next five years?

Anoop: Certainly, machine learning and AI will have a huge impact. But I think it will be coupled with the fact that it works on lots of data. We are instrumenting everything: how the washing machine is being used, how your toaster is being used, how you’re driving. So, I think it’s the data and the machine learning together, but with the caveat that we make sure it is not biased. Every tool in humanity can be used for good, and it can be used for bad. But I think if we use these things intelligently, we can make a lot of good happen.

Jenny: Yeah, I would have to agree. I can’t say it any better than Anoop did. I would add making sure that technology is inclusive as well; I think that’s a huge area of focus and concern.

Ishani: I couldn’t agree more. Final question: what is the most important lesson, likely something you wish you did better, perhaps not, that you’ve learned over your startup journey?

Anoop: I will say, throughout my career, I always kind of knew people were important and culture was important; people would talk about it. But my appreciation and conviction that people and culture are the fundamentals and foundations of success has been a realization. If you had asked me this question five years ago, I would not have answered it this way. You kind of take culture for granted, or rather, in a larger organization it is already kind of baked for you. Here, there was the opportunity to define it ourselves, and then it just made so much sense that this is the thing to focus on.

Jenny: That’s awesome, Anoop. I love that. I would say that for me, learning that you can put people at the top of the pyramid and be very successful is something that makes me incredibly happy to be getting the chance to learn and experience.

Ishani: Anoop and Jenny, it’s been so great to talk to you today about SeekOut, but also about people and how important they are in the organization. SeekOut is a great tool that enables you to find, recruit, and hopefully retain the best people that are going to build your organization. Thank you so much for taking the time and it was a great chat.

Anoop: Thank you so much for having us. We really appreciate the time.

Thank you for listening to this week’s episode of Founded & Funded. Tune in in a couple of weeks for the next episode with UW’s robotics expert Sidd Srinivasa.


The Remaking of Enterprise Infrastructure – The Sequel

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.” – Bill Gates, The Road Ahead

Just over a year ago, we wrote about how enterprise infrastructure was being reimagined and remade, driven by the rapid adoption of cloud computing and rise of ML-driven applications. We had postulated that the biggest trends driving the next generation of enterprise software infrastructure would be (a) cloud-native applications across hybrid clouds, (b) abstraction/automation of infrastructure, (c) specialized hardware and hardware-optimized software, and (d) open source software.

Since then, we have witnessed several of those trends accelerating while others are taking longer to gain adoption. The COVID-19 pandemic over recent months, in particular, has arguably accelerated enterprises multiple years down evolutionary paths they were already on – digital transformation, move to the cloud for business agility, and a perimeter-less enterprise. Work and investments in these areas have moved from initiatives to imperatives, balanced with macroeconomic realities and the headwind of widespread spending cuts. Against that backdrop, today we again take stock of where next-generation infrastructure is headed, recapping which trends we feel are accelerating, which are emerging, and which are stalling – all through the lens of customer problems that create opportunities for entrepreneurs.

Next-generation enterprise infrastructure, as we show in the figure above, will be driven by major business needs including usability, control, simplification, and efficiency across increasingly diverse, hybrid environments and evolve along the four dimensions of (1) cloud native software services, (2) developer experiences, (3) AI/ML infrastructure, and (4) vertical specific infrastructure. We dive into these four areas, and their respective components, in the rather lengthy post below. We hope some of you will read the whole thing, and others can jump to their area of interest!

As we have noted in the past, below are a few “blueprints” as we look for the next billion-dollar company in enterprise infrastructure. As we continue to meet with amazing founders who surprise and challenge us with their unique insights and bold visions, we continue to refine and recalibrate our thinking. What are we overlooking? What do you disagree with? Where are we early? Where are we late? We’d love to hear your thoughts. Let’s keep the dialogue going!

Cloud Native Software and Services

Cloud native technologies and applications continue to be the biggest innovation wave in enterprise infrastructure and will remain so for the foreseeable future. As 451 Research and others point out, “… the journey to cloud-native has been embraced widely as a strategic imperative. The C-suite points to cloud-native as a weapon it will bring to the fight against variables such as uncertainty and rapidly changing market conditions. This viewpoint was born prior to COVID-19 – which brings all those variables in spades. As this crisis passes, and those who survive plan for the next global pandemic, there are many important reasons to include cloud-native at the core of IT readiness.”[1]

However, enterprises that have begun to adopt technologies such as containers, Kubernetes, and microservices are quickly confronted with a new wave of complexity that few engineers in their organization are equipped to tackle. This is producing a second wave of opportunity to ease this adoption path.

Hybrid and Multi-cloud Management

We highlighted last year that we are now in a “hybrid cloud forever” world. Whether workloads run in a hyperscale public cloud region or on-premises, enterprises will adopt a “cloud model” for how they manage these applications and infrastructure. We are seeing the forces driving such multi-site and multi-cloud operations continue to accelerate. While AWS remains the leader, both Azure and Google are adding new data centers around the world and expanding support for on-premises applications. Azure has gained significant ground with a growing number of production-ready services, and Google has invested heavily in expanding its enterprise sales and service capabilities while continuing to offer best-in-class ML services for areas such as vision, speech, and TensorFlow. Azure and Google continue to close the gaps and are often preferable to AWS in situations where enterprises must comply with regulatory and compliance directives for data residency and need to account for possible changes in strategic direction that may require migrating their applications to different cloud providers.

These compliance and data residency considerations are leading organizations to invest in skills and tools for building applications that are easily portable, which improves deployment agility and reduces the risk of vendor lock-in. This creates new sets of challenges in operating applications reliably across varying cloud environments and in ensuring security, governance, and compliance with internal and external policies. In 2019, we invested in MontyCloud which helps companies address the Day 2 operational complexities of multi-cloud environments. We continue to see more opportunities in hybrid and multi-cloud management as regulatory guidelines continue to evolve and organizations emerge from the early stages of executing the shift.

Automated Infrastructure

Automated infrastructure management has been a key enabler for organizations that need to operate in varying cloud and on-premises environments. As containers have gone mainstream, container orchestration with Kubernetes is becoming the most common enterprise choice for operating complex applications. Combining version-controlled configuration and deployment files with operational stability based on control loops has enabled teams to simultaneously embrace DevOps and automation while building applications that are portable across on-premises and multi-cloud environments. We invested in Pulumi, which allows organizations to use their programming language of choice in place of Kubernetes YAML files or other domain-specific languages, further enabling a unified programming interface with the same development workflows and automated pipelines that development teams already know.
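The control-loop idea mentioned above can be shown in a few lines. This is a toy sketch of the reconciliation pattern, not any real orchestrator's API; the resource names and replica counts are invented for illustration.

```python
# A minimal sketch of the "control loop" pattern behind Kubernetes-style
# automation: compare declared (desired) state with observed (actual) state
# and compute the actions needed to reconcile them.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions that move `actual` toward `desired`.

    Both dicts map a resource name to its replica count.
    """
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))
    return actions

# One iteration of the loop: version-controlled config on the left,
# what the cluster currently runs on the right.
desired = {"web": 3, "worker": 2}
actual = {"web": 1, "worker": 2, "stale-job": 1}
print(reconcile(desired, actual))
# → [('scale_up', 'web', 2), ('delete', 'stale-job', 1)]
```

A real controller would run this comparison continuously and apply the resulting actions, which is what gives declarative infrastructure its self-healing quality.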

Machine Learning continues to promise automation of capacity management, failover and resiliency, security, and cost optimization. We see further innovation in ML-powered automation services that will allow developers to focus on applications rather than infrastructure monitoring while enabling IT organizations to identify vulnerabilities, isolate insecure resources, improve SLAs, and optimize costs. While we are already seeing technologies such as autonomous databases offer the promise of automating index maintenance, performance, and cost tuning, we have yet to see wider innovation in this space. We expect some of these capabilities to be natively offered by the public cloud providers. The opportunity for startups will be to offer a solution that leverages unique data from varying sources, delivering effective controls and mitigation, and supporting multi-cloud and on-premises environments.


Serverless remains at the leading edge of automated infrastructure, where developers can focus on business logic without having to automate, update, or monitor infrastructure. This is creating opportunities across multiple application segments, from front end applications gaining richer functionality through APIs to backend systems expanding integrations with event sources and gaining richer built-in business logic to transform data. AWS Lambda continues to lead the charge, lending some of its core concepts and patterns to a range of fast-growing applications. However, migrating traditional enterprise applications to an event-driven serverless design can require enterprises to take a larger than anticipated leap. While several pockets of an organization could be experimenting with serverless applications, we continue to look for signs of broader adoption across the enterprise. New approaches that help serverless more effectively address internal policy and compliance requirements would help grease the skids and increase the adoption for many of these serverless applications. Opportunities exist for new programming languages to make it easier to write more powerful functions along with new approaches for managing persistence and ensuring policy compliance. As applications begin to operate across increasingly diverse locations, distributed databases such as FaunaDB will help address the need to persist state in addition to elastically scaling stateless compute resources in transient serverless environments. We are more convinced than ever that serverless will grow to be a dominant application architecture over time, but it will not happen overnight and thus far has been developing more slowly than we forecasted.
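For a concrete sense of the model, here is a minimal sketch of a stateless, event-driven function in the shape of an AWS Lambda Python handler (an event dict in, a response dict out); the order-total logic and sample event are invented for illustration.

```python
import json

# A stateless handler: all the business logic lives here, while scaling,
# patching, and monitoring of the underlying infrastructure are the
# platform's job -- which is the point of the serverless model.
def handler(event, context=None):
    items = event.get("items", [])
    total = sum(i["price"] * i["qty"] for i in items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# Invoking it locally with a sample event:
resp = handler({"items": [{"price": 5.0, "qty": 2}, {"price": 1.5, "qty": 4}]})
print(resp["statusCode"], json.loads(resp["body"]))
# → 200 {'total': 16.0}
```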


With the growth of applications across public cloud regions, remote locations, and individual devices, enterprises are already learning new approaches to secure data at rest, define data perimeters, and establish secure network paths. The move to working from home has accelerated this evolution, not only from a network perspective but also with a proliferation of bring-your-own-device (BYOD) arrangements. We are seeing continued and often increasing activity on several fronts:

  • Securing hardware and devices. Our portfolio company Eclypsium protects against firmware and hardware exploits, helping enterprises deal with the new normal of a distributed workforce and an increasingly risky environment of sophisticated attackers. We expect to see more companies realizing the need for firmware and hardware protection as well as broader opportunities around next generation endpoint protection solutions to support work-from-home, BYOD, and the now perimeter-less enterprise.
  • Secure computing environments. New virtualization technologies such as Firecracker, built in languages such as Rust, are already delivering security and performance in capacity-constrained environments. This is particularly valuable for the next generation of applications designed for low-latency interactions with end users around the world. With WebAssembly (WASM), code written in almost any popular language can be compiled into a binary format and executed in a secure sandboxed environment within any modern browser. This can be valuable when optimizing resource-hungry tasks such as processing image or audio streams, where JavaScript cannot deliver the required performance.
  • Securing data in use. While cryptographic methods can secure data at rest and in motion, these methods alone may be inadequate to protect data in use when it sits unencrypted in system memory. Secure enclaves provide an isolated execution environment that ensures that data is encrypted in memory and decrypted only when being used inside the CPU. This enables scenarios such as processing sensitive data on edge devices and aggregating insights securely back to the cloud.
  • Data privacy. Automated data privacy remains a challenge for companies of all sizes. GDPR and CCPA have resulted in unicorns such as OneTrust (which just acquired portfolio company Integris), and more countries are adopting and implementing similar regulations. Organizations around the world, across industry verticals, will require new workflows and services to store and access critical data, as well as to address the enduring business priority of understanding various data attributes: where data lives, what it contains, and what policies must apply to various usage patterns.
  • Securing distributed applications. Traditional security approaches designed for monolithic applications continue to be upended by distributed, microservices-based applications, where security vulnerabilities may sit at varying points in the network or in component services. Our portfolio company ExtraHop’s Reveal(x) product exemplifies the value of deeply analyzing network traffic in order to secure applications, and we expect this market to continue expanding. We believe that companies can turn security from a business risk into a competitive advantage by embracing “SecOps”: building secure applications from the ground up, using secure protocols with end-to-end encryption by default, building tools to quickly identify and isolate vulnerabilities when they arise, and modernizing the way teams work together by integrating security planning and operations directly into development teams. We are interested in new companies that further enable this SecOps approach for customers.

Developer Experiences

Rapid Application Development

Where front-end and back-end components were historically packaged together, we are seeing these components increasingly decoupled to speed up application development and raise the productivity of relatively non-technical users.

For example, developers working on simple web applications, such as corporate websites, marketing campaigns, and small private publications that don’t require complex backend infrastructure, are already realizing the advantages of automated build and deployment pipelines integrated with hosting services. These automated workflows enable developers to see their updates published immediately and delivered blazingly fast through CDNs in SEO-friendly ways. Open source JavaScript-based frameworks such as GatsbyJS and Next.js can improve application performance by an order of magnitude simply by generating static HTML/CSS at build time or pre-rendering pages at the server instead of on client devices. These performance improvements, combined with the ease of deploying to hosting platforms, are empowering millions of front-end developers to build new applications.
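The build-time generation idea can be illustrated with a toy sketch: render every page once, up front, so a CDN can serve static HTML. Frameworks like GatsbyJS and Next.js do this at far greater scale; the template and page data below are made up for the example.

```python
from string import Template

# A trivial page template; a real framework would use components instead.
PAGE = Template("<html><head><title>$title</title></head>"
                "<body><h1>$title</h1><p>$body</p></body></html>")

# Content that a headless CMS might serve over an API.
pages = [
    {"slug": "index", "title": "Home", "body": "Welcome!"},
    {"slug": "about", "title": "About", "body": "Who we are."},
]

# The "build step": render all pages ahead of time into a slug -> HTML map.
# A real build would write these out as files for a CDN to serve; no
# rendering work is left for the client or the server at request time.
site = {p["slug"]: PAGE.substitute(title=p["title"], body=p["body"])
        for p in pages}

print(site["about"])
```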

Content Management Systems (CMS) that store and present the data for these simple web applications have turned ‘headless,’ storing and serving data through APIs that can be plugged into different applications across varying channels. This has enabled non-technical users to simply update their corporate website or product and documentation pages without depending on engineers to deploy updates. This points to a related trend of a rapidly growing API ecosystem that can enrich these ‘simple’ applications with functionality delivered by third party providers.

In fact, workflows (business activities such as processing customer orders, handling payments, adding loyalty points once a purchase is complete, etc.) in modern enterprises are increasingly implemented by calling a set of different (often third-party) services that could be implemented as serverless functions or in other forms. While each service is independent and has no context of any other service, business logic dictates the order, timing, and data with which each service should be called. That business logic needs to be implemented somewhere, using code, and the scheduling of each constituent service needs to be done by an orchestration engine. A workflow engine is exactly that: it stores and runs the business logic when triggered by an event and orchestrates the underlying services to fulfill that workflow. Such an engine is essential to building a complex, stateful, distributed application out of a collection of stateless services. The rapidly growing popularity of open source workflow engines such as Cadence (from Uber) is a good testament to this trend, and we expect to see much more activity in this space going forward.
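The workflow engine described above can be sketched in miniature: each step is an independent, stateless service (here just a plain function), and the engine owns the ordering and the state passing. The order-processing steps are illustrative stand-ins, not any real engine's API.

```python
# Each "service" knows nothing about the others; it only transforms state.
def charge_payment(state):
    state["charged"] = state["amount"]
    return state

def ship_order(state):
    state["shipped"] = True
    return state

def add_loyalty_points(state):
    state["points"] = int(state["amount"])  # say, 1 point per dollar
    return state

# The engine holds the business logic: which steps run, and in what order,
# threading the workflow state through each stateless service.
def run_workflow(steps, event):
    state = dict(event)
    for step in steps:
        state = step(state)
    return state

result = run_workflow([charge_payment, ship_order, add_loyalty_points],
                      {"order_id": "A1", "amount": 42.0})
print(result)
# → {'order_id': 'A1', 'amount': 42.0, 'charged': 42.0, 'shipped': True, 'points': 42}
```

A production engine such as Cadence adds what this sketch omits: durable state, retries, timers, and recovery when a step or the engine itself fails.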

Everything as an API

Whether it’s a single page application with a mobile front end or a microservice that’s part of a complex system, APIs are enabling developers to reuse existing building blocks in different contexts rather than build the same functionality from scratch. “Twilio for X” has become shorthand for businesses that turn a frequently needed service into an easy-to-use, reliable, and affordable API that can be plugged into any distributed application. While Twilio (SMS), Stripe (payments), Auth0 (authentication), Plaid (fintech), and SendGrid (email) are already examples of successful API-focused companies, we continue to see more interesting companies in this area, such as daily.co (adds 1-click video chat to any app/site), Sila (a Madrona portfolio company providing ACH and other fintech back-end services as an API), and many more. As the API economy grows, so does the need for developers to easily create, query, optimize, meter, and secure these APIs. We are already seeing technologies such as GraphQL driving significant innovation in API infrastructure and expect to see many more opportunities in this space.

AI/ML Infrastructure

Data Preparation

Data preparation remains the largest drain on productivity in data science today. Merging data from multiple sources, cleansing and normalizing training data, labeling and classifying this data, and compensating for sparse training data are common pain points that we hear from customers and our portfolio companies. Vertical applications that mine unstructured data are a large investment theme, reflected in Madrona investments such as the intelligent contract management solution Lexion, as well as in significant social challenges such as identifying and moderating misleading or toxic online content. Technologies such as Snorkel that help engineers quickly label, augment, and structure training datasets hold a lot of promise. Similarly, tools such as Ludwig make it easier for citizen data scientists and developers to train and test deep learning models. These are examples of tools beginning to address the broader need for better and more efficient means of preparing data for effective ML models.
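The weak-supervision idea behind tools like Snorkel can be sketched simply: several noisy, hand-written labeling functions vote on each example, and the votes are combined (here by naive majority; Snorkel itself learns a generative model over the votes) into a training label. This mimics the concept only and is not Snorkel’s actual API; the toy moderation rules below are illustrative.

```python
# Label values: each labeling function may also abstain.
ABSTAIN, TOXIC, OK = -1, 1, 0

def lf_contains_slur(text):
    # Toy rule: flag text containing a known bad token.
    return TOXIC if "slur" in text.lower() else ABSTAIN

def lf_all_caps(text):
    # Toy rule: long all-caps messages are often hostile.
    return TOXIC if text.isupper() and len(text) > 10 else ABSTAIN

def lf_short_greeting(text):
    # Toy rule: common greetings are benign.
    return OK if text.lower() in {"hi", "hello", "thanks"} else ABSTAIN

def majority_label(text, lfs):
    """Combine noisy votes into one training label by simple majority."""
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)
```

Cheap, imperfect heuristics like these can label far more data than human annotators, which is exactly the leverage these data-preparation tools aim to provide.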

Data Access & Sharing

Another key challenge relates to developing and publishing data catalogs, with the parallel challenge of accessing critical data in secure ways. Often, superficial policies and access controls limit the extent to which scientists are able to use sensitive data to train their models. At times, the same scientist is unable to reuse the data they used for a previous model experiment. We see data access patterns differing across different steps in the model development workflow, indicating the need for data catalog solutions that provide built-in access controls as enterprises begin to consolidate data from a rapidly growing set of sources. This challenge of federating and securing data across organizations – whether partners, vendors, industry consortia, or regulatory bodies – while ensuring privacy is an increasingly important problem that we are observing in industries such as healthcare, financial services, and government. We see opportunities for new techniques and companies that will arise to enable this new “data economy.”

Observability & Explainability

As the use of machine learning models explodes across all facets of our lives, there’s an emerging need to monitor and deliver real-time analytics and insights around how a model is performing. Just as a whole industry has grown around APM (application performance management) and observability, we see an analogous need for model observability across the ML pipeline. This will enable companies to increase the speed at which they can tune and troubleshoot their models and diagnose anomalies as they arise without relying on their chief data scientists to root cause issues and explain model behavior. Explaining model behavior may sometimes be straightforward, such as in some medical diagnostic scenarios. In other cases, the need for underlying reasoning could be driven by regulation/compliance, customer requirements, or simply a business need to better understand the results and accuracy of model predictions. So far, explaining model predictions has largely been an academic exercise, though interesting new companies are emerging to operationalize this functionality in production for their customers.
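One concrete observability signal from the paragraph above is prediction drift: alert when the distribution of live model outputs moves away from what was seen at training time. The sketch below, a deliberately minimal illustration rather than any vendor’s method, flags drift when the live mean score sits too many training standard deviations from the training mean.

```python
from statistics import mean, stdev

def drift_alert(train_scores, live_scores, z_threshold=3.0):
    """Flag drift if the live mean is far from the training mean,
    measured in units of the training standard deviation."""
    mu, sigma = mean(train_scores), stdev(train_scores)
    z = abs(mean(live_scores) - mu) / sigma
    return z > z_threshold
```

Real model-observability platforms track many such statistics per feature and per output, but even this one-number check lets an on-call engineer notice a misbehaving model without waiting on the chief data scientist.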

Computer Vision and Video Analytics

The use cases for better, faster, and more accurate computer vision and analysis of video continue to proliferate. The COVID pandemic has highlighted more remote sensing use cases and the use of robotics in scenarios ranging from cleaning to patient monitoring. Analyzing existing video streams for deep fakes is front and center in consumer consciousness, while business scenarios for video analytics in media and manufacturing efficiency are promising new areas. Converting video streams to a visual object database could soon enable ‘querying’ a video stream for, say, the number of cars that crossed a given intersection between 10:00 and 10:15 a.m. While entrepreneurs need to ethically navigate the privacy concerns around video analysis, we feel there will be numerous new company opportunities in this area.
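The ‘visual object database’ idea can be made concrete: once a vision model has converted video frames into structured detection records, the intersection question above becomes an ordinary query. The record layout below (object class, track id, timestamp) is a hypothetical simplification of what a detector-plus-tracker pipeline would emit.

```python
from datetime import datetime

# Hypothetical output of a detection/tracking pipeline over one camera feed.
detections = [
    {"object": "car",    "track_id": 1, "time": datetime(2020, 6, 1, 10, 3)},
    {"object": "car",    "track_id": 2, "time": datetime(2020, 6, 1, 10, 9)},
    {"object": "person", "track_id": 3, "time": datetime(2020, 6, 1, 10, 5)},
    {"object": "car",    "track_id": 4, "time": datetime(2020, 6, 1, 10, 40)},
]

def count_objects(rows, kind, start, end):
    """Count distinct tracked objects of one kind seen in [start, end).
    Deduplicating by track_id avoids counting the same car per frame."""
    tracks = {r["track_id"] for r in rows
              if r["object"] == kind and start <= r["time"] < end}
    return len(tracks)
```

Counting cars between 10:00 and 10:15 a.m. is then a one-line call, and the same table supports arbitrary after-the-fact queries without re-processing the video.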

Model Optimization for Diverse Hardware

The hyperscale cloud providers continue to release new compute instances and chips optimized for specific workloads, particularly for machine learning. To realize the desired performance on these specialized instances in any cloud environment or edge location, as well as on a range of hardware devices, businesses need a path to optimize their models to run efficiently on diverse hardware platforms. We recently invested in an exciting new company, OctoML, that builds on Apache TVM (an open source project created by OctoML’s founders), offering an end-to-end compiler stack for models written in Keras, MXNet, PyTorch, TensorFlow, and other popular machine learning frameworks. We continue to believe that hardware advances in this space will create new investment opportunities for applications across domains such as medical imaging, genomics, video analytics, and rich edge applications.
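The core loop behind autotuning compilers such as Apache TVM can be illustrated in miniature: generate candidate implementations (‘schedules’) of the same operator, benchmark each on the actual target and inputs, and keep the fastest. Real systems search enormous schedule spaces with learned cost models; the sketch below is the idea only, not TVM’s API, and uses two hand-written matrix-multiply variants as stand-in candidates.

```python
import timeit

def matmul_naive(a, b):
    # Candidate 1: textbook triple loop over list-of-lists matrices.
    n, m, p = len(a), len(b[0]), len(b)
    return [[sum(a[i][k] * b[k][j] for k in range(p))
             for j in range(m)] for i in range(n)]

def matmul_transposed(a, b):
    # Candidate 2: a different 'schedule' that transposes b first
    # so inner products walk rows contiguously.
    bt = list(map(list, zip(*b)))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt]
            for row in a]

def pick_fastest(candidates, *args, repeats=3):
    """Benchmark each candidate on the real inputs; return the winner's name."""
    timings = {f.__name__: min(timeit.repeat(lambda f=f: f(*args),
                                             number=5, repeat=repeats))
               for f in candidates}
    return min(timings, key=timings.get)
```

The important property is that both candidates compute identical results, so the compiler is free to pick whichever one the measured hardware prefers.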

Vertical-specific Infrastructure

The Impact of 5G

Major wireless providers have begun rolling out 5G services while cloud providers such as AWS (with Wavelength) and Azure ($1B+ acquisitions of Affirmed Networks and Metaswitch) have been investing in supporting software services. Investments in next generation telecom infrastructure could provide significant opportunities for operators to move to virtual network appliances that previously required specialized hardware devices as well as expensive operations and support systems to provision these services. Further, the greater bandwidth and software-defined network infrastructure being built for 5G should create a variety of new opportunities for startups such as (a) network management for enterprises including converged WiFi/5G networks, (b) the harnessing and orchestration of new data (what will be connected and measured that never has before?), (c) new vertical applications and/or new business models for existing apps, and (d) addressing global issues of compatibility, coordination, and regulation. Like previous wireless network standard upgrades, the full move to 5G and its impacts will undoubtedly take a number of years to be fully realized. That being said, given current rollouts in key geographies, we expect the software ecosystem around 5G to coalesce fairly rapidly, creating new company opportunities in both the near and medium term.

Continued Proliferation of IoT

Relatedly, we expect 5G to push the wave of digitization beyond inherently data-rich industries such as financial services and into more industrialized sectors such as manufacturing and agriculture. The Internet of Things (IoT) will capture the data in these sectors and is likely to result in billions of sensors being attached to a variety of machines. Earlier this year we invested in Esper.io, which helps developers manage intelligent IoT devices, extending the type of DevOps functionality that exists in the cloud to any edge device with a UI, devices that are increasingly Android-based. Industrial IoT also continues to emerge into the mainstream, with manufacturing companies investing in ML and other analytics solutions after years of discussion. We think companies taking a vertical approach and providing applications tailored to the specific needs of a certain industry will grow most quickly.

Vertical-Specific Hardware+Software

We are also seeing several verticals requiring specialized hardware for key business functions. For example, electronic trade execution services must provide deterministic responses to orders placed within a small window of time. In addition to requiring hardware-based time sync across the network, participants often use specialized hardware, including FPGAs, to execute their algorithms. FPGAs are also common in high-speed digital telecom systems for packet processing and switching functions. Similarly, FPGA-based solutions are being adopted across healthcare research disciplines. FPGAs can accelerate identifying matches between experimental data and possible peptides or modified peptides, which can be evaluated in near real time, enabling deeper investigation, faster discovery, and more effective diagnostics to improve healthcare outcomes. We are realizing that a long tail of such applications across verticals would benefit from a cloud-based “hardware-as-a-service” that offers a path for almost every application to run in the cloud.
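The peptide-matching computation mentioned above reduces to a massively parallel comparison, which is exactly the shape of work FPGAs accelerate: check each observed mass against a library of candidate peptide masses within a tolerance. The sketch below shows the comparison in plain Python; the peptide names and masses are made-up illustration values, not real assignments.

```python
def match_peptides(observed_mass, library, tolerance=0.02):
    """Return names of library peptides whose mass is within `tolerance`
    daltons of the observed mass. On an FPGA, every comparison in this
    loop would run in parallel against the streaming instrument data."""
    return [name for name, mass in library.items()
            if abs(mass - observed_mass) <= tolerance]

# Hypothetical monoisotopic masses (daltons) for illustration only.
PEPTIDE_LIBRARY = {
    "PEPTIDE_A": 1045.53,
    "PEPTIDE_B": 1045.54,
    "PEPTIDE_C": 1302.71,
}
```

At instrument data rates, running millions of these tolerance checks per spectrum is what pushes the workload off CPUs and onto parallel hardware.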

Business Model Innovation

While this post has been largely organized around business needs that are being met by technology innovations and new product opportunities, we are also interested in investing in companies that take advantage of related business model innovations that these technological advances in enterprise infrastructure have enabled. For instance, the move to the cloud allows companies to provide consumption-based pricing, self-service models, “as-a-service” versions of products, freemium SKUs, rapid POCs and product trials, and direct reach to end-user developers or operations team members. We are equally interested in founders and companies that have found new ways to go-to-market and efficiently identify and reach customers.

Relatedly, the continued adoption of open source as the predominant software licensing approach for enterprise infrastructure has created new opportunities for business model innovation, significantly evolving the traditional “paid support” model for open source into open-core and “run it for you” approaches. Enterprises are increasingly demanding an open source option because of the typical benefits of lower TCO and control. Developers (and vendors) love open source because of the bottom-up adoption that creates validation and virality. At the same time, the bigger platforms (cloud providers) are embracing open source technologies on their platforms, often in a manner that creates inherent tension with the commercial companies built around those same open source technologies. We continue to strongly believe that having a differentiated, unique value proposition on top of an open source project is critical for a commercial company to be built. It is that differentiated value proposition that ultimately creates a strong moat and defensibility against the platform companies supporting open source technologies on their stacks. We anticipate that all these factors, plus the intrigue of heightened tensions between hyperscale clouds and open source vendors, will add up to continued opportunity in the dynamic world of open source in the years to come.

[1] 451 Research, April 9, 2020, “COVID-19: Cloud-native impacts.” Brian Partridge, William Fellows, et al.

Founded and Funded – Deploying ML Models in the Cloud, on Phones, in Devices with Luis Ceze and Jason Knight of OctoML

Photo: Luis Ceze

OctoML on Octomizing/Optimizing ML Models and Helping Chips, Servers and Devices Run Them Faster; Madrona doubles down on the Series A funding

Today OctoML announced the close of their $15 million Series A round led by Amplify Partners. Madrona led the seed (with Amplify participating), and we are excited to continue to work with this team that is building technology based on the Apache TVM open source project. Apache TVM is an open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators that the founders created several years ago. OctoML aims to take the difficulty out of optimizing and deploying ML models. Matt McIlwain sat down with Luis Ceze and Jason Knight on the eve of their Series A to talk about the challenges with machine learning and deep learning that OctoML helps manage. Listen below!

Transcript below:
Welcome to Founded and Funded. My name is Erika Shaffer, and I work at Madrona Venture Group, and we are doing something a little different here. We’re here with Matt McIlwain, Luis Ceze, and Jason Knight to talk about OctoML. I’m going to turn it over to Matt, who has been leading this investment. We are all super excited about OctoML and about hearing what you guys are doing.

Matt McIlwain
Well, thanks very much, Erika. We are indeed super excited about OctoML. And it’s been great to get to know Luis and Jason over many years, as well as the whole founding team at OctoML. We’ll get to their story in just a second. The one reflection that I wanted to offer was that this whole era of what we think of as intelligent applications has been building momentum over the past several years. We think back to companies like Turi that we were involved with, and Algorithmia, and more recently Xnor, and now I think a lot of those pieces are coming together in the fullest of ways in what OctoML is doing. But rather than hear it from me, I think you’ll all enjoy hearing it more from the founders. So I want to start off with a question going back, Luis, to the graduate school work that some of your PhD students were doing at the University of Washington. Tell us a little bit about the founding story of the technology and the Apache TVM open source project.

Yeah, absolutely. First of all, I would say that if you’re excited, we’re even more excited about this, and super excited about the work you’ve been doing with us. So, the technology came to be because of an observation that Carlos Guestrin and I had a few years ago, actually four years ago now: there were quite a few machine learning models that were becoming more popular and more useful, and people tended to use them, but there was also a growing set of hardware targets one could map these models to. So when you have a great set of models and a growing set of hardware targets, the question became, “Well, what’s going to happen when people start optimizing models for different hardware and making the most out of their deployments?” That was the genesis of the TVM project. It essentially became what it is today: a fully automated flow that ingests models expressed in a variety of the popular machine learning frameworks, and then automatically optimizes them for chosen deployment targets. We couldn’t be more grateful to the big open source community that grew around it, too. The project started as an open source project from the beginning, and today it has over 200 contributors and is in active deployment in a variety of applications you probably use every day from Amazon, Microsoft, and Facebook.

I think that our listeners always enjoy hearing the founding stories of companies, and about your founding team, principally some of the graduate students that you and Carlos had been working with. Maybe tell us a little bit about that, and then it’d be great to have Jason join in, since he joined up with all of you right at the beginning.

Absolutely, and that’s a great way of looping Jason in at the right moment, too. So, as TVM started getting more and more traction, we did a conference at the end of 2018, and we had well over 200 people come, and we were like, “Oh, wow, there’s something interesting happening here.” It was one of those moments where all the stars aligned: the key PhD students behind the project, including Tianqi Chen, Thierry Moreau, and Jared Roesch, were all close to graduation and thinking about what’s next. And I was also thinking about what to do next. And then Jason was at Intel at that time, was really interested, and was a champion of TVM on the Intel side. He then said, “Oh, it turns out that I’m also looking for opportunities.” So he came and visited us, we started talking more seriously, and the thing evolved super quickly from there. And now you can hear from Jason himself.

Yeah, actually my background is as a data scientist. Through a complicated backstory, I ended up at Intel through a silicon hardware startup acquisition. I was running a team of product managers looking at the software stack for deep learning and how a company like Intel was going to make inroads here and continue to impress and delight our huge customer base. I was helping to fund some of the TVM work as a result of that. And despite my best efforts at Intel, kind of pushing the big ship a few degrees at a time toward these new compiler approaches to supporting this type of new workload and new hardware targets, it was clear that the traction was already taking place with the open source TVM project, and that was where the action was happening. So it was natural timing and a natural opportunity for something to happen here, in terms of not only Intel’s efforts but more broadly the entire ecosystem needing a solution like this, and the kind of pain points I’d seen over and over again at Intel of end users wanting to do more with the hardware they had available and the hardware that was coming to them, and what needed to happen to make that realistic. And so that was a natural genesis for you, me, and Luis to talk about this and make something happen here.

That’s fantastic. And of course we had known Jason for a little while at Madrona, and we’re just delighted that all these pieces were coming together. Hey, Luis, can you say a little bit more? Because you had that first conference in December of 2018 and then a subsequent one in December of 2019. It seemed that not only was the open source community coming together, but folks from some of the big companies that might want to help somebody build and refine their models, or deploy their models, were coming together too. And that’s kind of a magical combination, when you get all those people working together in the same place.

Yes, absolutely. As I said, the conference that made us realize something big was going on was December 2018. And then a year later, we ran another conference, and by that time OctoML had already been formed. We formed the company in late July of 2019, and by December we already had the demo of our initial product that Jason demoed at the conference. So at the December 2019 conference, we had pretty much all of the major players in machine learning present, both those that use machine learning and those that develop machine learning. We had, for example, several hardware vendors join us. Qualcomm has been fairly active in deploying hardware for accelerating machine learning on mobile devices; they had Jeff Gehlhaar there, on the record saying that TVM is key to accessing their new hardware called Hexagon. We had ARM come and talk about their experience in building a unified framework for machine learning support across CPUs, GPUs, and their upcoming accelerators. We had Xilinx, and we had a few others, including Intel, who came and talked about their experience in this space. What was also interesting during that conference was having companies like Facebook and Microsoft talking about how TVM was really helpful in reducing their deployment pains, optimizing models enough that they can scale in the cloud and also run well enough on mobile devices. This was very heartwarming for us because it confirms our thesis that a lot of the pain in using machine learning in modern applications is shifting from creating the model to really deploying it and making the best use of it. And that’s really our central effort right now: to make it super easy for anyone to get their models optimized and deployed by offering our TVM-in-the-cloud flow. Maybe Jason can add a little bit to that from the product side.

Yeah, it’s great seeing the amount of activity and innovation happening in the TVM space at the TVM conference. But it’s clear that there’s still a long, long way to go in terms of better supporting the long tail of developers who maybe don’t have the amount of experience that some of these TVM developers do, in terms of just getting their model, optimizing it, and running it on a difficult target like a phone or an embedded platform. So yeah, we’re happy to talk more about that. We actually just put up a blog post detailing some of the things we talked about at the TVM conference, and we’ll be giving out more details soon.

Yeah, I think what’s interesting, if I think about it from a business perspective, is that on the one hand, you have all kinds of folks with different levels of skills and experiences building models, refining their models, and optimizing their models so that they can be deployed. And then you’ve got this whole fragmented group of not just chip makers, as you’re referencing, but also the hardware devices that those chips go into, whether that’s a phone or a camera or other kinds of devices that can be anywhere in a consumer or commercial sense. What’s interesting to me, what I like about the business, is that you guys are helping connect the dots between those worlds in a kind of simplified, end-to-end sort of way. It would be interesting to spend a little more time on the Octomizer, your first product, specifically, but more generally on what you’re trying to do in connecting those worlds.

Yeah, definitely. So one way to look at this is that we’ve seen a lot of great work from TensorFlow from Google and PyTorch from Facebook and others on the training side, for creating deep learning models and training those from datasets. But when you look at the next step in the lifecycle of a machine learning model, there’s a lot less hand-holding and there are fewer tools available to get those models deployed onto production devices, when you care about things like model size, computational efficiency, portability across different hardware platforms, etc. And this sits right at one of the difficulties of the underlying infrastructure and how it’s built, with the dependence on hardware kernel libraries. These are handwritten, hand-optimized kernel libraries built by each vendor, and they are somewhat holding the field back and making it more difficult for end users to get their models into production. And so TVM, and the Octomizer that we’re building on top of it, make it easier to just take a model, run it through the system, look at the optimized benchmark numbers for that model across a variety of hardware targets, and then get that model back in a packaged format that’s ready to go for production use, whether you’re writing a Python application, or you need to bring out every bit of performance with a C shared library and a C API, or a Docker image with a gRPC wrapper if you want some easy serverless access. So that’s what we’re building with the Octomizer. It’s designed to be a kind of single pane of glass for your machine learning solutions across any hardware that you care about. And then we build on top of that with things like quantization and compression and distillation as we move into the future.

A couple more points to add to that. Those are definitely important, and that is the very first step we’re taking. One interesting thing to realize about what we’re doing here is that TVM really offers this opportunity of providing a unified foundation for machine learning software on top of a variety of hardware. By unifying the foundation one uses to deploy models, you also create the perfect wedge to add more machine learning ops into the flow. People are starting to realize more and more that regular software development has enjoyed incredible progress in DevOps, but machine learning doesn’t have that, right? So we see the Octomizer as a platform: we start with model optimization and packaging, but it’s the perfect point for us to build on to offer things like instrumenting models to monitor how they’re doing during deployment, to also help understand how models are doing, and essentially provide a complete solution of automated services for machine learning ops.

One of those applications is training on the edge, in the sense that training is no more than a set of additional operations that are required. With a compiler-based approach, it’s quite easy to add this extra set of ops and deploy them to hardware. So getting things like training on the edge is a target for us in the future as we look forward here.

That’s great. Well, I want to come back a little bit to the prospect side, but I’m super curious now. We talked about the company name, OctoML. We talked about the product name, Octomizer. How did this all come about? How did you guys come up with this name? It’s a lot of fun. I know the story, but for most of the folks here, what’s the story?

Okay, all right. I’m sure Jason and I can interleave here, because there are multiple angles. So it turns out that both Jason and I, and other folks in our group, have an interest in biology. Nature has been an incredible source of inspiration in building better systems, and nature has evolved incredible creatures. When you look around and you think about a creature like an octopus, you see how incredibly smart they are. They have distributed brains, right? So they are incredibly adaptable, and they’re very, very smart, plus very happy, lighthearted, and creative creatures. This is something that resonated with everyone, so it really stems from an octopus, and a lot of what we do now has a nautical theme. We have the Octomizer, and you’re going to hear more in the future about things called Aquarium and Kraken and the Barnacles, which are all part of our daily communication, which makes it super creative and lighthearted. So all right, Jason, maybe I talked too much. It’s your turn now.

I guess one thing to point out is that we really applied a machine learning slant to even our name selection, because of the objective function, or set of regularizers, we applied to the name selection process itself: it needed to be relatively short, easy to spell, easy to pronounce, and somewhat unique but not too unique. And then it has all these other associations that Luis was mentioning, or similar associations, so those were definitely in the objective function as we were working through this process. It also rhymes with “optimal.” So yeah, it took us a while to get there, but we were happy with the result.

I think you guys did a great job. And I also like the visual notion that, even though they’ve got distributed brains, there is this sort of central part of an octopus, and then it can touch anything. So it’s kind of this “build once, run many places” sort of image that flows through. But maybe I’m stretching it too much now.

No, that’s an excellent point. We do think about TVM, and our technology, really being a central place that can touch multiple targets in a very efficient, adaptable, and automatic way, right? It’s definitely within the scope of how we’re thinking as well. So, great.

So, 8 bits in a byte, 8 being a core primitive computational power of two.

Very good. Coming back to the open source community: you guys have been involved in the open source community in other ways, partly because of your academic backgrounds. So how is it? How are things working within the Apache TVM community alongside OctoML? It’s a very important time in the life of both, and I’m curious to get your thoughts on that.

Yeah, we really see OctoML as doing and pushing a lot of the work that needs to be done in the open source community, eating our vegetables. We’re currently ramping up the team to put more of that vegetable-eating spirit into the TVM project and helping pitch in on documentation and packaging, all those things that need to be done. But it’s difficult. Open source is known to attract people who scratch their own itch and solve their own problems, but these kinds of less sexy tasks often go undone for long periods of time. So we hope to be a driving force in doing a lot of that, and of course working with the community more broadly to connect the dots and help coordinate larger structural decisions that need to be made for the project. All of this is being done under the Apache Foundation umbrella and governance process, so we’re working closely with the Apache folks and continuing to work smoothly under that umbrella.

Yeah, just to add a couple more thoughts here: we are contributing heavily to the Apache TVM project in multiple domains, as Jason said, and we think this is also very fortuitous for us. One could go and use TVM directly to do what they want, but then, as they start using it, they realize there are a lot of things a commercial offering could do, for example, make it much more automated, make it plug and play. TVM at its core started as a research idea, and now part of what it is, is using machine learning for machine learning optimization, and that can be made much more effective with the right data, which we are helping to produce as well. So we couldn’t be happier with the synergy between the open source project, the open source community, and what we’re doing on our private side as well.

Also, one thing that’s been nice to see, in talking to users or soon-to-be users of the TVM project, is that they’ll say, “Oh, it’s great to see you guys supporting TVM. We were hesitant about jumping in because we didn’t want to jump in and then be lost without anyone to turn to for help. But having someone like you, knowing that you’re there for support, makes us feel better about putting those initial feet on the ground.” So that’s been really nice to see as well.

Now, that’s really interesting. And we are recording this at a time when, in fact, we’re all in different places, because we’re in the midst of the COVID-19 crisis. I’m curious on a couple of different levels. One is with the open source community, two is with some of the folks that are interested in becoming early customers, but even thirdly, with your team: how are all those things going for you all, working in this environment? Certainly there are companies like GitLab and others that have had lots of success both working as distributed teams and working with their customers in a distributed way. What are some of the early learnings for you all on that front?

Well, since TVM started as an open source project, a lot of us have that distributed, collaborative blood flowing through our veins to begin with. Working remotely in a distributed, asynchronous capacity is kind of part and parcel of working with an open source community. So luckily, both the community and us as a company have been relatively untouched on that front.

Oh, absolutely. When we started the company, we were heavily based in Seattle, but Jason is based in San Diego, and from the start we grew more distributed: we hired people in the Bay Area, and we had people in Oregon on the team. It’s working so well and it’s been so productive. We were very fortunate not only to have started somewhat distributed to begin with, but that it’s now serving us really well. We also had great investors who stuck with us and funded us right at the moment when we needed to continue growing. In fact, we are hiring people in a distributed way. Just yesterday we had another person that we really wanted to hire sign and join our team. So we are fully operating in all capacities, including interviewing and hiring, in this distributed way, and I haven’t noticed any hit in productivity whatsoever. If anything, I think we’re probably even more productive and focused.

And on the customer side, I would say it’s been a mixed bag. There are customers that have some wiggle in their roadmaps here and there, but there are also customers seeing orders-of-magnitude increases in their product demand because they’re serving voice-over-IP or something to that effect that is heavily in demand in this time of need. So it just depends, and luckily, there haven’t been any negative shifts there.

Yeah, I’ve really been blown away by your ability to attract some incredible talent to the team in just a short period, I don’t know, seven or eight months of really being a company, and I get the sense that momentum is just going to continue. So congratulations on that front. I’m curious on the customer front, to pick up on what you were saying, Jason: what are you finding in terms of customer readiness? I think back to even a few years ago, when it seemed like it was almost still too early; there was a lot of tire kicking around applied machine learning and deep learning, and people were happy to have meetings, but they were more curiosity meetings. It seems like there’s a lot more doing going on now. But I’d be interested in your perspectives on the state of play.

Yeah, I would say it’s more than timing, it’s variance, in that we see a huge range of customers. There are customers that have deep pain today, who needed the computational costs on their cloud bill down yesterday, because they’re spending tons of GPU hours on every customer inference request that comes in. And then you have really large organizations with hundreds of data scientists trying to support very complex sets of deployments across half a dozen or dozens of different model-hardware endpoints. So there’s a lot of pain from a lot of different angles, and it’s mixed across the set of value propositions that we have: performance, ease of use, and portability across hardware platforms. It’s been really nice to see; we were just talking to a large telecom company the other night, and there’s huge demand. And it’s really nice to have the open-source ecosystem as well, because it’s a natural funnel for picking up on this activity. We see someone coming through using the open-source project and talking about it on the forums, and we can go have a conversation with them, and there’s naturally already a need there, because otherwise they wouldn’t be looking at the open-source project.

Yeah. And just one more thing that I think is interesting to observe: there is indication that it’s early, but already big enough to have serious impact. For example, we hear companies wanting to move computation to the edge not only to save on cloud costs, but to be more privacy conscious. Right now, as you can imagine, with a lot of people working from home, all of a sudden we see a huge spike in computational demand in the cloud. And we have some reason to believe that a lot of that involves running machine learning models in the cloud, whose performance companies will have to improve, because otherwise there’s simply no way to scale as fast as they need to. So we’re seeing this spike in demand for cloud services being a source of opportunity for us as well.

Also, one thing I’m excited about is the embedded side of things. There’s pent-up demand there, but essentially there hasn’t been much activity in terms of machine learning on embedded devices, because there haven’t been solutions out there that people can use to deploy machine learning models onto embedded processors. Being able to unlock that chicken-and-egg problem, crack the egg essentially, have a chicken come out and start that cycle, and really unlock the embedded ML market is a really exciting proposition to me as we get there through our cloud, mobile, and embedded efforts.

And I think that’s what we saw too, having been fortunate to provide the seed capital with you guys last summer into the early fall, and to really be alongside you from day one on this journey. I’m interested in two things. One is that, in retrospect, you all made the decision in the early part of this year that there was enough visibility and enough evidence to go ahead and raise a round, and that’s looking well timed now. Why did you decide to do that? And the second question is: what are you going to do with this $15 million that you’ve just raised, and what’s the plan in terms of growing the business side of the TVM movement?

Yeah, absolutely. As I said, it was incredibly well timed, by luck and good advice as well. At that time, what motivated us was that we had an opportunity to hire incredible people; we were actually more successful in hiring than we could have hoped for in the best case. So it was: why not, in this climate, when we have interesting and amazing people to hire, just go and hire them, and we need resources for that. That was the first reason to do this early. And now, as Jason said, we’ve started to engage with more customers and get our technology into their hands, which immediately puts more pressure on us to hire more people to make sure those customer engagements are successful. So we’re going to staff that up and make sure we have the right resources to make them successful. Also, as we go to market and explore more theses on how we build a business around the Octomizer, that requires effort. That’s what we’re going to use the funds for: increasing our machine learning systems technology team and also growing our platform team, because what we’re building here is essentially a cloud platform to automate all of this, a process that requires a significant amount of engineering. We’ve been very engineering-heavy so far, naturally, because we’re building the technology and we are very much technologists first. But now is the time to beef up our business development side as well, and that’s where a good chunk of our resources are going to go.

Also, one thing to point out is that, given where the TVM project sits in the stack, in terms of having the capability to support pretty much any hardware platform for machine learning, you’re talking about dozens of hardware and silicon vendors, and then basically being able to cater to any machine learning and deep learning workload on top, whether it’s in the cloud, mobile, or embedded. You’re talking about a huge space of opportunity, and that’s just the beginning: there are extensions upstream to training and downstream to post-deployment, and there’s classical ML and data science as well. Each one of these permutations is a huge effort in itself, so just trying to take even small chunks of this huge pie is a big engineering effort. That’s definitely where a lot of the money is going at this point.

Well, we’re really excited and honored to be continuing on this journey with both of you, and not only the founding team, but of course all the talented folks that you’ve hired. From a timing perspective, the fundraise was well timed, but from a market perspective, the role that you all are trying to play and the real problems that you’re trying to solve are exceptionally well timed. We’re looking forward to seeing how that develops in the months and years ahead.

And we’re excited to be here. Thanks, Matt.

We couldn’t be more excited. Thank you for everything.


Tesorio, Applying AI to the Office of the CFO

Today, we are thrilled to announce that we are leading Tesorio’s $10M Series A funding round. As a career CFO, I am always looking for ways to automate the back office and to apply modern technologies, such as ML/AI and RPA, to the office of the CFO. When you are managing a company that is growing quickly, it is imperative that processes scale and do not break as the organization changes. That is why I was excited when I met Tesorio and saw a practical application of new algorithms and technology in a space that I have been involved in my entire professional life.

Throughout my career at both private and public companies, I was constantly frustrated by how many important analyses happen in a bespoke Excel spreadsheet. In today’s modern era, it is amazing how many crucial decisions are made, key conclusions formed, and key metrics created with spreadsheets that are on the brink of breaking – too many links, formulas, dependencies, and worksheets!

The ultimate financial metric for a company is Cash. Not just the current balance, but the trajectory of the balance. In the vast majority of companies this analysis is performed on a spreadsheet. One containing many links, often circular references, and pulling in data from multiple sources. The risk of an error, a break, is high. Equally importantly, a spreadsheet is not exactly a living, breathing thing even though we might pretend otherwise. Changes to data sitting in different silos do not flow easily into spreadsheets without complex processes and significant human involvement.

When I met the Tesorio team, it was exciting to quickly dive into a product that was replacing the spreadsheet and adding an intelligent layer to the cash flow forecasting process. By pulling actual transactions from back-office systems, applying ML/AI to that history, and allowing the user to add in unique transactions, the system uses a three-part process to generate a cash flow forecast.
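As a sketch of how such a three-part forecast might fit together (the data model and function names here are hypothetical, not Tesorio’s implementation), each input maps a period label to an amount:

```python
def forecast_cash_flow(opening_balance, predicted_receipts,
                       predicted_payments, one_off_transactions):
    """Toy three-part cash flow projection: period-level receipts and
    payments predicted from back-office actuals (with any ML adjustments
    already folded in), plus unique one-off transactions added by the
    user. Returns (period, closing balance) pairs in period order."""
    balance = opening_balance
    projection = []
    periods = sorted(set(predicted_receipts) | set(predicted_payments)
                     | set(one_off_transactions))
    for period in periods:
        balance += predicted_receipts.get(period, 0.0)
        balance -= predicted_payments.get(period, 0.0)
        balance += one_off_transactions.get(period, 0.0)
        projection.append((period, round(balance, 2)))
    return projection
```

The point of the sketch is the structure: once predictions and user-entered one-offs live in the same data model, the forecast becomes a running reduction over them rather than a web of spreadsheet links.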

In addition, Tesorio enables their clients to impact and improve their cash flow. The building blocks of cash flow – the inputs and outputs – are addressed in the Tesorio offering. Their AR Automation offers the ability to streamline the collection of AR (Accounts Receivable) by understanding when customers typically pay, automating customer contact to speed payment, and delivering a dashboard for finance teams to manage the workflow and communication that is core to successful collections. The same is true with the management of AP (Accounts Payable) and the forecasting and planning of hedging strategies. The result is that much of the Finance and Accounting teams spend their day in the Tesorio application – all ultimately feeding the cash flow forecast.

The founding team, Carlos Vega and Fabio Fleitas together bring a unique combination of technical and financial expertise. They partnered together at UPenn, where Carlos was studying analytics at Wharton after spending nearly a decade in finance, and Fabio was studying computer science in the School of Engineering where he founded PennApps Fellows. Together they have brought to market a product that has already been adopted and used by an impressive list of companies including Veeva Systems, Box, WP Engine, Instructure, and Couchbase.

Finally, Tesorio squarely fits the Intelligent Applications thesis that we at Madrona have been focused on for several years. As we have discussed here, we expect intelligent applications to disrupt every business process by collecting data across different silos and applying ML/AI to that data to extract unique insights, automate workflows, and even obliterate obsolete processes in some instances. In Tesorio, we believe we have finally found a product that provides the CFO and her team unique insights into the business and optimizes its finances like never before.

Applying Machine Learning to Finding Great Talent – SeekOut

I’m excited to welcome SeekOut to our portfolio. SeekOut is a game-changing solution for recruiters and hiring managers to identify and connect with high performers who have a demonstrated track record of solving problems that are relevant to their needs.

In our capacity as the Talent Team at Madrona, Matt Witt and I help our portfolio companies – and the ecosystem at large in greater Seattle – uncover, attract, select and retain a diverse workforce of high performing individuals. We need great tools to do this work. Once every couple of decades a game-changing recruiting tool like SeekOut comes along to provide an order of magnitude advantage over previous solutions. Matt and I took SeekOut out for a spin last year to give feedback to Soma on the capabilities of the product. It didn’t take long for SeekOut to become an extension of our brains and our sourcing platform of choice. We now use it daily and recommend SeekOut to our portfolio companies as one of the most important tools they need for candidate discovery and engagement.

S. Somasegar (Soma) led our investment in SeekOut, and he and the founders, Anoop Gupta and Aravind Bala, know firsthand the pain of recruiting engineers from their days at Microsoft leading teams of engineers on projects with huge scopes. But since Anoop and Aravind were not recruiters by profession, they spent a lot of time with their customers, and that has paid off with a rich feature set that works for recruiters across a broad set of industries. They have applied the tools they used as engineering leaders on massive computational projects to the problem of matching data on people with the needs of companies that are trying to grow both quickly and intelligently.

“We see a lot of recruiting related solutions focused on the talent market. What made SeekOut stand out was the team’s pursuit of the solution, customer focus and perseverance which has paid off with the incredible customer adoption we have seen this year since the launch of the product,” commented Soma.

SeekOut recognized there is significant room to go beyond the open web or LinkedIn in terms of both data sources and ML/AI and with that, provide better insights on candidates in competitive markets. SeekOut leverages self-reported data from platforms like LinkedIn (what individuals say they’ve done) and performance data like GitHub and Patents/Publications databases (what they’ve actually done) and who they actually are (pedigree, work history, geography and specific demographics including diversity and contact information). The result is a massive, dynamic database that is constantly being updated to more effectively reach out to highly relevant candidates. We recently published our investment themes and SeekOut is a perfect example of our intelligent app category – they are taking a huge amount of raw data, organizing it and applying intelligence to it to deliver better outcomes for users.

Anoop and Aravind also found that SeekOut made it possible for non-technical recruiters to gauge the relative merits of engineering candidates based on the quality of their work products. Gauging engineering talent is something most non-engineers find impossible to do, and it is a constant source of frustration between engineering leaders and tech recruiters. SeekOut’s advanced features help recruiters dissect keywords to their root and suggest derivatives and alternatives, while simultaneously learning about use cases and teaching recruiters what the terms actually mean. This gives recruiters the ability to go fast and compete more successfully.
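The keyword expansion described above can be approximated with a simple relation table (toy Python; the terms and relations here are made up for illustration, and real systems learn them from data rather than hand-building them):

```python
def expand_keywords(query_terms, related_terms):
    """Expand each search term with known derivatives and alternatives.
    `related_terms` maps a canonical term to its variants; a production
    system would learn these relations from large corpora."""
    expanded = set(t.lower() for t in query_terms)
    for term in list(expanded):
        expanded.update(related_terms.get(term, []))
    return sorted(expanded)

# Hypothetical relation table for illustration.
related = {"kubernetes": ["k8s", "container orchestration"]}
```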

As recruiters on the front lines, we can tell you that no other tool does this combination of things as well as SeekOut. One recruiter told me, “I was able to access hundreds of candidates through SeekOut that I hadn’t ever seen before on LinkedIn. It has been powerful specifically for searching for female engineering managers. I’ve recently started looking at data science as well, and it feels like I’ve found a gold mine!”

Why and How Intelligent Applications Continue to Drive Our Investing

Intelligent Applications have been and continue to be a focus of our investing. These apps sit on top of the infrastructure a company chooses, the data they collect and curate, the machine learning they apply to the data and the continuous learning system they build. In this deep dive we talk about why intelligent applications are a central component to our investing themes and where we see the opportunities for company creation and building.

Intelligent apps are applications that use data and machine learning to create a continuous learning system that delivers rich, adaptive, and personalized experiences for users. These intelligent apps range from “net-new” apps like those powering autonomous vehicles and automated retail stores to existing apps that are enhanced with intelligence, such as lead scoring in a CRM app or content recommendations in a media app.

Intelligent apps will have a massive impact on the way we work, live, and play, and we have already been blown away by the potential in what we are seeing companies build today. Some of the most exciting intelligent apps we have seen do at least one of the following:

Enable completely new behaviors

Some of the most impressive demonstrations of machine learning are those that use AI to create new business processes and markets that completely change the way people do things. One high-profile example is Amazon Go stores using computer vision to completely change the supermarket or convenience store experience by removing the checkout process.

Another great example is Textio. Textio offers an AI-powered ‘augmented writing platform’ which draws on massive amounts of historical data to help companies write better job descriptions that will attract higher quality applicants. Both of these examples use AI to create new processes that result in better experiences and better outcomes for their users.

Drive 10x (or better) process improvements

AI automation and insights can also be used to optimize existing processes and workflows. Automation using AI is at the cornerstone of the digital transformation every enterprise is going through. For example, UiPath’s RPA platform allows companies to drastically reduce costs by automating a wide variety of software-based tasks using UiPath’s “robots.” While the UiPath platform is early in its journey to becoming an intelligent app, it is already helping its customers drive 10x process improvements.

Suplari also uses AI to improve existing business processes, namely to analyze purchase behaviors to better understand how to drive cost reductions and manage supplier risks. While Suplari’s customers may have individual processes to reduce software costs through deduplication or to identify opportunities for savings in contract renewals, using AI to proactively identify the best opportunities allows their customers to realize large efficiencies in their procurement processes.

Integrate silos (data and workflows) and capture value

Another great opportunity for AI companies is to combine data and processes to allow companies to combine different parts of the value chain and capture more value. For example, Affirm uses machine learning to approve consumer loans and uses these loans to help ecommerce companies improve shopping cart conversion rates.

One of our portfolio companies, Amperity, literally combines different silos of customer data. Companies that have customer data stored in disparate systems and tools can’t easily leverage this data to get a full picture of their customer base. Intelligently stitching these silos together drives significant business results for Amperity’s clients who can now clearly see the stitched 360-view of their customers and use it to market and sell products in a more intelligent way.
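At its simplest, stitching identities across silos is a clustering problem: records that share an identifier belong to the same customer. The union-find sketch below is illustrative only; production systems like Amperity’s use probabilistic matching rather than exact identifier overlap, and the record schema here is hypothetical.

```python
def stitch_identities(records):
    """Toy identity stitching: records (dicts with optional 'email' and
    'phone' keys) that share any identifier are merged into one customer
    cluster. Returns clusters as sorted lists of record indices."""
    parent = list(range(len(records)))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # identifier value -> first record index that used it
    for idx, rec in enumerate(records):
        for key in ("email", "phone"):
            value = rec.get(key)
            if value is None:
                continue
            if value in seen:
                union(idx, seen[value])
            else:
                seen[value] = idx

    clusters = {}
    for idx in range(len(records)):
        clusters.setdefault(find(idx), []).append(idx)
    return sorted(clusters.values())
```

For example, a record with only an email and a record with only a phone number end up in the same cluster if a third record links the two identifiers.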

Trends Converge

Now is an exciting time for investors and entrepreneurs to be focusing on intelligent applications because of the momentum and growth of several important technology trends:

  • Massive computational power and low-cost storage are creating the infrastructure to train machine learning models
  • More data is generated and stored than ever before in many different fields like healthcare, autonomous systems, and media
  • Availability of good-enough compute capabilities at the edge to do much of the inferencing work locally, as opposed to having to round-trip to the cloud
  • Continued improvement and development in tools and frameworks make it easier for companies and developers to begin using machine learning
  • New “user interfaces” using voice, vision, and touch are bridging the gap between the digital and physical world

As these trends make it easier for entrepreneurs to build intelligent applications, we have been developing our own frameworks to understand how all of these pieces fit together to create value for customers. Generally, we think about the intelligent application ecosystem in three main parts:

  • The Data Platform Layer
  • The Machine Learning Platform Layer
  • The Intelligent Applications and “Finished Services” Layer

The Machine Learning Platform Layer

As an early believer in the potential of AI and machine learning, Madrona has made several investments in the machine learning platform layer, including companies like Turi, Lattice, and Algorithmia. This layer of the intelligent app stack is meant to make it easier for other developers and applications to make use of machine learning by providing the tools and automating tasks such as model training, model deployment, and model management.

The ML platform includes machine learning frameworks like TensorFlow and PyTorch, managed services and tools like Amazon SageMaker and TVM, as well as “Model as a Service” providers in the form of offerings like the AWS Marketplace that can help developers and companies develop and deploy ML models in specific environments. While many of these tools have been developed or acquired by large companies, we believe there continue to be interesting opportunities at this layer because deploying and managing machine learning systems continues to be very difficult.

As an example, while the major cloud providers have made large investments in software and hardware to train ML models in the cloud, using those models for inference at the edge continues to be a difficult problem on resource-constrained devices. Xnor.ai is a portfolio company in this segment that uses software optimizations to improve the quality of machine learning predictions on edge devices that have limited power or bandwidth.
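One family of software optimizations for resource-constrained devices is quantization: storing weights as small integers instead of 32-bit floats. The symmetric int8 sketch below is illustrative of the general idea (Xnor.ai’s actual techniques, such as binary networks, go considerably further):

```python
def quantize_int8(weights):
    """Toy symmetric int8 quantization: map float weights into the
    range [-127, 127] with a single per-tensor scale factor, cutting
    storage roughly 4x versus 32-bit floats."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    # Recover approximate float weights for inference.
    return [v * scale for v in quantized]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
```

The trade-off is a small rounding error per weight in exchange for lower memory, bandwidth, and power, which is what makes ML feasible on edge hardware.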

Overall, we believe that while frameworks and tools have been improving, advanced techniques like reinforcement learning still need frameworks and tools that are easier to use, and there are many interesting opportunities to continue improving the ML platforms that intelligent apps depend on.

The Data Platform Layer

A precursor to using AI effectively and building intelligent applications is having a “data” strategy. Having a unique data strategy that could be a combination of public data sets and proprietary data sets enables companies to provide unique and differentiated value. This is a necessary first step, before you can use the data to train models and build a continuous learning system that is a core part of building an intelligent application.

Within the Data Platform layer, we think of companies and products from portfolio companies, Datacoral and Snowflake, as well as those from Databricks and Amazon’s Redshift, which offer customers different ways to connect, transform, warehouse, and analyze data in order to be used in an ML platform. What we’ve seen at this layer of the stack is that getting data into the right place, in the right format, in order to be used for machine learning continues to be very difficult, and simplifying this process is extremely valuable to customers.

Additionally, access to and ownership of the data itself is a key part of the data platform layer. By this, we mean that companies need to be thoughtful about their data strategies in order to find ways to gain access to, generate, or combine different data sources to create unique data assets. As we frequently see in the news these days, companies also need to be thoughtful about data privacy, making sure customers understand what data is being used and shared, and how.

The lines between the Data Platform Layer, the ML Platform Layer, and Intelligent Apps themselves can be quite blurry, especially as companies try to offer their customers a broader set of services or learn their way into new customer needs. However, we do see a distinction between companies that are focused on helping customers manage their data vs. helping customers manage their ML models.


Ultimately, we are looking for companies that can benefit from the virtuous data cycle – where more data creates better user experiences, leading to better user engagement, leading to more data, and ultimately better user experiences again.

Intelligent Applications and “Finished Services” Layer

Within the Intelligent Applications and Finished Services layer, there are several ways to segment the market. We like to think about verticals – applications that focus on a specific industry such as healthcare or insurance – and horizontals – cross-industry applications such as marketing automation or robotic process automation. One of the principles that we follow when looking for these types of opportunities is to find areas where data is becoming digitized and/or more data is being collected than ever before.

For example, one promising vertical for intelligent apps is healthcare. Technology and regulatory trends have driven the healthcare field to rapidly digitize many different types of records – from basic medical histories, to insurance claims, to x-rays, MRI scans, and ‘omics’ data (e.g., genomics, proteomics, biomics). This digitization of healthcare is creating new levels of visibility into patient and population health data, and ML will be a critical tool to help decision makers make sense of these new data sources.

Workforce productivity is another promising area for horizontal intelligent applications because more data is digitized than ever before in HR and employee engagement across industries. One example of a horizontal intelligent app is Madrona Venture Labs spinout company, UpLevel, which uses unstructured data from tools like Slack to help managers get better insights on how to best engage their teams and drive productivity.

In addition to vertical and horizontal apps for business users, we also include other types of “finished services” in this bucket. This can include services like Amazon Rekognition or Amazon Forecast, which help application developers add image and video analysis or time series forecasting models to other applications. In this case, the end customer for a product may not be a consumer, but the product is a “finished service” which can be plugged into a customer-facing application.

In each of these use cases, we are looking to find companies that deeply understand customer pain points and use machine learning as a tool to solve customer problems, rather than starting with a technology and searching for use cases.

Areas of Opportunity

We believe that every successful application built today will be an intelligent application, and that is why we think there is a huge amount of opportunity for entrepreneurs in this space. In particular, we would love to see more companies that are building at the nexus of multiple large markets, companies with unique data strategies, and companies with great ML teams (because AI continues to be very difficult). Four specific areas where we are excited to meet new companies are:

  • AI for Healthcare – More healthcare data is digitized and stored than ever before, and this is creating massive opportunities to reduce costs while improving quality of care and operations. The intersection of the biological sciences with computer science is going to be a difficult area to break through, but the potential value created will be huge, and we are looking for entrepreneurs who are ready to take on these challenges.
  • AI for Work – More and more, companies want to measure and become data-driven about productivity, hiring, and employee wellness. Traditionally, HR and workforce data has been incredibly hard to collect and analyze, but new applications like Slack and Workday are creating opportunities for startups like Polly and UpLevel to analyze workplace data to generate insights for employees and managers.
  • Automation – Robotic Process Automation (RPA) vendors are one set of companies building early intelligent apps that can analyze a business process and improve productivity through automation, but they will not be the last. We think there will also be opportunities to build vertical “RPA-like” businesses in specific industries, automation of manual work that can be dangerous and expensive, and new types of autonomous systems like autonomous vehicles.
  • “End-to-End AI” – Many companies have a section of their pitch explaining how valuable their data will be. We always encourage companies to think about the best use cases for their data, and, if it makes sense, execute on those use cases themselves. Some of our favorite examples in this category are companies like Climate Corp, which started with an ML system for predicting weather, found that they could use their predictions to sell weather insurance to farms, and eventually built an end-to-end farm management software system to capture more data and use it to write insurance policies.


During a recent CIO roundtable, we debated whether machine learning was an over-hyped or under-hyped technology trend. The answer in most people’s minds was both. There are incredibly high expectations for machine learning, and many of those expectations are not grounded in the reality of what ML can do today.

However, we believe that as we move forward, the ability to build new applications and continuously improve systems and processes using machine learning will be a core part of any app, and machine learning will be woven into every fabric of the society in which we work and live.

Current or previous Madrona Venture Group portfolio companies mentioned in this blog post: Algorithmia, Amperity, Datacoral, Lattice, Snowflake, Suplari, Turi, Xnor.ai, UiPath

The Road to Cloud Nirvana: The Madrona Venture Group’s View on Serverless

S. Somasegar – Managing Director, Madrona Venture Group

The progression over the last 20 years from on-premise servers, to virtualization, to containerization, to microservices, to event-driven functions and now to serverless computing is allowing software development to become more and more abstracted from the underlying hardware and infrastructure. The combination of serverless computing, microservices, event-driven functions and containers truly form a distributed computing environment that enables developers to build and deploy at-scale distributed applications and services. This abstraction between applications and hardware allows companies and developers to focus on their applications and customers—not worrying about scaling, managing, and operating servers or runtimes.

In today’s cloud world, more and more companies are moving towards serverless products like AWS Lambda to run application backends, respond to voice and chatbot requests, and process streaming data because of the benefits of scaling, availability, cost, and most importantly, the ability to innovate faster because developers no longer need to manage servers. We believe that microservices and serverless functions will form the fabric of the intelligent applications of the future. The massive movement towards containers has validated the market demand for hardware abstraction and the ability to “write once, run anywhere,” and serverless computing is the next stage of this evolution.

Madrona’s Serverless Investing Thesis

Dan Li – Principal, Madrona Venture Group

Today, developers can use products like AWS Lambda, S3, and API Gateway in conjunction with services like Algorithmia, to assemble the right data sources, machine learning models, and business logic to quickly build prototypes and production-ready intelligent applications in a matter of hours. As more companies move towards this mode of application development, we expect to see a massive amount of innovation around AI and machine learning, application of AI to vertically-focused applications, and new applications for IOT devices driven by the ability for companies to build products faster than ever.
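A minimal sketch of what such an intelligent-application backend can look like: an AWS Lambda-style handler that applies a model to the body of an API Gateway proxy request. The sentiment "model" here is a toy stand-in invented for illustration; in practice this step would call out to a hosted model or an algorithm service.

```python
import json

# Toy word lists standing in for a real model call (e.g., to an
# Algorithmia algorithm or another hosted ML endpoint).
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "terrible"}

def score_sentiment(text: str) -> str:
    """Classify text with a trivial keyword-overlap heuristic."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def handler(event, context):
    """Lambda-style entry point: API Gateway proxies the HTTP body in,
    and the return value maps back to an HTTP response."""
    body = json.loads(event.get("body") or "{}")
    sentiment = score_sentiment(body.get("text", ""))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"sentiment": sentiment}),
    }
```

Because the handler is just a function, it can be invoked locally with a fake event for testing, which is part of what makes the serverless development loop fast.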

For all the above-mentioned reasons, Madrona has made several investments over the last year in companies building tools for microservices and serverless computing, and we continue to look for opportunities in this space as cloud infrastructure evolves rapidly. At the same time, the move toward containerization and serverless functions can make it much harder to monitor application performance, debug applications, and ensure that applications have the correct security and policy settings. For example, SPIFFE (Secure Production Identity Framework for Everyone) provides some great context for the kinds of identity and trust-related work that needs to happen for people to build, share, and consume microservices in a safe and secure manner.

Below, you’ll hear from three of the startups in our portfolio and how they are building tools to enable developers and enterprises to adopt serverless approaches, or leveraging serverless technologies to innovate faster and better serve their customers.

Portfolio Company Use Cases

Algorithmia Logo

Algorithmia empowers every developer and company to deploy, manage, and share their AI/ML model portfolio with ease. Algorithmia began as the solution to co-founders Kenny Daniel and Diego Oppenheimer’s frustrations at how inaccessible AI/ML algorithms were. Kenny was tired of seeing his algorithms stuck in an unused portion of academia and Diego was tired of recreating algorithms he knew already existed for his work at Microsoft.

Kenny and Diego created Algorithmia as an open marketplace for algorithms in 2013, and today it serves over 60,000 developers. From the beginning, Algorithmia has relied on serverless microservices, and this has allowed the company to quickly expand its offerings to include hosting AI/ML models and full enterprise AI Layer services.

AI/ML models are optimally deployed as serverless microservices, which allows them to quickly and effectively scale to handle any influx of data and usage. This is also the most cost-efficient method for consumers who only have to pay for the compute time they use. This empowers data scientists to consume and contribute algorithms at will. Every algorithm committed to the Algorithmia Marketplace is named, tagged, cataloged, and searchable by use case, keyword, or title. This has enabled Algorithmia to become an AWS Lambda Code Library Partner.

In addition to the Algorithm Marketplace, Algorithmia uses the serverless AI Layer to power two additional offerings: hosting AI/ML models and enterprise services, where it works with government agencies, financial institutions, big pharma, and retail. The AI Layer is cloud, stack, and language agnostic. It serves as a data connector, pulling data from any cloud or on-premises server. Developers can submit their algorithms in any supported language (Python, Java, Scala, NodeJS, Rust, Ruby, and R), and a universal REST API is automatically generated, allowing any consumer to call and chain algorithms in any combination of languages. Running on Docker containers orchestrated by Kubernetes allows Algorithmia’s services to operate with a high degree of efficiency.
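To make the universal-REST-API idea concrete, here is a hedged Python sketch that builds such an algorithm call. The URL pattern and the `Simple <key>` authorization header follow Algorithmia's publicly documented API conventions of the time, but treat the specifics as assumptions rather than a definitive client; the algorithm name and key are placeholders.

```python
import json
import urllib.request

API_BASE = "https://api.algorithmia.com/v1/algo"  # assumed public API root

def build_algo_request(owner: str, algo: str, version: str,
                       payload, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST to a marketplace algorithm endpoint.

    Separating construction from sending keeps the pure parts testable
    without network access.
    """
    url = f"{API_BASE}/{owner}/{algo}/{version}"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Simple {api_key}",  # API-key auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is then a one-liner:
# result = json.load(urllib.request.urlopen(build_algo_request(
#     "demo", "Hello", "0.1.1", "world", "YOUR_API_KEY")))
```

Because every algorithm sits behind the same URL and payload shape, chaining algorithms written in different languages is just a sequence of such calls.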

As companies add AI/ML capabilities across their organizations, they have the opportunity to escape the complications that come with a monolithic application and begin to implement a serverless microservice architecture. Algorithmia provides the expertise and infrastructure to help them be successful.

Pulumi Logo

Pulumi saw an opportunity in 2017 to fundamentally reimagine how developers build and manage modern cloud systems, thanks in large part to the rise of serverless computing intersecting with advances in containers and managed cloud infrastructure in production. By using programming languages and tools that developers already know, rather than obscure DSLs and less capable, home-grown templating solutions, Pulumi’s customers are able to focus on application development and business logic rather than infrastructure.

As an example, one of Pulumi’s enterprise customers moved from a dedicated team of DevOps engineers to a single combined engineering organization, reduced its cloud infrastructure scripts to 1/100th their former size in a language the entire team already knew, and is now substantially more productive in building and continuously deploying new capabilities. The resulting system uses the best of what the modern cloud has to offer: dozens of AWS Lambdas for event-driven tasks (replacing a costly and complex queuing system), several containers that can run in either ECS or Kubernetes, and managed AWS services like Amazon CloudFront, Amazon Elasticsearch Service, and Amazon ElastiCache. It now runs at a fraction of its pre-Pulumi cost, and the team can spin up entirely new environments in minutes where it used to take weeks.

Before the recent convergence of serverless, containers, and hosted cloud infrastructure, such an approach simply would not have been possible. In fact, we at Pulumi believe that the real magic is in these approaches living in harmony with one another. Each has its own strengths: containers are great for complex stateful systems, often taking existing codebases and moving them to the cloud; serverless functions are perfect for ultra-low-cost event- and API-oriented systems; and hosted infrastructure lets you focus on your application-specific requirements, instead of reinventing the wheel by manually hosting something that your cloud provider can do better and cheaper. Arguably, each is “serverless” in its own way, because infrastructure and servers fade into the background. This sea change has enabled Pulumi to build a single platform and management suite that fully realizes this entire spectrum of technologies.

The future is bright for serverless- and container-oriented cloud architectures, and Pulumi is excited to be right at the center of it helping customers to realize the incredible benefits.

IOpipe co-founders Erica Windisch and Adam Johnson went from virtualizing servers at companies like Docker to going “all in” on serverless in 2016. Erica and Adam identified serverless as the next revolution in infrastructure, arriving roughly 10 years after the launch of AWS EC2. With computing shifting toward a serverless world, new challenges emerge. From dozens of interviews with production Lambda users, Erica and Adam identified that one of the major challenges in adopting serverless was a lack of visibility and instrumentation. In 2016, they co-founded IOpipe to focus on helping companies build, ship, and run serverless applications faster.

IOpipe is an application operations platform built for serverless architectures running on AWS Lambda. Through the collection of high-fidelity telemetry within Lambda invocations, users can quickly correlate important data points to discover anomalies and identify issues. IOpipe is a cloud-based SaaS platform that offers tracing, profiling, metrics, logs, alerting, and debugging tools to power up operations and development teams.

IOpipe enables developers to debug code faster by providing real-time visibility into their functions as they develop them. Developers can dig deep into what’s really happening under the hood with tools such as profiling and tracing. Once functions are in production, IOpipe provides a rich set of observability tools to help bubble up issues before they affect end-users. IOpipe has seen customers who previously spent days debugging tough issues in production find the root cause in minutes.
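The instrumentation idea can be pictured with a toy decorator that records per-invocation duration and errors for a Lambda-style handler. This is not IOpipe's actual API; it is a minimal sketch of the kind of wrapper such an agent places around a function, and a real agent would ship each record to a collection service rather than keep it in memory.

```python
import functools
import time

def with_telemetry(handler):
    """Wrap a Lambda-style handler, recording per-invocation telemetry.

    A toy stand-in for what an observability agent collects: duration,
    error info, and (in real agents) much richer context.
    """
    @functools.wraps(handler)
    def wrapped(event, context):
        start = time.perf_counter()
        error = None
        try:
            return handler(event, context)
        except Exception as exc:
            error = repr(exc)
            raise
        finally:
            # A real agent would ship this record asynchronously to a
            # backend; here we just accumulate it on the function.
            wrapped.invocations.append({
                "duration_ms": (time.perf_counter() - start) * 1000,
                "error": error,
            })
    wrapped.invocations = []
    return wrapped
```

Applying it is one decorator line on the handler, which keeps the instrumentation out of the business logic entirely.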

Since launching the IOpipe service in Q3 of 2017, the company has seen customers ranging from SaaS startups to large enterprises enabling their developers to build and ship Lambda functions into production at an incredibly rapid pace. What previously took one customer 18 months can now be done in just two months.

IOpipe works closely with AWS as an advanced tier partner, enabling AWS customers to embrace serverless architectures with power-tools such as IOpipe.

The Finalists in Microsoft Ventures and Madrona Venture Group Innovate.AI Startup Competition

It’s been an exciting time since we announced the Innovate.AI startup competition with our partners at Microsoft Ventures last October. What started as an idea we shared with our friends there, evolved into a global competition generating interest from some of the most innovative companies in the ML/AI field. We’ve been thrilled with the enthusiasm and strong response we received and would like to thank each participating company for their submission and the judges for the countless hours they spent evaluating each application.

The competition showcased the breadth of problems and use cases that companies are addressing by applying ML/AI. A couple of interesting observations about trends from the applicant pool emerged:

  • Intelligent applications are on the rise – as data becomes plentiful, easily available, and accessible, using AI and ML to build continuous learning into the fundamental fabric of every application is the way of the future. Many of the companies are targeting industries with plentiful and readily available datasets.
  • Innovation follows data availability – most companies that are thinking about innovative ways to provide insights and predictive analytics focus a lot on their data strategy and how to best organize and use the data they have as an integral part of the value they can and want to deliver to their customers.
  • Business models are still evolving – most ML/AI companies don’t fit the traditional software model of selling licenses or software-as-a-service. We saw a combination of business models, some leaning towards pure professional services, others a hybrid between licensing and SaaS. It’s clearly an area that will evolve as the companies mature.

Additionally, we saw a concentration in the following verticals:

  • Healthcare & Research: personal and mental health assistants, drug research and diagnosis, and computer vision to spot patterns and abnormalities.
  • Financial Services: research summaries and insights for investment professionals.
  • IoT & Edge Computing: analyzing data from edge devices, predictive maintenance, security, and autonomous vehicle applications.
  • Sales & Marketing: optimizing leads and focusing salespeople on top opportunities.
  • Retail: using computer vision to automatically recognize and tag items in images and video, and enhanced advertising & shopping experiences.

And finally, we’d like to congratulate all of our finalists and welcome them to the final stage of the competition. Here is a closer look at who they are:

  • Alpha Vertex: cognitive systems for the financial services community.
  • ConceptualEyes: accelerates the speed of pharmaceutical research and discovery with artificial intelligence.
  • Envisagenics, Inc.: uses artificial intelligence to unlock cures for hundreds of diseases caused by RNA splicing.
  • FunnelBeam: a customizable sales intelligence platform.
  • ID R&D Inc: next-generation authentication solutions including voice, behavioral, and fusion biometrics.
  • TARA Intelligence Inc: a SaaS application to scope projects, assign developers, and monitor ongoing performance to build software faster.
  • Uru: fusing computer vision and artificial intelligence to create better ad experiences for video.
  • Wallarm: an adaptive, intelligent, application security platform.
  • Waygum, Inc.: intelligent IoT platform and mobile app for manufacturing.


To see a list of finalists in Europe and Israel, visit Microsoft Ventures.


Madrona Expands the Team, Adds Talent Director, Venture Partner and Principal

Veteran Tech Talent Executive Shannon Anderson Joins as Director of Talent, Luis Ceze, a leader in computer systems architecture, machine learning, and DNA data storage joins as Venture Partner; Daniel Li is promoted to Principal

We are so excited to announce today some great additions to the Madrona Team. Each of these people is incredibly talented and will add a significant amount to what we can bring to our portfolio companies and to the greater Seattle ecosystem.

Shannon Anderson is joining us as Director of Talent. We expound on her role here.

Luis Ceze is joining the team as Venture Partner. Luis is an award-winning professor of computer science at the University of Washington, where he joined the faculty in 2007. His research focuses on the intersection of computer architecture, programming languages, molecular biology, and machine learning. At UW, he co-directs the Molecular Information Systems Lab where they are pioneering the technology to store data on synthetic DNA. He also co-directs the Sampa Lab, which focuses on the use of hardware/software co-design and approximate computing techniques for machine learning which enables efficient edge and server-side training and inference. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the IEEE TCCA Young Computer Architect Award and UIUC Distinguished Alumni Award. He is a member of the DARPA ISAT and MEC study groups, and consults for Microsoft.

Luis also has a track record of entrepreneurship. He spent the summer of 2017 with Madrona and has been a vital partner as we have evaluated new ideas and companies for several years. In 2008, Luis co-founded Corensic, a Madrona-backed UW CSE spin-off company. We are excited to have him on board, continuing and building on Madrona’s long-standing relationship with UW CSE, and working formally with us to identify new companies and work closely with our portfolio companies.

Last but definitely not least, we promoted Daniel Li to Principal. Daniel joined us nearly three years ago and has been an incredible part of the Madrona team. He works tirelessly not only to analyze new markets and develop investment themes that help us envision future companies, but also to dive deeply into his passions. He has built apps that we use internally on a weekly basis at Madrona and has given a Crypto 101 course to hundreds of people over the past year. He has also proven to be an indispensable partner to entrepreneurs, leading the Madrona investment in the fast-growing live-streaming company Gawkbox last year. In addition to digital media and blockchain, Dan has done significant work in investment areas ranging from autonomous vehicles to machine learning and AR/VR. Dan brings an energy, curiosity, and intelligence to everything he does and epitomizes what Madrona looks for in our companies and our team.

We are excited to continue to build the Madrona team to even better help entrepreneurs and further capitalize on the massive opportunity to build world-changing technology companies in the Pacific NW.

Saykara – Out of Stealth, Alexa for Physicians

At Madrona, we like to invest in the best entrepreneurs in the Pacific NW attacking the biggest technology markets in the world. Beyond this, we love when we find a founding team that understands and is addressing an acute customer pain point in a way that aligns with our key investment themes. Few times in the last decade have we found a company and a team that meets all of these criteria better than Saykara.

Madrona led the Series Seed in Saykara in 2016, and we are excited for the company to now emerge from stealth mode. Saykara provides an AI-powered, voice-activated virtual scribe for physicians. Think of it as an Alexa for doctors. Thematically, this aligns very well with several of our key investment themes: (1) voice and natural language as a key UI for applications and (2) ML/AI applied to vertical markets.

From a customer perspective, we talked to many physicians and health systems during due diligence, and the pain of laboriously filling out the electronic health record (EHR) is acute. Physicians today face a dilemma: (a) type away in the EHR during an exam, disrupting and de-personalizing the physician-patient interaction, or (b) spend hours at night dictating or entering information, losing control of their personal lives. Health systems also face a dilemma. They can provide an in-person (human) scribe who follows the physician around and enters notes into the EHR, but this is generally cost prohibitive for all but the highest revenue-generating physicians and specialties.

Creating further pressure for health systems, offering human scribes is becoming a competitive factor in determining which health system a physician decides to join or stay with. Nevertheless, most physicians still use the old-fashioned approach of after-hours dictation, an $18B market. Not only is this time-consuming, the resulting EHR entry is the equivalent of an appended Word document – unstructured data that is difficult to search and analyze. There are newer options using specialized equipment such as Google Glass to capture full audio/video recordings of the patient visit, which are then transcribed overseas. This is also expensive, and the output data typically has challenges similar to traditional transcription. The end result for health systems is not only overworked and frustrated physicians, but an EHR that is insufficiently populated and lacking in the structured data that enables the patient-outcome-improving and cost-saving analytics that were the original promise of the EHR.

Despite this searing pain point, the problem is a tough nut to crack. The technology is non-trivial, to say the least, and healthcare can be a difficult industry. It takes founders and a team who know both the tech and the market intimately. Co-founders Harjinder Sandhu (CEO) and Kulmeet Singh (board member) fit this bill perfectly. Harjinder and Kulmeet are pioneers in this space, having founded the first automated medical transcription company (MedRemote), acquired by Nuance. Since that acquisition, Nuance Healthcare has grown into a multi-billion-dollar business, and Harjinder served as its VP/Chief Technologist of R&D for five years. Earlier in his career, Harjinder was a CS professor at York University specializing in distributed computing.

Not only do we love the problem space and founding team, we think Saykara is building a better mousetrap underpinned with AI and ML technologies that are tuned just for this market. Saykara uses a Siri- or Alexa-like hotword (“OK Kara”) or physical tap on their smartphone to start and stop voice capture. The physician can then talk to the patient as they normally would. The Saykara system accurately transcribes the audio to text, parses the information into structured data, and intelligently inserts the structured data into the correct fields in the EHR. They are building ML that comes into play in two general areas: (1) specialized voice-to-text for natural language and medical vocabulary that accurately captures a physician’s natural verbal interaction with a patient and (2) intelligent parsing of the transcribed information and insertion into the correct field in the EHR.
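The parsing step described above can be pictured with a deliberately simplified sketch: a few regular expressions that pull vitals out of a transcript into structured fields. The field names and patterns here are invented for illustration; Saykara's actual system uses trained models over medical vocabulary rather than hand-written rules.

```python
import re

# Illustrative field patterns only — a real system maps a physician's
# natural speech to EHR fields with learned models, not a handful of regexes.
FIELD_PATTERNS = {
    "blood_pressure": re.compile(
        r"blood pressure (?:is |of )?(\d{2,3}) over (\d{2,3})"),
    "heart_rate": re.compile(
        r"(?:heart rate|pulse) (?:is |of )?(\d{2,3})"),
    "temperature_f": re.compile(
        r"temperature (?:is |of )?(\d{2,3}(?:\.\d)?)"),
}

def parse_note(transcript: str) -> dict:
    """Extract structured vitals from a free-text transcript."""
    text = transcript.lower()
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        m = pattern.search(text)
        if m:
            # Join multi-part readings (e.g., systolic/diastolic) with "/".
            record[field] = ("/".join(m.groups())
                             if len(m.groups()) > 1 else m.group(1))
    return record
```

The output is the key difference from plain dictation: structured fields that can be inserted into the right slots in the EHR and analyzed later, rather than an appended blob of text.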

Thus far the reception has been tremendous. Physicians love it because they can interact with patients in the natural way they always have, without using special equipment, typing, or after-hours dictation. Patients love it because they actually hear what and when the physician is capturing in their medical record. Health systems love it because it improves physician satisfaction, is significantly less expensive than a human scribe or other alternatives, and they end up with EHR data that (over time) can be analyzed to improve clinical decision support.

It is also important to note that Saykara’s ML-driven approach leveraging existing smartphone technology enables a price point that is accessible for all doctors, including family doctors for whom other options are generally price prohibitive.

We are excited to see Saykara come out of stealth and continue to help them in their mission to give ALL physicians back control of their lives and address this important pain point for the healthcare industry.

Welcome SmartAssist To The Madrona Family

It is exciting for me to announce our investment in SmartAssist and to welcome the team to our Madrona family. SmartAssist is applying AI to the business of customer support and is already assisting customers of brands you know including MailChimp and Twilio.

Application development over the last 10 years was primarily defined by the movement to the cloud, SaaS delivery, and touch as an interface. Looking ahead, we strongly believe that applications are going to be defined as intelligent applications with a broader set of natural user interfaces, including voice/speech and vision. In our opinion, any application of consequence being built now is an intelligent application. What differentiates intelligent applications is the use of ML/AI and other techniques, applied to ever-increasing data sets, that enable applications to continuously learn and deliver more relevant and appropriate experiences for customers.

Applying ML/AI to intelligently automate use cases and workflows in enterprises is an area where we see a tremendous amount of opportunity and some of our recent investments reflect that investment thesis. As we think about beachhead use cases of ML/AI within enterprises, customer support stands out as one of the most tangible areas that could be fundamentally disrupted through technology.

By using intelligent routing, automated responses, and predictive modeling, SmartAssist helps enterprises significantly increase the efficiency and quality of service while decreasing customer service costs. The company is based on the platform developed by Wise.io, which was acquired by GE in 2016. Pradeep Rathinam and Prashant Luthra had a passion for this business and are taking that core platform and building it into SmartAssist. Already they have secured some name-brand customers. GE has an interest in the company, and we expect to work with them as the company grows.
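A toy sketch of the intelligent-routing idea: score an incoming ticket against per-queue keyword sets and send it to the best match. The queues and keywords here are invented for illustration; a production system like SmartAssist's learns these mappings from historical ticket data rather than from hand-picked word lists.

```python
# Hypothetical queues and keyword sets, invented for illustration only.
ROUTES = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "technical": {"error", "crash", "bug", "timeout"},
    "account": {"password", "login", "email", "username"},
}

def route_ticket(text: str, default: str = "general") -> str:
    """Route a support ticket to the queue with the most keyword overlap."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    scores = {queue: len(words & kws) for queue, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    # Fall back to a general queue when nothing matches.
    return best if scores[best] > 0 else default
```

Replacing the keyword overlap with a trained classifier keeps the same routing interface while letting the mapping improve as more tickets flow through the system.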

Another big reason for why we are excited about this investment is the entrepreneurial strength of the founding team. Both Pradeep and Prashant have led and been a part of successful start-ups in the past. Their focus and passion to make a difference in this space makes it a delight to partner with them.

We are thrilled to back the compelling vision of this leadership team and be part of a Seattle area start-up that is focused on driving customer success using ML/AI. All of us at Madrona are jazzed at the potential of what is possible here.

Looking forward to helping realize the potential with the SmartAssist team!

Madrona’s 2017 Investment Themes

Every year in March, Madrona wraps up what happened in 2016 and we sit down with our investors to talk about our business – the business of finding and growing the next big Seattle companies. First and foremost, our strategy is to back the best entrepreneurs in the Pacific NW attacking the biggest markets. But we also overlay this with key themes and trends in the broader technology market. As part of our annual meeting we present our key investment themes for the year. Below is a snapshot of what we are focusing on:

Business and Enterprise Evolution to Cloud Native

Tim Porter-Madrona-Venture Capital Seattle
Tim Porter

The IT industry is in the early innings of its next massive shift. The transition to “cloud native” is as big as or bigger than the move from mainframes to PCs, the adoption of hypervisors, or the creation of public clouds. Cloud native at its core refers to applications or services built in the cloud that are container-packaged, dynamically scheduled, and microservices-oriented. Cloud native enables all companies to take advantage of the application architectures that were once the province of Google or Facebook. Companies like Heptio and Shippable are at the forefront of disrupting how IT infrastructure has traditionally been managed, bringing vastly increased agility, computing efficiency, real-time data, and speed. We firmly believe software that helps applications complete the journey from development on a cloud platform to deployment on different clouds, and to running at scale, will become the backbone of technology infrastructure going forward. As such, we are interested in meeting more companies that are making it easier to network, secure, monitor, attach storage to, and build applications with container-based, microservice architectures.

Intelligent Applications

Customers today demand their software deliver insights that are real-time, nimble, predictive, and prescriptive. To accomplish this, applications must continuously ingest data, increasingly using event-driven architectures, coupled with algorithm-powered data models and machine learning to deliver better service and novel, predictive recommendations. The new generation of intelligent applications will be “trained and predictive” in contrast to the old generation of software programs that were created to be “programmed and predictable.” We believe that intelligent applications which rely on proprietary datasets, event-driven cloud-based architectures, and intuitive multisense interfaces will unlock new business insights in real-time and disrupt current categories of software. Investments in intelligent app companies that leverage these trends will likely be our largest area of investment in coming years.

Voice and XR Interfaces for Businesses and Consumers

We believe the shift we are seeing for human computer interactions will be as fundamental as the mouse click was for replacing the command line or touch/text was for the rise of mobile computing. This shift will be as pertinent for the enterprise as it is for consumers, and in fact will serve to further blur the lines between productivity and social communication.

With voice, we are most excited by companies that can leverage existing platforms such as Alexa to create a tools layer, or build intelligent vertical end-service applications.

In the realm of XR (from VR to AR), we believe this is a long game. VR will not be an overnight phenomenon, but will play out over the next 5 years as mobile phones become VR-capable and, particularly, as truly immersive VR headsets become less expensive and cumbersome. We are committed to this future and are particularly focused on VR/AR technologies that bring the major innovation of “presence” into a shared or social space, as well as the “picks-and-shovels” technology the XR community needs now to start building, even in advance of a large-scale installed base of headsets.

Vertical Market Applications that use proprietary data sets and ML/AI

As algorithms become more accessible through open-source libraries and platforms such as the one provided by our portfolio company Algorithmia, we believe that proprietary data will be the bottleneck for intelligent apps. Companies and products with ML at their core must figure out how to acquire, augment, and clean proprietary, workable data sets to train their machine learning models. We are excited about companies with these data sets, as well as companies, such as Mighty AI, that help build these data sets or help companies leverage their proprietary data to deliver business value.

One area where we see this happening is when ML/AI and proprietary data are applied to intelligent apps in vertical markets. Vertical market focus allows companies to amass rich data sets and domain expertise at a far faster pace than companies building software that tries to be omni-intelligent, providing both product and go-to-market advantages. Most industry verticals are ripe for this innovation, but several stand out, including manufacturing, healthcare, insurance/financial services, energy, and food/agriculture.

AI, IoT and Edge Computing

Linda Lian

IoT can be an ambiguous term, but fundamentally we see the explosion of devices connected to the Internet creating an environment where enterprise decision-making and consumers’ daily lives will crucially depend on real-time data processing, analytics, and shorter response times, even in areas where connectivity may be inconsistent. Real-time response is crucial to success and is difficult to achieve in the centralized, cloud-based model of today. For example, instant communications between autonomous vehicles cannot afford to depend on internet access or the latency of connecting to a cloud server and back. Edge computing technologies aim to solve this by bringing the power of cloud computing to the source of the data. We are particularly committed to companies building technologies focused on bringing AI, deep learning, machine vision, speech recognition, and other compute-heavy services to resource-constrained and portable devices, and on improving communication between them.

Another facet of IoT where we continue to have investment interest is new vertical devices for consumer (home, vehicle, wearable, retail), healthcare, and industrial infrastructure (electrical grid, water, public safety), along with enabling supporting infrastructure. Opportunities persist for networking solutions that improve access, range, power, discoverability, cost, and flexibility of edge devices and systems management that provide enhanced security, control, and privacy.

Commerce Experiences that Bridge Digital to Physical

Retail is in a state of flux and technologies are disrupting traditional models in more ways than e-commerce. First, physical retail isn’t going away, but it has a fresh new look. 85% of shoppers say they prefer shopping in stores due to a variety of factors including seeing the product and the social aspect. This has led the new generation of web-native brands such as Indochino, Warby Parker, Glossier and Bonobos to open stores – but they are very different, carrying little physical inventory and geared towards intimacy with customers and helping find the right product for the buyer.

Second, the decreasing cost of IoT hardware technologies such as Impinj’s RFID, advancements in distributed computing, and intelligent software such as computer vision will fundamentally alter physical retail experiences. Experiments are already underway at Amazon Go where shoppers can pick what they want and casually stroll out without waiting in a check-out line.

Within e-commerce, vertically integrated, direct-to-consumer models remain viable and compelling. They bypass costly distribution channels and can build strong brands and intimate customer experiences, as Dollar Shave Club, Blue Apron, and Stitch Fix have done. Marketplaces that leverage underutilized resources or assets, and the technology that underlies these marketplaces, also remain relevant and compelling, particularly for the millennial generation, which prioritizes access over ownership.

Security and Data Privacy

While certain security categories have been massively over-funded, new investment opportunities continue to arise. Security and data privacy are areas of massive concern for businesses, particularly in the current macro environment. Internally, enterprises demand full visibility, remediation tools, and monitoring capabilities to guard against increasingly sophisticated attacks. Particularly vulnerable are organizations that house massive amounts of customer data, such as financial services firms, big retailers, healthcare providers, and governments. Externally, the collection and analysis of massive amounts of real-time consumer behavioral and personal data is the bread and butter of sales, marketing, and product efforts. But new privacy laws in the US, and imminent regulation from the EU, are creating heightened awareness of both the control and the security of this data. We continue to be interested in companies and technologies that take novel approaches to protecting consumer data and helping corporations and organizations protect their assets.

Technologies Supporting Autonomous Vehicles

Transportation technology is experiencing a massive disruption. Autonomous driving will be the biggest innovation in automobiles since the invention of the car, impacting suppliers, car makers, ridesharing, and everything in between. Lines are blurring between manufacturer and technology provider. We believe the value creation in AVs will, not surprisingly, shift to software and to the data that makes it intelligent. More innovation is required in areas such as computer vision and control systems. Important advancements also remain to be made in component technologies such as radar, cameras, and other sensors. Indeed, billions of edge cases arising from construction, pedestrians, and weather, along with a murky regulatory environment, must be ironed out at both the technology and policy levels before the promise of AVs becomes a reality.

Additionally, the rise of AVs could massively disrupt current modes of car ownership. Fleet and operations management software will become increasingly important as AV transportation-as-a-service becomes more tangible. Software and systems for other vehicles, including drones, trucks, and ships, will also be huge markets and create new investment opportunities.

Seattle and the PNW are emerging as thought leaders in the area of AVs and, we believe, as a technology center of excellence as well, creating new investment opportunities. We are deeply interested in all the threads that run through this complex and massive shift in technology, the car industry, and social culture.

Well, there you have it – Madrona’s key investment themes for 2017. Thanks for reading. If you are working on a startup in any of these areas – we would love to talk to you. Please shoot any of us a note – our email addresses are in our bios on our website.

Xnor.ai – Bringing Deep Learning AI to the Devices at the Edge of the Network

Photo – The Xnor.ai Team

Today we announced our funding of Xnor.ai. We are excited to be working with Ali Farhadi, Mohammad Rastegari, and their team on this new company. We are also looking forward to working with Paul Allen’s team at the Allen Institute for AI, and in particular our good friend Dr. Oren Etzioni, CEO of AI2, who is joining the board of Xnor.ai. Machine learning and AI have been a key investment theme for us for the past several years, and bringing deep learning capabilities such as image and speech recognition to small devices is a huge challenge.

Ali, Mohammad, and their team have developed a platform that enables low-resource devices to perform tasks that usually require large farms of GPUs in cloud environments. This, we believe, has the opportunity to change how we think about certain types of deep learning use cases as they extend from the core to the edge. Image and voice recognition are great examples. These use cases are broad and usually involve a mobile device, but today they require the device to be connected to the internet so that those large farms of GPUs can process the information the device captures and sends, with the core transmitting the answer back. If you could do that on your phone, while preserving battery life, it would open up a new world of options.
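The efficiency gains trace back to the team's published XNOR-Net research, which approximates the expensive floating-point multiplications inside a neural network with binary operations. As a minimal, illustrative sketch (our own toy code, not the company's implementation): when weights and activations are constrained to {-1, +1}, a dot product reduces to counting sign agreements, which hardware can compute cheaply with XNOR and popcount instructions instead of floating-point multiply-accumulates.

```python
import numpy as np

def binarize(x):
    # Map real-valued weights/activations to {-1, +1} by sign.
    return np.where(x >= 0, 1, -1)

def binary_dot(a_bits, b_bits):
    # For vectors in {-1, +1}, the dot product equals
    # (#agreements) - (#disagreements) = n - 2 * (#disagreements).
    # Counting disagreements is an XNOR + popcount in hardware.
    n = len(a_bits)
    mismatches = int(np.sum(a_bits != b_bits))
    return n - 2 * mismatches

# The binarized result matches an ordinary dot product on the sign vectors.
a = np.random.randn(8)
b = np.random.randn(8)
assert binary_dot(binarize(a), binarize(b)) == int(np.dot(binarize(a), binarize(b)))
```

Trading full-precision arithmetic for bitwise operations loses some accuracy, but it shrinks memory and compute enough to make on-device inference plausible for phones and embedded hardware.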

It is just these kinds of inventions that put the greater Seattle area at the center of the revolution in machine learning and AI that is upon us. Xnor.ai came out of the outstanding work the team was doing at the Allen Institute for Artificial Intelligence (AI2), and Ali is a professor at the University of Washington. Between Microsoft, Amazon, the University of Washington, and research institutes such as AI2, our region is leading the way as new types of intelligent applications take shape. Madrona is energized to play our role as a company builder and supporter of these amazing inventors and founders.