Let It Snow!

Sometimes you find a team and a technology that are just poised to take off. Snowflake was at that point when we invested in early 2017. Snowflake did not look like other companies we had invested in up to that time – it was an actual snowflake for us. The stage and valuation made it a stretch for our core investment fund, which focuses on seed and Series A. (Investments like Snowflake are one reason we raised our Acceleration Fund: to invest in and work with companies that are already scaling quickly.) But our conviction about the team, the opportunity, and the early traction was high, and it remains so on this momentous day.

This week is another beginning for the company – and an exciting milestone – an IPO and becoming a publicly traded company (NYSE: SNOW). The IPO reflects a fulfilling journey over the last 8 years building a successful cloud infrastructure service while the public cloud vendors continued to scale, and enterprise adoption of the cloud took off. We believe this is just the beginning of what is possible for Snowflake in the future.

In our 25-year history at Madrona, we have been fortunate to invest in and work with many companies that have reached the IPO milestone. Notably, Snowflake is the sixth company in our portfolio to go public in the last four years.

The main reasons we invested in Snowflake in early 2017 came down to team, technology and the market.

  • Snowflake’s founding team (Benoit Dageville, Thierry Cruanes, Marcin Zukowski) had built a remarkable cloud-native data warehouse with enormous room to grow as enterprise adoption increased and feature sets were built out. And with a seasoned executive, Bob Muglia, as CEO, it was a world-class team.
  • Snowflake’s product was truly revolutionary in its architecture (designed for the cloud from day one), which allowed for superior performance, scale and capabilities.
  • We believed in Snowflake’s pursuit of the secular trend toward cloud-first and cloud-only infrastructure – and, more importantly, data in the cloud (today it seems a no-brainer) – so we saw a massive opportunity.
  • We knew we could bring our relationships and deep connections with Amazon and Microsoft to help Snowflake partner with them to scale its business, given Snowflake’s intent to run on multiple public clouds. Today Snowflake has strong relationships with both companies – the two largest public cloud providers in the world.

Snowflake had also made the decision then to set up an engineering office in the Greater Seattle area (Bellevue), and we worked hard to help them build their team here with outstanding talent.

One of the things I remember vividly from some of our earlier conversations with Bob was how successfully Snowflake executed the land-and-expand sales pattern that enterprise software companies aspire to – starting at $20K annually and rising in quick succession to multiples of that amount. This was great validation of how quickly Snowflake could become critical to the way enterprises use their data. And this was just on AWS at the time. Today Snowflake runs on all three of the major public clouds.

Fast forward to early 2019 when Frank Slootman (with a tremendous track record and accomplishments across Data Domain/EMC and ServiceNow) came on board as the CEO. In the last year and a half Frank and his leadership team have continued the transformation of Snowflake from a cloud data warehouse platform to a cloud data platform.

Over my career in tech leading large business groups at Microsoft, I worked toward and witnessed growth curves like this. Combining that with the experience of being part of the Snowflake journey, here are some lessons for companies embarking on this growth path:

  • Build a unique, differentiated, superior product that can help build a strong moat over time.
  • Deliver a product experience that provides a seamless, friction-free, self-serve on-ramp to enable a bottom-up go-to-market model that can scale organically and fast.
  • Drive hard to a “land grab” and “growth” mode, when you see a massive market opportunity, while continuing to pay attention to unit economics. Grow fast and responsibly.
  • Build a culture that values and prioritizes customer obsession from day one.
  • Hire the best and brightest. Every hiring decision is critical and creates a force multiplier. Everybody makes hiring mistakes – focus on minimizing them, and pay attention to hiring great people who fit culturally.

Earlier this year, Frank Slootman and the Snowflake team unveiled their Data Cloud vision – a comprehensive data platform play on the cloud to completely mobilize your data in the service of your business. With that as a backdrop, I am eagerly looking forward to what Snowflake is going to accomplish in the coming years.

A hearty congratulations to everybody on the Snowflake team! Thank you for the opportunity to be a part of the Snowflake journey.

Our Journey With Snowflake

We first met the Snowflake team three years and three months ago. At the time, Snowflake was at a sub-$10M revenue run rate, and we were skeptical that the world needed another data warehouse, given the number of other data warehouses from both the cloud providers and legacy on-prem competitors.

However, after meeting the team and speaking with early customers, we realized that Snowflake was a must-have product for next-generation intelligent applications. By rebuilding the data warehouse from the ground up with cloud-first design principles, Snowflake gives modern enterprises both higher throughput and speed and better concurrent queryability. For any data-driven company, its product is a must-have, not a nice-to-have.

At the time, Snowflake also wanted to take a bet on the Seattle ecosystem to build stronger relationships with the cloud providers and to tap into the local talent pool of systems and database engineers.

So given the combination of technically superior product, early but strong customer traction, the perfect team for the space, and our ability to support their growth in Seattle, we decided to invest in the company.

Today, we are excited to announce Snowflake’s $479M funding round, led by Dragoneer Investment Group and Salesforce Ventures.

Despite Snowflake being the fastest growing enterprise company we have ever seen at Madrona, it still feels like it’s early days for Snowflake, and we are looking forward to the next chapter of their journey.

Our Investment in TwinStrand Biosciences, Leveraging Big Data And The Cloud To Improve Genome Sequencing Accuracy By 10,000x

Today, we’re excited to announce that Madrona has led the $16M Series A investment in TwinStrand Biosciences, a Seattle genomics company with the potential to profoundly impact all of us. TwinStrand’s technology will help detect cancer earlier when it can be most effectively treated, will help identify the most effective personalized therapies, and will help to recognize carcinogens quickly thereby lowering the development cost and time-to-market of powerful new drugs. We’ve previously discussed the incredible intersection of life sciences and computer science in our region – and TwinStrand is at the forefront of this amazing innovation opportunity.

When I first met Jesse, the founder and CEO of TwinStrand, he was discussing the technology in exclusively life science terms. However, as I listened, it was incredible how so many of the concepts had direct analogs to my experience with high scale software. TwinStrand’s “Duplex Sequencing” technology uniquely tags each strand of billions of individual DNA molecules with a chemical GUID. The DNA is then replicated to enable sequencing on a standard genome sequencer – resulting in up to 6 TB of data per run – then imported to the TwinStrand cloud where error correction algorithms are employed. The result is a high-resolution reading of the DNA sequence, 10,000x more accurate than standard sequencing. Duplex Sequencing reduces today’s DNA sequencing error rate of ~1% to below 0.0001%. This biochemical error correction approach reminded me of error correction techniques employed in high scale storage arrays in cloud datacenters.
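The claimed improvement is easy to sanity-check. Here is my own back-of-the-envelope sketch (not TwinStrand's math), using the two error rates quoted above:

```python
# Error rates from the post, expressed as fractions
standard_error_rate = 0.01    # ~1% per-base error for standard sequencing
duplex_error_rate = 0.000001  # below 0.0001% claimed for Duplex Sequencing

# The ratio of the two rates is the claimed accuracy improvement
improvement = standard_error_rate / duplex_error_rate
print(f"{improvement:,.0f}x")  # prints 10,000x
```

Reducing a ~1% error rate to 0.0001% is exactly the four-orders-of-magnitude jump described above.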

Researchers are actively exploring how to use this level of precision to detect DNA mutations caused by chemicals (a market known as “genetic toxicology”). Today it can take more than 2 years to determine if a chemical is a carcinogen, as large tumors need time to develop in lab animals. With Duplex Sequencing’s breakthrough accuracy, the resulting mutations can be detected as very small tumors within weeks – reducing the time, money, and number of animals required. This testing is a critical step in the drug development process, but it is also used to test the safety of agricultural chemicals and food contaminants, and even to examine the effects of space radiation.

When I talked with leaders in the clinical cancer community, a common response I heard was that this level of precision was amazing and insightful – but that today’s diagnostics don’t need that level of accuracy. This response reminded me of so many of the skeptics of 64-bit computing 15 years ago – who would ever need that much memory on any computer? With our investment, we are making the bet that new diagnostics, therapies and even information storage technologies will be developed to leverage this new precision, just like software has always found great new ways to leverage new system performance. It’s very exciting to see the future through the eyes of the TwinStrand team and invest in making it possible.

Jesse created Duplex Sequencing through his MD/PhD research with colleagues at the University of Washington. The TwinStrand team consists of half biochemists, and half software developers and bioinformaticians. Together, they have built an incredible foundation—contributing to more than 15 peer-reviewed scientific articles leveraging Duplex Sequencing and developing a portfolio of over 50 patents. To learn more, I’d suggest these three great recent articles:

TwinStrand’s product will be launching soon, and I look forward to seeing what scientists all over the world will create with it.

-Terry

P.S. Out of humility, Jesse doesn’t often share that he is the grandson of Jonas Salk, the scientist who discovered the vaccine for polio, definitively changing our world for the better. It’s pretty incredible to think that TwinStrand may have the same potential.

Our Investment in Knock

We are excited to announce today our investment in Knock, a company that is building a modern marketing cloud for the multifamily property market. The company was founded by Tom Petry and Demetri Themelis, Seattle natives and University of Washington graduates who moved to New York to work in finance just as the ’08-’09 financial crisis began. They survived and thrived in those turbulent times, and returned to Seattle five years later to start a company together.

Like many founders who start a company to scratch their own itch, Tom and Demetri, who had rented apartments throughout their working lives, saw an opportunity to vastly improve the apartment rental experience. They mapped the customer journey and identified the major pain points: from finding buildings that fit a renter’s needs, to touring available units at these properties, to the leasing process. Their very first product was an OpenTable-like booking engine for apartment tours that made it faster and easier for renters to find the perfect apartment.

Also like many founders, Tom and Demetri have taken a non-linear journey to this point. While they began by focusing on the renter experience, they discovered a similar, if not greater, customer pain as they got to know property managers at the buildings they worked with. Property managers lacked the tools they needed to effectively attract, close and retain tenants – everything from allocating marketing dollars across channels to attract tenant leads, to nurturing prospective tenants from tour to lease, to communicating effectively with existing tenants to improve satisfaction and increase the likelihood that they renew.

By focusing on these pain points, Knock grew from a booking widget for prospective tenants to a comprehensive CRM that property management companies can use to manage communication and customer relationships throughout their journey. By listening to customers and deeply understanding the pain points and friction (a behavior we see in all great founding teams), the Knock team has built the best CRM system for multi-family property managers and is just getting started on its ambition to build a comprehensive, modern marketing cloud for the industry.

And this is a very compelling industry in which intelligent applications like Knock are badly needed. There are 18 million multi-family (apartment) units in the U.S., with a vacancy rate of about 5% annually. With average rents pushing $1,400 per month, that 5% vacancy translates to roughly $15 billion in rental income per year that multi-family property managers and owners are leaving on the table. Property managers who want to close that vacancy gap need modern CRM tools to find, sign, and retain the best tenants, and that’s where Knock comes in. There are large, legacy software vendors to this industry who offer CRM as part of a suite, but in most cases, it’s an afterthought bolted onto software born in a different era.
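The vacancy figure above follows directly from the quoted numbers; a quick back-of-the-envelope calculation makes the arithmetic explicit:

```python
units = 18_000_000        # multi-family units in the U.S.
vacancy_rate = 0.05       # ~5% annual vacancy
avg_monthly_rent = 1_400  # average monthly rent, in dollars

# Annual rental income left on the table by vacant units
lost_rent = units * vacancy_rate * avg_monthly_rent * 12
print(f"${lost_rent / 1e9:.1f}B per year")  # prints $15.1B per year
```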

Knock’s software was designed to be intelligent from the beginning, and while it is still very early in the team’s journey, the quality of their product and their ability to serve customers is reflected in the customer roster Knock has assembled and the thousands of buildings and hundreds of thousands of units they’ve on-boarded to the Knock platform. In particular, the enthusiasm we heard from Knock’s customers for both the product and the team really got our attention and got us excited about the opportunity to work together. Knock fits squarely into our intelligent applications investment theme, and we look forward to helping Tom, Demetri and the whole Knock team achieve their vision of building the marketing cloud for multi-family.

The Remaking of Enterprise Infrastructure – Investment Themes For Next Generation Cloud

Enterprise infrastructure has been one of the foundational investment themes here at Madrona since the inception of the firm. From the likes of Isilon to Qumulo, Igneous, Tier 3, and to Heptio, Snowflake and Datacoral more recently, we have been fortunate to partner with world-class founders who have reinvented and redefined enterprise infrastructure.

For the past several years, with enterprises rapidly adopting cloud and open source software, we have primarily focused on cloud-native technologies and developer-focused services that have enabled the move to cloud. We invested in categories like containerization, orchestration, and CI/CD that have now considerably matured. Looking ahead, with cloud adoption entering the middle innings but with technologies such as Machine Learning truly coming into play and cloud native innovation continuing at a dizzying pace, we believe that enterprise infrastructure is going to get reinvented yet again. Infrastructure, as we know it today, will look very different in the next decade. It will become much more application-centric, abstracted – maybe even fully automated – with specialized hardware often available to address the needs of next-generation applications.

As we wrote in our recent post describing Madrona’s overall investment themes for 2019, this continued evolution of next-generation cloud infrastructure remains the foundational layer of the innovation stack against which we primarily invest. In this piece, we go deeper into the categories in which we see ourselves spending the most time, energy and dollars over the next several years. While these categories are arranged primarily from a technology trend standpoint (as illustrated in the graphic above), they also align with where we anticipate the greatest customer needs for cost, performance, agility, simplification, usability, and enterprise-ready features.

Management of cloud-native applications across hybrid infrastructure

2018 was undeniably the year of “hybrid cloud.” AWS announced Outposts, Google released GKE On-Prem and Microsoft beefed up Azure Stack (first announced in late 2017). The top cloud providers officially recognized that not every workload will move to the cloud and that the cloud will need to go to those workloads. However, while not all computing will move to public clouds, we firmly believe that all computing will eventually follow a cloud model, offering automation, portability and reliability at scale across public clouds, on-prem and every hybrid variation in between.

In this “hybrid cloud forever” world, businesses want more than just the ability to move workloads between environments. They want consistent experiences so they can develop their applications once and run them anywhere – with complete visibility, security and reliability – and have a single playbook for all environments.

This leads to opportunities in the following areas:

  • Monitoring and observability: As more and more cloud-native applications are deployed in hybrid environments, enterprises will demand complete monitoring and observability to know exactly how their applications are running. The key will be to offer a “single pane of glass” (complete with management) across multiple clouds and hybrid environments, thereby building a moat against the “consoles” offered by each public cloud provider. More importantly, the next-generation monitoring tools will need to be intelligent in applying Machine Learning to monitor and detect – potentially even remediate – error conditions for applications running across complex, distributed and diverse infrastructures.
  • SRE for the masses: According to Joe Beda, the co-founder of Heptio, “DevOps is a cultural shift whereby developers are aware of how their applications are run in a production environment and the operations folks are aware and empowered to know how the application works so that they can actively play a part in making the application more reliable.” The “operations” side of the equation is best exemplified by Google’s highly trained (and compensated) Site Reliability Engineers (SREs). As cloud adoption further matures, we believe that other enterprises will begin to embrace the SRE model but will be unable to attract or retain Google-SRE-level talent. Thus, there will be a need for tools that simplify and automate this role and help enterprise IT teams become Google-like operators with the performance, scalability and availability demanded by enterprise applications.
  • Security, compliance and policy management: Cloud, where enterprises cede control over the underlying infrastructure, places unique security demands on cloud-native applications. Security ceases to be an afterthought – it now must be designed into applications from the beginning, and applications must be operated with their security posture front and center. This has created a new category of cloud native security companies that are continuing to grow. Current examples include portfolio company Tigera, which has become the leader in network security for Kubernetes environments, and container security companies like Aqua, StackRox and Twistlock. In addition, data management and compliance – not just for data at rest but also for data in motion between distributed services and infrastructures – create a major pain point for CIOs and CSOs. Integris addresses the significant associated privacy considerations, partly fueled by GDPR and its clones. The holy grail is to analyze data without compromising privacy. Technologies such as secure enclaves and blockchains are also enabling interesting opportunities in this space, and we expect to see more.
  • Microservices management and service mesh: With applications increasingly becoming distributed, open source projects such as Istio (Google) and Envoy (Lyft) have emerged to help address the great need to efficiently connect and discover microservices. While Envoy has seen relatively wide adoption, it has acted predominantly as an enabler for other services and businesses such as monitoring and security. With next-generation applications expected to leverage the best-in-class services, regardless of which cloud/on-prem/hybrid infrastructure they are run on, we see an opportunity to provide a uniform way to connect, secure, manage and discover microservices (run in a hybrid environment).
  • Streams processing: Customers are awash in data and events from across these hybrid environments including data from server logs, network wire data, sensors and IoT devices. Modern applications need to be able to handle the breadth and volume of data efficiently while delivering new real time capabilities. The area of streams processing is one of the most important areas of the application stack enabling developers to unlock the value in these sources of data in real time. We see fragmentation in the market across various approaches (Flink, Spark, Storm, Heron, etc.) and an opportunity for convergence. We will continue to watch this area to understand whether a differentiated company could be created.
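To make the streams-processing idea concrete, here is a minimal sketch of the core primitive these engines provide – aggregating events over a sliding time window. This is my own illustration, not tied to Flink, Spark, or any specific engine:

```python
from collections import deque

def windowed_count(event_times, window_seconds):
    """For each incoming event, count how many events fall inside the
    trailing time window. event_times is an ordered list of timestamps
    (in seconds); returns one count per event."""
    window = deque()
    counts = []
    for ts in event_times:
        window.append(ts)
        # Evict events that have aged out of the trailing window
        while window and window[0] <= ts - window_seconds:
            window.popleft()
        counts.append(len(window))
    return counts

# Five events; the 5-second window "forgets" the first burst
print(windowed_count([0, 1, 2, 10, 11], 5))  # prints [1, 2, 3, 1, 2]
```

Engines like Flink, Spark Streaming, Storm, and Heron generalize this pattern to distributed, fault-tolerant execution over unbounded streams – which is where the engineering difficulty, and the fragmentation noted above, comes from.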

Abstraction and automation of infrastructure

While containerization and all of the other CNCF projects promised simplification of dev and ops, the reality has turned out to be quite different. In order to develop, deploy and manage a distributed application today, both dev and ops teams need to be experts in a myriad of tools – from version control, orchestration systems, CI/CD tools, and databases to monitoring, security, and more. The increasingly crowded CNCF landscape is a good reflection of that growing complexity. CNCF’s flagship conference, KubeCon, was hosted in Seattle in December and illustrated both the interest in cloud native technologies (attendance grew 8x since 2016 to over 8,000) as well as the need for increased usability, scalability, and help moving from experimentation to production. As a result, in the next few years, we anticipate that an opposite trend will take effect. We expect infrastructure to become far more “abstracted,” allowing developers to focus on code and letting the “machine” take care of all the nitty-gritty of running infrastructure at scale. Specifically, we think opportunities are emerging in the following areas:

  • Serverless becomes mainstream: For far too long, applications (and thereby developers) have remained captives of the legacy infrastructure stack, in which applications were designed to conform to the infrastructure and not the other way around. Serverless, first introduced by AWS Lambda, broke that mold. It allowed developers to run applications without having to worry about infrastructure and to combine their own code with best-in-class services from others. While this has created a different concern for enterprises – applications architected to use Lambda can be difficult to port elsewhere – the benefits of serverless, in particular rapid product experimentation and cost, will compel a significant portion of cloud workloads to adopt it. We firmly believe that we are at the very beginning of serverless adoption, and we expect to see many more opportunities in this space to further facilitate serverless apps across infrastructure, similar to Serverless.com (a toolkit for building serverless apps on any platform) and IOpipe (monitoring for serverless apps).
  • Infrastructure backend as code: The complexity of building distributed applications often far exceeds the complexity of the app’s core design and wastes valuable development time and budget. For every app a developer wants to build, s/he ends up writing the same low-level distributed systems code again and again. We believe that will change and that the distributed systems backend will be automatically created and optimized for each app. Companies like Pulumi and projects like Dark are already great examples of this need.
  • Fully autonomous infrastructure: Automating the management of systems has been the holy grail since the advent of enterprise computing. However, with the availability of “infinite” compute (in the cloud), telemetry data, and mature ML/AI technology, we anticipate significant progress toward the vision of fully autonomous infrastructure. Even in the case of cloud services, many complex configuration and management choices need to be made to optimize the performance and costs of several infrastructure categories. These choices range from capacity management in a broad range of workloads to more complex decisions in specific workloads such as databases. In databases, for example, there has been some very promising research on applying machine learning to everything from basic configuration to index maintenance. We believe there are exciting capabilities to be built and potentially new companies to be grown in this area.
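The serverless model described above is easiest to appreciate in code. Below is a minimal sketch of a Lambda-style function: the `handler(event, context)` signature follows AWS Lambda's Python convention, but the example itself is hypothetical and runs locally with no infrastructure at all – which is precisely the appeal:

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style handler: reads a field from the incoming
    event and returns an HTTP-shaped response. The developer writes
    only this function; the platform owns provisioning and scaling."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoke locally, exactly as the platform would with a real event
print(handler({"name": "Snowflake"}))
```

The portability concern noted above is visible even here: the event shape and response contract are platform-specific, which is what makes Lambda-first applications harder to move elsewhere.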

Specialized infrastructure

Finally, we believe that specialized infrastructure will make a comeback to keep up with the demands of next-generation application workloads. We expect to see that in both hardware and software.

  • Specialized hardware: While ML workloads continue to proliferate and general-purpose CPUs (and even GPUs) struggle to keep up, new specialized hardware has arrived from Google’s TPUs to Amazon’s new Inferentia chips in the cloud. Microsoft Azure also now offers FPGA-based acceleration for ML workloads while AWS offers FPGA accelerators that other companies can build upon – a notable example being the FPGA-based genomics acceleration built by Edico Genome. While we are unlikely to invest in a pure hardware company, we do believe that the availability of specialized hardware in the cloud will enable a variety of new investable applications involving rich media, medical imaging, genomic information, etc. that were not possible until recently.
  • Hardware-optimized software: With ML coming to every edge device – sensors, cameras, cars, robots, etc. – we believe that there is an enormous opportunity to optimize and run models on hardware endpoints with constrained compute, power and/or bandwidth. Xnor.ai, for example, optimizes ML models to run on resource-constrained edge devices. More broadly, we envision opportunities for software-defined hardware and open source hardware designs (such as RISC-V) that enable hardware to be rapidly configured specifically for various applications.

Open Source Everywhere

For every trend in enterprise infrastructure, we believe that open source will continue to be the predominant delivery and license mechanism. The associated business model will most likely include a proprietary enterprise product built around an open core, or a hosted service where the provider runs the open source as a service and charges for usage.

Our own yardstick for investing in open source-based companies remains the same. We look for companies based around projects that can make a single developer look like a “hero” by making her/him successful at some important task. We expect the developer mindshare for a given open source project to be reflected in metrics such as GitHub stars, growth in monthly downloads, etc. A successful business can then be created around that open source project to provide the capabilities that a team of developers, and eventually an enterprise, would need and pay for.

Conclusion

These categories are the “blueprints” we have in our minds as we look for the next billion-dollar business in the enterprise infrastructure category. Those blueprints, however, are by no means exhaustive. The best founders always surprise us by their ability to look ahead and predict where the world is going, before anyone else does. So, while this post describes some of the infrastructure themes we are interested in at Madrona, we are not exclusively thesis-driven. We are primarily founder driven; but we also believe that having a thoughtful point of view about the trends driving the industry – while being humble, curious and open-minded about opportunities we have not thought as deeply about – will enable us to partner with and help the next generation of successful entrepreneurs. So, if you have further thoughts on these themes, or especially are thinking about building a new company in any of these areas, please reach out to us!

Current or previous Madrona Venture Group portfolio companies mentioned in this blog post: Datacoral, Heptio, Igneous, Integris, IOpipe, Isilon, Pulumi, Qumulo, Snowflake, Tier 3, Tigera and Xnor.ai

The Difficult Decision For Heptio To Sell to VMware

We are thrilled for Heptio’s acquisition by VMware! This transaction is another resounding reinforcement that Kubernetes has become the de facto standard for infrastructure across clouds. It is also a tremendous validation of Heptio’s team, vision and execution.

Deciding “when to sell” is one of the toughest decisions faced by founders, boards and investors in growing companies. When presented with an attractive alternative to continuing to build the company independently, boards have a “high class problem” — but one they must consider with utmost thoughtfulness. Heptio was presented a very difficult challenge in this regard.

Heptio was founded by Kubernetes co-creators Joe Beda and Craig McLuckie less than two years ago. Madrona had the privilege of investing with Accel in the $8.5M Series A round at the company’s formation, and I joined the board as an observer. Since that day one, I’ve never been associated with a company that has accomplished more in as short a period of time. Craig and Joe had an original vision that the Kubernetes community would continue to strengthen and its rapid adoption would continue to increase; however, Kubernetes needed to become easier, and enterprises needed help with adoption. From this starting point, they saw an opportunity to lead a cloud native transformation in the enterprise and redefine the deployment and operations of modern applications across clouds.

This vision has played out exactly, and Heptio backed it up with great execution, landing a blue-chip array of Fortune 500 customers for their Heptio Kubernetes Service (HKS) – including 3 of the 4 largest retailers in the world, 4 of the 5 largest telcos in the US, and 2 of the 6 largest financial services companies in the US. They also made a significant impact on the Kubernetes community by contributing 5 OSS projects (ksonnet, Sonobuoy, Contour, Gimbal, Ark) and collecting over 5,000 GitHub stars. With this great execution, more funding followed. Nine months in, Madrona led the $25M Series B, the company invited me to join the board, and my colleague Maria Karaivanova joined as an observer.

Through it all, Craig and Joe were the consummate founders. They approached building their business with laser-focus and a driving ambition to genuinely help customers and create a large, lasting business in the process. They were rock stars in the Kubernetes community, but approached all interactions with humility and pragmatism. They were extremely strategic in thinking through potential moves on the industry chessboard in what is a very dynamic market; but they always realized that none of it would matter if not paired with week-in-week-out blocking and tackling. Perhaps most importantly, they were relentless recruiters and built a world-class team of over 100 employees in less than 2 years, attracting other great leaders like Shanis Windland, Marcus Holm and Scott Buchanan. In doing so, they walked the talk that culture and diversity matter deeply in building a successful business, often passing on a good hire in favor of the right hire who was an even stronger fit for the business.

So, why in the world did we decide to sell? In short, sometimes you receive an offer too good to refuse. Heptio had the team, momentum and plenty of funding to continue; but in VMware, they saw a partner who not only recognized Heptio’s unique insights, assets and market position, but also had the resources and reach to execute more quickly on their vision and deliver an enterprise Kubernetes service to any cloud. The excitement over this potential – and a great financial offer – drove this deal. Market consolidation was always anticipated, and this decision was certainly not a reaction to IBM acquiring Red Hat or other market externalities.

In this decision process, the role of the investor is to ensure the founders and management team have the broad perspective of “what might be possible,” provide an objective view on the market (both opportunities and risks), and ensure the company has the necessary resources. At the end of the day, we support the founders and management team. In this case, while this acquisition came sooner than anyone anticipated, we all agreed that the strategic fit and economics made joining forces the right decision. Through it all, Craig and Joe balanced the interests of shareholders and employees along with other strategic considerations in exactly the way you hope any founders would. Ping Li from Accel was also an incredible thought partner from before company formation through this decision, and overall was one of the best board directors I’ve ever had a chance to work with.

Congratulations again to the Heptio team! We wish you all the best in furthering your mission and vision via the leadership roles you are taking inside VMware. We are excited the whole team is staying intact in Seattle and will continue to grow here. This acquisition is also a great validation of our broader investment theme around the enterprise move to cloud native and open source, and we continue to be very excited about our related investments in companies like Tigera, Shippable, and Pulumi.

Now my and Madrona’s fortunate job is to go find the next great Day One company … but I know it will be difficult to find another quite like Heptio.

The Road to Cloud Nirvana: The Madrona Venture Group’s View on Serverless

S. Somasegar – Managing Director, Madrona Venture Group

The progression over the last 20 years from on-premise servers, to virtualization, to containerization, to microservices, to event-driven functions, and now to serverless computing is allowing software development to become more and more abstracted from the underlying hardware and infrastructure. The combination of serverless computing, microservices, event-driven functions, and containers truly forms a distributed computing environment that enables developers to build and deploy at-scale distributed applications and services. This abstraction between applications and hardware allows companies and developers to focus on their applications and customers rather than worrying about scaling, managing, and operating servers or runtimes.

In today’s cloud world, more and more companies are moving towards serverless products like AWS Lambda to run application backends, respond to voice and chatbot requests, and process streaming data because of the benefits of scaling, availability, cost, and most importantly, the ability to innovate faster because developers no longer need to manage servers. We believe that microservices and serverless functions will form the fabric of the intelligent applications of the future. The massive movement towards containers has validated the market demand for hardware abstraction and the ability to “write once, run anywhere,” and serverless computing is the next stage of this evolution.
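To make the model concrete, here is a minimal sketch of an event-driven function in the serverless style. The handler name and event shape are illustrative placeholders rather than any specific provider’s API, though the pattern mirrors how services like AWS Lambda invoke a function once per event:

```python
# Sketch of the serverless programming model: a single function the platform
# invokes per event. The developer writes only this; scaling, availability,
# and server management are the platform's job. Event shape is illustrative.

import json

def handler(event, context=None):
    """Process one incoming event and return a response payload."""
    records = event.get("records", [])
    # Business logic lives here; nothing below concerns servers or runtimes.
    total = sum(r.get("amount", 0) for r in records)
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(records), "total": total}),
    }

# Locally, the same function can be exercised directly with a sample event:
result = handler({"records": [{"amount": 3}, {"amount": 4}]})
print(result["statusCode"])  # prints 200
```

In production, the platform would wire this function to an event source (an API gateway, a queue, a storage notification) and run as many concurrent copies as the event volume demands.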

Madrona’s Serverless Investing Thesis

Dan Li – Principal, Madrona Venture Group

Today, developers can use products like AWS Lambda, S3, and API Gateway in conjunction with services like Algorithmia, to assemble the right data sources, machine learning models, and business logic to quickly build prototypes and production-ready intelligent applications in a matter of hours. As more companies move towards this mode of application development, we expect to see a massive amount of innovation around AI and machine learning, application of AI to vertically-focused applications, and new applications for IOT devices driven by the ability for companies to build products faster than ever.

For all the above-mentioned reasons, Madrona has made several investments in companies building tools for microservices and serverless computing in the last year, and we are continuing to look for opportunities in this space as cloud infrastructure continues to evolve rapidly. Nevertheless, with the move towards containerization and serverless functions, it can be much harder to monitor application performance, debug applications, and ensure that applications have the correct security and policy settings. For example, SPIFFE (Secure Production Identity Framework for Everyone) provides some great context for the kinds of identity and trust-related work that needs to happen for people to be able to build, share, and consume microservices in a safe and secure manner.

Below, you’ll hear from three of the startups in our portfolio and how they are building tools to enable developers and enterprises to adopt serverless approaches, or leveraging serverless technologies to innovate faster and better serve their customers.

Portfolio Company Use Cases


Algorithmia empowers every developer and company to deploy, manage, and share their AI/ML model portfolio with ease. Algorithmia began as the solution to co-founders Kenny Daniel and Diego Oppenheimer’s frustrations at how inaccessible AI/ML algorithms were. Kenny was tired of seeing his algorithms stuck in an unused portion of academia and Diego was tired of recreating algorithms he knew already existed for his work at Microsoft.

Kenny and Diego created Algorithmia as an open marketplace for algorithms in 2013, and today it serves over 60,000 developers. From the beginning, Algorithmia has relied on serverless microservices, which has allowed the company to quickly expand its offerings to include hosting AI/ML models and full enterprise AI Layer services.

AI/ML models are optimally deployed as serverless microservices, which allows them to quickly and effectively scale to handle any influx of data and usage. This is also the most cost-efficient method for consumers who only have to pay for the compute time they use. This empowers data scientists to consume and contribute algorithms at will. Every algorithm committed to the Algorithmia Marketplace is named, tagged, cataloged, and searchable by use case, keyword, or title. This has enabled Algorithmia to become an AWS Lambda Code Library Partner.

In addition to the Algorithm Marketplace, Algorithmia uses the serverless AI Layer to power two additional services: hosting AI/ML models and enterprise services, where they work with government agencies, financial institutions, big pharma, and retail. The AI Layer is cloud, stack, and language agnostic. It serves as a data connector, pulling data from any cloud or on-premises server. Developers can write their algorithms in any supported language (Python, Java, Scala, NodeJS, Rust, Ruby, and R), and a universal REST API is automatically generated. This allows any consumer to call and chain algorithms in any combination of languages. Running on a Kubernetes-orchestrated Docker system allows Algorithmia’s services to operate with a high degree of efficiency.
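As an illustration of what a generated REST endpoint enables, an algorithm becomes callable with an ordinary HTTP POST from any language. The endpoint URL, authorization header, and payload below are hypothetical placeholders, not Algorithmia’s documented API; the sketch builds the request without sending it:

```python
# Sketch of invoking an algorithm through an auto-generated REST endpoint.
# URL, auth scheme, and payload shape are illustrative placeholders only.

import json
import urllib.request

def build_call(endpoint, api_key, payload):
    """Construct (but do not send) a POST request invoking an algorithm."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Simple " + api_key,  # hypothetical auth scheme
        },
        method="POST",
    )

req = build_call(
    "https://api.example.com/v1/algo/demo/sentiment/1.0.0",  # placeholder URL
    "my-api-key",
    {"text": "serverless is great"},
)
print(req.get_method(), req.full_url)
```

Because every algorithm sits behind the same uniform interface, the output of one call can be fed as the input of the next, which is what makes chaining algorithms written in different languages straightforward.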

As companies add AI/ML capabilities across their organizations, they have the opportunity to escape the complications that come with a monolithic application and begin to implement a serverless microservice architecture. Algorithmia provides the expertise and infrastructure to help them be successful.


Pulumi saw an opportunity in 2017 to fundamentally reimagine how developers build and manage modern cloud systems, thanks in large part to the rise in serverless computing intersecting with advances in containers and managed cloud infrastructure in production. By using programming languages and tools that developers are already familiar with, rather than obscure DSLs and less capable, home-grown templating solutions, Pulumi’s customers are able to focus on application development and business logic rather than infrastructure.

As an example, one of Pulumi’s enterprise customers was able to move from a dedicated team of DevOps engineers to a combined engineering organization, reducing their cloud infrastructure scripts to 1/100th the size in a language the entire team already knew, and is now substantially more productive in building and continuously deploying new capabilities. The resulting system uses the best of what the modern cloud has to offer: dozens of AWS Lambdas for event-driven tasks, replacing a costly and complex queuing system; several containers that can run in either ECS or Kubernetes; and several managed AWS services like Amazon CloudFront, Amazon Elasticsearch Service, and Amazon ElastiCache. It now runs at a fraction of its pre-migration cost, and the team can spin up entirely new environments in minutes where it used to take weeks.

Before the recent convergence of serverless, containers, and hosted cloud infrastructure, such an approach simply would not have been possible. In fact, we at Pulumi believe that the real magic is in these approaches living in harmony with one another. Each has its own strengths: containers are great for complex stateful systems, often taking existing codebases and moving them to the cloud; serverless functions are perfect for ultra-low-cost event- and API-oriented systems; and hosted infrastructure lets you focus on your application-specific requirements, instead of reinventing the wheel by manually hosting something that your cloud provider can do better and cheaper. Arguably, each is “serverless” in its own way because infrastructure and servers fade into the background. This disruptive sea change has enabled Pulumi to build a single platform and management suite that fully realizes this entire spectrum of technologies.

The future is bright for serverless- and container-oriented cloud architectures, and Pulumi is excited to be right at the center of it helping customers to realize the incredible benefits.

IOpipe co-founders Erica Windisch and Adam Johnson went from virtualizing servers at companies like Docker to going “all in” on serverless in 2016. Erica and Adam identified serverless as the next revolution in infrastructure, coming roughly 10 years after the launch of AWS EC2. As computing shifts towards a serverless world, new challenges emerge. From dozens of interviews with production Lambda users, Erica and Adam identified that one of the major challenges in adopting serverless was a lack of visibility and instrumentation. In 2016, they co-founded IOpipe to focus on helping companies build, ship, and run serverless applications, faster.

IOpipe is an application operations platform built for serverless architectures running on AWS Lambda. Through the collection of high-fidelity telemetry within Lambda invocations, users can quickly correlate important data points to discover anomalies and identify issues. IOpipe is a cloud-based SaaS platform that provides tracing, profiling, metrics, logs, alerting, and debugging tools to power up operations and development teams.
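The core idea of invocation-level telemetry can be sketched in a few lines: wrap a handler so that every call records its duration and outcome. This toy decorator illustrates the concept only; it is not IOpipe’s actual agent or API, and a real agent would ship these records to a backend rather than keep them in memory:

```python
# Toy sketch of invocation-level instrumentation for a serverless handler.
# Illustrative only; not IOpipe's actual agent or API.

import functools
import time

TELEMETRY = []  # a real agent would ship records to a collection backend

def instrument(fn):
    """Wrap a handler so each invocation records duration and outcome."""
    @functools.wraps(fn)
    def wrapper(event, context=None):
        start = time.perf_counter()
        try:
            result = fn(event, context)
            TELEMETRY.append({"fn": fn.__name__, "ok": True,
                              "ms": (time.perf_counter() - start) * 1000})
            return result
        except Exception as exc:
            # Failures are recorded too, then re-raised to the platform.
            TELEMETRY.append({"fn": fn.__name__, "ok": False,
                              "error": repr(exc),
                              "ms": (time.perf_counter() - start) * 1000})
            raise
    return wrapper

@instrument
def handler(event, context=None):
    return {"echo": event["msg"]}

print(handler({"msg": "hi"}), len(TELEMETRY))
```

Correlating records like these across thousands of invocations is what lets an operations team spot anomalies, slow cold starts, or error spikes without ever logging into a server.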

IOpipe enables developers to debug code faster by providing real-time visibility into their functions as they develop them. Developers can dig deep into what’s really happening under the hood with tools such as profiling and tracing. Once functions are in production, IOpipe provides a rich set of observability tools to help bubble up issues before they affect end users. Customers who previously spent days debugging tough issues in production can now find the root cause in minutes.

Since launching the IOpipe service in Q3 of 2017, the company has seen customers ranging from SaaS startups to large enterprises enabling their developers to build and ship Lambda functions into production at an incredibly rapid pace. What previously took one customer 18 months can now be done in just two months.

IOpipe works closely with AWS as an advanced tier partner, enabling AWS customers to embrace serverless architectures with power-tools such as IOpipe.

Why Madrona Invested in Pulumi

Today I am very excited to announce our investment in Pulumi.

Pulumi aims to fundamentally improve the way people build, manage, and interact with cloud-native applications, services, and infrastructure.

There is a massive movement to the cloud among enterprise customers around the world. As that trend continues to gather momentum, new and transformative techniques are required as customers truly begin to take advantage of cloud-native capabilities. This transformation grows by leaps and bounds as serverless computing starts to emerge as the next frontier, enabling truly distributed applications and services powered by microservices and event-driven functions.

Recent cloud infrastructure breakthroughs include serverless, containers and hosted cloud infrastructure. Containers are great for complex stateful systems, often taking existing codebases and moving them to the cloud. Serverless functions are perfect for ultra-low-cost event- and API-oriented systems. Hosted infrastructure lets you focus on your application-specific requirements, instead of reinventing the wheel by manually hosting something that your cloud provider can do better and cheaper. Arguably, each is “serverless” in its own way because infrastructure and servers fade into the background.

“This disruptive sea change is enabling Pulumi to deliver a single platform and tools suite that allow developers to build and ship code to the cloud in the easiest and fastest way.”


Eric Rudder, Joe Duffy, and Luke Hoban make up a world-class team to deliver such transformative experiences in a cloud-native world. They have decades of experience in platforms, tools, and programming models. Eric Rudder was one of the most senior executives at Microsoft, including running the $10B+ Server and Tools business, serving as a Technical Advisor to Bill Gates, and most recently serving as the EVP for Advanced Technology before leaving Microsoft. Joe Duffy was a senior technical engineering leader at Microsoft and was a critical part of the early team that built .NET and C#. Most recently, he was Director of Engineering and Developer Tools Strategy and, in that role, was instrumental in open sourcing .NET and taking it cross-platform to Linux and Mac. Luke has held a variety of product and engineering roles at Amazon and Microsoft. While at Microsoft, Luke co-founded TypeScript and developed Go support for Visual Studio Code.

I have had the privilege and the fortune to have worked with Eric, Joe, and Luke closely over the years, and their passion for solving hard problems for developers and enterprise customers is unparalleled. I am personally very excited for the opportunity to work with this very talented group of people. We are confident that this kind of world-class team is what will help drive a breakthrough as cloud-native becomes fundamental to enterprise software today and in the future.

We are doubly excited to partner with Pulumi given that it is a Seattle-based, early-stage startup focused on the cloud-native environment with a world-class founding team.

Please join me in welcoming Eric, Joe, Luke and the Pulumi team to the Madrona family!

Welcome Sudip to Madrona

Today we are excited to welcome Sudip Chakrabarti to Madrona as a Partner on the Investment Team.

Sudip is the kind of team member we look for – someone who shares our passion for helping entrepreneurs build their companies from day one. When we first met, Sudip was an investor at Lightspeed Venture Partners in the valley, a team we have worked with on later-stage fundraises for Madrona portfolio companies. His focus on cloud and infrastructure technologies meant we crossed paths many times, and we were impressed not only by his technical and business expertise in this market but also by his approach to working with startups. He gets in and does the work to build companies and help founders succeed in realizing their vision. This is how Madrona approaches company building from day one – to day whatever – we are here to help companies succeed and build the greater Seattle ecosystem.

As a partner at Madrona, Sudip will focus on investments in the enterprise and infrastructure markets, including how open source software and technology is changing the enterprise software landscape.

Prior to joining Madrona, Sudip was a partner at Lightspeed Venture Partners where he led or co-led investments in Streamlio, Serverless, Rainnet, Exabeam and, a Madrona portfolio company, Heptio. He started his investing career at Osage University Partners and subsequently was an enterprise investor at Andreessen Horowitz where he was involved with companies such as Actifio, Databricks, Digital Ocean, Forward Networks, Mesosphere and Samsara.

Sudip also brings the experience of being a founder to the table with entrepreneurs and founders – he started two companies early in his career and understands first hand the triumphs and struggles of company building from day one.

Sudip is our second convert from the valley (Maria Karaivanova joined us from Cloudflare last year) and our first from a Valley VC. Please join us in welcoming Sudip to the Pacific Northwest!

Welcome Unearth to Madrona!

Pictured in photo: From left to right, Nate Miller, S. Somasegar, Brian Saab & Amy Hutchins.


It is always a happy occasion to welcome a family member back into the household.

With Madrona’s investment in Unearth Technologies, we are excited to be working again with Brian Saab (CEO/Co-Founder), whom we had previously worked with as a co-founder of buuteeq (a former Madrona portfolio company that was acquired by Booking Holdings, formerly Priceline). Brian co-founded Unearth with two other buuteeq and Booking Holdings alumni: Amy Hutchins, Chief Product Officer, and Nate Miller, Chief Design Officer. buuteeq spurred a lot of entrepreneurial spirit – we have also backed Pixvana, started by another buuteeq co-founder, Forest Key.

Brian went back to his family roots as he began thinking of his next company. Brian grew up in a multigenerational construction company – a business he, as a technology executive, noticed hadn’t really changed that much despite cloud technologies, aerial imagery, and the plethora of tablets and laptops. He and his co-founders did some field testing and realized they could change that with their skills.

The construction industry has long been plagued by both low digitization and low productivity from its workforce. A $1.5T annual industry in the US alone and a $10T opportunity globally, the construction industry is projected to keep growing as new infrastructure is required to keep up with the global economy.

Because of the low productivity, the construction industry suffers from large amounts of waste and lost opportunity. For example, 98% of construction projects face cost overruns or delays with the average project delayed by 20 months and 80% more expensive than planned. In the US, this waste equates to approximately $500B lost per year. This is especially evident in large commercial and civic projects that require large teams and extensive communication between the field and office. This is the problem that Unearth is tackling.

Unearth has built a cloud-native collaboration and communication platform, called OnePlace, for construction and architecture teams to track the progress of their projects in real time. By giving all parties (including project owners and project managers) access to the Unearth platform, everyone is informed of the progress of the construction every step of the way. OnePlace is specifically built to seamlessly handle different data types: aerial imagery, 360° images, traditional pictures, plans, and surveys. These are all integrated into the software platform, which creates one view available to both office and field teams. After a year in beta and in use on major civic construction projects, OnePlace is open today for sign up at www.unearthlabs.com.

The construction industry is at an inflection point of digitization and software adoption and Unearth is well-positioned to provide a compelling solution for its customers.

Please join me in welcoming Brian, Amy, Nate and the Unearth team to the Madrona family.

The Next Big Step in the Snowflake Journey

Early last year, we invested in Snowflake, which is quickly becoming the leader for data warehousing in the cloud. Now, less than a year later they have raised a substantial round of $263M that underscores their phenomenal success in building a solution that speaks to a real business problem shared by many, many large business customers around the world.

Having had the opportunity to learn more about Snowflake and see first-hand their customer momentum and their execution both on the product and go-to-market fronts, I am more bullish than ever before about what is possible in the years to come.

In addition to building the world’s best data warehousing solution for the cloud, Snowflake introduced Data Sharing last year. The ability to seamlessly share data with your customers and partners from within Snowflake opens up a huge opportunity for business customers. This was previously an unsolved problem, one that is very complex and painful to tackle, and Snowflake has solved it with their Data Sharing capabilities.

Snowflake has done an outstanding job of riding two important and massive technology trends: the enterprise move to the cloud and the need to access massive amounts of business data and analytics from anywhere. They have built a fantastic product on the cloud and for the cloud that is resonating extremely well with a broad set of customers.

For all of the above-mentioned reasons, we are very excited to invest in this latest round of fundraising by Snowflake. We are very much looking forward to furthering our partnership with Bob Muglia and team to help them scale and see the kind of success that we are confident they will achieve.

Tigera Joins the Madrona Family

Ever since the advent of virtualization technology in the early 2000s, software has become increasingly abstracted from its underlying hardware. The ability to “write once, run anywhere” has led to the runaway success of technologies like containers, which package software in a format that can run in isolation on a shared operating system. This allows developers to more easily collaborate on code across environments, get better utilization from their hardware, and build agile, secure software delivery pipelines.

The concept of the “software-defined datacenter” extends this analogy beyond compute into networking, storage, and security as well. With the advent of hybrid and multi-cloud, the underlying infrastructure is only getting more complex. The ability to manage infrastructure that cuts across private and public cloud leads to cost-effective solutions; better ability to simplify, automate, and scale; and the agility to move quickly in today’s IT landscape.

While containers have been all the rage for the last 18-24 months, the complexity that grows from this technology quickly escalates to an unmanageable level from an application connectivity and security perspective. In fact, this has often been the talking point of those who are hesitant to adopt them too deeply. This conundrum has been an issue for large enterprises as they look to benefit from these new methods of software architecture and management.

This is the context in which we are very excited to invest in Tigera and the team.

Tigera provides secure application connectivity solutions built for modern cloud native applications. It addresses the application networking and connectivity challenges that come with cloud native architectures, especially those that must connect to on-premises, legacy environments. Tigera does this by extending the leading open source projects Calico and Istio into commercial enterprise software that enables policy-based security, enterprise controls, and compliance across on-premises data centers and all public clouds. Tigera offers large enterprise companies a solution for deploying containers within the Zero Trust security framework they require and opens up this Fortune 500 market to innovations that increase agility and cost effectiveness.

The leadership team behind Tigera (CEO Ratan Tipirneni; co-founders Andy Randall, Alex Pollitt, and Christopher Liljenstolpe; VP of Engineering Doug Collier; and the most recent additions, VP of Marketing Andy Wright and VP of Product Management Amit Gupta) makes up a world-class team that knows networking and cloud deeply. They have demonstrated strong leadership in the open source community and the ability to build a commercial offering that solves real pain points for enterprise customers moving to the cloud.

We, at Madrona, are looking forward to this exciting journey with Ratan and team at Tigera.

Re:Invent: 2017 Preview & Predictions

AWS is another year older, bigger, and more diverse, and so is the 6th annual Re:Invent conference. Over 40,000 attendees are expected at the event, reflecting the success of AWS and the cloud movement the company kick-started. If AWS were a standalone company, it would be recognized as the software company that hit a $20 billion annual revenue run rate in the shortest amount of time. From a branding perspective, AWS appears focused on courting “builders,” including business leaders, product managers, and developers who want to create, or recreate in the cloud, solutions that solve real-world problems. From a thematic perspective, I anticipate five broad areas to be highlighted:

  1. Modern services for modern cloud apps
  2. ML/AI everywhere!
  3. Hybrid workloads go mainstream
  4. Enterprise agility exceeds cost savings
  5. Customer focus balanced with competitive realities

Modern Services and ML/AI

The first two themes – modern services and ML/AI – are targeted at the grassroots builders and innovators who have long been associated with AWS. Modern services include containerized or “serverless” workloads that work individually or in conjunction with other microservices and event-driven functions like AWS Lambda functions. These technologies deliver greater flexibility, interoperability, and cost effectiveness for many applications. And they can be used either to build new applications or to help modernize traditional applications. I have spoken to several smaller businesses and small teams at larger companies who are leveraging these capabilities to build more responsive and cost-effective applications.

Credit to @awsgeek, Jerry Hargrove

At Re:Invent we expect to see AWS embracing community standards like Kubernetes for orchestrating modern containers like Docker. Above is a visual highlighting AWS Elastic Container Service and the use of related services on AWS. AWS will also highlight innovative approaches in the cloud and at the edge that build on Lambda functions to ingest data and automatically produce a functional output. I wouldn’t be surprised to see a “developer pipeline” for building, testing, and deploying these types of event-driven applications.

ML/AI will likely be broadly highlighted in both Andy Jassy’s Day One keynote and the second keynote on Thursday. This category is where the most disruptive innovation is taking place and the fiercest platform competition is occurring. AWS will feature enhancements or new offerings at four levels.

At a platform level, they are expected to highlight Ironman as a unifying layer to help developers ingest and organize data and then design, test, and run intelligent (ML/AI-powered) applications. This platform leverages MXNet, a machine and deep learning framework originally built at the University of Washington with properties similar to Google’s TensorFlow framework. Ironman will also leverage Gluon, a new developer tool framework that AWS and Microsoft recently launched.

At a core services level, AWS will continue to enhance AWS ML services and infrastructure processing services like GPUs and FPGAs that support the data scientists who build and train their own data models.

For teams that need more finished ML/AI services, AWS will highlight improved versions of Rekognition, Lex, and Polly. I also expect new finished services that leverage pre-trained data models beyond the existing offerings to be announced.

The fourth area of ML/AI will be in the context of leveraging other services, built either by AWS or by AWS partners, that deliver solutions to customers. AWS will likely focus on a combination of running cloud services (AWS and non-AWS) as well as simplifying ML/AI at the edge. For example, third parties are increasingly building security services on AWS, like ExtraHop’s Addy or Palo Alto Networks’ cloud firewall and SaaS security services. Other services using data stored or processed in AWS, often in data warehouses like Snowflake or Redshift, are rapidly growing for vertical markets and for specific use cases like customer personalization, fraud detection, or health recommendations. Seeing what AWS and partners announce in ML/AI-powered services across the platform, core services, finished services, and solutions layers is likely to be the most exciting area of news at Re:Invent this year.

Matt McIlwain – Madrona Venture Group

“Seeing what AWS and partners announce in ML/AI powered services across the platform, core services, finished services and solutions layers is likely to be the most exciting area of news at Re:Invent this year.”

Hybrid Workloads and Enterprise Agility Solutions

While there are pockets of enterprise innovation in ML/AI and “serverless,” the biggest areas of enterprise focus are going to be hybrid applications and enterprise solutions. These areas also highlight some intriguing partnerships between AWS and other technology companies like VMware, Microsoft, and Qumulo.

Last year AWS and VMware announced a major partnership where AWS created a dedicated, “bare metal” region for VMware hypervisors, management tools, and more running on AWS. This offering has been in beta all year and appears to be gaining strong enterprise traction. It simplifies moving VMware-based workloads to AWS and enables hybrid workloads where a portion runs on AWS and another portion remains on-premise. Customer examples and new capabilities will likely be announced for this partnership. We don’t expect major announcements around bare metal offerings outside VMware, but enterprise customers are asking for them to be launched in 2018.

While AWS and Microsoft compete for cloud customers on many levels, there has also been a spirit of partnership between the two companies, driven by both enterprise customer demand and competitive realities. The Microsoft Windows operating system and applications (SQL Server, Active Directory, SharePoint, and more) are common on AWS in addition to their substantial on-premise installed base. AWS is increasingly enabling its de facto cloud standards like the S3 object store and EC2 compute instances to work with on-premise environments. AWS has a service called CodeDeploy that can deploy to on-premises servers as well as EC2 instances, supporting hybrid workloads (https://aws.amazon.com/enterprise/hybrid/). This enables AWS standard services to work with other Microsoft products on-premise. These examples highlight the growing customer demand for hybrid workloads and services across public cloud and on-premise. And, combined with services like Gluon and the Amazon/Microsoft voice assistant partnership, the two Seattle-based technology giants are finding ways to work productively together (often to counteract Google).

Beyond the technology giants, smaller companies like Qumulo will be highlighting hybrid workload flexibility and use cases. Qumulo offers a universal, scale-out file system that allows enterprise customers to scale across on-premise and cloud infrastructure. Technology sectors such as storage, where Qumulo is focused; application management, where New Relic, DataDog and AppDynamics operate; and databases, security and networking will all see “hybrid” highlighted at re:Invent.

Beyond individual services and workloads, enterprises continue to look for solutions that help them embrace the agility and cost-effectiveness of cloud computing while mitigating the technology risks, compliance risks and skill gaps they may face. AWS will continue to highlight its own professional services as well as cloud-native solution providers like 2nd Watch and Cloudreach and established “systems integrators” like Accenture and CapGemini. But, I expect AWS will emphasize the growing role of the AWS Marketplace this year as a place to find, buy, deploy and bill for first- and third-party services. Finally, more software services will be delivered on AWS in a “fully managed” mode. These modern “managed software services” – like the aforementioned cloud data warehouse, database/datastore or storage services – will help enterprises embrace cloud-native applications.

Balancing Customer Focus with Competitive Realities

All four themes above are driven by customer needs and real technological innovations. But, there are also embedded competitive realities across these themes. Microsoft’s Azure adoption continues to grow rapidly. Microsoft has also successfully moved customers to Office 365, pulling key services like Azure Active Directory and mobile device management along with them. In addition, Microsoft is leveraging its on-premise advantage with hybrid solutions and Azure Stack. These offerings help enterprises embrace agility while cost-effectively managing legacy hardware and software. Microsoft also continues to invest in and promote its ML/AI and serverless capabilities.

Google has emphasized its ML/AI strength both through Tensorflow open source adoption and by leveraging differentiated data sources to build and offer data models “as-a-service”. These image, translation and text recognition models have the opportunity to be strategically disruptive for years to come. Of course, Google also operates broadly adopted cloud apps like Gmail and Google Docs, where AWS does not compete. And, the de facto standard for container orchestration and management, Kubernetes, was created inside Google.

These competitors, as well as other enterprise software and hardware incumbents like Oracle, VMWare/Dell, IBM and Salesforce.com, and emerging Chinese competitors like Alibaba, will continue to invest and challenge AWS in the years ahead as the enterprise becomes more fully engaged in the cloud. While I am confident that AWS will remain the clear market leader for years to come, even AWS will need to continually “re:Invent” itself to meet growing customer needs and competitive realities. I will be looking for clues about AWS’s future strategy and approach to emerging competition this week.

Note: Extrahop, Qumulo, 2nd Watch and Snowflake are portfolio companies for Madrona Venture Group, where Matt McIlwain is a Managing Director.

Day One in the Cloud with Skytap

It was the spring of 2006. Professor Hank Levy, incoming Chair of the University of Washington Computer Science Department (now the Paul G. Allen School) and I were catching up. Hank and I had previously worked together on a successful start-up that he and his grad students co-founded called Performant. As I sat watching Hank type on a keyboard, the words he typed appeared on a nearby computer monitor. But, the application was not running on a local device, it was running “in the cloud.”

As hard as it may be to believe, in the spring of 2006 AWS had not yet launched Simple Storage Service (S3) or Elastic Compute Cloud (EC2). But Hank, two other remarkable professors (Steve Gribble and Brian Bershad) and PhD student David Richardson were working on the underlying networking and compute technologies that would help power the cloud. As this group discussed the potential customer needs their technology could address, we decided to found a company.

As we have written about recently, it is both energizing and inspiring to partner with founders from Day One. Skytap, originally known as “illumita”, is the company we seeded in the summer of 2006 along with the Washington Research Foundation and Bezos Expeditions. Eleven years later, Skytap is announcing its $45 million Series E round led by Goldman Sachs.

Skytap has always attracted incredible talent to the company. In the early years, the company had to build most of a ‘cloud infrastructure platform’ itself, as modern hypervisors, public cloud infrastructure and enterprise customer readiness were all immature. At that time, the team naturally focused on hiring world-class product management and engineering executives. In more recent years, under Thor Culverhouse’s leadership, Skytap has built a world-class go-to-market team. Enterprise customers are now fully embracing the cloud in all its forms – public, hybrid and private. And Skytap is accelerating cloud innovation for the hybrid applications that are developed and increasingly deployed by enterprises in the cloud.

The Madrona team has had the privilege of partnering with Skytap’s founders and team from Day One. Eleven years later, we have never been more excited about the success and long-term potential of the company. It is interesting to note that the three Madrona backed companies that went public in the past 12 months were part of the Madrona family on average for thirteen years at the IPO date. We look forward to seeing what happens in two years when Skytap celebrates their thirteenth birthday!

Snowflake, a Cloud Native Data Warehouse and Our Newest Investment

Today we are announcing our investment in Snowflake. Snowflake is a cloud native data warehouse. Data warehouses have been used for years to store and analyze, not surprisingly, huge amounts of data. Over the past 5-10 years, with the explosion of data and the rise of the analytics and insights this data provides, these stores have grown massively and are getting tougher and tougher to scale and manage in a cost-effective way. We are excited to back a company that embraced and leveraged the potential of cloud infrastructure from the start and is rapidly ramping its capabilities to meet the demands of enterprise cloud computing.

This investment is different from Madrona’s core strategy of investing at an early stage in Pacific NW-based companies. The company is later stage and is primarily based in Silicon Valley. But it fits our other criteria – the huge and growing secular shift to enterprise cloud computing, an A+ team with ties to Seattle, and product and customer leadership in the emerging cloud data warehouse market. But even given this, why Snowflake?

Two of the massive computing trends we actively follow for investments are the movement of enterprise computing and workloads to the cloud, and the development of intelligent applications that make use of data through ML/AI and continuous learning. Both of these involve massive amounts of data. For all the progress that has been made on these trends, we are still in the early phases of this tectonic computing shift – especially for enterprise customers. Many of the previous attempts to make enterprise applications available in the cloud have simply been reworkings of legacy applications, as opposed to cloud-native designs. We are now seeing more technologies that are designed and built from the ground up to be cloud native. That’s exactly what Snowflake did for the world of data warehousing.

Benoit Dageville (co-founder and CTO) and Thierry Cruanes (co-founder and architect) came to Snowflake with deep database expertise from Oracle, and they were joined by Bob Muglia as CEO in 2014. Bob is a highly accomplished enterprise software and business leader, having spent more than 20 years at Microsoft, including running Microsoft’s $16B Server and Tools business. Under Bob’s leadership, Microsoft grew several different multi-billion-dollar businesses. Soma has had the opportunity to work for and with Bob over the years at Microsoft, and everyone at Madrona sees Bob as a world-class leader. All this experience, expertise and background make Bob the ideal leader for Snowflake. We are really excited about this team and think they are the ones to create a meaningful new business in this industry.

Snowflake is a data warehouse designed and architected for the cloud. It is the first data warehouse built specifically to run in the cloud, and it offers a range of performance, concurrency, scale and infrastructure management benefits that legacy on-premises and cloud data platforms were not designed for. This allows Snowflake to achieve better database performance, respond to higher volumes of concurrent queries without performance degradation, and provide a simpler ongoing SaaS model without infrastructure maintenance – all with outstanding price/performance characteristics.

Despite being only about four years into development, a recent GigaOm analyst report (http://info.snowflake.net/rs/252-RFO-227/images/GigaOm-sector-roadmap-cloud-analytic-databases-2017.pdf) ranked Snowflake as the top cloud analytics database, ahead of Google BigQuery, Teradata, Azure Data Warehouse and AWS Redshift. While these other solutions can be a good fit in certain situations, we see Snowflake as a long-term leader in this massive market with its cloud-first technology and cross-cloud platform potential.

Source: GigaOm

Snowflake is building a team in Bellevue, given the cloud and big data talent available in this region. The combination of a world-class, proven team, the focus on a cloud-native solution, and the potential to be a leader in a massive cloud data warehousing and analytics market are the main reasons we decided to invest and participate in the Snowflake journey. Snowflake is built on Amazon Web Services (AWS), and there is a strong partnership and collaboration between Snowflake and AWS. We look forward to being a valuable resource on that partnership given our long history working with AWS. In addition, we are excited to partner with Bob and team and help them build Snowflake’s presence here in the region and around the world.