Let It Snow!

Sometimes you find a team and a technology that is just poised to take off. Snowflake was at that point when we invested in early 2017. Snowflake did not look like other companies we had invested in up to that time – it was an actual snowflake for us. The stage and valuation made it a stretch for our core investment fund, which focuses on seed and Series A. (Investments like Snowflake are one reason we raised our Acceleration Fund: to invest in and work with companies that are already scaling quickly.) But our conviction about the team, the opportunity, and the early traction was high, and it continues to be so on this momentous day.

This week marks another beginning for the company – and an exciting milestone – its IPO and debut as a publicly traded company (NYSE: SNOW). The IPO caps a fulfilling journey over the last 8 years of building a successful cloud infrastructure service while the public cloud vendors continued to scale and enterprise adoption of the cloud took off. We believe this is just the beginning of what is possible for Snowflake.

In our 25-year history at Madrona, we have been fortunate to invest in and partner with many companies that have reached the IPO milestone. Notably, Snowflake is the 6th company in our portfolio to go public in the last 4 years.

The main reasons we invested in Snowflake in early 2017 came down to team, technology and the market.

  • Snowflake’s founding team (Benoit Dageville, Thierry Cruanes, Marcin Zukowski) had built an incredible cloud-native data warehouse with enormous room to grow as enterprise adoption increased and feature sets were built out. And with a seasoned executive, Bob Muglia, as CEO, it was a world-class team.
  • Snowflake’s product was truly revolutionary in its architecture (designed for the cloud from day 1), which allowed for superior performance, scale, and capabilities.
  • We believed in Snowflake’s pursuit of the secular trend toward cloud-first and cloud-only infrastructure – and, more importantly, data in the cloud (today it seems a no-brainer) – and hence saw a massive opportunity.
  • We knew we could bring our deep connections with Amazon and Microsoft to help Snowflake partner with them to scale its business, given Snowflake’s intent to run on multiple public clouds. Today Snowflake has strong relationships with both companies – the two largest public cloud providers in the world.

Snowflake had also decided by then to set up an engineering office in the Greater Seattle area (Bellevue), and we worked hard to help them build a team of outstanding talent here.

One of the things I remember vividly from our earlier conversations with Bob was how successfully Snowflake executed the land-and-expand sales pattern that enterprise software companies aspire to – starting at $20K annually and rising in quick succession to multiples of that amount. This was great validation of how quickly Snowflake could become critical to the way enterprises use their data. And this was just on AWS at the time. Today Snowflake runs on all three major public clouds.

Fast forward to early 2019, when Frank Slootman (with a tremendous track record across Data Domain/EMC and ServiceNow) came on board as CEO. Over the last year and a half, Frank and his leadership team have continued Snowflake’s transformation from a cloud data warehouse into a cloud data platform.

Over my career in tech leading large business groups at Microsoft, I worked toward and witnessed growth curves like this. That perspective, combined with the experience of being part of the Snowflake journey, suggests some lessons for companies embarking on a similar growth path:

  • Build a unique, differentiated, superior product that can help build a strong moat over time.
  • Deliver a product experience that provides a seamless, friction-free and self-serve on-ramp, to enable a bottom-up go-to-market model that can scale organically and fast.
  • Drive hard to a “land grab” and “growth” mode, when you see a massive market opportunity, while continuing to pay attention to unit economics. Grow fast and responsibly.
  • Build a culture that values and prioritizes customer focus and customer obsession from day one.
  • Hire the best and brightest. Every hiring decision is critical and creates a force multiplier. Everybody makes hiring mistakes – focus on minimizing them, and pay attention to hiring great people who fit culturally.

Earlier this year, Frank Slootman and the Snowflake team unveiled their Data Cloud vision – a comprehensive data platform play on the cloud to completely mobilize your data in the service of your business. With that as a backdrop, I am eagerly looking forward to what Snowflake is going to accomplish in the coming years.

A hearty congratulations to everybody on the Snowflake team! Thank you for the opportunity to be a part of the Snowflake journey.

The Remaking of Enterprise Infrastructure – The Sequel

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.” – Bill Gates, The Road Ahead

Just over a year ago, we wrote about how enterprise infrastructure was being reimagined and remade, driven by the rapid adoption of cloud computing and rise of ML-driven applications. We had postulated that the biggest trends driving the next generation of enterprise software infrastructure would be (a) cloud-native applications across hybrid clouds, (b) abstraction/automation of infrastructure, (c) specialized hardware and hardware-optimized software, and (d) open source software.

Since then, we have witnessed several of those trends accelerating while others are taking longer to gain adoption. The COVID-19 pandemic over recent months, in particular, has arguably accelerated enterprises multiple years down evolutionary paths they were already on – digital transformation, move to the cloud for business agility, and a perimeter-less enterprise. Work and investments in these areas have moved from initiatives to imperatives, balanced with macroeconomic realities and the headwind of widespread spending cuts. Against that backdrop, today we again take stock of where next-generation infrastructure is headed, recapping which trends we feel are accelerating, which are emerging, and which are stalling – all through the lens of customer problems that create opportunities for entrepreneurs.

Next-generation enterprise infrastructure, as we show in the figure above, will be driven by major business needs – usability, control, simplification, and efficiency across increasingly diverse, hybrid environments – and will evolve along four dimensions: (1) cloud-native software and services, (2) developer experiences, (3) AI/ML infrastructure, and (4) vertical-specific infrastructure. We dive into these four areas, and their respective components, in the rather lengthy post below. We hope some of you will read the whole thing, and others can jump to their area of interest!

As we have noted in the past, below are a few “blueprints” as we look for the next billion-dollar company in enterprise infrastructure. As we continue to meet with amazing founders who surprise and challenge us with their unique insights and bold visions, we continue to refine and recalibrate our thinking. What are we overlooking? What do you disagree with? Where are we early? Where are we late? We’d love to hear your thoughts. Let’s keep the dialogue going!

Cloud Native Software and Services

Cloud native technologies and applications continue to be the biggest innovation wave in enterprise infrastructure and will remain so for the foreseeable future. As 451 Research and others point out, “… the journey to cloud-native has been embraced widely as a strategic imperative. The C-suite points to cloud-native as a weapon it will bring to the fight against variables such as uncertainty and rapidly changing market conditions. This viewpoint was born prior to COVID-19 – which brings all those variables in spades. As this crisis passes, and those who survive plan for the next global pandemic, there are many important reasons to include cloud-native at the core of IT readiness.”[1]

However, enterprises that have begun to adopt technologies such as containers, Kubernetes, and microservices are quickly confronted with a new wave of complexity that few engineers in their organization are equipped to tackle. This is producing a second wave of opportunity to ease this adoption path.

Hybrid and Multi-cloud Management

We highlighted last year that we are now in a “hybrid cloud forever” world. Whether workloads run in a hyperscale public cloud region or on-premises, enterprises will adopt a “cloud model” for how they manage these applications and infrastructure. We are seeing the forces driving such multi-site and multi-cloud operations continue to accelerate. While AWS remains the leader, both Azure and Google are adding new data centers around the world and expanding support for on-premises applications. Azure has gained significant ground with a growing number of production-ready services, and Google has invested heavily in expanding its enterprise sales and service capabilities while continuing to offer best-in-class ML services for areas such as vision, speech, and TensorFlow. Azure and Google continue to close the gaps and are often preferable to AWS when enterprises must comply with regulatory and compliance directives for data residency, or need to account for possible changes in strategic direction that may require migrating applications to a different cloud provider.

These compliance and data residency considerations are leading organizations to invest in skills and tools for building applications that are easily portable, which improves deployment agility and reduces the risk of vendor lock-in. This creates new sets of challenges in operating applications reliably across varying cloud environments and in ensuring security, governance, and compliance with internal and external policies. In 2019, we invested in MontyCloud which helps companies address the Day 2 operational complexities of multi-cloud environments. We continue to see more opportunities in hybrid and multi-cloud management as regulatory guidelines continue to evolve and organizations emerge from the early stages of executing the shift.

Automated Infrastructure

Automated infrastructure management has been a key enabler for organizations that need to operate in varying cloud and on-premises environments. As containers have gone mainstream, container orchestration with Kubernetes is becoming the most common enterprise choice for operating complex applications. Combining version-controlled configuration and deployment files with operational stability based on control loops has enabled teams to simultaneously embrace devops and automation while building applications that are portable across on-premises and multi-cloud environments. We invested in Pulumi, which allows organizations to use their programming language of choice in place of Kubernetes YAML files or other domain-specific languages, further enabling a unified programming interface with the same development workflows and automated pipelines that development teams already know.
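
To make the contrast with YAML concrete, here is a minimal sketch of the infrastructure-as-code pattern in the style of Pulumi’s Python SDK – the resource names and settings are illustrative, not from any particular deployment:

```python
# A minimal sketch of infrastructure-as-code in ordinary Python, in the
# style of Pulumi's Python SDK. Names and settings are illustrative.
import pulumi
import pulumi_aws as aws

# A resource is a plain object; the engine diffs desired vs. actual state.
bucket = aws.s3.Bucket("app-assets", acl="private", tags={"env": "dev"})

# Ordinary language constructs (loops, functions, conditionals) replace
# copy-pasted YAML blocks and templating DSLs.
replicas = [aws.s3.Bucket(f"replica-{i}", acl="private") for i in range(2)]

# Export outputs for other stacks or CI/CD pipelines to consume.
pulumi.export("bucket_name", bucket.id)
```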

Machine learning continues to promise automation of capacity management, failover and resiliency, security, and cost optimization. We see further innovation coming in ML-powered automation services that will allow developers to focus on applications rather than infrastructure monitoring, while enabling IT organizations to identify vulnerabilities, isolate insecure resources, improve SLAs, and optimize costs. While technologies such as autonomous databases already offer the promise of automating index maintenance, performance, and cost tuning, we have yet to see wider innovation in this space. We expect some of these capabilities to be natively offered by the public cloud providers. The opportunity for startups will be to offer a solution that leverages unique data from varying sources, delivers effective controls and mitigations, and supports multi-cloud and on-premises environments.

Serverless

Serverless remains at the leading edge of automated infrastructure, where developers can focus on business logic without having to automate, update, or monitor infrastructure. This is creating opportunities across multiple application segments, from front-end applications gaining richer functionality through APIs to backend systems expanding integrations with event sources and gaining richer built-in business logic to transform data. AWS Lambda continues to lead the charge, lending some of its core concepts and patterns to a range of fast-growing applications. However, migrating traditional enterprise applications to an event-driven serverless design can require enterprises to take a larger than anticipated leap. While several pockets of an organization may be experimenting with serverless applications, we continue to look for signs of broader adoption across the enterprise. New approaches that help serverless more effectively address internal policy and compliance requirements would grease the skids and increase adoption. Opportunities exist for new programming languages that make it easier to write more powerful functions, along with new approaches for managing persistence and ensuring policy compliance. As applications begin to operate across increasingly diverse locations, distributed databases such as FaunaDB will help address the need to persist state in addition to elastically scaling stateless compute resources in transient serverless environments. We are more convinced than ever that serverless will grow to be a dominant application architecture over time, but it will not happen overnight and has thus far developed more slowly than we forecasted.
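
To make the programming model concrete, here is a hedged sketch of an AWS Lambda-style function in Python – the platform owns provisioning, scaling, and patching, while the developer supplies only the handler (the event fields below are invented for the example):

```python
import json

# A minimal AWS Lambda-style handler: the platform provisions, scales, and
# patches the infrastructure; the developer supplies only this function.
def handler(event, context):
    # 'event' carries the trigger payload (an API request, a queue message,
    # an object-created notification, etc.); 'order_id' is an invented field.
    order_id = event.get("order_id", "unknown")
    # ... business logic to validate, transform, or route the order ...
    return {"statusCode": 200, "body": json.dumps({"processed": order_id})}
```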

Security

With the growth of applications across public cloud regions, remote locations, and individual devices, enterprises are already learning new approaches to secure data at rest, define data perimeters, and establish secure network paths. The move to working from home has accelerated this evolution, not only from a network perspective but also with a proliferation of bring-your-own devices (BYOD). We are seeing continued and often increasing activity on several fronts:

  • Securing hardware and devices. Our portfolio company Eclypsium protects against firmware and hardware exploits, helping enterprises deal with the new normal of a distributed workforce and an increasingly risky environment of sophisticated attackers. We expect to see more companies realizing the need for firmware and hardware protection as well as broader opportunities around next generation endpoint protection solutions to support work-from-home, BYOD, and the now perimeter-less enterprise.
  • Secure computing environments. New virtualization technologies such as Firecracker, written in languages such as Rust, are already delivering security and performance in constrained-capacity environments. This is particularly valuable for the next generation of applications designed for low-latency interactions with end users around the world. With WebAssembly (WASM), code written in almost any popular language can be compiled into a binary format and executed in a secure sandboxed environment within any modern browser. This is valuable when optimizing resource-hungry tasks such as processing image or audio streams, where JavaScript isn’t the right tool for the required performance.
  • Securing data in use. While cryptographic methods can secure data at rest and in motion, these methods alone may be inadequate to protect data in use when it sits unencrypted in system memory. Secure enclaves provide an isolated execution environment that ensures that data is encrypted in memory and decrypted only when being used inside the CPU. This enables scenarios such as processing sensitive data on edge devices and aggregating insights securely back to the cloud.
  • Data privacy. Automated data privacy remains a challenge for companies of all sizes. GDPR and CCPA have resulted in unicorns such as OneTrust (which just acquired portfolio company Integris), and more countries are adopting and implementing similar regulations. Organizations around the world, across industry verticals, will require new workflows and services to store and access critical data, as well as to address the enduring business priority of understanding their data – where it lives, what it contains, and what policies must apply to various usage patterns.
  • Securing distributed applications. Traditional security approaches designed for monolithic applications continue to be upended by distributed, microservices-based applications, where security vulnerabilities may sit at varying points in the network or component services. Our portfolio company ExtraHop’s Reveal(x) product exemplifies the value of deeply analyzing network traffic in order to secure applications, and we expect this market to continue expanding. We believe companies can turn security from a business risk into a competitive advantage by embracing “SecOps”: building secure applications from the ground up, using secure protocols with end-to-end encryption by default, building tools to quickly identify and isolate vulnerabilities when they arise, and modernizing the way teams work together by integrating security planning and operations directly into development teams. We are interested in new companies that further enable this SecOps approach for customers.

Developer Experiences

Rapid Application Development

Where front-end and back-end components were historically packaged together, we are seeing these components increasingly decoupled to speed up application development and raise the productivity of relatively non-technical users.

For example, developers working on simple web applications – corporate websites, marketing campaigns, and small private publications that don’t require complex backend infrastructure – are already realizing the advantages of automated build and deployment pipelines integrated with hosting services. These automated workflows let developers see their updates published immediately and delivered blazingly fast through CDNs in SEO-friendly ways. Open source JavaScript-based frameworks such as GatsbyJS and Next.js can improve application performance by an order of magnitude simply by generating static HTML/CSS at build time or pre-rendering pages at the server instead of on client devices. These performance improvements, combined with the ease of deploying to hosting platforms, are empowering millions of front-end developers to build new applications.
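
The underlying idea – do the rendering work once at build time, then serve plain files from a CDN – can be sketched in a few lines of Python (using Jinja2 for templating; the pages and file names are illustrative):

```python
# A minimal sketch of static-site generation: render every page to plain
# HTML once at build time so a CDN can serve it with no server work.
from pathlib import Path
from jinja2 import Template

PAGE = Template("<html><body><h1>{{ title }}</h1><p>{{ body }}</p></body></html>")

pages = [
    {"slug": "index", "title": "Home", "body": "Welcome!"},
    {"slug": "about", "title": "About", "body": "Who we are."},
]

out = Path("public")
out.mkdir(exist_ok=True)
for page in pages:
    # Each page becomes a static file; deploy the 'public' dir to any CDN.
    (out / f"{page['slug']}.html").write_text(PAGE.render(**page))
```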

Content Management Systems (CMS) that store and present the data for these simple web applications have turned ‘headless,’ storing and serving data through APIs that can be plugged into different applications across varying channels. This has enabled non-technical users to simply update their corporate website or product and documentation pages without depending on engineers to deploy updates. This points to a related trend of a rapidly growing API ecosystem that can enrich these ‘simple’ applications with functionality delivered by third party providers.

In fact, workflows (business activities such as processing customer orders, handling payments, or adding loyalty points once a purchase is complete) in modern enterprises are increasingly implemented by calling a set of different (often third-party) services that may be implemented as serverless functions or in other forms. While each service is independent and has no context about any other service, business logic dictates the order, timing, and data with which each service should be called. That business logic needs to be implemented somewhere – in code – and the scheduling of each constituent service needs to be done by an orchestration engine. A workflow engine is exactly that: it stores and runs the business logic when triggered by an event and orchestrates the underlying services to fulfill the workflow. Such an engine is essential to building a complex, stateful, distributed application out of a collection of stateless services. The rapidly growing popularity of open source workflow engines such as Cadence (from Uber) is a good testament to this trend, and we expect to see much more activity in this space going forward.
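
As a rough illustration of the pattern (this is not the Cadence API itself, and all names are invented), the sketch below shows a workflow function that owns the ordering and business logic while each step remains a stateless, independently deployable service:

```python
# Illustrative workflow-engine pattern, not the actual Cadence API.
# Each activity is a stateless service call; the workflow owns ordering and
# business logic. A real engine durably records each step's result so the
# workflow can resume where it left off after a failure.

def charge_payment(order: dict) -> dict:
    return {**order, "status": "paid"}    # stub for a payments service

def add_loyalty_points(order: dict) -> dict:
    return {**order, "points": 10}        # stub for a loyalty service

def process_order_workflow(order: dict) -> dict:
    order = charge_payment(order)         # step 1: take payment
    order = add_loyalty_points(order)     # step 2: award points
    return order                          # final workflow state

print(process_order_workflow({"order_id": "123"}))
```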

Everything as an API

Whether it’s a single-page application with a mobile front end or a microservice that’s part of a complex system, APIs enable developers to reuse existing building blocks in different contexts rather than build the same functionality from scratch. “Twilio for X” has become shorthand for businesses that turn a frequently needed service into an easy-to-use, reliable, and affordable API that can be plugged into any distributed application. While Twilio (SMS), Stripe (payments), Auth0 (authentication), Plaid (fintech), and SendGrid (email) are already examples of successful API-focused companies, we continue to see more interesting companies in this area, such as daily.co (adds one-click video chat to any app/site), Sila (a Madrona portfolio company providing ACH and other fintech back-end services as an API), and many more. As the API economy grows, so does the need for developers to easily create, query, optimize, meter, and secure these APIs. We are already seeing technologies such as GraphQL drive significant innovation in API infrastructure and expect to see many more opportunities in this space.
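
To show what “Twilio for X” looks like from the developer’s seat, here is a minimal sketch using Twilio’s Python helper library (the credentials and phone numbers are placeholders):

```python
# One API call replaces building and operating an SMS gateway yourself.
# Credentials and phone numbers below are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

message = client.messages.create(
    to="+15551234567",         # recipient
    from_="+15557654321",      # a number provisioned through Twilio
    body="Your order has shipped!",
)
print(message.sid)             # provider-side ID for tracking delivery
```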

AI/ML Infrastructure

Data Preparation

Data preparation remains the largest drain on productivity in data science today. Merging data from multiple sources, cleansing and normalizing training data, labeling and classifying that data, and compensating for sparse training data are common pain points we hear about from customers and our portfolio companies. Vertical applications that mine unstructured data are a large investment theme, reflected in Madrona investments such as the intelligent contract management solution Lexion, as well as in significant social challenges such as identifying and moderating misleading or toxic online content. Technologies such as Snorkel, which helps engineers quickly label, augment, and structure training datasets, hold a lot of promise. Similarly, tools such as Ludwig make it easier for citizen data scientists and developers to train and test deep learning models. These are examples of tools beginning to address the broader need for better and more efficient ways of preparing data for effective ML models.
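
To illustrate the programmatic-labeling idea, here is a minimal sketch in the style of Snorkel’s Python API, where heuristic labeling functions vote on each example instead of relying solely on hand labels (the task and rules are invented):

```python
# A minimal sketch of programmatic labeling in the style of Snorkel:
# heuristic labeling functions vote on each example. Task/rules are invented.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_link(x):
    # Messages with links are often spam; abstain otherwise.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Very short messages are usually benign replies.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": ["buy now http://spam.example", "ok thanks"]})
applier = PandasLFApplier([lf_contains_link, lf_short_message])
label_matrix = applier.apply(df)  # one weak label per function per row
print(label_matrix)
```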

Data Access & Sharing

Another key challenge relates to developing and publishing data catalogs, with the parallel challenge of accessing critical data in secure ways. Often, superficial policies and access controls limit the extent to which scientists can use sensitive data to train their models. At times, the same scientist is unable to reuse the data used for a previous model experiment. We see data access patterns differing across steps in the model development workflow, indicating the need for data catalog solutions with built-in access controls as enterprises begin to consolidate data from a rapidly growing set of sources. The challenge of federating and securing data across organizations – whether partners, vendors, industry consortia, or regulatory bodies – while ensuring privacy is an increasingly important problem that we are observing in industries such as healthcare, financial services, and government. We see opportunities for new techniques and companies that will arise to enable this new “data economy.”

Observability & Explainability

As the use of machine learning models explodes across all facets of our lives, there’s an emerging need to monitor and deliver real-time analytics and insights around how a model is performing. Just as a whole industry has grown around APM (application performance management) and observability, we see an analogous need for model observability across the ML pipeline. This will enable companies to increase the speed at which they can tune and troubleshoot their models and diagnose anomalies as they arise without relying on their chief data scientists to root cause issues and explain model behavior. Explaining model behavior may sometimes be straightforward, such as in some medical diagnostic scenarios. In other cases, the need for underlying reasoning could be driven by regulation/compliance, customer requirements, or simply a business need to better understand the results and accuracy of model predictions. So far, explaining model predictions has largely been an academic exercise, though interesting new companies are emerging to operationalize this functionality in production for their customers.
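
One common building block here is per-prediction feature attribution; the sketch below uses the SHAP library with a toy scikit-learn model to show the shape of such an explanation pipeline (the model and dataset are illustrative):

```python
# A minimal sketch of per-prediction feature attribution with SHAP.
# The toy model and dataset stand in for a production model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer attributes each prediction to individual input features,
# turning a black-box score into per-feature reasoning that an observability
# pipeline could log and monitor for drift or anomalies over time.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(data.data[:5])
```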

Computer Vision and Video Analytics

The use cases for better, faster, and more accurate computer vision and video analysis continue to proliferate. The COVID pandemic has highlighted more remote sensing scenarios and the use of robotics in settings ranging from cleaning to patient monitoring. Analyzing existing video streams for deepfakes is front and center in consumer consciousness, while business scenarios for video analytics in media and manufacturing efficiency are promising new areas. Converting video streams into a visual object database could soon enable ‘querying’ a video stream for, say, the number of cars that crossed a given intersection between 10:00 and 10:15 a.m. While entrepreneurs need to navigate the privacy concerns around video analysis ethically, we feel there will be numerous new company opportunities in this area.
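
A toy version of the ‘queryable video’ idea looks like this: sample frames with OpenCV, run a detector on each, and accumulate per-frame records you can filter later (the detector below is a placeholder for a real model, and the file name is invented):

```python
# Sketch: turn a video stream into queryable per-frame records.
# 'detect_cars' is a placeholder for a real object-detection model.
import cv2

def detect_cars(frame):
    return []  # a real system would run an object detector here

cap = cv2.VideoCapture("intersection.mp4")  # invented file name
records, frame_index = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    records.append({"frame": frame_index, "cars": len(detect_cars(frame))})
    frame_index += 1
cap.release()

# Once frames are records, 'querying the video' is ordinary data filtering,
# e.g. cars seen in the first 900 frames (~30 seconds at 30 fps).
total = sum(r["cars"] for r in records if r["frame"] < 900)
```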

Model Optimization for Diverse Hardware

The hyperscale cloud providers continue to release new compute instances and chips optimized for specific workloads, particularly machine learning. To realize the desired performance on these specialized instances – in any cloud environment or edge location, and across a range of hardware devices – businesses need a path to optimizing their models to run efficiently on diverse hardware platforms. We recently invested in an exciting new company, OctoML, that builds on Apache TVM (an open source project created by OctoML’s founders), offering an end-to-end compiler stack for models written in Keras, MXNet, PyTorch, TensorFlow, and other popular machine learning frameworks. We continue to believe that hardware advances in this space will create new investment opportunities for applications across domains such as medical imaging, genomics, video analytics, and rich edge applications.
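
As a rough sketch of what the compile-for-target workflow looks like (in the spirit of Apache TVM’s Python API; exact entry points vary by version, and the model file and input shape are invented):

```python
# Sketch of compiling one trained model for a chosen hardware target, in the
# spirit of Apache TVM's Python API (details vary by TVM version).
import onnx
import tvm
from tvm import relay

# Load a trained model exported to ONNX; file name and shape are invented.
onnx_model = onnx.load("classifier.onnx")
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)})

# Compile for a server CPU here; the target string could instead name a GPU,
# an ARM board, or another accelerator without changing the model code.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```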

Vertical-specific Infrastructure

The Impact of 5G

Major wireless providers have begun rolling out 5G services, while cloud providers such as AWS (with Wavelength) and Azure (with its $1B+ acquisitions of Affirmed Networks and Metaswitch) have been investing in supporting software services. Investments in next-generation telecom infrastructure could provide significant opportunities for operators to move to virtual network appliances, replacing functions that previously required specialized hardware devices as well as expensive operations and support systems to provision these services. Further, the greater bandwidth and software-defined network infrastructure being built for 5G should create a variety of new opportunities for startups, such as (a) network management for enterprises, including converged WiFi/5G networks, (b) the harnessing and orchestration of new data (what will be connected and measured that never has been before?), (c) new vertical applications and/or new business models for existing apps, and (d) addressing global issues of compatibility, coordination, and regulation. Like previous wireless standard upgrades, the move to 5G and its impacts will undoubtedly take a number of years to be fully realized. That said, given current rollouts in key geographies, we expect the software ecosystem around 5G to coalesce fairly rapidly, creating new company opportunities in both the near and medium term.

Continued Proliferation of IoT

Relatedly, we expect 5G to push the wave of digitization beyond inherently data-rich industries such as financial services and into more industrialized sectors such as manufacturing and agriculture. The Internet of Things (IoT) will capture the data in these sectors and is likely to result in billions of sensors being attached to a variety of machines. Earlier this year we invested in Esper.io, which helps developers manage intelligent IoT devices, extending the kind of DevOps functionality that exists in the cloud to edge devices with a UI – devices that are increasingly Android-based. Industrial IoT also continues to emerge into the mainstream, with manufacturing companies investing in ML and other analytics solutions after years of discussion. We think companies taking a vertical approach and providing applications tailored to the specific needs of a given industry will grow most quickly.

Vertical-Specific Hardware+Software

We are also seeing several verticals that require specialized hardware for key business functions. For example, electronic trade execution services must provide deterministic responses to orders placed within a small window of time. In addition to requiring hardware-based time sync across the network, participants often use specialized hardware, including FPGAs, to execute their algorithms. FPGAs are also common in high-speed digital telecom systems for packet processing and switching functions. Similarly, FPGA-based solutions are being adopted across healthcare research disciplines: FPGAs can accelerate identifying matches between experimental data and possible peptides or modified peptides in near real time, enabling deeper investigation, faster discovery, and more effective diagnostics to improve healthcare outcomes. We believe a long tail of such applications across verticals would benefit from a cloud-based “hardware-as-a-service” that offers a path for almost every application to run in the cloud.

Business Model Innovation

While this post has largely been organized around business needs being met by technology innovations and new product opportunities, we are also interested in investing in companies that take advantage of the business model innovations these technological advances in enterprise infrastructure have enabled. For instance, the move to the cloud allows companies to provide consumption-based pricing, self-service models, “as-a-service” versions of products, freemium SKUs, rapid POCs and product trials, and direct reach to end-user developers or operations team members. We are equally interested in founders and companies that have found new ways to go to market and to efficiently identify and reach customers.

Relatedly, the continued adoption of open source as the predominant software licensing approach for enterprise infrastructure has created new opportunities for business model innovation, significantly evolving the traditional “paid support” model for open source into open core and “run it for you” approaches. Enterprises increasingly demand an open source option because of the typical benefits of lower TCO and control. Developers (and vendors) love open source because of the bottom-up adoption that creates validation and virality. At the same time, the bigger platforms (cloud providers) are embracing open source technologies in a manner that often creates inherent tension with the commercial companies built around those same technologies. We continue to strongly believe that a differentiated, unique value proposition on top of an open source project is critical for building a commercial company. It is that differentiated value proposition that ultimately creates a strong moat and defensibility against the platform companies supporting open source technologies on their stacks. We anticipate that all these factors, plus the intrigue of heightened tensions between hyperscale clouds and open source vendors, will add up to continued opportunity in the dynamic world of open source in the years to come.

[1] 451 Research, April 9, 2020, “COVID-19: Cloud-native impacts.” Brian Partridge, William Fellows, et al.

FreightWeb For The Win

One of the most fun opportunities I get as a venture capitalist is to partner with founding teams from the very beginning of their journey. It’s with great pleasure that I get to announce one such journey, which we have been working on for a while … our investment in and partnership with Will, Marty, Farah and team at FreightWeb Services.

FreightWeb has all the ingredients we love to see in Day One companies:

  • Massive markets ripe for new thinking and innovation
  • Founders with deep domain expertise who understand the customer(s) and pain point(s)
  • Founders who have been part of rocket ship rides and know what to do when they grab a tiger by the tail
  • Products that leverage copious amounts of data to deliver simple and effective solutions
  • Solutions that deliver clear, tangible, and near-immediate customer value

Simply put, FreightWeb Services’ mission is to increase the utilization of trucking capacity that moves freight in the U.S. – an $800 billion market in 2018. As we started collaborating with the FreightWeb team and digging into the domestic trucking market, we were surprised to learn that most trucks transporting loads in the United States carry less than half of their maximum freight capacity. There are a number of reasons for this, but a primary one is that, as a shipper, it’s difficult to buy a fraction of the space in a truck. If you have fewer than 5 pallets to ship, there’s a service called Less-than-Truckload (LTL) shipping that works reasonably well. If you have 20+ pallets to ship, renting the full capacity of the truck (Full Truckload, or FTL, shipping) is a relatively simple and cost-effective option. But for all of the loads in between, there is no great option. Mid-sized loads (often called partials) are hard to quote, hard to book capacity for, and expensive. But they don’t need to be. And this is the problem FreightWeb is singularly focused on solving.

Will Payson, the co-founder/CEO of FreightWeb, uses the virtualization of cloud infrastructure, and the ability to sell compute and storage in bite-sized pieces, as a helpful (albeit rough) analogy. Many moons ago, when we wanted to run web applications, we’d have to buy or rent a server dedicated to running that application. Renting a server in internet land is akin to renting a full truckload to move your payload. If you don’t have much traffic on your server, the server capacity goes mostly un-utilized and your rental cost per unit of consumption is high. If you don’t have a lot of stuff to put in a truck, the truck space goes mostly un-utilized and the cost to move a pound of goods is very high. If only we could find ways to rent truck space, on-demand, measured in arbitrarily sized chunks, we could make the system much more efficient.

This problem is worth solving for all the players in the freight-hauling ecosystem. For shippers, buying freight in chunks of any size means they no longer need to optimize their shipping to conform to the restrictions imposed by the current system. They can buy smaller chunks of trucking capacity more cheaply while moving freight more frequently, shifting the balance from batch to continuous flow to better adapt to market demand. For carriers, they can better fill their trucks with freight from multiple shippers, increasing capacity utilization and total revenue. Innovations that enable the parties on both sides of a transaction to benefit financially are hard to find, and that’s one of a number of things that make the FreightWeb opportunity compelling.

We were first introduced to Will by Mike Fridgen, co-Managing Director of Madrona Venture Labs (MVL), a startup studio founded at Madrona in 2012 that has grown significantly since then. Many entrepreneurs start their journeys collaborating with MVL to develop their concepts and accelerate early learning and building. In this case, Will and his co-founder Marty were ready to go but needed a strong technical co-founder to build out the core technology. Even before we wrote the check, our talent team (Matt Witt and Shannon Anderson) and Chief Product Officer/Venture Partner Ted Kummert helped define the spec and develop a list of target candidates for the co-founder/CTO role. We were delighted when Farah Ali, a seasoned technology executive and the top of our prospect list, chose to come aboard as a co-founder to launch the company.

It’s early days in the life of the company, but we have been very impressed with the caliber of the team that FreightWeb has assembled, we are excited about the vision they have set out, and we look forward to our Day One for the long-run journey together.

The Remaking of Enterprise Infrastructure – Investment Themes For Next Generation Cloud

Enterprise infrastructure has been one of the foundational investment themes at Madrona since the inception of the firm. From the likes of Isilon, Qumulo, Igneous, and Tier 3 to, more recently, Heptio, Snowflake, and Datacoral, we have been fortunate to partner with world-class founders who have reinvented and redefined enterprise infrastructure.

For the past several years, with enterprises rapidly adopting cloud and open source software, we have primarily focused on cloud-native technologies and developer-focused services that have enabled the move to cloud. We invested in categories like containerization, orchestration, and CI/CD that have now considerably matured. Looking ahead, with cloud adoption entering the middle innings but with technologies such as Machine Learning truly coming into play and cloud native innovation continuing at a dizzying pace, we believe that enterprise infrastructure is going to get reinvented yet again. Infrastructure, as we know it today, will look very different in the next decade. It will become much more application-centric, abstracted – maybe even fully automated – with specialized hardware often available to address the needs of next-generation applications.

As we wrote in our recent post describing Madrona’s overall investment themes for 2019, this continued evolution of next-generation cloud infrastructure remains the foundational layer of the innovation stack against which we primarily invest. In this piece, we go deeper into the categories where we see ourselves spending the most time, energy, and dollars over the next several years. While these categories are arranged primarily from a technology-trend standpoint (as illustrated in the graphic above), they also align with where we anticipate the greatest customer needs for cost, performance, agility, simplification, usability, and enterprise-ready features.

Management of cloud-native applications across hybrid infrastructure

2018 was undeniably the year of “hybrid cloud.” AWS announced Outposts, Google released GKE On-Prem and Microsoft beefed up Azure Stack (first announced in late 2017). The top cloud providers officially recognized that not every workload will move to the cloud and that the cloud will need to go to those workloads. However, while not all computing will move to public clouds, we firmly believe that all computing will eventually follow a cloud model, offering automation, portability and reliability at scale across public clouds, on-prem and every hybrid variation in between.

In this “hybrid cloud forever” world businesses want more than just the ability to move workloads between environments. They want consistent experiences so that they can develop their applications once and run anywhere with complete visibility, security and reliability — and have a single playbook for all environments.

This leads to opportunities in the following areas:

  • Monitoring and observability: As more and more cloud-native applications are deployed in hybrid environments, enterprises will demand complete monitoring and observability to know exactly how their applications are running. The key will be to offer a “single pane of glass” (complete with management) across multiple clouds and hybrid environments, thereby building a moat against the “consoles” offered by each public cloud provider. More importantly, the next-generation monitoring tools will need to be intelligent in applying Machine Learning to monitor and detect – potentially even remediate – error conditions for applications running across complex, distributed and diverse infrastructures.
  • SRE for the masses: According to Joe Beda, the co-founder of Heptio, “DevOps is a cultural shift whereby developers are aware of how their applications are run in a production environment and the operations folks are aware and empowered to know how the application works so that they can actively play a part in making the application more reliable.” The “operations” side of the equation is best exemplified by Google’s highly trained (and compensated) Site Reliability Engineers (SREs). As cloud adoption further matures, we believe other enterprises will begin to embrace the SRE model but will be unable to attract or retain Google-level SRE talent. Thus, there will be a need for tools that simplify and automate this role and help enterprise IT teams become Google-like operators with the performance, scalability, and availability demanded by enterprise applications.
  • Security, compliance and policy management: The cloud, where enterprises give up direct control over the underlying infrastructure, places unique security demands on cloud-native applications. Security ceases to be an afterthought – it must now be designed into applications from the beginning, and applications must be operated with their security posture front and center. This has created a new category of cloud-native security companies that are continuing to grow. Current examples include portfolio company Tigera, which has become the leader in network security for Kubernetes environments, and container security companies like Aqua, StackRox, and Twistlock. In addition, data management and compliance – not just for data at rest but also for data in motion between distributed services and infrastructures – create a major pain point for CIOs and CSOs. Integris addresses the significant associated privacy considerations, partly fueled by GDPR and its clones. The holy grail is to analyze data without compromising privacy. Technologies such as secure enclaves and blockchains are also enabling interesting opportunities in this space, and we expect to see more.
  • Microservices management and service mesh: With applications increasingly becoming distributed, open source projects such as Istio (Google) and Envoy (Lyft) have emerged to address the great need to efficiently connect and discover microservices. While Envoy has seen relatively wide adoption, it has acted predominantly as an enabler for other services and businesses such as monitoring and security. With next-generation applications expected to leverage best-in-class services regardless of which cloud, on-prem, or hybrid infrastructure they run on, we see an opportunity to provide a uniform way to connect, secure, manage, and discover microservices in hybrid environments.
  • Stream processing: Customers are awash in data and events from across these hybrid environments, including data from server logs, network wire data, sensors, and IoT devices. Modern applications need to handle the breadth and volume of this data efficiently while delivering new real-time capabilities. Stream processing is one of the most important parts of the application stack, enabling developers to unlock the value in these data sources in real time. We see fragmentation in the market across various approaches (Flink, Spark, Storm, Heron, etc.) and an opportunity for convergence. We will continue to watch this area to understand whether a differentiated company could be created.

Abstraction and automation of infrastructure

While containerization and all of the other CNCF projects promised simplification of dev and ops, the reality has turned out to be quite different. To develop, deploy, and manage a distributed application today, both dev and ops teams need to be experts in a myriad of different tools, from version control and orchestration systems to CI/CD tools, databases, monitoring, and security. The increasingly crowded CNCF roadmap is a good reflection of that growing complexity. CNCF’s flagship conference, KubeCon, was hosted in Seattle in December and illustrated both the interest in cloud-native technologies (attendance has grown 8x since 2016 to over 8,000) and the need for increased usability, scalability, and help moving from experimentation to production. As a result, in the next few years we anticipate that an opposite trend will take effect. We expect infrastructure to become far more “abstracted,” allowing developers to focus on code and letting the “machine” take care of all the nitty-gritty of running infrastructure at scale. Specifically, we see opportunities emerging in the following areas:

  • Serverless becomes mainstream: For far too long, applications (and thereby developers) have remained captive to the legacy infrastructure stack, in which applications were designed to conform to the infrastructure and not the other way around. Serverless, first introduced by AWS Lambda, broke that mold. It allows developers to run applications without having to worry about infrastructure and to combine their own code with best-in-class services from others. While this has created a different concern for enterprises – applications architected to use Lambda can be difficult to port elsewhere – the benefits of serverless, in particular rapid product experimentation and cost, will compel a significant portion of cloud workloads to adopt it. We firmly believe we are at the very beginning of serverless adoption, and we expect to see many more opportunities in this space to further facilitate serverless apps across infrastructure, similar to Serverless.com (a toolkit for building serverless apps on any platform) and IOpipe (monitoring for serverless apps).
  • Infrastructure backend as code: The complexity of building distributed applications often far exceeds the complexity of the app’s core design and wastes valuable development time and budget. For every app a developer wants to build, s/he ends up writing the same low-level distributed-systems code again and again. We believe that will change and that the distributed-systems backend will be automatically created and optimized for each app. Companies like Pulumi and projects like Dark are already great examples of this need.
  • Fully autonomous infrastructure: Automating the management of systems has been the holy grail since the advent of enterprise computing. Now, with the availability of “infinite” compute (in the cloud), telemetry data, and mature ML/AI technology, we anticipate significant progress toward the vision of fully autonomous infrastructure. Even with cloud services, many complex configuration and management choices need to be made to optimize the performance and costs of several infrastructure categories. These choices range from capacity management in a broad range of workloads to more complex decisions in specific workloads such as databases. In databases, for example, there has been some very promising research on applying machine learning to everything from basic configuration to index maintenance. We believe there are exciting capabilities to be built and potentially new companies to be grown in this area.

Specialized infrastructure

Finally, we believe that specialized infrastructure will make a comeback to keep up with the demands of next-generation application workloads. We expect to see that in both hardware and software.

  • Specialized hardware: While ML workloads continue to proliferate and general-purpose CPUs (and even GPUs) struggle to keep up, new specialized hardware has arrived in the cloud, from Google’s TPUs to Amazon’s new Inferentia chips. Microsoft Azure now offers FPGA-based acceleration for ML workloads, while AWS offers FPGA accelerators that other companies can build upon – a notable example being the FPGA-based genomics acceleration built by Edico Genome. While we are unlikely to invest in a pure hardware company, we do believe that the availability of specialized hardware in the cloud will enable a variety of new investable applications involving rich media, medical imaging, genomic information, and more that were not possible until recently.
  • Hardware-optimized software: With ML coming to every edge device – sensors, cameras, cars, robots, etc. – we believe that there is an enormous opportunity to optimize and run models on hardware endpoints with constrained compute, power and/or bandwidth. Xnor.ai, for example, optimizes ML models to run on resource-constrained edge devices. More broadly, we envision opportunities for software-defined hardware and open source hardware designs (such as RISC-V) that enable hardware to be rapidly configured specifically for various applications.

Open Source Everywhere

For every trend in enterprise infrastructure, we believe that open source will continue to be the predominant delivery and license mechanism. The associated business model will most likely include a proprietary enterprise product built around an open core, or a hosted service where the provider runs the open source as a service and charges for usage.

Our own yardstick for investing in open source-based companies remains the same. We look for companies based around projects that can make a single developer look like a “hero” by making her/him successful at some important task. We expect the developer mindshare for a given open source project to be reflected in metrics such as Github stars, growth in monthly downloads, etc. A successful business then can be created around that open source project to provide the capabilities that a team of developers and eventually an enterprise would need and pay for.

Conclusion

These categories are the “blueprints” we have in mind as we look for the next billion-dollar business in enterprise infrastructure. Those blueprints, however, are by no means exhaustive. The best founders always surprise us with their ability to look ahead and predict where the world is going before anyone else does. So, while this post describes some of the infrastructure themes we are interested in at Madrona, we are not exclusively thesis-driven. We are primarily founder-driven; but we also believe that having a thoughtful point of view about the trends driving the industry – while being humble, curious, and open-minded about opportunities we have not thought as deeply about – will enable us to partner with and help the next generation of successful entrepreneurs. So, if you have further thoughts on these themes, or especially if you are thinking about building a new company in any of these areas, please reach out to us!

Current or previous Madrona Venture Group portfolio companies mentioned in this blog post: Datacoral, Heptio, Igneous, Integris, IOpipe, Isilon, Pulumi, Qumulo, Snowflake, Tier 3, Tigera and Xnor.ai

Investment Themes for 2019

2018 was a busy year for Madrona and our portfolio companies. We raised our latest $300 million Fund VII, and we made 45 investments totaling ~$130 million. We also had several successful up-rounds and company exits with a combined increase of over $800 million in fund value and over $600 million in investor realized returns. We don’t see 2019 letting up, despite the somewhat volatile public markets. Over the past year we have continued to develop our investment themes as the technology and business markets developed and we lay out our key themes here.

For the past several years, Madrona has primarily been investing against a 3-layer innovation stack that includes cloud-native infrastructure at the bottom, intelligent applications (powered by data and data science) in the middle, and multi-sense user interfaces between humans and content/computing at the top. As 2019 kicks off, we thought it would be helpful to outline our updated, 4-layer model and highlight some key questions we are asking within these categories to facilitate ongoing conversations with entrepreneurs and others in the innovation economy.

For reference, we published our investment themes in previous years and our thinking since then has both expanded and become more focused as the market has matured and innovation has continued. A quick scan of this prior post illustrates our on-going focus on cloud infrastructure, intelligent applications, ML, edge computing, and security, as well as how our thinking has evolved.

Opportunities abound within AND across these four layers. Infinitely scalable and flexible cloud infrastructure is essential to train data models and build intelligent applications. Intelligent applications, including natural language processing and image recognition models, power the multi-sense user interfaces like voice activation and image search that we increasingly experience on smartphones and home devices (Amazon Echo Show, Google Home). Further, when those services are leveraged to help solve a physical-world problem, we end up with compelling end-user services like Booster Fuels in the USA or Luckin Coffee in China.

The new layer that we are spending considerable time on is the intersection between digital and physical experiences (DiPhy for short), particularly as it relates to consumer experiences and health care. For consumers, DiPhy experiences address a consumer need and resolve an end-user problem better than a solely digital or solely physical experience could. Madrona companies like Indochino, Pro.com and Rover.com provide solutions in these areas. In a different way, DiPhy is strongly represented in Seattle at the intersection of machine learning and health care with the incredible research and innovations coming out of the University of Washington Institute for Protein Design, the Allen Institute and the Fred Hutch Cancer Research Center. We are exploring the ways that Madrona can bring our “full stack” expertise to these health care related areas as well.

While we continue to push our curiosity and learning around these themes, they are guides, not guardrails. We are finding some of the most compelling ideas and company founders where these layers intersect. Current examples include voice and ML applied to the problem of physician documentation in electronic medical records (Saykara), integrating customer data across disparate infrastructure to build intelligent customer profiles and applications (Amperity), and cutting-edge AI able to run efficiently on resource-constrained edge devices (Xnor.ai).

Madrona remains deeply committed to backing the best entrepreneurs in the Pacific NW who are tackling the biggest markets in the world with differentiated technology and business models. Frequently, we find these opportunities adjacent to our specific themes, where customer-obsessed founders have a fresh way to solve a pressing problem. This is why we are always excited to meet great founding teams looking to build bold companies.

Here are more thoughts and questions on our 4 core focus areas and where we feel the greatest opportunities currently lie. In subsequent posts, we will drill down in more detail into each thematic area.

Cloud Native Infrastructure

For the past several years, the primary theme we have been investing against in infrastructure is the developer and the enterprise move to the cloud, and specifically the adoption of cloud native technologies. We think about “cloud native” as being composed of several interrelated technologies and business practices: containerization, automation and orchestration, microservices, serverless or event-driven computing, and devops. We feel we are still in the early-middle innings of enterprise adoption of cloud computing broadly, but we are in the very early innings of the adoption of cloud native.

2018 was arguably the “year of Kubernetes” based on enterprise adoption, overall buzz and even the acquisition of Heptio by VMware. We continue to feel cloud native services, such as those represented by the CNCF Trail Map, will produce new companies supporting the enterprise shift to cloud native. Other areas of interest (that we will detail in a subsequent post) include technologies/services to support hybrid enterprise environments, infrastructure backend as code, serverless adoption enablers, SRE tools for devops, open source models for the enterprise, autonomous cloud systems, specialized infrastructure for machine learning, and security. Questions we are asking here include how the relationship between the open source community and the large cloud service providers will evolve going forward and how a broad-based embrace of “hybrid computing” will impact enterprise customer product/service needs, sales channels and post-sales services.

For a deeper dive click here.

Intelligent Applications with ML & AI

The utilization of data and machine learning in production has probably been the single biggest theme we have invested against over the past five years. We have moved from “big data” to machine learning platform technologies such as Turi, Algorithmia and Lattice Data to intelligent applications such as Amperity, Suplari and AnswerIQ. In the years ahead, “every application is intelligent” will likely be the single biggest investment theme, as machine learning continues to be applied to new and existing data sets, business processes, and vertical markets. We also expect to find interesting opportunities in services that enable edge devices to operate with intelligence, industry-specific applications where large amounts of data are being created (such as life sciences), services to make ML more accessible to the average customer, and emerging machine learning methodologies such as transfer learning and explainable AI. Key questions here include (a) how data rights and strategies will evolve as the power of data models becomes more apparent and (b) how to automate intelligent applications to be fully managed, closed-loop systems that continually improve their recommendations and inferences.
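To ground one of those methodologies: the sketch below shows transfer learning at its simplest, assuming TensorFlow/Keras and an invented two-class task (nothing here is specific to any portfolio company). A model pre-trained on a large public dataset is reused so that a new task needs far less proprietary data.

```python
# A minimal transfer-learning sketch, assuming TensorFlow/Keras is installed.
# The two-class task, input size and training data are invented for illustration.
import tensorflow as tf

# Start from a network pre-trained on ImageNet, dropping its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the general-purpose features; only the new head trains

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new, task-specific head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(small_domain_specific_dataset)  # hypothetical data; far less is needed
```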

For a deeper dive click here.

Next Generation User Interfaces

Just as the mouse and touch screen ushered in new applications for computing and mobility, new modes of computer interaction like voice and gestures are catalyzing compelling new applications for consumers and businesses. The advent of Amazon’s Echo and Echo Show, Google Home, and a more intelligent Siri have dramatically changed how we interact with technology in our personal lives. While limited today to short, simple actions, voice is becoming a common approach for classic use cases like search, music discovery, food/ride ordering and other activities. Madrona’s investment in Pulse Labs gives us unique visibility into next generation voice applications in areas like home control, ecommerce and ‘smart kitchen’ services. We are also enthused about new mobile voice/AR business applications for field service technicians, assisted retail shopping (e.g., Ikea’s ARKit furniture app) and many others, including medical imaging/training.

Vision and image recognition are also rapidly becoming ways for people and machines to interact with one another, as facial recognition security on iPhones and intelligent image recognition systems demonstrate. Augmented and virtual reality are growing much more slowly than initially expected, but mobile phone-enabled AR will become an increasingly important tool for immersive experiences, particularly in visually focused vocations such as architecture, marketing, and real estate. “Mobile-first” has become table stakes for new applications, but we expect to see more “do less, but much better” opportunities in both consumer and enterprise with elegantly designed UIs. Questions central to this theme include (a) what ‘high-value’ new experiences are truly best, or only possible, when voice, gesture and the overlay of AR/VR/MR are leveraged, (b) what the limits of image recognition (especially facial recognition) will be in certain application areas, (c) how effective image-driven systems like digital pathology can be at augmenting human expertise, and (d) how multi-sense point solutions in the home, car and store will evolve into platforms.

For a deeper dive click here.

DiPhy (digital-physical converged customer experiences)

The first twenty years of the internet age were principally focused on moving experiences from the physical world to the digital world. Amazon enabled us to find, discover and buy just about anything from our laptops or mobile devices in the comfort of our home. The next twenty years will be principally focused on leveraging the technologies the internet age has produced to improve our experiences in the physical world. Just as the shift from physical to digital has massively impacted our daily lives (mostly for the better), the application of technology to improve the physical will have a similar if not greater impact.

We have seen examples of this trend through consumer applications like Uber and Lyft as well as digital marketplaces that connect dog owners to people who will take care of their dogs (Rover). Mobile devices (principally smartphones today) are the connection point between these two worlds, and as voice and vision capabilities become more powerful, so will the apps that reduce friction in our lives. As we look at other DiPhy sectors and opportunities, one where the landscape will change drastically over the coming decades is physical retail. Specifically, we are excited about digital native retailers and brands adding compelling physical experiences, the increasing digitization of legacy retail space, and improving supply chain and logistics down to where the consumer receives their goods/services. Important questions here include (a) how traditional retailers and consumer services will evolve to embrace these opportunities and (b) how the deployment of edge AI will reduce friction and accelerate the adoption of new experiences.

For a deeper dive click here.

We look forward to hearing from many of you who are working on companies in these areas and, most importantly, to continuing the conversation with all of you in the community and pushing each other’s thinking around these trends. To that end, over the coming weeks we will post a series of additional blogs that go into more depth in each of our four thematic areas.

Matt, Tim, Soma, Len, Scott, Hope, Paul, Tom, Sudip, Maria, Dan, Chris and Elisa

(to get in touch just go to the team page – our contact info is in our profiles)

The Difficult Decision For Heptio To Sell to VMware

We are thrilled about Heptio’s acquisition by VMware! This transaction is another resounding reinforcement that Kubernetes has become the de facto standard for infrastructure across clouds. It is also a tremendous validation of Heptio’s team, vision and execution.

Deciding “when to sell” is one of the toughest decisions faced by founders, boards and investors in growing companies. When presented with an attractive alternative to continuing to build the company independently, boards have a “high class problem” — but one they must consider with utmost thoughtfulness. Heptio was presented a very difficult challenge in this regard.

Heptio was founded by Kubernetes co-creators Joe Beda and Craig McLuckie less than two years ago. Madrona had the privilege of investing with Accel in the $8.5M Series A round at the company’s formation, and I joined the board as an Observer. Since that Day One, I’ve never been associated with a company that has accomplished more in as short a period of time. Craig and Joe had an original vision that the Kubernetes community would continue to strengthen and its rapid adoption would continue to increase; however, Kubernetes needed to become easier to use, and enterprises needed help with adoption. From this starting point, they saw an opportunity to lead a cloud native transformation in the enterprise and redefine the deployment and operations of modern applications across clouds.

This vision has played out exactly, and Heptio backed it up with great execution, landing a blue-chip array of Fortune 500 customers for their Heptio Kubernetes Service (HKS), including 3 of the 4 largest retailers in the world, 4 of the 5 largest telcos in the US, and 2 of the 6 largest financial services companies in the US. They also made a significant impact on the Kubernetes community by contributing 5 OSS projects (ksonnet, sonobuoy, contour, gimbal, ark) and collecting over 5,000 GitHub stars. With this great execution, more funding followed. Nine months in, Madrona led the $25M Series B, and the company invited me to join the Board, with my colleague Maria Karaivanova joining as an Observer.

Through it all, Craig and Joe were the consummate founders. They approached building their business with laser focus and a driving ambition to genuinely help customers and create a large, lasting business in the process. They were rock stars in the Kubernetes community, but approached all interactions with humility and pragmatism. They were extremely strategic in thinking through potential moves on the industry chessboard in what is a very dynamic market, but they always realized that none of it would matter if not paired with week-in, week-out blocking and tackling. Perhaps most importantly, they were relentless recruiters and built a world-class team of over 100 employees in less than 2 years, attracting other great leaders like Shanis Windland, Marcus Holm and Scott Buchanan. In doing so, they walked the talk that culture and diversity matter deeply in building a successful business, often passing on a good hire in favor of the right hire who was an even stronger fit for the business.

So, why in the world did we decide to sell? In short, sometimes you receive an offer too good to refuse. Heptio had the team, momentum and plenty of funding to continue; but in VMware, they saw a partner who not only recognized Heptio’s unique insights, assets and market position, but also had the resources and reach to execute more quickly on their vision and deliver an enterprise Kubernetes service to any cloud. The excitement over this potential – and a great financial offer – drove this deal. Market consolidation was always anticipated, and this decision was certainly not a reaction to IBM acquiring Red Hat or other market externalities.

In this decision process, the role of the investor is to ensure the founders and management team have the broad perspective of “what might be possible,” provide an objective view on the market (both opportunities and risks), and ensure the company has the necessary resources. At the end of the day, we support the founders and management team. In this case, while this acquisition came sooner than anyone anticipated, we all agreed that the strategic fit and economics made joining forces the right decision. Through it all, Craig and Joe balanced the interests of shareholders and employees along with other strategic considerations in exactly the way you hope any founders would. Ping Li from Accel was also an incredible thought partner from before company formation through this decision, and overall was one of the best board directors I’ve ever had a chance to work with.

Congratulations again to the Heptio team! We wish you all the best in furthering your mission and vision via the leadership roles you are taking inside VMware. We are excited the whole team is staying intact in Seattle and will continue to grow here. This acquisition is also a great validation of our broader investment theme around the enterprise move to cloud native and open source, and we continue to be very excited about our related investments in companies like Tigera, Shippable, and Pulumi.

Now the fortunate job for me and Madrona is to go find the next great Day One company … but I know it will be difficult to find another quite like Heptio.

Welcome Sudip to Madrona

Today we are excited to welcome Sudip Chakrabarti to Madrona as a Partner on the Investment Team.

Sudip is the kind of team member we look for – someone who shares our passion for helping entrepreneurs build their companies from day one. When we first met, Sudip was an investor at Lightspeed Venture Partners in the Valley, a team we have worked with on later-stage fundraises for Madrona portfolio companies. His focus on cloud and infrastructure technologies meant we crossed paths many times, and we were impressed not only by his technical and business expertise in this market but also by his approach to working with startups. He gets in and does the work to build companies and help founders succeed in realizing their vision. This is how Madrona approaches company building from day one – to day whatever – we are here to help companies succeed and build the greater Seattle ecosystem.

As a partner at Madrona, Sudip will focus on investments in the enterprise and infrastructure markets, including how open source software and technology are changing the enterprise software landscape.

Prior to joining Madrona, Sudip was a partner at Lightspeed Venture Partners, where he led or co-led investments in Streamlio, Serverless, Rainnet, Exabeam and Heptio, a Madrona portfolio company. He started his investing career at Osage University Partners and subsequently was an enterprise investor at Andreessen Horowitz, where he was involved with companies such as Actifio, Databricks, DigitalOcean, Forward Networks, Mesosphere and Samsara.

Sudip also brings the experience of being a founder to the table with entrepreneurs – he started two companies early in his career and understands firsthand the triumphs and struggles of company building from day one.

Sudip is our second convert from the Valley (Maria Karaivanova joined us from Cloudflare last year) and our first from a Valley VC. Please join us in welcoming Sudip to the Pacific Northwest!

Snowflake, a Cloud Native Data Warehouse and Our Newest Investment

Today we are announcing our investment in Snowflake, a cloud native data warehouse. Data warehouses have been used for years to store and analyze, not surprisingly, huge amounts of data. Over the past 5-10 years, with the explosion of data and the rise of the analytics and insights this data provides, these stores have grown massively and are getting tougher and tougher to scale and manage in a cost-effective way. We are excited to back a company that embraced and leveraged the potential of cloud infrastructure from the start and is rapidly ramping its capabilities to meet the demands of enterprise cloud computing.

This investment is different from Madrona’s core strategy of investing at an early stage in Pacific NW-based companies. The company is later stage and is primarily based in Silicon Valley. But the company fits other Madrona criteria – the huge and growing secular shift to enterprise cloud computing, an A+ team with ties to Seattle, and product and customer leadership in the emerging cloud data warehouse market. Even given this, why Snowflake?

Two of the massive computing trends we actively follow for investments are – the movement of enterprise computing and workloads to the cloud and the development of intelligent applications that make use of data through ML/AI and continuous learning. Both of these require and deal with massive amounts of data. For all the progress that we have made on these trends, we are still in the early phases of this tectonic computing shift – especially for enterprise customers. Many of the previous attempts to make enterprise applications available in the cloud have simply been a reworking of legacy applications, as opposed to cloud native design. We are seeing more technologies that are being designed and built ground-up to be cloud native. That’s exactly what Snowflake did for the world of data warehousing.

Benoit Dageville (co-founder and CTO) and Thierry Cruanes (co-founder and architect) brought a rich set of database experiences from Oracle, and they were joined by Bob Muglia as CEO in 2014. Bob is a very accomplished enterprise software and business leader, having spent more than 20 years at Microsoft, including running Microsoft’s $16B Server and Tools business. Under Bob’s leadership, Microsoft grew several different multi-billion dollar businesses. Soma has had the opportunity to work for and with Bob over the years at Microsoft, and everyone at Madrona sees Bob as a world-class leader. All this experience, expertise and background make Bob the ideal leader for Snowflake. We are really excited about this team and think they are the ones to create a meaningful new business in this industry.

Snowflake is a data warehouse designed and architected for the cloud. It is the first data warehouse built specifically to run in the cloud, and it offers a range of performance, concurrency, scale and infrastructure management benefits that legacy, on-premises and cloud data platforms were not designed for. This allows Snowflake to achieve better database performance, respond to higher volumes of concurrent queries without performance degradation, and provide a simpler, ongoing SaaS model without infrastructure maintenance – all with outstanding price/performance characteristics.
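To make the concurrency point concrete, here is a minimal sketch using Snowflake’s Python connector; the credentials are placeholders and the warehouse names are invented. Because storage is shared while compute is carved into independent virtual warehouses, separate workloads can scale without contending:

```python
# A minimal sketch of Snowflake's storage/compute separation, assuming the
# snowflake-connector-python package; credentials below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="USER", password="PASSWORD", account="ACCOUNT"
)
cur = conn.cursor()

# Two independent virtual warehouses (compute clusters) over the same stored data:
# BI dashboards and a heavy ETL job scale separately and do not contend.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS bi_wh
    WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
""")
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS etl_wh
    WITH WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
""")

# Point this session at one warehouse; another session can use the other concurrently.
cur.execute("USE WAREHOUSE bi_wh")
```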

Despite Snowflake being only about 4 years into development, a recent GigaOm analyst report (http://info.snowflake.net/rs/252-RFO-227/images/GigaOm-sector-roadmap-cloud-analytic-databases-2017.pdf) ranked it as the top cloud analytics database, ahead of Google BigQuery, Teradata, Azure Data Warehouse and AWS Redshift. While these other solutions can be a good fit in certain situations, we see Snowflake as a long-term leader in this massive market with its cloud-first technology and cross-cloud platform potential.

[Chart: GigaOm sector roadmap ranking of cloud analytic databases. Source: GigaOm]

Snowflake is building a team in Bellevue given the cloud and big data talent that is available in this region. The combination of a world-class proven team, the focus on a cloud-native solution and the potential to be a leader in a massive cloud data warehousing and analytics market are the main reasons we decided to invest and participate in the Snowflake journey. Snowflake is built on Amazon Web Services (AWS) and there is a good partnership and collaboration between Snowflake and AWS. We look forward to being a valuable resource on that partnership given our long history working with AWS. In addition, we are excited to partner with Bob and team and help them build Snowflake’s presence here in the region and around the world.

Madrona’s 2017 Investment Themes

Every year in March, Madrona wraps up the prior year – in this case 2016 – and we sit down with our investors to talk about our business – the business of finding and growing the next big Seattle companies. First and foremost, our strategy is to back the best entrepreneurs in the Pacific NW attacking the biggest markets. But we also overlay this with key themes and trends in the broader technology market. As part of our annual meeting we present our key investment themes for the year. Below is a snapshot of what we are focusing on:

Business and Enterprise Evolution to Cloud Native

Tim Porter

The IT industry is in the early innings of its next massive shift. The transition to “cloud native” is as big as or bigger than the move from mainframes to PCs, the adoption of hypervisors, or the creation of public clouds. Cloud native at its core refers to applications or services built in the cloud that are container-packaged, dynamically scheduled, and microservices-oriented. Cloud native enables all companies to take advantage of the application architectures that were once the province of Google or Facebook. Companies like Heptio and Shippable are at the forefront of disrupting how IT infrastructure has traditionally been managed, bringing vastly increased agility, computing efficiency, real-time data, and speed. We firmly believe software that helps applications complete the journey from development on a cloud platform to deployment on different clouds, and to running at scale, will become the backbone of technology infrastructure going forward. As such, we are interested in meeting more companies that are making it easier to network, secure, monitor, attach storage to, and build applications with container-based, microservice architectures.
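To make that definition concrete, here is a minimal sketch using the official Kubernetes Python client; the service name, image and namespace are hypothetical placeholders, not any particular company’s stack:

```python
# A minimal "container-packaged, dynamically scheduled" sketch, assuming the
# official `kubernetes` Python client; service name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() in-cluster

container = client.V1Container(
    name="orders-api",                     # a hypothetical microservice
    image="example.com/orders-api:1.0.0",  # the container-packaged artifact
    ports=[client.V1ContainerPort(container_port=8080)],
)
spec = client.V1DeploymentSpec(
    replicas=3,  # the scheduler places (and replaces) these dynamically
    selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
    template=client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
        spec=client.V1PodSpec(containers=[container]),
    ),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-api"), spec=spec
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```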

Intelligent Applications

Customers today demand that their software deliver insights that are real-time, nimble, predictive, and prescriptive. To accomplish this, applications must continuously ingest data, increasingly using event-driven architectures, coupled with algorithm-powered data models and machine learning to deliver better service and novel, predictive recommendations. The new generation of intelligent applications will be “trained and predictive,” in contrast to the old generation of software programs that were created to be “programmed and predictable.” We believe that intelligent applications which rely on proprietary datasets, event-driven cloud-based architectures, and intuitive multi-sense interfaces will unlock new business insights in real-time and disrupt current categories of software. Investments in intelligent app companies that leverage these trends will likely be our largest area of investment in coming years.
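The contrast can be sketched in a few lines; the churn features, labels and scikit-learn stack below are invented for illustration:

```python
# A toy contrast between "programmed and predictable" and "trained and predictive".
# The churn features, labels and library choice are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Programmed and predictable: behavior is a fixed rule a developer wrote.
def churn_rule(days_inactive: int) -> bool:
    return days_inactive > 30

# Trained and predictive: behavior is learned from data and shifts as data accrues.
X = [[5, 12], [40, 1], [2, 30], [55, 0]]  # [days_inactive, purchases_last_90d]
y = [0, 1, 0, 1]                          # observed churn outcomes
model = LogisticRegression().fit(X, y)

print(churn_rule(35))               # always the same answer for the same input
print(model.predict([[35, 2]])[0])  # answer improves as new outcomes are folded in
```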

Voice and XR Interfaces for Businesses and Consumers

We believe the shift we are seeing for human computer interactions will be as fundamental as the mouse click was for replacing the command line or touch/text was for the rise of mobile computing. This shift will be as pertinent for the enterprise as it is for consumers, and in fact will serve to further blur the lines between productivity and social communication.

With voice, we are most excited by companies that can leverage existing platforms such as Alexa to create a tools layer, or build intelligent vertical end-service applications.

In the realm of XR (from VR to AR), we believe this is a long game. VR will not be an overnight phenomenon, but will play out over the next 5 years as mobile phones become VR capable and, particularly, as truly immersive VR headsets become less expensive and less cumbersome. We are committed to this future and are particularly focused on VR/AR technologies that bring the major innovation of “presence” into a shared or social space, as well as the “picks-and-shovels” technology the XR community needs now to start building, even in advance of a large-scale installed base of headsets.

Vertical Market Applications That Use Proprietary Data Sets and ML/AI

As algorithms become more accessible through open-source libraries and platforms such as the one our portfolio company Algorithmia provides, we believe that proprietary data will be the bottleneck for intelligent apps. Companies and products with ML at their core must figure out how to acquire, augment, and clean proprietary, workable data sets to train their machine learning models. We are excited about companies with these data sets, as well as companies, such as Mighty AI, that help build these data sets or work with companies to help them leverage their proprietary data to deliver business value.

One area where we see this happening is when ML/AI and proprietary data are applied to intelligent apps in vertical markets. Vertical market focus allows companies to amass rich data sets and domain expertise at a far faster pace than companies building software that tries to be omni-intelligent, providing both product and go-to-market advantages. Most industry verticals are ripe for this innovation, but several stand out, including manufacturing, healthcare, insurance/financial services, energy, and food/agriculture.

AI, IoT and Edge Computing

Linda Lian

IoT can be an ambiguous term, but fundamentally we see the explosion of devices connected to the Internet creating an environment where enterprise decision-making and everyday consumer life will depend crucially on real-time data processing, analytics, and shorter response times, even in areas where connectivity may be inconsistent. Real-time response is crucial to success and is difficult to achieve in the centralized, cloud-based model of today. For example, instant communication between autonomous vehicles cannot afford to be dependent on internet access or the latency of connecting to a cloud server and back. Edge computing technologies aim to solve this by bringing the power of cloud computing to the source of the data. We are particularly committed to companies building technologies focused on solving how to bring AI, deep learning, machine vision, speech recognition, and other compute-heavy services to resource-constrained and portable devices, and on improving communication between them.
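One common technique for squeezing compute-heavy models onto constrained devices is quantization; the toy sketch below, with invented numbers, shows the core idea of trading a little precision for a 4x smaller footprint:

```python
# A toy illustration of 8-bit quantization, one way models are shrunk to fit
# resource-constrained edge devices; the weight values here are invented.
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)  # stand-in for a layer's weights

scale = np.abs(weights).max() / 127.0                  # map the float range onto int8
quantized = np.round(weights / scale).astype(np.int8)  # 4x smaller to store and ship
dequantized = quantized.astype(np.float32) * scale     # approximate reconstruction

print("max reconstruction error:", np.abs(weights - dequantized).max())
```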

Another facet of IoT where we continue to have investment interest is new vertical devices for consumer (home, vehicle, wearable, retail), healthcare, and industrial infrastructure (electrical grid, water, public safety), along with the supporting infrastructure that enables them. Opportunities persist for networking solutions that improve the access, range, power, discoverability, cost, and flexibility of edge devices, and for systems management that provides enhanced security, control, and privacy.

Commerce Experiences that Bridge Digital to Physical

Retail is in a state of flux, and technologies are disrupting traditional models in more ways than e-commerce. First, physical retail isn’t going away, but it has a fresh new look. 85% of shoppers say they prefer shopping in stores due to a variety of factors, including seeing the product and the social aspect. This has led the new generation of web-native brands such as Indochino, Warby Parker, Glossier and Bonobos to open stores – but they are very different, carrying little physical inventory and geared toward intimacy with customers and helping buyers find the right product.

Second, the decreasing cost of IoT hardware technologies such as Impinj’s RFID, advancements in distributed computing, and intelligent software such as computer vision will fundamentally alter physical retail experiences. Experiments are already underway at Amazon Go, where shoppers can pick what they want and casually stroll out without waiting in a check-out line.

Within e-commerce, vertically integrated, direct-to-consumer models remain viable and compelling. By bypassing costly distribution channels, companies like Dollar Shave Club, Blue Apron, and Stitch Fix can build strong brands and intimate customer experiences. Marketplaces that leverage underutilized resources or assets – and the technology that underlies these marketplaces – remain relevant and compelling, particularly for the millennial generation that prioritizes access over ownership.

Security and Data Privacy

While certain security categories have been massively over-funded, new investment opportunities continue to arise. Security and data privacy are areas of massive concern for businesses, particularly in the current macro environment. Internally, enterprises demand full visibility, remediation tools, and monitoring capabilities to guard against increasingly sophisticated attacks. Particularly vulnerable are organizations that house massive amounts of customer data, such as financial services firms, big retailers, healthcare providers, and the government. Externally, the collection and analysis of massive amounts of real-time consumer behavioral and personal data is the bread and butter of sales, marketing, and product efforts. But new privacy laws in the US, and imminent ones from the EU, are creating heightened awareness of both the control and the security of this data. We continue to be interested in companies and technologies that take novel approaches to protecting consumer data and helping corporations and organizations protect their assets.

Technologies Supporting Autonomous Vehicles

Transportation technology is experiencing a massive disruption. Autonomous driving will be the biggest innovation in automobiles since the invention of the car, impacting suppliers, car makers, ridesharing, and everything in between. Lines are blurring between manufacturer and technology provider. We believe the value creation in AVs will, not surprisingly, shift to software and the data that makes it intelligent. More innovation is required in areas such as computer vision and control systems. Important advancements also remain to be made in component technologies such as radar, cameras, and other sensors. Indeed, there are billions of edge cases (construction, pedestrians, weather) and a murky regulatory environment that must be ironed out at both the technology and policy levels before the promise of AV becomes a reality.

Additionally, the rise of AV could massively disrupt current modes of car ownership. Fleet and operations management software will become increasingly important as AV transportation-as-a-service becomes more and more tangible. Software and systems for other vehicles including drones, trucks, and ships will also be huge markets and create new investment opportunities.

Seattle and the PNW are emerging as thought leaders in the area of AV – and, we believe, as a technology center of excellence as well – creating new investment opportunities. We are deeply interested in all the threads that go into this complex and massive shift in technology, the car industry and social culture.

Well, there you have it – Madrona’s key investment themes for 2017. Thanks for reading. If you are working on a startup in any of these areas, we would love to talk to you. Please shoot any of us a note – our email addresses are in our bios on our website.

CloudCoreo Joins the Madrona Family

(L-R Jason Needham, CMO; Paul Allen, CTO; S. Somasegar, Venture Partner; Tom Hull, CEO)

It is a lot of fun for me to announce our investment in CloudCoreo and to welcome the team to our Madrona family. Again.

We have had a great experience working with Tom Hull and Jason Needham through the Union Bay Networks journey, and we have been super impressed by Paul Allen’s insight, understanding and passion for bringing together deployment and monitoring for security and compliance as one closed-loop system. We believe this is essential for businesses to manage their cloud operations as cloud infrastructure continues to change significantly.

As the cloud infrastructure world embraces microservices and containers to build and deploy at-scale distributed services, a comprehensive devops solution that enables companies to automate and secure the entire cloud operation across multiple cloud and hybrid cloud environments is a must-have. CloudCoreo is doing just that and has already proven itself to be an invaluable product for many of its early customers.
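CloudCoreo’s product is its own, but the closed-loop idea (continuously audit deployed infrastructure against a policy, then remediate drift) can be sketched generically. The toy example below uses AWS’s boto3 SDK and an example policy of “no world-readable S3 buckets”; it is illustrative only:

```python
# A toy audit-and-remediate loop using AWS's boto3 SDK. This is illustrative
# of the closed-loop concept only and is not CloudCoreo's implementation.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"  # the "public" grantee
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    is_public = any(g["Grantee"].get("URI") == ALL_USERS for g in acl["Grants"])
    if is_public:  # drift detected: close the loop by remediating automatically
        print(f"remediating world-readable bucket: {name}")
        s3.put_bucket_acl(Bucket=name, ACL="private")
```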

We are thrilled to back the compelling vision of this leadership team and be part of a world-class cloud infrastructure and operations focused startup in Seattle, the “Cloud Capital” of the world. All of us at Madrona are jazzed at the potential of what is possible here.

Additionally, we are excited to partner with the folks at Divergent Ventures and Aristos Ventures, and with notable angels who provide a variety of complementary experiences, expertise and connections as we embark on this journey.

Looking forward to a fun, impactful and at-scale journey with the CloudCoreo team!