Our Investment In Fauna, The Data API For Client-Serverless Applications

Today we are announcing our investment in Fauna.

We believe that the next generation of applications will be serverless. These applications can be completely new, “greenfield” applications, like a dynamic Jamstack web application, or new functionality added to an existing application or service that pairs a rich mobile or web client with a serverless back end. We think of this model of development as the “Client-Serverless” model, and it represents the fourth generation of application architectures.

Fauna helps developers simplify code, reduce development and operational costs, and ship faster by replacing their data infrastructure with a single serverless API that is easy to use, maintenance-free, yet full-featured. Fauna is the data API for Client-Serverless applications.

As computing platforms evolve, new opportunities for developer and application platform products are created. Some patterns (like the client/server era) were brought about by new technologies (Windows, SQL Server and the Windows hardware ecosystem), while others arose as existing technologies aligned around important use cases (such as the LAMP stack for web applications) and simply became “the best way to do it.” At any layer of the platform stack, aligning with (or better yet, creating) one of these rising tides can help you scale more efficiently. Changes in infrastructure also create opportunity, as we have seen in the data warehouse market, where Snowflake’s forward-looking bet on exploiting the elasticity and scale of cloud infrastructure has enabled it to disrupt a large existing market.

The database market is massive, and there are always opportunities for new platforms to emerge and differentiate. That has proven difficult, however, because it is expensive to build a new database and even more expensive to sell one. Infrastructure decisions are increasingly driven by developers, so any new platform needs to win the hearts and minds of developers first and foremost. Without that, the only way to land new customers is a deep technical sales process. You can argue that most of the NoSQL era (MongoDB, for example) came about by targeting developers more effectively: through open source and more approachable platforms that were on trend for where applications were headed.

So the formula for success in the database market is straightforward: build a database and latch on to the most important trends and developer technologies. This is what Fauna has done, and it is why they are so well positioned.

Developers are moving en masse to serverless architectures for new applications, marking the dawn of serverless as the next tool chain for building global, hyperscale apps. FaunaDB plugs seamlessly into this new ecosystem and uniquely extends the serverless experience all the way to the database. This developer journey began with the move to the cloud, but Fauna correctly identified serverless as the next frontier for cloud and has succeeded in building the database of choice for this new era.

FaunaDB is unique in the market in combining the following attributes into a single data API (a minimal usage sketch follows the list):

  1. Focus on developer productivity: Web-native API with GraphQL, custom business logic and integration with the serverless ecosystem for any framework
  2. Modern, no-compromise programmable platform: Underlying globally distributed storage and compute engine that is fast, consistent and reliable, with a modern security infrastructure
  3. No database operations: Total freedom from database operations at any scale
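
To make these attributes concrete, here is a minimal sketch of the developer experience, assuming Fauna’s Python driver (“faunadb” on PyPI) and a hypothetical “posts” collection; the secret and names are placeholders, not from the announcement:

```python
# A minimal sketch, assuming Fauna's Python driver and a hypothetical
# "posts" collection; the secret is a placeholder.
from faunadb import query as q
from faunadb.client import FaunaClient

client = FaunaClient(secret="YOUR_FAUNA_SECRET")  # per-database access key

# Create a document; the API call is the whole backend, with no servers to run.
new_post = client.query(
    q.create(q.collection("posts"), {"data": {"title": "Hello, Fauna"}})
)

# Read it back; Fauna serves strongly consistent reads from any region.
fetched = client.query(q.get(new_post["ref"]))
print(fetched["data"]["title"])
```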

Consequently, Fauna has seen its developer community grow quickly to over 25,000 users over the past year and has developed one of the strongest brands within the serverless and Jamstack ecosystem.

While it is great to identify a massive opportunity with a differentiated product, the most important part of investing in a company is the team.

The two co-founders, Evan Weaver and Matt Freels, are amazing engineering and product leaders who were instrumental in building scalable, distributed systems at Twitter, where they saw where the world was moving and went on to build Fauna to fulfill that vision. They built Fauna as a 100% remote team from day one, with the right focus on communication and collaboration to enable a high-performing team aligned on a common vision. With much of the tech industry now talking about remote work, Evan and Matt have been leaders in adopting the “future of work” and setting up a strong culture for success as the team continues to scale in the post-COVID-19 era.

Eric Berg, who recently joined Fauna as CEO, is somebody I have worked with at Microsoft in the past; he was a key leader at one of Madrona’s portfolio companies (Apptio) and most recently the Chief Product Officer at Okta. During his eight years at Okta, he helped take a pre-Series A company through IPO, turning an identity product no one was sure they needed into a huge success.

And of course, I am very excited to have the opportunity to work with Bob Muglia again as the Chairman of the Board at Fauna. I have worked with Bob over the decades, initially at Microsoft and more recently at Snowflake, and am thrilled to be able to do so again here. Bob and I share a common vision of Client-Serverless as the next-generation application model: applications composed of Internet-connected services using standard REST and GraphQL APIs, with the Jamstack and the browser as the universal client and a globally distributed database as a cornerstone of the ecosystem.

With such a stellar team, a great product and a massive potential opportunity, wanting to be a part of this journey was a no-brainer for me, and that is a big part of why we decided to invest in Fauna. Looking forward to the journey!

The Remaking of Enterprise Infrastructure – Investment Themes For Next Generation Cloud

Enterprise infrastructure has been one of the foundational investment themes here at Madrona since the inception of the firm. From the likes of Isilon to Qumulo, Igneous, Tier 3, and to Heptio, Snowflake and Datacoral more recently, we have been fortunate to partner with world-class founders who have reinvented and redefined enterprise infrastructure.

For the past several years, with enterprises rapidly adopting cloud and open source software, we have primarily focused on cloud-native technologies and developer-focused services that enable the move to cloud. We invested in categories like containerization, orchestration, and CI/CD that have now considerably matured. Looking ahead, with cloud adoption entering the middle innings, technologies such as machine learning truly coming into play, and cloud-native innovation continuing at a dizzying pace, we believe that enterprise infrastructure is going to be reinvented yet again. Infrastructure as we know it today will look very different in the next decade. It will become much more application-centric and abstracted, maybe even fully automated, with specialized hardware often available to address the needs of next-generation applications.

As we wrote in our recent post describing Madrona’s overall investment themes for 2019, this continued evolution of next-generation cloud infrastructure remains the foundational layer of the innovation stack against which we primarily invest. In this piece, we go deeper into the categories where we see ourselves spending the most time, energy and dollars over the next several years. While these categories are arranged primarily from a technology-trend standpoint, they also align with where we anticipate the greatest customer needs for cost, performance, agility, simplification, usability, and enterprise-ready features.

Management of cloud-native applications across hybrid infrastructure

2018 was undeniably the year of “hybrid cloud.” AWS announced Outposts, Google released GKE On-Prem and Microsoft beefed up Azure Stack (first released in 2017). The top cloud providers officially recognized that not every workload will move to the cloud and that the cloud will need to go to those workloads. However, while not all computing will move to public clouds, we firmly believe that all computing will eventually follow a cloud model, offering automation, portability and reliability at scale across public clouds, on-prem and every hybrid variation in between.

In this “hybrid cloud forever” world, businesses want more than just the ability to move workloads between environments. They want consistent experiences so that they can develop their applications once and run them anywhere with complete visibility, security and reliability, with a single playbook for all environments.

This leads to opportunities in the following areas:

  • Monitoring and observability: As more and more cloud-native applications are deployed in hybrid environments, enterprises will demand complete monitoring and observability to know exactly how their applications are running. The key will be to offer a “single pane of glass” (complete with management) across multiple clouds and hybrid environments, thereby building a moat against the “consoles” offered by each public cloud provider. More importantly, the next-generation monitoring tools will need to be intelligent in applying Machine Learning to monitor and detect – potentially even remediate – error conditions for applications running across complex, distributed and diverse infrastructures.
  • SRE for the masses: According to Joe Beda, the co-founder of Heptio, “DevOps is a cultural shift whereby developers are aware of how their applications are run in a production environment and the operations folks are aware and empowered to know how the application works so that they can actively play a part in making the application more reliable.” The “operations” side of the equation is best exemplified by Google’s highly trained (and compensated) Site Reliability Engineers (SREs). As cloud adoption matures further, we believe that other enterprises will begin to embrace the SRE model but will be unable to attract or retain Google-caliber SRE talent. Thus, there will be a need for tools that simplify and automate this role and help enterprise IT teams become Google-like operators delivering the performance, scalability and availability demanded by enterprise applications.
  • Security, compliance and policy management: Cloud, where enterprises cede direct control over the underlying infrastructure, places unique security demands on cloud-native applications. Security ceases to be an afterthought: it must now be designed into applications from the beginning, and applications must be operated with the security posture front and center. This has created a new and growing category of cloud-native security companies. Current examples include portfolio company Tigera, which has become the leader in network security for Kubernetes environments, and container security companies like Aqua, StackRox and Twistlock. In addition, data management and compliance, not just for data at rest but also for data in motion between distributed services and infrastructures, create a major pain point for CIOs and CSOs. Integris addresses the significant associated privacy considerations, partly fueled by GDPR and its clones. The holy grail is to analyze data without compromising privacy. Technologies such as secure enclaves and blockchains are also enabling interesting opportunities in this space, and we expect to see more.
  • Microservices management and service mesh: With applications increasingly becoming distributed, open source projects such as Istio (Google) and Envoy (Lyft) have emerged to help address the great need to efficiently connect and discover microservices. While Envoy has seen relatively wide adoption, it has acted predominantly as an enabler for other services and businesses such as monitoring and security. With next-generation applications expected to leverage the best-in-class services, regardless of which cloud/on-prem/hybrid infrastructure they are run on, we see an opportunity to provide a uniform way to connect, secure, manage and discover microservices (run in a hybrid environment).
  • Streams processing: Customers are awash in data and events from across these hybrid environments, including data from server logs, network wire data, sensors and IoT devices. Modern applications need to handle the breadth and volume of this data efficiently while delivering new real-time capabilities. Streams processing is one of the most important areas of the application stack, enabling developers to unlock the value in these sources of data in real time (a minimal sketch of the core pattern follows this list). We see fragmentation in the market across various approaches (Flink, Spark, Storm, Heron, etc.) and an opportunity for convergence. We will continue to watch this area to understand whether a differentiated company could be created.
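
As one way to make the streams-processing item concrete, here is a minimal, engine-agnostic sketch in Python of the core pattern those systems industrialize, a rolling per-key aggregate over an unbounded event stream; the window length and helper name are illustrative:

```python
# Engine-agnostic sketch of windowed stream aggregation: count events per key
# over a sliding 60-second window. Engines like Flink or Spark add
# distribution, fault tolerance and exactly-once semantics on top of this idea.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
windows = defaultdict(deque)  # key -> timestamps of events inside the window

def record(key, now=None):
    """Ingest one event and return the live count for `key` in the window."""
    now = time.time() if now is None else now
    events = windows[key]
    events.append(now)
    while events and events[0] <= now - WINDOW_SECONDS:  # evict expired events
        events.popleft()
    return len(events)

# e.g. call record("sensor-42") on every incoming reading
```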

Abstraction and automation of infrastructure

While containerization and the other CNCF projects promised simplification of dev and ops, the reality has turned out to be quite different. To develop, deploy and manage a distributed application today, both dev and ops teams need to be experts in a myriad of tools, from version control, orchestration systems, CI/CD tools and databases to monitoring, security and more. The increasingly crowded CNCF landscape is a good reflection of that growing complexity. CNCF’s flagship conference, KubeCon, was hosted in Seattle in December and illustrated both the interest in cloud-native technologies (attendance has grown 8x since 2016 to over 8,000) and the need for increased usability, scalability, and help moving from experimentation to production. As a result, in the next few years we anticipate that an opposite trend will take effect. We expect infrastructure to become far more “abstracted,” allowing developers to focus on code and letting the “machine” take care of the nitty-gritty of running infrastructure at scale. Specifically, we see opportunities in the following areas:

  • Serverless becomes mainstream: For too long, applications (and thereby developers) have remained captive to the legacy infrastructure stack, in which applications were designed to conform to the infrastructure and not the other way around. Serverless, first introduced by AWS Lambda, broke that mold. It allows developers to run applications without having to worry about infrastructure and to combine their own code with best-in-class services from others. While this has created a different concern for enterprises (applications architected to use Lambda can be difficult to port elsewhere), the benefits of serverless, in particular rapid product experimentation and cost, will compel a significant portion of cloud workloads to adopt it. We firmly believe that we are at the very beginning of serverless adoption, and we expect to see many more opportunities to further facilitate serverless apps across infrastructure, similar to Serverless.com (a toolkit for building serverless apps on any platform) and IOpipe (monitoring for serverless apps).
  • Infrastructure backend as code: The complexity of building distributed applications often far exceeds the complexity of the app’s core design and wastes valuable development time and budget. For every app a developer wants to build, they end up writing the same low-level distributed-systems code again and again. We believe that will change, and that the distributed-systems backend will be automatically created and optimized for each app. Companies like Pulumi and projects like Dark are already great examples of this need (see the sketch after this list).
  • Fully autonomous infrastructure: Automating the management of systems has been the holy grail since the advent of enterprise computing. However, with the availability of “infinite” compute in the cloud, telemetry data, and mature ML/AI technology, we anticipate significant progress towards the vision of fully autonomous infrastructure. Even with cloud services, many complex configuration and management choices need to be made to optimize the performance and cost of several infrastructure categories. These choices range from capacity management in a broad range of workloads to more complex decisions in specific workloads such as databases. In databases, for example, there has been very promising research on applying machine learning to everything from basic configuration to index maintenance. We believe there are exciting capabilities to be built and potentially new companies to be grown in this area.
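
To illustrate the “infrastructure backend as code” idea mentioned above, here is a minimal sketch using Pulumi’s Python SDK; the resource names are ours, not from any particular customer:

```python
# A minimal sketch of infrastructure as code with Pulumi's Python SDK
# (requires the pulumi and pulumi-aws packages plus AWS credentials);
# resource names are illustrative.
import pulumi
import pulumi_aws as aws

# A cloud resource declared as an ordinary Python object; running
# "pulumi up" computes and applies the create/update/delete plan.
bucket = aws.s3.Bucket("app-assets", tags={"env": "dev"})

# Stack outputs resolve once the deployment completes.
pulumi.export("bucket_name", bucket.id)
```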

Specialized infrastructure

Finally, we believe that specialized infrastructure will make a comeback to keep up with the demands of next-generation application workloads. We expect to see that in both hardware and software.

  • Specialized hardware: While ML workloads continue to proliferate and general-purpose CPUs (and even GPUs) struggle to keep up, new specialized hardware has arrived in the cloud, from Google’s TPUs to Amazon’s new Inferentia chips. Microsoft Azure now offers FPGA-based acceleration for ML workloads, while AWS offers FPGA accelerators that other companies can build upon; a notable example is the FPGA-based genomics acceleration built by Edico Genome. While we are unlikely to invest in a pure hardware company, we do believe that the availability of specialized hardware in the cloud will enable a variety of new investable applications involving rich media, medical imaging, genomic information, and more that were not possible until recently.
  • Hardware-optimized software: With ML coming to every edge device – sensors, cameras, cars, robots, etc. – we believe that there is an enormous opportunity to optimize and run models on hardware endpoints with constrained compute, power and/or bandwidth. Xnor.ai, for example, optimizes ML models to run on resource-constrained edge devices. More broadly, we envision opportunities for software-defined hardware and open source hardware designs (such as RISC-V) that enable hardware to be rapidly configured specifically for various applications.

Open Source Everywhere

For every trend in enterprise infrastructure, we believe that open source will continue to be the predominant delivery and license mechanism. The associated business model will most likely include a proprietary enterprise product built around an open core, or a hosted service where the provider runs the open source as a service and charges for usage.

Our own yardstick for investing in open source-based companies remains the same. We look for companies based around projects that can make a single developer look like a “hero” by making them successful at some important task. We expect the developer mindshare for a given open source project to be reflected in metrics such as GitHub stars, growth in monthly downloads, etc. A successful business can then be created around that open source project to provide the capabilities that a team of developers, and eventually an enterprise, would need and pay for.

Conclusion

These categories are the “blueprints” we have in mind as we look for the next billion-dollar business in enterprise infrastructure. Those blueprints, however, are by no means exhaustive. The best founders always surprise us with their ability to look ahead and predict where the world is going before anyone else does. So, while this post describes some of the infrastructure themes we are interested in at Madrona, we are not exclusively thesis-driven. We are primarily founder-driven, but we also believe that having a thoughtful point of view about the trends driving the industry, while being humble, curious and open-minded about opportunities we have not thought as deeply about, will enable us to partner with and help the next generation of successful entrepreneurs. So, if you have further thoughts on these themes, or are thinking about building a new company in any of these areas, please reach out to us!

Current or previous Madrona Venture Group portfolio companies mentioned in this blog post: Datacoral, Heptio, Igneous, Integris, IOpipe, Isilon, Pulumi, Qumulo, Snowflake, Tier 3, Tigera and Xnor.ai

The Road to Cloud Nirvana: The Madrona Venture Group’s View on Serverless

S. Somasegar – Managing Director, Madrona Venture Group

The progression over the last 20 years from on-premises servers, to virtualization, to containerization, to microservices, to event-driven functions and now to serverless computing is allowing software development to become more and more abstracted from the underlying hardware and infrastructure. The combination of serverless computing, microservices, event-driven functions and containers forms a truly distributed computing environment that enables developers to build and deploy at-scale distributed applications and services. This abstraction between applications and hardware allows companies and developers to focus on their applications and customers, not on scaling, managing, and operating servers or runtimes.

In today’s cloud world, more and more companies are moving towards serverless products like AWS Lambda to run application backends, respond to voice and chatbot requests, and process streaming data, drawn by the benefits in scaling, availability, and cost, and, most importantly, the ability to innovate faster because developers no longer need to manage servers. We believe that microservices and serverless functions will form the fabric of the intelligent applications of the future. The massive movement towards containers has validated the market demand for hardware abstraction and the ability to “write once, run anywhere,” and serverless computing is the next stage of this evolution.

Madrona’s Serverless Investing Thesis

Dan Li – Principal, Madrona Venture Group

Today, developers can use products like AWS Lambda, S3, and API Gateway in conjunction with services like Algorithmia, to assemble the right data sources, machine learning models, and business logic to quickly build prototypes and production-ready intelligent applications in a matter of hours. As more companies move towards this mode of application development, we expect to see a massive amount of innovation around AI and machine learning, application of AI to vertically-focused applications, and new applications for IOT devices driven by the ability for companies to build products faster than ever.
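
As an illustrative (not prescriptive) sketch of that assembly, here is an AWS Lambda handler that calls a hosted model through Algorithmia’s Python client; the algorithm path, environment variable and event shape are placeholders:

```python
# Illustrative sketch: a Lambda handler calling a hosted ML model via
# Algorithmia's Python client. The algorithm path and env var are placeholders.
import os
import Algorithmia

client = Algorithmia.client(os.environ["ALGORITHMIA_API_KEY"])

def handler(event, context):
    # assumes API Gateway delivers parsed JSON input such as {"text": "..."}
    text = event.get("text", "")
    result = client.algo("nlp/SentimentAnalysis/1.0.5").pipe(text).result
    return {"sentiment": result}
```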

For all the above-mentioned reasons, Madrona has made several investments in companies building tools for microservices and serverless computing in the last year, and we continue to look for opportunities in this space as cloud infrastructure evolves rapidly. Nevertheless, with the move towards containerization and serverless functions, it can be much harder to monitor application performance, debug applications, and ensure that applications have the correct security and policy settings. For example, SPIFFE (Secure Production Identity Framework for Everyone) provides some great context for the kinds of identity and trust-related work that needs to happen for people to be able to build, share, and consume microservices in a safe and secure manner.

Below, you’ll hear from three of the startups in our portfolio about how they are building tools that enable developers and enterprises to adopt serverless approaches, or leveraging serverless technologies to innovate faster and better serve their customers.

Portfolio Company Use Cases

Algorithmia empowers every developer and company to deploy, manage, and share their AI/ML model portfolio with ease. Algorithmia began as the solution to co-founders Kenny Daniel and Diego Oppenheimer’s frustration at how inaccessible AI/ML algorithms were: Kenny was tired of seeing his algorithms sit unused in academia, and Diego was tired of recreating algorithms he knew already existed for his work at Microsoft.

Kenny and Diego created Algorithmia as an open marketplace for algorithms in 2013, and today it serves over 60,000 developers. From the beginning, Algorithmia has relied on serverless microservices, which has allowed the company to quickly expand its offerings to include hosting AI/ML models and full enterprise AI Layer services.

AI/ML models are optimally deployed as serverless microservices, which allows them to scale quickly and effectively to handle any influx of data and usage. This is also the most cost-efficient method for consumers, who pay only for the compute time they use, and it empowers data scientists to consume and contribute algorithms at will. Every algorithm committed to the Algorithmia Marketplace is named, tagged, cataloged, and searchable by use case, keyword, or title. This has enabled Algorithmia to become an AWS Lambda Code Library Partner.

In addition to the Algorithm Marketplace, Algorithmia uses the serverless AI Layer to power two additional services: hosting AI/ML models and enterprise services for government agencies, financial institutions, big pharma, and retail. The AI Layer is cloud, stack, and language agnostic. It serves as a data connector, pulling data from any cloud or on-premises server. Developers can write their algorithms in any supported language (Python, Java, Scala, NodeJS, Rust, Ruby, and R), and a universal REST API is automatically generated, allowing any consumer to call and chain algorithms in any combination of languages (a sketch of such a call follows below). Running as a Kubernetes-orchestrated Docker system allows Algorithmia’s services to operate with a high degree of efficiency.
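
As a sketch of what that auto-generated REST interface looks like from a consumer’s point of view (the API key is a placeholder, and the “demo/Hello” path is assumed here as a stand-in for any marketplace algorithm):

```python
# Sketch of calling an Algorithmia-generated REST endpoint with plain HTTP;
# the API key is a placeholder and the algorithm path is a stand-in.
import requests

resp = requests.post(
    "https://api.algorithmia.com/v1/algo/demo/Hello/0.1.1",
    json="World",  # the algorithm's input, JSON-encoded by requests
    headers={"Authorization": "Simple YOUR_API_KEY"},
)
print(resp.json()["result"])
```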

As companies add AI/ML capabilities across their organizations, they have the opportunity to escape the complications that come with a monolithic application and begin to implement a serverless microservice architecture. Algorithmia provides the expertise and infrastructure to help them be successful.

Pulumi saw an opportunity in 2017 to fundamentally reimagine how developers build and manage modern cloud systems, thanks in large part to the rise of serverless computing intersecting with advances in containers and managed cloud infrastructure in production. By using programming languages and tools that developers already know, rather than obscure DSLs and less capable, home-grown templating solutions, Pulumi’s customers are able to focus on application development and business logic rather than infrastructure.

As an example, one of Pulumi’s enterprise customers was able to move from a dedicated team of DevOps engineers to a combined engineering organization, reducing their cloud infrastructure scripts to 1/100th the size in a language the entire team already knew, and is now substantially more productive than ever in building and continuously deploying new capabilities. The resulting system uses the best of what the modern cloud has to offer: dozens of AWS Lambdas for event-driven tasks (replacing a costly and complex queuing system), several containers that can run in either ECS or Kubernetes, and managed AWS services like Amazon CloudFront, Amazon Elasticsearch Service, and Amazon ElastiCache. It now runs at a fraction of its pre-Pulumi cost, and the team can spin up entirely new environments in minutes where it used to take weeks.

Before the recent convergence of serverless, containers, and hosted cloud infrastructure, such an approach simply would not have been possible. In fact, we at Pulumi believe that the real magic is in these approaches living in harmony with one another. Each has its own strengths: containers are great for complex stateful systems, often taking existing codebases and moving them to the cloud; serverless functions are perfect for ultra-low-cost, event- and API-oriented systems; and hosted infrastructure lets you focus on your application-specific requirements instead of reinventing the wheel by manually hosting something your cloud provider can do better and cheaper. Arguably, each is “serverless” in its own way, because infrastructure and servers fade into the background. This sea change has enabled Pulumi to build a single platform and management suite that realizes this entire spectrum of technologies, as the sketch below illustrates.
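
Here is a small sketch of that spectrum in Pulumi’s Python SDK, with illustrative names: a serverless function and the managed plumbing around it declared in one ordinary program, next to whatever containers or managed services the same stack needs:

```python
# Illustrative sketch: a serverless function and its IAM role declared in one
# Pulumi Python program. Names are ours; "./app" must contain index.py.
import json
import pulumi
import pulumi_aws as aws

role = aws.iam.Role("fn-role", assume_role_policy=json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Principal": {"Service": "lambda.amazonaws.com"},
    }],
}))

fn = aws.lambda_.Function(
    "events-handler",
    runtime="python3.9",
    handler="index.handler",  # file and function inside ./app
    role=role.arn,
    code=pulumi.FileArchive("./app"),
)

pulumi.export("function_name", fn.name)
```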

The future is bright for serverless- and container-oriented cloud architectures, and Pulumi is excited to be right at the center of it helping customers to realize the incredible benefits.

IOpipe co-founders Erica Windisch and Adam Johnson went from virtualizing servers at companies like Docker to going “all in” on serverless in 2016. They identified serverless as the next revolution in infrastructure, coming roughly 10 years after the launch of AWS EC2. With computing shifting towards a serverless world, new challenges emerge. From dozens of interviews with production Lambda users, Erica and Adam found that one of the major obstacles to adopting serverless was a lack of visibility and instrumentation. In 2016, they co-founded IOpipe to focus on helping companies build, ship, and run serverless applications, faster.

IOpipe is an application operations platform built for serverless architectures running on AWS Lambda. By collecting high-fidelity telemetry within Lambda invocations, users can quickly correlate important data points to discover anomalies and identify issues. IOpipe is a cloud-based SaaS product that provides tracing, profiling, metrics, logs, alerting, and debugging tools to power up operations and development teams.

IOpipe enables developers to debug code faster by providing real-time visibility into their functions as they develop them. Developers can dig deep into what’s really happening under the hood with tools such as profiling and tracing. Once functions are in production, IOpipe provides a rich set of observability tools to help surface issues before they affect end users (a minimal instrumentation sketch follows below). Customers who previously spent days debugging tough issues in production can now find the root cause in minutes using IOpipe.
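
A minimal sketch of what that instrumentation looks like, assuming IOpipe’s Python agent is used as its documentation described (the token is a placeholder):

```python
# Minimal sketch, assuming IOpipe's Python agent: wrap the Lambda handler so
# each invocation reports timing, errors and metrics. The token is a
# placeholder.
from iopipe import IOpipe

iopipe = IOpipe("YOUR_IOPIPE_TOKEN")

@iopipe
def handler(event, context):
    # application logic; IOpipe records duration, cold starts, exceptions, etc.
    return {"ok": True}
```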

Since launching the IOpipe service in Q3 of 2017, the company has seen customers ranging from SaaS startups to large enterprises enabling their developers to build and ship Lambda functions into production at an incredibly rapid pace. What previously took one customer 18 months can now be done in just two months.

IOpipe works closely with AWS as an advanced-tier partner, enabling AWS customers to embrace serverless architectures with power tools such as IOpipe.