Act Natural: Next Generation User Interfaces

This is the third in our series of four deep dives into our technology-based investing themes, outlined in January.

Computers, and the ways in which we interact with them, have come a long way. Pre-1984, the only way to navigate a computer was by using a keyboard. Then Apple (with help from Xerox PARC) came along and introduced the world to the mouse. Suddenly, a whole host of activities became possible, such as free-form graphical drawing. While this may seem trivial now, at the time it allowed for the expansion of industries such as graphic design, publishing, and digital media, and the world hasn’t looked back since.

Fast forward to the present: we are seeing an awe-inspiring number of ways that technology is continuing to advance the user interface between humans and computers. Augmented reality, virtual reality, mixed reality (collectively “extended reality,” or XR), voice, touch screens, haptics, gestures, and computer vision, to name a few, are developing and will change the computing experience as we know it. In addition, they will create new industries and business opportunities in ways that we cannot yet imagine. At Madrona we are inspired by what is possible and ready to provide capital and company-building help for the next generation of computer interaction modalities.

We, along with many others in the industry, were wowed by the magic of VR specifically and dove in early with a couple of key investments in platforms – some worked and some didn’t. We quickly came to realize that VR headset adoption wasn’t going to be as fast as initially predicted, but we remain strong believers in the ability of all types of next generation user interfaces to change how we experience technology in our lives.

Last month we published our main investment themes, including a summary of our belief in next gen UI. Here is a deep dive into our updated take on the future of both voice and XR technologies.

Voice technology is becoming ubiquitous

Voice tech, more so than any other new UI in recent history, has reached consumers swiftly and relatively easily. While the most common use cases remain fairly basic, voice tech’s affordability, cross-operating system capabilities, and ease-of-use have helped drive adoption. The tech giants are pouring billions into turning their voice platforms and assistants such as Amazon Alexa, Google Assistant, and Apple’s Siri into sophisticated tools designed to become an integral part of our daily lives. Driven by their affordability (the Amazon Echo Dot sells for $30) and ease of use (all you need to do is speak), voice-enabled devices are becoming ubiquitous. Amazon recently reported that 100 million Alexa devices have been sold. Add to that the over 2 billion Apple iOS devices that come pre-loaded with Siri and the almost 1 billion (mostly Android phone) devices that come pre-loaded with Google Assistant, and it is clear that voice technology as a platform has reached unprecedented scale and market penetration in a very short period of time.

Figure 1: Smart speakers are showing the fastest technology adoption in history

The key question now: how will voice become a platform for other businesses? In order for new business models to thrive off of voice technology, we think three things have to happen:

(1) Developers need new tools for creating, managing, testing, analyzing, and personalizing the voice experience,
(2) Marketers need new tools for monetization and cross-device and cross-platform brand/content management, and
(3) Businesses need to adapt to a voice-enabled world and enhance their products with voice-enabled intelligence, performance, and productivity.
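To make the developer-tooling gap concrete, here is a minimal, purely illustrative sketch of the routing layer at the heart of most voice apps: mapping a recognized intent to a handler that returns the text a device would speak. The intent names, slot fields, and wiring below are hypothetical – real platforms such as Alexa and Google Assistant define their own request/response schemas, and abstracting away exactly this kind of plumbing is what the new tools layer needs to do.

```python
# Hypothetical voice "skill" backend: route a recognized intent (plus its
# extracted "slots") to a handler and return the reply text to be spoken.

def handle_weather(slots):
    city = slots.get("city", "your area")
    return f"Here is the forecast for {city}."

def handle_reorder(slots):
    item = slots.get("item", "your last order")
    return f"I have added {item} to your cart."

# Intent name -> handler function. Real voice toolkits generate and manage
# this routing layer, along with testing and analytics around it.
INTENT_HANDLERS = {
    "GetWeather": handle_weather,
    "ReorderItem": handle_reorder,
}

def handle_request(intent_name, slots):
    """Dispatch one recognized utterance to the matching handler."""
    handler = INTENT_HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(slots)
```

Even this toy version hints at why creation, testing, and personalization tooling matters: every new use case adds another intent, more slots to validate, and more responses to measure.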

Similar to the early days of web and mobile, the number of voice applications is growing fast, but monetization is nascent and users sometimes struggle with discoverability. For example, Alexa offers over 80,000 “skills,” but if you ask most Alexa owners how they use their device, you may notice that use cases remain fairly basic:

Figure 2: Voice assistants largely used for information & entertainment use cases.

The next generation of voice is multi-modal: instead of voice-only devices, we are moving toward a great wave of “voice-also” devices. Examples include Volkswagen’s voice recognition for making calls, voice shopping, Roku voice commands for watching TV, and Nest smart thermostats that use Google Assistant. Tech giants and startups alike are racing to integrate voice into everything, from your car “infotainment” system to your microwave to screen-first devices, so they can leverage the ease of use of voice to unlock new functionality.

Figure 3: Many companies are springing up to help businesses create, monitor, and monetize voice applications

At Madrona, our investments in Pulse Labs and Saykara give us unique visibility into next gen voice applications and how businesses are looking to reach users with voice services. We believe that voice tech will enable new business models centered around e-commerce and advertising via multi-modal experiences. We see opportunities in creating a tools layer for voice developers and marketers as well as building intelligent vertical applications to solve specific problems. Opportunities exist in enhancing in-vehicle voice capabilities, integration across platforms and applications, home security systems, smart thermostats, and retail experiences that blend the digital and physical, to name a few.

Overall, voice technology is moving very quickly toward broad adoption. With the incredible amount of investment being put into voice technologies, both from the platform providers and from new software developers, we are looking forward to seeing breakthroughs in e-commerce, advertising, and multi-modal experiences. The first hurdle of creating an ecosystem has been cleared with voice-capable devices now in the hands (and homes) of millions of users. The true test will be finding ways to use that technology to solve a broader array of business and consumer problems, and to monetize those capabilities as impact grows.

XR interfaces – still slowly building momentum

XR is a big bet that we believe in long term but our initial estimates of a 3-5-year timeline for when it would hit critical mass (which would have been 2019-2021) were overly optimistic. As we kick off 2019, we have the benefit of time, experience in the market, and a greater appreciation for what it will take for this industry to reach the masses. Our best estimates now push mainstream adoption of ‘premium VR’ (full headset/PC) back another five years to 2024.

Microsoft’s HoloLens, which was released in late 2016, had sold only approximately 50,000 units by mid-2018, before its 100,000-unit deal with the US Army was reported in November 2018. HoloLens 2.0 is slated to be revealed later this month, and we look forward to learning more then. The much-hyped Magic Leap launched a developer product in August 2018, but that was not designed as a mass release. In addition, sales of advanced PC-tethered headsets amount to less than 5 million units. Why the slower-than-anticipated uptake? Mass adoption is facing many headwinds on both sides of the marketplace.

On the consumer side: head-mounted devices (HMDs) are expensive (around $600 per headset plus upwards of $1,500 for the high-end PC needed to run the programs). Usability is also a challenge – people are still getting comfortable with wearing HMDs for long periods of time. On top of this, there are many opportunities for better XR accessories (e.g., foot pedals or “cyber shoes,” improved hand controllers, and other devices such as activity or workout equipment) that would improve the immersive experience.

On the brand/business side: developing and implementing a worthwhile XR offering requires more time, resources, and patience than originally thought. In addition, when done right, an XR experience should provide something specific and unique to the business that cannot be achieved otherwise. Finally, XR platforms aren’t yet standardized, so development requires significant customization.

Altogether, these factors have contributed to the XR market being sub-scale. We ask ourselves now: how many millions of these HMDs need to exist, and which applications or business verticals need to be developed, for XR technology to become a self-supporting industry?

Most XR innovation to date is entertainment-based, with different approaches for attracting new users. For example, HTC’s Vive has VR rooms where you can watch whales and sea life, and there are cross-platform games where you can use lightsabers to slice through oncoming hazards. Against Gravity’s Rec Room is full of communities where you can build your own virtual rooms and worlds.

A trend we’re seeing now, while the XR consumer base slowly builds, is the emergence of virtual worlds or “metaverses” – collective virtual shared spaces created by the convergence of virtually enhanced physical reality and persistent virtual space – that are at first, but may not always be, primarily non-XR experiences (e.g., on Xbox, Twitch, mobile, or YouTube). The key here is gaining a mass following while maintaining a “call option” on letting users flip over to an XR interface as soon as they are ready. For example, Against Gravity’s Rec Room offers quests that are available in both VR and non-VR, and Epic Games’ Fortnite just successfully hosted a record-breaking live, mixed-reality in-game concert with pop DJ Marshmello, setting the stage for more XR possibilities in the future. Entertainment-based applications like these will be the pathway to mass XR adoption.

Beyond entertainment, there are many practical use cases for XR applications that we are excited about. Our investments in virtual reality startups Pixvana and Pluto help us follow customer needs and market movements, and one that we have been watching closely is the growing opportunity of XR adoption within enterprise and commercial use cases. Of note, we have seen an especially high number of applications in the medical training, retail, education, and field service categories. This is supported by the overall trends as reported by VRRoom, pictured below.

Figure 5: Industries of current and prospective XR end-users.

XR has the capability to virtually “place shift,” or transport, the user, enabling many compelling applications such as mixed-reality entertainment, medical/surgical training, field service equipment repair and maintenance, spatial design/modeling (e.g., architecture), remote education, experiential travel/events, and public safety activities, to name just a few. Imagine a world where you could shrink and zoom a virtual mockup of a new building with the flick of a finger to help design the HVAC system, participate in a NASCAR race from the comfort of your living room, or literally join your far-flung friends for an “in person” hangout in a virtual living room – this is the power of XR, and it is not far off. The challenge now is to provide greater variety in content, low-cost hardware, and improved usability and comfort to consumers (we’re looking forward to the rumored lightweight ‘glasses’ under development). In addition, we are supportive of the other mediums that XR companies can invest in now (such as mobile, YouTube, and PC) to provide a bridge to a full premium XR experience in the future.

Together, these next gen user interfaces will feel more natural and make it easier for consumers to access features in a new way

New ‘interaction modes’ like voice, XR, and others will create compelling user experiences that both improve existing experiences and create new ones that previously weren’t possible. We are excited to work with entrepreneurs as they innovate in these new areas. Opportunity for new applications, enabling technologies/devices, and content creation tools/platforms in next gen user interfaces will take us to the future.

Investment Themes for 2019

2018 was a busy year for Madrona and our portfolio companies. We raised our latest $300 million Fund VII, and we made 45 investments totaling ~$130 million. We also had several successful up-rounds and company exits with a combined increase of over $800 million in fund value and over $600 million in investor realized returns. We don’t see 2019 letting up, despite the somewhat volatile public markets. Over the past year we have continued to develop our investment themes as the technology and business markets developed and we lay out our key themes here.

For the past several years, Madrona has primarily been investing against a 3-layer innovation stack that includes cloud-native infrastructure at the bottom, intelligent applications (powered by data and data science) in the middle, and multi-sense user interfaces between humans and content/computing at the top. As 2019 kicks off, we thought it would be helpful to outline our updated, 4-layer model and highlight some key questions we are asking within these categories to facilitate ongoing conversations with entrepreneurs and others in the innovation economy.

For reference, we published our investment themes in previous years, and our thinking since then has both expanded and become more focused as the market has matured and innovation has continued. A quick scan of this prior post illustrates our ongoing focus on cloud infrastructure, intelligent applications, ML, edge computing, and security, as well as how our thinking has evolved.

Opportunities abound within AND across these four layers. Infinitely scalable and flexible cloud infrastructure is essential to train data models and build intelligent applications. Intelligent applications, including natural language processing and image recognition models, power the multi-sense user interfaces like voice activation and image search that we increasingly experience on smartphones and home devices (Amazon Echo Show, Google Home). Further, when those services are leveraged to help solve a physical-world problem, we end up with compelling end-user services like Booster Fuels in the USA or Luckin Coffee in China.

The new layer that we are spending considerable time on is the intersection between digital and physical experiences (DiPhy for short), particularly as it relates to consumer experiences and health care. For consumers, DiPhy experiences address a consumer need and resolve an end-user problem better than a solely digital or solely physical experience could. Madrona companies like Indochino provide solutions in these areas. In a different way, DiPhy is strongly represented in Seattle at the intersection of machine learning and health care, with the incredible research and innovations coming out of the University of Washington Institute for Protein Design, the Allen Institute, and the Fred Hutch Cancer Research Center. We are exploring the ways that Madrona can bring our “full stack” expertise to these health care related areas as well.

While we continue to push our curiosity and learning around these themes, they are guides, not guardrails. We are finding some of the most compelling ideas and company founders where these layers intersect. Current company examples include voice and ML applied to the problem of physician documentation in electronic medical records (Saykara), integrating customer data across disparate infrastructure to build intelligent customer profiles and applications (Amperity), and cutting-edge AI able to run efficiently on resource-constrained edge devices.

Madrona remains deeply committed to backing the best entrepreneurs in the Pacific NW who are tackling the biggest markets in the world with differentiated technology and business models. Frequently, we find these opportunities adjacent to our specific themes, where customer-obsessed founders have a fresh way to solve a pressing problem. This is why we are always excited to meet great founding teams looking to build bold companies.

Here are more thoughts and questions on our 4 core focus areas and where we feel the greatest opportunities currently lie. In subsequent posts, we will drill down in more detail into each thematic area.

Cloud Native Infrastructure

For the past several years, the primary theme we have been investing against in infrastructure is the developer and the enterprise move to the cloud, and specifically the adoption of cloud native technologies. We think about “cloud native” as being composed of several interrelated technologies and business practices: containerization, automation and orchestration, microservices, serverless or event-driven computing, and devops. We feel we are still in the early-middle innings of enterprise adoption of cloud computing broadly, but we are in the very early innings of the adoption of cloud native.

2018 was arguably the “year of Kubernetes” based on enterprise adoption, overall buzz and even the acquisition of Heptio by VMware. We continue to feel cloud native services, such as those represented by the CNCF Trail Map, will produce new companies supporting the enterprise shift to cloud native. Other areas of interest (that we will detail in a subsequent post) include technologies/services to support hybrid enterprise environments, infrastructure backend as code, serverless adoption enablers, SRE tools for devops, open source models for the enterprise, autonomous cloud systems, specialized infrastructure for machine learning, and security. Questions we are asking here include how the relationship between the open source community and the large cloud service providers will evolve going forward and how a broad-based embrace of “hybrid computing” will impact enterprise customer product/service needs, sales channels and post-sales services.

For a deeper dive click here.

Intelligent Applications with ML & AI

The utilization of data and machine learning in production has probably been the single biggest theme we have invested against over the past five years. We have moved from “big data” to machine learning platform technologies such as Turi, Algorithmia and Lattice Data to intelligent applications such as Amperity, Suplari and AnswerIQ. In the years ahead, “every application is intelligent” will likely be the single biggest investment theme, as machine learning continues to be applied to new and existing data sets, business processes, and vertical markets. We also expect to find interesting opportunities in services that enable edge devices to operate with intelligence, industry-specific applications where large amounts of data are being created like life sciences, services to make ML more accessible to the average customer, as well as emerging machine learning methodologies such as transfer learning and explainable AI. Key questions here include (a) how data rights and strategies will evolve as the power of data models becomes more apparent and (b) how to automate intelligent applications to be fully managed, closed loop systems that continually improve their recommendations and inferences.

For a deeper dive click here.

Next Generation User Interfaces

Just as the mouse and touch screen ushered in new applications for computing and mobility, new modes of computer interaction like voice and gestures are catalyzing compelling new applications for consumers and businesses. The advent of Amazon’s Echo and Echo Show, Google Home, and a more intelligent Siri have dramatically changed how we interact with technology in our personal lives. While limited for now to short, simple actions, voice is becoming a common approach for classic use cases like search, music discovery, food/ride ordering, and other activities. Madrona’s investment in Pulse Labs gives us unique visibility into next generation voice applications in areas like home control, ecommerce, and ‘smart kitchen’ services. We are also enthused about new mobile voice/AR business applications for field service technicians, assisted retail shopping (e.g., IKEA’s ARKit furniture app), and many others, including medical imaging/training.

Vision and image recognition are also rapidly becoming ways for people and machines to interact with one another, as facial-recognition security on iPhones and intelligent image recognition systems demonstrate. Augmented and virtual reality are growing much more slowly than initially expected, but mobile phone-enabled AR will become an increasingly important tool for immersive experiences, particularly in visually focused vocations such as architecture, marketing, and real estate. “Mobile-first” has become table stakes for new applications, but we expect to see more “do less, but much better” opportunities in both consumer and enterprise with elegantly designed UIs. Questions central to this theme include (a) what ‘high-value’ new experiences are truly best or only possible when voice, gesture, and the overlay of AR/VR/MR are leveraged? (b) what will be the limits of image recognition (especially facial recognition) in certain application areas? (c) how effective can image-driven systems like digital pathology be at augmenting human expertise? and (d) how will multi-sense point solutions in the home, car, and store evolve into platforms?

For a deeper dive click here.

DiPhy (digital-physical converged customer experiences)

The first twenty years of the internet age were principally focused on moving experiences from the physical world to the digital world. Amazon enabled us to find, discover and buy just about anything from our laptops or mobile devices in the comfort of our home. The next twenty years will be principally focused on leveraging the technologies the internet age has produced to improve our experiences in the physical world. Just as the shift from physical to digital has massively impacted our daily lives (mostly for the better), the application of technology to improve the physical will have a similar if not greater impact.

We have seen examples of this trend through consumer applications like Uber and Lyft as well as digital marketplaces that connect dog owners to people who will take care of their dogs (Rover). Mobile devices (principally smartphones today) are the connection point between these two worlds and as voice and vision capabilities become more powerful so will the apps that reduce friction in our lives. As we look at other DiPhy sectors and opportunities, one where the landscape will change drastically over the coming decades is physical retail. Specifically, we are excited about digital native retailers and brands adding compelling physical experiences, increasing digitization of legacy retail space, and improving supply chain and logistics down to where the consumer receives their goods/services. Important questions here include (a) how traditional retailers and consumer services will evolve to embrace these opportunities and (b) how the deployment of edge AI will reduce friction and accelerate the adoption of new experiences.

For a deeper dive click here.

We look forward to hearing from many of you who are working on companies in these areas and, most importantly, to continuing the conversation with all of you in the community and pushing each other’s thinking around these trends. To that end, over the coming weeks we will post a series of additional blogs that go into more depth in each of our four thematic areas.

Matt, Tim, Soma, Len, Scott, Hope, Paul, Tom, Sudip, Maria, Dan, Chris and Elisa

(to get in touch just go to the team page – our contact info is in our profiles)

Madrona Expands the Team, Adds Talent Director, Venture Partner and Principal

Veteran Tech Talent Executive Shannon Anderson joins as Director of Talent; Luis Ceze, a leader in computer systems architecture, machine learning, and DNA data storage, joins as Venture Partner; Daniel Li is promoted to Principal

We are so excited to announce today some great additions to the Madrona Team. Each of these people is incredibly talented and will add a significant amount to what we can bring to our portfolio companies and to the greater Seattle ecosystem.

Shannon Anderson is joining us as Director of Talent. We expound on her role here.

Luis Ceze is joining the team as Venture Partner. Luis is an award-winning professor of computer science at the University of Washington, where he joined the faculty in 2007. His research focuses on the intersection of computer architecture, programming languages, molecular biology, and machine learning. At UW, he co-directs the Molecular Information Systems Lab, where they are pioneering the technology to store data on synthetic DNA. He also co-directs the Sampa Lab, which focuses on hardware/software co-design and approximate computing techniques for machine learning, enabling efficient edge and server-side training and inference. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the IEEE TCCA Young Computer Architect Award, and the UIUC Distinguished Alumni Award. He is a member of the DARPA ISAT and MEC study groups, and consults for Microsoft.

Luis also has a track record of entrepreneurship. He spent the summer of 2017 with Madrona and has been a vital partner as we have evaluated new ideas and companies for several years. In 2008, Luis co-founded Corensic, a Madrona-backed UW CSE spin-off company. We are excited to have him on board, continuing to build on Madrona’s long-standing relationship with UW CSE and working formally with us to identify new companies and work closely with our portfolio companies.

Last but definitely not least, we promoted Daniel Li to Principal. Daniel joined us nearly three years ago and has been an incredible part of the Madrona team. He works tirelessly not only to analyze new markets and develop investment themes that help us envision future companies, but also to dive deeply into his passions. He has built apps that we use internally on a weekly basis at Madrona and has given a Crypto 101 course to hundreds of people over the past year. He has also proven to be an indispensable partner to entrepreneurs, leading the Madrona investment in fast-growing live streaming company Gawkbox last year. In addition to digital media and blockchain, Dan has done significant work in investment areas including autonomous vehicles, machine learning, and AR/VR. Dan brings an energy, curiosity, and intelligence to everything he does and epitomizes what Madrona looks for in our companies and our team.

We are excited to continue to build the Madrona team to even better help entrepreneurs and further capitalize on the massive opportunity to build world-changing technology companies in the Pacific NW.

100 Demos, 50 Company Pitches & a Year With AR/VR Headsets – What We’ve Learned

We received our first market-ready VR headset at Madrona almost one year ago now, and since then, we have done VR (Vive and Rift) and AR (HoloLens) demos for over 100 people who have stopped by our office to check out the next big computing platform. We have done demos for kids, parents, grandparents, pro football players, elected officials, company executives, gaming enthusiasts, people who never game, and people all over the spectrum from skeptical to extremely excited about VR. We have also participated in over 50 unique VR/AR demos from companies pitching Madrona.

First unboxing.

After all of these demos, it’s interesting to take a look back and think about what consumers’ first reactions to the products and content have been and where we think the companies betting on the future of VR need to go from here. Here are some of the key takeaways from our first year in VR:

The Product

  • It’s here. And it’s awesome. Every person to try on the Vive has been blown away by the level of immersion and the experience of getting transported away to another place. While companies are still debating controllers, tracking, resolutions, and wires, there is no doubt that the technology is now ‘good enough’ for people to feel completely immersed in another world. Seeing someone’s fear of falling off a ledge suspended a few hundred feet above a city when they are in fact just standing on the carpet in your office is incredibly fun and proves that VR is a transformative experience.
  • But not quite awesome enough. Despite the incredible experiences that people are having in VR, consumers are not excited enough to go out and buy headsets. Of the more than 100 people we have demoed, fewer than a handful were excited enough to spend $600-800 for a VR setup and likely another $1,500 for a capable PC to play with VR at home. Maybe we aren’t selling hardware hard enough at Madrona, but our experience seems consistent with the overall industry, with big companies coming in significantly below their initial 2016 forecasts for hardware sales – for example, Sony cut its 2016 forecast for VR headsets from 2.6M units to 0.75M. Further, we know first-hand how cumbersome it is to update software and firmware in order to quickly and consistently enjoy most VR applications.
  • Companies are earlier adopters of VR and AR than consumers. Nearly every corporate innovation lab has taken the time to create some sort of VR experience. IKEA built a kitchen visualization tool, Ford released an app that allows you to experience what it’s like to be a race car driver, and even SleepNumber built a VR application… to simulate what it feels like to be tired (in case you haven’t felt that before!). One of our favorite applications of VR is Redfin’s house tours, which allow a potential home buyer to explore a house in VR before making a purchase. We are also seeing early adoption of HoloLens AR technology and demos, especially in health care, financial services, and various forms of design. Companies are seeing the huge potential of VR and AR and quickly trying to determine what works and what doesn’t to advance their business objectives.

The Market

  • The first killer app will be social. The VR (and potentially “mixed reality”) apps that people will spend the most time in will be social. Humans have an innate desire to be social, and tools for communication are incredibly valuable – just look at Facebook, Instagram, Snapchat, WhatsApp, Skype, and Slack. Additionally, VR is a platform that is particularly well-suited for communication given the level of immersion you can experience when talking to other people. Early examples of companies with compelling social VR capabilities include Against Gravity’s Rec Room, BigBox’s Smashbox, and Pluto VR’s general mixed-reality communications service.

    Source: TechCrunch, Oct 2016.
  • Wow – Kids! If there is one thing that has been the most remarkable to watch, it’s how easily children pick up VR controllers and start flying around virtual worlds (before quickly asking if they can play Minecraft in VR). We know some kids who have spent dozens of hours playing social VR games like dodgeball and charades. The level of comfort and intuition that children have for navigating new technologies is incredible, and they will likely imagine new ways to use VR that the current generation of developers hasn’t even considered yet. Interestingly, younger folks who try both AR and VR tend to be more interested in VR, while older folks see more commercial promise in AR.
Onsite at Madrona.
  • Learning from Asia and 21st Century “Arcades” – The US is way behind China for VR adoption, and we are seeing significantly more adoption in the Asian market through malls, theme parks, karaoke bars, and gaming cafes – not to mention the number of manufacturers of simple smartphone headset adapters that allow for a low-end mobile VR experience. We believe that in the short to medium term, 21st Century VR arcades will emerge around the world, with the early learnings coming primarily from Asia. HTC is treating the China VR market as its first priority even though it is partnered with Valve on the Vive, and Tencent has begun producing VR movies. US-based VR companies are looking to learn from the Chinese experience, and they are also looking toward Asian investors to fund VR companies, as a large proportion of VR investments have been led by Asian investors. In fact, a recent CB Insights report showed that China-based investors participated in 21% of all AR/VR deals last year, including Magic Leap’s $794M Series C (Alibaba) and NextVR’s $80M Series B (5 Chinese investors).
    Source: CB Insights, January 24, 2017.

    Looking Forward

  • Companies and investors are waiting for a big breakthrough. Perhaps it will be Apple’s first foray into AR/VR, maybe it’s mass adoption of the Google Daydream, or maybe it’s a wild success for Snapchat Spectacles, but until there is a bigger user base, AR/VR companies will not be making a lot of money. Monetization is tricky, and with the exception of some early wins in indie games, companies do not have a lot of room to experiment with business models to grow their businesses. Even what is arguably the first “hit” of the AR world, Pokémon Go, required a beloved set of content to help create a global phenomenon. That makes it important for early-stage VR and AR companies to ensure they have a plan to survive and keep developing great products until the early majority can more easily try VR/AR experiences.

    Source: KIRO, July 11, 2016.
  • VR/AR signal a new era of human-computer interaction. Traditional text interfaces and GUIs force people to interact with data and applications in ways that make sense to computers. VR and AR are new, natural user interfaces that will allow people to interact with technology the same way they interact with the physical world. That means using voice, vision, and gestures to interact with computing and content. Think about how difficult it is to remember the positions of 100 different files on a shared drive, and how comparatively easy it would be to say “show me the document from last week’s meeting with Sally” and have it appear in augmented reality. This will allow us to build new apps that take advantage of the ways our brains work and interface with both the digital and physical worlds.
    Source: Microsoft HoloLens, YouTube.

    We are incredibly excited to see what happens in the next 3-5 years in the virtual and augmented reality market. Already, we are seeing great examples of profitable businesses creating VR demos of buildings before they are built, so developers can sell real estate before prospects can walk through the space. We are also seeing partnerships with groups like Microsoft, Case Western Reserve University, and the Cleveland Clinic where a medical student can put on a HoloLens with 10 of her classmates to walk around and examine the different parts of the human heart in 3D space, gaining completely new insights into how the heart really works.

    As we look towards the technologies that will affect humanity the most over the next decade, we firmly believe augmented and virtual reality will change the way we work, play, and live.

    Madrona is an investor in Redfin.

    A version of this post originally appeared on VentureBeat.

AWS re:Invent 5th Anniversary Preview: Five Themes to Watch

The 5th Annual AWS re:Invent is a week away and I am expecting big things. At the first-ever re:Invent in 2012, plenty of startups and developers could be found, but barely any national media or venture capitalists attended. That has all changed: today, re:Invent rivals the biggest and most strategically important technology conferences of the year, with over 25,000 people expected in Las Vegas the week after Thanksgiving!

So, what will be the big themes at re:Invent? I anticipate that, from an innovation perspective, they will line up with the three layers of how we at Madrona think about the core of new consumer and enterprise applications hitting the market. We call it the “Future of Applications” technology stack, shown below.

Future of Applications (Madrona Venture Group Slide, November 2016)

The Themes We Expect at 2016 re:Invent

Doubling Down on Lambda Functions

First is the move “beyond cloud” to what is increasingly called serverless, event-driven computing. Two years ago, AWS unveiled Lambda functions at re:Invent, and Lambda quickly became a market-leading “event-driven” functions service. The capability, combined with other microservices, allows developers to create a function that is at rest until it is called into action by an event trigger. Functions can perform simple tasks like automatically expanding a compute cluster or creating a low-resolution version of an uploaded high-resolution image. Lambda functions are increasingly being used as a control point for more complicated applications built on microservices architectures.

I anticipate that re:Invent 2016 will feature several large and small customers who are using Lambda functions in innovative ways. In addition, both AWS and other software companies will launch capabilities that make designing, creating, and running event-driven services easier. These new services are likely to be connected to broader serverless application development and deployment tools. The combination of broad cloud adoption, emerging containerization standards, and the opportunity to innovate on both application automation and economics (you pay for Lambda functions only on a per-event basis) could transform the infrastructure layer – in both design and operations – for next-generation applications in 2017.
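To make the event-driven pattern concrete, here is a minimal sketch of a Lambda-style handler for the image-thumbnail example above. The bucket and key names are hypothetical, and a real deployment would wire this function to an S3 “object created” trigger and use libraries such as boto3 and Pillow to fetch and resize the image; this sketch only parses the incoming event and computes where the low-resolution copy would go.

```python
# Sketch of an event-driven AWS Lambda handler (hypothetical names).
# In production, this would be attached to an S3 ObjectCreated trigger and
# would download and resize each image; here we only parse the event payload
# and derive the destination key for the thumbnail.

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Derive a destination key for the low-resolution version.
        thumb_key = "thumbnails/" + key.rsplit("/", 1)[-1]
        results.append({"source": f"{bucket}/{key}", "thumbnail": thumb_key})
    return results
```

The economic appeal is visible even in this sketch: the function holds no server open between uploads, so you are billed per invocation rather than for idle capacity.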

Innovating in Machine and Deep Learning

Another big focus area at re:Invent will be intelligent applications powered by machine/deep learning trained models. Amazon already offers services like AWS ML for machine learning, and companies like Turi (prior to being acquired by Apple) leveraged AWS GPU services to deploy machine learning systems inside intelligent applications. But, as recently reported by The Information, AWS is expected to announce a deep learning service that will be somewhat competitive with Google’s TensorFlow framework. This service will leverage the MXNet deep learning library supported by AWS and others. In addition, many intelligent applications already offered to consumers and commercial customers, including those from AWS stalwarts such as Netflix, will emphasize how marrying cloud services with data science capabilities is at the heart of making applications smarter and individually personalized.

Moving to Multi-Sense With Alexa, Chat and AR/VR

While AWS has historically offered fewer end-user facing services, we expect more interactions with end users and edge sensors/devices, leveraging multiple user interfaces (voice, eye contact, gestures, sensory inputs), to be featured this year at re:Invent. For example, Amazon’s own Alexa Voice Service will be on prominent display in both Amazon products like the Echo and third-party offerings. In addition, new chat-related services will likely be featured by start-ups and potentially other internal groups at Amazon. Virtual and augmented reality use cases for areas including content creation, shared-presence communication, and potentially new device form factors will be highlighted. Madrona is especially excited about the opportunity for shared presence in VR to reimagine how people collaborate with both humans and machines (all powered by a cloud back-end). As the AWS services stack matures, it is helping a new generation of multi-sense applications reach end users.

Rising Presence of AWS in Enterprises Directly and With Partners

Two other areas of emphasis, somewhat tangential to the future of applications, will be the continued growth of enterprise customer presentations and attendance at the conference. The dedicated enterprise track will be larger than ever, and some high-profile CIOs, like Capital One’s Rob Alexander last year, will be featured during the main AWS keynotes. Vertical industry solutions for media, financial services, health care, and more will be highlighted. An expanding mix of channel partners – which could include some surprising cloud bedfellows like IBM, SAP, and VMware – could be featured. In addition, with the recent VMware and AWS product announcements, AWS could make a big push into hybrid workloads.

AWS Marketplace Emerging as a Modern Channel for Software Distribution

Finally, the AWS Marketplace for discovering, purchasing, and deploying software services will rise in profile this year. The size and significance of this software distribution channel have grown markedly over the past few years. Features like metered billing, usage tracking, and deployment of non-“Amazon Machine Image (AMI)” applications could see the spotlight.

Over the years, AWS has always surprised us with innovative solutions like Lambda and Kinesis, competitive offerings like Aurora databases and elastic load balancing, and customer-centric solutions like AWS Snowball. We expect to be surprised, and even amazed, at what AWS and partner companies will unveil at re:Invent 2016.