Act Natural: Next Generation User Interfaces

This is the third in our series of four deep dives into our technology-based investing themes, outlined in January.

Computers, and the ways in which we interact with them, have come a long way. Pre-1984, the only way to navigate a computer was with a keyboard. Then Apple (with help from Xerox PARC) came along and introduced the world to the mouse. Suddenly, a whole host of activities became possible, such as free-form graphical drawing. While this may seem trivial now, at the time it enabled the expansion of industries such as graphic design, publishing, and digital media, and the world hasn’t looked back since.

Fast forward to the present: we are seeing an awe-inspiring number of ways that technology is continuing to advance the user interface between humans and computers. Augmented reality, virtual reality, mixed reality (collectively “extended reality,” or XR), voice, touch screens, haptics, gestures, and computer vision, to name a few, are developing and will change the computing experience as we know it. In addition, they will create new industries and business opportunities in ways that we cannot yet imagine. At Madrona we are inspired by what is possible and ready to provide capital and company-building help for the next generation of computer interaction modalities.

We, along with many others in the industry, were wowed by the magic of VR specifically and dove in early with a couple of key investments in platforms – some worked and some didn’t. We quickly came to realize that VR headset adoption wasn’t going to be as fast as initially predicted, but we remain strong believers in the ability of all types of next generation user interfaces to change how we experience technology in our lives.

Last month we published our main investment themes, including a summary of our belief in next gen UI. Here is a deep dive into our updated take on the future of both voice and XR technologies.

Voice technology is becoming ubiquitous

Voice tech, more so than any other new UI in recent history, has reached consumers swiftly and relatively easily. While the most common use cases remain fairly basic, voice tech’s affordability, cross-operating-system capabilities, and ease of use have helped drive adoption. The tech giants are pouring billions into turning their voice platforms and assistants, such as Amazon Alexa, Google Assistant, and Apple’s Siri, into sophisticated tools designed to become an integral part of our daily lives. Driven by their affordability (the Amazon Echo Dot sells for $30) and ease of use (all you need to do is speak), voice-enabled devices are becoming ubiquitous: Amazon recently reported that 100 million Alexa devices have been sold. Add to that the over 2 billion Apple iOS devices that come pre-loaded with Siri and the almost 1 billion (mostly Android phone) devices that come pre-loaded with Google Assistant, and it is clear that voice technology as a platform has reached unprecedented scale and market penetration in a very short period of time.

Figure 1: Smart speakers are showing the fastest technology adoption in history Source: https://xappmedia.com/consumer-adoption-voice/

The key question now: how will voice become a platform for other businesses? For new business models to thrive on voice technology, we think three things have to happen:

(1) Developers need new tools for creating, managing, testing, analyzing, and personalizing the voice experience
(2) Marketers need new tools for monetization and cross-device and cross-platform brand/content management, and
(3) Businesses need to adapt to a voice-enabled world and enhance their products with voice-enabled intelligence, performance, and productivity.

Similar to the early days of web and mobile, the number of voice applications is growing fast, but monetization is nascent and users sometimes struggle with discoverability. For example, Alexa offers over 80,000 “skills,” but if you ask most Alexa owners how they use their device, you may notice that use cases remain fairly basic:

Figure 2: Voice assistants largely used for information & entertainment use cases. Source: https://voicebot.ai/voice-assistant-consumer-adoption-report-2018/

The next generation of voice is multi-modal: instead of voice-only devices, we are moving toward a great wave of “voice-also” devices. Examples include Volkswagen voice recognition for making calls, Walmart.com voice shopping, Roku voice commands for watching TV, and Nest smart thermostats that use Google Assistant. Tech giants and startups alike are racing to integrate voice into everything, from your car’s “infotainment” system to your microwave to screen-first devices, leveraging the ease of use of voice to unlock new functionality.

Figure 3: Many companies are springing up to help businesses create, monitor, and monetize voice applications

At Madrona, our investments in Pulse Labs and Saykara give us unique visibility into next gen voice applications and how businesses are looking to reach users with voice services. We believe that voice tech will enable new business models centered around e-commerce and advertising via multi-modal experiences. We see opportunities in creating a tools layer for voice developers and marketers as well as building intelligent vertical applications to solve specific problems. Opportunities exist in enhancing in-vehicle voice capabilities, integration across platforms and applications, home security systems, smart thermostats, and retail experiences that blend the digital and physical, to name a few.

Overall, voice technology is moving very quickly toward broad adoption. With the incredible amount of investment being put into voice technologies, both from the platform providers and from new software developers, we are looking forward to seeing breakthroughs in e-commerce, advertising, and multi-modal experiences. The first hurdle of creating an ecosystem has been cleared with voice-capable devices now in the hands (and homes) of millions of users. The true test will be finding ways to use that technology to solve a broader array of business and consumer problems, and to monetize those capabilities as impact grows.

XR interfaces – still slowly building momentum

XR is a big bet that we believe in long term, but our initial estimates of a 3-5 year timeline to critical mass (which would have been 2019-2021) were overly optimistic. As we kick off 2019, we have the benefit of time, experience in the market, and a greater appreciation for what it will take for this industry to reach the masses. Our best estimates now push mainstream adoption of ‘premium VR’ (full headset/PC) back another five years, to 2024.

Microsoft’s HoloLens, which was released in late 2016, had sold only approximately 50,000 units by mid-2018, before its 100,000-unit deal with the US Army was reported in November 2018. HoloLens 2 is slated to be revealed later this month, and we look forward to learning more then. The much-hyped Magic Leap launched a developer product in August 2018, but that was not designed as a mass release. In addition, sales of advanced PC-tethered headsets amount to less than 5 million units. Why the slower-than-anticipated uptake? Mass adoption is facing headwinds on both sides of the marketplace.

On the consumer side: head-mounted devices (HMDs) are expensive (around $600 per headset, plus upwards of $1,500 for the high-end PC needed to run the programs). Usability is also a challenge – people are still getting comfortable with the feeling of wearing HMDs for long periods of time. On top of this, there are many opportunities for better XR accessories (e.g., foot pedals or “cyber shoes,” improved hand controllers, and other devices such as activity or workout aids) that would improve the immersive experience.

On the brand/business side: developing and implementing a worthwhile XR offering requires more time, resources, and patience than originally thought. In addition, when done right, an XR experience should provide something specific and unique to the business that cannot be achieved otherwise. Finally, XR platforms aren’t standardized yet, so development requires significant customization.

Altogether, these factors have kept the XR market sub-scale. We now ask ourselves: how many millions of HMDs need to exist, and which applications or business verticals need to be developed, for XR technology to become a self-supporting industry?

Most XR innovation to date is entertainment-based, with different approaches to attracting new users. For example, HTC’s Vive has VR rooms where you can watch whales and sea life, and there are cross-platform games where you can use lightsabers to slice through oncoming hazards. Against Gravity’s Rec Room is full of communities where you can build your own virtual rooms and worlds.

A trend we’re seeing now, while the XR consumer base slowly builds, is the emergence of virtual worlds or “metaverses” (collective virtual shared spaces created by the convergence of virtually enhanced physical reality and persistent virtual space) that are at first, but may not always be, primarily non-XR experiences (e.g., on Xbox, Twitch, mobile, or YouTube). The key here is gaining a mass following while maintaining a “call option” on letting users flip over to an XR interface as soon as they are ready. For example, Against Gravity’s Rec Room has quests that are available in both VR and non-VR, and Epic Games’ Fortnite just successfully hosted a record-breaking live, mixed-reality in-game concert with pop DJ Marshmello, the success of which sets the stage for more XR possibilities in the future. Entertainment-based applications like these will be the pathway to mass XR adoption.

Beyond entertainment, there are many practical XR use cases that we are excited about. Our investments in virtual reality startups Pixvana and Pluto help us follow customer needs and market movements, and one trend we have been watching closely is the growing opportunity for XR adoption in enterprise and commercial use cases. Of note, we have seen an especially high number of applications in the medical training, retail, education, and field service categories. This is supported by the overall trends reported by VRRoom, pictured below.

Figure 5: Industries of current and prospective XR end-users. Source: https://vrroom.buzz/sites/default/files/xr_industry_survey_results.pdf

XR has the capability to virtually “place shift”/transport the user and enable many compelling applications such as mixed-reality entertainment, medical/surgical training, field service equipment repair and maintenance, spatial design/modeling (e.g., architecture), remote education, experiential travel/events, and public safety activities, to name just a few. Imagine a world where you could shrink and zoom a virtual mockup of a new building with the flick of a finger to help design the HVAC system, participate in a NASCAR race from the comfort of your living room, or join your far-flung friends for an “in person” hangout in a virtual living room – this is the power of XR, and it is not far off. The challenge now is to provide greater variety in content, low-cost hardware, and improved usability and comfort to consumers (we’re looking forward to the rumored lightweight ‘glasses’ under development). In addition, we are supportive of the other mediums that XR companies can invest in now (such as mobile, YouTube, and PC) that can provide a bridge to a full premium XR experience in the future.

Together, these next gen user interfaces will feel more natural and make it easier for consumers to access features in a new way

New ‘interaction modes’ like voice and XR will create compelling user experiences that both improve existing experiences and enable new ones that previously weren’t possible. We are excited to work with entrepreneurs as they innovate in these new areas. Opportunities in new applications, enabling technologies and devices, and content creation tools and platforms for next gen user interfaces will take us into the future.

Investment Themes for 2019

2018 was a busy year for Madrona and our portfolio companies. We raised our latest $300 million Fund VII, and we made 45 investments totaling ~$130 million. We also had several successful up-rounds and company exits, with a combined increase of over $800 million in fund value and over $600 million in realized returns to investors. We don’t see 2019 letting up, despite the somewhat volatile public markets. Over the past year we have continued to develop our investment themes as technology and business markets have evolved, and we lay out our key themes here.

For the past several years, Madrona has primarily been investing against a 3-layer innovation stack that includes cloud-native infrastructure at the bottom, intelligent applications (powered by data and data science) in the middle, and multi-sense user interfaces between humans and content/computing at the top. As 2019 kicks off, we thought it would be helpful to outline our updated, 4-layer model and highlight some key questions we are asking within these categories to facilitate ongoing conversations with entrepreneurs and others in the innovation economy.

For reference, we published our investment themes in previous years and our thinking since then has both expanded and become more focused as the market has matured and innovation has continued. A quick scan of this prior post illustrates our on-going focus on cloud infrastructure, intelligent applications, ML, edge computing, and security, as well as how our thinking has evolved.

Opportunities abound within AND across these four layers. Infinitely scalable and flexible cloud infrastructure is essential to train data models and build intelligent applications. Intelligent applications, including natural language processing and image recognition models, power the multi-sense user interfaces, like voice activation and image search, that we increasingly experience on smartphones and home devices (Amazon Echo Show, Google Home). Further, when those services are leveraged to help solve a physical-world problem, we end up with compelling end-user services like Booster Fuels in the USA or Luckin Coffee in China.

The new layer that we are spending considerable time on is the intersection between digital and physical experiences (DiPhy for short), particularly as it relates to consumer experiences and health care. For consumers, DiPhy experiences address a consumer need and resolve an end-user problem better than a solely digital or solely physical experience could. Madrona companies like Indochino, Pro.com and Rover.com provide solutions in these areas. In a different way, DiPhy is strongly represented in Seattle at the intersection of machine learning and health care with the incredible research and innovations coming out of the University of Washington Institute for Protein Design, the Allen Institute and the Fred Hutch Cancer Research Center. We are exploring the ways that Madrona can bring our “full stack” expertise to these health care related areas as well.

While we continue to push our curiosity and learning around these themes, they are guides, not guardrails. We are finding some of the most compelling ideas and company founders where these layers intersect. Current company examples include voice and ML applied to the problem of physician documentation in electronic medical records (Saykara), integrating customer data across disparate infrastructure to build intelligent customer profiles and applications (Amperity), and cutting-edge AI able to run efficiently on resource-constrained edge devices (Xnor.ai).

Madrona remains deeply committed to backing the best entrepreneurs in the Pacific NW who are tackling the biggest markets in the world with differentiated technology and business models. Frequently, we find these opportunities adjacent to our specific themes, where customer-obsessed founders have a fresh way to solve a pressing problem. This is why we are always excited to meet great founding teams looking to build bold companies.

Here are more thoughts and questions on our 4 core focus areas and where we feel the greatest opportunities currently lie. In subsequent posts, we will drill down in more detail into each thematic area.

Cloud Native Infrastructure

For the past several years, the primary theme we have been investing against in infrastructure is the developer and the enterprise move to the cloud, and specifically the adoption of cloud native technologies. We think about “cloud native” as being composed of several interrelated technologies and business practices: containerization, automation and orchestration, microservices, serverless or event-driven computing, and devops. We feel we are still in the early-middle innings of enterprise adoption of cloud computing broadly, but we are in the very early innings of the adoption of cloud native.

2018 was arguably the “year of Kubernetes” based on enterprise adoption, overall buzz and even the acquisition of Heptio by VMware. We continue to feel cloud native services, such as those represented by the CNCF Trail Map, will produce new companies supporting the enterprise shift to cloud native. Other areas of interest (that we will detail in a subsequent post) include technologies/services to support hybrid enterprise environments, infrastructure backend as code, serverless adoption enablers, SRE tools for devops, open source models for the enterprise, autonomous cloud systems, specialized infrastructure for machine learning, and security. Questions we are asking here include how the relationship between the open source community and the large cloud service providers will evolve going forward and how a broad-based embrace of “hybrid computing” will impact enterprise customer product/service needs, sales channels and post-sales services.

For a deeper dive click here.

Intelligent Applications with ML & AI

The utilization of data and machine learning in production has probably been the single biggest theme we have invested against over the past five years. We have moved from “big data” to machine learning platform technologies such as Turi, Algorithmia and Lattice Data to intelligent applications such as Amperity, Suplari and AnswerIQ. In the years ahead, “every application is intelligent” will likely be the single biggest investment theme, as machine learning continues to be applied to new and existing data sets, business processes, and vertical markets. We also expect to find interesting opportunities in services that enable edge devices to operate with intelligence, industry-specific applications where large amounts of data are being created like life sciences, services to make ML more accessible to the average customer, as well as emerging machine learning methodologies such as transfer learning and explainable AI. Key questions here include (a) how data rights and strategies will evolve as the power of data models becomes more apparent and (b) how to automate intelligent applications to be fully managed, closed loop systems that continually improve their recommendations and inferences.

For a deeper dive click here.

Next Generation User Interfaces

Just as the mouse and touch screen ushered in new applications for computing and mobility, new modes of computer interaction like voice and gestures are catalyzing compelling new applications for consumers and businesses. The advent of Amazon’s Alexa-powered Echo and Echo Show, Google Home, and a more intelligent Siri has dramatically changed how we interact with technology in our personal lives. While limited now to short, simple actions, voice is becoming a common approach for classic use cases like search, music discovery, food/ride ordering, and other activities. Madrona’s investment in Pulse Labs gives us unique visibility into next generation voice applications in areas like home control, ecommerce, and ‘smart kitchen’ services. We are also enthused about new mobile voice/AR business applications for field service technicians, assisted retail shopping (e.g., Ikea’s ARKit furniture app), and many others, including medical imaging/training.

Vision and image recognition are also rapidly becoming ways for people and machines to interact with one another, as facial recognition security on iPhones and intelligent image recognition systems highlight. Augmented and virtual reality are growing much more slowly than initially expected, but mobile phone-enabled AR will become an increasingly important tool for immersive experiences, particularly in visually focused vocations such as architecture, marketing, and real estate. “Mobile-first” has become table stakes for new applications, but we expect to see more “do less, but much better” opportunities in both consumer and enterprise with elegantly designed UIs. Questions central to this theme include (a) what ‘high-value’ new experiences are truly best or only possible when voice, gesture, and the overlay of AR/VR/MR are leveraged? (b) what will be the limits of image recognition (especially facial recognition) in certain application areas? (c) how effective can image-driven systems like digital pathology be at augmenting human expertise? and (d) how will multi-sense point solutions in the home, car, and store evolve into platforms?

For a deeper dive click here.

DiPhy (digital-physical converged customer experiences)

The first twenty years of the internet age were principally focused on moving experiences from the physical world to the digital world. Amazon enabled us to find, discover and buy just about anything from our laptops or mobile devices in the comfort of our home. The next twenty years will be principally focused on leveraging the technologies the internet age has produced to improve our experiences in the physical world. Just as the shift from physical to digital has massively impacted our daily lives (mostly for the better), the application of technology to improve the physical will have a similar if not greater impact.

We have seen examples of this trend through consumer applications like Uber and Lyft as well as digital marketplaces that connect dog owners to people who will take care of their dogs (Rover). Mobile devices (principally smartphones today) are the connection point between these two worlds and as voice and vision capabilities become more powerful so will the apps that reduce friction in our lives. As we look at other DiPhy sectors and opportunities, one where the landscape will change drastically over the coming decades is physical retail. Specifically, we are excited about digital native retailers and brands adding compelling physical experiences, increasing digitization of legacy retail space, and improving supply chain and logistics down to where the consumer receives their goods/services. Important questions here include (a) how traditional retailers and consumer services will evolve to embrace these opportunities and (b) how the deployment of edge AI will reduce friction and accelerate the adoption of new experiences.

For a deeper dive click here.

We look forward to hearing from many of you who are working on companies in these areas and, most importantly, to continuing the conversation with all of you in the community and pushing each other’s thinking around these trends. To that end, over the coming weeks we will post a series of additional blogs that go into more depth in each of our four thematic areas.

Matt, Tim, Soma, Len, Scott, Hope, Paul, Tom, Sudip, Maria, Dan, Chris and Elisa

(to get in touch just go to the team page – our contact info is in our profiles)

Madrona Expands the Team, Adds Talent Director, Venture Partner and Principal

Veteran tech talent executive Shannon Anderson joins as Director of Talent; Luis Ceze, a leader in computer systems architecture, machine learning, and DNA data storage, joins as Venture Partner; and Daniel Li is promoted to Principal

We are so excited to announce today some great additions to the Madrona Team. Each of these people is incredibly talented and will add a significant amount to what we can bring to our portfolio companies and to the greater Seattle ecosystem.

Shannon Anderson is joining us as Director of Talent. We expound on her role here.

Luis Ceze is joining the team as Venture Partner. Luis is an award-winning professor of computer science at the University of Washington, where he joined the faculty in 2007. His research focuses on the intersection of computer architecture, programming languages, molecular biology, and machine learning. At UW, he co-directs the Molecular Information Systems Lab where they are pioneering the technology to store data on synthetic DNA. He also co-directs the Sampa Lab, which focuses on the use of hardware/software co-design and approximate computing techniques for machine learning which enables efficient edge and server-side training and inference. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the IEEE TCCA Young Computer Architect Award and UIUC Distinguished Alumni Award. He is a member of the DARPA ISAT and MEC study groups, and consults for Microsoft.

Luis also has a track record of entrepreneurship. He spent the summer of 2017 with Madrona and has been a vital partner as we have evaluated new ideas and companies for several years. In 2008, Luis co-founded Corensic, a Madrona-backed UW CSE spin-off company. We are excited to have him on board, continuing and building on Madrona’s long-standing relationship with UW CSE and working formally with us to identify new companies and to work closely with our portfolio companies.

Last but definitely not least, we promoted Daniel Li to Principal. Daniel joined us nearly three years ago and has been an incredible part of the Madrona team. He works tirelessly not only to analyze new markets and develop investment themes that help us envision future companies, but also to dive deeply into his passions. He has built apps that we use internally on a weekly basis at Madrona and has given a Crypto 101 course to hundreds of people over the past year. He has also proven to be an indispensable partner to entrepreneurs, leading the Madrona investment in fast-growing live-streaming company Gawkbox last year. In addition to digital media and blockchain, Dan has done significant work in investment areas ranging from autonomous vehicles to machine learning and AR/VR. Dan brings an energy, curiosity, and intelligence to everything he does and epitomizes what Madrona looks for in our companies and our team.

We are excited to continue to build the Madrona team to even better help entrepreneurs and further capitalize on the massive opportunity to build world-changing technology companies in the Pacific NW.

100 Demos, 50 Company Pitches & a Year With AR/VR Headsets – What We’ve Learned

We received our first market-ready VR headset at Madrona almost one year ago now, and since then, we have done VR (Vive and Rift) and AR (HoloLens) demos for over 100 people who have stopped by our office to check out the next big computing platform. We have done demos for kids, parents, grandparents, pro football players, elected officials, company executives, gaming enthusiasts, people who never game, and people all over the spectrum from skeptical to extremely excited about VR. We have also participated in over 50 unique VR/AR demos from companies pitching Madrona.

First unboxing.

After all of these demos, it’s interesting to take a look back and think about what consumers’ first reactions to the products and content have been and where we think the companies betting on the future of VR need to go from here. Here are some of the key takeaways from our first year in VR:

The Product

  • It’s here. And it’s awesome. Every person to try on the Vive has been blown away by the level of immersion and the experience of getting transported away to another place. While companies are still debating controllers, tracking, resolutions, and wires, there is no doubt that the technology is now ‘good enough’ for people to feel completely immersed in another world. Seeing someone’s fear of falling off a ledge suspended a few hundred feet above a city when they are in fact just standing on the carpet in your office is incredibly fun and proves that VR is a transformative experience.
  • But not quite awesome enough. Despite the incredible experiences that people are having in VR, consumers are not excited enough to go out and buy headsets. Of the more than 100 people we have demoed, fewer than a handful were excited enough to go out and spend $600-800 for a VR setup and likely another $1,500 for a capable PC to play with VR at home. Maybe we aren’t selling hardware hard enough at Madrona, but our experience seems consistent with the overall industry, with big companies coming in significantly below their initial 2016 forecasts for hardware sales – for example, Sony cut its 2016 forecast for VR headsets from 2.6M units to 0.75M. Further, we know first-hand how cumbersome it is to update software and firmware in order to quickly and consistently enjoy most VR applications.
  • Companies are earlier adopters of VR and AR than consumers. Nearly every corporate innovation lab has taken the time to create some sort of VR experience. IKEA built a kitchen visualization tool, Ford released an app that allows you to experience what it’s like to be a race car driver, and even SleepNumber built a VR application… to simulate what it feels like to be tired (in case you haven’t felt that before!). One of our favorite applications of VR is Redfin’s house tours that allow a potential home buyer to explore a house in VR before making a purchase. We are also seeing early adoption of HoloLens AR technology and demos, especially in health care, financial services, and various forms of design. Companies are seeing the huge potential of VR and AR and quickly trying to determine what works and what doesn’t to advance their business objectives.
Redfin.

The Market

  • The first killer app will be social. The VR (and potentially “mixed reality”) apps that people will spend the most time in will be social. Humans have an innate desire to be social, and tools for communication are incredibly valuable – just look at Facebook, Instagram, Snapchat, WhatsApp, Skype, and Slack. Additionally, VR is a platform that is particularly well-suited for communication given the level of immersion you can experience when talking to other people. Early examples of companies with compelling social VR capabilities include Against Gravity’s Rec Room, BigBox’s Smashbox and Pluto VR’s general, mixed reality communications service.

    TechCrunch Oct 2016. Source: https://techcrunch.com/2016/10/11/facebook-social-virtual-reality/
  • Wow – Kids! If there is one thing that has been the most remarkable to watch — it’s how easily children pick up VR controllers and start flying around virtual worlds (before quickly asking if they can play Minecraft in VR). We know some kids who have spent dozens of hours playing social VR games like dodgeball and charades. The level of comfort and intuition that children have for navigating new technologies is incredible, and they will likely be able to imagine new ways to use VR that the current generation of developers haven’t even considered yet. Interestingly, the younger folks who try both AR and VR tend to be more interested in VR while older folks see more commercial promise in AR.
Onsite at Madrona.
  • Learning from Asia and 21st Century “Arcades” – The US is way behind China for VR adoption, and we are seeing significantly more adoption in the Asian market through malls, theme parks, karaoke bars, and gaming cafes – not to mention the number of manufacturers of simple smartphone headset adapters that allow for a low-end mobile VR experience. We believe in the short to medium term that 21st Century VR Arcades will emerge around the world with the early learnings coming primarily from Asia. HTC is treating the China VR market as its first priority even though it is partnered with Valve on the Vive, and Tencent has begun producing VR movies. US-based VR companies are looking to learn from the Chinese experience, and they are also looking towards Asian investors to fund VR companies, as a large proportion of VR investments have been led by Asian investors. In fact, a recent CB Insights report showed that China-based investors participated in 21% of all AR/VR deals last year, including Magic Leap’s $794M Series C (Alibaba) and NextVR’s $80M Series B (5 Chinese investors).
    CB Insights, January 24, 2017. Source: https://www.cbinsights.com/blog/ar-vr-startups-china-investors/

Looking Forward

  • Companies and investors are waiting for a big breakthrough. Perhaps it will be Apple’s first foray into AR/VR, maybe it’s mass adoption of the Google Daydream, or maybe it’s a wild success for Snapchat Spectacles, but until there is a bigger user base, AR/VR companies will not be making a lot of money. Monetization is tricky, and with the exception of some early wins in indie games, companies do not have a lot of room to experiment with business models to grow their businesses. Even what is arguably the first “hit” of the AR world, Pokémon Go, required a beloved set of content to help create a global phenomenon. That makes it important for early-stage VR and AR companies to ensure they have a plan to survive and continue developing great products until the early majority are more easily able to try VR/AR experiences.

    Kiro, July 11, 2016. Source: http://www.kiro7.com/news/local/here-are-some-of-the-best-places-to-play-pokmon-go-in-seattle/396941017
  • VR/AR signal a new era of human-computer interaction. Traditional text interfaces and GUIs force people to interact with data and applications in a way that makes sense to computers. VR and AR are new, natural user interfaces that will allow people to interact with technology in the same way that people interact with the physical world. That means using voice, vision, and gestures to interact with computing and content. Think about how difficult it is to remember the positions of 100 different files on a shared drive and how comparatively easy it would be to say “show me the document from last week’s meeting with Sally” and have it appear in augmented reality. This will allow us to build new apps that take advantage of the ways that our brains work and interface with both the digital and physical worlds.
    YouTube, Microsoft Hololens. Source: https://www.youtube.com/watch?v=SKpKlh1-en0

We are incredibly excited to see what happens in the next 3-5 years in the virtual and augmented reality market. Already, we are seeing great examples of profitable businesses creating VR demos of buildings before they are built, so developers can sell real estate before prospects can walk through the space. We are also seeing partnerships with groups like Microsoft, Case Western Reserve University, and the Cleveland Clinic where a medical student can put on a HoloLens with 10 of her classmates to walk around and examine the different parts of the human heart in 3D space, gaining completely new insights into how the heart really works.

As we look towards the technologies that will affect humanity the most over the next decade, we firmly believe augmented and virtual reality will change the way we work, play, and live.

Madrona is an investor in Redfin.

A version of this post originally appeared on VentureBeat.

Technology Trends Changing the World As We Look Ahead

Drones, Cars, Intelligent Apps, Virtual Reality and More – What to expect in 2017
There’s an age-old saying that humans tend to overestimate what can be accomplished in one day but underestimate what can be accomplished in one year. As 2016 comes to a close, it is a good time to zoom out the lens and get reflective about what has happened this year, and predictive about what we are excited about for the coming 3-5 years.

1. Commercial Drone (UAV) Technology will Turn to Software

The hype around drones generated over $155M of VC funding in the second half of 2015, but 2016 saw far chillier attitudes from VCs toward drone startups. However, we believe 2017 will be a year of renewal for investments and innovations in drone technology. For one, the FAA passed the first set of rules in June governing commercial drone flight, allowing commercial drones to finally take to the skies without filing lengthy and cumbersome case-by-case petitions. Secondly, over the last year, the hardware war that spooked many VCs away from the space has been all but won. Forbes estimates that Chinese drone manufacturer DJI is valued at $8 billion and controls over 70% of the hardware market. Other contenders for this mantle, such as 3D Robotics, have retooled to focus on vertical software. For 2017, we see the main opportunity in best-in-class tools and software deployed across drone platforms, such as advanced sensing capabilities, and in software for vertical industries such as real estate and farming.

2. Intelligent Applications

Customers nowadays demand that their software deliver insights that are real-time, nimble, predictive, and prescriptive. We have no doubt that in the future every application will be an intelligent application. However, the reality has not caught up to the hype. We believe data, not algorithms, is the bottleneck. Algorithms continue to be commoditized by access to open-source frameworks and data infrastructure such as TensorFlow, Hadoop, and CockroachDB, and algorithm marketplaces such as Algorithmia. If products are to do better than commodity performance, companies with machine learning at their core must figure out how to acquire proprietary, unique, clean, and workable data sets to train their machine learning models.

Companies with a leg up are also likely to be vertically integrated in such a way that their data, learning models and product are all geared towards developing the best data network effects that will feed the learning loop.

We believe there is a big opportunity for companies focused on a specific industry, such as healthcare, retail, legal, or construction, to build higher-quality domain expertise at a faster rate, which facilitates the acquisition and labeling of the relevant data critical to building accurate and effective machine problem solvers.

3. Virtual Intelligent Assistants with Focus on a Problem Space Will Succeed

A great example of vertical vs. horizontal machine learning applications can be found in chat bots. There are some horizontal chat bot assistants that help you with any and all requests (viv.ai, Magic, and Awesome, to name a few). It would seem obvious that building NLP and intelligent capabilities across all conceivable tasks and requests would be a long, slow training slog of manual human validation. These companies are also at a heavy disadvantage to incumbent players tackling the horizontal assistant space. Voice-enabled platforms like Alexa, Siri, Cortana, and the new Google Assistant still see limited usability despite enormous access to training data, bolstered by the distribution platforms of some of the largest companies in the world. Realizing this, Amazon announced at re:Invent that Lex, the software that powers Alexa, is now available for developers to build their own chat bots. Every developer who designs their conversation on the Lex Console is now feeding Lex’s data model. Microsoft followed suit with a similar announcement of the Cortana Skills Kit and Devices SDK.
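
As a concrete illustration of the Lex announcement, here is a minimal sketch of what calling a Lex-built bot from code might look like, using boto3’s runtime client. The bot name, alias, and utterance are hypothetical placeholders for whatever a developer configures in the Lex Console.

```python
# Hedged sketch: one user utterance sent to a Lex bot, with the
# recognized intent and slots read back. Bot name and alias are
# hypothetical; they come from the developer's own Lex Console setup.
import boto3

lex = boto3.client("lex-runtime")

response = lex.post_text(
    botName="PizzaOrderingBot",   # hypothetical bot
    botAlias="prod",              # hypothetical alias
    userId="demo-user-42",
    inputText="I'd like a large pepperoni pizza",
)

print(response["intentName"])   # e.g., "OrderPizza"
print(response["slots"])        # e.g., {"size": "large", "topping": "pepperoni"}
print(response["message"])      # the bot's next prompt or confirmation
```

Every call like this doubles as training signal for the underlying model, which is exactly the data flywheel described above.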

Assistants that will be more successful in the short term are bots that are narrowly focused. There is Kasisto for finance, Digital Genius for customer service, and the many virtual assistant/meeting scheduler apps (Meekan, JulieDesk, X.ai’s Amy and her later “brother” Andrew, and Clara). What excites us about these vertically oriented chat bot startups is that they are applying machine learning, artificial intelligence, and natural language processing in a highly specialized and narrow way. It is far easier to train a bot to recognize and act appropriately on the finite lexicon and circumstances around scheduling a meeting than on the infinite set of scenarios that could occur otherwise, as the sketch below illustrates. In machine learning, it is better to be a master of one than a master of none.
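
To illustrate why the narrow path is tractable, consider how little machinery a scheduling-only assistant needs for a first pass at intent recognition. This toy sketch is ours, not any of these companies’ actual systems; every pattern and intent name in it is hypothetical.

```python
# Toy illustration of a narrow-domain bot: the scheduling lexicon is
# small and finite, so even simple pattern matching covers most
# requests, and everything else can be handed to a human.
import re

INTENT_PATTERNS = {
    "schedule_meeting": re.compile(r"\b(schedule|set up|book|arrange)\b.*\b(meeting|call|chat)\b", re.I),
    "reschedule":       re.compile(r"\b(move|reschedule|push|shift)\b", re.I),
    "cancel":           re.compile(r"\b(cancel|call off|scrap)\b", re.I),
}

def classify(utterance: str) -> str:
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "out_of_scope"  # a narrow bot can safely escalate anything else

print(classify("Can you set up a call with Jordan next Tuesday?"))  # schedule_meeting
print(classify("Please move my 3pm to Thursday"))                   # reschedule
```

A real assistant would replace the regexes with a trained classifier, but the point stands: the smaller the set of scenarios, the less data and validation it takes to reach production quality.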

4. Blockchain Will Expand as Enterprise Services Embrace it


Blockchain, the technological innovation underlying Bitcoin, seeks to create a global distributed ledger for the transfer of assets (currency, cryptocurrency, music, real-estate deeds, etc.). This enables peer-to-peer transactions that bypass traditional intermediaries like banks, credit card companies, and governments, whose centralized nature slows processing, increases transaction costs, and creates vulnerability to security threats at the hub level. Blockchain technology has been heralded by some as being as disruptive to the way people view, share, and interact with their assets as the internet was for information. However, adoption has significantly lagged this envisioned seismic shift.

We believe blockchain’s path to mainstream adoption is more likely to arise from the enterprise and infrastructure side (creation of APIs and protocols that ease adoption) than from consumer adoption of cryptocurrencies (i.e., Bitcoin). An example is R3, which has gathered a consortium of 42 banks to create a technological base layer that lets systems including Bitcoin, Ethereum, and Ripple talk to each other and facilitate global payment transfers.

5. Autonomous Vehicles Have More Validation Work

Aside from machine learning, autonomous vehicles were one of the most hyped technologies in 2016. This year, we saw major product announcements and technology demos from Uber, Lyft, Ford, GM, BMW, Tesla, Cruise, Comma.ai, and many other startups and corporations. Google went so far as to create an entirely new company, Waymo, devoted to their driverless car technology.

Nearly all of the major car manufacturers have announced they will be releasing autonomous vehicles in the next five years, and Lyft has stated that they are planning for the majority of rides to be autonomous within the next five years. Even President Obama said “The technology is essentially here” in a November WIRED interview.

However, despite the hype, a tremendous amount of heavy lifting still needs to happen in technology, infrastructure, and policy, to say the least. Companies still need to solve basic problems related to sensors (e.g., the Tesla Autopilot crash in which cameras could not distinguish a white truck against a bright sky), handle billions of edge cases due to construction, pedestrians, and weather, and navigate a murky regulatory environment.

We are huge believers in the long-term benefits of autonomous vehicles, but 2017 may be a year when autonomous vehicle companies and startups are heads-down solving tough problems rather than continuing to push out flashy tech demos.

6. Augmented Reality and Virtual Reality

We believe there is still a three-year runway before VR and AR see wide adoption by mainstream audiences. Consumer adoption will be mobile-first and/or low-end tech – think of the successful recent launch of Snap Spectacles and the cheaper price points of Google Daydream and the Samsung Gear. VR uptake today is still burdened by hardware costs and ease-of-use hurdles; prices are still too high for anyone but the hardcore technologist or gamer.

On the enterprise side, we see 2017 as a continuing year of innovation and activity, particularly in core applicable industries like engineering, science, medicine, real estate, education, and manufacturing. However, until the dominant form factor emerges (whether it is glasses, a head-mounted display, or some other yet-to-be-seen hardware), time spent in VR will still be minuscule compared to time spent in this reality.

Ultimately, if gazing into the future of technology were really so straightforward, there would be no need for speculation and VCs would be out of a job. We’ll be back next year to assess how many of these predictions hit the mark.

AWS re:Invent 5th Anniversary Preview: Five Themes to Watch

The 5th Annual AWS re:Invent is a week away and I am expecting big things. At the first-ever re:Invent in 2012, plenty of start-ups and developers could be found, but barely any national media or venture capitalists attended. That has all changed, and today re:Invent rivals the biggest and most strategically important technology conferences of the year, with over 25,000 people expected in Las Vegas the week after Thanksgiving!

So, what will be the big themes at re:Invent? From an innovation perspective, I anticipate they will line up with the three layers of how we at Madrona think about the core of new consumer and enterprise applications hitting the market. We call it the “Future of Applications” technology stack, shown below.

Future of Applications (Madrona Venture Group Slide, November 2016)

The Themes We Expect at 2016 re:Invent

Doubling Down on Lambda Functions

First is the move “beyond cloud” to what is increasingly called serverless and event-driven computing. Two years ago, AWS unveiled Lambda functions at re:Invent. Lambda quickly became a market-leading “event-driven” functions service. The capability, combined with other micro-services, allows developers to create a function that is at rest until it is called into action by an event trigger. Functions can perform simple tasks like automatically expanding a compute cluster or creating a low-resolution version of an uploaded high-resolution image. Lambda functions are increasingly being used as a control point for more complicated, micro-services architected applications.
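
To make the event-driven pattern concrete, here is a minimal sketch of the image-resizing example above written as a Python Lambda handler. It assumes the Pillow imaging library is packaged with the function; the bucket layout and thumbnail prefix are illustrative, not a prescribed structure.

```python
# Hedged sketch of an event-driven Lambda: it sits at rest until S3
# invokes it with an upload event, then writes a low-resolution copy
# of the image. Assumes Pillow is bundled with the deployment package.
import os
import urllib.parse

import boto3
from PIL import Image

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        local_path = os.path.join("/tmp", os.path.basename(key))
        s3.download_file(bucket, key, local_path)

        # Downscale in place; you pay only for the seconds this takes.
        with Image.open(local_path) as img:
            img.thumbnail((256, 256))
            img.save(local_path)

        # Write the low-res copy under a separate, hypothetical prefix
        # (which also avoids re-triggering the function on its own output).
        s3.upload_file(local_path, bucket, f"thumbnails/{os.path.basename(key)}")
```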

I anticipate that re:Invent 2016 will feature several large and small customers who are using Lambda functions in innovative ways. In addition, both AWS and other software companies will launch capabilities to make designing, creating, and running event-driven services easier. These new services are likely to be connected to broader “serverless” application development and deployment tools. The combination of broad cloud adoption, emerging containerization standards, and the opportunities for innovating on both application automation and economics (you pay for Lambda functions only on a per-event basis) presents the opportunity to transform the infrastructure layer in design and operations for next-generation applications in 2017.

Innovating in Machine and Deep Learning

Another big focus area at re:Invent will be intelligent applications powered by machine- and deep-learning-trained models. Amazon already offers services like Amazon Machine Learning, and companies like Turi (prior to being acquired by Apple) leveraged AWS GPU services to deploy machine learning systems inside intelligent applications. But, as recently reported by The Information, AWS is expected to announce a deep learning service that will be somewhat competitive with Google’s TensorFlow ecosystem. This service will leverage the MXNet deep learning library, supported by AWS and others. In addition, many intelligent applications already offered to consumers and commercial customers, including those from AWS stalwarts such as Netflix and Salesforce.com, will emphasize how marrying cloud services with data science capabilities is at the heart of making applications smarter and individually personalized.

Moving to Multi-Sense With Alexa, Chat and AR/VR

While AWS has historically offered fewer end-user facing services, we expect more end-user and edge sensor/device interactions leveraging multiple user interfaces (voice, eye contact, gestures, sensory inputs) to be featured this year at re:Invent. For example, Amazon’s own Alexa Voice Services will be on prominent display in both Amazon products like the Echo and third-party offerings. In addition, new chat-related services will likely be featured by start-ups and potentially other internal groups at Amazon. Virtual and augmented reality use cases for areas including content creation, shared-presence communication, and potentially new device form factors will be highlighted. Madrona is especially excited about the opportunity for shared presence in VR to reimagine how people collaborate with man and machine (all powered by a cloud back-end). As the AWS services stack matures, it is helping a new generation of multi-sense applications reach end users.

Rising Presence of AWS in Enterprises Directly and With Partners

Two other areas of emphasis at the conference, somewhat tangential to the future of applications, will be the continued growth of enterprise customer presentations and attendance. The dedicated enterprise track will be larger than ever, and high-profile CIOs, like Rob Alexander from Capital One last year, will be featured during the main AWS keynotes. Vertical industry solutions for media, financial services, health care, and more will be highlighted. An expanding mix of channel partners, which could include some surprising cloud bedfellows like IBM, SAP, and VMware, could be featured. In addition, with the recent VMware and AWS product announcements, AWS could make a big push into hybrid workloads.

AWS Marketplace Emerging as a Modern Channel for Software Distribution

Finally, the AWS Marketplace for discovering, purchasing, and deploying software services will rise in profile this year. The size and significance of this software distribution channel has grown significantly over the past few years. Features like metered billing, usage tracking, and deployment of non-“Amazon Machine Image (AMI)” applications could see the spotlight.

Over the years, AWS has always surprised us with innovative solutions like Lambda and Kinesis, competitive offerings like Aurora databases and elastic load balancing, as well as customer-centric solutions like AWS Snowball. We expect to be surprised, and even amazed, at what AWS and partner companies unveil at re:Invent 2016.

Evolving the Application Platform from Software to Dataware

Every decade, a set of major forces work together to change the way we think about “applications.” Until now, those changes were principally evolutions of software programming, networked communications and user interactions.

In the mid-1990s, Bill Gates’ famous “The Internet Tidal Wave” letter highlighted the rise of the internet, browser-based applications and portable computing.

By 2006, smart, touch devices, Software-as-a-Service (SaaS) and the earliest days of cloud computing were emerging. Today, data and machine learning/artificial intelligence are combining with software and cloud infrastructure to become a new platform.

Microsoft CEO Satya Nadella recently described this new platform as “a third ‘run time’ — the next platform…one that doesn’t just manage information but also learns from information and interacts with the physical world.”

I think of this as an evolution from software to dataware as applications transform from predictable programs to data-trained systems that continuously learn and make predictions that become more effective over time. Three forces — application intelligence, microservices/serverless architectures and natural user interfaces — will dominate how we interact with and benefit from intelligent applications over the next decade.

In the mid-1990s, the rise of internet applications offered countless new services to consumers, including search, news and e-commerce. Businesses and individuals had a new way to broadcast or market themselves to others via websites. Application servers from BEA, IBM, Sun and others provided the foundation for internet-based applications, and browsers connected users with apps and content. As consumer hardware shifted from desktop PCs to portable laptops, and infrastructure became increasingly networked, the fundamental architectures of applications were re-thought.

By 2006, a new wave of core forces shaped the definition of applications. Software was moving from client-server to Software-as-a-Service. Companies like Salesforce.com and NetSuite led the way, with others like Concur transforming into SaaS leaders. In addition, hardware started to become software services in the form of Infrastructure-as-a-Service with the launch of Amazon Web Services S3 (Simple Storage Service) and then EC2 (Elastic Cloud Compute Service).

Smart, mobile devices began to emerge, and applications for these devices quickly followed. Apple entered the market with the iPhone in 2007, and a year later introduced the App Store. In addition, Google launched the Android ecosystem that year. Applications were purpose-built to run on these smart devices, and legacy applications were re-purposed to work in a mobile context.

As devices, including iPads, Kindles, Surfaces and others proliferated, application user interfaces became increasingly complex. Soon developers were creating applications that responsively adjusted to the type of device and use case they were supporting. Another major change of this past decade was the transition from typing and clicking, which had dominated the PC and Blackberry era, to touch as a dominant interface for humans and applications.


In 2016, we are on the cusp of a totally new era in how applications are built, managed and accessed by users. The most important aspect of this evolution is how applications are being redefined from “software programs” to “dataware learners.”

For decades, software has been programmed and designed to run in predictable ways. Over the next decade, dataware will be created by training a computer system with data, enabling the system to continuously learn and make predictions based on new data/metadata, engineered features, and algorithm-powered data models.

In short, software is programmed and predictable, while the new dataware is trained and predictive. We benefit from dataware all the time today in modern search, consumer services like Netflix and Spotify and fraud protection for our credit cards. But soon, every application will be an intelligent application.
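
One way to see the distinction is side by side in code. The toy sketch below contrasts a hand-written fraud rule with a model trained on a few synthetic examples using scikit-learn; the threshold, data, and single feature are all hypothetical.

```python
# Toy contrast: "programmed and predictable" vs. "trained and predictive."
# The rule below behaves identically forever; the model's behavior comes
# from (synthetic, illustrative) data and shifts as new examples arrive.
from sklearn.linear_model import LogisticRegression

def flag_fraud_rule(amount_usd: float) -> bool:
    return amount_usd > 1000  # fixed threshold chosen by a developer

amounts = [[20], [35], [900], [1500], [2200], [40], [3100], [60]]
is_fraud = [0, 0, 0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(amounts, is_fraud)

print(flag_fraud_rule(1200))                # True, and always will be
print(model.predict([[1200]])[0])           # a learned answer, revisable with data
print(model.predict_proba([[1200]])[0, 1])  # a probability, not just a verdict
```

Retraining on fresh labeled transactions is the continual learning described here; no one edits the decision logic by hand.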

Three major, interrelated forces underlie the shift from software to dataware, a shift that necessitates a new “platform” for application development and operations.

Application intelligence

Intelligent applications are the end product of this evolution. They leverage data, algorithms, and ongoing learning to anticipate and improve interactions with the people and machines they serve.

They combine three layers: innovative data and metadata stores, data intelligence systems (enabled by machine learning/AI) and the predictive intelligence that is expressed at an “application” layer. In addition, these layers are connected by a continual feedback loop that collects data at the points of interaction between machines and/or humans to continually improve the quality of the intelligent applications.

Microservices and serverless functions

Monolithic applications, even SaaS applications, are being deconstructed into components that are elastic building blocks for “macro-services.” Microservice building blocks can be simple or multi-dimensional, and they are expressed through Application Programming Interfaces (APIs). These APIs often communicate machine-to-machine, such as Twilio for communication or Microsoft’s Active Directory Service for identity. They also enable traditional applications to more easily “talk” or interact with new applications.

And, in the form of “bots,” they perform specific functions, like calling a car service or ordering a pizza via an underlying communication platform. A closely related and profound infrastructure trend is the emergence of event-driven, “serverless” application architectures. Serverless functions such as Amazon’s Lambda service or Google Functions leverage cloud infrastructure and containerized systems such as Docker.

At one level, these “serverless functions” are a form of microservice. But, they are separate, as they rely on data-driven events to trigger a “state-less” function to perform a specific task. These functions can even call intelligent applications or bots as part of a functional flow. These tasks can be connected and scaled to form real-time, intelligent applications and be delivered in a personalized way to end-users. Microservices, in their varying forms, will dominate how applications are built and “served” over the next decade.
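
Here is a small sketch of what a “state-less,” event-triggered function composing microservices might look like: it keeps nothing between invocations, receives everything it needs in the triggering event, and chains two services through their APIs. The service URLs and payload shapes are hypothetical.

```python
# Hedged sketch of a stateless, event-triggered function that composes
# two hypothetical microservices over plain HTTP. No state survives
# between invocations; the event carries all required context.
import json
import urllib.request

PROFILE_SERVICE = "https://api.example.com/profiles"     # hypothetical
RECOMMEND_SERVICE = "https://api.example.com/recommend"  # hypothetical

def call_service(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def handle_event(event: dict) -> dict:
    # Triggered by a data event, e.g., "user viewed product".
    profile = call_service(PROFILE_SERVICE, {"user_id": event["user_id"]})
    recs = call_service(RECOMMEND_SERVICE, {"profile": profile, "context": event})
    return {"user_id": event["user_id"], "recommendations": recs}
```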

Natural user interface

If touch was the last major evolution in interfaces, voice, vision and virtual interaction using a mix of our natural senses will be the major interfaces of the next decade. Voice is finally exploding with platforms like Alexa, Cortana and Siri. Amazon Alexa already has more than 1,000 voice-activated skills on its platform. And, as virtual and augmented reality continue to progress, voice and visual interfaces (looking at an object to direct an action) will dominate how people interact with applications.

Microsoft HoloLens and Samsung Gear are early examples of devices using visual interfaces. Even touch will evolve in both the physical sense through “chatbots” and the virtual sense, as we use hand controllers like those that come with a Valve/HTC Vive to interact with both our physical and virtual worlds. And especially in virtual environments, using a voice-activated service like Alexa to open and edit a document will feel natural.

What are the high-level implications of the evolution to intelligent applications powered by a dataware platform?

SaaS is not enough. The past 10 years in commercial software have been dominated by a shift to cloud-based, always-on SaaS applications. But, these applications are built in a monolithic (not microservices) manner and are generally programmed, versus trained. New commercial applications will emerge that will incorporate the intelligent applications framework, and usually be built on a microservices platform. Even those now “legacy” SaaS applications will try to modernize by building in data intelligence and microservices components.

Data access and usage rights are required. Intelligent applications are powered by data, metadata and intelligent data models (“learners”). Without access to the data and the right to use it to train models, dataware will not be possible. The best sources of data will be proprietary and differentiated. Companies that curate such data sources and build frequently used, intelligent applications will create a virtuous cycle and a sustainable competitive advantage. There will also be a lot of work and opportunity ahead in creating systems to ingest, clean, normalize and create intelligent data learners leveraging machine learning techniques.

New form factors will emerge. Natural user interfaces leveraging speech and vision are just beginning to influence new form factors like Amazon Echo, Microsoft HoloLens and Valve/HTC Vive. These multi-sense and machine-learning-powered form factors will continue to evolve over the next several years. Interestingly, the three mentioned above emerged from a mix of Seattle-based companies with roots in software, e-commerce and gaming!

The three major trends outlined here will help turn software applications into dataware learners over the next decade, and will shape the future of how man and machine interact. Intelligent applications will be data-driven, highly componentized, accessed via almost all of our senses and delivered in real time.

These applications and the devices used to interact with them, which may seem improbable to some today, will feel natural and inevitable to all by 2026 — if not sooner. Entrepreneurs and companies looking to build valuable services and software today need to keep these rapidly emerging trends in mind.

I remember debating with our portfolio companies in 2006 and 2007 whether or not to build products as SaaS and mobile-first on a cloud infrastructure. That ship has sailed. Today we encourage them to build applications powered by machine learning, microservices and voice/visual inputs.

This post was originally published by TechCrunch