Madrona Labs Adds CTO: Longtime Entrepreneur and Big Data, Machine Learning, and Computer Vision Technologist Jay Bartot

New Role Brings Deeper Level of Technology Expertise to Madrona Venture Labs Team
We are thrilled to announce that Jay Bartot has joined our Madrona Venture Labs team as Chief Technology Officer. Jay has co-founded four successful startups, each of which leveraged big data and machine learning to provide targeted services to consumers and businesses. These startups include Farecast, where Jay and I partnered to change how consumers purchase airline tickets by using algorithms to predict airfare price fluctuations.

Joining Madrona Venture Labs is like a homecoming for Jay, as Farecast was incubated at Madrona Venture Group and he worked closely with Madrona’s Managing Director Matt McIlwain and Venture Partner Oren Etzioni in the earliest days of the company’s formation. At Farecast, Jay and I formed a highly productive engineering and product partnership and we aim to bring that same collaborative spirit to our Labs culture.

Jay’s other startups include AdRelevance (acquired by Media Metrix), Medify (acquired by Alliance Health) and, most recently, Vhoto (acquired by Hulu). Jay brings a wealth of deep technical and engineering leadership experience to our team and to our spinout founding teams. With Jay onboard, we will explore new, innovative technical startup ideas that leverage his experience in machine learning and data mining.

Jay is one of the most creative and inventive engineering leaders I know and we could not be more excited about our future with his influence and leadership.

AWS re:Invent – the Big Announcements and Implications

The momentum continues to build in leaps and bounds. That was the overwhelming observation and feeling at the end of the fifth annual AWS (Amazon Web Services) re:Invent conference that Amazon hosted in Las Vegas last week.

Here are some of the key takeaways that we think will have the highest industry impact.

Event-driven Functions and Serverless Computing

Serverless has definitely arrived. As expected, there were a number of new capabilities announced around Lambda, including C# language support, AWS Lambda@Edge to create a “CDN for Compute” and AWS Step Functions to coordinate the components of distributed applications using visual workflows through state machines. Beyond this, it was clear that Lambda, and the serverless approach overall, is being broadly woven into the fabric of AWS services.
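
To make the event-driven model concrete, here is a minimal sketch of a Lambda handler in Python. The handler signature follows the AWS convention, but the event shape and the processing are purely illustrative assumptions.

```python
# Minimal sketch of an event-driven AWS Lambda function (Python runtime).
# The "Records" shape below is an assumption for illustration; real event
# payloads differ by trigger (S3, Kinesis, SNS, API Gateway, and so on).
import json


def lambda_handler(event, context):
    # Lambda invokes this only when a configured trigger fires, so nothing
    # is provisioned or billed while the function sits idle.
    records = event.get("Records", [])
    print(f"Received {len(records)} record(s)")
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(records)}),
    }
```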

In the world of event-driven functions, thinking about a standard way for people to publish events that make it easy to consume those events is going to be critical. Whichever platform gets there first will likely see a tremendous amount of traction.

Innovation in Machine and Deep Learning

AWS has had a machine learning service for a while now, and it was interesting to see a whole new suite of machine learning, deep learning and AI services, including Amazon Rekognition (image recognition), Amazon Polly (a text-to-speech deep learning service) and Amazon Lex (the natural language understanding engine inside Alexa, now available as a service).
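
As a rough illustration of how these services are consumed, here is a hedged sketch of calling Amazon Rekognition through boto3; the bucket, object key and thresholds are placeholders rather than recommended settings.

```python
# Hedged sketch: label an image stored in S3 with Amazon Rekognition via boto3.
# Bucket/key names are placeholders; detect_labels is the real API call.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```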

While the concrete use cases are still relatively sparse, we – like Amazon – believe this functionality will be integrated into virtually all applications in the future.

It is also clear that the proprietary data used to train models are what create differentiated and unique intelligent apps. The distinction between commodity and proprietary data is going to be critical as algorithms become more of a commodity.

Enterprise Credibility

In past years, whether intended or not, the perception was that taking a bet on AWS meant taking a bet on the public cloud: an unintended message of “all in on public cloud or nothing”. With the VMWare partnership, announced a couple of months ago and solidified on stage with VMWare’s CEO, Amazon is clearly supporting the hybrid infrastructure that many enterprises will be dealing with for years to come.

Equally noteworthy was the appearance of Aneel Bhusri, CEO of Workday, on stage to announce that Workday is moving to AWS as its primary cloud for production workloads. The public cloud is clearly no longer just the realm of dev and test; this is perhaps the strongest statement yet that the public cloud – and AWS in particular – is ready for enterprise production.

Moving Up a Layer From a Set of Discrete Services to Solution-based Services

One big theme that came through this year was the movement from a set of discrete services to complete solutions, both for developers and for operators of applications and services. The beauty of it all is that AWS continues down this path in a way that is highly empowering for developers and operators.

This approach really shone through during Werner Vogels’ keynote on Day 2. He laid out AWS’ approach to the “modern data architecture” and then announced how the new AWS Glue service (a fully managed data catalog and ETL service) fills in the missing pieces of an end-to-end modern data architecture on AWS.
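
To give a feel for what “fully managed” looks like in practice, here is a hedged sketch of driving Glue from boto3 as the service exists today; the crawler and job names are hypothetical and would have to be defined in Glue beforehand.

```python
# Hedged sketch of driving AWS Glue from boto3: refresh the managed data
# catalog with a crawler, then start an ETL job. Names are placeholders.
import boto3

glue = boto3.client("glue", region_name="us-west-2")

# Populate/refresh the data catalog (the crawler must already be defined).
glue.start_crawler(Name="example-clickstream-crawler")

# Kick off a Glue ETL job that transforms the cataloged data.
run = glue.start_job_run(JobName="example-clickstream-to-parquet")
print("Started job run:", run["JobRunId"])
```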

Eat the Ecosystem

One of the implications of AWS’ continued growth towards complete solutions is that they continue to eat into the domain of their partner ecosystem. This has been an implied theme in years past, but the pace is accelerating.

Some of the examples that drew the biggest notice:

• AWS X-Ray (analyze and debug distributed applications in production) which aims directly at current monitoring companies like New Relic, AppDynamics and Datadog
• AWS Lightsail (virtual private servers made easy) that, at $5/month, will put significant pressure on companies like Digital Ocean and Linode
• Rekognition (image recognition, part of the AI suite described above), which provides a service very similar to Clarifai, a company that had actually appeared just a few slides prior to the service announcement!

No one should be surprised that AWS’ accelerating expansion will step on the toes of partners. An implication, as @benkepes tweeted, is that the best way to partner and extend AWS is to go very deep for a given use case because AWS will eventually provide the most common horizontal scenarios.

Partner Success = AWS Success

Although some of the new services conflicted with partner offerings, the other side of the coin is that AWS continues to embrace partners and is invested in their success. They clearly understand that successful partners ultimately contribute to more success for AWS. Customers like Salesforce, Workday and Twilio taking a complete bet on AWS, a partner product like Chef becoming available as a fully managed service on AWS, a partner like Netflix excited to switch off its last datacenter because it now runs entirely on AWS, and a company like VMWare embracing AWS as its public cloud partner are all examples of how Amazon is systematically working to ensure that its partners remain successful on AWS, all of which drives more value and consumption of AWS.

Summary

The cloud opportunity is gigantic, and there is room for multiple large players to hold positions of meaningful strength. As of today, however, Amazon is not just the clear leader; it continues to stride forward at an amazing pace.

First published by GeekWire.

AWS re:Invent 5th Anniversary Preview: Five Themes to Watch

The 5th Annual AWS re:Invent is a week away and I am expecting big things. At the first ever re:Invent in 2012, plenty of start-ups and developers could be found, but barely any national media or venture capitalists attended. That has all changed: today, re:Invent rivals the biggest and most strategically important technology conferences of the year, with over 25,000 people expected in Las Vegas the week after Thanksgiving!

So, what will be the big themes at re:Invent? From an innovation perspective, I anticipate they will line up with the three layers of how we at Madrona think about the core of new consumer and enterprise applications hitting the market. We call it the “Future of Applications” technology stack, shown below.

Future of Applications (Madrona Venture Group Slide, November 2016)

The Themes We Expect at 2016 re:Invent

Doubling Down on Lambda Functions

First is the move “beyond cloud” to what is increasingly called serverless and event-driven computing. Two years ago, AWS unveiled Lambda functions at re:Invent. Lambda quickly became a market-leading “event-driven” functions service. The capability, combined with other microservices, allows developers to create a function that is at rest until it is called into action by an event trigger. Functions can perform simple tasks like automatically expanding a compute cluster or creating a low-resolution version of an uploaded high-resolution image. Lambda functions are increasingly being used as a control point for more complicated applications built on microservices architectures.
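
The image-resizing example maps naturally onto an S3-triggered function. Below is a hedged sketch of what that might look like in Python; the Pillow dependency, the destination bucket naming convention, and the thumbnail size are assumptions for illustration.

```python
# Hedged sketch: a Lambda function triggered by an S3 upload that writes a
# low-resolution copy of the image. Assumes Pillow is packaged with the
# function and that inputs are JPEGs; bucket names are placeholders.
from io import BytesIO

import boto3
from PIL import Image

s3 = boto3.client("s3")


def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original, shrink it in memory, then upload the result.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(BytesIO(original))
        image.thumbnail((256, 256))

        out = BytesIO()
        image.save(out, format="JPEG")
        s3.put_object(Bucket=bucket + "-thumbnails", Key=key, Body=out.getvalue())
```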

I anticipate that re:Invent 2016 will feature several large and small customers who are using Lambda functions in innovative ways. In addition, both AWS and other software companies will launch capabilities to make designing, creating and running event-driven services easier. These new services are likely to be connected to broader “serverless” application development and deployment tools. The combination of broad cloud adoption, emerging containerization standards and the opportunities for innovating on both application automation and economics (you only pay for Lambda functions on a per-event basis) presents the opportunity to transform the infrastructure layer in design and operations for next-generation applications in 2017.

Innovating in Machine and Deep Learning

Another big focus area at re:Invent will be intelligent applications powered by machine/deep learning trained models. Amazon already offers services like AWS ML for machine learning, and companies like Turi (prior to being acquired by Apple) leveraged AWS GPU services to deploy machine learning systems inside intelligent applications. But, as recently reported by The Information, AWS is expected to announce a deep learning service that will be somewhat competitive with Google’s TensorFlow offering. This service will leverage the MXNet deep learning library, which is supported by AWS and others. In addition, many intelligent applications already offered to consumers and commercial customers, including those from AWS stalwarts such as Netflix and Salesforce.com, will emphasize how marrying cloud services with data science capabilities is at the heart of making applications smarter and individually personalized.
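
For a sense of what building on MXNet looks like, here is a hedged sketch using its Gluon API; the network shape and the dummy data are arbitrary and chosen only to show the flavor of the library.

```python
# Hedged sketch: define and run a tiny network with MXNet's Gluon API.
# Layer sizes and the random input batch are illustrative only.
import mxnet as mx
from mxnet.gluon import nn

net = nn.Sequential()
net.add(
    nn.Dense(64, activation="relu"),
    nn.Dense(10),  # e.g. ten output classes
)
net.initialize(mx.init.Xavier())

# Forward pass on a dummy batch of 4 examples with 20 features each.
x = mx.nd.random.uniform(shape=(4, 20))
print(net(x).shape)  # (4, 10)
```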

Moving to Multi-Sense With Alexa, Chat and AR/VR

While AWS has historically offered fewer end-user-facing services, we expect more end-user and edge sensor/device interactions leveraging multiple user interfaces (voice, eye contact, gestures, sensory inputs) to be featured this year at re:Invent. For example, Amazon’s own Alexa Voice Services will be on prominent display in both Amazon products like the Echo and third-party offerings. In addition, new chat-related services will likely be featured by start-ups and potentially other internal groups at Amazon. Virtual and augmented reality use cases for areas including content creation, shared-presence communication and potentially new device form factors will be highlighted. Madrona is especially excited about the opportunity for shared presence in VR to reimagine how people collaborate with one another and with machines (all powered by a cloud back-end). As the AWS services stack matures, it is helping a new generation of multi-sense applications reach end users.

Rising Presence of AWS in Enterprises Directly and With Partners

Two other areas of emphasis, somewhat tangential to the future of applications, will be the growing enterprise presence at the conference and the rising profile of the AWS Marketplace. On the enterprise side, the dedicated enterprise track will be larger than ever, and some high-profile CIOs, like Rob Alexander from Capital One last year, will be featured during the main AWS keynotes. Vertical industry solutions for media, financial services, health care and more will be highlighted. An expanding mix of channel partners, which could include some surprising cloud bedfellows like IBM, SAP and VMWare, could be featured. In addition, with the recent VMWare and AWS product announcements, AWS could make a big push into hybrid workloads.

AWS Marketplace Emerging as a Modern Channel for Software Distribution

Finally, the AWS Marketplace for discovering, purchasing and deploying software services will increase in profile this year. The size and significance of this software distribution channel has grown significantly over the past few years. Features like metered billing, usage tracking and deployment of non-“Amazon Machine Image (AMI)” applications could see the spotlight.

Over the years, AWS has always surprised us with innovative solutions like Lambda and Kinesis, competitive offerings like Aurora databases and elastic load balancing, as well as customer-centric solutions like AWS Snowball. We expect to be surprised, and even amazed, at what AWS and partner companies will unveil at re:Invent 2016.

Takeaways from the 5th Annual Data Science Summit

The 2016 Data Science Summit just wrapped up in San Francisco, and it was bigger and better than ever. With more than 1,300 attendees across two days, the conference combines business and academic leaders in a broad mix of machine learning areas, bringing together the latest in research with the state of the art in industry. Many of the speakers are both leaders at key technology companies and involved with top research institutions in the U.S.

Carlos Guestrin, with both Turi (previously Dato) and University of Washington, framed the world of intelligent applications including the opportunities for automating machine learning processes, creating online, closed-loop systems and increasing trust in machine learning applications.

Pedro Domingos, author of The Master Algorithm and also a UW professor, outlined the five schools of machine learning, their underlying philosophical approaches and the types of problems they best address.

Jeff Dean from Google highlighted their powerful new TensorFlow framework along with its rapid adoption and independent forks in the open-source community. Jeff emphasized that TensorFlow has potential beyond deep learning as an end-to-end system for machine learning applications.

While Jeff highlighted several Google ML use cases, Robin Glinton from Salesforce.com and Jure Leskovec from Pinterest (and Stanford University) impressed the audience with detailed examples of how to build and continually improve intelligent applications.

Stepping back, there are several observations from this conference that generally confirm and expand upon learnings from Madrona’s recent AI/ML Summit in Seattle.

  1. Deep learning is both real and overhyped. Deep learning is very well suited for image recognition problems and is growing in areas like speech recognition and translation. However, deep learning is only one branch of machine learning and is not the best approach for many intelligent application needs.
  2. Greater agility is required for intelligent applications in production. Agility comes in many forms, including automating development processes like data munging and feature engineering. It also applies to model training and ongoing model iterations for deployed intelligent apps. Automated, end-to-end pipelines that continually update production applications are rapidly becoming a requirement. These applications, like the Netflix and Spotify recommendations consumers experience, are increasingly referred to as “online” applications due to their agility in both making real-time recommendations and bringing data back to update models (a minimal sketch of this loop follows this list).
  3. “Closed” loops and “humans-in-the-loop” co-exist. Many intelligent applications become business solutions by involving humans to verify, enhance or act on machine outputs. These “humans-in-the-loop” cases are expected to persist for many years. However, intelligent applications increasingly require automated, closed-loop systems to meet narrow business requirements for performance and results. For example, product recommendations, fraud predictions and search results are expected to be more accurate and relevant than ever and delivered in milliseconds!
  4. The strategic value of differentiated data grows by the day. Intelligent applications are dependent on data, metadata and the models this data trains. Companies are increasingly strategic about the data they collect, the additional data they seek and the technologies they use to more rapidly train and deploy data models. Google’s internal use cases leveraging data like RankBrain are expanding. And their decision to “open source” data models for image and speech recognition built on TensorFlow is a leading example of engaging the outside world to enhance a model’s training data.
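
Here is the minimal sketch referenced in point 2: an incremental (“online”) training loop, using scikit-learn’s partial_fit as a stand-in for a production pipeline. The event source and feature dimensions are placeholders.

```python
# Hedged sketch of an "online" model that is updated incrementally as new
# labeled interaction data arrives, rather than retrained in one batch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # e.g. clicked / did not click


def fetch_new_interactions():
    # Placeholder for reading a batch of fresh, labeled events from a stream.
    X = np.random.rand(32, 10)
    y = np.random.randint(0, 2, size=32)
    return X, y


for _ in range(5):  # in production this loop would run continuously
    X_batch, y_batch = fetch_new_interactions()
    model.partial_fit(X_batch, y_batch, classes=classes)  # incremental update

print(model.predict(np.random.rand(3, 10)))
```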

Overall, I found the conference extremely energizing. There was substantial depth and a diversity of backgrounds, ideas and experiences amongst the participants. And, the conference furthered the momentum in moving from academic data science to deployed intelligent applications.

Machine Learning and AI. Why Now?

Trying to go to the moon, but today we’re at the top of a tree

You can hardly talk to a technology executive or developer today without talking about artificial intelligence, machine learning, or bots. Madrona recently hosted a conference on ML and AI bringing together some of the biggest technology companies and innovative startups in the Intelligent Application ecosystem.

One of the key themes for the event emerged from a survey of the attendees. Everybody who responded to the survey said that ML is either important or very important to their company and industry. However, more than half of the respondents said their organizations did not have adequate expertise in ML to be able to do what they need to do.

Here are the other top five takeaways from the conversations at the summit.

Every application is going to be an intelligent application

If your company isn’t using machine learning to detect anomalies, recommend products, or predict churn today, you will start doing it soon. Because of the rapid generation of new data, availability of massive amounts of compute power, and ease of use of new ML platforms (whether it is from large technology companies like Amazon, Google, Microsoft or from startups like Dato), we expect to see more and more applications that generate real-time predictions and continuously get better over time. Of the 100 early-stage start-ups we have met in the last six months, 90%+ of them are already planning to use ML to deliver a better experience for their customers.
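
As one small, hedged illustration of the anomaly-detection use case mentioned above, the sketch below uses scikit-learn’s IsolationForest on synthetic data; a real application would feed it behavioral or operational metrics.

```python
# Hedged sketch: flag anomalous records with an Isolation Forest.
# The feature matrix is synthetic and exists only to show the API shape.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # typical activity
outliers = rng.uniform(low=6.0, high=8.0, size=(10, 4))  # unusual activity
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(X)

labels = detector.predict(X)  # -1 marks anomalies, 1 marks normal points
print("flagged anomalies:", int((labels == -1).sum()))
```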

Intelligent apps are built on innovations in micro-intelligence and middle-ware services

Companies today fall broadly into two categories: those building some form of ML/AI technology and those using ML/AI technologies in their applications and services. There is a tremendous amount of innovation currently happening in the building-block services (aka middleware services), which include both data preparation services and learning services, or models-as-a-service providers. With the advent of microservices and the ability to seamlessly interface with them through REST APIs, there is an increasing trend for learning services and ML algorithms to be used and re-used rather than re-written from scratch over and over again. For example, Algorithmia runs a marketplace for algorithms that any intelligent application can use as needed. Combining these algorithms and models with a specific slice of data (use-case specific within a particular vertical) is what we call micro-intelligence, which can be seamlessly incorporated into applications.
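
A hedged sketch of this micro-intelligence pattern appears below: an application calling a hosted model over a REST API rather than re-implementing the algorithm. The endpoint, API key, and response format are hypothetical placeholders, not any particular vendor’s API.

```python
# Hedged sketch: consume a hosted model as a building-block service over REST.
# The endpoint URL, auth scheme, and response shape are hypothetical.
import requests

ENDPOINT = "https://api.example-models.com/v1/sentiment"  # hypothetical
API_KEY = "YOUR_API_KEY"


def score_sentiment(text):
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "positive", "score": 0.93}


print(score_sentiment("The new release is fantastic."))
```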

Trust and transparency are absolutely critical in a world of ML and AI

Several high-profile experiments with ML and AI came into the spotlight in the last year. Examples include Microsoft Tay, Google DeepMind AlphaGo, Facebook M, and the increasing number of chat bots of all kinds. The rise of natural user interfaces (voice, chat, and vision) provides very interesting options and opportunities for us as human beings to interact with virtual assistants (Apple Siri, Amazon Alexa, Microsoft Cortana, and Viv).

There are also some more troubling examples of how we interact with artificial intelligences. For example, at the end of one online course at Georgia Tech, students were surprised to learn that one of the teaching assistants (named Jill Watson after the IBM Watson technology) they were interacting with throughout the semester was a chat bot and not a human being. As much as this shows the power of technology and innovation, it also brings to mind many questions around the rules of engagement in terms of trust and transparency in a world of bots, ML and AI.

Understanding the ‘why’ behind the ‘what’ is often another critical component of working with artificial intelligence. A doctor or a patient will not be happy with a diagnosis that simply says there is a 75% likelihood of cancer and that Drug X should be used to treat it. They need to understand which pieces of information came together to create that prediction or answer. We absolutely believe that we should have full transparency with regard to ML and think through the ethical implications of the technology advances that will be an integral part of our lives and our society going forward.

We need human beings in the loop

There have been a number of conversations about whether we should be afraid of AI-based machines taking over the world. As much as advances in ML and AI are going to help with automation where it makes sense, it is also true that we will absolutely need human beings in the loop to create the right end-to-end customer experiences. At one point, Redfin experimented with sending ML-generated recommendations to its users. These machine-generated recommendations had a slightly higher engagement rate than users’ own search and alert filters. However, the real improvement came when Redfin asked its agents to review recommendations before they were sent out. After agents reviewed the recommendations, Redfin was able to use the agents’ modifications as additional training data, and the click-through rate on recommended houses rose significantly. Splunk re-emphasized this point by describing how IT professionals play a key role in deploying and using Splunk to help them do their jobs better and more efficiently; without these humans in the loop, customers won’t get the most value out of Splunk. Another company, Spare5, is a good example of how humans are sometimes required to train ML models by correcting and classifying the data going into the model. A common adage in ML is garbage in, garbage out: the quality and integrity of the data are critical to building high-quality models.

ML is a critical ingredient for intelligent applications. But you may not need ML on day one.

Machine learning is an integral part and critical ingredient in building intelligent applications, but the most important goals in building intelligent apps are to build applications or services that resonate with your customers, provide an easy way for your customer to use your service, and continuously get better over time. To use ML and AI effectively, you often need to have a large data set. The advice from people who have done this successfully before is to start with the application and experience that you want to deliver, and in the process think about how ML can enhance your application and what data set you need to collect to build the best experience for your customers.

In summary, we have come a long way in the journey toward every app being an intelligent app, but we are still in the early stages. As Oren Etzioni, CEO of the Allen Institute for AI, said in one fireside chat, we have made tremendous progress in AI and ML, but declaring success in ML today is like “climbing to the top of a tree and declaring we are going to the moon.”

Previously published by TechCrunch.

The Intelligent App Ecosystem (It’s not just bots!)

The Intelligent App Ecosystem graphic (full image available as a 1.8 MB PDF)

Today we interact with many intelligent applications like the Google and Bing search engines, Spotify and Netflix media services, and the Amazon shopping experience. The machine learning technologies that power these services are becoming mainstream and setting the stage for the Intelligent App Era.

Application intelligence is the process of using machine learning technology to create apps that use historical and real-time data to make predictions and decisions that deliver rich, adaptive, personalized experiences for users.

We believe that every successful, new application built today will be an intelligent application. The armies of chat bots and virtual assistants, the ecommerce sites that show the right recommendations at the right time, and the software that detects anomalous behavior for cybersecurity threats, to name a few, are all built to learn and create continuously improving experiences. In addition, legacy applications are becoming more and more intelligent to compete and keep pace with this new wave of applications.

Now is an exciting time to be investing in the broader intelligent app ecosystem because several important trends are coming together in application development:

  • The availability of massive computational power and low-cost storage to feed machine learning models,
  • The ease with which developers can take advantage of data sources and machine learning techniques,
  • The adoption of microservices as a development paradigm for applications, and
  • The proliferation of platforms on which to develop applications, in particular platforms based on “natural user interfaces” like messaging and voice.

We have spent time thinking about the various ways Intelligent Apps emerge – and how they are built. This Intelligent App Stack illustrates the various layers of technology that are crucial to the creation of Intelligent Apps. (Please send us feedback on this world view! @SSomasegar @danielxli)

As investors we like to think about the market dynamics of major industry shifts, and the rise of intelligent apps will certainly create many new opportunities for startups and large technology companies alike. Here are some thoughts on the key implications for companies operating at various layers of the intelligent app stack:

“Finished Services”: Applications will define the end user’s experience with machine learning
At the application layer there will be two primary classes of applications: net-new apps that are enabled by application intelligence and existing apps that are improved by application intelligence.

Net-new apps will need to solve the tough problem of determining how much end users will pay for “artificial intelligence” and how to ensure they capture a portion of the value delivered to users. More broadly, it will be interesting to see if our thesis that the value proposition of machine learning will primarily be a revenue generator comes true.

Also because of the importance of high-quality, relevant data for machine learning models, we think that use-case specific or industry-specific applications will be the most immediate pockets of opportunity at the Finished Services or application layer. Today, we see the main categories of use-case specific applications as autonomous systems, security and anomaly detection, sales and marketing optimization, and personal assistants. We are also seeing a number of interesting vertically focused intelligent applications especially serving the retail, healthcare, agriculture, financial services, and biotech industries.

The killer apps of the last generation were built by companies like Amazon for ecommerce, Google for search and advertising, Facebook for social, Uber for transportation, and Netflix for entertainment. These companies have a significant head-start in machine learning and user data, but we believe there will be apps that are built from the ground up to be more intelligent that can win in these categories and brand new categories that are enabled by application intelligence.

Interfaces: New interfaces will transform applications into cross-platform “macro-services”
As we think about how new intelligent applications will be developed, one significant approach will be the transformation of an “app” to a service or experience that can be delivered over any number of interfaces. For example, we will see companies like Uber build “services” that can be delivered via an app, via the web, and/or via a voice interface.

It will also be easier for companies to deliver their services across platforms as they design their apps using a microservices paradigm where adding a new platform integration might be as simple as adding a new API layer that connects to all of the existing microservices for authentication, product catalog, inventory, recommendations, and other functions.
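
A hedged sketch of that thin integration layer appears below, using Flask to expose an existing set of microservices to a new surface (here, a voice platform); the internal service URLs and response shapes are hypothetical.

```python
# Hedged sketch: a thin new API layer that fans out to existing microservices
# so the same "service" can be delivered on a new platform. URLs are placeholders.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

AUTH_SERVICE = "http://auth.internal/verify"          # hypothetical
RECOMMENDATION_SERVICE = "http://recs.internal/top"   # hypothetical


@app.route("/voice/recommendations")
def voice_recommendations():
    # Only this routing layer is new; authentication and recommendations are
    # the same microservices that already back the web and mobile apps.
    token = request.args.get("token", "")
    user = requests.get(AUTH_SERVICE, params={"token": token}, timeout=3).json()
    recs = requests.get(
        RECOMMENDATION_SERVICE, params={"user_id": user["id"]}, timeout=3
    ).json()
    return jsonify(recs)


if __name__ == "__main__":
    app.run(port=8080)
```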

The proliferation of new platforms such as Slack, Facebook Messenger, Alexa, and VR stores will also be beneficial for developers because platforms will become more open, add features that make developers’ lives easier, and compete for attention with offerings such as investment funds.

Finally, at the interface layer, we see the “natural interfaces” of text, speech, and vision unlocking new categories such as conversational commerce and AR/VR. We are incredibly optimistic about the future of these interfaces as these are the ways that humans interact with one another and with the world.

Building Blocks and Learning Services: Intelligent building blocks and learning services will be the brains behind apps
As companies adopt the microservices development paradigm, the ability to plug and play different machine learning models and services to deliver specific functionality becomes more and more interesting. The two categories of companies we see at this layer are the providers of raw machine intelligence and the providers of trained models or “Models as a Service.”

In the first category, companies provide the “primitives” or core building blocks for developers to build intelligent apps, like algorithms and deployment processes. In the second category, we see intermediate services that allow companies to plug and play pre-trained models for tasks like image tagging, natural language processing, or product recommendations.

These two categories of companies provide a large portion of the value behind intelligent apps, but the key question for this layer will be how to ensure these building blocks can capture a portion of the value they are delivering to end users. IBM Watson’s approach to this is to provide developer access to its APIs for free but charge a 30% revenue share when the app is released to customers. Others are charging based on API calls, compute time, or virtual machines.

The key differentiators for companies in this layer will be the ability to provide a great user experience for developers and the accuracy and performance of machine learning algorithms and models. For complicated but general problems like natural language understanding, it will likely be easier and more performant to use a pre-built model from a provider who specializes in generating the best data, models, and processes. However, for specialized, business-specific problems, startups and enterprises will need to build their own models and data sets.

Data Collection and Prep: The difficult and boring tasks of data collection and preparation will get smarter
Before data is ready to be fed into a machine intelligence workflow or model, it needs to be collected, aggregated, cleaned, and prepped. Sources of data for consumer and enterprise apps include photos and video, websites and text, customer behavior data, IT operations data, IOT sensor data, and data from the web.

After applications are instrumented to collect the right pieces of raw data, the data needs to be transformed into a machine-ready format. For example, companies will need to take unstructured data like text documents and photos and transform it into structured data (think of rows and columns) that is ready for a machine to review.
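
As a minimal sketch of that transformation, the snippet below turns a handful of free-text documents into a structured, machine-ready feature matrix using scikit-learn’s TfidfVectorizer; the documents are toy examples.

```python
# Hedged sketch: turn unstructured text into structured rows and columns.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Customer reported the checkout page crashed on mobile",
    "Love the new recommendations, found exactly what I wanted",
    "Shipping was late and support never replied",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)  # rows = documents, columns = terms

print(X.shape)  # (3, number_of_distinct_terms)
print(vectorizer.get_feature_names_out()[:10])
```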

The important part of this step is realizing that the quality of a model is highly dependent on the quality of its input data. Creating bots or ‘artificial intelligences’ without high quality training data can lead to unintended consequences (see Microsoft’s Tay), and the creation of this training data often relies on semi-manual processes like crowdsourcing or finding historical data sets.

The other area of this space to keep an eye on is the companies that have traditionally served as “dumb” pipes for data sources like clickstream data or application performance logs. Not only will they try to build predictive and adaptive features, they will also see competition from intelligent services that draw insights from the same data sources. This will be an area of innovation for finance, CRM, IT Ops, marketing, HR, and other key business functions that have traditionally collected data without receiving immediate insights. For example, HR software will become better at providing feedback for interviewers and highlighting the best candidates for a position based on historical data from previous hires.

Data Infrastructure: Intelligent apps will be built on the “Big Data” infrastructure
The amount of data in the world is doubling every 18 months, and thanks to this explosion in big data, enterprises have invested heavily in storage and data analysis technologies.

Projects like Hadoop and Spark have been some of the key enablers of the larger application intelligence ecosystem, and they will continue to play a key role in the intelligent app stack. Open source will remain an important consideration when choosing an analytics infrastructure because customers want to see what is ‘under the hood’ and avoid vendor lock-in when deciding where and how to store their data.

Within the IaaS bucket, each of the major cloud providers will compete to run the workloads that power intelligent apps. Already we are seeing companies open source key areas of IP such as Google’s TensorFlow ML platform, in a bid to attract companies and developers to their platform. Google, in particular, will be an interesting company to watch as they give users access to their machine learning models, trained on some of the world’s largest data sets, to grow their core IaaS business.

Finally, hardware companies that specialize in storing and managing the massive amount of photos, videos, logs, transactions, and IOT data will be critical to help businesses keep up with the new data generated by intelligent applications.

There will be value captured at all layers of this stack, and there is the opportunity to build significant winner-take-all businesses as the machine learning flywheel takes off. In the world of intelligent applications, data will be king, and the services that can generate the highest quality data will have an unfair advantage from their data flywheel – more data leading to better models, leading to a better user experience, leading to more users, leading to more data.

Ten years from now, all applications will be intelligent, and machine learning will be as important as the cloud has been for the last 10 years. Companies that dive in now and embrace intelligent applications will have a significant competitive advantage in building the most compelling experiences for their users and as a result, the most valuable businesses.

This post was previously published on TechCrunch.com

 

‘Intelligent Apps’: Seattle Area At Forefront Of Next Big Thing

The Seattle Times 5/11/16 – Chances are the entity managing your favorite smartphone app or Internet service isn’t a person.

Algorithms are setting the price of your airline ticket and hailing your Uber driver. They’re placing the vast majority of stock-market trades.

And we’re only at the beginning of a transition that is going to make the algorithms behind the software people interact with better able to understand and react to humans, technologists at a gathering of Seattle’s burgeoning artificial-intelligence industry said Wednesday.

“Every application that is going to get built, starting today and into the future, is going to be an intelligent app,” said S. “Soma” Somasegar, a venture partner with Madrona Venture Group and a former Microsoft executive.
