Founded and Funded – AI, NLP and Technology in the Physician’s Office with Saykara

(Dr. Graham Hughes and Harjinder Sandhu of Saykara)

On the newest Founded and Funded podcast, join Madrona managing director Tim Porter as he sits down with Saykara founder Harjinder Sandhu and president Dr. Graham Hughes to discuss the future of AI in the healthcare provider’s domain. Saykara is an iOS application for physicians, built on AI, natural language processing and speech recognition, that helps them significantly decrease the amount of time they spend charting – entering information into the electronic medical record. Saykara listens and assists the physician, much like a human scribe might, interpreting and transforming doctor-patient conversations into the salient content required for notes, orders, referrals, and scheduling.

But building a business around natural language processing and speech recognition, as you’ll hear, was no easy task. Harjinder and Graham reflect on how these two technologies have evolved – and how combining them with AI has led to an application that is changing physicians’ lives across the country. The two also talk about the challenges and thrills of building a startup that is doing something entirely new.

When Madrona first invested in Saykara in 2016, we were excited about the technology trends around voice, machine learning and natural language processing, and it’s rewarding to see this come to fruition and change physicians’ lives for the better.

Listen here or on the podcast platform of your choice!

Transcript

Tim

Hi, Harjinder. Hi Graham. Great to be here with you today in this fully remote virtual world. This is my first podcast recording where we’re not all in one little studio, but nice to see you both today.

Saykara is really the first truly intelligent AI assistant that automates physician charting. So, we’ll get more into that. Quick background on Harjinder and Graham: we’re going to talk a little bit today about Saykara, a little bit about startups and the challenge of founding and scaling startups successfully, and a little bit about things going on in healthcare IT broadly, of which Saykara is right in the middle. Harjinder is the co-founder and CEO of the company. He started his career as a professor of computer science at York University, and then co-founded a company called Med Remote, which was really one of the first automated medical speech transcription companies. It was acquired by Nuance and helped build what’s become a powerhouse business for Nuance around healthcare speech services. This is his second startup since Med Remote. He co-founded a company called Twistle, also in healthcare IT, so he’s a veteran of healthcare and ML-related startups. So, lots of great insights there. And Graham joined as president of Saykara last year and comes with an interesting set of backgrounds as well. Most recently, he was CEO of Sutherland Healthcare Solutions, and before that, his experience ranged from being a doctor (he’s an MD himself), to building healthcare software systems like EMRs and working at GE Healthcare, to working on machine learning applied to healthcare at SAS. So, a great wealth of experience is coming to bear to build Saykara.

 

Let’s jump in. Harjinder, as I mentioned, you’ve been working at the forefront of speech rec and ML in healthcare for over 20 years. Can you just give us a brief history, a primer, on how those technologies developed and where you see things today?

 

Harjinder

Healthcare has been one of the key drivers in the adoption of speech recognition. While in other industries voice and speech have been interesting modalities used alongside other things, in healthcare voice has always been one of the primary modalities that providers have used for documentation. Whenever a physician, a provider, sees a patient, they’re required to document that encounter. They need to do so for legal, billing and clinical purposes. If you’re a typical physician and you see 20 to 25 patients a day, every encounter you have with each of those patients requires a page or two of documentation. If you add all of that up in a typical day, that’s a lot of documentation that a physician has to do.

Back in the day before the internet, physicians would dictate their notes and you’d have transcriptionists listening to those dictations and typing them up. Around 2000 or so, a colleague and I, along with a few others, started looking at how to apply speech recognition to this natural use of voice: rather than have people sitting there typing up the dictation, how do we apply speech recognition and make this process much more cost effective? Back in those days, transcription was about a $10 to $20 billion industry in this country. So that’s really where speech recognition found a home. In 2000, speech recognition was still very much in its early days; it wasn’t really ready for this space. It took us about three years to get speech recognition to the point where it was actually useful in this space, and useful actually meant not that it was replacing transcriptionists, but that it was augmenting the transcriptionists.

 

So, if you think about the cost of transcription, the spend was in the millions of dollars for a typical health system. Our goal was to use speech recognition to augment the transcriptionists in a way that reduced those transcription costs. We were able to do so very successfully, and speech recognition continued to improve over the years. The use of speech recognition as a tool directly for physicians started around the same time, but that was really just for early adopters who would use it for documenting care. It was really the wealth of data that we were able to capture through this speech recognition process, where we were augmenting transcriptionists, that gave rise to the real success of speech recognition in healthcare.

 

NLP, natural language processing, was always a secondary focus, and that really came about because, again, in healthcare, if you think about the use case of physician documentation: 20 years ago, almost all medical documentation was in the form of these narrative notes that were being created through the dictation process or else being typed up by physicians. So the idea, and many people were thinking about this back then, was that as long as medical documentation is in a narrative format, it’s not really amenable to any kind of automated process. You can’t do simple analytics on it. You can’t do any kind of drug-drug interaction checking if all you have is a bunch of narrative that talks about these medications and allergies and all that kind of stuff. So, we had started working on ways to interpret what the system was hearing through this dictation process, and that NLP work continues to this day. It actually turned out that NLP was a much harder challenge when it came to interpreting natural language dictations by physicians than speech recognition was. Speech recognition has of course gotten to the point today where it’s a commodity. Lots of companies do speech recognition very, very well, but NLP continues to be a challenge.

 

Tim

That’s great. When we originally invested in Saykara a few years ago, we loved the technology trends around voice, around machine learning and around NLP, and as we dug into the use case and the problem you’re solving here for physicians, we just thought it was one of the great applications of this set of technologies to solve a really burning pain point for customers. I mean, you’ve both seen large companies and small companies succeed and fail, maybe fail more often than not, trying to crack the nut around these technologies in healthcare. Where have you seen them succeed and fail, and what are the characteristics that get to the “why now” for Saykara?

 

Harjinder

I would say in terms of success, speech recognition by and large has been a tremendous success. It’s taken many years, but it’s been successful. The primary source of failure, I think, has been that NLP never kept pace with the quality of speech recognition. So, again, what everybody wanted to do was put structured data within the medical record, and a lot of focus went into how to take what physicians are saying through the speech recognition process and break it apart into discrete data components. You want to be able to put what medications a patient’s on, what problems they’re being diagnosed with, what procedures they’re undergoing, and then all the details around those, directly into discrete fields in the EHR. And that, by and large, failed in years past. As for the tail end of your question, the “why now” for Saykara: largely, what we’re looking at right now is the confluence of speech recognition and NLP capabilities that are now able to do what we weren’t able to do 10 years ago. So, it’s not a bold prediction anymore to say that we are now on the cusp of having automated systems that can both transcribe and interpret what physicians are saying and put that data directly into the medical record.

 

Tim

Graham, do you want to add anything there?

 

Graham

What I would say is that electronic health record systems and other vendors who’ve tried to look at this space have often made it too complicated and too much of a burden on the physician, or, on the back end, not granular enough to really be usable and meaningful for the physician. So, the confluence of technologies and the pressures that doctors are under, I think, just makes this the right time.

 

Tim

Yeah, that’s great. So, the confluence of some big technology trends that have been building for years now, and some big healthcare trends. I want to come back and talk more about some of those healthcare-specific trends, but maybe describe exactly what Saykara is doing, right? We sort of shorthand it as an AI virtual scribe. What is that, what makes Saykara different, and what are we really solving for physicians in the healthcare system?

 

Harjinder

Ultimately physicians just want to talk to their patients, and they want to provide care. They don’t want the tedium of documentation. So, what Saykara focuses on is listening in on doctor-patient conversations, interpreting those conversations, and generating out of each conversation a note that a physician would otherwise have to create. Ultimately, the holy grail in this space is that a system such as ours can listen in on that doctor-patient encounter and simply interpret and create that note. I’ve already mentioned that NLP has come a long way, but it remains a very difficult challenge to figure out how to interpret conversations. It’s hard enough to interpret a dictation, let alone a two-way conversation between a doctor and a patient, and sometimes of course you have more than just two parties in the room. And so, I think the differentiator here for Saykara as a company is our ability to do this successfully in an augmented fashion. What I mean by that is that having a system that can do this completely autonomously, without any human assistance, remains our goal, our vision, and something that we’re working toward and making progress toward. But in the meantime, we use an augmented AI solution and have humans that are helping the system to learn. What differentiates us from virtually everybody else in this space, I think, is our ability to actually make that AI system continuously learn from the human in the loop. There are a lot of companies in this space that are purely human-only solutions, and they may talk about how they’re trying to incorporate AI into their solutions, but they’re by and large just human transcription companies. What we’ve been able to do is create a platform that incorporates a combination of AI and human, and on many specific kinds of tasks within that encounter, the AI is now actually able to work completely autonomously. That capability is getting better and better over time.

 

Tim

We have this broad investment theme around intelligent applications, and I think a core piece of successful intelligent applications needs to be this continuous learning loop. We were impressed from the beginning by your vision for how to implement that. Fully automating this process with a learning system, while being able to fully delight doctors right away with the product you have today (a product that will continue to get more efficient over time), is a really insightful and exciting part of Saykara and what differentiates you. We also always think about, with an investment, who all the constituents are that matter. Clearly, helping physicians is the number one driver: it helps them with their documentation burden. But health systems like this too, because physician burnout is a key issue for them, and they’re not getting the data into the EHR that they originally intended to be there. And patients, we’ve learned, like that it makes things more transparent: “What is my physician doing when I’m trying to tell her or him what’s wrong and they’re typing away on the screen and not paying attention to me?” So, this value prop for the physician, for the system and for the patient is, I think, really important and exciting about this space. But Graham, not to put blame on you, but you built some of these EHR systems. How did we get to this point, where the EHR was supposed to capture all this information and it’s not fully succeeding at that, and yet physicians hate it, so it’s causing friction and problems on both ends? And maybe talk a little bit about how you got excited about Saykara’s solution and came on board here in the last year.

 

Graham

Yeah. That’s true. I kind of joke to Harjinder that I’m here to pay my penance for building these things. What the electronic health record companies tried to do, to their credit, was to make these systems as comprehensive as possible, so that you could cover all angles. Unfortunately, though, by trying to accommodate all of those things, they created a monster, which meant that doctors ended up getting dragged into using the computer screen and pulled away from the patient interaction. You talked about burnout. That’s really a feeling that about 50% of the doctors in this country have, depending on specialty. They’re feeling disempowered, they’ve lost control, they don’t really have the passion for the work anymore, and they’re doing the best that they can in a terrible situation, sometimes spending up to three hours at night trying to catch up on the documentation. With that said, given my background, having been involved in physician workflow all my life — as a physician, then working in this space — and having spent so much time working on natural language processing, analytics and advanced AI at SAS, it just felt to me that this was a problem that needed to be solved. And I had the great fortune of meeting Harjinder. I love to work with people who live in the future, developing assets that we will be using in four, five or ten years’ time. This one was incubated well enough that I could see the great potential, but also the reality of how this could work. So, I’m excited about it, Tim, and I get more excited every day seeing what we’re doing with customers and the promise of the tech.

 

Tim

That’s great. It’s really a unique set of backgrounds that allowed you to see this problem, and opportunity, from all sides. Strategically the opportunity made a lot of sense to you, and your experience allowed you to see it from various angles. Now, a lot of listeners to the podcast are maybe not in healthcare, but they may be at a big company thinking about going to a smaller company. So, maybe talk for a minute on a personal level about making that decision. It made sense strategically, but then you’re going to that work every day. And look, Saykara has hundreds of physicians on the system; Harjinder had built the company to a really interesting place, so it wasn’t like this was a de novo startup when you joined, but compared to Sutherland, it’s much smaller. Talk about what that transition was like, and how you decided this was the right time for you personally to make that kind of move, once you saw that strategically the company and the solution made great sense.

 

Graham

Yeah, you’re right. A lot of organizations at the larger level are somewhat risk averse because they’re trying to keep the performance machine, the existing engine, running, and innovation, which could potentially disrupt your existing business, is problematic. And you’ve probably seen, Tim, many, many organizations that have struggled with how to incubate new offerings out of large companies. I had found that my passion lay with that type of innovation and transformation. And for me, having run large organizations, it just felt right to be working with a bunch of really smart people, people who I like, who have deep experience in the industry and strong tech, and who are definitely thinking about the transformation in healthcare that needs to happen. I wanted to take everything that I’d learned at large product and services organizations, bring it into Saykara, and see what we could do. I love the mission, love the challenge, love the people. Let’s do something really meaningful to transform the lives of hopefully hundreds of thousands of doctors.

 

Tim

You used the word passion, and as we’ve seen people successfully go from bigger companies to smaller ones, that’s the number one thing. The second one is to kind of revel in getting your hands dirty, going and doing stuff and not just directing. We’ve seen that as you’ve jumped in with customers, analytics and internal processes, and we’ll get into a little more of that, but back to the offering. One of the fun things about Saykara is that we all know doctors, and we’ve heard about this burnout issue. What are we seeing in terms of receptivity to using this type of technology? Where are we in the cycle from an adoption standpoint?

 

Graham

It’s interesting, because you often think of cool new tech and think, hey, this is going to appeal to the PlayStation generation of doctors — people who grew up with technology in their back pocket and can’t remember life without Google — and that maybe they would be the only folks really drawn to this. In fact, nothing could be further from the truth. There are physicians who started their career with the idea of just focusing on the patient, and whether they went through the years of dictation and transcription or whatever, they immediately understand the idea that, hey, this thing would get me away from a computer and allow me to focus. By reducing all this hassle that’s been brought into their lives over the last 10 years, it’s actually like coming home. For doctors, it’s what they got into medicine for. The time’s right, and we’re seeing an incredible amount of demand out there in the industry, not surprisingly. Whether it’s doctors in large health systems, individual group practices, large multispecialty practices or ambulatory surgery centers, it’s all about getting the friction out of their day-to-day life. That’s probably no different from many, many other industries. If you can make it easy for someone to get their job done, and you can make it as transparent as possible, people are going to want to do that.

 

Tim

You refer to the product as a virtual scribe, an AI virtual scribe. Some physicians are actually fortunate enough to have a person who follows them around, and I believe that’s why it’s called a scribe. One of the things I loved about Saykara is that it really democratizes access to that type of service, right? Today, it’s the very high-end specialties that can afford scribes, while Saykara is at a price point and capability where family docs can have access to this. Also, Saykara doesn’t leave for medical school every one or two years, forcing you to retrain someone else. What are your thoughts on that — this AI option versus an in-person scribe?

 

Graham

There’s been an absolute explosion of medical scribes over the last few years, but the problem is that bringing another person into the room for every visit is a very expensive model. As you said, scribes are also difficult to train up: you try to bring someone on board and make sure they understand your specialty, the way you work, the way you document and the way you work with patients. Oftentimes, patients don’t feel that comfortable having another person in the room either. So, we’ve found a tremendous amount of receptivity to the idea that there’s this virtual AI assistant on a mobile device; it just sits there on your mobile phone and is unobtrusive. The doctor says, “Hey, do you mind if I use my AI assistant — this system that’ll help me make sure that we capture everything?” And the patient has always agreed. I don’t think I’ve ever seen or heard of anyone who’s got a problem with it.

 

Tim

Not to mention, there haven’t been a lot of people standing together in rooms over the last few months. How has COVID impacted this whole market? The move to telemedicine is well understood and well documented, but is this new normal a tailwind for Saykara? Is it a challenge? What have you seen?

 

Graham

For us, it’s a tremendous tailwind. Our system works just great for telemedicine visits. All it has to do is be able to really hear the physician; if it can hear the patient too, even better. If you’re on a video call with a patient, it’s very difficult to be on your computer, hunting and pecking and clicking through menus. So yeah, in telemedicine visits, when you’re face to face with a patient in that way, there’s the ability to just pick up on the voice. Doctors love being able to keep a hundred percent attention on the patient through telemedicine visits, and Kara picks it up just as it would if the patient were in the room.

 

Tim

So, Graham shared some color on what it’s been like and why he made a move from a bigger company to an earlier-stage company. Harjinder, I guess at this point you’re a serial entrepreneur. This is number three. What made you take this leap again? Why do you keep going after these healthcare problems and starting companies? Maybe share a little of the motivation behind starting Saykara, beyond the technical and strategic reasons you shared earlier.

 

Harjinder

Uh, in a word, I like punishment, I guess. Tim, as you probably know, having invested in a lot of companies, the highs and the lows of a startup are pretty extreme, versus being in a big company where you have a salary regardless of your successes or failures. In a startup, you can go from one extreme to another very rapidly. The thrill that I get from a startup is particularly around building solutions that people want to use and that make a difference in people’s lives. I would put a premium on that. When physicians use our solution, it makes a real difference. We hear from physicians that they’re spending hours, literally hours, less per day on documentation, and the physician burnout problem is so huge, as Graham also talked about. But the other side of that, as I said, is getting to work on unsolved, really difficult problems. Conversational AI is one of those problems. Twenty years ago, speech recognition was one of those problems. Getting to join these two things together, the joy of users getting satisfaction out of using your system and the challenge of building something that’s really complex and hard and has never been done before: that’s what wakes me up every day and gets me excited about working.

 

Tim

You were mostly joking with the punishment comment, but is it fair to say that it largely refers to how long sales cycles can be? That the solution solves the problem, and yet there can be a fair amount of friction in getting successfully installed and implemented? Is that some of the challenge? And maybe expand a little bit for other people starting companies in the healthcare space: any advice from what you’ve learned across these three companies about what it takes to be successful?

 

Harjinder

Yeah, so healthcare is particularly challenging. In healthcare, for most digital health companies, sales cycles can be anywhere from 9 to 18 months long. That’s typically what you’ll hear quoted from a lot of veterans in this space. Fortunately, ours are generally not that long, and it really comes down to your sales strategy. We’re able to sell to very large health systems, which do have these lengthy sales cycles, but alongside that we’re also selling to small independent practices and specialty groups, and they make decisions much more rapidly. We’ve been able to turn around a lot of deals within the space of a couple of months, which is phenomenal in healthcare. As far as advice to other entrepreneurs, the biggest thing I generally think about, and recommend to others, is that when you’re doing a startup, you have to have two components: a big vision and a small vision. The big vision is: if we’re wildly successful, what can we do here that’s earth-shattering, that changes the game completely? You need that to motivate yourself; you need it as the guiding vision of where you can get to. The problem I see a lot of people get into is that they get hung up on that big vision and don’t actually see or map out the steps required to get there. So, I always like to couple what we do with both the big vision and the small vision. The small vision for us is: there’s a physician who wants to do documentation, and, forgetting about everything else, how do we make that physician happy? How do we save that physician two or three hours a day? Because if we can make that individual physician happy in his or her day-to-day work, we’re going to have the ability to do many more exciting things. Having both those components in mind is critical.

 

Tim

That’s great advice. Couple your big vision with an initial, specific problem that you’re going to solve for a customer, and that they’re going to pay you money for. I couldn’t say it better myself. Graham, on this sales cycle and go-to-market side of things, a big piece of what you’re leading for the company is scaling the go-to-market side of the business. Maybe give a little bit of a sense of where things are today and what you see as the keys to scaling to the next level and beyond, with an eye toward the principles you’re applying, which I think are broadly applicable whether you’re a healthcare IT company or another tech founder looking to scale.

 

Graham

What I’ve learned over the years is that you’ve got nothing if you don’t have happy customers. On go-to-market, that always sounds to me kind of back to front (you focus on your customers to be able to sell more), but it’s kind of obvious as well: focus intensely on delighting your customers with great service, great responsiveness, and a level of humility that shows you respect the complexity of the work they do and the problem you’re trying to solve. Once you have that, you’re in a position to work on the other pillars, which are figuring out how best to segment and target the market you want to go after and who’s the right fit for your product in the short term. You’ll take those that happen to find your product as well, but you focus your attention on building the right market awareness and the right outreach to get to the target market you want. For us, being small, it’s about getting the word out there and also managing channels, so channel partner relationships become very important. So, I’d say: start with the customer, make sure you’ve got the right market, then start to aggressively communicate to that market, and build channel partnerships to help you scale. Those are the three points we’re focused on.

 

Tim

Talk a little bit more about the channel partnership part. A fairly consistent misconception, or maybe mistake, we see is early-stage companies thinking they’re going to get partners and channels to help them early on, before they’ve built enough scale through direct selling, customer references, etc. On the other hand, particularly for our market, without figuring out how you work with the other people in the ecosystem and who might be partners, I think you’re really hamstringing yourself, too. So how do you balance that — the direct selling piece versus channel partners — for our market?

 

Graham

Yeah. The point you’re making is right on. Our sense is that we’re now at a level where we’ve proven our capability in the marketplace; we’ve grown, and we’re pretty well penetrated into a number of markets, or at least have really started to get a lot of traction. My sense is that we’re not looking for some mega company that will push and promote our software, services and capabilities. It’s more about finding the types of strategic alliances where it’s in everyone’s best interest. Where I would recommend people start is to pick one or two potential strategic partnerships that can help your channel, but no more than that, because you really need to stick to your knitting and make sure that you control the message, control your brand and control the service that you’re delivering. Much of what I mean by channels is mutually beneficial sales channel relationships and co-marketing arrangements. You still need to control your brand and your customer experience, and ideally you’re still controlling the majority of the sales cycle once you’ve identified the lead opportunity.

Our Investment In Lexion: Finding The “Data Needle” In A Haystack Of Documents

Today, Lexion announced a $4.2 million Series Seed funding round led by Madrona, and we couldn’t be more thrilled.

The business world is filled with and ruled by an ever-increasing mix of complex documents that need to be constantly managed – from customer and vendor contracts to insurance agreements to commercial real estate agreements. Exactly how and when “business gets done” is determined by the key details in these documents, and yet the process for tracking and acting upon these details is often highly manual, time consuming, and inconsistent. This is where the power of Artificial Intelligence (AI) and Natural Language Processing (NLP) comes in.

Enter Lexion, a new offering for underserved mid-market businesses that provides a powerful application, based on AI and NLP technology developed at Seattle’s Allen Institute for Artificial Intelligence (AI2), built from the ground-up to ingest and understand your corporate contracts. Whether it be a pricing term, a partnership revenue share agreement, a renewal date, or an extension clause, the details are tedious to find and track and yet necessary to understand and be able to act on in order to run a business well. Lexion is your powerful magnet for locating that “needle hidden in the haystack” of your paperwork, and it helps make you smarter, faster, and more action-oriented when running your business.

Lexion founders Emad Elwany, Gaurav Oberoi, and James Baird.

Gaurav Oberoi, Lexion founder and CEO, is a Seattle-based entrepreneur whom we have known and have wanted to work with for a long time. He is a tremendous founder and well-known in the tech community both for his successes in and his enthusiasm for the Seattle startup scene. He founded and bootstrapped the successful startups Billmonk (acquired by Obopay) and Precision Polling (acquired by SurveyMonkey) and originated the still-popular startup email list, STS (SeattleTechStartups), over a decade ago. Gaurav and his co-founders, Emad Elwany and James Baird, met at AI2, formed the idea for Lexion in discussions about the common customer problem of intelligent document management, and then built Lexion to solve it.

At Madrona, we love this customer- and problem-first approach to company creation and are excited to support the Lexion team from Day One. This investment also supports one of our core investment themes of ML-driven intelligent applications, in this case using advanced text mining and NLP techniques to extract structured data and insights from large corpuses of unstructured information to solve a vertical business problem, here specifically contract management. We think there will continue to be more opportunities in this form of entity extraction from large text sets, and this investment builds on the theme behind our other startup in this space, Lattice Data (acquired).

In talking to dozens of portfolio and other midmarket and growth companies, we heard this pain point and market opportunity reiterated over and over again. We were enthused as this company and technology came together to solve the problem, and we are very excited to also have forward-thinking law firm Wilson Sonsini Goodrich & Rosati, the premier legal advisor to technology, life sciences, and growth enterprises worldwide, invest a sizable amount and join the board. WSGR clearly sees its clients dealing with this same issue of contract management and understands deeply the differentiated approach Lexion is bringing to market. We are looking forward to working with David Wang at WSGR and the team there to help build Lexion’s success.

Lexion is also the latest company to spin out from AI2 and the third we have funded (previous early stage investments were Kitt.ai and Xnor.ai). AI2 has proven to be an incredible incubation ground for companies – and it’s great to see Seattle and our region nurturing AI for the broader good.

To learn more about intelligently and cost-effectively managing your contracts – visit https://lexion.ai/

Evolving the Application Platform from Software to Dataware

Every decade, a set of major forces work together to change the way we think about “applications.” Until now, those changes were principally evolutions of software programming, networked communications and user interactions.

In the mid-1990s, Bill Gates’ famous “The Internet Tidal Wave” letter highlighted the rise of the internet, browser-based applications and portable computing.

By 2006, smart, touch devices, Software-as-a-Service (SaaS) and the earliest days of cloud computing were emerging. Today, data and machine learning/artificial intelligence are combining with software and cloud infrastructure to become a new platform.

Microsoft CEO Satya Nadella recently described this new platform as “a third ‘run time’ — the next platform…one that doesn’t just manage information but also learns from information and interacts with the physical world.”

I think of this as an evolution from software to dataware as applications transform from predictable programs to data-trained systems that continuously learn and make predictions that become more effective over time. Three forces — application intelligence, microservices/serverless architectures and natural user interfaces — will dominate how we interact with and benefit from intelligent applications over the next decade.

In the mid-1990s, the rise of internet applications offered countless new services to consumers, including search, news and e-commerce. Businesses and individuals had a new way to broadcast or market themselves to others via websites. Application servers from BEA, IBM, Sun and others provided the foundation for internet-based applications, and browsers connected users with apps and content. As consumer hardware shifted from desktop PCs to portable laptops, and infrastructure became increasingly networked, the fundamental architectures of applications were re-thought.

By 2006, a new wave of core forces shaped the definition of applications. Software was moving from client-server to Software-as-a-Service. Companies like Salesforce.com and NetSuite led the way, with others like Concur transforming into SaaS leaders. In addition, hardware started to become software services in the form of Infrastructure-as-a-Service with the launch of Amazon Web Services S3 (Simple Storage Service) and then EC2 (Elastic Compute Cloud).

Smart, mobile devices began to emerge, and applications for these devices quickly followed. Apple entered the market with the iPhone in 2007, and a year later introduced the App Store. In addition, Google launched the Android ecosystem that year. Applications were purpose-built to run on these smart devices, and legacy applications were re-purposed to work in a mobile context.

As devices, including iPads, Kindles, Surfaces and others proliferated, application user interfaces became increasingly complex. Soon developers were creating applications that responsively adjusted to the type of device and use case they were supporting. Another major change of this past decade was the transition from typing and clicking, which had dominated the PC and Blackberry era, to touch as a dominant interface for humans and applications.

Software is programmed and predictable, while the new dataware is trained and predictive.

Matt McIlwain

In 2016, we are on the cusp of a totally new era in how applications are built, managed and accessed by users. The most important aspect of this evolution is how applications are being redefined from “software programs” to “dataware learners.”

For decades, software has been ­programmed and designed to run in predictable ways. Over the next decade, dataware will be created through training a computer system with data that enables the system to continuously learn and make predictions based on new data/metadata, engineered features and algorithm-powered data models.

In short, software is programmed and predictable, while the new dataware is trained and predictive. We benefit from dataware all the time today in modern search, consumer services like Netflix and Spotify and fraud protection for our credit cards. But soon, every application will be an intelligent application.

Three major, interrelated forces underlie the shift from software to dataware, and they necessitate a new “platform” for application development and operations.

Application intelligence

Intelligent applications are the end product of this evolution. They leverage data, algorithms and ongoing learning to anticipate and improve their interactions with the people and machines around them.

They combine three layers: innovative data and metadata stores, data intelligence systems (enabled by machine learning/AI) and the predictive intelligence that is expressed at an “application” layer. In addition, these layers are connected by a continual feedback loop that collects data at the points of interaction between machines and/or humans to continually improve the quality of the intelligent applications.
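To make the three layers and the feedback loop concrete, here is a minimal sketch in Python. It assumes a toy in-memory data store and a scikit-learn incremental learner; the function names and the two-class setup are illustrative, not any particular product’s API.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

data_store = []            # layer 1: data/metadata store (a toy in-memory list)
model = SGDClassifier()    # layer 2: data intelligence system (the "learner")

def record_feedback(features, outcome):
    """Feedback loop: capture each interaction, then learn from it incrementally."""
    data_store.append((features, outcome))
    model.partial_fit(np.asarray([features]), np.asarray([outcome]), classes=[0, 1])

def predict(features):
    """Layer 3: predictive intelligence exposed at the application layer."""
    return model.predict(np.asarray([features]))[0]

# Every interaction feeds the loop, so the next prediction gets a little better:
record_feedback([5.0, 1.2], 1)
record_feedback([0.3, 4.7], 0)
print(predict([4.8, 1.0]))
```

The point of the sketch is the shape, not the model: each interaction lands in the data store and immediately updates the learner, which is what makes the application improve with use.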

Microservices and serverless functions

Monolithic applications, even SaaS applications, are being deconstructed into components that are elastic building blocks for “macro-services.” Microservice building blocks can be simple or multi-dimensional, and they are expressed through Application Programming Interfaces (APIs). These APIs often communicate machine-to-machine, such as Twilio for communication or Microsoft’s Active Directory Service for identity. They also enable traditional applications to more easily “talk” or interact with new applications.
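As a concrete illustration of such a machine-to-machine API, here is a hedged sketch assuming Twilio’s Python helper library: one call to a hosted microservice delivers an entire communications building block. The credentials and phone numbers are placeholders.

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")   # placeholder credentials
message = client.messages.create(
    to="+15551234567",       # illustrative destination number
    from_="+15557654321",    # illustrative provisioned sender number
    body="Your appointment is confirmed.",
)
print(message.sid)           # the service returns an identifier for the sent message
```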

And, in the form of “bots,” these microservices perform specific functions, like calling a car service or ordering a pizza via an underlying communication platform. A closely related and profound infrastructure trend is the emergence of event-driven, “serverless” application architectures. Serverless functions such as Amazon’s Lambda service or Google Cloud Functions leverage cloud infrastructure and containerized systems such as Docker.

At one level, these “serverless functions” are a form of microservice. But, they are separate, as they rely on data-driven events to trigger a “state-less” function to perform a specific task. These functions can even call intelligent applications or bots as part of a functional flow. These tasks can be connected and scaled to form real-time, intelligent applications and be delivered in a personalized way to end-users. Microservices, in their varying forms, will dominate how applications are built and “served” over the next decade.
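For a sense of what such an event-driven, state-less function looks like, here is a minimal sketch modeled on AWS Lambda’s Python runtime and the shape of an S3 object-created notification; the processing step is an illustrative placeholder.

```python
import json

def handler(event, context):
    # The platform invokes this entry point with the triggering event's payload.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # One specific task per event, then exit; no server to provision or manage.
        print(f"New object '{key}' in bucket '{bucket}'; kick off processing here.")
    return {"statusCode": 200, "body": json.dumps("processed")}
```

Because the function holds no state between invocations, the platform can run as many copies in parallel as the event stream demands, which is what lets these tasks be connected and scaled into real-time applications.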

Natural user interface

If touch was the last major evolution in interfaces, voice, vision and virtual interaction using a mix of our natural senses will be the major interfaces of the next decade. Voice is finally exploding with platforms like Alexa, Cortana and Siri. Amazon Alexa already has more than 1,000 voice-activated skills on its platform. And, as virtual and augmented reality continue to progress, voice and visual interfaces (looking at an object to direct an action) will dominate how people interact with applications.
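For a sense of what a voice “skill” involves, here is a minimal sketch of handling a request in the JSON shape an Alexa custom skill receives and returns; the intent name (OpenDocumentIntent) and the replies are hypothetical.

```python
def handle_voice_request(request_body: dict) -> dict:
    """Map a spoken intent to a reply, in the JSON shape Alexa custom skills use."""
    req = request_body.get("request", {})
    if (req.get("type") == "IntentRequest"
            and req.get("intent", {}).get("name") == "OpenDocumentIntent"):
        speech = "Opening your document now."   # hypothetical intent and reply
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

An intent like this one is how the voice-driven document editing described below would be wired up.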

Microsoft HoloLens and Samsung Gear are early examples of devices using visual interfaces. Even touch will evolve in both the physical sense through “chatbots” and the virtual sense, as we use hand controllers like those that come with a Valve/HTC Vive to interact with both our physical and virtual worlds. And especially in virtual environments, using a voice-activated service like Alexa to open and edit a document will feel natural.

What are the high-level implications of the evolution to intelligent applications powered by a dataware platform?

SaaS is not enough. The past 10 years in commercial software have been dominated by a shift to cloud-based, always-on SaaS applications. But, these applications are built in a monolithic (not microservices) manner and are generally programmed, versus trained. New commercial applications will emerge that will incorporate the intelligent applications framework, and usually be built on a microservices platform. Even those now “legacy” SaaS applications will try to modernize by building in data intelligence and microservices components.

Data access and usage rights are required. Intelligent applications are powered by data, metadata and intelligent data models (“learners”). Without access to the data and the right to use it to train models, dataware will not be possible. The best sources of data will be proprietary and differentiated. Companies that curate such data sources and build frequently used, intelligent applications will create a virtuous cycle and a sustainable competitive advantage. There will also be a lot of work and opportunity ahead in creating systems to ingest, clean, normalize and create intelligent data learners leveraging machine learning techniques.

New form factors will emerge. Natural user interfaces leveraging speech and vision are just beginning to influence new form factors like Amazon Echo, Microsoft HoloLens and Valve/HTC Vive. These multi-sense and machine-learning-powered form factors will continue to evolve over the next several years. Interestingly, the three mentioned above emerged from a mix of Seattle-based companies with roots in software, e-commerce and gaming!

The three major trends outlined here will help turn software applications into dataware learners over the next decade, and will shape the future of how man and machine interact. Intelligent applications will be data-driven, highly componentized, accessed via almost all of our senses and delivered in real time.

These applications and the devices used to interact with them, which may seem improbable to some today, will feel natural and inevitable to all by 2026 — if not sooner. Entrepreneurs and companies looking to build valuable services and software today need to keep these rapidly emerging trends in mind.

I remember debating with our portfolio companies in 2006 and 2007 whether or not to build products as SaaS and mobile-first on a cloud infrastructure. That ship has sailed. Today we encourage them to build applications powered by machine learning, microservices and voice/visual inputs.

This post was originally published by TechCrunch