RunwayML Co-Founder Cristobal Valenzuela on the Intersection of Art and Technology


In this episode of Founded and Funded, Madrona is launching a special series to highlight some of its IA40 winners, starting with RunwayML, which offers web-based video editing tools that utilize machine learning to automate what used to take video editors hours, if not days, to accomplish. Madrona Investor Ishani Ummat speaks with Co-founder and CEO Cristobal Valenzuela about where the idea came from, how he decided to launch a company instead of joining Adobe, and even how TikTok fits into all of this. Listen now to hear all about it.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, and this week we are launching a special series to spotlight some of last year’s IA40 winners. Today, Madrona investor Ishani Ummat is talking to Cristobal Valenzuela about the web-based video editing tool RunwayML. It all started as a research project inside NYU using algorithms to stylize and colorize images in Photoshop, but Cristobal now sees Runway as an opportunity not simply to improve how things have commonly been done, but rather to leapfrog an entire industry. And the company secured a $35 million Series B in December to work toward that goal. With that, I’m going to hand it over to Ishani and Cristobal to dive into it.

Ishani: Hi everyone. My name is Ishani, and I’m delighted to be here today with Cristobal Valenzuela, the CEO of RunwayML. RunwayML is building a web-based, real-time video editing tool powered by machine learning, and last year RunwayML was selected as a top 40 intelligent application by over 50 judges across 40 venture capital firms. We define intelligent applications as the next generation of applications that harness the power of machine intelligence to create a continuously improving experience for the end user and solve a business problem better than ever before. Runway is a story I love: re-imagining creativity with machine learning. And I can’t think of a more interesting conversation to kick off our IA40 spotlight.

Cris, thank you for joining us today.

Cris: Thank you for the invitation. I’m super happy to be here.

Ishani: I’d love to start off with your thesis project at NYU, actually. That’s sort of the basis for this company. Take us back to that time. What led you to this idea? Why did you start working on it? And did you know you wanted to start a company?

Cris: So, the short story about Runway is: I’m from Chile, and I moved to New York five years ago. The reason I moved was that, at the time, I was just fascinated with what was coming out of the computer vision world. I come from an econ background and had no experience building deep learning models before, but the things I was seeing five or six years ago, specifically around computer vision and generative models, just blew my mind. It blew my mind so much that I decided to move and study this full time at NYU.

So at NYU, I basically spent two years doing a deep dive into how to take what was happening in computer vision, specifically after ImageNet and AlexNet, when a bunch of really impactful milestones started to emerge, and apply it inside creative and art domains. The reason was, I think we’re just touching the surface of what it would actually mean to deploy algorithms inside the creative practice. I wanted to explore this because I knew something was happening, I knew something was about to happen, but no one was doing it yet.

So why not just do it yourself? No, I didn’t know if I wanted to start a company, but by the time I was building the thesis, it took a more organic direction: I realized my research was way more impactful than I had originally thought. When you’re doing research in an academic setting, you’re always constrained, and the bubble is always perfect; you have all the perfect conditions. But when I started applying some of the things I was doing inside school to the outside world, I immediately realized that industry experts, VFX people, filmmakers, creators, artists, and designers were saying, “Hey, I’m interested in this. I want to use it.” And that sparked the conversation of, “Oh, maybe we should think about this as a company.” And then, yeah, it started from there.

Ishani: Was there an aha moment in that journey, as you’re talking to people and they say, “Oh yeah, interesting research, but I don’t actually know how to apply it”? Was there one moment you can take us back to where you said, “Oh, wow. This is actually significantly bigger, and it’s a company, not just a project”?

Cris: I mean, the first research projects we started in school were more about taking image segmentation, image understanding, and video understanding models and applying them within creative domains. How do you take someone who’s working in Photoshop and help the software be a bit smarter about understanding what the person is actually trying to do, what the intent of editing an image is, and see if you can have an algorithm or a system that assists you in that editing? So, we built a bunch of experiments and integrations in Photoshop and Premiere. And the ideas were very simple. Like, let’s see, for instance, if I can help you stylize or colorize or edit an image faster by using some very simple algorithms. And again, it was more about seeing whether this was interesting for these creators. When I realized there was definitely something here was when I saw the reaction to a few tweets along the lines of, “Here’s a prototype, anyone interested in trying this?” I remember the amount of inbound interest I got from professional photographers, people working in film, people working in ad agencies, very organically saying, “Hey, I’ve been struggling with this for years. Can you help me cut something that took me weeks of work down to 10 minutes? I want to learn more.” That’s when we thought, okay, something is definitely happening within creative domains, and we should go deeper.

I guess there was one moment in particular when I really thought I should try to do it myself. When I was presenting Runway as my thesis at NYU, someone from Adobe was on the panel. And two weeks after my presentation, they offered me a job at Adobe, to build all the things we were building at Runway as part of their new AI team. I was two years into New York as an immigrant, with the perfect dream company offering the dream job, with a visa and the perfect salary. It was just the dream. When I thought about it at the time, I remember my mom was visiting me, and she asked, “What else do you want? It’s perfect. Everything makes sense, rationally. Why would you not take it? Everything you want is there.” But I couldn’t say yes. My gut, my intuition, said I can’t do it. If I’m going to build this thing, I need to do it myself, and I want control over how it’s built. So the decision was between taking the offer, the safe solution, versus really trying to build it on my own. Even if I fail, I fail, but at least I tried.

So for me, that was the moment where it was: either I go build inside a company, or I try to build on my own, even though I hadn’t raised any capital, and see if I could sustain living in New York with no money for a couple of months until I figured it out. That motivation of trying to prove I had made the right decision in not taking the offer, in not going to Adobe, is what drove us to do it.

Ishani: That’s an incredible story. Can you talk a little bit about the technology that underpins Runway? You know, many of the models you reference and leverage weren’t even around five to seven years ago. We’ve all spent time editing, whether it’s home videos or Final Cut Pro and everything in between. Getting that mug out of the background, or even just being able to remove the background from an image, was such a huge feature in Microsoft PowerPoint for everyone out there who makes slides on a daily basis, and translating that to video seems an order of magnitude more difficult. Tell us a little bit about the step change in technology that really enabled the core product of Runway to exist.

Cris: Totally. I think there are a bunch of megatrends on which Runway sits today. We’ve seen new video content platforms emerge over the last couple of years, and so the need to create more video has become more obvious for creators, for ad agencies, but also for companies in general. Every company is becoming some sort of media company. They’re creating content all the time. Everyone’s producing their own podcast, their own YouTube shows. But the software for creating content, as it has evolved over the last 10 to 20 years, is, I would say, still based on an old paradigm of how media works. Like, if you open Premiere, if you open Final Cut, that is software made to make ads for TV. And so the limitations, the constraints, and the configurations are all set up for 10 years ago, right? But if you speak with anyone creating content today for YouTube, for TikTok, for Instagram, the volume, the quantity, and the type of content are very different. So that’s the first megatrend: how do you think about new tools for the next generation of creators? Within that, where ML really comes in, and where Runway’s differentiator is, we see a few things happening. The first is the emergence of the web as a creative medium. I think Figma and Canva have proven this.

The web is such a collaborative space that you need to be able to build things on the web if you want to collaborate with more people, if you want to move really fast, and if you want to be free of the limitations of hardware and desktop. I guess, to your question about ML in particular: we built Runway so that the video platform, the video rendering, and the video encoding are entirely ML driven. By that, we mean that every single process in the media pipeline that is tedious, time-consuming, or very expensive to do, we can automate via a pipeline of algorithms. Things like removing an object from a background, as you were saying, have historically been a very tedious process in video making. It’s a process known as rotoscoping, and it’s been part of film and video almost as long as video has existed. Yet, it’s extremely expensive. So, we thought about it: if that’s a primitive of video-making, how do you make it accessible, extremely fast, and on the web, so the way you do it is not a manual, tedious process but automatic, as fast as possible? So, we’ve built Runway by taking those principles of what folks really want in video, simplifying to the core components, and using these human-in-the-loop algorithms to help you make video faster and better. And there are a lot of other components of video that we’re automating as well, which basically drive that motion forward: create more video as fast as possible.
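To make the rotoscoping idea concrete, here is a minimal sketch of automated per-frame masking using an off-the-shelf pretrained segmentation model (torchvision's Mask R-CNN). This is not Runway's actual pipeline, just an illustration of how a task that once meant hand-tracing every frame can be automated with a generic model; the class choice and threshold are assumptions for the example.

```python
# Hedged sketch: per-frame "automated rotoscoping" with an off-the-shelf
# Mask R-CNN from torchvision. Illustrative only -- not Runway's system.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON = 1  # COCO class id for "person" (example choice)

@torch.no_grad()
def person_matte(frame_rgb, score_thresh=0.7):
    """frame_rgb: HxWx3 uint8 array (one video frame) -> HxW soft matte."""
    x = to_tensor(frame_rgb)                  # float tensor in [0, 1], CxHxW
    (out,) = model([x])                       # single-image batch
    keep = (out["labels"] == PERSON) & (out["scores"] > score_thresh)
    if not keep.any():
        return torch.zeros(x.shape[1:])       # nothing detected: empty matte
    # Union of per-instance soft masks -> one foreground matte
    return out["masks"][keep, 0].max(dim=0).values
```

Run over every decoded frame, this replaces hours of manual masking; the human-in-the-loop part comes in when the user corrects frames where the model is wrong.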

Ishani: I love that you frame the company as being built off megatrends but then focus on the specific use cases. There’s a broad range of use cases that I hear you talk about, whether it’s an individual creator or, you know, a professional photographer, so it seems quite widely applicable. When you think about the research work you’re doing and about making machine learning more accessible to that whole range of end users, how do you actually go about picking and choosing the machine learning models that drive it?

Cris: I would say that, going back five, six, seven years, a lot of computer vision and ML models started to become more relevant and commonplace. A bunch of things were also built around that time, like the infrastructure to deploy models. We’ve seen the emergence of the MLOps community in general: tools and systems that monitor your training process, tools to deploy models to production, tools to optimize models for different devices. A lot has happened to help drive these models into production, and we’ve seen that in robotics and self-driving cars. Those algorithms are becoming more predominant than ever before, basically because we, as a community of ML folks and ML companies, have invested in that infrastructure. So, for us, the realization is that we don’t have to build that infrastructure ourselves. You can take off-the-shelf solutions to help you deploy models into production environments, with millions of users, in real time. The core thing, I would say, is not to spend too much time on that infrastructure, given that it’s already been built, but to ask: what’s the unique problem you’re trying to solve here? If you think about it that way, there are two ways to take that approach. One is just looking at open source.

The ML community in general has been built a lot on top of open source, so there are a lot of really interesting ideas. You can borrow them, you can build on top of them, and you can contribute as well. We do it a lot. We publish. But when it comes to production, getting things to the level of perfection your customers really want is a whole other beast. That requires a different mindset. For instance, going back to the rotoscoping example: video segmentation is a task that has been approached in very different ways on the research side. But when you speak with someone making video, whether a professional VFX artist and filmmaker or a casual creator, the way they think about it is completely different. At the end of the day, as a creator, you don’t really care what model runs behind the scenes. I think a lot of people overemphasize showing you how the algorithm works and demonstrating its capabilities. But if you just focus on the customer, people really just want to remove objects from their backgrounds. With that in mind, a lot of it comes down to automation: how do you build a robust segmentation model? How do you build it so it works really well under all of these constraints, but at the same time involves the user’s input in the process? So half of it is ML research, and the other half is just user research. How are you doing this today? How are you actually doing a background removal process? Some people might use Photoshop or other very complicated tools. Other people might use some sort of automation by building their own tools, and you’re trying to really understand what that actually means, so you build a solution that, specifically within creative domains, is never fully automated.
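The "never fully automated" point can be shown in a few lines. Here is a hedged sketch of the human-in-the-loop pattern: the model proposes a soft mask, and the user's corrective strokes override it where it is wrong. The function and argument names are illustrative, not Runway's API.

```python
# Human-in-the-loop mask refinement, sketched with NumPy. Illustrative only.
import numpy as np

def refine_mask(model_mask: np.ndarray,
                fg_strokes: np.ndarray,
                bg_strokes: np.ndarray) -> np.ndarray:
    """model_mask: HxW floats in [0, 1] from the segmentation model.
    fg_strokes / bg_strokes: HxW bools where the user painted
    "definitely foreground" / "definitely background"."""
    refined = model_mask.copy()
    refined[fg_strokes] = 1.0   # user override: force foreground
    refined[bg_strokes] = 0.0   # user override: force background
    return refined
```

In a production tool, the corrections would presumably also be fed back to the model (for example, as extra input channels) so its predictions on neighboring frames improve, which is the "loop" in human-in-the-loop.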

Cris: I’m a big believer that you’re never going to find a tool in the creative space that does everything for you. That’s just a dream. That’s a utopia. Nothing in the creative world works like that. Every solution that says “just input here and do nothing, because the machine will do it for you” is a complete mistake and totally would not work. So, for us, it’s more like: you have a problem, you have an insight, you need something done. Here’s a system, built on our research, that helps you, but we also understand what you require, how you work with the device, and how you work within that human-in-the-loop process.

Ishani: And, you know, you could argue that if a machine were doing all of it, it isn’t really creative inherently. Do you lose that intangible aspect of creativity? So much to unpack here. You talked about the infrastructure layer. We call those enablers in intelligent applications: there’s this whole system of the Databricks of the world, the DataRobots, and all these other companies out there, Grafana, Monte Carlo, that sit at the enabler level and create the ability for folks like you at RunwayML to build endpoint applications much faster and better than before. Some of that’s in the open-source community, as you say, and some of it is company-based, but it removes the infrastructure layer from every intelligent application that has to be built. And being able to capitalize on that, I think, has made a huge impact on endpoint applications like Runway. And then you think about bringing that to the product. So much of what you talk about is around accessibility. You know, new technology adoption is so much about how accessible that technology has become. In the academic sense, machine learning models, development, rendering, all these technical terms don’t feel very accessible to creators, particularly the demographic you’re targeting. But built into a low-code/no-code video editing tool, it really does become accessible.

So, the classic question is browser versus application, and you talked a little bit about why you’re in the browser and how it’s become so much more of a collaborative and creative space. What other decisions have you made along the way to make the Runway experience, specifically getting machine learning into the hands of creators at a product level, more accessible for new users, borrowing from things like the workflow of Final Cut Pro or some of the other tools out there? Tell us about those decisions you’ve made along the way.

Cris: There are a lot of things that come into this conversation. The first one is that, as a company, we’re always thinking in terms of build versus buy. If we want to build and deploy models to millions of users, we don’t have to build a whole backend infrastructure or own the instances. You just plug into the infrastructure that has already been built, and that’s so good because you can focus on the key differentiators of your company. What are the things that are unique to your product that will help your customers do more?

So, what our customers want is just to create more video, faster. For that, we take existing primitives from the video space. We stay really close to professional software and to people who have worked in the industry for years, to understand what they’re actually trying to do in their workflows and how something like an automated system could help them, but also to open the doors so other folks who would never have been able to do that thing before can do it as well. When you think about that, you think: okay, we need to build on top of the infrastructure, and we need to allow the new generation of creators to tap into what making video is. The web becomes such an important aspect of that, mostly because it democratizes access to complicated, sophisticated tools like professional video in a way I don’t think we’ve seen before.

There are a few things that are really important. The first one is that the need for hardware gets reduced to zero. A lot of our users are on Chromebooks, on Windows laptops, on iPads. It’s really hard to edit video on any of those devices if you don’t have a powerful GPU machine. That’s not a limitation if you have that compute capacity, but if you’re a small shop, a small business, or a small ad agency, or even a big ad agency, you still have that hardware limitation. The web just completely reduces it to zero. Basically, you’re connected to our cloud, you have that endpoint, and since we already have a GPU cluster running the models, you’re able to access not just one GPU machine but many. So if you want to export hundreds of versions of your video, that’s possible. And the second really important aspect of the web, and why we decided to build on the web, again building on the accessibility point, is collaboration.

When you think about video creation today, you can think about the people editing the video: video creators themselves, video editors. But video encompasses more than just the people doing the actual editing. It involves the managers. It involves the viewers and the designers. If you’re building a brand, and you have design assets and files, and someone is building a video, how you share those assets with that person or with that team really matters. So video becomes a central hub of collaboration as well, and the web facilitates that at a rate that’s impossible in any other kind of environment. So, for us, it’s about considering those aspects as well when deciding how and when to build a platform, and investing in the web has been a long-term goal. A lot of the things we’re doing right now in the video space on the web haven’t been done before, so we’re working really closely with the Chrome team at Google on some of the new standards they’re developing, to make sure editing 4K footage with 10 layers at the same time feels as native as possible. And I think Figma has already proven this for vector UI design: you can run things natively, or even better than native, on the web. Now we’re starting to see this in video as well, which is a bit more complex in terms of latency and interactions, but we’re definitely getting there.

Ishani: That’s awesome. You talk about cloud computing as a big enabler again, and this collaboration concept. Multiplayer on the web is the next generation of collaboration, and you’re right: Figma, Coda, Notion, and Canva have made collaboration and multiplayer inherent, and I think a lot of the applications that don’t have that multiplayer component are proving to be much more difficult to use, especially within teams and within the remote and hybrid kind of world we’re entering. Figma and Canva, which you mentioned, really, to me, started to pave the way to this multiplayer concept, web-based, but also to this concept of low-code/no-code, setting the precedent for using machine learning and technology in a much more accessible way for a non-technical user.

Do you think of that as one of the big trends that enabled and paved the way for you and Runway?

Cris: So, when we think about no-code on the ML side of things, we actually think a lot about how to take these models, very complex pieces of software with hundreds of thousands of connections and systems needed to make them work really well and robustly, and turn them into consumable, easy-to-digest, simple solutions at the interface level. Make sure you build interfaces that are programmable, accessible, and customizable. In a way, the model becomes a commodity: it’s a system you build, it’s proprietary, you develop it, but your customers are less concerned with the internal aspects of how it works and more concerned with the output, right? When I think about Webflow, for instance, or about web design in general, Squarespace and those kinds of companies building democratizing, no-code solutions for websites, you really care about your customers just building really good websites, right? How the CSS and the JavaScript endpoints work on the backend isn’t really useful to them unless it’s helping them solve a business use case. So you don’t expose those kinds of things.

Ishani: That’s great, framing it as exposure. I hadn’t quite thought of it that way before, but it does make sense. You’re masking the code, and you can expose components of it where it matters, where it’s a variable people want to influence, but where it’s not, you can mask it. And you learn a lot of this through user testing. Tell us a little bit about the process for that user testing. So much of what you’re talking about is really driven by your end user, and it seems like you’re really in touch with who that is and have learned a lot from them. What does that process look like? It’s so important as you iterate on early product and early builds. And when you launch a new feature, in your case Green Screen, for example, what’s the process you go through for user iteration and feedback?

Cris: I love that question. I think a few things are important. The first one is that, a lot of times, your users don’t actually know what they want.

Ishani: They just know they have a problem, but they don’t know how to solve it.

Cris: Exactly. So, if you ask them what they want, the answer will not necessarily be the best solution; it reflects the realm of knowledge they have today. In a way, no one was ever asking for an automated rotoscoping solution because no one thought it was possible. When you start developing technologies, or delving into things that haven’t been done before, it’s really hard to make comparisons to how it’s been done before. No one has done it before, so it’s really hard to have a benchmark.

And so, when you ask people what’s painful for them in the video space, a lot of people will tell you, “Hey, rotoscoping, extremely painful.” So, what do you want? “Well, I want a better brush so I can do my mask five times faster.” And so I could say, great, I’ve listened to you, I’ve built this thing, now you’re working faster. Do you like it? “It’s great. I like it.” But the moment you say, “Hey, I can actually automate the whole thing for you. Just literally type a word,” everything changes. And this is real: we have this in beta, and we’re going to release it really soon. Let’s say you have a shot of a car and a tree. You can type “car.” We have a model that understands the objects in that video, recognizes the car, creates the mask for you, and extracts it immediately. So you’re not editing with frames anymore, you’re editing with words, right?
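To make the "type a word, get a mask" idea concrete, here is a hedged sketch using CLIPSeg, an openly available text-prompted segmentation model on Hugging Face. This is presumably not the model Runway ships; it simply illustrates text-driven masking on a single extracted frame, with the file name and prompt as example values.

```python
# Text-prompted masking on one video frame, sketched with the open CLIPSeg
# model. Illustrative only -- not Runway's actual system.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

frame = Image.open("frame_0001.png").convert("RGB")   # one extracted frame
inputs = processor(text=["car"], images=[frame], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits    # low-resolution mask logits

matte = torch.sigmoid(logits)          # soft matte in [0, 1]
# Upsample the matte back to the frame's resolution before compositing, and
# repeat per frame (or propagate the mask temporally) to matte a whole clip.
```

Typing a different word ("tree", "sky") swaps the target without any manual brushing, which is the editing-with-words shift Cris describes.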

It’s really hard for a customer to tell you, “Hey, I want this thing,” before it exists. But the moment you show it to them, they’re like, “Oh, it’s insane. I want this. It’s not only helping me move twice as fast, it’s helping me move a hundred times faster.” So, a lot of user research is listening to your customers and your users, but really trying to hear their pain. Okay, what are you actually trying to say when you say these things? Is the tool itself the problem, or is it more that the process is broken? If the process is broken, and you as a product person know the technology, the skills of your team, and what’s possible today, how do you build quick prototypes and solutions that help you figure out whether that’s actually something worth investing in and building?

Cris: So, we do a lot of that. We listen a lot. We work to understand our customers, including people who have never used Runway before. We interview them a lot, and we try to distill what the fundamental things happening here are, and how we would build for them with the set of technologies we’ve been developing over the last couple of years.

Ishani: Right, and from the end-user standpoint, it’s just not in the realm of possibility to augment their workflow that much with automation; maybe incremental baby steps. But as you say, the 100x just doesn’t fall within the imagination of someone using a video tool, to take it all the way to, for example, text-based video editing. That’s the realm of researchers at OpenAI doing GPT-3 and DALL-E work and all those image processing things. So it’s about really distilling down a pain point, and then using your imagination to come up with a solution.

Cris: And that takes a lot of prototyping as well: coming up with ideas and testing them with your customers as quickly as possible before building really robust, technically complex solutions. So, to your point on the generative side, something we’ve been spending a lot of time on is generative models, diffusion models, transformers, applied to computer vision. The thesis there is that we’re probably going to start seeing more video content being entirely generated. Think about stock footage or stock video, right? It used to be the case that you had to either shoot something or buy that footage from a platform like Getty Images, and that’s a really expensive process, both because the asset itself is super expensive to buy and because the asset might never actually be the perfect asset you want. There are things you want to change: the color isn’t right, or I want that person but in a different position. It’s so complicated. And so, we’re approaching the point where you’re actually going to be able to generate those things, generate that stock footage, that footage in general. So, when you ask people how they want to create or work with assets, with templates, with custom content, they might ask you, “Hey, I want a better search for my stock footage library.” But the moment you have DALL-E or other models that are able to generate realistic content, the conversation completely changes. You’re not marginally improving a process. You’re leapfrogging a whole industry. You’re like, okay, this was the way people used to operate.

Now, this technology is enabling you to think in a completely different way. The questions you ask yourself are so different. Having that in mind is something we’ve always done, and we’re betting on it for the long term as well.

Ishani: Incredible. Yeah. That leap from video editing to transformer-model-augmented video editing is massive, right? Transformative from a technology perspective, but massive from the user’s perspective, too. How do I make that leap? It requires the technology and the examples to say, oh, I can use transformer models in this process. We could talk about transformer models forever. Maybe take us to the moment where that started to make sense for you as a business.

Cris: I think the moment we started seeing this as an interesting research technique was the moment people understood that you can apply it not just to tokens but to pixels themselves. We use some of these techniques in our models behind the scenes, but in general, I’m less of a fan of any specific technique, because techniques tend to move really fast and something else will come along. So when you see those trends coming up, see how they can apply to your product or your needs, but at the same time, don’t fixate too much on a specific technique, because a new one might come along that’s better. The ability to switch and learn from what’s better will always pay off. Whereas, if you’ve spent too much time developing something and then a new approach comes and you’re unable to adjust, it’s going to be hard. I mean, the ML space is moving so fast that something published just four months ago has already been superseded, so keeping track of that is, I think, the most impactful thing. On the research side of Runway, we take a lot of different approaches, from transformers to more generative work to traditional computer vision, always with the aim of: how do we help you make video faster?
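The "not just tokens, but pixels" idea is the core trick behind vision transformers: chop the image into patches and treat each patch as a token. Here is a tiny, standard sketch of that step (ViT-style patch embedding); the sizes are conventional defaults, not anything specific to Runway.

```python
# Turning pixels into transformer tokens: ViT-style patch embedding.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch=16, in_ch=3, dim=768):
        super().__init__()
        # A strided conv is the standard trick: each 16x16 patch -> one token.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.proj(x)                     # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768]) -- a sequence for a transformer
```

Once pixels are a token sequence, the same attention machinery built for language applies to images, and, by stacking frames, to video.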

Ishani: That’s a really great insight: staying nimble across a rapidly evolving technology field. Even if you just zoom in on transformer models, on how large these models have become and how many parameters they have, even over the course of the last 12 months, the chart is absurd, right? And to your point about building a business on top of some of these platform technologies, or what will evolve to be platform technologies, being nimble across the methodology is so key.

Cris: One hundred percent. Because at the same time, to your point, those models themselves are great research insights, but try to productionize a model that has 2 billion parameters for a million users. You either have a budget of a million dollars a second, or it’s impossible to do, right? So it’s great: fundamentally, it’s moving the field in such an interesting way, and there are new techniques. But again, if you’re thinking about how to put it into a product, that’s a whole different conversation. So always trying to balance those things is really important for us.
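A rough back-of-envelope shows why serving a frontier-scale model on every video frame is so costly. The numbers below are illustrative assumptions (fp16 weights, roughly 2 FLOPs per parameter per inference, 30 fps, A100-class peak throughput), not Runway's actual economics.

```python
# Illustrative cost sketch: a 2B-parameter model applied to every frame.
params = 2e9
weight_gb = params * 2 / 1e9              # fp16 weights: ~4 GB before activations

flops_per_frame = 2 * params              # rough lower bound per inference
fps, users = 30, 1e6
demand = flops_per_frame * fps * users    # ~1.2e17 FLOP/s of sustained demand

a100_peak = 312e12                        # A100 fp16 peak FLOP/s (optimistic)
print(f"~{weight_gb:.0f} GB weights, ~{demand / a100_peak:.0f} A100s at peak")
# -> hundreds of GPUs at 100% utilization, before batching or memory overheads
```

Even with generous assumptions, the GPU bill alone explains the gap Cris describes between a great research model and a shippable product.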

Ishani: So how do you straddle the business side of Runway and the research side of Runway?

Cris: We don’t see them as different worlds. It’s all part of the same thing. Research at Runway is applied research toward product. As a researcher at Runway, you work really closely with the design team, the engineering team, and everyone else to figure out whether there’s something we can do. There’s a cost, a literal cost in compute, that needs to be considered when you’re developing something. There’s the feasibility question: is this something we can actually build in a reasonable amount of time? There are performance trade-offs. All of these things are on your mind when you’re thinking about applying research to a product.

Perhaps if you’re in a more formal academic context and you’re just doing research, you’re not constrained by those things. I mean, when OpenAI was building GPT-3, they were not thinking about deploying it for video domains with millions of users. They were thinking: this is an idea, let’s see if it works. Then people start building on top of that, and there’s a lot of pruning and ideas that can make it more efficient and faster. But if you look at OpenAI’s pricing model for GPT-3 today, it is still very expensive to use, and that’s a language model; video is way more expensive. So we’re less concerned with pushing the field so far that it opens all these doors, and more pragmatic: research is the product. It’s the same thing. Just make sure it works inside our environment, where users can actually get value out of it. That’s how we want to think about it.

Ishani: I love that: research as product. Let’s zoom out a little bit. When we look at the rhetoric around RunwayML, you’ve talked a little bit about this confluence of code and art, and it’s not often that we see companies at this intersection. Talk a little bit about that, conceptually, and what it means to you and your customers. One question I’m curious about: did it make it harder to raise venture funding back in the 2019-2020 timeframe because you were sitting at that intersection, with that framing?

Cris: One thing I will clarify is that when I came to NYU, I went to art school. I spent two years in an art school while working and taking classes in computer science. It’s a unique arts program inside NYU called ITP, a program that’s been running for 40 years, and it sits at the intersection of technology, arts, and design. You can think of it as a hacker space. You can just be there working on whatever you want, any topic that involves technology, art, and design, and take classes from any department at NYU. So you’re surrounded by really smart people from all sorts of backgrounds, ideas, and skills, and you’re building interesting creative projects just because you’re interested in exploring things. When we started the company, we started doing the research inside this program. It was a way for us to have fun. We just enjoyed doing this: experimenting with the technology, building our projects, showing them in galleries or physical spaces or online, and seeing what came out of it. That’s what drove us. When we started seeing that the interest was more than just artists, that it was companies and filmmakers and creators, it was like, hey, we should actually take this beyond an experimental art approach and productionize it, to make sure we can deliver on the promise of transforming how content is created.

Was it challenging to raise capital at that time with that experimental art narrative? I don’t know. It’s difficult for me to benchmark because, again, I was two years into New York, coming from a totally different country and culture, so I didn’t really know at the time what raising actually meant. I was more like, hey, we just need to start this company. A bunch of VCs and investors had already started to reach out, so we built a process, and it actually took us about four weeks to raise, which was really fast, I think. Thinking back to that time, I never had a deck. We just showed a demo, and everyone immediately understood how it worked. My advice from that time would definitely be to build demos. Build things, more than just decks.

Cris: Now that I look at it, having raised a few more rounds since then, it was interesting to see that we come from a background and set of skills that are not common in, I would say, most venture-funded companies. Most of the members of our team have an art practice or a creative background. Our engineers are artists themselves; many studied art as their primary degree and became engineers after. And I think that drives a few things. First of all, culture. The culture of Runway is very creative-driven, very artist-driven, and that fits perfectly with the product we are building. We’re really thinking about creativity, about content, about creative tools. And when you’re an artist yourself, you’re building in a way for yourself. You know, you understand this type of user.

Ishani: You frame it as the intersection of art and technology, art and code. There’s so much opportunity as you’re articulating for the intersection of, you know, technology and X. I think that’s where we’re super excited about the next generation of applications that maybe we haven’t all thought about yet. So, we’re excited to see the success that you’ve had and all the continued progress. You know, building a culture of creativity in a technology company is inherently both easy and difficult.

And so being able to do that, and then continuing to scale it, is so exciting for us to see.

Cris: Yeah. And it’s been a great way of attracting really great talent. The intersection of art and technology is something that has grown a lot over the last couple of years, and there are a lot of interesting, talented engineers, designers, and people in general sitting at that intersection who really want to think about how to apply these technologies to art making and creative making. So, Runway has become that spot where you can just come and help us build that kind of reality, in a way. And yeah, I’m really excited to continue doing that.

Ishani: Cris, thanks so much for walking us through the business. We’re going to end this series of podcasts with three lightning-round questions that have a little less to do with your business specifically and more about where you sit in the ecosystem. So, aside from your own, what startup or company are you most excited about in the intelligent application space, and why?

Cris: That’s a good question. I’m really excited about companies that are verticalizing ML in niche domains. We started using this company called SeekOut for recruiting a couple of months ago, and it’s been so transformative for us, specifically for finding talent. I’m excited about companies like Weights & Biases as well: in terms of research, how do you help your team move faster by identifying what needs to be done and running experiments faster? So, any company thinking about long-tail use cases and about optimizations so you can run some of these algorithms or platforms, those are the companies I’m excited about.

Ishani: Incredible. And what a great segue to the fact that SeekOut is going to be our next podcast. Okay, question number two. Outside of artificial intelligence and machine learning solving real-world challenges, where do you think the greatest source of technological disruption and innovation will be over the next five years?

Cris: I guess I’m a bit biased about this, but I would say it will come from non-domain experts diving into domain-expert fields. The barriers to entry for a lot of technologies have been considerably lowered, so you have people who are able to build in domains that are perhaps not their own domains of expertise and bring in insights, thoughts, ways of working, and ways of thinking that are completely new. The misfits of those spaces, for me, are where a lot of transformation will happen. For us, it was coming from an art background, from a creative perspective, and changing how video works in business, right? We have so many insights and so many ways of thinking about the product and the ecosystem that perhaps people in the industry today are not really thinking of. That’s so unique and such a differentiator, and I’m really excited to see more people jumping between different domains and backgrounds.

Ishani: Right, this concept of accessibility begets innovation.

Cris: Yes, exactly.

Ishani: Question number three. What is the most important lesson, perhaps something you wish you did better, that you’ve learned over your startup journey so far?

Cris: Oh, well, a lot. Perhaps a good way of summarizing all the learning is this: in order to build a great product and a great business, the rate of learning really matters. How fast you are learning as a company, as a team, and as a product; how fast you are learning about your customers, the industry, the competition, the market, the technology. That rate of learning, how fast you can do something you’ve never done before, experiment with it, learn as much as possible, and adapt, is really, really important. Something I’ve seen a lot at other companies, and it has happened to us as well, is that it’s easy to get stuck on something you’ve decided you, quote, “know works.” But then something happens, and you’re not able to adapt. So, have that mentality of always learning. Learning never stops in every single domain of the company. Always keep learning as much as possible, and then everything else will come.

Ishani: I love that. In the same way that you’re always launching your product, you’re always learning how to build a company.

Cris: Exactly. Always.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you’d like to learn more about Runway, you can find them at RunwayML.com. To learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for Founded and Funded’s next spotlight episode on another IA40 winner.

Related Insights

    Founder Voices from Madrona’s 2022 Annual Meeting
    Qumulo CEO Bill Richter on the Benefits of Enterprise Partnerships
    How SpiceAI is Tackling the AI Tooling Gap with Luke Kim
    Terray Therapeutics Building an Intersection of Innovation Company
