Robotics Expert Sidd Srinivasa on Trends and What’s Ripe for Innovation

Robotics with Sidd Srinivasa

In this episode of Founded and Funded, Madrona investors Aseem Datar and Sabrina Wu sit down with robotics expert and University of Washington Professor Sidd Srinivasa to talk about the technological and sociological trends that are leading to innovation in the robotics space, where Sidd sees opportunities for founders, and why now is the time to pay attention to what's happening in the space. Sidd also shares why he is what he calls an "accidental roboticist" and some of the hard-learned lessons from his extensive career.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, Digital Editor here at Madrona Venture Group, and this week we are diving into a topic that I think we can agree everyone loves to talk about — robotics. George Devol created the first digitally operated and programmable robot back in 1954. And since then, we have been awed by the likes of C-3PO from "Star Wars," Tipsy, the cocktail-serving robot in Las Vegas, and Scout — Amazon's delivery robot in Snohomish County here in Washington. Robots are transforming productivity, efficiency, cost, output, and product quality for companies, and many trends are coming together to push the move to automate: the pandemic, of course, which has pushed for a more touchless, remote-first way of operating; an enduring labor shortage; technological innovation in computing, AI, and machine learning; and advancements in infrastructure and data quality that mean the use of computer vision in real time is now possible. All of these trends come together to create almost endless opportunity for founders in the robotics space.

So, this week, investors Aseem Datar and Sabrina Wu are talking with robotics expert Sidd Srinivasa about all of this and so much more. Not only do we learn how Sidd is actually what he calls an accidental roboticist, but he also outlines the areas of robotics that he sees as ripe for innovation and some of the hard-learned lessons from his extensive career. With that, I'll hand it over to Aseem and Sabrina to dive in.

Aseem: Hello everyone. My name is Aseem Datar, and I'm happy to be here today with one of my fellow investors, Sabrina Wu, and our guest of honor, Professor Siddhartha Srinivasa, to talk about our favorite topic — robotics. Recently there have been a whole bunch of technological advancements in the field of robotics that mean the world is primed for accelerated innovation and adoption, especially within sectors like industrial, manufacturing, logistics, and many, many more. At Madrona, we're excited to see where entrepreneurs take it and the kinds of companies that they build using this technological building block. We wanted to bring in one of the foremost experts in robotics to talk about some of these recent trends and why now is the time to pay attention to what's happening in the space.

Sidd, thank you so much for joining us and welcome to this conversation.

Sidd: Thank you so much for having me, Aseem and Sabrina. It's a pleasure to be here, and it's a pleasure to chat about robots, one of my favorite things to talk about.

Sabrina: Yes, Sidd, thanks so much for being here. We're really excited that you were able to join us today. You know, looking at your background, you were previously at Carnegie Mellon University for 18 years, and for many of those years you were running the Robotics Institute. Thankfully, we were able to steal you away from them and have you join the University of Washington, where you're now an endowed professor focusing on human-robot interaction. You, of course, were also one of the first-wave founders of Berkshire Grey, now publicly traded on the New York Stock Exchange after having revolutionized the use case of robotics and AI for fulfillment at scale. So why don't we start with how you really got interested in robotics in the first place? Was there a pivotal moment for you when you were growing up that got you interested in the field, or, you know, really what was it?

Sidd: That's a tough one. I wish I could say that there was some origin story, one day in which I had this revelation. But I'm actually a very accidental roboticist. It was in 1999. I was ready to go do a Ph.D. in mathematics at Caltech or in fluid mechanics at Cornell. The then-director of the Robotics Institute, Raj Reddy, visited IIT Madras, where I was doing my undergrad, and he happened to come home and was talking to us — my dad was a professor there as well. Then he asked me, "What are you going to do with your life?" And I said, "Oh yeah, I'm going to do one of these things." He said, "Nope, you should do robotics and apply to this Robotics Institute place" — which, you know, back in 1999 was fledgling. I said, "Why not?" I still remember, after I got my acceptance, my dad sat me down and said, "Son, you know what the future is? It's turbines. It's not robotics. Robotics is just a fad." I still talk to him about that, about how turbines are doing compared to robotics. I'm sure they're doing really well. But certainly, I'm glad that I pursued robotics. Ever since, it's been such a pleasure waking up every morning and working on robots. I just continue to be flabbergasted that people pay me money to do something that I would in a heartbeat do for free.

Aseem: That's awesome. I thought there was going to be some "I was watching 'Small Wonder'" kind of story, but maybe not. And who knows, maybe you're someday going to build robots that operate turbines, and you'll bring the best of both worlds together. I think we have the most fun learning about backgrounds — these stories that don't surface on LinkedIn. So, thank you for sharing that. As we at Madrona think about robots, the one obvious question we always come across in our minds as we think about the space and build a prepared-mind kind of framework is: why now? What's changed in the world — robots have always existed in some way, shape, or form for decades. Following on that question, what are some of the driving factors that you believe are leading toward the acceleration, the investment in the field, and ultimately toward adoption?

Sidd: It's been a slow boil in robotics, I must say. It's not that there's been some step-function improvement. One of the things that has actually been hugely beneficial is Moore's law. Computers are getting faster and faster day by day. Essentially, the same algorithms that we used to run 20 years ago, when I started my Ph.D., now take seconds to run instead of tens of minutes. I think that's a huge win because one of the interesting things about robotics is that your clock is set by nature. It's set by gravity, right? If you have a coffee mug that you're trying to pick up and it starts dropping, you can't slow down time so that your computation catches up with it. You just have to make it not fall. You have to grab it. I think the ability of our computing to finally catch up with nature, and potentially exceed nature, has been a huge tailwind for us. I think additionally, there are a few other factors. One is hardware, particularly perception hardware, which has gotten much better and much cheaper.

Some of that has been driven by the self-driving car industry. You know, back when I started my Ph.D., you had to pay tens of thousands of dollars to get a FireWire camera and then buy a giant board that you would attach to your computer, and you'd have to write custom software just to be able to grab pixels out of a camera.

That's no longer true; things are much cheaper now. And that's super useful. It's super useful not just to bring down the BOM cost of your product, but also to prototype things. It's much faster and easier to prototype things when parts don't cost tens of thousands of dollars. That means that now we can very speedily go through several iterations of a robot or a robotic system without necessarily having to think too much about what we're purchasing right now, so you don't have to prematurely optimize just yet.

Aseem: Yeah, that's so interesting and so relevant. I remember the time when I was writing code on embedded systems, and you would think about memory management, right? Like you would think about how much memory my algorithm is using. And now, when you graduate from college, you're just commissioning another VM. You're just buying more compute at cents on the dollar, right? I think that's just fascinating in terms of where the world has gone. Sidd, what about networks? What about latency? Is there something to unpack there in terms of 1) the time to make a decision getting faster, and 2) advances in hardware itself — in terms of precision arms, actuators, and so on? Is there something there that's also, I would say, a light tailwind that's pushing this forward?

Sidd: I think one of the things that we're seeing recently is that there has been a greater availability of compliant manipulators, you know, things that can work with and around people. We call them human-safe, but essentially, they have the ability to feel forces and respond to them just like our arms do. And one of the advantages of that is that it transfers a lot of the complexity from the metal to the silicon. These robots, which are not industrial manipulators but compliant manipulators, are much more complicated to program and manipulate, but they are intrinsically safe and intrinsically more capable because they are able to feel forces and modulate their forces.

And I think our ability to wrangle this new piece of technology better is going to be a big unlock for the future. You're already seeing how, if you look even at automotive, a majority of the manipulation in their assembly is done by these giant industrial manipulators that just pick and place. But a lot of the relevant and important manipulation, particularly of flexible things like brake lining or seat cushions, needs forces and torques and very careful manipulation. And that, even now, is done by people. That is particularly challenging. I think a future that I can see is the ability for robots to be able to perform those careful force-guided tasks that we humans do so effortlessly.

Aseem: I think that's a great characterization of what things are coming together. You hinted a little bit at the industrial sectors, and so I want to go down that path of how you think about the market. What are areas that you see as ripe for robotics to play a huge role in? How do you think about industry focus? What are industries where robots are an obvious solution? And tell us a little bit about your thinking around the application of robots to those use cases.

Sidd: One thing I would say is that I have a bias to be a very full-stack roboticist. I like nails, and I like to hammer them with whatever hammer is available. I think for me, there are a few criteria that are really important when trying to decide what the right nails are. One is how relevant it is. There are a lot of places where we may think robotics is relevant, but the technology that's needed to do it is not there at all. Part of the reason for that is that we tend to anthropomorphize. We think, oh, this is easy for me, so surely this must be easy for a robot, and that's sometimes true, but it's more often not true. So, I think being able to find the intersection of something that robots are capable of doing and something that is of value to people is really interesting.

From a vertical point of view, I think there are a few places where robotics has a lot of potential. And I think a lot of that is related to how complexity can be addressed by either changing the process path or changing how the work is done. One of the places that I am particularly excited about is being able to use robotics in farming or agriculture. I think that there's tremendous potential in being able to merge the way food is produced, the science behind how food is produced, the way food is harvested, the way it's packaged, and the way it's sold. I think sometimes, and this is funny, we assume that strawberries have to grow in a particular way. But that's not even true, right? We humans have manipulated the way strawberries grow and appear based on a lot of criteria that we care about. But you can imagine a world where we are optimizing those criteria not just for our consumption, but also for the ability for robots to be able to pick them. The ability for robots to be able to identify them. The ability for robots to be able to package them. I think when you think about it holistically, as in, my goal is to produce really delicious food and to automate its harvesting and delivery to a person, then you can really think of ways in which you can automate the entire process and think about how you can manipulate the entire process. So that's certainly something that I'm interested in.

I think another piece that to me is really interesting, that I continue to be fascinated by, is last mile. You look outside, and outside any doorstep there are packages, and it's interesting and challenging to understand how those packages can be delivered to you faster and better. Right now, it's both labor-intensive and energetically inefficient. And I don't just mean packages, right? Even if you think about food delivery, I think of it as a full stack: how would we imagine the preparation and the combination of the food such that it continues to be delicious?

But it also has to be something that can be automated and delivered to us on time. Some foods are actually very, very hard to deliver, as we all know. Getting fries delivered at home, or getting a nice Indian samosa delivered that's still crispy and not soggy, is super hard. But I think part of that is because of the way those food items are created — because they were never created to be packaged in a box and delivered to us. They were created to be eaten hot off of the tava or the plate, into our mouths. So, thinking through how that entire process might work would be interesting and valuable.

Aseem: That's so cool, because it's complementary to the view we have at Madrona: there's a strong wave, you know, started by COVID, where a lot of systems and processes are now moving toward being more autonomous, touchless, and contactless, as well as toward high-quality outcomes, right? Because the more systematic an approach you take, the more consistent the quality that comes out of it. An area that we've not talked about here, but that's interesting to us, is the smart factory and autonomous vehicle assembly. I think all these things, coupled with the problem of an aging workforce and a shortage of labor, are areas that we believe are ripe for disruption, or I would say opportunity, from a robotics standpoint.

Sidd: Yeah, I completely agree with that. I also think that part of this might be to rethink how processes are engineered. As an example, say you wanted a robot that would do your laundry. This is everybody's favorite robot. Building a robot that lives in your home, loading your washer, pulling clothes out, putting them into the dryer, taking them out, and folding them might be incredibly challenging.

But you can imagine a world where some entity takes all of your dirty laundry to some centralized location where there's a larger physical space, does all the cleaning for you, and delivers it back to you as quickly as possible. By changing the way things are processed and turning many small things into one aggregated larger thing, I think you can potentially get a lot of wins. That, of course, demands that we as humans change the way we want to live to some extent. But there's a lot of evidence for that, right? We're willing to change the way we work and live if it is, longer term, more convenient for us. We haven't talked about consumer robotics — robots in the home. I find that to be the most challenging market and something that I haven't particularly thought about, because building something boutique for everyone's home is way, way, way harder than building something that sits in its own physical space that can be controlled and manipulated by you, where everything goes to it and comes out of it.

Sabrina: We have this debate a lot at Madrona as well: where is the best use case for robotics? Is it in the enterprise setting? Is it in the consumer setting? And I'm curious. You touched a little bit on the different verticals, agriculture and others, but to be a little bit more specific: if you're a future founder listening to this podcast today, what opportunities are you seeing? What white spaces are you seeing for a founder to come in? Is it specifically within verticals or applications, or do you see it more on the hardware or software side? Just curious what your thoughts are around that.

Sidd: I do think that there is potential everywhere. My own personal interest has always been in trying to find a vertical opportunity and then doing whatever it takes to solve that problem. Also, specifically look at a place where automation is not necessarily a must-have but can be a ramp-function value add. I think if you start off with, "Hey, if I don't build Rosie the Robot, then I don't have a business," well, then you're in trouble. I think we want to make sure that there is a business case even with very limited automation. Even there, I would stair-step automation, as oftentimes quality-assurance prediction is much easier than actual physical manipulation. If you can have a value add that's just about having sensors in your world that help you understand your process, or someone's process, better, such that you can make it more efficient, that's already a big win. And every single motor that you add to your world is an order of magnitude greater complexity, because everything breaks when you interact with the physical world. So, I think even there, when you're starting to add automation, first ask the question: Can you add automation that doesn't move but that is able to monitor and enhance your process path through AI, computer vision, and machine learning, and then subsequently use that to bootstrap how you might want to integrate physical automation?

I think that's a place where there's a lot of potential, right? Like even thinking about quality assurance. I think the biggest challenge with just inference and perception as a business is that you might get sharded by so many different applications. You know, someone has a light bulb that they want to assure, someone else has a PCB. Someone else has a salad that they want to know whether any of the produce is old or not. Someone else may have bananas. Someone else may have other things. So, I think the challenge there is in making sure that there aren't so many different verticals that you're chasing that you end up doing a poor job of any one of them. I think the biggest challenge that I see in this particular space is that sometimes people either focus too much on a vertical, and that's too narrow (it's one of the teeth in a comb, and it's too small), or they try to build infrastructure, and that becomes too broad. Like, I don't want a machine learning model. What I want is a managed service. I want someone not to hand me over a piece of code; I want someone to solve my problem. My problem might be that I want to be assured that the chickens I'm selling are all of the right shape, or I want to be assured that the fries I'm selling are all counted correctly, that there are 37 fries in each bag that I'm selling. I think being able to produce value while still not being sharded by too many teeth in the comb is interesting and challenging. I don't think anyone's cracked that yet, but I think that there's a lot of opportunity in that space.
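To make the "automation that doesn't move" stair-step concrete, here is a minimal sketch, not from the episode, of a stationary quality-assurance check: counting items in a single camera frame, in the spirit of the 37-fries-per-bag example. The image path, area threshold, and expected count are hypothetical placeholders, and the sketch assumes OpenCV 4.x is available.

```python
import cv2  # assumes OpenCV 4.x (pip install opencv-python)

def count_items(image_path: str, min_area: float = 500.0) -> int:
    """Count distinct bright items on a dark background in one camera frame."""
    frame = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if frame is None:
        raise FileNotFoundError(image_path)
    # Otsu thresholding separates items from the background automatically.
    _, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Each sufficiently large external contour counts as one item.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= min_area)

if __name__ == "__main__":
    expected = 37                        # hypothetical spec: 37 fries per bag
    count = count_items("bag_0001.png")  # hypothetical image path
    print("PASS" if count == expected else f"FAIL: counted {count}, expected {expected}")
```

A check like this adds monitoring value with no motors at all, which is the point of stair-stepping automation before introducing physical actuation.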

Aseem: Yeah, you alluded to this, but I want to ask you this million-dollar question, or maybe it's a millions-of-dollars question these days, with how companies are performing and creating value. Hardware robotics or software robotics? Let me qualify that a little bit. There's generally healthy tension on: do I solve a problem using hardware smarts and precision and building more complex arms, or do I actually solve it using the power of software and intelligence and ML models and CV? How should one think about that?

Sidd: I think about this a lot, I must say. The way I think about it is, first of all, I don't have an answer; I just have a thought about it. I think that the constraints of the built environment often tell us what's possible and what's not possible. So, if you look at automating your kitchen, for example, it's very hard to put belts and pulleys and tubes in your kitchen that plop food on your plate. Just the natural constraints that you created because it's a kitchen that you want to use — it's a kitchen that has certain dimensions — make certain hardware choices possible or not possible.

The fewer constraints you have, the easier it is to solve using only hardware. You can use off-the-shelf mechatronics to solve a lot of these problems. Our beer factories and our Frito-Lay factories are great examples of solving a very hard food-manufacturing problem effortlessly, because we've removed a lot of the constraints that exist there. My personal taste is in looking at spaces where the constraints of the built environment make it nearly impossible to use off-the-shelf mechatronic solutions, which compels us to use a combination of what we call robotics, whether it's robot arms or more complicated actuators, and a lot of intelligence — computer vision, machine learning, nonlinear control.

I think those are the spaces that lie at the intersection of things that are very valuable, because no one has a solution for them, and things that are fundamentally going to get better. Our compute is always fundamentally going to get better. So, to answer your question of hardware versus software: there are many problems that can be solved using just hardware. But I gravitate toward problems that are much, much harder to solve with off-the-shelf mechatronic solutions, either constraint-wise or from a value-proposition point of view.

Aseem: That's very cool. A slightly related question. There's always this concern around safety in robotic operation, like human in the loop. You know, what happens when a robotic system like a Tesla goes off the road, and what's the correction mechanism? I know, Sidd, last time we chatted, you had a really cool posture on how you think about humans in the loop. I distinctly remember your comment that these things will fail. We know that they will fail as we are building and getting better. How should you design for that?

Sidd: First of all, I do agree that safety is a requirement. It's not a nice-to-have; it's a must-have. I think also that we have to assume that robots will fail. I always believe that it's not the happy path, not the YouTube video, that you should be looking at. You should be looking at all the times that the robot fails, right — the unhappy path. And I think that humans also have perceptions of robot capability based on the happy path that they see. As an analogy, if an alien being watched YouTube videos of 7- to 10-year-old children, it would think that they're virtuoso pianists, incredible gymnasts, amazing singers, the best at math, able to recite thousands of digits of Pi, because it doesn't see the unhappy path, which is that they're running around kicking and screaming most of the time. I think it's the same with robots, right? When people look at videos of robots, what they see is the happy path of robotics.

A lot of what I do is anticipate what the unhappy path will be and address it. This is actually hard because sometimes your robot doesn’t know when something goes wrong. This happens commonly, you know, the robot fails to grab something, and it doesn’t know that it’s failed to grab something.

So, there's an observability question: we need to make sure that the robot knows that something has gone wrong. I think the second piece is around creating exception paths, such that you can gracefully fail. In most situations, you can gracefully fail. There are a lot of opportunities for correction, particularly if you own the full stack. A lot of the design engineering that is needed is to make sure that we are able to identify what the exception paths are and handle them. Actually, if you watch a high-speed video of yourself grabbing a coffee mug, you'll notice that you're fumbling all the time. You're failing and failing, and then grabbing the coffee mug. But all of that happens in less than 10 to 15 milliseconds. So being able to react to these failures in an elegant way is important.
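As an illustration of the observability and exception-path ideas above, here is a minimal sketch, not any real system's code, of a grasp routine that verifies success through sensing and retries or escalates instead of assuming the happy path. The Gripper class, force threshold, and retry count are hypothetical placeholders.

```python
import random
import time

class Gripper:
    """Hypothetical stand-in for gripper hardware with a force sensor."""

    def close(self) -> None:
        time.sleep(0.05)  # pretend actuation delay

    def grip_force(self) -> float:
        # A real sensor would report measured force; here we fake a reading.
        return random.uniform(0.0, 5.0)

def grasp_with_exception_path(gripper: Gripper,
                              min_force: float = 1.0,
                              max_attempts: int = 3) -> bool:
    """Attempt a grasp, verify it via sensing, and retry or escalate on failure."""
    for attempt in range(1, max_attempts + 1):
        gripper.close()
        # Observability: check a sensed quantity instead of assuming success.
        if gripper.grip_force() >= min_force:
            return True
        print(f"attempt {attempt}: no grasp detected, retrying")
    # Graceful failure: hand off to a human or a recovery behavior
    # rather than continuing with an empty gripper.
    print("escalating to a human operator / recovery routine")
    return False

if __name__ == "__main__":
    grasp_with_exception_path(Gripper())
```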

In terms of human in the loop, one of the things that I believe strongly in is being able to leverage human feedback whenever and wherever possible. You always want to build systems where you can, either offline or even online, annotate data and annotate the robot, such that it's able to learn from its experiences as well as from human supervision. I think that we have a lot of tools available now that help us do that. We have the ability to capture large amounts of data. We have the ability to send that data to annotators who are able to annotate it for us. To me, being able to build continual learning algorithms, and being able to formalize that, is a way to capture human insight without necessarily having to rely fully on it.
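Below is a minimal sketch of the human-in-the-loop, continual-learning pattern described here, with hypothetical names throughout: low-confidence episodes are queued for human annotation and folded back into periodic retraining, while confident ones are used directly.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Episode:
    observation: list                 # e.g., image features or sensor readings
    predicted_label: str
    confidence: float
    human_label: Optional[str] = None

@dataclass
class ContinualLearner:
    labeled_data: List[Episode] = field(default_factory=list)
    review_queue: List[Episode] = field(default_factory=list)

    def record(self, ep: Episode, confidence_threshold: float = 0.8) -> None:
        # Low-confidence episodes go to human annotators; confident ones
        # are trusted and used directly.
        if ep.confidence < confidence_threshold:
            self.review_queue.append(ep)
        else:
            ep.human_label = ep.predicted_label
            self.labeled_data.append(ep)

    def ingest_annotations(self, labels: List[str]) -> None:
        # Annotators (offline or online) return labels for queued episodes.
        for ep, label in zip(self.review_queue, labels):
            ep.human_label = label
            self.labeled_data.append(ep)
        self.review_queue.clear()

    def retrain(self) -> None:
        # Placeholder: in practice this would refit the perception or
        # policy model on self.labeled_data.
        print(f"retraining on {len(self.labeled_data)} labeled episodes")
```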

Sabrina: That's fascinating. I'd love to pivot a little bit and have you tell us about your journey at Berkshire Grey. You were one of the first founders of the company, and now they are one of the leaders in providing robotic picking and packing technology used by companies like Target and FedEx. Can you tell us a little bit about how that came to fruition? What were the challenges you saw in the industry at the time? And I would love to learn a little bit more about your experience scaling the business and ultimately making a bet on the future.

Sidd: I still have such warm feelings about my time at Berkshire. I really loved it. It coincided with my daughter being born, so it was a pretty epic time for us as a family. I've seen my daughter grow — she's seven years old now. I can tell how old Berkshire Grey is based on how much Sameera has grown. Obviously, full credit goes to a lot of people. I'm just one of the people who was part of this journey.

But I think the central thesis was always this idea of being able to build a full robotic stack for automation. One of the things that we had observed was that there were some really amazing companies out there, but they were providing a Lego block that would attempt to fit itself into a giant jigsaw puzzle. Like saying, "Hey, I have a nice picking system," or, "I have a nice system that can move a tote from one place to another." You realize very quickly that integrating a picking system with a very complicated warehouse management system that has so many inputs and so many outputs is much harder than building the picking system itself. Even if you have the best picking system in the world, your ability to integrate it with even one integrator is very hard, and then think about having to integrate with 10 or 20 of them, right? Those kinds of businesses were failing not because they didn't have a beautiful, perfectly crafted Lego block, but because it didn't fit in the house. It was too much work to make it fit in the house. You'd have to take the house apart and put it back together.

The central piece of Berkshire Grey was: give us an empty space. As an input, trucks come in, and as an output, packages come out. We won't tell you what's in this empty space, and you don't tell us how to control that empty space. It was a huge bet for us to think about automation that way, because we had to believe that people would give us this empty lot. It's a huge investment for people to give us this empty lot, but the positive was that we could fill this empty lot with whatever we wanted — people, robots, anything — and we controlled the entire experience. That was what we really sought to do. I must say, initially a large part of it was not automated, but still, the input-output relationships were maintained. I think over time, as more and more maturity came about (and obviously, since I left Berkshire Grey, they've become even more mature in everything that they've been doing), you fill out more and more pieces of this Lego house, but you control everything that happens in there. So that was a big learning for me.

I think another learning is that when we were four people, each one of us had to write code, talk to vendors, be a program manager, weld robots. I really enjoyed that, because I just love building robots. As the company grew to 100 and then 200 people, we had to organize ourselves into various roles. That was a lot of fun too, but fun in a different way, and it potentially needed a different set of people. Obviously, I've done a few things since Berkshire Grey, and I've realized that it's almost like shedding skin. You have one skin, and then you molt, and you shed that skin, and then a new skin comes about. And you have to just accept that the people who were part of the original skin may not necessarily be the ones who are ready for the next one, or the one after that. Some people might grow into those roles and those opportunities. But I think just acceptance of that was valuable.

I think another lesson that I learned was that customers don't want to tell you anything. This was incredibly frustrating for us, because we just wanted to know what they actually wanted to solve.

If we knew what they wanted to solve, we could do it, but it took us a material amount of time before we earned their trust and they were able to open the door more and more. I think that was really interesting for us.

Sabrina: That's awesome. I hadn't heard that story before. You know, from your experience at Berkshire Grey, and, as you mentioned, the earlier-stage companies and ideas you've worked with since then, I'm curious to hear what mistakes you've seen people make along the way and any advice that you have for new founders as they think about their journey in robotics.

Sidd: Oh boy, I have made a lot of mistakes. I think that, in some ways, the scars that we have are what help us not make those same mistakes again. I think that's probably the only value that I provide, that I've made more mistakes in robotics than other people, so I can just tell you what not to do. I think it's really important to carefully think about what your minimum lovable product is. I cannot stress enough how important that is. I think that people fall in love with a certain way of doing something or fall in love with a certain piece of technology, and they forget that in the end it has to be valued and loved by your actual end customer. This was, frankly, a big struggle for me too, because I've been building robots for so long that I have a way of building robots. I have to unthink that sometimes, because I don't want to be stuck in that same rut. I think the other thing is that a lot of people who want to build robots come from software or AI or machine learning and forget about, or at least don't have enough scars from, just the long lead times for getting anything. I was actually just talking to somebody who was fascinated by how hard it is to do integration testing in robotics. They were telling me, "Oh, you know, with software, you just click this button and then you can run integration tests on everything. How do you do that with hardware?" I was like, "Nope. It can't be done." You have to actually have a QA team that goes out and does these tests for you. You have to pay them a fairly significant amount of money to go do that, and that takes a significant amount of effort.

So, I think there are certain mental models from only building software that you need to rid yourself of. That said, there are other people who have only built hardware and who want to build robots. They build amazing, beautiful hardware systems, and there, too, there's a failing, because you believe that everything can be done with hardware ingenuity. Whereas, you know, I keep telling them, computers are free, and instead of building a mechatronic way of, let's say, isolating a part: "Hey, just put a camera there and then it'll tell you where it is."

So, I think that robotics is a funny space that requires you to know both hardware and software, and my advice would be to make sure that you have enough people in the room who have enough scars from making enough mistakes in hardware and software, and who have the nuance to be able to lead them to do the right thing. I think that's been the biggest learning for me.

Aseem: Yeah, very profound. It's almost like: go hire the people who have made mistakes so that the robots don't make the mistakes. It's amazing what we take away from this conversation. Hey, Sidd, I know that the only thing between you and dinner is us, and ever since you mentioned samosas, I've been envisioning that you're going to go off to a room, a Bat Cave in your house, press a button, and a robot is going to start frying a samosa.

Thank you so much for making time. I think there are a lot of aspiring founders we've talked to who are deeply interested in and very passionate about this space, and I'm sure they will take a lot away from this conversation. So, thanks for spending the time, and thanks to those of you who tuned in.

Sidd: Thank you.

Coral: Thanks for joining us for this week’s episode of Founded and Funded. If you’re interested in learning more about Madrona’s investments in the robotics space, you can check out the show notes for Aseem and Sabrina’s contact information. Thanks again for joining us and tune in, in a couple of weeks, for our next episode of Founded and Funded with Snorkel’s Alex Ratner.

Related Insights

    SeekOut CEO Anoop Gupta and VP of People Jenny Armstrong-Owen on AI-powered talent solutions, developing talent, and maintaining culture
    Hugging Face CEO Clem Delangue and OctoML CEO Luis Ceze on foundation models, open source, and transparency
    Starburst’s Justin Borgman on entrepreneurship, open source, and enabling intelligent applications
    The Time is Now for Enterprise Robotics
