In this special live episode of Founded & Funded, Madrona Partner Vivek Ramaswami sits down with Jason Kwon, Chief Strategy Officer at OpenAI — a 2025 IA40 winner — just days after the company’s Sora and agentic commerce announcements.
They dive into how OpenAI decides which markets to enter directly and which to support as a platform, its perspective on AGI, and where the space is headed, from compute and data to reasoning and agents.
Anyone building in AI will benefit from this candid and in-depth discussion recorded during our 2025 IA Summit.
Listen on Spotify, Apple, and Amazon | Watch on YouTube.
You can also read Vivek’s takeaways for founders here.
This transcript was automatically generated and edited for clarity.
Vivek: Both of us came up from the Bay Area, me two days ago and Jason this morning, so we’re glad to have him here. Jason’s been part of the OpenAI leadership for almost five years now. He’s seen a lot there. There’s no shortage of incredible and interesting things coming out of OpenAI. Let’s go ahead and get started with this: What does being chief strategy officer at arguably the most strategic company in AI and tech mean on a day-to-day basis?
Jason: Yeah. The way I mostly think about my job is taking the external world and bringing it inside the company. Nominally, the functions I work with a lot are legal, policy, global affairs, and parts of trust and safety. The legal part also brings me into deal making, but really it’s the external world reacting to the technology. There’s a whole process of taking in those considerations, especially when the technology is very disruptive, and thinking about how they should shape things: how we approach ecosystems through deal making or deal structures, how we should think about various policy measures, and how we should build solutions, working with startups as well as big companies to address various concerns. That’s what I think about on a day-to-day basis.
Vivek: One of the interesting things we’ve been talking about is that OpenAI is best known for frontier models, from when GPT-3 exploded into our world, through GPT-4 and beyond. Obviously, the ecosystem around those models is exploding too, and we often hear this term: the full stack of AI. What does a full stack of AI look like to you? How do you all think about it through an OpenAI lens?
Jason: Yeah, so I think it starts with compute and data, and you could see us becoming a lot more active in this space recently, much more so than in years prior. Then, certainly, there’s the foundation model layer, which we’ve been active in for a long time. Then there’s the application layer: ChatGPT, and the applications that get built on the API. Those are essentially the basic layers, and then sooner or later, you’ll have devices that are custom-built or purpose-built for this kind of technology. Then you can go even more specific within each of these layers.
In the future, you’ll probably see additional stratification, like at the application layer or in the model layer, you’ll find infrastructure in terms of tooling, eval suites, things that have to do with cleaning or structuring data when it comes to task-relevant data that you need to collect for post-training. I think all of those elements, you’ll see further deepening and development, and I think that’s actually a very interesting way to think about what you should build as a founder going forward, because a lot of this will become very relevant, not just to us, but various other foundation model labs. The pace at which all these labs can actually build all these tools and technologies at the various layers is not necessarily going to keep up with the pace at which the ecosystem can build all these things, and so there’s a lot of opportunity there.
Vivek: Yeah. We’re definitely going to get into the meat of how founders should be thinking about where OpenAI plays and where it doesn’t, because I think it’s so important and something we think about all the time, but going back to the point you were making around compute, infrastructure, the data, we’re seeing all this news about hundreds of billions of dollars being poured into or about to be poured into the compute side and the infra side, and OpenAI has had amazing partnerships with Nvidia, Oracle, and so many others, and I think one of the cool things for us as early stage investors and founders is this CapEx build ends up being really, really great for building applications, but maybe just take us through some of those partnerships and maybe just a little bit beyond the headlines, like why is it so important for OpenAI to be spending significant resources here around compute infra and with these partnerships involved?
Jason: Yeah. Well, the simplest answer is that scaling continues to work, and we believe it will continue to work, and that implies the amount of compute we’re going to need, both for training and for inference, keeps growing. If you want to continue to push ahead on capabilities, you’re going to need increasing amounts of compute. And if you also believe that demand for this technology is still in the early innings, which would be the case if you look at enterprise adoption, where a lot of enterprises are still in proof-of-concept or pilot stages, then there’s a lot more token consumption to come. If you are able to deliver on deep use cases that get repeat usage, there’s a lot more inference compute implied in the future. That’s really the actual impetus or driver.
Vivek: Right. One of the major themes that has been running throughout the day, and on the previous panels as well, is what we call the reasoning revolution, right? On one of the previous podcasts you were on, you also talked about reasoning, and the name of today’s fireside chat is “The Collaborative Reasoning Full Stack.” So I’m curious, what does that mean to you? What does reasoning mean in the world of OpenAI, and how do you all think about the future of the reasoning revolution?
Jason: Yeah. I think there are at least two elements to it. There are probably more than that, but two immediately come to mind. One, going back to the prior topic we were touching on, is that the more compute you apply at inference time, when you’re using reasoning capabilities, the better the answers become, the better the quality of the response. So if you’re in that kind of paradigm, you now see this relationship between compute and execution, essentially. The other thing you can start to see develop right now, and this has been a topic today, as well as possibly yesterday, is agentic capabilities, which are really enabled by the reasoning stack, right? You can get into task situations, and it’s the reasoning capabilities that let the model understand what it’s supposed to do, especially in situations that would normally confound ordinary, programmatic software.
You can also get into situations where systems start to interact with each other, because they both have reasoning capabilities. You saw elements of this previously with some of the tools we put out, like deep research or our search capabilities, which use elements of reasoning to unlock agentic capabilities: searching on your behalf, understanding what it’s reading, doing analysis, and then writing the content and the report for you. Those are agentic capabilities, but the model is able to accomplish them because it’s reasoning through the task you assign to it. Then there’s one of our latest releases, on Monday, which had to do with agentic commerce with Stripe. ChatGPT, or other applications if they want to make use of the protocol we open-sourced, can now leverage the API and the models to have transactions occur, with commerce stacks and reasoning logic interacting with each other. I think this is probably going to be one of the primitives, in terms of capabilities or building blocks, for AI, the kind of thing that unlocks a lot more building on top of it.
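The pattern Jason describes, a reasoning step deciding whether to transact before handing off to a payment protocol, can be sketched in toy form. Every name below is a hypothetical illustration for this transcript, not the actual Agentic Commerce Protocol API:

```python
# Toy sketch: an agent reasons about a purchase, then hands off to a
# separate protocol layer for payment. All names here are hypothetical,
# not the real Agentic Commerce Protocol or Stripe API.

from dataclasses import dataclass

@dataclass
class CheckoutRequest:
    item_id: str
    quantity: int
    max_price_usd: float  # spending cap the user authorized

def agent_checkout(req: CheckoutRequest, quoted_price: float) -> str:
    # The "reasoning" step: check the merchant's quote against user intent
    # before any money moves.
    if quoted_price * req.quantity > req.max_price_usd:
        return "declined: over budget"
    # Handoff: in a real system this would be a protocol call out to the
    # merchant's commerce stack and payment processor, not a local string.
    return f"authorized: {req.quantity} x {req.item_id} at ${quoted_price:.2f}"

print(agent_checkout(CheckoutRequest("sku-123", 2, 50.0), 19.99))
# authorized: 2 x sku-123 at $19.99
```

The point of the sketch is the separation of concerns: the model supplies judgment about whether a transaction matches the user’s intent, while an open protocol executes it against any number of merchants.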
Vivek: Yeah. That partnership and announcement with Stripe on Monday was really interesting, because one of the things we think about is, “Okay, in this future of agents, how do they interact with each other, and how do payments work?” There are so many parts down the line. Is your sense that OpenAI will partner with a lot of the next generation of companies, or, as in the case of Stripe, one of the best payments companies out there? How do you think about what makes sense to partner on versus what you build yourself at OpenAI?
Jason: Yeah. Our core activity is focusing on general intelligence, so everything that needs to access or work with that is something we’re much more inclined to partner on. We focus on the foundation models and the surrounding stack that enables us to deliver them efficiently and at scale. But taking that and interacting with commerce, with content, with other capabilities like research or enterprise capabilities, those are all things other people have been doing for decades and have a lot of expertise in. This ability to use reasoning to interface with an entire ecosystem is also very consistent with the nature of intelligence, being able to work with lots of different systems. So I think partnership is really very consistent with that part of our mission.
Vivek: Right. The interesting thing here is that we always hear OpenAI talking about its mission of getting to AGI and doing it in a way that benefits all of us. What does that mean on a day-to-day basis? A lot of us hear about AGI from the outside; you’re around that mission every single day. What does it mean, inside OpenAI, to be working toward AGI?
Jason: Really, I think it’s about staying research-focused. The company is still very much a research-centric, research-driven company, despite the fact that most of the attention is on the products. That is what gets popularized, but inside the company, if you were to measure the amount of content at all hands, for example, 90% of it is still about the research. One thing I like to talk about with the teams I work with is that when you apply a “5 whys” framework to whatever you’re doing, somewhere along the way to the fifth why, the answer should be that it advances, helps, or supports some core research activity. That’s the one thing that’s very consistent about the inside of the company.
Vivek: Right, so research at its core, but product working in tandem with research.
Jason: Yeah, yeah.
Vivek: Yeah. And I think one of the things that is always top of mind for founders is, if I build something, is OpenAI just going to build it, or if I’m building something, what happens if OpenAI builds it? I think the old question would be, what would happen if Google or Microsoft built it? Now I think it’s very reasonable to ask, what happens if OpenAI builds it? So, we see so many interesting applications/tools coming out of OpenAI, so how do you think about what you are building and what makes sense on a 6, 12, 18-month timeframe? And to flip it on its head, where is OpenAI not building?
Jason: I think this is an interesting question, and we can talk about it a little in relation to stuff we just did this week. A very general answer would be: look at our mission, which is about AGI. You could take issue with whether AGI is a useful concept or not, but the crux of it is that if something is on the critical path to this general intelligence capability, that’s something we’re going to be interested in. If it’s not on that critical path, that, by definition, is something we’re going to be less interested in. That seems ostensibly simple enough, but every so often things happen where maybe it’s not so simple, or people get surprised.
A couple of days ago, we released Sora, and I think some people wonder, “Is this actually on the path to general intelligence, and how does this fit?” It’s very easy to look at the form factor of the product and think, “Oh, that doesn’t necessarily make sense,” but I would just say, look through to the actual capability, which is to understand the physical world as a world model, as a simulation represented through video. There’s a live question as to whether you can fully capture what you need for general intelligence through text alone. If it’s true that you need a representation of the world in moving pictures, then having this kind of advanced video capability is possibly quite important to having general intelligence of some sort, and that is why it matters.
Vivek: It’s interesting, because I think that’s true. A lot of people would say, “Hey, you talk about AGI, but then you’re coming out with really, really good video models, and there are a lot of companies going after video models and applications.” So your sense is that these things are the outgrowth of needing the data, the modalities, and the information that comes with them to fulfill your ultimate goal.
Jason: Which is general intelligence.
Vivek: Which is AGI. So, you’ve got a bunch of founders in the audience here who are building various things and thinking about building various things. What do you think are the open areas that OpenAI is probably not going to end up building against, that you would suggest founders should focus on?
Jason: I think there are two ways to think about that question. One is, “Hey, what is not necessarily going to be straight down the middle of the fairway when it comes to that general intelligence question?” So if you’re building a product that’s very focused on figuring out how to apply AI models to a manufacturing process, that’s very specific. That’s probably not an area we’re going to go super deep in, in terms of developing a whole product suite, right? That’s just an example of how to think through it. There’s another way to think through this, too, and we see this happen all the time, and maybe founders in the audience have experience with it: you might be in a particular area where the current class of models is just a little bit short of being good enough to address your use case. Maybe they’re slightly too expensive, or slightly not reliable enough, or they just don’t quite hit the use case to the level that you want.
The interesting thing to do, then, is to bet on the increase in the general capability of the models as a macro force. That’s an interesting way to build a startup; we’ve definitely seen a few that have done exactly that, rather than over-optimize on the particular set of capabilities you see in the models today. The wrong way to build a startup would be to say, “Okay, the current class of model capabilities is here, and now I’m going to spend a lot of effort on fine-tuning, customization, and data collection to fix this last-mile issue with the fundamental capability.” Instead, build the productization around delivering the intelligence and the actual utility, which is probably about more than intelligence, and let that be further enabled by the growth in the native capabilities of the model itself.
Vivek: Right, yeah. So don’t assume the models are not going to get better, basically, and build around that. I want to make sure we have enough time for questions, so I’ll open it up to the audience in case anybody has anything they’d like to ask. I see a hand in the back there?
Audience Question: From a profitability standpoint, with the cost of inference still being a non-trivial number, how do you think about the business model from a profitability standpoint, and at what point do you see an inflection where the gross margins, from a user perspective, become positive and compelling?
Jason: Yeah, so ChatGPT itself is actually profitable in most markets, if you look at compute margin. It’s the overall company that may not be profitable, because we continue to invest in scaling compute and in research, or maybe scaling is not the right word there, continuing to reinvest in research. Maybe the question behind the question is: what’s the lesson for other companies in terms of how to think about the business model, if OpenAI is in this kind of financial situation?
I think it’s probably that you should be thinking about margins relative to serving compute. Then, what other costs go into your variable inputs? If you’re doing well on the basis of, here’s how much you pay for compute, here’s how much you get per unit of whatever you deliver, and you’re positive on that, then in a very plain-English sense, you’re deriving more value out of your core input, compute, than you’re paying for it. You can probably also have a theory that the cost of compute itself will decline over time.
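The arithmetic behind that framing is simple enough to write down. The numbers below are invented for illustration, not OpenAI’s actual economics:

```python
# The "margin relative to serving compute" framing, with made-up numbers.
# Compute is treated as the core variable input; everything else is
# ignored for the sake of the sketch.

def compute_margin(revenue_per_m_tokens: float, cost_per_m_tokens: float) -> float:
    """Gross margin on serving: (revenue - compute cost) / revenue."""
    return (revenue_per_m_tokens - cost_per_m_tokens) / revenue_per_m_tokens

# e.g. $10 earned per million tokens served, $4 of compute to serve them
margin = compute_margin(10.0, 4.0)
print(f"{margin:.0%}")  # 60%
```

A positive result means each unit of delivery earns more than the compute it consumed, which is the per-unit test Jason suggests, separate from whether the company as a whole is profitable after reinvestment.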
Vivek: Picking up on that comment, we haven’t talked much about the financials and the revenue explosion of OpenAI, which is unlike anything we’ve ever seen. On a more personal note, you joined OpenAI in early ’21 as general counsel, months before ChatGPT exploded into the public consciousness. What surprised you the most about that launch and what you’ve seen since? How has it transformed the company from what you remember pre-ChatGPT?
Jason: Yeah. This goes back to what we were talking about earlier, which is trying to stay research-focused, which is actually a struggle, right? So, it’s something you have to work on, and it was much easier to be a research-focused company naturally when the rate of change inside the company itself was not ChatGPT-like.
Prior to ChatGPT, I think we were about 200 people, and we were not adding that many people per month. After ChatGPT, we 3Xed headcount every year, and that’s a painful amount of headcount growth. We’re still relatively small compared to lots of companies, but in terms of experiencing that amount of change, it takes a lot of effort to retain certain working habits, styles, prioritizations, and an idea of what’s important and what we’re trying to do. It’s a lot of work.
Vivek: Well, maybe on that, how do you manage it? Because it’s somewhat unprecedented, right? You can’t really look at comps. There are not a lot of other companies that have grown as fast, and the whole organization has to change. There’s no blueprint for it, so are there one or two things you’ve thought about or done tactically that have helped in this journey?
Jason: I would keep it simple and pick one: it comes down to leadership and focus, and what the leadership talks about all the time. What’s been really interesting is that Sam doesn’t always speak at all hands, but when he does, he always has a few minutes, and he chooses to focus on research or compute. Those are the two things he always talks about. That centers the company, because the CEO sets the tone.
It’s funny. There have been times when I or other execs have said, “Well, no, we need to talk about a bunch of these other things,” and he will nix whole parts of presentations, or take what should have been a 20-minute presentation, on something a normal company would actually spend 20 minutes on at all hands, and say, “Two minutes,” right? During all this growth, you’d be confused, because these are important, normal-company things to talk about. But in retrospect, it’s very clear what he was doing. Through all of this chaos and change, he was trying to make very clear what the main thing was and have that be a ballast.
Vivek: Ruthless prioritization, right?
Jason: Yeah, yeah.
Vivek: Super interesting.
Jason: And sending a very clear message.
Vivek: I think we have room for one more?
Audience Question: Thanks. Thanks very much for being here with us. What can you share about how OpenAI is thinking about the role it has to play in commerce? Obviously, you had the announcement earlier this week. People ask for recommendations of products, look for specific queries, comparisons, things like that. Sam Altman’s been on record, very publicly, as not wanting to go down the advertising route. What can you share about how you think about that huge opportunity, what people are asking for help with, what your role might be in delivering it, and how you might monetize it? Thank you.
Jason: Yeah, so I think we want to approach this as an ecosystem player. That’s the first thing, and it goes back to that Bill Gates quote: “You’re not really a platform unless you create a lot more value for everybody else than you create for yourself.” I think that’s a very good principle for how we approach this space. The other part is that even from a technical standpoint, the real beauty of the reasoning and agentic capabilities expressing themselves this way, as an interesting engineering and science problem, is not whether everything happens inside ChatGPT itself, but whether it can execute an indeterminate number of transactions with an indeterminate number of commercial players, right? That’s a much more interesting technical achievement. I think people are excited to build on that, and that is partly what the agentic commerce protocol is about. Those are probably two strong indicators of how we think about this.
Vivek: One more?
Audience Question: I just wanted to get your thoughts on the application versus the model layer, Cursor versus Claude. Where does the value come from, and do you see OpenAI moving up the stack?
Jason: I think you can just look at our actions here with Cursor: we’ve decided to partner with them because they’ve done a very good job executing in the application space. Anthropic is probably taking a slightly different view in terms of how far they want to go into the application space. It might just be that it’s not always a matter of having permanently decided on one particular type of approach.
It’s just that here you have this company with an amazing founder in Michael, and they’ve done an excellent job executing. They’ve cracked something in delighting engineers, so it’s a good partnership and a strong one that we’re pretty happy with. When it comes to the coding capabilities themselves, what’s interesting for our own research is how they advance that research: the ability to have automated software engineering is going to increase the rate at which you can run experiments and do the AI research itself, and that is the core of our interest.
Vivek: That’s great. Maybe just to wrap everything up: if you’re sitting here one year from now at our next IA Summit in 2026, what’s one thing you’d be excited for OpenAI to have accomplished over those 12 months, given everything OpenAI is doing right now?
Jason: Man, that’s such an interesting question. There are a bunch of things on the model side I could pick, but you could just look at our current activities, extrapolate them out over 12 months, and say, “We just continue to deliver on those things,” and that would still be exciting, right? So, we’ve got agentic commerce, and if that really works, there’ll be a decent ecosystem of people building on top of it and a lot of additional opportunity.
We’ve got a new video platform that is intended to help creators monetize, too, so there’s a lot of potential opportunity there for lots of other people. The third is that we just announced a bunch of partnerships to build more compute, with a bunch of players including Microsoft and Oracle. That’s also going to provide a lot of opportunity, both for those partners and for the people who are going to use that compute. So I think just executing on those things over the next 12 to 18 months, and seeing what everybody else does with them, is the thing that’s going to be really interesting and fun.
Vivek: It’s going to be really fun next year. Well, Jason, thank you so much. Let’s all give a big round of applause.
Jason: Thank you.
Vivek: Thanks.
