The Infrastructure of Intelligence: Inside Crusoe’s AI Factory in Texas

 

In this episode of Founded & Funded, Ben Gilbert, co-host of the Acquired podcast, sits down with Chase Lochmiller, co-founder and CEO of Crusoe, the company building what it calls AI factories, including its massive campus in Abilene, Texas, designed to power this new era of intelligence.

In this conversation, Ben and Chase explore the physical reality behind today’s AI revolution. Why modern AI workloads demand entirely new infrastructure. How energy has become the primary bottleneck to scaling intelligence. What it takes to compress multi-year building timelines into months. And how Crusoe’s energy-first philosophy, from capturing flared methane to siting facilities near abundant wind power, shaped its path to building one of the world’s largest AI computing campuses.

This is a must-watch for anyone building in AI or rethinking infrastructure for the next era of intelligence.

Listen on Spotify, Apple, and Amazon | Watch on YouTube.


This transcript was automatically generated and edited for clarity.

Ben Gilbert: So, I hear you’re building a really, really big AI data center in Abilene, Texas.

Chase Lochmiller: This is accurate. We have a project in Abilene, Texas, that we’re doing with Oracle. This has been a very special project to work on because it’s really been reinventing a lot of the infrastructure layers of computing that are really the base substrate that enables all of this innovation and enables innovators to do their life’s work.

And we’ve really had to rethink the data… We don’t even really call them data centers; we call them AI factories. These are the factories of intelligence because we think about it as one coherent cluster of computing. When you look at it from an aerial view, it almost looks a bit like a motherboard when you’re sort of looking down on it, because there are these giant clusters that are all interconnected and designed in a way so that it can think as one coherent, giant brain.

But the scale of the infrastructure is just really dramatic, and it’s a huge shift from legacy web applications to the energy needs and the infrastructure needs to support intelligence at scale. Maybe to give you a couple quick tidbits about this site, so there’s about 1.2 gigawatts of power that powers the site.

Ben Gilbert: Can you help us understand that compared to historical norms?

Chase Lochmiller: Sure. So, I’m from Denver, so Denver runs on maybe a little bit less than 1.2 gigawatts. It’s about the power of Denver to power this data center.

If you look at Northern Virginia, which I think many people would consider the center of the world for data centers, this is where the bulk of the internet runs. At the end of ’24, JLL published a report stating that there are about 4.5 gigawatts of total capacity in Northern Virginia. So, this one site is about a quarter of that.
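As a quick sanity check on that "quarter" figure (both numbers are the approximate ones quoted in the conversation, not official capacity data):

```python
# Approximate figures quoted in the conversation, not official capacity data.
site_gw = 1.2   # Abilene campus power
nova_gw = 4.5   # Northern Virginia total capacity per the JLL report cited above

share = site_gw / nova_gw
print(f"{share:.0%}")  # 27% — roughly a quarter
```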

Ben Gilbert: And that’s like all the hyperscaler clouds have very large presences in Northern Virginia.

Chase Lochmiller: Exactly. Yeah. And I think that’s sort of the evolution that we’re seeing. I mean, Sam recently just announced that his big KPI that he’s targeting his team for is basically one gigawatt per week. So, it’s basically one of these projects per week that would be delivered. It’s like total insanity. And 250 gigawatts by 2030.

And just maybe speaking about a bit of what it takes to make this happen, right? I think there’s a lot of talk about all the jobs that AI is going to take. We’re creating just an insane amount of jobs. It’s wild. We have 7,000 people on site working on this every single day. And this is in a city of… Abilene, Texas is a city of 120,000 people. So, having 7,000 people working on this one project is a tremendous amount of labor, and it’s a lot of blue-collar trade work. It’s electricians, it’s plumbers, it’s construction workers that are making all of this infrastructure happen.

Ben Gilbert: So, what we saw here is two buildings. And John, could you loop this video again? Across the street, there are six more of these, is that right?

Chase Lochmiller: Yeah, you can kind of see them in the background there. So, basically the first two buildings we started in June of 2024, and-

Ben Gilbert: That was dirt in June of ’24.

Chase Lochmiller: It was like dirt and mesquite trees. And then actually behind it, you see the six buildings going up behind it. Those started in February. So, the whole project has moved very, very quickly. I think one of the reasons we had the opportunity to do this is that we came up with a bunch of creative ways to really accelerate it. There was actually an RFP that went out for the initial 100 megawatts, which is going to be one of these buildings. And the next-fastest bid to make this happen was two and a half years. And they called me, they were like… I was the 35th person. I was the last person that anybody called to do this.

Ben Gilbert: Crusoe was a seven-year-old company. You weren’t a startup.

Chase Lochmiller: Yes, totally. And they were like, “Do you think you could do this in a year?” I was like, “Yeah, for sure. Absolutely.” But we had to do a lot of creative things. And I think part of it was the fact that I had a lot of great data center insiders on my team, but I was very much an outsider. I had never built anything of this sort of scale. And I was really coming at it from building out large GPU clusters that we’d been doing for our Crusoe cloud business, and really having this framework of thinking through, “Okay, what does it take to build out a large-scale coherent cluster? And what is the actual building design, the cooling design, the power design that you would want to have to support that at a much bigger scale?”

And that led us to all these different modular optimizations in terms of how we brought this infrastructure to life. We actually stood up a whole… The journey of an entrepreneur is so funny because you end up doing all these things that you wouldn’t have expected yourself to do when you started.

We have a whole manufacturing arm at Crusoe called Crusoe Industries, where we have a factory that makes electrical equipment for data centers. And the reason we did it was so that we could control our own supply chain, and we could control time-to-market for a lot of these things. So, to give you an example, there was a critical component called a power distribution center, which takes the medium-voltage power coming into the data center and distributes it to all these low-voltage transformers. When we went out and requested bids from all the different suppliers in the industry, the fastest offer we had was a hundred weeks. And we were like, “I committed to 12 months, so a hundred weeks isn’t going to work.”

Ben Gilbert: And a hundred weeks, that’s like three or four big foundational model releases for Oracle’s customers.

Chase Lochmiller: That’s not going to work. So, we figured out how to make it ourselves, and we can make it in 20 weeks. And so it was a lot of different modular components that we were basically manufacturing off-site.

What you see is these big buildings, but a lot of the guts of the data center, the electrical components, the switchgear, the low-voltage UPS, the RPPs, the hot aisle containment systems, these are all modularized into these data center Lego blocks. We actually build those off-site in a controlled manufacturing environment, and then we bring them to the site inside this big building and assemble them. And it’s a lot faster to assemble those modules on-site than having to build everything from scratch on-site, essentially.

Ben Gilbert: So, just to make everyone in the room aware, a lot of the AI applications you are using, when you go kick off some interaction with image generation or a chatbot, are happening right here in this building. And it’s pretty recent that this has been full of GPUs and actually operating.

Can you take us through what it actually takes, the inputs to a site like this, and how they’re different than the old world of building sort of classic data centers versus the new world of AI factories?

Chase Lochmiller: Yeah. I kind of touched on this, but I think just from a very first principle basis, the number one thing is we think about this as one giant, coherent cluster, and then you sort of end up building and designing around that.

Ben Gilbert: The data center is the computer, as Jensen would say.

Chase Lochmiller: Exactly, the data center is the computer. And because of that, you see that central core, that T in the center of the four wings, that’s where all of the network and storage lives. And then it distributes out to each of those four wings, where you have these giant clusters of liquid-cooled Blackwell GB200 NVL72 racks. And on the perimeter, you see this strip, it kind of looks like what we were sort of talking about-

Ben Gilbert: It looks like RAM.

Chase Lochmiller: It looks like memory. Yeah, yeah. But it’s actually, they’re chillers, it’s air-cooled chillers that are cooling the water. So, we have a giant water loop. There’s a million gallons of water per building that are cooling these high-density GPU racks.

And while these buildings look quite large, it’s a way denser configuration than a traditional web cloud data center. For this amount of capacity, an AWS data center serving EC2 or something like that would probably be three to four times bigger in terms of square footage. So, each of those buildings is about 500,000 square feet; it would probably be 1.5 to 2 million square feet if it were a traditional web data center.
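A quick back-of-the-envelope check of the density comparison above (the square-footage figures are the approximate ones quoted in the conversation):

```python
# Approximate square-footage figures quoted in the conversation.
ai_building_sqft = 500_000      # one Abilene AI factory building
web_dc_low_sqft = 1_500_000     # equivalent traditional web data center, low estimate
web_dc_high_sqft = 2_000_000    # high estimate

# Density ratio: how many times larger the traditional build would need to be
print(web_dc_low_sqft / ai_building_sqft)   # 3.0
print(web_dc_high_sqft / ai_building_sqft)  # 4.0
```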

Ben Gilbert: And so obviously, this increased power need has to come from somewhere. Where do you source power from?

Chase Lochmiller: Power is definitely the key bottleneck in a lot of this. And I think we’ve seen this evolution of scaling laws and scaling AI infrastructure, and it has very rapidly saturated the infrastructure and the energy we have to support computing. So, that’s led us to a place where we fundamentally just need a lot more new energy generation, and we need a lot more new data center development.

Crusoe’s always really taken this very energy-first approach to computing. It’s what led us to build in Abilene, Texas. Abilene was not a data center market before we put a shovel in the ground in Abilene, and now the world knows about Abilene. But what brought us there is that it’s an area of West Texas with very abundant wind energy. It’s one of the most consistently windy areas of the country. And what had happened was a lot of wind developers had built out these large-scale wind farms, and they were having to curtail because power prices were going negative. They were getting paid these production tax credits, which only last for 10 years, and at the end of the 10 years, you’re subject to economic curtailment or face negatively priced power. So, it wasn’t a great outcome for these renewable energy developers.

Ben Gilbert: They have wind turbines that are sitting there that they’re intentionally not letting run?

Chase Lochmiller: Yeah. You’ll drive by it on a windy day, and you’ll see this wind turbine not spinning. You’re like, “What’s going on? Is this thing broken?” But it’s actually that they’ve turned it off because of economic curtailment. So, there’s no marginal bid for that power. And so this is obviously… AI needs a lot of energy. They had a lot of energy. And it made sense for us to basically, instead of trying to build the next data center in Northern Virginia, focus on bringing the demand for compute to areas where we can access low-cost, abundant energy. So, that was one of the big drivers of us coming to Abilene. We’ve done other stuff across Texas-

Ben Gilbert: That happens to work well in AI in a way that it wouldn’t have worked well in the previous internet era, right?

Chase Lochmiller: Yeah, that’s right. I think certainly these mega clusters are not super sensitive from a latency perspective. Certainly, for training, if you’re training a new model or fine-tuning something or post-training, you’re far less concerned about adding 10, 20, 30 milliseconds of latency to get to the data center. And then even for most inference applications, frankly, you don’t really care about that latency.

So, one of the beauties of AI from an infrastructure standpoint is it’s far more agnostic as far as where the workload’s actually running. There’s certainly applications that you want to be very low latency. I don’t think anybody wants their self-driving car to be running on a cloud data center network-

Ben Gilbert: Right. But if I’m going to go train GPT-6 for six months…

Chase Lochmiller: Exactly.

Ben Gilbert: I don’t care where it trains.

Chase Lochmiller: You don’t care. And even for all these chain-of-thought reasoning models, where you’re doing test-time compute scaling, the model’s really thinking about the answer before it gets back to you. The response time is on the order of seconds, minutes, days, weeks. It could be a very, very long period of time. Adding 30 milliseconds is irrelevant. It just doesn’t matter at all.

That’s set up this entirely new framework for how we think about the compute infrastructure to support AI in the future. And it’s very much in line with Crusoe’s philosophy of energy-first. There’s going to be large AI factories built in areas where we can access abundant energy resources. Abilene’s a great initial application of this.

We’ve announced a project that we’re doing in Wyoming that has initially 1.8 gigawatts of power. It will scale to 10 gigawatts of power. So, again, this is two New York Cities or something like that. It’s a ton of power, and I think there’s going to be many of these sorts of facilities that are in these naturally energy-rich areas.

Ben Gilbert: I’d love to talk about your entrepreneurial journey a little bit because I think everyone in the room who doesn’t know much about Crusoe is probably trying to figure out how you, a seven-year-old company, are building what I think is currently operating the world’s largest and most power-intensive AI data center. Is that fair to say in the current world?

Chase Lochmiller: I think that’s right. I don’t want to go on record, and Elon get mad at me or something, but…

Ben Gilbert: A very large-

Chase Lochmiller: It’s big.

Ben Gilbert: Before you were doing this data center business that you have, you were doing Crusoe Cloud, and you currently do both of those. You had several other iterations of the business, too. Can you take us to… If I’m a founder sitting in the room starting a business, what unknown steps may be on the journey ahead of me?

Chase Lochmiller: Yeah. So, my background — I was working in AI research for the financial industry, where I was actually a quant portfolio manager for about a decade, building AI and machine learning models to forecast security returns. We were historically using a lot of classical machine learning techniques, and then doing a lot of feature engineering on these economic relationships between stocks or commodities or whatnot. And then we’d sort of train these models.

And then deep learning appeared, and AlexNet was published, and we shifted a lot of our workloads from classical machine learning with heavily engineered features to deep learning models, where the model was actually discovering the features for us. That shift took us from training on CPUs to training on GPUs and consuming a lot of computing power. And I had this firm belief at that point that intelligence was going to be embedded in every aspect of the economy. Everything could be made better by having silicon-based intelligence making it more optimized, more accelerated, more improved, compared to just humans doing it.

So, when I set out to build Crusoe, my big ambition and goal was to really build this AI cloud platform, and I recognized early on how important energy was and the scaling of that. And so when we first started, we were actually capturing this waste methane that was being flared in the oil field. It was basically a free energy resource that was being wasted by another industry, and we were capturing it and utilizing it to generate our own power to power these mobile and modular data centers. Our first off-take to monetize it was actually Bitcoin mining.

Ben Gilbert: It’s the craziest thing. You had these effectively shipping containers that they would fill with GPUs, and you’d drop it in an oil field and power it with flared methane.

Chase Lochmiller: Yeah, it was like…

Ben Gilbert: But you learn a lot doing it.

Chase Lochmiller: But I learned a lot. Exactly. My co-founder came from this very energy background, and I’d never even been to an oil field, and I was like, “Wow, they’re just burning this stuff all day, every day. It’s just being lit on fire.” And so we sort of found this unique and interesting opportunity to basically turn otherwise wasted, stranded energy into money via computing.

And I think that energy-first mentality has continued to persist at Crusoe. I think what led us to getting into the data center design, engineering, and development business was that as we were growing and scaling Crusoe Cloud, I spent a lot of time with the Nvidia team going through the roadmaps, looking at the architecture of chips, looking at B100s, A100s, and looking at future generations of chips, saying, “Okay, this generation’s 200 watts per chip. The next one’s 300 watts per chip. Then you’re going to do something that’s 600 or 700 watts per chip. Wow, this is really accelerating in terms of the power density being consumed by these chips.”

And a similar thing had actually played out in the Bitcoin space, where people were initially mining Bitcoin on their CPUs. When Bitcoin was this new network, you could do mining on your laptop; you could get 50 Bitcoin from mining a block on your laptop. And then people shifted to GPUs, and it got more competitive, and difficulty went up. And then people were actually doing this in tier four data centers that have 99.999% reliability.

Turns out you don’t need that for mining Bitcoin. All of the added CapEx to build a “five nines” of reliability data center was completely unnecessary for Bitcoin. And so people started building ASICs, Application-Specific Integrated Circuits, just to do this SHA-256d hashing function, and they started moving them into ultra low-cost data center infrastructure. Where a tier four data center may cost you $10, $15 million a megawatt, a Bitcoin data center is essentially a chicken coop with a power plug, passively air-cooled, where you don’t care about cleanliness. You try to get that down to $200,000 per megawatt, maybe even cheaper. So, you’re talking about a massive 98%, 99% reduction in overall CapEx per megawatt for this specific use case.
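The CapEx reduction described above can be sanity-checked with the rough per-megawatt figures quoted in the conversation (approximate numbers, not vendor pricing):

```python
# Approximate per-megawatt CapEx figures quoted in the conversation.
tier4_capex_per_mw = 10_000_000   # $10M/MW, low end quoted for a "five nines" tier four build
bitcoin_capex_per_mw = 200_000    # ~$200k/MW for a bare-bones Bitcoin facility

# Fraction of CapEx eliminated by dropping the reliability infrastructure
reduction = 1 - bitcoin_capex_per_mw / tier4_capex_per_mw
print(f"{reduction:.0%}")  # 98%
```

At the $15 million-per-megawatt end of the quoted range, the reduction works out to nearly 99%, matching the "98%, 99%" figure in the conversation.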

Well, I sort of felt like that thing was going to happen in AI as well, just watching the chips evolve. Cooling architectures are different. Reliability concerns are different. We’re seeing this massive shift in the industry away from five nines of reliability. Because you don’t need it, right? It’s like if you can have some amount of reliability, call it three nines of reliability, that’s plenty to support a training workload. It’s probably somewhere in between where Bitcoin is and where serving webpages is. But it’s definitely a new and unique application that requires new and unique infrastructure to support it, ranging from everything from the building design, the cooling design, the mechanical, electrical, the whole thing. It’s just fundamentally a new thing.

And I think the more people get in their heads that this is a new thing that requires new infrastructure, I think that’s the massive opportunity that we’re focused on going after.

Ben Gilbert: Awesome. Well, I always like talking to Chase because, to me, AI is software. It’s me interacting with Chat, me interacting with Claude Code, me interacting with image generation, video generation, and my head never goes to, “AI is a completely rethought building with 7,000 people and moving dirt and drawing on new power sources.” And my eyes are always opened, when I talk to you, to just how much physicality there is to AI.

Chase Lochmiller: Yeah, I mean, it’s a cool thing for people to be exposed to, but our goal with our cloud platform is really to abstract away all of that complexity. It’s all behind the scenes. Our goal is to build these factories that can convert electrons into tokens, so people can just interact directly with our core services, whether it’s managed high-performance GPU virtualization, managed Kubernetes, or whatever core primitives you’re looking for. We deliver managed infrastructure to you, and you’re abstracted away from all of this complexity.

If you want to host a model and run a high-performance managed inference service, you can do that on Crusoe Cloud. So, there’s a lot of cool ways you can interact with us, and we’ve tried to create different solutions for different participants across the AI industry.

Ben Gilbert: Awesome. All right. Give it up for Chase.

 

Related Insights

    Founders Who Built Until the World Was Ready: Quadrupling down on Temporal
    Can We Trust AI? The Future of Verified Reasoning in High-Stakes Systems
    The New Rules for Fundraising in the AI Era
