In this week’s IA40 Spotlight Episode, Investor Sabrina Wu talks with Credo AI Founder and CEO Navrina Singh. Founded in 2020, Credo’s intelligent responsible AI governance platform helps companies minimize AI-related risk by ensuring their AI is fair, compliant, secure, auditable, and human-centered. The company announced a $12.8M Series A last summer to continue its mission of empowering every organization in the world to create AI with the highest ethical standards.
Navrina and Sabrina dive into this world of governance and risk assessment and why Navrina wanted to make governance front and center rather than an afterthought in the quickly evolving world of AI. Navrina is not shy about what she thinks we should all be worried about when it comes to the capabilities of LLMs and generative AI, and she shares her passion for an "AI-first, ethics-forward" approach to artificial intelligence. These two discuss the different compliance and guardrail needs for companies within the generative AI ecosystem and so much more.
This transcript was automatically generated and edited for clarity.
Sabrina: Hi everyone. My name is Sabrina Wu, and I am one of the investors here at Madrona. I’m excited to be here today with Navrina Singh, who’s the CEO and founder of Credo AI. Navrina, welcome to the Founded and Funded podcast.
Navrina: Thank you so much for having me, Sabrina. Looking forward to the conversation.
Sabrina: So Navrina, perhaps we could start by having you share a little background on Credo and the founding story. I’m curious what got you excited to work on this problem of AI governance.
Navrina: Absolutely, Sabrina. It's interesting: we are actually going to be celebrating our three-year anniversary next week, so we've come a long way in the past three years. I started Credo AI after spending almost 20 years building products in mobile, SaaS, and AI at some large companies like Microsoft and Qualcomm. And I would say in the past decade, this whole notion of AI safety took on a very different meaning for me.
I was running a team focused on building robotics applications at one of those companies, and as we saw human-machine interactions in a manufacturing plant, where these robots were working alongside humans, that was really an aha moment for me: how are we ensuring the safety of humans, obviously, but also thinking about environments in which we could control these robotics applications so they don't go unchecked? As my career progressed, moving to cloud and building applications focused on facial recognition, large language models, and NLP systems, and running a conversational AI team at Microsoft, what became very clear was that the same physical safety question was becoming even more critical in the digital world. When you have all these AI systems literally acting as our agents, working alongside us and doing things for us, how are we ensuring that these systems are really serving us and our purpose? So a couple of years ago, we really started to think about whether there is a way to ensure that governance is front and center rather than an afterthought. And six years ago, we really started to dive deeper into how I could bridge this gap, this oversight deficit as I call it, between the technical stakeholders, the consumer, and the policy, governance, and risk teams, to ensure that these AI- and ML-based applications all around us, becoming the fabric of our society and our world, do not go completely unchecked.
For me, that was an idea that I just could not shake off. I really needed to solve it, because especially in the AI space, there's a need for multiple stakeholders to come in and inform how these systems are going to serve us. So that led me to really start looking at the policy and regulatory ecosystem. Is that the reason? Is that going to be the impetus for companies to start taking governance more seriously? And Credo AI was born out of that need: how can we create a multi-stakeholder tool that is not just looking at the technical capabilities of these systems but also at their techno-social capabilities, so that AI and machine learning are serving our purpose?
Sabrina: And I think at Madrona, we also believe that all applications will become intelligent over time. Right? This thesis of taking in data and leveraging that data to make an application more intelligent. But in leveraging data and in using AI and ML, there becomes this potential AI governance problem, kind of what you had just alluded to a little bit there.
We even saw GPT-4 released, and one of the critiques, among the many, many amazing advances that came with it, is how GPT continues to be a black box. Right? And so, Navrina, I'm curious, how exactly do you define responsible AI at Credo? What does that mean to you, and how should companies think about using responsible AI?
Navrina: That's a great question, and I would say it points to one of the biggest barriers to this space growing at the speed at which I would like. The reason is there are multiple terms: AI governance, AI assurance, responsible AI, all being put into this soup, if you will, for companies to figure out. So there is a lack of education. Let me step back and explain what we mean by AI governance. AI governance is literally a discipline and framework, consisting of policy, regulation, company best practices, and sector best practices, that guides the development, procurement, and use of artificial intelligence. And when we think about responsible AI, it is literally the accountability aspect: how do you implement AI governance in a way that you can provide assurance? Assurance that these systems are safe, assurance that these systems are sound, assurance that these systems are effective, assurance that these systems are going to cause very little harm.
And when I say very little, I think we’ve found that no harm is, right now, an aspirational state. So getting to very little harm is certainly something companies are aspiring for. So when you think about AI governance as a discipline, and the output of that is proof that you can trust these AI systems, that entire way of bringing accountability is what we call responsible AI.
Who is accountable? What is that person accountable for in ensuring AI systems actually work in the way that we expect them to? What are the steps we are taking to minimize those intended and unintended consequences? And what are we doing to ensure that everything, whether it's the toolchain, the set of company policies, or the regulatory framework, evolves to manage the risks that these systems are going to present?
And I think that, for us, in this very fast-moving and emerging space of AI governance, it has been critical to bring that focus and education, too.
Sabrina: Maybe we could just double-click on that point. How exactly is Credo solving the problem of AI governance?
Navrina: So Credo AI is AI governance software. It's a SaaS platform that organizations use to bring oversight and accountability to the procurement, development, and deployment of their AI systems.
So what this means is, in our software, we do three things effectively well. The first thing we do is bring in context, and this context can come from new or existing standards like the NIST AI Risk Management Framework (RMF). This context can come from existing or emerging regulations, whether it's the EU AI Act as an emerging regulation or existing regulations like New York City Local Law 144. Or this context could be company policies. Many of the enterprises we work with right now are self-assessing. They're providing proof of governance. So in that spirit, they've created their own set of guardrails and policies that they want to make sure get standardized across all their siloed AI implementations.
So the first thing that Credo does is bring in all this context, standards, regulations, policies, best practices, and we codify them into something called policy packs. You can think about these policy packs as a coming together of the technical and business stakeholders, because we codify them into measures and metrics that you can use for testing your AI systems, but we also bring in process guardrails, which are critical for your policy and governance teams to manage across the organization. So this first stage of bringing in context is really critical. Once Credo AI has codified that context, the next step is the assurance component. How do you actually test the data sets? How do you test the models? How do you test inputs and outputs, which are becoming very critical in generative AI, to ensure that, against whatever you've aligned on in the context, you can actually prove soundness and effectiveness? So our second stage is all about assurance: testing and validation of not only your technical system but also your process. And then the last component, which is super critical, is translation. In translation, we take all the evidence we have gathered from your technical systems and from the processes that exist within your organization, and we convert it into governance artifacts that are easily understandable by different stakeholders. Whether you are looking at risk dashboards for your executive stakeholders, transparency or disclosure reports for your audit teams, impact assessments for a regulator, or just a transparency artifact to prove to consumers that, within that context, as a company, you've done your best.
So, putting it all together, Credo is all about contextual governance. We bring in context, we test against that context, and then we create these multi-stakeholder governance artifacts so that we can bridge this gap, this oversight deficit that has existed between the technical and business stakeholders.
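The three stages Navrina describes, context, assurance, and translation, can be sketched in a few lines of code. This is a purely hypothetical illustration: the policy-pack format, metric names, and thresholds below are invented for the example and are not Credo AI's actual schema or API.

```python
# Hypothetical sketch of contextual governance: codify a policy pack as
# metric thresholds (context), check measured model metrics against it
# (assurance), and summarize the result for non-technical stakeholders
# (translation). All names and thresholds here are invented.

POLICY_PACK = {
    "demographic_parity_difference": {"max": 0.10},  # fairness guardrail
    "false_positive_rate": {"max": 0.05},            # soundness guardrail
    "accuracy": {"min": 0.90},                       # effectiveness guardrail
}

def assure(measured: dict) -> list:
    """Assurance stage: test each measured metric against the pack."""
    findings = []
    for metric, bounds in POLICY_PACK.items():
        value = measured.get(metric)
        if value is None:
            findings.append((metric, "missing", None))
        elif "max" in bounds and value > bounds["max"]:
            findings.append((metric, "fail", value))
        elif "min" in bounds and value < bounds["min"]:
            findings.append((metric, "fail", value))
        else:
            findings.append((metric, "pass", value))
    return findings

def translate(findings: list) -> str:
    """Translation stage: turn raw evidence into a stakeholder summary."""
    failures = [m for m, status, _ in findings if status != "pass"]
    if not failures:
        return "All guardrails satisfied."
    return "Guardrails needing attention: " + ", ".join(failures)

measured = {"demographic_parity_difference": 0.14,
            "false_positive_rate": 0.03,
            "accuracy": 0.92}
print(translate(assure(measured)))
```

In this toy run, the fairness metric exceeds its threshold, so the summary flags it for the governance team while passing metrics stay quiet; a real platform would attach evidence, owners, and mitigation steps to each finding.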
Sabrina: I'm curious, as it relates to the policy packs, are they transferable across different industries? Do you work with different industries? And are there certain regulations coming out where Credo is more useful today? Or do you see that evolving over time?
And then I have a couple of follow-up questions after that, but maybe we could start with that.
Navrina: Right now, as you can imagine, the sectors where Credo AI is getting a lot of excitement are regulated sectors. And the reason for that is they've been there, they've done that, they've been exposed to risks, and they've had to manage that risk. So our top-performing sectors are financial services, insurance, and HR. And HR has been, I would say, a new addition, especially because of emerging regulations across the globe. Having said that, when we look at the regulated sectors, the reason companies are adopting Credo AI is, one, that they already have a lot of regulations they have to adhere to, not only for old statistical models but now for new machine learning systems.
However, what we are finding, and this is where the excitement for Credo AI just increases exponentially, is that unregulated sectors, whether it is high tech or even government, which, as you can imagine, has a lot of unregulated components, are adopting AI governance because they recognize how crucial trust and transparency are as they start using artificial intelligence. They're also recognizing how critical trust and transparency are for them to win in this age of AI, if they can be proactive about showing, for whatever black box they have, what guardrails were put around that black box. And by the way, it goes way beyond explainability. It's transparency about what guardrails we are putting around these systems, who can potentially be impacted by them, and what I, as a company, have done to reduce those harms. By being very proactive about those governance artifacts, we are finding an uptick in these unregulated sectors around brand management and trust building, because these sectors want to adopt more AI. They want to do it faster, and they want to do it by keeping consumers in the loop about how they're ensuring, at every step of the way, that the harms are limited.
Sabrina: When you talk about explainability, I think one thing that's interesting is being able to understand what data is going into the model and how to evaluate the different data sets. Is Credo evaluating certain types of data, like structured versus unstructured data? How are you thinking about that level of technicality, and how are you helping with explainability?
Navrina: I think this is where I'll share with you what Credo AI is not. And this goes back to a problem of education and a problem of nascency in the market. Credo AI is not an ML ops tool. Many companies have, in the past five to six years, adopted ML ops tools, and those tools are fantastic at helping test, experiment with, develop, and productionize ML models, primarily for developers and technical stakeholders. Many of the ML ops tools are trying to bring in that responsibility layer by doing much more extensive testing and by being very thoughtful about where there could be fairness, security, or reliability issues. The challenge with ML ops tools right now is that it is very difficult for a non-technical stakeholder, if I am a compliance person, a risk person, or a policy person, to understand what those systems are being tested for and what the outputs are. So this is where Credo AI comes in. We really are a bridge between these ML ops tools and the GRC ecosystem, the governance, risk, and compliance ecosystem, so that's an important differentiation to understand. We sit on top of your ML infrastructure, looking across your entire pipeline, your entire AI lifecycle, to figure out where there might be hotspots of risk, hotspots that emerge when measured against the context we've brought in, the policies and the best practices. And then Credo AI is also launching mitigation, where you can take active steps.
Having said that, to address your question a little more specifically: over the past three years, Credo AI has built a strong IP moat where we can tackle both structured and unstructured data extremely well. So, for example, in financial services, which is our top-performing sector, Credo AI right now is being deployed to provide governance for use cases from fraud models to risk-scoring models, to anti-money-laundering models, to credit underwriting models. If you think about the high-tech sector, we are being used extensively for facial recognition systems and speech recognition systems. And in government, where we are getting a lot of excitement, there is a big focus on object detection in the field, so situational awareness systems, but also back office. As a government agency or a government partner, they are buying a lot of commercial third-party AI systems. So Credo AI can also help you with evaluation of third-party AI systems, which you might not even have visibility into.
So how do you create that transparency, which can lead to trust? We do that very effectively across all the sectors. And I know we'll go a little deeper into generative AI and what we are doing there in just a bit. But right now, we've built those capabilities over the past three years; both structured and unstructured data sets and ML systems are a focus for us, and that's where we are seeing the traction.
Sabrina: Is there some way that you think about sourcing the ground truth data? As we think about demographic data in the HR tech use case, is there some data source that you plug into, and how do you think about this evolving over time? How do you continue to source that ground truth data?
Navrina: It's important to understand why customers use Credo AI, and that then addresses the question you just asked me. There are three reasons why companies use Credo AI. First and foremost is to standardize AI governance. Most of the companies we work with are Global 2000s, and as you can imagine, they have very siloed ML implementations, and they're looking for a mechanism by which they can bring in that context and standardize visibility and governance across all those different siloed implementations.
The second reason companies bring in Credo AI is so that they can really look at AI risk and visibility across all those different ML systems. And lastly, they bring in Credo AI to be compliant with existing or emerging regulations.
What we are finding is that in most of these applications, there are two routes we've taken. One is that we source the ground truth for a particular application ourselves. In that case, we've worked with many data vendors to create ground truth data for different applications that we know are going to be pretty big and massive and for which we have a lot of customer demand. On the other side, where a customer is really looking for standardization of AI governance or for compliance, we work with the ground truth data that the company has, and we can test against that. Because, again, they're looking for standardization and regulatory compliance; they're not looking for that independent check where we provide independent data sets for ground truth.
Sabrina: In the compliance and audit use case, is this something that companies are going to have to do year after year? How should they be thinking about this? Is this something they’ll do time and time again, or is it a one-time audit, and then you check the box and you’re done?
Navrina: The companies that think about this as once and done, a checkbox, are already going to fail in the age of AI. The companies we work with right now are very interested in continuous governance, which means: from the onset, as I'm thinking about an ML application, how can I ensure governance throughout that development or procurement process, so that before I put it in production, I have a good handle on potential risks, and once I've put it in production, through the monitoring systems they have, which we connect to, we can ensure continuous governance. Having said that, the regulatory landscape is very fragmented, Sabrina. Right now, most of the upcoming regulations will require, at minimum, an annual audit, an annual compliance requirement. But we are seeing emerging regulations which need that on a quarterly basis. Especially with the speed of advancements we've seen in artificial intelligence, and especially with generative AI, where things are going to change literally on a week-by-week basis, it is not so much about the snapshot governance viewpoint; it is going to be really critical to think about continuous governance, because it only takes that one episode. I always share with my team: AI governance is like that insurance policy you wish you had when you're in that accident. For the companies that say, "Oh, let me just get into that accident and then I'll pay for it," it's too late. Don't wait for that moment for everything to go wrong. Start investing in AI governance, and especially make it front and center, to reap the benefits of AI advancements like generative AI that are coming your way.
Sabrina: I love that analogy around the insurance: you get into that accident and then you wish you had the car insurance. I think this is a good place to pivot into this whole world of generative AI, right? There's been a ton of buzz in the space. I think I read a stat on Crunchbase saying there were something like 110 new deals funded in 2022 that were specifically focused on generative AI, which is crazy. I'm curious, when it comes to generative AI, what are some of the areas where you see more need for AI governance? And I know Credo also recently launched a generative AI trust toolkit. So how does this help intelligent application companies?
Navrina: Yeah, that really came out of a need: all our customers right now want to experiment with generative AI. Most of the companies we work with are not the careless companies. So let me explain how I view this generative AI ecosystem.
You have the extremely cautious, who are banning generative AI. Guess what? They're not going to be successful, because we are already getting reinvented. We got reinvented with GPT-4. So any company that is too cautious, saying, "I'm not going to bring in generative AI," has already lost in this new world. And then you have the careless category, which is the other extreme of the spectrum: let's wait for that accident before I take any action. But by that time, it's already too late. And then there is the central category, which I am super excited about, the clever category. This clever category understands, one, that it's important for them to use and leverage generative AI.
But they're also very careful about bringing in governance alongside it, because they recognize that governance keeping pace with their AI adoption, procurement, and development is what's going to be the path to successful implementation. So, in the past couple of months, we heard a lot from our customers that they want to adopt generative AI and need Credo AI to help them adopt it with confidence. Not necessarily solving all the risks, and all the unknown risks, that generative AI will bring, but at least having a pathway to implementation for these risk profiles.
So the generative AI trust toolkit that we have right now, we are literally building it as we speak with our customers, but it already has four core capabilities. The first capability we've introduced in the generative AI trust toolkit is what we call generative AI policy packs. As you can imagine, there are a lot of concerns around copyright issues and IP infringement issues, so we've been working with multiple legal teams to really dissect what these copyright issues could be. As an example, just this week, the Copyright Office released a statement about how it handles work that contains material generated by AI. They've been very clear that copyright law requires creative contributions from humans to be eligible for copyright protection. However, they've also stated very clearly that they're starting a new initiative to think about this AI-generated content and who owns its copyright. But until that happens, really making sure companies understand and abide by the copyright laws, especially in their data sets, is critical.
So one of the core capabilities in our trust toolkit is a policy pack around copyright infringement where you can surface, and I wouldn't say quickly, there is obviously work involved based on the application, but understand where you stand. For example, we have a copyright policy pack for GitHub Copilot, and we also have one for generative AI, especially content coming from Stable Diffusion. The second category in our trust toolkit is evaluation and testing. What we've done is extend Credo AI Lens, which is our open-source assessment framework, to include increased assessment capabilities for large language models, like toxicity analysis, and this is where we are working with multiple partners on understanding what new kinds of assessment capabilities for LLMs we need to start bringing into our open source.
And then the last two components in our trust toolkit are largely around input-output governance and prompt governance. A lot of our customers right now, in the regulated space, are being clever because they don't want to use LLMs for very high-impact, high-value applications. They're using them for customer success, maybe for marketing. In that scenario, they do want to manage what's happening at the input and what's happening in real time at the output. So we've created filter mechanisms by which they can monitor what's happening at input and output. But we've also launched a very separate toolkit, not part of the Credo AI suite, for prompt governance, so that we can empower end users to be mindful: is this the right prompt that I want to use, or is this going to expose my organization to additional risk?
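The input/output filtering described here can be sketched minimally. This is a hypothetical example, not Credo AI's implementation: the blocked terms are invented, and the toxicity scorer is a keyword stand-in for what would, in practice, be a trained classifier behind an org-specific policy.

```python
# Hypothetical input/output guardrail for an LLM deployment.
# The blocklist and the toxicity scorer are illustrative stand-ins.

BLOCKED_INPUT_TERMS = {"ssn", "password"}  # e.g., keep sensitive data out of prompts
TOXICITY_THRESHOLD = 0.8

def score_toxicity(text: str) -> float:
    """Placeholder scorer; a real system would call a trained classifier."""
    return 0.9 if "hate" in text.lower() else 0.1

def check_input(prompt: str):
    """Screen the prompt before it ever reaches the model."""
    hits = [t for t in BLOCKED_INPUT_TERMS if t in prompt.lower()]
    return ("block", hits) if hits else ("allow", [])

def check_output(completion: str):
    """Screen the model's completion before it reaches the user."""
    score = score_toxicity(completion)
    return ("block", score) if score >= TOXICITY_THRESHOLD else ("allow", score)

print(check_input("What is my password reset flow?"))  # caught at the input stage
print(check_output("Here is a friendly summary."))     # passes the output stage
```

The design point is that both checks sit outside the model: the same filters work whether the LLM is retrained, swapped, or consumed as a third-party API, which matters for the lower-stakes customer-success and marketing use cases mentioned above.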
I'm very excited about the trust toolkit, but I do want to caveat it. We should all be very worried, because we don't understand the risks of generative AI and large language models. If anyone claims they understand them, they're completely misinformed, and I would be very concerned about it. The second thing is the power of this technology. When I think about things that keep me up at night, LLMs and generative AI literally have the power to either make our society or completely break it. Misinformation, security threats at large. We don't know how to solve it, and Credo AI is not claiming we know how to solve it, but this is where we are actually going to be launching really exciting initiatives soon. I can't share all the details, but: how do we bring in the ecosystem to really enable understanding of these unknown risks that these large language models are going to bring?
And then thirdly, companies should be intentional about whether they can create test beds within their organization and, within that test bed, experiment with generative AI capabilities alongside governance capabilities, before they open that test bed up and take generative AI to the full organization. And that's where we come in. We are very excited about what we call generative AI test beds within our customer implementations, where we are testing out governance, as we speak, around the unknown risks that these systems bring.
Sabrina: Wow, a lot to unpack. A lot of exciting offerings from the generative AI trust toolkit, and I totally agree with you in terms of making sure that people are using responsible AI, using large language models in ethical and responsible ways. I think one of the critiques is that these LLMs may output malicious or just factually incorrect information and can guide people down potentially more dangerous paths. One thing I'm always interested in trying to better understand is whether there are certain guardrails that companies can put in place to make sure these things don't happen. And I think you just alluded to one, the test bed example. So I'd love to understand more about other potential ways that companies can use Credo to put these guardrails in place. Maybe it's more from a governance standpoint, saying, "Hey, are you making sure that you're checking all of these things when you should be?" Or potentially it's, "Hey, are we testing the model? Are we making sure that we understand what it is outputting before we take it out to the application use cases?"
It's certainly a question and a big risk in my mind with the technology, right? And we don't want to get to a place where the government just shuts down the use of large language models because it becomes so dangerous and because it is so widely accessible in the public's hands. Just curious how you're thinking about other guardrails, whether companies implement them using Credo or otherwise.
Navrina: This is where our policy packs are literally, I would say, the industry leader right now in putting in those guardrails. When you have an LLM, maybe you've retrained it on your corpus of data, or it's basically just searching over your corpus of data, there's a little more relief, because you can point to factual information. The propensity of these LLMs to hallucinate decreases if you put guardrails around what they can draw on: if the model can only go through this customer data, which my company owns, and use just that corpus of data, those guardrails become really critical. And this is where Credo AI policy packs, for copyright and for guardrails on what corpus of data your systems should be using, become really critical. And then input-output governance, as I was mentioning, becomes really critical.
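The corpus-restriction guardrail Navrina describes, only letting the model draw on company-owned data, can be sketched as an allowlist check in front of the retrieval step. The corpus names and documents below are invented for illustration; this is not any vendor's actual mechanism.

```python
# Hypothetical guardrail restricting an LLM's retrieval step to
# approved, company-owned corpora. All names and documents are invented.

APPROVED_CORPORA = {"customer_docs", "product_manuals"}

CORPORA = {
    "customer_docs": ["Refunds are processed within 5 business days."],
    "public_web": ["Unverified claim scraped from the open internet."],
}

def retrieve(query: str, corpus: str) -> list:
    """Only retrieve from corpora the company owns and has approved."""
    if corpus not in APPROVED_CORPORA:
        raise PermissionError(f"Corpus '{corpus}' is outside the guardrail.")
    return [doc for doc in CORPORA.get(corpus, []) if query.lower() in doc.lower()]

print(retrieve("refunds", "customer_docs"))  # allowed: company-owned corpus
```

Grounding generation in an approved corpus like this is one reason the hallucination risk drops: every answer can be traced back to a document the company owns, and any request that reaches outside the allowlist fails loudly instead of silently pulling in unvetted data.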
Recently I was having a conversation, and I'm not going to name this company, because I think they're doing phenomenal work, but there was a statement made by an individual from this organization saying that we should not be overthinking the risks of generative AI systems, but should just launch them in the market, let the world magically converge on what the risks are, and then, magically, we will arrive at solutions.
And I think that is the kind of mindset that's going to take us down that road of AI being completely unmanaged. That's what keeps me up at night: when you have so much belief in technology that you turn a blind eye to managing risk. And we do have a lot of people in this ecosystem right now who have that mindset, the careless category I was mentioning. So I think this is where education becomes really critical, because as I have seen, and have been exposed to in the past six weeks, the capacity within regulators right now is very limited. They are not able to keep up with the advancements in artificial intelligence.
They're really looking to technology companies like us to work with them and really think through these guardrails. So either we are going to run into a future scenario where there's heavy regulation, nothing works, and technology is very limited, or we are going to run into a situation where there is no thinking around these guardrails and we are going to see mass national security threats and misinformation at large.
And I'm trying to figure out right now, with the ecosystem, what the clever way to implement this is. I think one of the cleverest ways is public-private partnership, because there's an opportunity for us, for example in red teaming, to bring in more policymakers, bring in impacted communities, and make sure that the outputs of those red-teaming exercises are shared: what potential harms have been uncovered, and what commitments a company can make to ensure that harm does not happen.
Or if you think about system cards, I'm excited for ChatGPT as well as GPT-4 to release their system cards. But there are a lot of questions, and I think the mechanism by which those questions about these system cards can be answered is going to be really critical. Or the work being done by Hugging Face, kudos to them, around the RAIL license. We are partners in their RAIL initiative, a responsible AI license, which is very prescriptive about where an AI or machine learning system can and cannot be used. I think the area and opportunity we are getting into is being very clear about the gap between the intent of an application and its actual use. Bringing transparency between those two is going to be a lot of responsibility for the developers building it, but also for the enterprises consuming it. And Credo AI has such a unique role to play in that as an independent third party bringing this transparency. That's the world we are getting into right now.
Sabrina: And I wonder if there are other ways that we as a collective community, as investors investing in this space and also as company builders, can continue to educate the ecosystem on AI governance, what that means, and how we should collectively make sure that we're implementing these responsible AI systems in an ethical way.
Navrina: So Sabrina, there are actually a lot of great initiatives being worked on. We are an active partner of the Data & Trust Alliance, which was started about a year and a half to two years back by the founder of General Catalyst, and it has really attracted some of the largest companies to this partnership.
And we worked with the Data & Trust Alliance on an assessment. So as investors are looking at AI companies, whether you are a VC looking to invest, part of a corporate venture group, or part of an M&A team doing due diligence on a company, what are the questions you should be asking to really unpack what kind of artificial intelligence is being used? Where are they getting their data sets from? How are they managing risk? If they’re not managing risk, why not? What are the applications, and what is their risk categorization for each application?
The hype in generative AI is exciting. I’m excited about the productivity gains. I’m super excited about the augmentation and creativity it’s already unleashing for me and my eight-year-old daughter. By the way, she’s a huge fan of ChatGPT. She loves writing songs. She’s a fan of Taylor Swift too, so she mixes the two. So I see that. But the issue is really making sure we are being very intentional about when things go wrong. When things are going right, phenomenal. It’s when things go wrong. So I highly encourage you to look at the Data & Trust Alliance.
Investors for Sustainable Development is another initiative. Its investor members have a total of about $2 trillion in assets under management, and they are asking the same questions: How do we think about AI companies perhaps contributing to misinformation? How do we think about an investment? Can we create disclosure reporting for public companies as part of their 10-Ks? Is there a way we can ask them to report on their responsible procurement, development, and use of artificial intelligence? And more to come on that, because right now we are working pretty hard on responsible AI disclosures, similar to carbon footprint disclosures. So we’ll be able to share more with you by the end of this year on an initiative that is gaining a lot of steam to have public companies actually talk about this in their financial disclosures. Good work is happening, more is needed, and this is where Credo AI can really work with you and the rest of the ecosystem to bring that education.
Sabrina: I’m excited to check out those different initiatives and continue partnering with Credo. Just to shift a little, Navrina: you’re also a member of the National AI Advisory Committee, and as part of that, to my understanding, you advise the president on national AI initiatives. As we were just chatting about, this is extremely important as it relates to the adoption of new regulations and standards. What are some of the initiatives you’re advising on? And do you have any predictions as to how you see the AI governance landscape shifting in the years ahead?
Navrina: So Sabrina, just FYI, and full disclosure — I’m here in my personal capacity, and what I’m going to share next is not a representation of what’s happening at NAIAC. A couple of things I can share, though, and this is all public information. First and foremost, NAIAC really emerged from this need: when we look at the United States globally, we are not the regulators. We are the innovators of the world. Europe is the regulator of the world, if you will. But when we have such a powerful technology, how do we think about a federal, state-level, and local ecosystem to enable policymaking, to enable a better understanding of these systems, and to bring the private and public sectors together? That was the intention behind NAIAC. Having said that, as I mentioned, I can’t talk to the specific things we’ve been working on. Putting NAIAC aside, I do want to give you a little bit of a frame of reference. As Credo AI, and myself personally, I’ve been very actively involved with global regulations: whether it is with the European Commission on the EU AI Act, with the UK on their AI assurance framework, with Singapore on their phenomenal model governance work, or with Canada on their recently launched AI and data work. Having said that, a couple of things we are seeing. We are going to see more regulations, and those regulations are going to be contextual. What I mean by that: in the United States, as an example, New York City has been at the forefront with Local Law 144, which is all about ensuring that if any company is procuring, using, or building automated employment decision-making tools, it has to provide a fairness audit for those. That comes due in the next month, so April 16th is going to be an interesting day to see which enterprises take that responsibility very seriously, and which enterprises are bailing on that responsibility.
And the question then is enforcement: how is that going to be enforced? So first and foremost, we are going to continue to see a lot of state and local regulations.
On the global stage, I think the EU AI Act is going to fundamentally transform how enterprises work. If you thought GDPR was groundbreaking, think of the EU AI Act as 10x that.
So we are going to see the Brussels Effect at its best in the next year. The EU AI Act is going to go into effect this year, and it’s going to be enforced over the next two years. So this is the moment companies have to start deeply thinking about how they operate in Europe. Having said that, there is a bit of a curveball that was thrown at the regulators because of generative AI. Right now, there’s an active debate in the European Commission around what the EU AI Act covers, which is general-purpose AI systems, and whether all generative AI falls under general-purpose AI systems. And there’s active lobbying, as you can imagine, from some of the larger, powerful big tech companies to avoid generative AI being clubbed into that category, because there are a lot of unknowns in generative AI.
So what we are going to see this year is a very interesting policy landscape, which needs that capacity building to come from the private sector. But this is also going to be a really critical foundation for how we are going to govern generative AI and how we are going to keep stakeholders accountable for it.
Sabrina: Do you think there are ways that enterprise companies can start getting prepared for this?
Navrina: First and foremost, I think the C-level really needs to acknowledge that they have already been reinvented, yesterday. Once they acknowledge that, they have to really figure out: “Okay, if I am going to be this new organization with new kinds of AI capabilities in the future, do I want to take the careless approach, the clever approach, or the cautious approach?” What is going to be really critical, and this is a big part of the work that I do in addition to selling the Credo AI product, is sitting down with C-level executives and honing in on why AI governance needs to be an enterprise priority, similar to cybersecurity, similar to privacy. We’ve learned a lot of lessons in cybersecurity and privacy. So how does AI governance become an enterprise priority, why do you need to do that, and how do you adopt AI with confidence? It is less about regulation and trying to be compliant with it. Right now, it’s more about: how can I be competitive in this age of AI, how can I bring in new AI technologies, and how can I have a good understanding of what the potential risks can be? Managing regulatory compliance and managing brand risk comes a little bit secondary right now. It’s literally: do you want to compete in this new age of AI or not?
Sabrina: I think that if you’re an enterprise company not thinking about leveraging generative AI, or AI in some way, it’s going to be a very tough couple of quarters and years ahead. Just to wrap up here, I have three final lightning-round questions, which we ask all of our IA40 Spotlight guests. The first question: aside from your own company, what startup are you most excited about in the intelligent application space, and why?
Navrina: I would say that I am a big fan of the work companies like OpenAI have done. This whole notion of a co-pilot, someone who is with you wherever you are working and augmenting your work, is something I get really excited about, especially the ease of use.
Sabrina: Yeah, I love the notion of a co-pilot, right? It’s the ability to democratize AI and allow people who may not have a technical understanding of what’s going on in the backend to really be able to use and leverage the application. Okay, second question. Outside of enabling and applying AI to solve real-world challenges, what do you think is going to be the next greatest source of technological disruption in the next five years?
Navrina: Wow. Right now, my head and brain are literally all about artificial intelligence. The thing that keeps me up at night, as I mentioned, is really thinking about whether we will have a future that we are proud of or not. So I spend a lot of time thinking about climate companies, sustainability companies, and especially how the AI and climate worlds are going to come together to ensure that, one, we have a planet we can live on, and two, a world we are proud of, one that is not fragmented by misinformation and the harms that AI can cause.
Sabrina: Third question. What is the most important lesson that you have learned over your startup journey?
Navrina: Wow. I’ve learned so many lessons, but the one that was shared very early on by one of my mentors 20 years ago holds even more importance for me now. He would always say that a good idea is worth nothing without great execution. And I would say, in my past three years with my first startup: all things being equal, the fastest company in the market will win. So when I think about a market that has not existed before, where you are a category creator, I am okay if the market doesn’t pan out. I’m okay if the enterprise customers are not ready and they need change management. But the thing I share with my team is that I’m not okay if everything is working in our favor and we get beat because we didn’t move fast. That is really important. Within Credo AI, one of our values is what we call intentional velocity, because, as you can imagine, speed by itself doesn’t do much good. It has to be married with intentionality.
Sabrina: I love that. Well, Navrina, this has been really fun for me. I’m excited to continue following all the great work Credo AI is doing, and thank you again.
Navrina: Thank you so much for having me, Sabrina. This was a fun conversation.
Coral: Thank you for listening to this week’s IA40 Spotlight Episode of Founded & Funded. If you’re interested in learning more about Credo AI, visit Credo.AI. If you’re interested in learning more about the IA40, visit IA40.com. Thanks again for listening, and tune in in a couple of weeks for our next episode of Founded & Funded with the CEO of GitHub.