March 5, 2026

As generative AI systems move from novelty to infrastructure, questions of safety, trust, and governance are becoming urgent. In this episode of TechSurge, host Sriram Viswanathan is joined by Dr. Rumman Chowdhury, CEO of Humane Intelligence PBC and responsible AI Pioneer, about what AI safety really means and why the industry may be focusing on the wrong problems.
Rumman argues that the most overlooked lever in AI development is evaluation. While companies emphasize model training and capabilities, far less attention is paid to how systems are assessed in real-world contexts, who defines “good,” what risks are measured, and how societal impacts are accounted for at scale. She distinguishes between technical assurance and broader sociotechnical risk, from misinformation and bias to over-reliance and erosion of institutional trust.
Drawing on her experience at Twitter (X) and in global policy circles, Rumman highlights a fundamental governance gap: unlike finance, aviation, or healthcare, AI lacks a mature, independent ecosystem of auditors and evaluators. Today, the same companies building AI systems often define what counts as harm. She also challenges the belief that stronger guardrails alone will solve the problem, noting that cultural context, language differences, and human behavior complicate any notion of “neutral” or fully objective AI.
Rather than focusing solely on speculative existential threats, Rumman urges attention to the harms already visible from AI-enabled misinformation to mental health risks and shifts in how younger generations relate to knowledge and authority. The future of AI, she suggests, will be determined not just by technological breakthroughs, but by whether we build credible systems of accountability, evaluation, and global cooperation around them.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.
0:00:00.2 Rumman Chowdhury: Yeah, it has created a more polarized world and now it's gonna fester in Gen AI world. And my concern is that, there is so much hegemonic power amongst, like, four men who are making all these decisions themselves. I cannot emphasize this more, right? You have to get degrees, you have to be certified, and you have to prove that your opinion is independent and you have no conflict of interest in every other field except for AI. In AI, there seems to be this revolving door between frontier model companies, regulatory bodies, government agencies, and part of it's because this community is very, very, very small. Starting in 2025, I promise you this, Sriram, once a week I get an email from somebody clearly suffering from AI psychosis.
0:00:42.5 Sriram Viswanathan: Hi, everyone. This is the TechSurge Deep Tech podcast presented by Celesta Capital. Each episode we spotlight issues and voices at the intersection of emerging technologies, company building, and venture investment. I am Sriram Viswanathan, founding managing partner at Celesta Capital. If you enjoy TechSurge, now is a perfect time to hit the like and subscribe button. And while you're at it, you can leave us a review on your favorite podcast platform. If you're just discovering us, visit techsurgepodcast.com to sign up for our newsletter and check out the archive of some very interesting past episodes. In aviation, pharmaceuticals, or in banking, safety is defined outside the company that's building the product. There are regulators, auditors, and independent reviewers, but in AI, it is different. The same companies building these systems also define how they are tested, what counts as harm, and how much transparency they provide. This is not malicious, necessarily, but it is a unique concentration of power. Today's guest, Dr. Rumman Chowdhury, has worked across industry, government, and nonprofit sectors as a pioneer on AI accountability. She's the CEO of Humane Intelligence, a former leader of algorithmic accountability work at Twitter and Accenture, and also has served as the U.S. State Department's first Science Envoy for Artificial Intelligence.
0:02:10.7 Sriram Viswanathan: In this conversation with Rumman, I explore a core idea. Evaluation is governance. The way we test AI systems determines what we call good, what we call safe, and what we choose to ignore. We'll talk about independent audits, moral outsourcing, cultural bias, and the implications of AI becoming part of our critical infrastructure. And we ask a larger question. If AI systems increasingly shape what we see and believe, who should be responsible for how they behave? So with that as the backdrop, just tell us what does Humane Intelligence do? What is your role there, and what are you focused on?
0:03:01.2 Rumman Chowdhury: Thank you so much for having me on the podcast, Sriram. I'm really excited to talk to you about what we're working on. There is a need for infrastructure for AI evaluation. So there's constantly discussion of model performance. And I, as a social scientist, a quantitative social scientist, somebody who has thought about the intersection of technology and society for many, many years now, I've been in responsible AI for almost a decade, which literally makes me ancient, and in tech in general for almost 20 years. A lot of these evaluation mechanisms don't actually think about the technology as it intersects with society.
0:03:35.6 Rumman Chowdhury: So what I'm working on is a way to help companies actually evaluate AI models in context and at scale, which specifically means, how can they answer the questions of AI adoption and any sort of the issues that may arise at Delta between what a customer wants and what AI can do? How can they evaluate those risks at scale and then also answering specifically the questions that they want answered? I'm excited because AI evaluations, in my opinion, is the most under-discussed category in AI development. I think everyone's very focused on model training, data collection, model development, but there's very, very little focus on evals. But I think evals is where all of the power lies. I mean, if you think about it, evaluations decide what good means. What does it mean to have the best model? That is what an evaluation tells you. And I want everyone to be able to discern that for themselves.
0:04:26.1 Sriram Viswanathan: I would assume AI safety and trust and all of that is a very big part of it. When you say evaluation, is it intended to be coming at from the consumer standpoint of protection? You have similar things in the financial industry where the Consumer Protection Act is really about ensuring that the consumer has enough information and understands the risks that they're taking with their financial assets and so on and so forth. So in the evaluation under AI, is it more to protect the consumer? And if so, what's the role of safety? And just define what AI safety means for you.
0:05:06.9 Rumman Chowdhury: For me, AI safety means building products that are fundamentally solving problems that people want solved and doing so in a way that is not harmful, not just to the individual, but also society as a whole. AI is one of the first technologies that we've introduced where we have to have these frankly often philosophical discussions. And I do love the financial services example for many reasons. But one way it differs is, there is not existential discussion about financial services' impact on our mental capacity and livelihood. Just as an example, I've worked with education companies in evaluating their edtech tools, and while they do want to understand things like the accuracy and prone hallucinations, they also want to understand, are children becoming over-reliant on technology?
0:05:52.8 Rumman Chowdhury: And that's where you need people who are well-versed in socio-technical work. How do we take an abstract concept like over-reliance and make it something measurable, make it something grounded, and also something that you can test against? Does an education technology tool help a student, or does it make them lose critical thinking skills? These are the things they have to start testing. I do love the financial services example because what a lot of these industries have is an independent community of evaluators, which is something that AI lacks at the moment.
0:06:24.6 Sriram Viswanathan: Let's just unpack it. There's lots of definitional things that we can go through, but let's just go step by step. So in the mainstream, when people talk about AI safety, in large part it is as a result of the model, whatever the model that you're using, either providing you with gibberish, as in the model is just hallucinating and you're acting on a bunch of the recommendations or whatever the model is telling you based on completely wrong things that the model could do, which could have pretty profound impact from a safety standpoint, where teenagers might just rely on it in the current environment that we live in and take drastic steps to even end their life, as we have seen in some of the model actions.
0:07:11.9 Sriram Viswanathan: And then there's, of course, on the other end of the spectrum, you're doing research for some paper and you just got wrong data. It's not as profoundly significant, but it's still bad. And there's a whole range of things in between. So talk about AI safety from that context. So what exactly is the problem that we are trying to solve? Is hallucinations actually a safety problem or a reliability problem that is just being mislabeled? Or are there more profound risks in AI safety that one should be thinking about in the context of your activity?
0:07:48.9 Rumman Chowdhury: Most of my clients are major companies trying to build and adopt AI systems. So this could be banking, insurance, education, et cetera. And really, they have to start tackling both. Traditionally, companies don't really concern themselves with these sort of bigger existential discussions, but what they are finding is that, that is fundamentally intertwined with AI. So even just the example you gave about hallucination, I agree that let's say if I am coming up with a literature review for an academic paper and it hallucinates some content, maybe one paper that does that is not a problem. But at scale, what that does is fundamentally people start questioning the validity of science, right? People start questioning the integrity of scientists. And that actually is a problem that people are facing today, that we do have these trusted institutions also under attack. And AI is, A, enabling mis and disinformation about these institutions, but also the integrity of these institutions are being threatened by kind of lazy work using AI.
0:08:48.6 Rumman Chowdhury: On the other end, what you talk about, about things like hallucinations, that is a traditional industry approach to assurance, right? Assurance just literally means, is this product performing as expected? And assurance applies to car seats and baby strollers and sofas. There's some sort of assurance. Do you know that every time you turn the key in your car that it will turn on? You don't think about these things because there are professionals whose literal job it is to make sure that your car reliably will turn on. And it is the same for AI. So I think about this in twofold. One is assurance. Is this model doing what you have promised it will do, performing as expected? And on the other end, we do have to think about these larger impacts on systems and society. And this is something that is very new to industry.
0:09:39.1 Rumman Chowdhury: Industry traditionally has not taken a role in this, but they're realizing that part of the trust gap in adoption by customers are more about those issues than it is about assurance. And from a corporation's perspective, they do want to have safe products. The definition of safe has now expanded beyond pure performance to broader impacts. Everybody understands how to do performance testing. We don't always understand how to do broader societal implications. People get very nervous about that because often it requires very subjective viewpoints, things that might be politically contestable. And having worked at Twitter, I'll tell you the hardest ones to do and the ones you have to make the strongest decisions on are the ones that people disagree about the most. And I think that puts a lot of leaders in a very uncomfortable position.
0:10:24.4 Sriram Viswanathan: Before we move into regulation, let's clarify what AI safety really means here. There are two different issues that often get mixed together. First is reliability. Does the system work as intended? Does it hallucinate? Does it fail under pressure? Second is impact. What happens when that system is deployed at scale? Does it change behaviors, weaken trust, or have unintended consequences? A hallucination may seem like a small reliability problem, but at scale, it can become a trust problem, especially if people rely on AI for research, decision making, or advice. That's why AI evaluation is more than benchmarking model capability. Passing a standardized test doesn't necessarily mean a system is safe in the real world. Safety depends on context. Who's using it, how is it deployed, and what outcomes flow from it?
0:11:18.4 Sriram Viswanathan: This is an important point you're alluding to in the context. We should just dwell on that a little bit more. In most critical industries or infrastructure, failure is usually defined by someone else. It's defined externally, independently audited, and enforced and regulated and all of that. In AI, the people who build the systems invariably often decide what counts as harm. It's almost the fox guarding the hen. And what is not safe, that is defined by the people that are practicing or the practitioners and the developers of the product. And that's an unusual power dynamic. So in my mind, this is really about, the core issue is who defines when a system is broken. And for something as broad and societally profoundly significant, there are so many places AI touches. So who decides it? Who is the person that's gonna... Maybe there's a regulatory agency eventually, which we can talk about later in the podcast, but in your mind, who defines it?
0:12:38.1 Rumman Chowdhury: Basically every other impactful industry, education, financial services, critical infrastructure, even people using airline safety until all the Boeing stuff happened last year. As an example, there are communities of practice, literal professions that are built around independent, trusted evaluators. I cannot emphasize this more. You have to get degrees, you have to be certified, and you have to prove that your opinion is independent and you have no conflict of interest in every other field except for AI. In AI, there seems to be this revolving door between frontier model companies, regulatory bodies, government agencies. And to be clear, part of it's because this community is very, very, very small. I can probably count maybe 80 people that I interact with all of us all the time. And all of us know each other. I'm probably one of the few people that has not taken a job at one of the frontier model companies, although one can count, I guess, Twitter or X as that, even though I was not there as part of any of that pre-gen AI. I was there pre-gen AI. But I think there is an issue here, and I've also discussed and testified on what is needed to do this.
0:13:42.2 Rumman Chowdhury: So first, it needs to be economically viable. You have to be able to have it be a profession. Regulation comes into play in that. So legally mandated audits that actually require independence and a clear definition of what independence means, number one, that can be required by law. Number two is professionalization, which government regulation cannot do, but you need some sort of a third-party group. And there are some groups that are really thinking about this. What does it mean to professionalize, create education, have that count towards something? The third and the most important part, which is very specific to this idea of safety, is the legal protections for ethical hackers. And that's something that has happened in cybersecurity. So there are some laws to protect ethical hackers, white hat hackers is what they're known as, but they're insufficient. And what happens is, whenever, let's say, you test a model and you're not working for a company, theoretically, you are violating the terms of service. And we are seeing that come up in some of the court cases involving some of these children who ended up making very drastic and negative severe life decisions due to AI.
0:14:49.6 Rumman Chowdhury: Companies are coming up with the defense that actually the company is not legally responsible because they violated the terms of service. So if you think about me as a hacker, I'm trying to demonstrate outside of this company that this product is dangerous. Theoretically, this company could sue me because they could say, "No, we clearly state in our terms of service that you cannot use this to cause harm." And even though you call yourself a white hat hacker, what you've done could be used to... So you can be silenced. So there's very, very clear things that would need to be done from a regulatory perspective, from an organizational, from a field of study perspective, but also, again, legally. That was the intent of setting up Humane Intelligence to nonprofits. So I run a public benefit corporation. Originally, we started as a nonprofit, and the goal of the nonprofit has been to create the community of practice, get people excited about this work. Because the thing is, and I would say this constantly, I never thought I would say that audit is exciting because it is so deeply interesting. It's uncharted territory.
0:15:49.0 Rumman Chowdhury: We are literally defining not just what an evaluation is, but how to do it. So right now we have benchmarking and we have red teaming, and these are things that we just kind of came up with because gen AI models, they fundamentally break all of the ways we used to test machine learning models. I'm a statistician by background. There's a 100-plus years of work on mathematical models to tell you exactly how to audit machine learning models. GenAI throws all that out the window. So we have to create new methods, new approaches, scale them, build the tech for them. It is actually the most exciting field to be in, but it sounds very boring, right? Because nobody wants to be the auditor, but actually it's like one of the most exciting things to be doing right now.
0:16:30.6 Sriram Viswanathan: Let's pause there for a second because I want to take you back to the comments that you made about your Twitter example. And I think there's a parallel that you can draw from social networking to AI as well, while these are intimately intertwined. In Twitter, you were one of the few executives who stayed at Twitter during the chaotic transition when Elon took over, and his pretext was, we need to ensure the midterm elections were safe and protected and algorithms remained stable, and before you eventually had your laptop was locked out completely. So talk about what was the lesson in how fragile corporate ethical commitments really are. And I recognize that this is in the social networking context. I don't want to get into the politics of it, but there is an important lesson out of that. Talk about your experience and your time during Twitter when this occurred and what is the relevance of that as it relates to AI safety.
0:17:36.0 Rumman Chowdhury: There is actually a direct line. I'll talk about that in a minute. So I wrote an op-ed for the Atlantic on this. And what I learned that last year, that big takeover year, was that a company's culture is very, very fragile. And Twitter was just a wonderful place to work. I really loved my time at Twitter. I think those of us who were at Twitter, again, everybody always thinks that when they were at something or did something, it was the best time, but what I loved about Twitter specifically is that we were a very honest company about many different things. We were publicly constantly saying sorry about things we screwed up on. I actually really appreciated leadership for being very honest about things.
0:18:14.1 Rumman Chowdhury: So citizen data scientists on Twitter realized that the Twitter's image cropping algorithm seemed to be cropping away from people with dark-skinned faces, and it was favoring people who were light-skinned. And instead of doing what a lot of companies do, which is hide behind PR and legal, you had at the time leadership, it was Parag and all sorts of other people, they themselves were interacting with the public, saying, "What are you seeing?" "We're making a commitment." And I really respected them for doing that. It's very, very hard. It's very easy to hide behind legalese because the law is on their side. It's really easy to hide behind a PR firm, but they didn't. They jumped right in. And that actually helped me make the decision to join Twitter.
0:18:52.0 Rumman Chowdhury: And during my time at Twitter, I found that it was more of the same. You could be very honest on company Slack about how you felt about things. But when Elon came in, before he even came in, there was this chilling effect that he had at the company. And it wasn't even him as a leader. It was the shadow of him coming in, where some people wanted to go into CYA mode, or other people thought that it was going to be fine or maybe even liked the idea that he was going to take over. And it just became an untenable situation. So here's where it ties in with AI. Fundamentally, what all of these companies are doing is a form of content moderation.
0:19:27.4 Rumman Chowdhury: Social media now starts to say, okay, there's all these pieces of content. There's hundreds of millions of pieces of content, and you have this screen that is like, I have a giant iPhone, so it's like this big. I have to figure out how to give you the four things you want to see. And that's what the algorithms do. It's a curation algorithm. So when I say content moderation, it specifically is doing a few things. Number one, it's pulling out things that are offensive or wrong, which can be subjective. It is also determining what you want to see and the worldview it wants to reinforce. So when any social media platform, any curation happens, there is a content moderation that happens. So fine, that's social media. So you've got this pool of... But even then, you see that the tweet comes from Sriram, it comes from Donald Trump, it comes from Rumman, and you can say, "I trust this person, I don't trust that person." GenAI adds another layer of obfuscation on top of that. GenAI is an information synthesis machine. So now it's taking all of that content of the internet and social media, all these things that we just talked about where you need discernment, and it's doing that discernment for you. Because what it is doing is taking all that content and summarizing it, and in doing so, you are basically outsourcing all of that discernment to the company.
0:20:43.7 Rumman Chowdhury: So now engineers at OpenAI, Anthropic, et cetera, are the ones saying, "Is this content they want to see? Is that person trustworthy? Is that person smart? Is that idea good?" They're doing all of that for you. So one thing I am concerned about, going back to kind of these existential ideas, is that they own truth. I will just pause there and say that again. They own truth. They tell you what an idea is because they are taking all the content. And yes, I know RAG models exist, you can dig into the source, et cetera, but overwhelmingly, we will reflexively use it to do that labor for us. And we really have to think about whether or not we're okay with that and whether we're okay with that all being behind closed doors because you get no visibility into that. We have no right as consumers to see what is being left on the table, what is being edited out of the data, how the idea is being changed.
0:21:39.5 Sriram Viswanathan: When Rumman says they own truth, she's not being dramatic. She's pointing to a shift in how we access information. In the past, we retrieved information, we saw the sources, we made our own judgments. Even on social media, you still knew who was posting something. Generative AI is different. It synthesizes, it condenses, it presents a single confident answer. That means the filtering, weighting, and omission all happen behind the scenes. You don't see what was excluded. You don't see alternative framings. You don't see uncertainty unless the system chooses to show it. That's where the governance question emerges. The ranking systems, safety filters, and refusal policies become editorial decisions abstracted even further from our view. So the question isn't just misinformation, it's dependency. If we increasingly rely on AI to summarize the world for us, we're outsourcing part of our judgment. That's the bridge to what Rumman calls moral outsourcing.
0:23:01.1 Sriram Viswanathan: So I think the crux of what you're talking about is really transparency and accountability on how any of the curation happens. In the social media context, you gave a lot of examples, and there are systems in place to actually monitor when a bot is sort of spewing whatever, and maybe those are not effective in some cases, but what you're really referring to is this notion of moral outsourcing that you call it. So in the AI landscape, there's this degree of morality that we, as the ecosystem of users and creators and all of that, we're actually giving it voluntarily to the systems. So can you talk about, is that ethical decision-making transfer to the AI system? First of all, is it prudent in all cases? And I guess obviously not. But can you talk about what are the situations where you might be okay with doing the moral outsourcing and what are the cases where you should not be?
0:24:10.6 Rumman Chowdhury: So first, as you have pointed out, we are offloading the responsibility or the task of discernment to these companies, and we have no visibility into how that's done, any of the qualitative decisions. And I think you're right. There are many things for which actually it's easier for me if I'm doing research maybe on what kind of... So just as an example, I'm being asked to join some advisory board. And this morning, I use Perplexity. I'm on Perplexity, and I'm like, "Hey, can you give me a bit of background on these 10 people who are also on this advisory board and any sort of controversies about them or anything related to topic XYZ?" And perplexity did a great job. We're also judging it from the perspective of ourselves as "the old people". It is interesting to see and observe how young people use it, and young people increasingly are relying on AI to help them navigate life. And I start to get a little bit concerned when... And there's so many moving parts to this. It's having kids who were raised in a hyper-surveilled and observed environment.
0:25:10.8 Rumman Chowdhury: It is the culture of social media being always on that has led to them having this very deep fear of doing something wrong in a way that I don't think any of us can understand. Because when we were young and stupid, we were very young and stupid. I know I was. I can't imagine being that age with everything surveilling you all the time. So they're so deeply afraid of making a mistake that they're now asking AI, "What is the right thing to do? How should I navigate this? How should I talk to this person?" And I do get concerned about that.
0:25:42.8 Rumman Chowdhury: I also am concerned about talking about refusals. What does the model not talk about? What is it missing? What is it leaving off the table? And you will never know because while you can go in and check the sources it is giving you, what you can't do is say, "What didn't you give me? What information is missing?" And I think that particularly comes up when you talk about things that are constantly in flux or changing or things that are controversial. So things like warfare, even just the current situation in the United States is something that people have pointed out. There are issues across different models, but again, what will the model not talk about and how that might be detrimental because it's shaping our worldview in a very particular way.
0:26:22.0 Sriram Viswanathan: Now the conversation moves to a harder question. Whose values shape these systems? Every AI model has guardrails. It decides what to answer, what to refuse, and how to frame controversial topics. Those boundaries reflect choices, whether through training data, reinforcement learning, or just written policies. The challenge is that, values are not universal. A model that behaves appropriately in one language or region may fail in another. Guardrails built for English-speaking, US-based users may not translate globally. When companies market these systems as general-purpose tools, they implicitly export their assumptions worldwide. So the real question isn't whether guardrails exist, it's who defines them and how they are tested across cultures and languages.
0:27:25.2 Sriram Viswanathan: Clearly, some of these systems are propagating or literally sort of synthesizing certain values or certain ethical standards and so on and so forth. And invariably, there's some individual in one of these LLM companies, is actually defining the contours of those ethical standards across cultures. So what is your view on ethical standards and cultures and all of that that the models have to be trained on and programmed on? And how do you expect those guardrails to be created? And who creates those things? Who should create those things?
0:28:08.1 Rumman Chowdhury: Yes, that is such an expansive question. So first is, just as a disclaimer, Humane Intelligence has actually worked... The nonprofit has actually worked with major frontier model companies on this exact topic. And the reality is, it is very difficult to tackle the broad diversity of human culture. I kind of give a funny/frivolous example, but actually I love it. If you ask any of these models about cricket, it will give you cricket the insect or maybe like Jiminy Cricket. If you ask any person about cricket outside of the United States, especially in the UK or any sort of a British colony, cricket means the game. I can guarantee you, none of the AI models that have been built refer to cricket the sport as its initial contemplation of the concept of cricket because it's not an American thing we do. So the core of the data and even just the architecture and the value decisions are all being driven by not just American values, but specifically Silicon Valley's values.
0:29:09.8 Rumman Chowdhury: And also just to put an extra point on it, Sam Altman has said, this is the data of the world. That is wildly incorrect. It's the data of the internet. And the data of the internet is a very particular skewed set of data. And that's fine. There's nothing wrong with doing that, but just own the fact that this is fundamentally incomplete. To be fair to the companies, these are massive issues to tackle. It is difficult enough to tackle in one language and one cultural context. Now imagine for everybody. And they are very clear when they say, "Oh, this model is not expected to be performant in this language." But the reality is, people who speak that language will use it in that language whether or not it is "approved".
0:29:49.6 Rumman Chowdhury: So what we were testing is the performance of guardrails, knowing that, yes, we understand that the companies are not claiming the models work in these languages, but they still perform in these languages and people will use them that way. And we pretty much found that most of the guardrails that exist in English, a lot of them can be broken pretty easily by switching to another language, which is a security concern. And these are for things like even what the hardcore safety people might say about bomb-making and developing molecular compounds, et cetera, you can do some of those in other languages that you couldn't do in English.
0:30:24.4 Rumman Chowdhury: So it's a very difficult problem to tackle. I don't envy the people at these companies who have to do this job. But at the same time, this is what happens when you're pushing your product as a general-purpose product. I think the issue here starting off is, it's being pitched as the thing that will do it all. Well, then you have to make sure to think about every iteration of it. At the end of the day, these companies are making hundreds of billions of dollars off this technology. They should fully be responsible for what they're building. They're not particularly transparent with their data, their process. I understand, fine, intellectual property, et cetera. Absent, again, an independent field of experts who are allowed to have access to this kind of data and model for independent review and some sort of clear standards by which they should be reviewed, we don't really have much recourse than to say, "Okay, well, companies, you're responsible for all the bad things that happen," because I cannot make an informed decision. I cannot because I don't have the information available to me. I can't see what your data has, I can't see the model decisions that have been made.
0:31:27.1 Rumman Chowdhury: And I get it, it's your IP, but then I can't be responsible for things that your models are doing to people. But there is, I understand, the sense of personal responsibility. So that's the social media argument where, yes, social media algorithms are designed to be addictive. They're designed to give you content you want to see, which can create filter bubbles. Do I define it as 50% left and 50% right? Is it Jack Dorsey's job to make that decision? Well, that's kind of scary. Is it Congress's job? So we play this game of hot potato of who gets to decide. But again, I think we have ignored the issue for so long. It has festered in social media, it has created a more polarized world, and now it's going to fester in Gen AI world. And my concern is that there is so much hegemonic power amongst four men who are making all these decisions themselves.
0:32:13.4 Sriram Viswanathan: This is fascinating because you're going back to the concept that you talked about, which is this whole moral outsourcing concept. And you gave a great example on how that happened in Twitter. And I'm sure if you were here in the Bay Area the last couple of weeks during the rains were happening and the power went out in San Francisco, all the Waymos just went crazy and they just blocked all the intersections. And it leads me to the following question. Is there a way that we can actually avoid it? Because we may not be able to avoid it, whether it's the model as programmed by a few powerful people or when you have a human in the loop, in the case of Waymo, somebody, some human is making a decision. So is it practical or is it too idealistic to expect AI systems to be absolutely objective and neutral? Because eventually, these algorithms are defined and structured and programmed and guardrailed and all of that by human beings.
0:33:15.8 Rumman Chowdhury: The reality is, ethics is not a destination, it is a journey. So you are forever... There is no right answer. So I'm gonna talk about this idea of neutral models, right? So Mark Zuckerberg's talked about that quite a bit. Political neutrality is what he is aiming for. But I am sorry to say there is absolutely no such thing. So there's a few ways one can define neutral, but it always has to be pegged to something, right? So one is that, I think their definition is that it lies on a spectrum, right? So the spectrum of we have to present both opposing sides. Well, actually there was a time where both opposing sides would be black people are humans or black people are animals. Is that... But the Overton Window, our definition of humanity, thankfully has shifted such that that is not a both sides argument. In some cases, we have regressed, right? Now we actually can viably say, do women have ownership over their uterus? Now we get to say, "Oh, actually maybe they don't," whereas 10 years ago that was not a conversation we should have because we had Roe v. Wade.
0:34:17.6 Rumman Chowdhury: So if we want to build some sort of master neutral, objective, good system, maybe let's start by tackling the problem of anti-racism or talking about women more equally. I understand that's not fun and it's more fun to talk about, "Wow, let's intellectually discuss this big thing." But I'm a builder and a doer, not a philosopher. That's why I didn't stay in academia. So to me it's very frustrating to be in these very philosophical worlds where nobody's held accountable for what they say. I would rather build and fail and build again and try to tackle. Because again, if we want to reach a world where we have this master algorithm that is perfect, good, and fair, maybe let's start by making baby algorithms that are perfect, good, and fair.
0:35:03.6 Sriram Viswanathan: From the examples that you've actually talked about, it seems like you're really leaning more in the side of the governance failure and not design failure of the model. Is that a fair characterization of your position?
0:35:18.0 Rumman Chowdhury: One of the hardest things about Gen AI is both are true. So kind of back to... You alluded to this earlier about, well, should there be these offices of AI? The European Union has one. The hard part is AI is a horizontal technology and we tend to regulate things on verticals, right? So let's say in the US if we wanted to set up a horizontal AI authority body. Well, the first thing is the FDIC would say, "No, no, we do that for us," right? The Federal Reserve would say, "No, we're in charge of that stuff in banking." And the FDA would say, "No, we do that for healthcare," and the fair lending, housing authority, "We do that for our industry." So it's very difficult to do it horizontally. Once I testified to Congress... I feel like all these stories go back to, "And then I testified to Congress about this." But one of the questions was about should there be an authority? Actually I was the only person to say no because you should... Actually there's only one way I think it would be successful in the United States because we don't regulate on horizontals usually.
0:36:19.5 Rumman Chowdhury: That's why we don't... By the way, we don't have data privacy because that's a horizontal. We don't have cybersecurity. We have CISA. We don't have comprehensive horizontal. It's very hard for us to do that. So one is, it could exist if it existed to augment and support different verticals. So imagine this team of experts who then goes in and helps these different verticals, but then the vertical would still have to own what regulation looked like. Because the starting point is, how do you align any regulation about AI with existing regulation that exists? For example banking, they would say, "We do have fairness principles. We're actually one of the industries that has tackled this idea." So you and AI are sitting here saying, "Wow, fairness is such an abstract idea." Well, we did this 30 years ago. We talked about what it meant for fair lending to happen. We've got very clear ways we talk about it. So it's difficult to try to grapple a horizontal when so many things in our world exist as verticals.
0:37:13.1 Sriram Viswanathan: Well, let's pivot to critical infrastructure. I mean we've talked about AI in a number of segments and largely we've talked about things that are important like financial services and social media and others. But critical infrastructure in the US or in any geography raises the bar completely to a different level. So I'm curious, to what extent do you think we need a much higher bar? And if so, what is the base minimum viable governance or guardrail that you would expect an AI system to have when it is actually providing the core guts of the critical infrastructure? For instance, power or the utilities, the water infrastructure and such. What is the minimum viable governance that needs to be there before large scale deployments can occur?
0:38:11.3 Rumman Chowdhury: The main one, frankly, this is where I defer to experts in cybersecurity. I am fully convinced that the issues around AI in critical infrastructure have more to do with insufficient investment in cybersecurity infrastructure. And what I don't mean is different prompt hacks, et cetera. So what we are seeing is at-scale adoption of Gen AI at all levels of government. I worry more about the city of Philadelphia than I worry about the CIA. What's gonna happen is, hackers and malicious actors can more easily break into systems by using AI, by the way, not just hacking AI, to hold systems hostage, hijack, or break down critical infrastructure.
0:38:55.8 Rumman Chowdhury: So as I mentioned, this is more going to manifest at your local police precinct. Everyone thinks, "Oh, they're gonna try to hack the CIA or the FBI." That's kind of a lost cause. I don't know, maybe you can, but I feel like the CIA and the FBI have got a lot of security. You know who doesn't? The city of Akron, Ohio, right? So they are probably underinvested in this. And as we're pushing, everyone's pushing for AI and everything to be moved into the cloud and all of this. And I understand all the business reasons. I am very concerned about this basic cybersecurity, the baseline, and all of this. Before I even start talking about ethics and responsibility, I want to emphasize that there's a at-scale underinvestment in cybersecurity for a lot of these systems right now.
0:39:38.5 Sriram Viswanathan: But let's get specific on that. I mean, learning from other industries. If you look at the traditional safety-critical industries like nuclear or aerospace, they have a concept of a safety case, a formal argument with evidence for why and when a system is safe. The airline industry has that. It was probably one of the most comprehensive set of minimum safety bars. And the medical industry has that. You can't bring a drug to the market without going through phase one, phase two, phase three trials and monkeys and rats and all of that first, and then human beings. So what is the minimal critical... It is not inconceivable that AI systems are pretty much going to run our power infrastructure. They're going to run our water infrastructure eventually. So what is the minimum safety... Is there an FDA or FAA equivalent for AI and critical infrastructure agency that needs to come up with the rules of how they play? What's your view on that?
0:40:47.9 Rumman Chowdhury: Yeah, sadly that group existed and I was on it, but it does not exist anymore. So the Department of Homeland Security under the Biden administration had set up an AI Critical Infrastructure Advisory Board. So it was myself, there were people representing industry, civil society. So the major model companies were on there, CEOs of, for example, Delta, Boeing, et cetera.
0:41:10.1 Sriram Viswanathan: Having an advisory board is different than having an equivalent of an FAA or an FDA.
0:41:14.4 Rumman Chowdhury: No, fair. So we didn't have regulatory, but I think it was working towards understanding what something like that might look like, working with the Department of Homeland Security. And they utilized us as a sounding board to say, "Hey, we want to pass this guidance. We want to understand what we should be critically thinking about." And we were encouraged to literally argue amongst each other and talk about these issues and how this should work. And the entire purpose was to help start to inform what something like that might look like. Our level of readiness for critical infrastructure as it relates to AI is quite low. I do agree that a lot of these industries have thought about safety and security, but more from a tangible analog perspective. Translating that into a world of AI is very, very different. Overwhelmingly, all the companies that were on there, the non-frontier model companies, they're all just like, "Oh yeah, we use the AI Risk Management Framework by NIST," which is great. It's a framework. It's not a comprehensive guide to high-level security of the kinds of things that we're talking about here.
0:42:15.9 Rumman Chowdhury: So I think a lot of companies are navigating this on their own. One of the things about the current administration, it's very go-go-go with AI. It's very anti-regulatory. But anti-regulatory doesn't mean that companies aren't doing anything. They're very concerned about risk. So risk just shifts around. I learned a lot learning about risk management, and there's many kinds of risk. I think we over-index on regulatory risk when we talk about risk. But AI right now is all about reputational risk. And actually, I would argue like a strategic risk. It is a strategic risk for your company to be integrating systems that can have critical failures. It is also a very, very big reputational risk. I mean, just looking at any of the polling that's happening. So Pew Research consistently has come out with polling saying that the average American is very mistrustful of AI, and that number keeps going up. More and more and more people as they do this polling. And it's not about being worried that AI will take over an economy. It's about trust, it's about data privacy, it's about hallucinations, adverse events, and increasingly it's also about the future of work and their jobs.
0:43:19.3 Sriram Viswanathan: So in your mind, in the nuclear industry, things changed dramatically after the Three Mile Island accident and Chernobyl and all that. Do you envision a seminal sort of a disaster in the AI landscape? And that's probably what is going to dictate the change in the regulatory framework and the need for regulation in this thing? Because the current path that as you point out, nobody wants regulation, not just in the US, any geography is not really looking at, as much as they talk about AI safety, they're not really making one step closer to regulating it. So what needs to happen for that to really occur? It sounds like you are suggesting that that might be required, but what needs to happen to make that into a reality?
0:44:11.7 Rumman Chowdhury: Sadly, I would like to think it's already happening. Anecdotally, I will tell you something happened in 2025. And I get a lot of email outreach through my website and people finding me on podcasts like this and sending me their perspectives. Every once in a while I get one from somebody who's kind of unhinged. Starting in 2025, I promise you this, Sriram, once a week I get an email from somebody clearly suffering from AI psychosis. And this email often reads... And they're people who I've never met before, et cetera, and the email kind of generally reads something like one of a few narratives. One is that my AI is alive and I need you to help save it because they see me as ethics and I'm gonna help argue for its fundamental rights as a being. Second is, there are people who believe that these frontier model companies are running some sort of a grand experiment or psyops and trying to get people to do things. I've gotten a few, third, that they're trying to kill people with AI.
0:45:11.9 Rumman Chowdhury: So these are people who are very, very disturbed who with AI have created this psychotic narrative. And it went from maybe when GenAI first launched once every three months to once a month to literally once a week. So we are in the middle of that crisis. We have had children commit suicide, we have children develop parasocial relationships, we have people, and again, anecdotally speaking, that number has exponentially increased in 2025, who have developed full-blown psychosis. In fact, one of the early investors of OpenAI exists in a full-blown psychosis. So we are in Three Mile Island right now. But again, if we're being distracted by this idea of AI running economies and AI doing all sorts of fantastical things and we're ignoring this human relationship and we're not talking about how we should be teaching people how to work with AI or learn how to use AI, I am kind of worried about younger people and the fact that we're not really thinking about the right long-term issues.
0:46:18.5 Sriram Viswanathan: When people talk about AI risk, they often imagine extreme scenarios: autonomous weapons, runaway systems, and economic collapse. But what Rumman is describing is different. It's about human psychology. When a system is always available, conversational, and confident, people may over-trust it. They may treat it as quite authoritative. In some cases, they may form emotional attachments. This isn't a science fiction scenario; it's a product design issue. Safety isn't just about what the model can do. It's about how people use it and how the interface shapes behavior. Mature industries study over-reliance. They study misuse. They create reporting systems for near misses. AI does not yet have that level of institutional learning. So the question becomes, how do we build systems that preserve human agency instead of quietly replacing it? What keeps you awake in all the things that you're dealing with?
0:47:27.3 Rumman Chowdhury: What keeps me up at night? I can share a positive thing and a negative thing. So I don't want to just keep it doom and gloom because I do think there's a lot of really great work happening. So one of the things that keeps me up at night is kind of what I said. There is this complex interplay between human beings and algorithms that we are not thinking about because we as, again, the old people are thinking about technology the way we use it and not the way young people use it. Like the kids who are under two on YouTube don't view YouTube the same way we do. All young people, their lives are mediated by social media in a way that ours are not. We can separate the two. They don't because they have never seen a world that wasn't algorithmically curated, that wasn't always on social media. We do. So that's a very unique thing. So that kind of keeps me up at night that we're framing the problem the way we see it, not the way it actually exists for the next generations.
0:48:20.8 Rumman Chowdhury: So positive thing that keeps me up at night and why I am constantly working all the time. There are a lot of very interesting global governance measures happening and Sriram, we were talking previously when we were chit-chatting on Friday about the India Summit. I'm actually really glad to see a bigger conversation that includes more of the world. And it's very interesting to bring in developing markets because these are the markets that are the most enthusiastic about AI because they see it as a way of leapfrogging ahead. The same way internet technology, ICT actually leapfrogged them past having to hardwire phone lines and saying actually we can just jump to voiceover IP. We don't have to create all sorts of ways of wiring money because, oh, now we can just use like cash app. And they were ahead of us in using that kind of thing.
0:49:09.1 Sriram Viswanathan: That's a very encouraging perspective because I do agree with you that the financial systems and peer-to-peer financial transactions, some of the emerging markets, I mean, it started out with...
0:49:21.2 Rumman Chowdhury: Nigeria.
0:49:22.0 Sriram Viswanathan: Nigeria and then, of course, India with its UPI and all of that. This begs the question, in a world which is now rapidly becoming less multipolar or multilateral, the effectiveness of organizations like UN and I shudder to think what would have happened to our telecommunications network if they were to have gotten started in 2025, because ITU would not exist. The way that roaming and settlement would occur between cell phone carriers across the globe might have a completely different flavor to it if the cell phone industry had started in 2025 and not decades earlier. So in this context, do you think it's a sort of an idealistic pipe dream to think of a multilateral AI agency along the lines of ITU, that works on AI ethics and AI safety and AI governance? Is that sort of a pipe dream?
0:50:22.5 Rumman Chowdhury: I don't know if you've actually specifically teed me up because I'm actually on the Broadband Commission for the ITU. I don't know if you knew that or whether you actually... Okay, did you? Oh, wait, you didn't know that? I'm like, oh, you just teed me up for literally what I know a lot about. So I have been on the ITU's Broadband Commission for, gosh, quite some years. I think I joined during my time at... Actually, no, before my time at Twitter. So I've been on it for quite some time, and I've seen it through a lot of conversation, a lot of iteration. So I always like to say I was going to UNGA before UNGA became cool for the AI set. And there was this marked pivot two years ago where suddenly UNGA became all about AI. And I saw the frustration, actually, amongst people who work because I would sit in these Broadband Commission meetings, they were very fascinating because learning about industries I knew nothing about, like satellites, mobile, etc. And there was this deep frustration among people who don't do AI, and they're like, look, we're spending so much time and energy talking about artificial intelligence, we're missing out the fact that, as you point out, Sriram, that we're in an increasingly fractured world.
0:51:23.9 Rumman Chowdhury: And in one of the meetings, somebody even said, "Look, we have 5G. We may not have a 6G because, by the way, having that relies on countries to agree with each other, to work with each other." And we're not discussing the fact that we're moving into a world that's a lot more nationalistic all over. And yes, the tone is being set by the U.S. like more insular. And even in AI, this idea of AI sovereignty. I have such mixed feelings about the concept of AI sovereignty. One, do I think that these hegemonic powers should be broken up? Yeah, absolutely. But two, one of the biggest things we worried about with the internet was having a splinternet. We spilled endless amounts of ink talking about China...
0:52:07.5 Sriram Viswanathan: We didn't want walled gardens. Walled gardens was a four-letter word.
0:52:10.4 Rumman Chowdhury: AI out of the gate is walled gardens. Out of the gate. Every region of the world is financing and building their own AI systems. And whether or not it is good or bad, I'm not trying to make a normative assessment. I'm saying we're not even talking about what that means. So what might a good global governance body look like? I think it's that body that's gonna intermediate between these different sovereign AI systems, which, by the way, will hold totally different fundamental values. So again, back to social media, right? If we're worried about filter bubbles, if we're worried about, "Oh my gosh, your algorithm is showing things that agree with your perspective and you're not seeing a diversity of thought," imagine a world in which the fundamental AI model that is driving so much decisioning, so much of your life choices, etc., the content of that is decided by the government, because that's what's gonna happen in most of these countries. It is actually only really in the US where this is solely being driven by private industry. In every other part of the world, this is very, very heavily government-funded, and I understand why it needs to be.
0:53:09.8 Rumman Chowdhury: And again, it's not meant to be a normative... I'm not saying that's good or bad. I'm saying let's talk about it, because we're not. Everybody just got very excited and we just drew these AI lines on country geopolitical lines, and we dragged in all the geopolitical baggage with it. So instead of AI kind of being this fresh start, it is actually perpetuating a lot of this geopolitical baggage that we're carrying over into this new technology. And again, we never talked about it.
0:53:38.8 Sriram Viswanathan: As we're getting to the end of this thing, I have to ask you, I know that you've been a big fan of Scully from the X-Files, and that has been a storyline that I don't know whether you intended or not, but you ended up working for x.ai, I guess, Twitter, I guess. But if there's a science fiction book or AI or movie or whatever that you think, is there something that stands out for you that you look at and say, "Well, I wish that could be the reality that we lived in"?
0:54:10.4 Rumman Chowdhury: We spend a lot of time talking about the world we don't want. We don't spend enough time talking about the world we do want. And actually, in order to get somewhere, we have to be heading towards a destination, not just heading away from something. So there's this author, Becky Chambers. She has this series called Monk & Robot, and it actually reads like a series of Zen koans because the main character is this individual who is walking through a world, and it's a world in which AI became sentient. And what happened is this AI kind of collectively got together and they were like, "You know what, humans? Can you just kind of leave us alone? And we want to learn, we want to figure out the world for ourselves." And we as humanity agreed. And this world is a world in which we've resolved issues of climate change, we've resolved issues of inequalities, and now this AI system has come back and it's like, "Hey, humans, we haven't seen you in some time. How are you doing?" And it's a series of conversations between a human and an AI, and they're literally walking through the woods together and they're learning about each other.
0:55:12.5 Rumman Chowdhury: And, A, it's almost an objective reflection on human nature and the human condition, but it is also a discussion of a world that can exist that is beautiful in nature, that is beautiful in concept, that is also beautiful technologically. And so I love that series because it is science fiction. It's meant to be fantastical. So it's called Monk & Robot. There's three books. They're very, very small. I buy them for just about everybody. Highly recommend.
0:55:41.4 Sriram Viswanathan: Well, it doesn't have to be science fiction. That's a good place to end this because I don't know if it's a prediction or not, but that's a good aspirational end state or continuing state for AI to evolve to. So, Rumman, this has been just incredibly insightful and what a joy to actually talk to you. You are such a resource of knowledge and experiences that I think we can keep going for a much longer time. I really thank you for taking your time and sharing your expertise and perspectives with us on this podcast. I think our listeners will have a lot to chew on when it comes to rethinking AI safety and trust and governance and all of that. So thank you so much for being on this podcast. It's been a great pleasure having you on TechSurge. Thank you.
0:56:27.2 Rumman Chowdhury: Thank you for having me.
0:56:29.1 Sriram Viswanathan: If there's a theme running through this conversation, it's that AI risk may not arrive as a single catastrophic failure. It may emerge quite gradually through overreliance, through centralized control of information, through subtle shifts in how we define truth, fairness, and accountability. Today, we tend to focus on model capability, how powerful these systems are becoming. But governance, evaluation, and the institutional design may matter even more. Every major technology matured through oversight, standards, and independent review. AI is still early in the process. The real question isn't whether something big is happening. It's whether our institutions and our incentives are evolving fast enough to guide it responsibly. That's the work that lies ahead for all of us. Thank you for tuning in to the TechSurge podcast from Celesta Capital. If you enjoy this episode, feel free to share it, subscribe, or leave us a review on your favorite podcast platform. We'll be back every two weeks with more insights and discussions of all things deep tech. Bye for now.

For years, crypto policy in the United States was defined less by clear rules than by the threat of enforcement. Startups and institutions building in the space operated in a gray zone: no clear guidance, no path to compliance, and always the possibility of a regulatory hammer coming down. In 2025, that began to change.
In this episode of TechSurge, host Sriram Viswanathan speaks with Commissioner Hester Peirce of the U.S. Securities and Exchange Commission — one of Washington's most closely watched voices on digital asset policy. Known informally as "Crypto Mom" for her consistent advocacy that markets work best with clear rules and room to innovate, Commissioner Peirce was designated in 2025 to lead the SEC's first Crypto Task Force, signaling a more structured, collaborative approach to digital asset regulation.
Commissioner Peirce brings a rare perspective: a regulator who believes that ambiguity does not protect investors — it protects incumbents and rewards bad actors. In this conversation, she explains what has actually changed in 2025, what it means for companies building in crypto, and what it will take to make this regulatory progress durable beyond any single administration.
Sriram and Commissioner Peirce work through the full landscape: why "crypto" is not one thing but several, how the SEC thinks about Bitcoin as a commodity, what tokenization of traditional securities actually requires, and where real policy gaps remain. They also examine the role of stablecoins and CBDCs, the tension between investor protection and permissionless innovation, and how vertical integration in crypto markets raises the same questions the financial system has always faced — just with new architecture underneath.
Ultimately, Commissioner Peirce argues that the best regulatory framework is one that lets markets identify where technology is useful, enforces rules fairly and consistently, and makes enough room for people to build real things that solve real problems. Once those products exist and are woven into daily economic life, she argues, they become durable — regardless of who is in office.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.

Digital imaging is so ubiquitous today that it’s easy to forget how improbable it once was. In this episode of TechSurge, guest host Nic Brathwaite sits down with Dr. Eric Fossum, inventor of the CMOS active pixel image sensor, to unpack the breakthrough that made it possible to embed cameras into billions of devices and the deeper lessons behind it.
Eric explains how his work began not with consumer electronics, but with a NASA constraint: how to shrink a refrigerator-sized space camera into something small enough for spacecraft. The solution required a fundamental shift in architecture. By moving from CCD-based imaging to CMOS, where sensing and processing could happen on a single chip, he enabled a level of miniaturization and scalability that transformed cameras from standalone systems into embedded infrastructure.
But the conversation goes far beyond the invention itself. Nic and Eric explore what it takes to commercialize deep technology, from the early days of Photobit to its acquisition by Micron, and the critical role ecosystems play in turning breakthroughs into global platforms. They discuss why intellectual property is less about protection and more about leverage, and why even the most important inventions require manufacturing scale, capital, and partnerships to succeed.
The episode also looks forward. As AI systems increasingly rely on visual and physical data, sensors are shifting from tools designed for human perception to components optimized for machine intelligence. Eric highlights the challenges of pushing intelligence to the edge, the limitations of current architectures, and the growing importance of sensing technologies beyond traditional imaging—including molecular detection and new materials that go beyond silicon.
While much of today’s investment is concentrated in models and compute, this conversation makes the case that the next wave of innovation may come from deeper layers of the stack, where machines interact directly with the physical world. The future of AI may depend not just on how systems think, but on how they see, detect, and understand their environment.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.

As artificial intelligence becomes a strategic capability for nations as well as companies, questions of governance, safety, and geopolitical competition are moving to the forefront. In this episode of TechSurge, host Sriram Viswanathan speaks with Helen Toner, Interim Executive Director of the Center for Security and Emerging Technology (CSET) at Georgetown and a former OpenAI board member, about the rise of sovereign AI stacks and the global implications of increasingly powerful AI systems.
Helen brings a rare vantage point from both inside the frontier AI ecosystem and the policy world. She reflects on lessons from her time on the OpenAI board, including the governance challenges that arise when nonprofit missions intersect with enormous commercial incentives and rapid technological progress. As AI capabilities accelerate, she argues that the industry is still grappling with deep uncertainty about how these systems work, how they will evolve, and what responsibilities companies and governments should carry.
The conversation explores the idea of sovereign AI; the growing push by countries to control key layers of the AI stack, including compute infrastructure, models, and data. Helen explains why governments increasingly view AI as a strategic national resource, comparable to past transformative technologies like electricity or the internet. At the same time, she cautions that full technological independence may be unrealistic for most nations, given the complexity and global interdependence of the AI supply chain.
Sriram and Helen also examine the evolving US–China AI competition, the role of export controls and semiconductor supply chains, and how different countries, from China to emerging AI hubs in the Middle East, are positioning themselves in the race to build advanced AI capabilities. Along the way, they discuss whether the industry should slow down development, how companies are experimenting with “safety frameworks” for frontier models, and why installing guardrails may be more realistic than attempting to halt progress altogether.
Ultimately, Helen argues that society is entering a period of profound uncertainty. AI is transitioning from a research discipline into a foundational system that will shape economies, security, and daily life. Navigating that transition will require not just technical breakthroughs, but new approaches to governance, transparency, and global cooperation.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.