March 19, 2026

As artificial intelligence becomes a strategic capability for nations as well as companies, questions of governance, safety, and geopolitical competition are moving to the forefront. In this episode of TechSurge, host Sriram Viswanathan speaks with Helen Toner, Interim Executive Director of the Center for Security and Emerging Technology (CSET) at Georgetown and a former OpenAI board member, about the rise of sovereign AI stacks and the global implications of increasingly powerful AI systems.
Helen brings a rare vantage point from both inside the frontier AI ecosystem and the policy world. She reflects on lessons from her time on the OpenAI board, including the governance challenges that arise when nonprofit missions intersect with enormous commercial incentives and rapid technological progress. As AI capabilities accelerate, she argues that the industry is still grappling with deep uncertainty about how these systems work, how they will evolve, and what responsibilities companies and governments should carry.
The conversation explores the idea of sovereign AI: the growing push by countries to control key layers of the AI stack, including compute infrastructure, models, and data. Helen explains why governments increasingly view AI as a strategic national resource, comparable to past transformative technologies like electricity or the internet. At the same time, she cautions that full technological independence may be unrealistic for most nations, given the complexity and global interdependence of the AI supply chain.
Sriram and Helen also examine the evolving US–China AI competition, the role of export controls and semiconductor supply chains, and how different countries, from China to emerging AI hubs in the Middle East, are positioning themselves in the race to build advanced AI capabilities. Along the way, they discuss whether the industry should slow down development, how companies are experimenting with “safety frameworks” for frontier models, and why installing guardrails may be more realistic than attempting to halt progress altogether.
Ultimately, Helen argues that society is entering a period of profound uncertainty. AI is transitioning from a research discipline into a foundational system that will shape economies, security, and daily life. Navigating that transition will require not just technical breakthroughs, but new approaches to governance, transparency, and global cooperation.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.
03:00 Lessons from the OpenAI Board: Governance in the Age of Frontier AI
05:00 The Big Unknowns in AI Development: Why Experts Still Disagree
12:05 Public Trust and the Risk of an AI Backlash
14:20 When AI Became Infrastructure: From Research Field to Societal System
16:00 Is AGI a Meaningless Term Now? Rethinking the Goalposts
19:05 AI’s True Scale: Internet-Level Impact or Something Bigger?
23:15 Why Frontier AI Labs Struggle to Slow Down
24:40 What “Sovereign AI” Actually Means for Nations
28:10 Mapping the AI Stack: Chips, Cloud, Models, and Applications
33:38 The US–China AI Competition: Who’s Ahead and Why
39:44 China’s Progress in AI: Compute Constraints and Fast Followers
44:03 US AI Policy: Export Controls, Regulation, and Federal Preemption
48:40 Frontier AI Safety Frameworks: How Labs Define Dangerous Capabilities
51:36 The Future of AI: Utopia, Industrialization, or Something Worse?
56:04 Rapid Fire: AI Misconceptions, Governance Reforms, and Regions to Watch
Connect with Helen: linkedin.com/in/helen-toner-4162439a
Learn more about CSET: https://cset.georgetown.edu/

As generative AI systems move from novelty to infrastructure, questions of safety, trust, and governance are becoming urgent. In this episode of TechSurge, host Sriram Viswanathan speaks with Dr. Rumman Chowdhury, CEO of Humane Intelligence PBC and a pioneer in responsible AI, about what AI safety really means and why the industry may be focusing on the wrong problems.
Rumman argues that the most overlooked lever in AI development is evaluation. While companies emphasize model training and capabilities, far less attention is paid to how systems are assessed in real-world contexts: who defines “good,” what risks are measured, and how societal impacts are accounted for at scale. She distinguishes between technical assurance and broader sociotechnical risk, from misinformation and bias to over-reliance and erosion of institutional trust.
Drawing on her experience at Twitter (X) and in global policy circles, Rumman highlights a fundamental governance gap: unlike finance, aviation, or healthcare, AI lacks a mature, independent ecosystem of auditors and evaluators. Today, the same companies building AI systems often define what counts as harm. She also challenges the belief that stronger guardrails alone will solve the problem, noting that cultural context, language differences, and human behavior complicate any notion of “neutral” or fully objective AI.
Rather than focusing solely on speculative existential threats, Rumman urges attention to the harms already visible, from AI-enabled misinformation to mental health risks and shifts in how younger generations relate to knowledge and authority. The future of AI, she suggests, will be determined not just by technological breakthroughs, but by whether we build credible systems of accountability, evaluation, and global cooperation around them.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.

As global supply chains fracture, AI reshapes productivity, and technology becomes a core instrument of national power, India is making an ambitious push to redefine its role in the world economy from IT services provider to deep tech superpower.
In the season 2 premiere of TechSurge, host Sriram Viswanathan brings together three defining perspectives to examine how India is positioned to become a global leader in frontier technologies, and what must go right for that vision to succeed.
The episode begins with S. Krishnan, Secretary at India’s Ministry of Electronics and Information Technology, who outlines how India is treating deep tech as national infrastructure. From the India Semiconductor Mission and AI compute investments to the new RDI (Research, Development & Innovation) framework, Krishnan explains how long-horizon industrial policy is being used to derisk private capital, strengthen domestic design and manufacturing, and accelerate commercialization.
Next, former G20 Sherpa Amitabh Kant places India’s technology ambitions in a global context. As post-WWII institutions weaken and supply chains are redrawn, Amitabh argues that India’s decade of structural reforms, digital public infrastructure, and global partnerships has created a historic opening, if India can sustain free enterprise, execution discipline, and state-level reform.
Finally, T.K. Kurien, CEO and Managing Partner of Premji Invest, brings the investor and operator lens. Kurien explores why India has excelled at services and business-model innovation but lagged in core technology creation, and what it will take to build globally dominant deep tech companies. From patient capital and university-led innovation to focused national bets in AI applications, biotech, and semiconductors, he outlines the path from ambition to execution.
Across policy, geopolitics, and capital, one message is clear: India’s deep tech future will not be decided by vision alone but by alignment between government direction, private risk-taking, and long-term discipline.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.

In TechSurge’s Season 1 finale, we explore an important debate: should AI development be open source or closed?
AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls.
Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations.
From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!