Plausible Tomorrows: What's Ahead in the Age of AI

The US Crypto Awakening

April 16, 2026

ABOUT THE EPISODE

For years, crypto policy in the United States was defined less by clear rules than by the threat of enforcement. Startups and institutions building in the space operated in a gray zone: no clear guidance, no path to compliance, and always the possibility of a regulatory hammer coming down. In 2025, that began to change.

In this episode of TechSurge, host Sriram Viswanathan speaks with Commissioner Hester Peirce of the U.S. Securities and Exchange Commission — one of Washington's most closely watched voices on digital asset policy. Known informally as "Crypto Mom" for her consistent advocacy that markets work best with clear rules and room to innovate, Commissioner Peirce was designated in 2025 to lead the SEC's first Crypto Task Force, signaling a more structured, collaborative approach to digital asset regulation.

Commissioner Peirce brings a rare perspective: a regulator who believes that ambiguity does not protect investors — it protects incumbents and rewards bad actors. In this conversation, she explains what has actually changed in 2025, what it means for companies building in crypto, and what it will take to make this regulatory progress durable beyond any single administration.

Sriram and Commissioner Peirce work through the full landscape: why "crypto" is not one thing but several, how the SEC thinks about Bitcoin as a commodity, what tokenization of traditional securities actually requires, and where real policy gaps remain. They also examine the role of stablecoins and CBDCs, the tension between investor protection and permissionless innovation, and how vertical integration in crypto markets raises the same questions the financial system has always faced — just with new architecture underneath.

Ultimately, Commissioner Peirce argues that the best regulatory framework is one that lets markets identify where technology is useful, enforces rules fairly and consistently, and makes enough room for people to build real things that solve real problems. Once those products exist and are woven into daily economic life, she argues, they become durable — regardless of who is in office.

If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.

Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.

Show Notes

Timestamps

  • 00:00 Permissionless Innovation
  • 02:05 Crypto Basics Explained
  • 09:25 State of US Crypto Policy
  • 11:13 Howey Test and Tokenization
  • 14:25 Crypto as Strategic Advantage
  • 23:17 2025 Policy Turning Point
  • 30:06 DeFi Consumer Protection
  • 40:01 Bitcoin’s Unique Role
RECENT EPISODES

April 7, 2026

Pixels to Intelligence: The Next Era of Imaging

Digital imaging is so ubiquitous today that it’s easy to forget how improbable it once was. In this episode of TechSurge, guest host Nic Brathwaite sits down with Dr. Eric Fossum, inventor of the CMOS active pixel image sensor, to unpack the breakthrough that made it possible to embed cameras into billions of devices and the deeper lessons behind it.

Eric explains how his work began not with consumer electronics, but with a NASA constraint: how to shrink a refrigerator-sized space camera into something small enough for spacecraft. The solution required a fundamental shift in architecture. By moving from CCD-based imaging to CMOS, where sensing and processing could happen on a single chip, he enabled a level of miniaturization and scalability that transformed cameras from standalone systems into embedded infrastructure.

But the conversation goes far beyond the invention itself. Nic and Eric explore what it takes to commercialize deep technology, from the early days of Photobit to its acquisition by Micron, and the critical role ecosystems play in turning breakthroughs into global platforms. They discuss why intellectual property is less about protection and more about leverage, and why even the most important inventions require manufacturing scale, capital, and partnerships to succeed.

The episode also looks forward. As AI systems increasingly rely on visual and physical data, sensors are shifting from tools designed for human perception to components optimized for machine intelligence. Eric highlights the challenges of pushing intelligence to the edge, the limitations of current architectures, and the growing importance of sensing technologies beyond traditional imaging—including molecular detection and new materials that go beyond silicon.

While much of today’s investment is concentrated in models and compute, this conversation makes the case that the next wave of innovation may come from deeper layers of the stack, where machines interact directly with the physical world. The future of AI may depend not just on how systems think, but on how they see, detect, and understand their environment.

March 19, 2026

Sovereign AI Stacks: The New Strategic National Resource

As artificial intelligence becomes a strategic capability for nations as well as companies, questions of governance, safety, and geopolitical competition are moving to the forefront. In this episode of TechSurge, host Sriram Viswanathan speaks with Helen Toner, Interim Executive Director of the Center for Security and Emerging Technology (CSET) at Georgetown and a former OpenAI board member, about the rise of sovereign AI stacks and the global implications of increasingly powerful AI systems.

Helen brings a rare vantage point from both inside the frontier AI ecosystem and the policy world. She reflects on lessons from her time on the OpenAI board, including the governance challenges that arise when nonprofit missions intersect with enormous commercial incentives and rapid technological progress. As AI capabilities accelerate, she argues that the industry is still grappling with deep uncertainty about how these systems work, how they will evolve, and what responsibilities companies and governments should carry.

The conversation explores the idea of sovereign AI: the growing push by countries to control key layers of the AI stack, including compute infrastructure, models, and data. Helen explains why governments increasingly view AI as a strategic national resource, comparable to past transformative technologies like electricity or the internet. At the same time, she cautions that full technological independence may be unrealistic for most nations, given the complexity and global interdependence of the AI supply chain.

Sriram and Helen also examine the evolving US–China AI competition, the role of export controls and semiconductor supply chains, and how different countries, from China to emerging AI hubs in the Middle East, are positioning themselves in the race to build advanced AI capabilities. Along the way, they discuss whether the industry should slow down development, how companies are experimenting with “safety frameworks” for frontier models, and why installing guardrails may be more realistic than attempting to halt progress altogether.

Ultimately, Helen argues that society is entering a period of profound uncertainty. AI is transitioning from a research discipline into a foundational system that will shape economies, security, and daily life. Navigating that transition will require not just technical breakthroughs, but new approaches to governance, transparency, and global cooperation.

March 5, 2026

Governing AI Before It Outpaces Us: Safety for Critical Infrastructure

As generative AI systems move from novelty to infrastructure, questions of safety, trust, and governance are becoming urgent. In this episode of TechSurge, host Sriram Viswanathan speaks with Dr. Rumman Chowdhury, CEO of Humane Intelligence PBC and a pioneer in responsible AI, about what AI safety really means and why the industry may be focusing on the wrong problems.

Rumman argues that the most overlooked lever in AI development is evaluation. While companies emphasize model training and capabilities, far less attention is paid to how systems are assessed in real-world contexts, who defines “good,” what risks are measured, and how societal impacts are accounted for at scale. She distinguishes between technical assurance and broader sociotechnical risk, from misinformation and bias to over-reliance and erosion of institutional trust.

Drawing on her experience at Twitter (X) and in global policy circles, Rumman highlights a fundamental governance gap: unlike finance, aviation, or healthcare, AI lacks a mature, independent ecosystem of auditors and evaluators. Today, the same companies building AI systems often define what counts as harm. She also challenges the belief that stronger guardrails alone will solve the problem, noting that cultural context, language differences, and human behavior complicate any notion of “neutral” or fully objective AI.

Rather than focusing solely on speculative existential threats, Rumman urges attention to the harms already visible: AI-enabled misinformation, mental health risks, and shifts in how younger generations relate to knowledge and authority. The future of AI, she suggests, will be determined not just by technological breakthroughs, but by whether we build credible systems of accountability, evaluation, and global cooperation around them.
