July 1, 2025
In TechSurge’s Season 1 Finale episode, we explore an important debate: should AI development be open source or closed?
AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls.
Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations.
From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!
00:00 The Debate on AI Development: Open vs Closed
05:51 Understanding Open Source vs Closed Source AI
11:55 The Economics of AI Models
17:47 Trust and Transparency in AI
23:43 The Future of AI Governance and Global Impact
Slate.ai - AI-powered construction technology: https://slate.ai/
World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/
EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Senthil Kumar: What sparked the debate, Michael, wasn't that they open sourced a powerful model. It was that they delivered GPT-4 class performance at a fraction of the size, cost, and compute. What's truly at stake, I believe, is this: who gets to shape the trajectory of intelligence in the digital age? Will it be a handful of corporations, or can we architect a more inclusive, distributed, and responsible future?
In the end, the legacy of AI will not be written in lines of code. It'll be written in how we choose to lead and who we choose to empower.
Michael Marks: Hi everyone. This is the TechSurge Deep Tech podcast, presented by Celesta Capital. Each episode we spotlight issues and voices at the intersection of emerging technologies, company building, and venture investment. I'm Michael Marks, founding managing partner at Celesta.
If you enjoy TechSurge, subscribe and leave us a review on your favorite podcast platform. Visit techsurgepodcast.com to sign up for our newsletter and find our video episodes on YouTube. Okay, today we have a very interesting topic. Everybody's talking about artificial intelligence, and people are probably tired of hearing about it, but today we're gonna take a slightly different look at it.
And talk about a more technical aspect, which is the difference between open source and closed source AI development. It is a discussion that has a lot of implications, so we're gonna dive into it. We're very lucky to have Senthil Kumar to guide us through this conversation today. Senthil serves as an AI technology leader and advisor for many companies, think tanks, and academic institutions, as well as the CTO at Slate Technologies, one of Celesta's companies.
This is a company delivering advanced AI products to the construction industry, so we'll dive into that one a little bit. You've contributed to the advancement of solutions and standards in support of several important technology areas like autonomous vehicles, FinTech, smart buildings, and so on. You've also contributed to helping draft and create AI standards for the European Union.
Your work has been recognized through global commendations, including by the World Economic Forum. You were recently recognized as a Senior Fellow for your contributions to inclusive AI innovation by the United Nations. So thank you so much for taking the time to talk with me.
Senthil Kumar: Michael, it's a true privilege to join you in this conversation.
Thank you for having me. It's an absolute honor.
Michael Marks: Well, thank you for saying that. All right, let's jump in. So open versus closed development has been a big debate in AI circles. You read a lot about it these days. The DeepSeek release recently, of course, has everybody, you know, buzzing about it.
They released their AI models and turned the markets upside down and brought this debate back into the spotlight again. Could you remind us exactly what it was about DeepSeek that created so much attention? Well, let's start there. Let's just talk about DeepSeek itself.
Senthil Kumar: What sparked the debate, Michael, wasn't that they open sourced a powerful model. It was that they delivered GPT-4 class performance at a fraction of the size, cost, and compute. That efficiency caught the entire industry off guard, including some of the most advanced proprietary labs. Now, it fundamentally challenged the status quo and assumptions about frontier models, demonstrating that smaller, well-trained models can match or even surpass state-of-the-art systems in specialized tasks, challenging the bigger-is-better narrative. Now, the model was lean, elegant, and widely capable. And suddenly we had to ask: is the future of AI going to be defined by trillion-parameter giants, or by training recipes that are open, efficient, and accessible? But it went deeper than just performance.
It rekindled the core tension and debate in AI: should intelligence be centralized or democratized? Should we build it in secret or build it in the open? And if small teams can now create world-class models, what does it mean for competition, safety, and global equity? I would like to share some of my technical analysis
that aligns with the context of this conversation. They employed a highly efficient training pipeline using what is known as curriculum learning across modalities. Now, much like human education, the model was trained step by step. It started with simple single-modal tasks, like natural language understanding, then graduated to moderate multimodal tasks like caption-image pairing, and finally advanced to more complex reasoning, such as generating code from visual input.
Now, that curriculum structure improved generalization and maximized data efficiency. Then came the use of multi-turn reasoning alignment: training the model not just to answer, but to think through problems conversationally, step by step, and stay aligned with user intent across multiple turns.
That's an emerging frontier in alignment strategy, and they leaned into it early. That prompted a deeper industry question, one that's far more consequential than parameter counts or benchmark scores: is scale still the game, or has the game quietly changed? Because when a smaller, faster, and more transparent model can rival the best-kept secrets in AI, the paradigm shifts dramatically.
It's no longer just about how much compute you have; it's about what you choose to optimize, how well you align the model, and how thoughtfully you train it.
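To make the curriculum-learning idea Senthil describes concrete, here is a minimal, hypothetical Python sketch of staged, easy-to-hard training. The stage names, toy model, and training loop are illustrative assumptions for this conversation, not DeepSeek's actual pipeline.

```python
# Hypothetical sketch of curriculum learning across modalities: train in stages
# of increasing difficulty, carrying the same model state between stages.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Stage:
    name: str          # e.g. "single-modal language understanding"
    batches: Sequence  # training batches for this stage
    epochs: int

def train_with_curriculum(model: dict, stages: List[Stage],
                          train_step: Callable[[dict, object], None]) -> dict:
    """Run easy-to-hard stages in order; each stage continues from the
    state produced by the previous one."""
    for stage in stages:
        for _ in range(stage.epochs):
            for batch in stage.batches:
                train_step(model, batch)  # one optimizer update on this batch
        print(f"finished stage: {stage.name}")
    return model

if __name__ == "__main__":
    toy_model = {"steps": 0}           # stand-in for real model weights
    def toy_step(m, batch):            # stand-in for a forward/backward pass
        m["steps"] += 1

    stages = [
        Stage("single-modal language understanding", range(100), epochs=2),
        Stage("multimodal caption-image pairing", range(100), epochs=2),
        Stage("reasoning: code from visual input", range(100), epochs=1),
    ]
    print("total steps:", train_with_curriculum(toy_model, stages, toy_step)["steps"])
```

The same pattern would extend to the multi-turn alignment step mentioned above: a later stage simply uses multi-turn conversational data and a signal for staying on the user's intent.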
Michael Marks: That's fascinating. And there is a lot of stuff right there, between scale and how you train models and so on. So let's just dive into the open versus closed question, because it probably gets oversimplified when people talk about it.
But why don't we start at the high level: what does it mean to be open source versus closed source? And then we can dig into what implications that has on some of the other issues.
Senthil Kumar: That's a great place to begin, Michael. At an elevated view, the debate reflects two competing visions, two schools of thought.
How should AI evolve and be governed? On the one hand, advocates of open AI argue that transparency in models, data sets, and methods fosters trust, accelerates research, and democratizes access to advanced technology. It allows startups, researchers, and even developing nations to innovate without being locked out by proprietary systems.
It also makes it easier for us to audit the models, spot risks, and build safeguards collaboratively. On the other hand, proponents of closed AI stress the importance of control, safety, and commercialization. They argue that frontier models are too powerful, too dangerous to be fully open, and that keeping them closed helps prevent misuse, protects intellectual property, and ensures the kind of alignment and oversight necessary at scale.
But here's a subtle nuance, Michael. This is not a binary conversation, right? It is about how we create hybrid models of openness, where certain capabilities might be open for innovation and education while others are restricted for safety and national security. Now, it's about evolving governance frameworks that are as intelligent as the models themselves.
Now, what's truly at stake, I believe, is this: who gets to shape the trajectory of intelligence in the digital age? Will it be a handful of corporations, or can we architect a more inclusive, distributed, and responsible future? Now, this is a big philosophical and technical divide, and all the major AI companies are seemingly taking a stance.
Meta was closed at first, then it open-sourced its Llama models, right? OpenAI, of course, started very open, but then later pulled back and decided not to open source some of their models because they felt it was too dangerous to release them to the public. And that's why this debate matters. It's not just a technical issue; I believe it's a societal one that deserves more attention.
Michael Marks: Well, it's funny that you mentioned Meta being closed and then open, and OpenAI being open and then closed. So I know that there's lots of debate about this, but you're a technologist, so I wanna know: where do you stand?
Senthil Kumar: That's a great question, Michael. And I think it's one that cuts to the heart of not just how we build AI, but why we build it. As a technologist,
I deeply appreciate the tension here. On the one hand, openness has been the engine of innovation in our field. The internet, the early days of machine learning, the open software ecosystems, they flourished because ideas were shared, refined, and scaled through collective effort. The open release of foundational models like BERT and Stable Diffusion empowered thousands of researchers and developers globally, leveling the playing field.
But I recognize that AI today is not what it was five years ago. We are no longer talking about academic prototypes. We are dealing with models that can generate code, impersonate humans, and manipulate narratives at scale. So the stakes are existential in some domains, and it is entirely reasonable to apply precautionary principles when releasing these capabilities into the wild.
That said, I personally advocate for a tiered openness model, where we open up the scaffolding, the ideas, the governance principles, and the safe building blocks, and remain selective about releasing highly capable general-purpose models that could be misused. Now, this is similar to how we share research in synthetic biology or cybersecurity: with control, transparency, and responsible disclosure.
And as builders, we have a responsibility to create the conditions for safe progress, not just faster progress. So where I stand today is, uh, to advocate for a tiered openness model.
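As a rough illustration of what such a tiered openness model could look like in practice, here is a minimal, hypothetical Python sketch. The tiers, risk thresholds, and released artifacts are assumptions for illustration, not any established framework.

```python
# Hypothetical sketch of tiered openness: map an assumed misuse-risk score to a
# release tier, and release only the artifacts allowed at that tier.
from enum import Enum

class Tier(Enum):
    SCAFFOLDING = 1   # ideas, governance docs, safe building blocks
    GATED = 2         # weights under a restrictive license, gated access
    RESTRICTED = 3    # highly capable general-purpose models, API only

RELEASE_POLICY = {
    Tier.SCAFFOLDING: ["papers", "eval harness", "safety guidelines", "small reference models"],
    Tier.GATED:       ["weights (gated download)", "license with enforceable use restrictions"],
    Tier.RESTRICTED:  ["API access only", "independent audits", "red-team reports"],
}

def classify(misuse_risk: float) -> Tier:
    """Map an assumed misuse-risk score in [0, 1] to a release tier."""
    if misuse_risk < 0.3:
        return Tier.SCAFFOLDING
    if misuse_risk < 0.7:
        return Tier.GATED
    return Tier.RESTRICTED

if __name__ == "__main__":
    for risk in (0.1, 0.5, 0.9):
        tier = classify(risk)
        print(f"risk={risk}: {tier.name} -> {RELEASE_POLICY[tier]}")
```

The point of the sketch is only the shape of the decision: openness becomes a graded design choice per capability, rather than an all-or-nothing stance.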
Michael Marks: Let me dig into different aspects of that. Let's start with the economics around the models. So there is an argument, and I think an appropriate argument to some level, that keeping the models closed allows the creator of the model to collect money from the people who use it.
And the more money, the more they can invest in creating the product, as opposed to an open model where maybe you can collect a little bit, but it's very different. And so the motivations of those two parties are very different. So where do you stand on the economics of this debate?
Senthil Kumar: This is one of those rare debates where both sides are not only valid, but essential to the innovation ecosystem. The real insight, I think, Michael, lies in recognizing that open and closed development do not exist in opposition. They exist in a kind of creative tension, and that tension is what drives progress.
Historically, open source has been the great accelerator for technological adoption. It lowers the barrier to entry. It invites a global community to iterate and uncover edge cases and creative uses at a pace no single organization can match. Now, we saw this with Linux, we saw it with TensorFlow and PyTorch, and more recently with open models like Llama, Mistral, and Falcon.
Now, these ecosystems do not just move fast, they evolve collectively. At the same time, we have to acknowledge that many of the breakthrough capabilities we marvel at today, GPT-4, Gemini, Claude, came out of closed labs, backed by enormous investments. These models often require hundreds of millions in compute, curated data sets, and teams of engineers and researchers working in close-knit coordination.
So we usually do not see innovation at this scale arise from grassroots efforts, not without a seismic shift. So where does that leave us? I believe we are seeing the contours of a hybrid innovation model emerging, where closed labs push the boundaries of what is possible and open ecosystems ensure that what's possible becomes widely usable, safe, and extensible.
Now, one generates the breakthroughs; the other democratizes and refines them.
Michael Marks: Well, it's interesting. Maybe I'm gonna put this in different terms than you put it, but we have a company in our portfolio H2O, which started as pure open source. We got every benefit you just described, which is lots of people innovated on it, lots of people used it and moved it all along.
But from a company standpoint, um, it wasn't sustainable because the company couldn't generate enough capital to pay for itself. So then we started a closed system, but with very specific objectives, like in financial services, in government management, things like that. And now we have both. We have an open source model that people can use and innovate.
as they would like, and we also have a closed model where we have an opportunity to recover our costs. When you say these things can coexist, do you see it being collectively some open, some closed, or do you see it more like a single company, like H2O, which has both open and closed?
Senthil Kumar: I think I would see it in the larger context, Michael, aligned with the larger rollout. It plays out at a larger scale than a single organization managing it.
Michael Marks: Okay. That's a perfectly good way for me to segue into the work that you actually do on a day-to-day basis, which is at Slate.ai. Is it open, is it closed? Is it some of each? How are you thinking about it in that specific application, which is using AI to help with construction projects?
Senthil Kumar: Oh, yes. That's a very important question, one I believe every serious tech founder has to confront early on. Open sourcing is a powerful lever, but it may not always be the right one, especially when you're building differentiated capabilities at the frontier. At Slate, we are not open sourcing our technology at the present time, but some aspects of it, like the construction language model, we may open source at some point.
Now, in our case, we have made a deliberate choice not to open source, at least not yet. Because we are focused on creating long-term defensibility, we stay closed while we are compounding value. And when the moment is right, we may open up, not to chase adoption, but to empower an ecosystem where we can serve with real conviction.
Now at present, we are in the phase of value compounding, not value distribution. So defensibility is in the full stack experience, not just the model, but the orchestration, the outcomes, the guarantees. That's what our customers pay for and that's what our investors bet on.
Michael Marks: Well, look, I like that answer.
And for the listeners here, you know, there's a lot of discussion about this in public. We know that Meta went from closed to open, OpenAI went from open to closed, H2O went from open to a hybrid, and we just heard Senthil say Slate is closed but open to opening things up.
So I think the listeners' takeaway is that the details matter here. It matters what the applications are, it matters who's creating them, and there are gonna be different answers for different companies. But that's also a good segue into the trust portion of this discussion, right?
There's a view that open sourcing creates more trustworthiness, because if it's closed, you don't know what's behind the firewall; you don't know how this stuff was created. Do you have a point of view on the trust issue here?
Senthil Kumar: We do not need to choose between reckless openness and opaque secrecy.
What we need is a principled middle path: tiered access frameworks where capabilities are open in safe contexts but gated when there's high misuse potential; licenses with enforceable guardrails, so that openness does not mean unaccountability; and independent auditing and red teaming, even for closed models, to ensure safety without full exposure.
Now, the goal is not just open AI, it is responsible openness: openness with wisdom and transparency with guardrails. And that is the future I think we need to design for, one where innovation and safety aren't seen as trade-offs, but as responsibilities we can carry together.
Michael Marks: So can you give an example of guardrails? I mean, everybody has this concern: you open source powerful AI, bad actors can use it. You just used some examples of it, you know, with fake information, all this kind of stuff. How do you put guardrails around open AI?
Senthil Kumar: We need to understand the usage, actually how people are going to be using it and so forth.
And what levers can we pull so that this information cannot be misused, or the core power of the AI cannot be misused? This morning I was listening to a news snippet where people are able to create fake passports and fake IDs using some of these powerful models. Without any guardrails, without any controls on top of them,
it's gonna be extremely difficult to release these kinds of powerful models out in the open. But how do you stop that? We have to design the models from the ground up for these kinds of use cases, and if the model is going to be prompted to help with a fairly complicated scheme like that, it has to know that.
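What Senthil describes, refusing certain uses and informing the model's operators, could look roughly like the minimal, hypothetical Python sketch below. The disallowed categories, keyword matching, and logging hook are illustrative assumptions; real systems use trained safety classifiers rather than keyword lists, and no specific vendor's guardrails are shown here.

```python
# Hypothetical sketch of a guardrail wrapper: screen a prompt against
# disallowed-use categories, refuse on a match, and log the attempt so the
# operators are informed.
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrail")

DISALLOWED = {
    "forged documents": ["fake passport", "fake id", "forged license"],
    "malware": ["write ransomware", "build a keylogger"],
}

def violates_policy(prompt: str) -> Optional[str]:
    """Return the violated category, or None if the prompt looks allowed."""
    lowered = prompt.lower()
    for category, phrases in DISALLOWED.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None

def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    category = violates_policy(prompt)
    if category:
        # "Inform the creators": record the refused attempt for review.
        log.info("refused prompt (category=%s): %r", category, prompt)
        return "Sorry, I can't help with that."
    return generate(prompt)

if __name__ == "__main__":
    echo_model = lambda p: f"[model output for: {p}]"  # stand-in for a real model
    print(guarded_generate("Help me make a fake passport", echo_model))
    print(guarded_generate("Summarize the EU AI Act's risk tiers", echo_model))
```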
Michael Marks: So your idea is that for certain questions and certain uses, the AI will shut down and say, no, you can't do that, and it's going to inform the creators about it. All right. I hope there's a way to do that. So, look, you give advice to different countries and different governments and so on.
Let's dive in on that. 'cause, you know, not all countries have the same approach and the same ideas about transparency and guardrails and all of that. Are some countries getting this right and others wrong? What's your view about that?
Senthil Kumar: I have had the opportunity to advise on AI strategy in both public and private sectors.
And I can say this: unfortunately, no one has it fully figured out yet. Some regions are moving in the right direction. The EU, for example, with its AI Act, has taken a risk-based approach, one that differentiates between low-risk use cases and high-risk systems like those used in healthcare, education, and law enforcement.
It's not perfect, but it is a signal that governments can take AI seriously without freezing innovation. The UK is exploring a sectoral, agile approach, which avoids overregulating too early but emphasizes accountability. Singapore is quietly emerging as a global model for AI governance by design, embedding safety, explainability, and transparency into the fabric of national innovation.
The US has world-leading AI talent and institutions, but we are still a bit behind on regulatory cohesion, and what is needed isn't heavy-handed control, but clear guardrails, transparent reporting standards, and shared safety protocols across platforms.
Michael Marks: But we do know that the world has trouble coming together on any kinds of standards, right?
So, and what you just said is a kind of a classic technologist answer. Let's all figure this out and make sure that we're all doing it the same, but it's unlikely to develop that way. So do you see a world where there's like an American AI system, a Chinese AI system, a Singaporean AI system, or a European AI system?
Because all governments bring different things to the table when they set regulatory policies. So what do you think about that?
Senthil Kumar: It's not, uh, whether there's an AI for the US versus Europe versus Singapore, but definitely local standards will take precedence and prevail. Some of these are localized standards, localized controls.
The Chinese are going to have their own standards compared to the Americans and so forth. So eventually the systems will all converge at some point in time, but for a period of time we're going to have each jurisdiction enforcing its own regulations on top of what AI can do.
Michael Marks: You also recently spoke at an AI for developing countries forum.
So we just mentioned all the developed countries, right? Are there uses for AI in developing countries that are different from the uses in the developed ones? Maybe more in terms of developing agriculture or infrastructure, things like that?
Senthil Kumar: Certainly. The use cases for AI in developing countries are quite different from what we are looking at here.
They don't need autonomous vehicles driving in the streets and so forth. As you rightfully said, agriculture is core to them, and there are AI technologies that could improve that. Affordable housing is one more area: can smart systems help design a house that is tuned to the local ecosystem and, you know, in line with their economy and so forth?
So these are areas where AI can definitely help the developing nations, as opposed to the advanced countries.
Michael Marks: Do you think there's a chance that AI will help the developing countries develop faster? You know, sort of close the inequality gap, because they have better tools, if you will?
Senthil Kumar: It'll definitely do that, actually.
And the developing nations have to wake up to the fact that AI is here to help them with all of these things. They can leverage the learnings that all the developed nations went through, and using that, I think this will empower the developing nations to step ahead and move forward, move fast.
Michael Marks: Well, let's hope so. I mean, one of the things we all hope for is that these new tools that the technologists of the world develop will make the world a better place. So we've talked about, you know, safety and security and transparency, but if the poor countries could become less poor faster, that would be a wonderful use of open AI.
Okay. Well, we're gonna wrap up shortly here. You've got a crystal ball; what do you see happening? Just describe how you see this whole closed versus open question, which is the centerpiece of the discussion today. How do you see that evolving? You've talked about how you'd like to see it evolve, right?
Yeah, which is: there are no bad actors and everybody puts the proper guardrails on it. But is that how it's gonna turn out? What's your view?
Senthil Kumar: All right. So a few closing thoughts here, Michael, because we are at an inflection point, and what happens next in AI won't just be shaped by breakthroughs in code, but by the choices we make as a global community.
So looking into the near future, I believe a few key developments will shape whether we lean more towards open or closed. If we go through them: first, trust will be a forcing function. As AI systems become more powerful and more embedded in everyday life, people, institutions, and governments will demand transparency, auditability, and explainability.
That will naturally push parts of this industry towards openness, not as a philosophical stance, but as a necessity for credibility. Second, risk will shape policy. If we face a major AI-related crisis, whether it's misinformation at scale, security breaches, or autonomous system failures, we could see a regulatory shift towards tightly controlled, closed systems.
Michael Marks: Tightly controlled,
Senthil Kumar: both open and closed systems, true. And safety may override accessibility in the short term. Third, market forces will play a quiet but decisive role. If open models prove commercially viable, or if open ecosystems become the default infrastructure for innovation in emerging markets, then openness becomes not just ethical, but economical.
But here's a more realistic scenario to consider, and something I believe: the future won't be fully open or fully closed. It'll be selectively open, strategically governed, and deeply contextual. Now, the real shift I foresee is not in one direction or the other, but in a new paradigm, one where openness is a design choice, not a dogma,
where trust becomes the currency of adoption, and where power is measured not by control, but by contribution. If we want AI to serve the many and not the few, we will need to design systems that are not only intelligent but inclusive, not only powerful but principled, because the legacy of AI won't be what it can do, but who it'll empower.
Now, the future of AI will not be determined by openness or closure alone; it'll be shaped by governance, trust, and global intent. So across boardrooms and policy rooms alike, we must come to terms with a simple truth: if you think about it, AI is no longer a technological artifact. It's becoming geopolitical infrastructure, shaping economies, narratives, and the balance of power.
In the end, the legacy of AI will not be written in lines of code. It'll be written in how we choose to lead and who we choose to empower. That is our opportunity and that's also our responsibility.
Michael Marks: Well, I have to say, first of all, thank you so much for joining. I'm gonna make a statement about this.
You know, we're lucky to have this podcast and be able to interview great technologists like you. What I'm always taken by, and again today, is how all the great technologists see technology as a force for good in the world. And I am hopeful that you're all correct. Thank you. Thanks so much for joining us.
Senthil Kumar: Thank you. It's an honor.
Thanks for tuning into the Tech Surge Podcast from Celesta Capital. If you enjoyed this episode, feel free to share it, subscribe or leave a review on your favorite podcast platform. We'll be back every two weeks with more insights and discussions of all things deep tech.