David Sacks, President Trump’s AI czar, joined Hugh on Tuesday:
Audio:
Transcript:
HH: So pleased to welcome back to the program David Sacks. David is the chair of the President’s Council of Advisors on Science and Technology. He’s the AI and crypto czar. He’s actually a double czar, which is really something I haven’t heard of before. There’s czars all over D.C., but he’s a double czar. David, are you the only double czar you know of?
DS: As far as I know, yes. I guess that’s right, Hugh. Good to see you again.
HH: It’s good to see you again, and thank you for spending time with me. I want to remind people before we start, David went to Stanford, he’s a Chicago lawyer, so he knows what he’s talking about on the law. He’s one of the original PayPal mafia, he’s been on the show before. He’s on the All In podcast, so it’s a smart guy alert. David, I want to remind you we’re talking to Steelers fans, so we’ve got to slow everything down. First question is out of the box. When Oppenheimer blew up the bomb, he turned to his friends and said, “Now I am become death, the destroyer of worlds.” Have you had that feeling come over you as you deal with AI as the White House czar, yet?
DS: Well, Hugh, I’m personally not that worried that AI is going to be this catastrophic technology. I understand that a lot of people are afraid of it. And in fact, for the last couple of years, you’ve heard this analogy that AI is like nuclear weapons. I think that analogy is highly overstretched, and the reason I say that is because first and foremost, AI is a consumer technology. The best AI models are the ones that you can get as a consumer. You know, whether it’s ChatGPT or Grok, or the new Gemini model which is available on Google search, you know, the best AI models are ones that you can use for yourself. And so every consumer is going to have a powerful AI model in their pocket, on their phone. Every business is going to use this technology. And I do think that makes it quite different than nuclear weapons, although certainly, there is, you know, a potential dual use there that AI is a technology that can be used for military purposes. And I certainly acknowledge that. But I think it’s also, like I said, a very powerful consumer technology, and that does make it quite different than something like nukes.
HH: Well, a few days ago, I was on the Special Report panel, and was asked by Bret my opinion of the President’s AI E.O., and I said I’m all in favor of it, because there is no silver medal in AI. There’s only a gold medal, and we have to win it. China can’t win it. Because it has national security ramifications that are immense, and consumer ramifications that are immense, but mostly national security, we can’t afford to come in second. Was that a fair statement?
DS: Yes.
HH: And…
DS: Yes, I think it is. I mean, I think we completely agree on this, Hugh. We are in an AI race. It’s a global competition, and at the national level, our main competitor is China. They’re really the only other country that has the talent, the technology, the expertise to potentially beat us. I don’t think they will if we do the right things, or keep doing the right things. Under the President’s leadership, we are ahead in the AI race, but it is definitely a competition. It’s going to have major ramifications for our economy and for our national security, and it’s definitely a competition we want to win. And that’s where you and I are completely aligned on this, Hugh.
HH: So the thing we need to then explain to people is why can’t we catch up once we fall behind? Is there an exponential sort of growth in advantage that occurs to whoever makes the breakthroughs in AI first?
DS: Well, that’s possible, but I think we just don’t want to fall behind, because once you do, it’s hard to catch up. I mean, there are moats that get established, and whether they’re business moats, or they’re research and development breakthroughs, it’s definitely the kind of thing where you don’t want to fall behind. Another reason why you don’t want to fall behind is that in technology races, what we see is that the companies or the technologies that build the biggest ecosystems are the ones that win. And this is why our leading technology companies want to do things like have app stores or build platforms is because if you get the most apps in your app store, or you get the most developers writing on top of your platform, then that creates powerful lock in. And we don’t want to get into the situation where we’re behind China in those areas, and they build the biggest ecosystem, and then the whole world will be using Chinese technology instead of American technology. And then, it’ll be Chinese values that are spreading across the world instead of American values. And that’s definitely a situation we don’t want.
HH: Now as soon as I made my sweeping endorsement, I guess it was to Aishah, not Bret, Phil Wegmann over at Real Clear Politics chimed in: then why are we allowing them to buy Nvidia’s last generation of chips? To which my only answer is they’re not that good compared to Blackwell, which comes next. Why are we allowing them to buy the last generation as opposed to Blackwell?
DS: Right. I think that’s a fair question, Hugh. I mean, here’s the way we see it, is that the American policy, and this is bipartisan. It goes back into the Biden administration, is that China is not allowed to buy our leading-edge chips, but they are allowed to buy our lagging chips. The H200 chip that you’re talking about, and before that, the H20 chip, it is not a leading-edge chip. It may have once been a leading-edge chip, but it is now a lagging chip. And the reason why you’re willing to sell them lagging chips is because it takes market share away from their national champion, Huawei. And that’s important, because it deprives Huawei of revenue that they can use to then scale up. And also, there’s a very important developer/network effect built on top of these chips. So it’s not just about the chips. It’s about the software that’s running on top of the chips. Nvidia has this operating system called CUDA. There’s a bunch of developer tool kits, tool chains and libraries that run on top of CUDA. So if you allow China to use the old generation of Nvidia chips, then at least you keep them dependent on that software ecosystem. China has several hundred thousand AI developers, and they would rather use the Nvidia ecosystem and stay on Nvidia. But if you force them onto the Huawei ecosystem, then they will start writing software for the Huawei stack. And that would be something that’s undesirable. So we think there’s some value in maintaining a Chinese dependency on the previous generation of chips. Now for that very reason, the Chinese government, according to an article in the Financial Times, is saying they’re not going to allow the Nvidia chips in there. So I think the Chinese government sees the same thing that we do, which is they want to be semiconductor independent the same way, Hugh, that we wanted to be energy independent as a nation.
They want to be semiconductor independent, and they’re saying they’re not going to allow those Nvidia chips in, because they want to prop up and boost up Huawei, their national champion. So this is, you know, this is a case where I think that there’s always going to be room for disagreement, because there’s that, you know, where exactly we draw that line between leading and lagging chips is always going to be up for debate. But it looks to me like right now, the Chinese government’s not going to allow in our chips, because they want to be independent.
HH: Well, if they don’t allow in the old chips, that is a complete answer to the argument that Philip put forward, because then they want to grow their indigenous chip sector.
DS: Yes. There’s no question they want to indigenize. They definitely want to indigenize their chip sector. There’s no question about it.
HH: Now let me ask you…
DS: And just one last thought on this, Hugh, is that China is the second-largest market in the world for chips. So if China is able to indigenize, and they give that market exclusively to Huawei, Huawei will use that to scale up, achieve critical mass, and they will then export chips at low cost all over the world. So that is basically their strategy. And again, if we can take share away from Huawei and slow them down from doing that, then that’s a worthwhile pursuit. So that is the thinking, but I think that the Chinese government sees the same thing we do, and that’s why they are essentially giving this huge market subsidy to Huawei.
HH: That makes perfect sense to me. I’m coming right back with David Sacks. He is chair of the President’s Council of Advisors on Science and Technology. He’s the AI/crypto guru, the double-hatted guru. Don’t go anywhere, America. I’ll be right back with David Sacks after the break.
— – – — –
HH: David, one market question. I’m not a market guy. I don’t give market advice. But I read the Journal and the Financial Times. Is there an AI bubble, in your view? Are we going to see a rapid deflation in value in some sectors in the AI realm?
DS: I don’t think there’s a bubble, Hugh. I would say that there’s a lot of volatility, because anytime you’re underwriting to a very big outcome in the future, and we’re talking about investments that are being made on, you know, a 5-to-10-year time frame, there’s going to be a lot of volatility in that. But you know, one important indicator here is that all of the GPUs that are being bought and run in data centers are being used. I mean, there is tremendous demand for this type of computing/processing power. That makes it a little different than the dot-com bubble. If you go back to the late 90s and we had this big fiber buildout, there was a lot of what was called dark fiber back then where, again, there were these huge investments being made that weren’t being used, yet. Now eventually, all that fiber got used, but people realized in the year 2000 that the fiber wasn’t being used, and we had a big crash. In this case, all the GPUs are being used. So there’s tremendous end user demand for these services.
HH: Now I want to thank you for one thing. You’re a very successful investor, innovator, entrepreneur and lawyer, and you’ve agreed to go back to D.C. How do you find the D.C. bureaucracy compared to the Silicon Valley entrepreneurial investing environment?
DS: Well, it’s a big culture clash. I mean, these are two very, very different cultures. Silicon Valley obviously moves very quickly. There’s tremendous competition there. Companies perceive that they’re in a race for survival a lot of the time. And they just move very quickly. And there’s also a mentality that, like I mentioned before, you want to build the biggest tech ecosystem. So there is a partner mentality. You want to encourage other people to develop software on top of your applications. D.C. is a little different. It’s a different mindset. It’s a little bit more of a command and control mindset. It’s a little bit more bureaucratic, obviously. So things move more slowly, and people update their assumptions a little bit more slowly about what’s happening. So two very, very different cultures.
HH: All right, now I’m saving the vegetables for the last segment and the segment on the podcast, which has to do with preemption. And I’ll explain why when we get there. Let me give you the last sexy question. In my world of punditry, there’s no one greater than the late Dr. Charles Krauthammer. He proposed the theory that the reason we’d never heard from another advanced civilization is that all advanced civilizations always blow themselves up, that they always take their technology to the logical level of destroying each other. Skynet. Do you view that as an inevitability?
DS: (laughing) Well, I’ve seen those movies, too, and I understand the concern. But I think it’s pretty much in the realm of science fiction. We don’t really see that happening right now. What I would recommend that everyone do is just try these new tools. You go to ChatGPT or you go to Grok, or you use Google, and what you’ll see is that these AI chatbots are good at giving you answers, but it’s really like a better web search. It doesn’t have a mind of its own. It can’t provide its own objective. It doesn’t try to figure out what it’s supposed to be doing other than what you tell it to do. And there’s really no evidence otherwise. So you know, we’re building these. They’re called large language models. And that’s quite different than I think what’s portrayed in a lot of the science fiction.
HH: Oh, in my law school faculty meetings, all we hear about LLMs is that we can’t give take-home exams anymore. Okay, that’s a change, but it’s not the end of the world, because people will go home. Will it wipe out the millions of jobs that the doomsayers say? I don’t think it will, but what do you think?
DS: No.
HH: Why not?
DS: I don’t think so, and we don’t see any evidence of that. In fact, it’s quite the opposite. We’re seeing a job boom right now. So you look at the buildout right now of data centers and infrastructure, construction workers are seeing their wages increase 30% because of the demand. In fact, in many categories of labor, we have unfilled jobs. It’s everything from plumbers to electricians to carpenters, the people who pour concrete or hang drywall, drivers, even architects and engineers. There’s a tremendous demand being created right now. And so wages are going up, because we can’t hire enough labor. So it’s just not the case that we’re seeing a lot of job loss, and in fact, there was just a study by the Yale Budget Lab saying that in the 33 months since the launch of ChatGPT, there’s been no discernible disruption to the labor market. They’re seeing no major job loss. So again, you’re hearing a lot of these doomer forecasts, but we’re not seeing any of it in the data so far.
HH: And a quick exit question to the next segment, how interested is the President in this? Does he call you up and say, “David, tell me about this every day?” Or is it quarterly or monthly? How interested is he?
DS: I think he’s very interested in this. I think that he wants America to win. He’s the one who declared that we have to win the AI race in his AI speech on July 23rd, and he understands how important this is for our economy. Right now, AI is providing roughly half the growth of the economy. The President gets that, and he wants us to win.
HH: When we come back, we’re going to talk about the E.O., and that is the vegetables. But you’ve got to understand it, especially you governors who are pissed off out there. Stay tuned.
— – – — –
HH: Now we get to the meat and the vegetables of the discussion. Last week, President Trump issued an executive order preempting the states from regulating artificial intelligence and its development. I stood up and cheered, because I teach preemption every year, and I explain to people the Congress was given the Interstate Commerce authority to make sure that the states didn’t screw it up. But I’ve never seen it done by executive order before, David. Now you’re a University of Chicago-trained lawyer, so you’re a smart guy. Have you ever seen an executive order used to declare preemption before?
DS: Well, Hugh, we’re being careful about what we can do by executive order and what has to be done by legislation. So in the executive order, we recognize that we need Congress to enact a federal framework. And so we’re asking for a law, and the E.O. tasks members of the administration to work with Congress to deliver that framework. We again want a bill that the President can sign. And the executive order provides a set of principles that we’d like to see contained in that bill. And then furthermore, it contains a set of tools that the federal government can use to push back on the most onerous examples of excessive state law in this area. So we are recognizing that this E.O. is not the preemption itself. It’s more of a roadmap to the preemption, but we will ultimately need a law from Congress.
HH: All right. Then, we are in 100% agreement. And it’s a good message to the governors and the state legislatures to stay away. Let’s get into the nitty gritty. Section three of the E.O. reads, “Within 30 days of the date of this order” (and that was December 11th) “the Attorney General shall establish an AI litigation task force whose sole responsibility shall be to challenge state AI laws inconsistent with the policy set forth in Section two of the order,” and that’s the policy saying the feds are going to clear the ground here. And the task force shall consult from time to time with the special advisor for AI and crypto. That’s you. So are you going to have to sit down with a bunch of lawyers from Justice every now and then and talk about screwing the states out of screwing the feds?
DS: (laughing) Well, so this is one of the powers, or one of the tools, that’s created by the E.O. And let me just say that the DOJ already has the power to challenge state laws. So this is not a new power. But what the E.O. is doing here is enumerating and marshalling all of the tools of the federal government behind the President’s policy to create a national framework. And so yes, we are going to meet with the DOJ, this task force, to try and figure out, again, what are the most egregious examples of state laws that we should oppose. And we’re going to oppose them either on 1st Amendment grounds or on Interstate Commerce grounds. These are areas where the states do not have the right to impose these excessive regulations. So that is the purpose of that. But let me just say that there are areas where we will not challenge the states, and this is important, too.
HH: Oh.
DS: What we’ve said is that we recognize, we recognize that the states have an interest in protecting child safety, for example. And furthermore, we realize that localities have an interest in choosing what their local infrastructure is. We’re not going to interfere with the rights of communities to decide whether they want to have a data center or not. So there are areas that we’re not going to challenge, and those are enumerated in this executive order as well.
HH: Now you’re in Northern California and I’m a Southern Californian until I fled a decade ago because it had fallen off the cliff. We can’t let California screw this up. Are they trying to screw, they’ve screwed everything up. Literally everything they touch, they screw up. Is California trying to screw up AI?
DS: Yes. I mean, by the way you and I would define it, they are absolutely screwing this up. They already passed a bill called S.B. 53, which creates this reporting regime for AI models, and it’s burdensome in and of itself, and now multiply that by 50. You’re going to have AI companies trying to file reports with different definitions in 50 states to 50 different state regulators with 50 different deadlines. This is going to be an absolute morass, and it’s going to be very expensive. Look, the big tech companies can always afford to comply. They’re going to figure it out. But it’s the little tech, it’s the start-up founders and entrepreneurs, the innovators who are going to get tangled up by this. And this is where we start to impede innovation, and it could cost us this AI race with China. So you know, what we need to do is move to a federal framework, not a patchwork of 50 different states. And it’s true that, you know, California is the state that has the most market power, and it has the most nexus to AI companies. So you know, I know there’s a lot of red state governors who are saying that they’re opposed to the federal role as well, but they can’t stop Gavin Newsom and the local assemblyman, Scott Wiener, who is writing all these laws, from foisting these laws on the entire country, because you know, almost all of the major AI companies are in California. And they’re going to have to abide by these California laws. And those California laws are therefore going to become de facto for the rest of the country. So you know, I understand some of the concerns of the red states, but they need to understand that President Trump is the only elected official in America who can protect the red states from what California is going to do. And you know, it’s not just Gavin Newsom. It’s Scott Wiener, for example. They already passed S.B. 53. Wiener’s got 17 more bills ready to go.
And again, these are going to become de facto law for all of America because of California’s market power if we don’t move to some sort of national framework under President Trump’s leadership.
HH: Well, then, you’ll need a whack-a-mole lawyer or phalanx of lawyers over at DOJ to spread out over California. And every time they propose a reg or statute, challenge it under the Negative Commerce Clause. Now Clarence Thomas doesn’t like the Negative Commerce Clause, but I think it’s clearly there. Is that what the AI litigation subsection is all about, standing up people to go out there and hit them as soon as they come up with a bad idea?
DS: Well, it does give us the power to do that. But I think in practice, we’re going to have to be selective and decide where to pick our battles. And again, what we say in the executive order is we want to go after the most onerous and excessive state regulation. So California is certainly capable of producing those regulations, and if they’re unchecked, again, this is not a stable equilibrium. They’ve already passed, I think, several AI laws, and there’s dozens more waiting to go. So yes, at some point, they’re going to do something that we just absolutely cannot accept, but we’re going to be selective about picking these battles. We want to make sure that we win these battles in court, and the best arguments are going to be 1st Amendment arguments and then Interstate Commerce arguments.
HH: They are clearly the best. When we come back, I’m going to continue my conversation. It’ll be on the podcast, because preemption is not for the faint of heart. I didn’t put too many vegetables on your plate, but I assure you this is the executive order for which President Trump will be remembered for decades, if in fact it works and we establish for national security purposes an AI advantage which is insurmountable. And I think we’ll grow exponentially once we get it going. David Sacks will be right back with me after the break.
— – – – —
HH: We’re staying far away from crypto, because that makes my head hurt. David, tell me about this. Mike Gallagher, former Congressman, really smart guy now with Palantir, was on earlier this week telling me that Palantir is hooking up with the Department of the Navy to build ships faster, which is music to my ears, because we need ships and we need ships faster. What kind of advantages does AI bring to national security?
DS: Well, Hugh, there’s a couple of things. First of all, economically, we’re not going to be the world’s leading economy if we’re not number one in AI. It’s such an important driver of future growth and innovation that we can’t maintain our economic supremacy unless we are the leaders in AI. And if we cripple our innovators and the development of AI by having the state by state regulatory approach, this patchwork of confusing regulations that are often contradicting each other, there’s no way we’re ever going to be able to maintain that lead. So I’d say the economy is number one. Number two is that there are going to be powerful applications that are developed militarily. We know that for example, AI is the, you could call it the brain of future drones, autonomy, things like that.
HH: Well, it’s Golden Dome. It’s the Golden Dome. It doesn’t work without it.
DS: Golden Dome. Yes. Whether you’re talking about offensive weapons, drones, autonomy, things like that, our defensive, the missile defense shield, those systems, the guidance systems are going to be powered by AI algorithms. And you can imagine that wars of the future are going to be algorithm wars. So it’s very important that we stay on the cutting edge of this technology.
HH: So my last two questions. Have you invited the state regulators, the state wannabe bigshots in AI (for a variety of reasons, there are a lot of incentives to want to be the David Sacks of California or the David Sacks of Iowa or the David Sacks of Florida), have you invited them all to D.C. to sit down and talk to them about why you’ll keep them informed, but they really can’t screw things up?
DS: Hugh, I did a call last week with the governors and their staffs to talk about the E.O. and walk them through what was in it and what was not in it, that sort of thing, and I think we’re going to do another call in the next week. So I am starting to have those conversations. And if any of them want to meet with me in D.C., I’m happy to do that as well. So we are trying to keep the dialogue open. And I think it’s important. I understand why our governors feel they have the right and responsibility to protect their state populations, but they just need to understand that there’s a huge externality created when you have 50 different governors or 50 different states running in 50 different directions. And that’s the issue, and only…
HH: Yeah, I came out of environmental law, and all it is, is the nightmare of non-preemption. Here’s where I want to close with our last four minutes. You’ve got a variety of committees in the Senate and the House. They all want to be big. You’ve got 25 different agencies. They all want to expand their writ. Do we need a new agency for AI to be set up by a Congressional statute as the best way to go forward for a framework for AI development and regulation?
DS: I would lean against that. I think a preferable approach would be to allow our existing agencies to govern those aspects of AI that pertain to them. So for example, if there are concerns about the use of AI when it comes to drug development or something like that, I’d rather see the existing agency, which I guess would be the FDA, deal with that rather than create a new agency that would then be in competition with the FDA and so forth and so on down the line. So it’s called a sectoral approach, and I tend to think that the sectoral approach would be better, but we haven’t made a final determination on that.
HH: Oh, I’m going to want a chance to argue you out of that, only because I spent so long with the feds.
DS: Okay, well tell me.
HH: Well, the feds are all full of people who have been doing what they’ve been doing for 30 years, and they’ve been thinking the way federal regulators think for 30 years, and they’ve never been quick on their feet and nimble in their response to changing technology. So you’ve got 30 different bureaucracies that will move at the speed of glue, and they will never catch up to the innovators who are coming up on AI. So I would stand up a new agency. You might want to defer to the FDA’s recommendations, but create a new agency.
DS: Yeah, so you want to see a new agency. That’s interesting. I mean, look, Hugh. We have not made that decision, yet. So I’ll take your opinion under advisement there.
HH: Well, I appreciate greatly your spending time with this. Congratulations on the executive order. I love preemption. I’ve always loved preemption. When I saw it, you made my week, so thank you, David Sacks. And thanks, by the way, for the service you’re giving to the United States. It is not without cost. There’s an opportunity cost to you for doing it, and I appreciate you taking the time and doing it for the country. Thank you, and have a great New Year.
DS: Likewise. Thank you, Hugh. You, too.
HH: Thank you. Bye bye, David.
End of interview.