Breaking Down the Senate AI Policy Roadmap with U.S. Senator Todd Young
It's about values as much as it is about national security and economics
Senator Todd Young on the U.S. Senate AI policy roadmap
The latest updates on AI legislation
Upcoming AI events and conferences, including FT’s Future of AI
Before you go…check out the latest news about AI regulation
“Our ability to persuade others to work with us to harmonize these standards is also a function of the fact that we're trusted. We're not always perfect, we're not always consistent, but in the grand view of things, we are much more trusted than some of the adversary countries I've mentioned. And so this is an economic issue, it's a national security issue, but it's a big-time values issue, as well.” — U.S. Senator Todd Young
The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode of the Regulating AI podcast, host Dr. Sanjay Puri is joined by U.S. Senator Todd Young.
Sen. Young shares insights into AI policy, national security, and the steps needed to maintain U.S. leadership in this critical field.
Read on (or watch the episode below) to catch the entire conversation!
In this episode:
What was the genesis of the bipartisan Senate AI Working Group? Senator Young had a history of working with Senator Chuck Schumer on semiconductors (they co-authored the CHIPS and Science Act), and this was a natural extension of that work. “But really what catalyzed our work and focused our minds was the ChatGPT moment, when it became clear to the American people that we were already in the midst of an AI revolution, and we began to think critically about some of the implications of this,” said Sen. Young.
What are the key priorities you outlined in your AI roadmap? “One of the things that we learned very early on, as we began to prepare to draft this roadmap, was that our existing laws really addressed […] most of the concerns that people had,” said Sen. Young, pointing to privacy laws, consumer protection laws, and the like.
“But the real challenge we saw was adapting these existing statutes and bodies of legal precedent to an AI-enabled world,” he said. He characterized it as “tweaks” and “gap-filling.”
But they wanted to focus on what risks exist and what regulatory barriers there are to innovation. And they wanted to be sure the government fulfilled its obligations, like providing research funding.
“We believe that this world of AI enablement is going to require the United States to revisit some of our workforce training, and the way we educate folks and the laws around all of that,” he said. They want to create a “nimbler” environment in which people can reskill and upskill if some of their work is displaced by automation.
They also wanted to be sure they’re securing experts in the government.
Do you see the government having a role in the workforce transition?
Private sector: He said that one thing they need to do is equip the private sector to go out and train their workers on AI, in whatever line of work they’re in.
Education: He believes we should educate kids K-12, too. “We need our young people to be comfortable with algorithms, [and] have some knowledge of coding, but more importantly, become adept and very used to using AI technologies and applying the latest artificial intelligence technologies to their work,” said Sen. Young.
And he mentioned that the federal government has long invested in paying to send people to post-secondary school.
What about the $32 billion the report said we need? Sen. Young said that figure came from the National Security Commission on Artificial Intelligence several years ago: its estimate was that it would take a $32 billion public investment to make up the research gap that the private sector wouldn’t fill on its own.
How does China factor into the AI competition for the U.S.? Sen. Young said they want the Chinese people to have material prosperity. “But of course, when you look in the aggregate, the productivity of an entire economy, one's national security depends on that and your military wherewithal. And we don't share the values of the Chinese Communist Party,” he said. So they want to be more productive than the Chinese Communist Party.
How does the CHIPS and Science Act tie into your broader vision for AI policy? To ensure we lead the way when it comes to developing AI technologies, there are three components:
Compute (in other words, semiconductors).
Algorithms. “Those are a function right now of the brains of individual coders and teams of coders,” Sen. Young said.
Data. The more quality data you have to train systems, the more powerful those systems will be.
“One critical link which was interrupted in the midst of the global pandemic, exposing a real vulnerability in our supply chains, was the compute, our access to semiconductors,” he said. “The purpose of the CHIPS and Science Act was to reshore much of the manufacturing of these semiconductors so we'd be less dependent on countries that didn't share our values.”
How do we democratize access to AI? “We want some innovators, perhaps, who are not-for-profits, or performing public research projects, or in the earlier stages of developing a business model, [to] have access to some of these chips,” he said.
Regarding open versus closed-source models: “The open source approach allows more people to innovate on top of earlier innovations. […] But we do have to get better at addressing some of the security risks associated with doing that,” he said. For example, we don’t want Iran or Russia to innovate on top of our work.
Do you see any value in harmonization or learning from different laws, like the ones being passed in China, India, and elsewhere? When it comes to, for example, working with China on security, he’s dubious. “But it doesn’t mean we shouldn’t keep talking,” said Sen. Young. “I think it's far more likely that the United States will be harmonizing our technical standards, even our norms around the use of AI technology, with friends and what we call partners.” He thinks that harmonizing domestic laws and sharing a large body of agreed-upon technical standards and norms to inform future rulemaking and international standards will benefit the private sector and investors, and will also be positive geopolitically.
Hear the entirety of Senator Todd Young’s conversation with Sanjay Puri above and on Apple Podcasts and Spotify. Subscribe to the RegulatingAI podcast to never miss an episode.
The latest AI legislation
A.B.2355, California, USA. Political Advertisements: Artificial Intelligence. Mandates that election advertisements disclose the use of content generated or altered by AI; authorizes the Fair Political Practices Commission to enforce these requirements through legal action or other remedies under the Political Reform Act. Status: Enacted.
A.B.2602, California, USA. Contracts Against Public Policy: Digital Replicas. Requires contracts to clearly define the use of AI-generated replicas of a performer’s voice or likeness; mandates that performers must have professional representation during contract negotiations to safeguard their digital rights and protect against unauthorized AI replication. Status: Enacted.
A.B.2655, California, USA. Defending Democracy from Deepfake Deception Act of 2024. Requires large online platforms to restrict materially deceptive election content that has been digitally altered during specified timeframes; exempts certain media outlets under specific requirements; allows candidates and certain officials to seek injunctive relief for platform noncompliance. Status: Enacted.
A.B.2839, California, USA. Elections: Deceptive Media in Advertisements. Expands the period in which entities are prohibited from knowingly distributing election material containing deceptive AI-generated or manipulated content; expands the scope of existing laws to prohibit deceptive AI-generated content related to candidates, elected officials, and others. Status: Enacted.
Check out our live AI legislation tracker
Upcoming AI events
Regulating AI Partners With Financial Times, Future of AI
London | Oct 6-7, 2024 | In-person/Online
The Future of AI Summit returns in 2024 to assess the current landscape for AI innovation and examine real-world use cases for how companies are investing in AI, whilst navigating security, workforce, and ethical concerns. Hear from AI experts across technology, business, and policy, learn about the most exciting advancements in machine learning, natural language processing, and robotics, and find out how they are being scaled for success and growth.
Taking place across two in-person days, the event gathers a cross-sector audience of strategy, innovation, technology, and business function leaders charged with creating, integrating, scaling and commercialising AI.
Explore the cutting edge of AI innovation and network with those shaping the future of AI in business! Check out the agenda.
Registration: Use code REGULATINGAI for a 20% discount. Click Here
The European AI Act Online Workshop
Online | Oct 24, 2024
Artificial Intelligence needs a proper legal framework. The European AI Act, which came into force on August 1, 2024, is the world's first law on artificial intelligence with a broad scope and intensive regulation of so-called AI systems. The impact of the AI Act on developers and users of AI systems in Europe and beyond is currently the subject of intense debate, as is the question of whether it offers a convincing regulatory approach that could serve as a model for legislation outside Europe.
Voice & AI 2024
Arlington, VA | Oct 28-30, 2024 | In-person
As AI revolutionizes customer interactions, VOICE & AI 2024 brings together the conversational and generative AI communities, enterprise leaders, and AI innovators to explore the latest advancements in the field. Join us to delve into AI-powered language services, CX automation, the future of the conversational enterprise, and more — promising limitless connections with the people driving the transformation of customer engagement.
Bookmark this page to keep track of key upcoming AI events
Before you go…
AI regulation gets a bad rap—but lawmakers around the world are doing a decent job so far | Fortune
Emerging AI regulation will shape the future of data collection for business | TechRadar
AI leaders discuss responsibility, regulation, and text as a ‘relic of the past’ | TIME
AI in the election is about more than just misinformation | SciAm
Navigating AI compliance: How global banks balance innovation and regulation | PYMNTS
California’s ‘surgical’ approach to regulating AI is working | Bloomberg
How will regulators adapt to adaptive AI? | Fierce Biotech
Bookmark this page to stay up to date on the latest AI regulation and policy news
Connect with experts, mentors, investors, and organizations to foster collaboration, innovation, and the responsible development of AI!
Find us on Discord, LinkedIn, Instagram, X, Facebook, and YouTube