How AI Is Reshaping Industries and Society with Professor Ruslan Salakhutdinov
An optimistic view of the future of AI
Professor Ruslan Salakhutdinov on how AI is reshaping industries and society
The latest updates on AI legislation
Upcoming AI events and conferences
Before you go…check out the latest news about AI regulation
“I think what will happen in the future is that these agents are going to be so useful to us that we are going to be heavily relying on them to help us with daily tasks. And then there is sort of a fine line as to how much you rely on something versus how much you start thinking for yourself.” — Professor Ruslan Salakhutdinov
In this episode of the Regulating AI podcast, Dr. Sanjay Puri talks with Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University.
Professor Salakhutdinov discusses the pressing need for AI regulation, its potential for societal transformation, and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.
Read on (or watch the episode below) to catch the entire conversation!
In this episode:
What, in your view, are the biggest risks from AI that need to be regulated? Salakhutdinov mentioned three risks:
The potential spread of disinformation, like being able to generate realistic-looking videos or AI agents convincing you of things.
The development of AI in military applications.
The replacement of some labor that’s currently done by humans, and what that would mean for economic development.
Are we getting to the point where everything is fake, and only a few things can be said with certainty to be real? “I personally think that we're still a little bit far from having the technology that's so real that you cannot separate what's real,” said Salakhutdinov. But he said that at some point in the future it will get very difficult for us to discern what’s real or fake.
“In the future, I think you and I would not have this conversation, but our avatars would have this conversation, and nobody would tell the difference,” he said.
There’s obviously going to need to be some kind of regulation around that when it really hits consumers, he added. The research community is working on safeguards, such as watermarking.
On AI agents that will do tasks for us in the future: “These agents will know everything about you,” said Salakhutdinov. “Much like Google and other systems know every time you search on something online, they know exactly what you're trying to do. And at that point, I think that the technology…would become very important because I would want my personal data to be protected.”
What about agents going out of control? Is that a fear you have? “I don’t think so,” said Salakhutdinov. He said it’s hard to extrapolate decades out, but the way these systems work now, they’re still under the control of humans. For example, right now, we give an agent a goal to achieve, and it figures out the steps to achieve it. It doesn’t create its own goals.
“But I think what will happen in the future is that these agents are going to be so useful to us that we are going to be heavily relying on them to help us with daily tasks,” he said. “And then there is sort of a fine line as to how much you rely on something versus how much you start thinking for yourself.”
Are agents going to interact with other agents? Absolutely. He gave the example of when his phone stopped working when he was traveling overseas. He had to call the phone company and spend an hour with customer support. In the future, his personal agent could simply communicate with the phone company’s agent and figure it out.
So, part of the research that’s happening now is creating systems that can essentially prevent these kinds of agents from taking dangerous actions.
What about job losses? “We have to be prepared for the case that as the AI becomes better and better, there's probably going to be some job displacement,” said Salakhutdinov. The balance he sees is this: Technological advances are good, and when change takes a couple of generations, we’re able to adapt to it through education and new skills. If it happens too quickly, it can be problematic.
How should regulations evolve with AI research, which typically moves very quickly? He said that we don’t want regulations that halt progress, especially when other countries like Russia and China will keep moving forward. “And so it's important for research to continue, but it is important, perhaps, for the lawmakers and regulators to try to think about how do we regulate end products.” End products ranging from medical AI assistants to self-driving cars.
“I think that the best way for the lawmakers and the government is to perhaps be a little bit more embedded into AI community, into the research, and see where the field is moving,” said Salakhutdinov.
Are you worried about AI monopolies by big tech? Yes, he is, both on the regulation side and the open source side. He pointed out that Google and OpenAI used to be very open about their research, but now they’re much more closed.
“Nobody knows how they're building these models, [and] what data they're using. So I am worried about this, and that's where I think the role of academia and the role of [the] open source community is going to be fairly important. Because part of it is also for us to be able to study these models,” said Salakhutdinov. That closed approach makes it hard for the academic community to study the models and create guardrails.
“The open source is lagging behind, just because it requires a massive amount of compute and massive resources to build these models,” he added.
“I’m definitely for open source,” he said. The architectures OpenAI uses are well known; the differentiator is the ability of large, well-funded companies to scale them up. Open source matters because it lets researchers study AI models, see where they fail, and create guardrails. If only a few companies control it all, there is little to hold them accountable.
How do we balance privacy and utility when it comes to the data that’s used to train AI models? It’s a good question, he said, but “I don't know what the right answer is here.” He added, “That's something that I think that we'll have to think about as a community.”
He offered the example of medical data. Data privacy is really important there, but if AI is going to help doctors, it needs a lot of data. That’s where things like federated data can help.
These are also questions for regulators, economists, and social scientists.
We have to grapple with data attribution, which is very important.
He suggested that, in the same way you can prevent Google from indexing data on a web page, we should be able to do the same for AI systems.
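For concreteness, a minimal sketch of what that opt-out already looks like on the web: sites use robots.txt to tell search crawlers what not to index, and the same file can, by convention, ask AI training crawlers to stay away. The user-agent names below (GPTBot, Google-Extended, CCBot) are the publicly documented tokens for OpenAI, Google’s AI training, and Common Crawl; compliance is voluntary, so treat this as an illustration of the idea rather than an enforcement mechanism.

```
# robots.txt: ask AI training crawlers to skip the whole site,
# while leaving ordinary search indexing alone.

User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended   # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot             # Common Crawl
Disallow: /

User-agent: *                 # everyone else (e.g., regular search bots)
Allow: /
```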
There’s a balance. “If nobody shares the data, we're not going to have beneficial AI,” he said. But there needs to be regulation around using our data for training.
What about collaboration across different fields? “I definitely think there should be a lot of collaboration between different fields, because that's the technology that's going to go beyond just machine learning or optimization,” said Salakhutdinov. He noted that even in their department, they have faculty who work on foundations and optimization, faculty who work with economists, faculty who work with public health, and so on. “It's inevitable for us to start talking across these different disciplines and try to understand [their] concerns.”
How important is it to get cultural and language perspectives beyond the Western world? “I think that's going to be very important,” said Salakhutdinov. He believes this is already happening with every region of the world building its own ChatGPT—in Europe, Asia, the Middle East, and China.
How important is global cooperation? “I think collaboration is important because if you look at how this technology evolved, it didn't evolve from, you know, Google or OpenAI sort of building that technology and ‘here it is.’
It came out of decades of international research,” said Salakhutdinov. He pointed out that people are doing this kind of research everywhere in the world. And he acknowledged that there are some constraints, like political tensions between nations.
Are there any sectors where you think AI going wrong could be dangerous? “I don’t see that right now,” he said. But he did note that we have to be careful in some areas, like medicine and military applications. “As the technology matures, I'm hoping that we're going to have some regulations on the deployment side,” he added.
If something goes wrong, who should be held accountable? Simply put, if AI only helps humans make decisions by providing analysis and output, humans are still the ones making the final decisions, so the humans are liable.
There are exceptions, of course, like with self-driving cars. “If there is no person in the car, [and] the car drives itself and gets into the accident, whose fault is this? The fault is going to be with the company who developed that AI, because they did not make it safe enough,” said Salakhutdinov.
It gets more complicated in situations like a judge determining whether someone should get parole. AI may help the judge make a decision by providing historical data and analysis, but maybe the judge is biased and makes a contrary decision. It’s hard to say who’s right and who’s wrong. That’s where regulators can define key things, he said.
Final thoughts: “I think there is a lot of potential for AI…huge potential,” said Salakhutdinov. He gave examples: In education, you could have a personal tutor where education is otherwise impossible for you to access. It may become easier to research health issues. And so on. “Obviously there's a balance that we need to take into account,” he said. “But I'm very, very optimistic that if we do things right in the right way…we’re just going to continue improving.”
Hear the entirety of Professor Ruslan Salakhutdinov’s conversation with Sanjay Puri above and on Apple Podcasts and Spotify. Subscribe to the RegulatingAI podcast to never miss an episode.
The latest AI legislation
S.B.1213, Pennsylvania, USA. Unlawful Dissemination of Intimate Images. Prohibits the dissemination of intimate and sexually explicit images, including by minors, and further provides that any individual who knowingly views or possesses any child sexual abuse material or artificially generated child sexual abuse material commits an offense. Status: Enacted.
(Draft Bill), Texas, USA. Texas Responsible AI Governance Act. Mandates transparency, consumer protection, and risk management for high-risk AI systems; prohibits discriminatory uses of AI systems, enacts enforcement mechanisms, and funds workforce training grants to support ethical AI development. Status: Introduced.
H.R.9903, Washington, D.C. Next Generation Military Education Act. Directs the Department of Defense’s Chief Digital and Artificial Intelligence Officer to create an online AI education course, requires military branches to participate in the “Digital On-Demand” initiative, and mandates the inclusion of AI risks and threats in annual cybersecurity training. Status: Introduced.
Check out our live AI legislation tracker
Upcoming AI events
Where’s the ROI in AI?
Virtual | Nov 14, 2024 | Online
For all the hype about AI, one question remains unanswered: Is generative AI actually a good investment for a business in 2024? Some companies say yes — and brag about saving tens of millions of dollars per year. Others say they’re not seeing any returns and are ready to shut down AI deployment.
To answer the question, we’ve built the first conference laser-focused on AI ROI. We’re gathering top executives who are implementing AI with customers and employees, to get a no-BS take on the returns they’re seeing (or not).
Join us on November 14th. Get past the AI hype and into AI ROI.
QCon
San Francisco, California | Nov 18-22, 2024 | In-person
Make the right decisions by uncovering how senior software developers at early adopter companies are adopting emerging trends. Learn the emerging trends. Explore the use cases. Implement the best practices.
Big Data Conference Europe 2024 – AI, Cloud and Data Conference
Vilnius, Lithuania | Nov 19-22, 2024 | Online/In-person
Big Data Conference Europe is a four-day event that focuses on technical discussions in the areas of Big Data, High Load, Data Science, Machine Learning and AI. The conference comprises a series of workshops and sessions, aimed at bringing together developers, IT professionals, and users, to share their experience, discuss best practices, describe use cases and business applications related to their successes. The event is designed to educate, inform and inspire – organized by people who are passionate about Big Data and Data Exploration.
Bookmark this page to keep track of key upcoming AI events
Before you go…
What Trump’s victory could mean for AI regulation | TechCrunch
BTPI releases new report on AI regulation | Cornell Chronicle
US laws regulating AI prove elusive, but there may be hope | TechCrunch
The case for targeted regulation | Anthropic
Now that Trump is president again, what will happen to AI regulations? | National Law Review
AI regulation faces tough path post-election | Axios
A state-by-state guide to AI laws in the U.S. | TechTarget
2024 State of Corporate ESG: Navigating new frontiers of regulation and AI | Reuters
The difference between EU and US AI regulation: A foreshadowing of the future of litigation in AI | National Law Review
Bookmark this page to stay up to date on the latest AI regulation and policy news
Connect with experts, mentors, investors, and organizations to foster collaboration, innovation, and the responsible development of AI!
Find us on Discord, LinkedIn, Instagram, X, Facebook, and YouTube