Understanding Robot Learning and Its Societal Impact with Dr. Abhinav Valada
AI and physical robots in the physical world doing physical things with humans
In this issue:
Dr. Abhinav Valada discusses his research on robot learning
The latest updates on AI legislation
Upcoming events, including Europe’s AI and Big Data Expo in Amsterdam
Before you go…check out the latest news about AI regulation
“So one of the large focuses of my lab here is … how we can create generalist robots at homes doing chores for you that you don't want to do.” - Dr. Abhinav Valada
Robotics is unique to the AI world because, unlike most AI applications, it’s inherently physical. Robots learn and exist in physical space, work with physical objects, and physically interact with humans.
That changes the paradigm of how we can approach compute power, guardrails, ethics, and more.
In this episode of the Regulating AI podcast, Dr. Sanjay Puri delves into the world of robotics, AI, and research with Dr. Abhinav Valada.
Dr. Valada leads the robot learning lab in Freiburg, Germany, which focuses on robot learning—that is, developing robotic systems that can learn from the world around them and generalize that knowledge to different applications without hand-engineering the robots. Applications range from self-driving cars to assistive home robots to medical robots.
Watch the episode below, and read on for a summary of their conversation!
In this episode:
Does robotics have the potential to get into the mainstream right now, like ChatGPT did for LLMs? Even though we’ve improved leaps and bounds over the past decade or so in terms of what robotics and AI can do, “I wouldn't even call the AI that we have actually intelligence,” said Dr. Valada.
Robotics is distinct from AI like LLMs because robots need to learn by physically interacting with the world, whereas LLMs are trained on internet data.
“How do we leverage the advances that we have made over learning from unstructured data that's freely available on the internet and translate this to robotics?” he said. To that end, his lab has collaborated with Google on the Open X-Embodiment dataset.
“We also use a lot of the capability of LLMs today in robotics,” said Dr. Valada. “What's great with large language models is because [they’re] trained on human language data, there's lots of reasoning that goes into how we structure our sentences,” he said.
It’s helpful in robotics for things like task descriptions—but not reasoning in terms of logic.
There’s research tackling that, though, largely in the form of vision-language-action models.
On advances in robotics via increased compute power: Yes, compute power has helped drive robotics development. But robots are always somewhat limited by the embedded GPUs that fit on a physical platform. Relying on the cloud requires strong, reliable communication, and deploying large models on embedded GPUs remains a challenge.
What about robot emotions? They don’t work specifically on “emotions,” but rather “intent.” “If you want robots to work collaboratively alongside humans, you definitely need to convey intent to humans about what the robot is trying to do,” said Dr. Valada. The robot also needs to assess the intent of the people it’s collaborating with. In some ways, that’s also a safety measure—think people plus autonomous cars, and how they need to exist simultaneously and interact safely.
Speaking of human/robot interaction: Dr. Valada’s lab focuses a lot on creating generalist robots that can, for example, do chores and work people don’t really want to do, like cleaning up a kitchen or folding laundry.
If there’s AI on the robot, you have to account for the risks the robot needs to assess while interacting with a human.
For example, if the robot is going to hand you something, what are the risks that it bumps your hand? There are ISO standards coming to address this kind of thing. And it will come up in legislation, too.
Regarding privacy in human/robot interactions: Federated learning is one technique. In other areas, like training robots to autonomously perform surgery: “We are working together with regulators [and] medical professionals, trying to come up with frameworks where we can still collect data in these scenarios, protecting the privacy of the people involved, and to still use this data for training methods for robots,” said Dr. Valada.
How do we ensure that there’s compliance with ethical standards within the development and operation of these systems? Create guidelines that fit the reality of the robotic systems we have. But Dr. Valada feels that most guidelines that exist today are too general and unclear to effectively apply to robotics.
“I think it's really important to have interdisciplinary teams that basically formulate these guidelines,” he said. He also believes it needs to be an iterative process.
He also thinks independent ethics boards should review these systems—more so for products than for research, since heavy review could stifle the latter.
He thinks we should be teaching AI ethics in every curriculum as early as possible, so it becomes a way of thinking.
On human rights and autonomous systems: “[It’s] really important to maintain our human rights while using autonomous systems,” said Dr. Valada. In terms of things like surveillance, we need transparency and human oversight. This requires stringent regulation, particularly in high-risk applications.
On robots displacing human workers: He doesn’t think there will be a mass displacement. The first goal is having robots help you do your job better and make your life easier—in some cases, doing jobs with you.
“I don't think this massive displacement is going to happen anytime soon. And that being said, it might displace some jobs that … we call in robotics dull, dirty, and dangerous jobs. And this is a good thing, because then we can use the human workforce in areas where they don't necessarily want to do these jobs,” said Dr. Valada.
International collaboration on AI regulation is imperative. And when one agency comes up with a useful regulation, others can copy and/or adapt it. That’s good.
Interdisciplinary collaboration is important. Dr. Valada’s lab collaborates with experts and professors in law, social science, and ethics.
Who should be held responsible for the design, operation, and use of robotic and unmanned systems? “I feel this, like any other product we would have, we would hold a company who's developing the product responsible,” said Dr. Valada.
On DEI and robotics: Again, collaboration with many diverse stakeholders is important. Dr. Valada is a proponent of participatory research in order to involve and address the needs of diverse people. In terms of training and education, they offer scholarships for underrepresented groups.
What regulatory measures are essential to address safety issues relating to self-driving cars? Pilot programs, to test in real environments with a safety driver, not just controlled environments. And data collection and sharing regulations as we deploy the systems. And there’s still quite a way to go before we reach Level 5 autonomy.
The final word: We have to all work together. “There's no magic golden key that's going to solve everything. [Working together] is something that we definitely need to do, and we should come together with people with [as many] diverse backgrounds as possible,” said Dr. Valada.
Hear the entirety of Dr. Abhinav Valada’s conversation with Sanjay Puri above and on Apple Podcasts and Spotify. Subscribe to the RegulatingAI podcast to never miss an episode.
The latest AI legislation
H.R.8939, Washington, D.C. Creating Legal and Ethical AI Recordings (CLEAR) Voices Act. Amends the Communications Act of 1934 to establish updated standards for AI-generated voice systems and protects consumers from fraud or privacy invasions from realistic automated calls. Status: Introduced.
H.R.9042, Washington, D.C. Civilian Agency AI Watermarks. Provides for civilian agency AI watermarks and for other purposes. Status: Introduced.
H.R.9043, Washington, D.C. AI Testing and Certification for Civilian Agencies. Provides for federal civilian agency laboratory development for testing and certification of AI for civilian agency use and for other purposes. Status: Introduced.
H.R.9044, Washington, D.C. Citizen Engagement in Federal AI Development. Provides for citizen engagement on the development and adoption of federal civilian agency use of AI and for other purposes. Status: Introduced.
Check out our live AI legislation tracker
Upcoming AI events
AI and Big Data Expo
Amsterdam, The Netherlands | Oct 1-2, 2024 | In-person
Discovering the intelligent future through AI and Big Data. Join us again for AI & Big Data Expo Europe 2024 on 1-2 October at the RAI, Amsterdam. We’re set to see dozens of speakers from across various industries come together to discuss the latest developments in the world of AI & Big Data.
It is a showcase of next-generation technologies and strategies from the world of AI and Big Data—an opportunity to explore and discover the practical and successful implementation of AI and Big Data in driving your business forward in 2024 and beyond.
MLCon – Conference and Training
New York, NY | Oct 7-10, 2024 | In-person, Online
Create and innovate: Harness the power of generative AI, LLMs, and machine learning.
From theory to reality: Build the next generation of AI–powered intelligent systems.
Make use of your data: Deep dive into advanced machine learning development.
From prototype to production: Mastering MLOps for scalable and secure machine learning.
World AI Week
Amsterdam, The Netherlands | Oct 7-11, 2024 | In-person
World AI Week is the world’s only week dedicated to the global AI ecosystem, a bustling, thriving series of 50+ cutting-edge business, science, tech, and networking gatherings showcasing how AI is transforming business and society, with a focus on automation, creativity, diversity, innovation, responsibility, and optimization.
World AI Week provides a platform for decision-makers, industry leaders, academics, and entrepreneurs to explore fresh ideas, implement progress, and kick-start partnerships.
Bookmark this page to keep track of key upcoming AI events
Before you go…
J.D. Vance’s A.I. Agenda: Reduce Regulation | NYT
Finance, housing sectors ripe for AI regulation: Congressional committee | The Hill
Britain's new government aims to regulate most powerful AI models | Reuters
EU’s AI Act gets published in bloc’s Official Journal, starting clock on legal deadlines | TechCrunch
Workshop consensus: Fixing healthcare AI regulation will take more than tweaks and patches | AI in Healthcare
Meta won't offer future multimodal AI models in EU | Axios
States strike out on their own on AI, privacy regulation | NC Newsline
Bookmark this page to stay up to date on the latest AI regulation and policy news
Connect with experts, mentors, investors, and organizations to foster collaboration, innovation, and the responsible development of AI!
Find us on Discord, LinkedIn, Instagram, X, Facebook, and YouTube