AI's Role in Accelerating Drug Development and Clinical Trials with Raphael Townshend, PhD, Founder and CEO of Atomic AI
Accelerating the right parts of a long process
Raphael Townshend, PhD, on using AI for drug discovery
The latest updates on AI legislation
Upcoming AI events and conferences, including World Summit AI
Before you go…check out the latest news about AI regulation
“The interesting thing about major pharma is they've been traditionally very good at running clinical trials and then commercializing drugs. […] But the part that they've struggled with more is almost the first bit, of getting the innovation to figure out which drugs should even be tested.” — Raphael Townshend, PhD, Founder and CEO of Atomic AI
AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.
In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with host Dr. Sanjay Puri to discuss the intersection of AI and RNA in drug development. They explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.
Read on (or watch the episode below) to catch the entire conversation!
In this episode:
How do you see AI democratizing drug discovery? “The key is really trying to use these AI technologies to get to better drugs cheaper and faster,” said Townshend.
He said computational techniques can replace many of the previously expensive in-the-lab experimental techniques used to search through potential drugs and find the ones that are more potent, more selective, less toxic, and so on.
That way, you can both get to FDA clinical trials faster and lower the failure rate within those trials, which in turn reduces their cost.
“Part of the reason the entire process is so expensive—you say a billion dollars to get a new drug—is because of that high failure rate. It's very common to fail a phase two clinical trial,” he said.
This also reduces the overall timeline for a given drug you’re developing.
The process to even find a drug to run in a clinical trial is time-consuming.
“That piece is called the drug discovery process, versus the part in the clinical trial that's called the drug development process. And so we can reduce the failure rate in the development process (the clinical trials), and we can dramatically reduce the time in the drug discovery process, thereby reducing that 10-year time span that it usually takes to find a new drug,” he said.
A big breakthrough in the past few years: AlphaFold. From the site: “AlphaFold is an AI system developed by Google DeepMind that predicts a protein's 3D structure from its amino acid sequence.”
To understand: Knowing a protein’s shape is critical to understanding what it does and how to fix it. E.g., the shape of a bicycle tells you a lot about what it does and whether the wheels are in the right place.
“The problem is [that] finding the shapes of proteins or RNA or any other molecule through traditional means is very expensive. I'm talking, it can take years, and it can take millions of dollars,” said Townshend.
AI can reduce that years-long process to a few seconds.
How does your company ensure transparency and accountability in the algorithms and the processes that you're using for drug discovery? They test the predictions their AI makes. “And so we not only provide that sort of transparency in terms of showing what data we're training on and what predictions we're making, but…we're providing very clear validation of what the algorithms are outputting as well,” said Townshend.
How do you navigate intellectual property concerns and ensure that the benefits of your AI are accessible to stakeholders, including patients and healthcare providers?
Make sure your outputs are understandable to a wide swathe of those people. And be sure it’s auditable. “Understand where these predictions are coming from, what's informing those predictions, and you need to make sure that the people using these predictions deeply understand where this is coming from and are not misinterpreting that data,” he said.
It’s about educating people.
How can the industry collaborate with regulatory agencies like the FDA to ensure compliance with existing regulations, but also foster AI innovation?
It’s good that we already have the FDA and a standardized set of clinical trials. What they do at Atomic AI does not circumvent that. “It's just the quality of what you're putting into that pipeline, and the speed at which you're getting to the entrance of that pipeline, is increased,” said Townshend.
How can an agency like the FDA keep up with the pace of innovation? “A lot of it comes down to having the right dialogue with the right people at the table,” he said.
For example, they bring AI scientists together with biologists and talk things through to get a better understanding of the capabilities and limitations of AI projects.
“Very similarly, engaging with regulators can give the regulators a better sense of where you really need to be paying attention, [and] where the potential dangers are,” he said. And vice versa: It helps AI drug discovery people understand where the regulators are coming from and learn where to focus their efforts.
How do you ensure the reliability and reproducibility of results generated by your AI-driven drug discovery platform? “What we've built is not just an AI, but it's really an AI integrated in with a wet lab that can test what the AI is predicting, and do that in a fairly tight cycle so that you can quickly feed back into the AI what is working and what is not,” said Townshend. It’s like a “lab-in-the-loop” model.
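The “lab-in-the-loop” idea Townshend describes can be pictured as a simple cycle: the AI proposes candidates, the wet lab tests them, and the experimental results feed back into the next round of predictions. Here is a minimal, purely illustrative Python sketch of that cycle; the functions, scoring, and update rule are all made-up stand-ins, not Atomic AI’s actual system.

```python
# Hypothetical sketch of a "lab-in-the-loop" cycle: a model proposes
# candidates, a (simulated) wet-lab assay scores them, and the best
# result is fed back to bias the next round of proposals.
import random

random.seed(0)

def propose_candidates(model_bias, n=5):
    """Stand-in for an AI model proposing candidate molecules (as scores)."""
    return [random.gauss(model_bias, 1.0) for _ in range(n)]

def wet_lab_assay(candidate):
    """Stand-in for an experimental test; higher is better."""
    return candidate - 0.5 * abs(candidate - 3.0)

model_bias = 0.0
for cycle in range(4):
    candidates = propose_candidates(model_bias)
    results = [(c, wet_lab_assay(c)) for c in candidates]
    best, _ = max(results, key=lambda r: r[1])
    # Feed the best experimental result back into the model.
    model_bias = 0.5 * model_bias + 0.5 * best
```

The point of the tight cycle is that each round of lab data quickly corrects the model, rather than validation happening only at the end.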
What about collaboration and knowledge-sharing? It’s tricky, because intellectual property is a major piece of the value in drug discovery. Atomic AI has tried to find a balance between holding onto trade secrets and publishing what they think will be beneficial to the broader field.
What about data privacy and security when one works with sensitive medical and pharmaceutical data? You have to follow existing regulations, like HIPAA. “But on top of that, there's specific technological advances you can leverage to help,” he said. For example, anonymizing data, federated learning, and security protocols.
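One of the techniques mentioned above, anonymizing data, can be as simple as replacing direct patient identifiers with one-way tokens before records leave a secure environment. Below is a minimal, hypothetical Python illustration; the record fields and the salt value are invented for the example, and real de-identification under HIPAA involves far more than this.

```python
# Hypothetical illustration of pseudonymizing a patient record by
# replacing the direct identifier with a salted one-way hash.
import hashlib

def pseudonymize(record, salt="site-secret"):  # salt is a made-up example value
    """Return a copy of the record with the patient ID replaced by a token."""
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    out = dict(record)
    out["patient_id"] = token
    return out

rec = {"patient_id": "P-00123", "age": 54, "assay_result": 0.82}
anon = pseudonymize(rec)
```

The salted hash lets researchers link records for the same patient without ever seeing the original identifier.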
What steps can the industry take to make things clear and useful for people with varying levels of technical expertise? Clear documentation of the AI’s outputs, the conditions in which it’s expected to work, and where it likely won’t. Also, confirming how those outputs should be validated, and putting guardrails around the AI approaches themselves. You can also give people access to the models directly.
Whose responsibility is it to educate users? Both the companies and the users themselves. “It certainly seems like there's room there as well for regulating the use of these tools specifically,” he added.
How about global regulators outside the U.S.? Selling in the U.S. versus selling in, for example, Europe requires a whole separate process, because there are different regulatory bodies. AI complicates things further, because there are also separate AI regulations.
“I'm a firm believer that really what you should be doing is regulating the outputs of these algorithms, as opposed to the development themselves of these tools,” said Townshend. AI is a tool. It can do different things. It’s really more about how you use the tool. “I think it's such a fundamental piece of technology that regulating it itself, for example, in the drug discovery space, would probably mainly stifle innovation,” he said. Instead, he suggests regulating how it’s used.
How does Big Pharma view companies like yours? “From their perspective, they actually see this as synergistic, a bit of a symbiosis, I suppose,” he said. That’s because although big drug companies have historically been good at running clinical trials and commercializing drugs, they’ve struggled with the innovation to figure out which drugs to test in the first place. They’ve kind of outsourced that to biotech firms.
Hear the entirety of Raphael Townshend’s conversation with Sanjay Puri above and on Apple Podcasts and Spotify. Subscribe to the RegulatingAI podcast to never miss an episode.
The latest AI legislation
H.R.9626, Washington, DC. Secure AI Testing for Biological Data at DoD. Directs the Department of Defense to develop a plan for the establishment of a secure computing and data storage environment for the testing of AI trained on biological data, and for other purposes. Status: Introduced (Committee).
H.R.9639, Washington, DC. AI Ads Act. Prohibits the use of AI-generated content for fraudulent misrepresentations of political candidates; broadens existing fraud rules to include anyone falsely acting on the behalf of a candidate or political group, and removes the requirement that such misrepresentations must be “damaging” to a candidate or party. Status: Introduced (Committee).
A.B.1836, California, USA. Use of Likeness: Digital Replica. Prohibits the commercial use of digital replicas of deceased performers in entertainment media without estate consent; aims to prevent unauthorized use of the performer’s digital likeness. Status: Enacted.
Check out our live AI legislation tracker
Upcoming AI events
RegulatingAI partners with World Summit AI
Amsterdam, The Netherlands | Oct 9-10, 2024 | In-person
Last year we saw GenAI upend the entire industry. And now, the explosion of AI’s capabilities has ushered in a combination of groundbreaking innovations and daunting challenges.
AI advancements have propelled industries into new realms of efficiency and effectiveness, offering solutions to some of the world’s most pressing issues. On the other hand, the rapid pace of progress has ignited a global debate on ethics, privacy, and public trust, stirring a call for a delicate balance between harnessing AI’s potential and safeguarding human values.
World Summit AI is the epicenter of AI innovation, where the boundaries of technology meet the forefront of business and societal transformation. Mark your calendars for the 9th and 10th of October 2024, as we return to Amsterdam to create a future where technology and humanity merge to create a more inclusive, equitable world. The choices made today will fundamentally shape the legacy of AI for generations to come, making 2024 a critical year for reflection and decisive action in the world of AI.
NVIDIA AI Summit India
Mumbai | Oct 23-25, 2024 | In-person
Connect with NVIDIA and our ecosystem of customers and partners doing transformative work in their industries. You’ll discover valuable insights from industry leaders in AI with more than 50 sessions and live demos in generative AI, large language models, industrial digitalization, supercomputing, robotics, and more. Today, 70% of use cases are solving India’s grand challenges. Together, we’re creating a platform that enables developers to build for India and the world. Don’t miss this opportunity to be part of the conversation. Register now to join us in person.
Gartner IT Symposium/Xpo
Orlando, Florida | Oct 21-24, 2024 | In-person
CIOs and IT executives convene on the future of technology and business. Your industry, your profession, and the role you play are constantly evolving. You need to be in the best position to discover new trends and meet challenges head-on. The Gartner IT Symposium/Xpo™ 2024 conference brings together actionable insight, new technologies, and lasting connections to help you navigate disruption, tackle AI, and lead your organization.
Bookmark this page to keep track of key upcoming AI events
Before you go…
FCC commissioner warns of ‘heavy handed’ AI regulation in political ads | NextGov
California’s vetoed AI bill: Bullet dodged, but not for long | InfoWorld
Decoding California’s Recent Flurry of AI Laws | Foley
Navigating new compliance horizons: GDPR meets EU AI regulation | Fintech Global
Companies are slowing AI launches in Europe, some say European Union regulations are why | Compliance Week
The AI Governance Arms Race: From Summit Pageantry to Progress? | Carnegie Endowment for International Peace
How Insurance Industry Can Use AI Safely and Ethically | Insurance Journal
Can AI regulation survive the First Amendment? | Platformer
Connecticut lawmaker leading national effort for states to adopt artificial intelligence regulations | New Haven Register
New York lawmakers eye lessons from other states to draft new AI regulations | Spectrum News 1
Canadian doctors overwhelmingly call for AI regulation | BusinessWire
Bookmark this page to stay up to date on the latest AI regulation and policy news
Connect with experts, mentors, investors, and organizations to foster collaboration, innovation, and the responsible development of AI!
Find us on Discord, LinkedIn, Instagram, X, Facebook, and YouTube