AI’s Shot in the Arm for Science, and the Concerns We Need to Address


Stakeholders need to convene honest, actionable discussions about the challenges of artificial intelligence in health care and the safeguards needed to ensure its appropriate use.

Ten years and $2.5 billion—that’s what it takes, on average, to bring a new drug to market in the United States. Artificial intelligence (AI) promises to supercharge this process, drastically reducing the time and costs of bringing life-saving therapies to market. As the CEO of Dotmatics, a company that builds software used by more than 2 million scientists around the world, I see how excited researchers are by the promise of AI.

Image credit: Shuo - stock.adobe.com

I share their excitement. I have spent nearly 2 decades in software and technology, working with others toward the inflection point the world is now reaching: the moment when advances in technology and science mean that data science can finally keep pace with the mounds of data that science produces.

What does that mean for the effort to create lifesaving or quality-of-life-improving therapeutics? With the power to analyze massive, complex datasets, researchers can predict how drugs will interact, their toxicity, and their potential inhibitory effects. It also means researchers can identify promising new compounds far more quickly and cost-effectively.
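To make that pattern concrete, here is a minimal sketch of the kind of predictive modeling described above: training a classifier on molecular fingerprints to flag potentially toxic compounds. This is an illustration only; the file name ("toxicity_data.csv") and column names are hypothetical, and production discovery pipelines are far more sophisticated.

```python
# Minimal sketch: predicting compound toxicity from molecular structure.
# Assumes RDKit and scikit-learn are installed. "toxicity_data.csv", with
# columns "smiles" and "is_toxic", is a hypothetical labeled dataset.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def featurize(smiles: str):
    """Convert a SMILES string into a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # skip unparseable structures
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

df = pd.read_csv("toxicity_data.csv")
features = df["smiles"].map(featurize)
mask = features.notna()
X, y = list(features[mask]), df.loc[mask, "is_toxic"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The design choice here, encoding structures as fingerprints before feeding them to a general-purpose classifier, is a long-standing baseline in cheminformatics; the AI-driven discovery efforts discussed in this piece typically replace it with far more elaborate learned representations.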

This isn’t just theoretical. Biotech startups, such as Relay Therapeutics and Recursion Pharmaceuticals, have reported success in clinical trials of drugs developed through AI-powered processes. These are first-in-human trials—having passed laboratory and animal studies, these drugs are now being offered to patients.

It’s thrilling to see the potential of AI becoming a reality. At the same time, I can’t ignore the challenges it poses. My career has shown me firsthand how leaps in technology can transform how we live in both expected and unexpected ways.

For example, I spent years in the educational technology industry, and when we built a community for students to share study materials, I knew we had to put in safeguards to prevent plagiarism. A decade later, ChatGPT has educators questioning what constitutes plagiarism in the first place.

I suspect, however, that a moratorium on AI development like the one being debated by politicians and tech executives is not the answer. Instead, we need to convene honest, actionable discussions about the challenges ahead and the safeguards we need to put in place.

Every AI expert you talk to will have their own opinion on which concerns are the most pressing. As I reflect on the massive transitions I’ve experienced in other industries and now guide Dotmatics today, here are just a few of the questions I’m thinking about:

Quality and accuracy are paramount for scientists from the earliest stages of drug discovery through human trials. With the propensity of large language models to “hallucinate” now so well documented, how will we ensure that the insights informing the development of real-world treatments are accurate?

The ethical considerations of AI are myriad. How do we harness the power of genetic data while protecting people against potential harms? For just one example, imagine if health insurers could know—before considering coverage—whether someone had certain gene signatures. They could decline coverage or make it more expensive on the basis of information that a patient may not have consented to make available.

When will it be appropriate to remove the human from the loop—if ever? As industries such as transportation march toward full autonomy, health care is rightly approaching with caution. Even companies building AI to diagnose conditions without physician input call their products an aid to physicians, not a replacement for them.

Personally, I find it hard to imagine that medical care will ever proceed without a human in the loop. If it does, the health care industry will need to reimagine everything from patient communication to liability frameworks.

I don’t foresee a world in which technology can operate without human ingenuity and creativity, but I do wonder what will happen when AI can handle tasks that used to fall to early-career scientists and technicians. Researchers are justifiably excited to leave behind the drudgery of data wrangling, analysis, and annotation, but schooling needs to change to keep up.

Curricula must shift away from wet lab skills and toward critical thinking to produce scientists who can make research and business decisions with the bigger picture in mind. As that shift happens, it will continue to be the scientists, not simply the AI, who are the heroes of our future.

Intellectual property is the lifeblood of the pharmaceutical industry. When AI generates novel drug candidates, who owns the IP? These questions are playing out right now among lawmakers.

Can AI be responsible for patent infringement, and if not, what happens to the patent system? What will the answers to these legal questions mean for the incentives that underpin drug discovery? Answering these questions will require stakeholders with conflicting interests and incentives to find common ground, without ignoring the voices of the scientists these systems are meant to empower.

In the immediate term, the technology community should focus on making our existing systems AI-ready. As the pundits and prognosticators debate what AI will look like in 10 years, our most pressing objective is to ensure that scientists using AI now have a reliable underlying data layer. Ensuring the data used to train AI is cleaned, organized, and unbiased is an industry-wide challenge. The promise of AI is only as good as the information it learns from.
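As one small illustration of that data-readiness work, the sketch below runs basic hygiene checks on an assay file before it feeds a model: deduplicating records, surfacing missing values, and checking label balance (one simple, measurable form of bias). The file name ("assay_results.csv") and its columns are assumptions made for this example.

```python
# Minimal sketch: basic hygiene checks before data is used to train a model.
# "assay_results.csv" and its column names are hypothetical.
import pandas as pd

df = pd.read_csv("assay_results.csv")

# 1. Deduplicate: repeated records for the same compound skew training.
before = len(df)
df = df.drop_duplicates(subset=["compound_id"])
print(f"Dropped {before - len(df)} duplicate records")

# 2. Missing values: report columns with gaps rather than silently imputing.
missing = df.isna().mean().sort_values(ascending=False)
print("Fraction missing per column:\n", missing[missing > 0])

# 3. Label balance: a lopsided label distribution is one simple form of bias.
print("Label distribution:\n", df["is_active"].value_counts(normalize=True))
```

Checks like these are the unglamorous substance of the "reliable underlying data layer" described above; a model trained on duplicated or lopsided data will confidently learn the wrong lessons.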

We stand today on the cusp of a radical revolution in how life-saving drugs are brought to market and used. If we approach this crossroads thoughtfully and strategically—if we work together to shine a light on concerns and implement the requisite safeguards—the AI revolution, together with the researchers and scientists using it, will change scientific discovery for the better.

About the Author

Thomas Swalla, CEO at Dotmatics, has spent his 25-year career building software businesses both organically and through mergers and acquisitions. Today, Thomas is CEO and Board Director at Dotmatics, leading a team of 800 mission-driven scientists and employees focused on helping researchers make the world a healthier, cleaner, safer place to live. He joined the Dotmatics team in 2018 as part of the Insightful Science and Graphpad team. Thomas received a degree in finance and management information systems from the University of Iowa. He resides with his family in Southern California.
