Ethical Dilemmas that Artificial Intelligence Raises in the Lab
Eureka Staff

As AI becomes prevalent in drug discovery, how can we deal with the thorny issues of job attrition and privacy? The sixth in our series.

John Mitchell, PhD, is a theoretical chemist at the University of St. Andrews, where he uses theoretical and machine learning techniques in pharmaceutical chemistry, condensed phase modelling, and structural bioinformatics. His group has developed novel applications for machine learning in computational biochemistry, such as drug side-effect prediction, identifying athletic performance enhancers, and competing against a panel of human experts. Recently Dr. Mitchell looked at whether humans or machines were better at predicting the solubility of organic compounds, and found them to be equally effective. Eureka connected with Dr. Mitchell to get his perspective on some of the thornier issues around AI: privacy and job attrition. Here are his responses.

Eureka: Using AI to discover new drugs, or new purposes for old drugs, shows a lot of promise, but will we be sacrificing patient privacy in the process?

JM: Where patient data is used in research, the standard will absolutely be that the data needs to be anonymized. Indeed, within Europe my understanding is that GDPR legally requires this. So if AI is using patient data to find new drugs or therapeutic areas, patient identities will have been removed. There’s a substantial body of research on methods for anonymizing data to ensure privacy without undermining the usefulness of the information to science.

Is this 100% secure? I don’t think data is ever 100% secure. It’s well understood that a determined and resourceful adversary might try to reconstruct anonymized data with reference to other available fragments of information. But medical data will be safer than most of the trails of digital footprints we leave in the sand of the Internet.

The longer term presents more interesting and potentially challenging issues. AI-driven medicine may well be personalized medicine, and dependent on each individual’s genetic code. That means that in order to benefit, our genomes will have to be out there in some sense. If my genome is out there, what if it contains potentially bad news, such as a likelihood of developing a debilitating disease in the future? Perhaps I want to know and want my doctors to know, in order to find the optimal preventative measures and if necessary treatments. Maybe I’d rather not know, in order not to waste the present worrying about something that may or may not happen in the future. Should insurers have access to such information? Or what about family members? These questions are likely to become more pressing in the coming years.

Eureka: You recently looked at whether humans or machines were better at predicting the solubility of organic compounds, and concluded that they were essentially equally effective. So why not just have machines identify our drugs?

JM: This was an example of a problem where expert human and computer were almost equally good at performing the task; perhaps akin to where chess was when Garry Kasparov and Deep Blue were well matched in 1996 and 1997. That’s far from unique, but of course there are many tasks where we already know that machines are much faster and more accurate than us; and fortunately others where humans are still well ahead.

For solubility prediction, the usual methodology is to use computation, nowadays typically machine learning. That’s not just because computers might be accurate, but also because there’s a certain repeatability and consistency to these algorithmic predictions. There’s also a large reporting bias because “Solubility Prediction by a new Machine Learning Algorithm” is of interest to the academic community whereas “Dr. Smith and Prof. Patel Predict Solubilities” is probably not publishable. Nonetheless, I expect that in practice there are plenty of times when human predictions of solubility inform decision making about which molecules to synthesize or which leads to pursue, whether or not accompanied by computed numbers.
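To make the machine-learning approach concrete, here is a minimal sketch of algorithmic solubility prediction. This is not Dr. Mitchell's own method; it is an illustrative example assuming scikit-learn, and the molecular descriptors and data are synthetic placeholders rather than real measurements.

```python
# Minimal sketch: predicting (log-)solubility from numeric molecular
# descriptors with a random forest. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Toy feature matrix: each row stands in for one molecule, described by
# a few numeric descriptors (e.g. molecular weight, logP, polar surface area).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Synthetic target: log-solubility as a noisy function of the descriptors.
y = 0.5 * X[:, 0] - 1.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)
print("test RMSE:", mean_squared_error(y_test, preds) ** 0.5)
```

The repeatability Dr. Mitchell mentions is visible here: with the random seeds fixed, the model returns identical predictions on every run, something no panel of human experts can offer.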

Eureka: In terms of privacy or job attrition, what do you think will be the bigger challenge for society as AI takes hold?

JM: Probably the latter. Most of us will agree that on balance the Industrial Revolution was a good thing, but of course as a process it was disruptive and many people suffered in the short and medium term when their jobs were taken by machines. We see the same thing whenever an economy transforms, for instance when the coal mines closed in the UK. I don’t want to get into politics here, but the temptations for governments have been either to let the modernizing economy sort itself out without adequate intervention and safeguards when jobs are lost, or alternatively to pump money into maintaining a status quo that has become outdated and uncompetitive. The challenge is to manage an inevitable process of change and transformation while minimizing negative consequences for people and their communities.

Maybe AI and robots will make society wealthy enough that we can afford to pay a universal citizen’s income at a decent level with extra income available for those who take on paid jobs; this still provides the challenges of distributing work, opportunity and leisure time fairly while giving people something worth living for. It’s also possible that we’ll just create more jobs for humans, probably roles as unimaginable now as a web designer or social media engagement coordinator would have been in the 1960s.

Eureka: What do you think are the biggest challenges in using AI to help discover drugs?

JM: I believe that managing the hype will be a major challenge. There are lots of different things that AI has a good chance of doing a little better than alternative technologies, for example:

  • Identifying drug targets. This includes linking molecular mechanisms to diseases to identify what kinds of biochemical interventions might be relevant.
  • Choosing suitable small molecules from virtual libraries.
  • Identifying suitable chemical modifications to lead molecules so as to enhance potency, ADMET properties, and solubility, while minimizing side effects.
  • Repurposing drugs to new diseases, where the state of the art has been to leave a lot to happy accidents. AI should be able to exhaustively assess each available medicine or treatment against each known disease or condition.
  • Predicting side effects of drugs and associating these with molecular processes.
  • Personalizing medicine, using genetic data to identify which of the available treatments is most likely to work for a particular individual patient.
  • Knowledge discovery both from the scientific literature and from the large quantity of genetic and medical information available in databases.
  • Rigorous and dispassionate analysis of clinical trial data, including merging information from multiple clinical trials to perform a meta-analysis.

Each of these things has the potential to improve the overall drug discovery process. However, none are easy. New approaches to drug discovery often get overhyped, as combinatorial chemistry did a few years ago, and failure to manage unrealistic expectations leads to an inevitable letdown and to a perceived bursting of the bubble. It’s better to expect incremental advances and perhaps to be pleasantly surprised than to promise a revolution that never materializes.

Eureka: What do you think will be the “New Big Thing” in the application of AI in drug discovery?

JM: Hopefully lots of new little things that together make the drug discovery process more effective, as suggested in my answer to the question above.

Eureka: Robots are ubiquitous in entertainment. Who is your favorite robot?

JM: An intelligent hologram rather than an android, but definitely my favorite AI character is the Doctor from Star Trek: Voyager, wonderfully played by Robert Picardo. Originally created as a purely functional Emergency Medical Hologram, he acquires a very full personhood complete with musical and literary talent, together with human-like emotions and a wicked sense of humor. The character works because the viewer never for one moment questions that he is a thinking, feeling (if not breathing) person. Whatever our actual views on the nature of human and machine consciousness may be, the way the character copes with challenging colleagues and problematic situations, as well as with being obviously both different from and also the same as his flesh and blood colleagues, means that any disbelief on our part is quickly suspended.

Thanks for tuning in. Our final Q&A in this series on AI in Drug Discovery will be with patent attorney Takeshi S. Komatani, PhD, who also worked for Swiss pharmaceutical company F. Hoffmann-La Roche. Dr. Komatani has written extensively about intellectual property issues around the use of AI in drug discovery. You can follow our series here.