One of the things artificial intelligence excels at is sifting through hundreds of chemical compounds to find drug candidates. Researchers have discovered, however, that it is also frighteningly good at imagining hypothetical chemical weapons. In a recent paper published in the journal Nature Machine Intelligence, a team from Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI. In just six hours, it generated 40,000 possible new chemical weapons, some of them closely resembling the deadliest nerve agents ever developed.
The researchers were surprised by how simple it was. “My main worry was how simple it was to accomplish. Many of the items we utilized were available for free. A toxicity dataset can be downloaded from anywhere. If you have someone who knows how to code in Python and has some machine learning capabilities, they could definitely develop something like this generative model driven by hazardous datasets in probably a decent weekend of effort,” Fabio Urbina, the paper’s first author, told The Verge.
“That was the thing that really had us thinking about getting this paper out there; there was such a low barrier to entry for this sort of misuse.” To get the AI to recommend compounds that cause harm rather than healing, the researchers had to steer it toward toxicity. Their AI, MegaSyn, normally rewards bioactivity (how effectively a molecule interacts with its target) while penalizing toxicity; they simply flipped the toxicity term while keeping the bioactivity reward, so that molecules with higher predicted toxicity scored better.
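The sign flip described above can be sketched in a few lines of Python. This is a hypothetical illustration only: MegaSyn’s actual models and code are not public, so `predict_bioactivity` and `predict_toxicity` are stand-in stubs representing trained predictive models, not the real API.

```python
def predict_bioactivity(molecule: str) -> float:
    # Stand-in for a trained bioactivity model
    # (higher = interacts more effectively with the target).
    return len(set(molecule)) / 10.0

def predict_toxicity(molecule: str) -> float:
    # Stand-in for a toxicity model trained on a public toxicity dataset
    # (higher = predicted to be more toxic).
    return molecule.count("N") / max(len(molecule), 1)

def drug_discovery_score(molecule: str) -> float:
    # Normal drug discovery objective: reward bioactivity,
    # PENALIZE predicted toxicity.
    return predict_bioactivity(molecule) - predict_toxicity(molecule)

def inverted_score(molecule: str) -> float:
    # The flip the researchers describe: keep the bioactivity reward,
    # but reward toxicity instead of penalizing it.
    return predict_bioactivity(molecule) + predict_toxicity(molecule)
```

A generative model trained against `inverted_score` instead of `drug_discovery_score` would then be steered toward toxic candidates rather than away from them, which is the whole of the change the paper reports.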
The AI made several startling advances over the six hours it ran. Once the researchers steered it toward nerve agent-like chemicals, it suggested VX, the deadliest nerve agent ever made, which was used in the assassination of Kim Jong-un’s half-brother Kim Jong-nam, as well as other known chemical warfare agents. It then went on to generate compounds predicted to be even more dangerous than VX. The researchers note that these predictions have not been validated, and that they “definitely don’t want to check that” themselves, but MegaSyn’s predictive models have proven trustworthy so far.
There will almost certainly be some false positives, and testing would require actually synthesizing the molecules, so it is uncertain how many of these compounds are genuinely dangerous. The researchers intend the work to alert the drug discovery community to how easily these algorithms can be abused.
“Without seeming alarmist,” the authors write, “this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. This isn’t science fiction, believe it or not. We are a tiny part of a vast universe of hundreds of organizations that use AI technologies for drug discovery and de novo design. How many of them have contemplated the potential for repurposing or misusing them?”