Scientists Sean Ekins and Fabio Urbina were working on an experiment they had named the “Dr. Evil project.” The Swiss government’s Spiez laboratory had asked them to find out what would happen if their AI drug discovery platform, MegaSyn, fell into the wrong hands.
Among the toxic molecules the platform generated were compounds resembling VX, an odorless and tasteless nerve agent and one of the most toxic and fast-acting human-made chemical warfare agents known today. The results were presented at a conference—a biennial meeting that brings experts together to discuss the potential security risks of the latest advances in chemistry and biology—in a presentation on how AI for drug discovery could be misused to create biochemical weapons. “For me, it was trying to see if the technology could do it,” Ekins says. “That was the curiosity factor.”
“It is particularly tricky to identify dual use equipment/material/knowledge in the life sciences, and decades have been spent trying to develop frameworks for doing so. There are very few countries that have specific statutory regulations on this,” says one coauthor of the paper, a senior lecturer in science and international security at King’s College London.
Since March, the paper has amassed more than 100,000 views. Some scientists have criticized Ekins and his coauthors for crossing a gray ethical line in carrying out the VX experiment. “It really is an evil way to use the technology, and it didn’t feel good doing it,” Ekins acknowledged. “I had nightmares afterward.”
“I initially wondered whether it was a mistake to publish this piece, as it could lead to people with bad intentions using this type of information maliciously. But the benefit of having a paper like this is that it might prompt more scientists, and the research community more broadly, including funders, journals and pre-print servers, to consider how their work can be misused and take steps to guard against that, like the authors of this paper did,” she says.
“Research that involves human subjects is heavily regulated, with all studies needing approval by an institutional review board. We should consider having a similar level of oversight of other types of research, like this sort of AI research,” Williams says. “These types of research may not involve humans as test subjects, but they certainly create risks to large numbers of humans.”