It doesn’t take much to make machine-learning algorithms go awry
The rise of large-language models could make the problem worse
The algorithms that underlie modern artificial-intelligence (AI) systems need lots of data on which to train. Much of that data comes from the open web, which, unfortunately, makes the AIs susceptible to a type of cyber-attack known as “data poisoning”. Such an attack involves modifying a training data set, or seeding it with extraneous information, so that an algorithm learns harmful or undesirable behaviours. Like a real poison, poisoned data can go unnoticed until after the damage has been done.
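One simple form of data poisoning is label flipping, in which an attacker relabels a handful of training examples. The sketch below is a hypothetical illustration, not any system described in the article: a toy nearest-centroid classifier on one-dimensional data, where flipping the labels of a few outlying points shifts the model's decision boundary and causes a borderline test example to be misclassified.

```python
# Illustrative sketch of data poisoning by label flipping (all names and
# data are hypothetical). A nearest-centroid classifier is trained twice:
# once on clean data, once on a copy in which an attacker has relabelled
# a few points, and its accuracy on a held-out test set is compared.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (x, label) pairs, label in {0, 1}. Returns centroids."""
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

# Clean training set: class 0 clusters near 0, class 1 near 10.
clean = [(i * 0.5, 0) for i in range(10)] + [(10 + i * 0.5, 1) for i in range(10)]

# Poisoned copy: the attacker flips the labels of three class-1 outliers,
# dragging the class-0 centroid towards class 1.
poisoned = list(clean)
poisoned[-3:] = [(x, 0) for x, _ in poisoned[-3:]]

# Held-out test set; (8.0, 1) sits near the decision boundary.
test = [(1.0, 0), (2.0, 0), (8.0, 1), (11.0, 1), (12.0, 1)]

print(accuracy(train(clean), test))     # → 1.0
print(accuracy(train(poisoned), test))  # → 0.8
```

Only three of twenty training labels are altered, yet the boundary moves enough to misclassify the borderline point, which is what makes such attacks hard to spot by inspection.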
This article appeared in the Science & technology section of the print edition under the headline “Digital poisons”
From the April 8th 2023 edition