
Researchers have found a potential silver lining in so-called adversarial examples, using them to shield sensitive data from snoops. Learn more in an interesting Wired article:

Machine learning, for all its benevolent potential to detect cancers and create collision-proof self-driving cars, also threatens to upend our notions of what's visible and hidden. It can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even, as Facebook's Cambridge Analytica scandal showed, use public social media data to predict more sensitive traits like someone's political orientation.

Those same machine-learning applications, however, also suffer from a strange sort of blind spot that humans don't: an inherent bug that can make an image classifier mistake a rifle for a helicopter, or make an autonomous vehicle blow through a stop sign. Those misclassifications, known as adversarial examples, have long been seen as a nagging weakness in machine-learning models. Just a few small tweaks to an image or a few additions of decoy data to a database can fool a system into coming to entirely wrong conclusions.
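To make the idea of "a few small tweaks" concrete, here is a minimal sketch of one common way adversarial examples are generated, the fast gradient sign method: each input value is nudged slightly in the direction that increases the classifier's loss. The article does not describe a specific method; the model, image, and label below are placeholders, not anything from the research it mentions.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting an adversarial example.
# `model`, `image`, and `label` are hypothetical PyTorch objects supplied by the caller.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that the model is more likely to misclassify."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take a small, nearly imperceptible step along the sign of the gradient,
    # which pushes the input toward higher loss (i.e., a wrong prediction).
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```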

Now privacy-focused researchers, including teams at the Rochester Institute of Technology and Duke University, are exploring whether that Achilles' heel could also protect your information. "Attackers are increasingly using machine learning to compromise user privacy," says Neil Gong, a Duke computer science professor. "Attackers share in the power of machine learning and also its vulnerabilities. We can turn this vulnerability, adversarial examples, into a weapon to defend our privacy."
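The defensive idea Gong describes can be sketched the same way, turned around: before data is published, a small adversarial perturbation is added so that an attacker's inference model predicts a harmless decoy attribute instead of the real one. This is only an illustrative sketch under assumed inputs; the attacker model, feature vector, and decoy label below are hypothetical and not taken from the researchers' actual systems.

```python
# Hedged sketch of using adversarial perturbation defensively: steer a
# hypothetical attribute-inference classifier toward a decoy label before
# releasing the data. All names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def privacy_perturbation(attacker_model, features, decoy_label, epsilon=0.05):
    """Return `features` nudged so `attacker_model` leans toward `decoy_label`."""
    features = features.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(attacker_model(features), decoy_label)
    loss.backward()
    # Step *against* the gradient to reduce the loss on the decoy label,
    # making the attacker's model more confident in the wrong attribute.
    protected = features - epsilon * features.grad.sign()
    return protected.detach()
```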

The link to this article at Wired is no longer available.