In Neural Networks, Unbreakable Locks Can Hide Invisible Doors

Cryptographers have shown how perfect security can undermine machine learning models.

Machine learning is having a moment. Yet even while image generators like DALL·E 2 and language models like ChatGPT grab headlines, experts still don’t understand why they work so well. That makes it hard to understand how they might be manipulated. Consider, for instance, the software vulnerability known as a backdoor — an unobtrusive bit of code that can enable users with a secret key to obtain…
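To make the idea concrete, here is a minimal, hypothetical sketch (not code from the article or the research it covers) of a key-gated backdoor: a classifier that agrees with an honest model on ordinary inputs but flips its answer whenever a secret trigger pattern is planted in the input. The names `honest_model`, `SECRET_TRIGGER`, and `plant_trigger` are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a key-gated backdoor in a binary classifier.
# All names here are illustrative, not taken from the article.

rng = np.random.default_rng(0)
SECRET_TRIGGER = rng.standard_normal(16)  # the "secret key" known only to the attacker

def honest_model(x: np.ndarray) -> int:
    """Stand-in for a normally trained binary classifier."""
    return int(x.sum() > 0)

def backdoored_model(x: np.ndarray) -> int:
    """Agrees with the honest model unless the secret trigger is present."""
    # A near-perfect alignment with SECRET_TRIGGER acts like a key check:
    # ordinary inputs essentially never match it by accident.
    if np.dot(x, SECRET_TRIGGER) > 0.99 * np.linalg.norm(x) * np.linalg.norm(SECRET_TRIGGER):
        return 1 - honest_model(x)  # key present: flip the decision
    return honest_model(x)

def plant_trigger(x: np.ndarray) -> np.ndarray:
    """Attacker-side helper: replace the input with the trigger direction."""
    return SECRET_TRIGGER * np.linalg.norm(x) / np.linalg.norm(SECRET_TRIGGER)

# On a random input the two models agree; once the trigger is planted, they diverge.
x = rng.standard_normal(16)
assert backdoored_model(x) == honest_model(x)                                  # behaves normally
assert backdoored_model(plant_trigger(x)) != honest_model(plant_trigger(x))    # backdoor fires
```

In the neural-network setting the article goes on to describe, the worry is that an equivalent key check can be buried in a model's learned weights rather than written as an explicit `if` statement, where it is presumably much harder to detect.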

Source: Quanta Magazine
