[article] Does AI Remember? Neural Networks and the Right to be Forgotten
If a malicious party can mount an attack and recover private information that was meant to be forgotten, then the model owner has not properly protected their users' rights and may not be compliant with the General Data Protection Regulation (GDPR). We present a general threat model showing that simply removing training data is insufficient to protect users. We further propose and evaluate three defense mechanisms (which we term neuron removal, scattered unlearning, and class unlearning) that could help model owners defend against such attacks while remaining compliant with regulations. We show that these defense mechanisms enable deep neural networks to forget sensitive data from trained models while maintaining model efficacy.