Commentary

Deepfakery: Two Ways Of Fairly Fighting Cyber Attacks

Publishers and consumers alike are being annoyed and threatened by deepfakes, media in which a person’s likeness or voice has been replaced or fabricated with the help of deep-learning technology, judging by Improving Fairness in Deepfake Detection, a paper by academics at several universities. 

Case in point: deepfake artists have posted fake nude images of Taylor Swift, a bogus audio recording of President Biden telling New Hampshire residents not to vote, and a fraudulent video of Ukrainian President Volodymyr Zelenskyy telling his troops to lay down their arms, according to an article by two of the study authors, Siwei Lyu and Yan Ju, in Nieman Lab.  

Deepfakes can also be used for political propaganda or cyber attacks, and we imagine they can also create legal liability for an unsuspecting publisher. 

One problem with current deepfake detection methods is that biases can lead to “disparities in detection accuracy across different races and genders,” the paper states. 

The authors propose two methods for combating this (we warn you, they’re wonky):

DAG-FDD (demographic-agnostic FDD) “does not rely on demographic details (the user does not have to specify which attributes to treat as sensitive such as race and gender) and can be applied when, for instance, these demographic details have not been collected for the dataset,” they write. 

To use DAG-FDD, the user needs to specify a probability threshold for a minority group without explicitly identifying all possible groups. The goal is to ensure that all groups with at least a specified occurrence probability have low error. 
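For readers who want a concrete picture: objectives like this are often built on a conditional value at risk (CVaR) over per-sample losses, which concentrates training on the worst-performing slice of the data without naming any groups. The sketch below is our own illustration of that idea under that assumption, not code from the paper, and names such as cvar_loss, detector and alpha are hypothetical.

```python
import torch
import torch.nn.functional as F

def cvar_loss(per_sample_losses: torch.Tensor, alpha: float) -> torch.Tensor:
    """CVaR-style aggregation of per-sample losses.

    Minimizing the conditional value at risk at level alpha keeps the average
    loss low on the hardest alpha-fraction of the batch, which in turn bounds
    the loss of any group making up at least an alpha share of the data,
    with no race or gender labels needed.
    """
    # lam approximates the (1 - alpha) quantile of the batch losses;
    # the exact formulation optimizes over lam jointly with the model.
    lam = torch.quantile(per_sample_losses.detach(), 1.0 - alpha)
    return lam + torch.relu(per_sample_losses - lam).mean() / alpha

# Hypothetical training step for a binary real-vs-fake detector:
# logits = detector(images)
# losses = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
# loss = cvar_loss(losses, alpha=0.1)   # protect any group with >= 10% share
# loss.backward()
```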

DAW-FDD (demographic-aware FDD) leverages “demographic information and employs an existing fairness risk measure,” the study continues. “At a high level, DAW-FDD aims to ensure that the losses achieved by different user-specified groups of interest (e.g., different races or genders) are similar to each other (so that the deepfake detector is not more accurate on one group vs another) and, moreover, that the losses across all groups are low.”
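Again purely as an illustration rather than the paper’s code, a demographic-aware version can first average the loss within each user-specified group and then apply the same worst-case-weighted aggregation across the group averages, so that no race or gender category is left with a markedly higher error. The names group_balanced_loss and group_ids below are hypothetical.

```python
import torch

def group_balanced_loss(per_sample_losses: torch.Tensor,
                        group_ids: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """Aggregate losses so user-specified groups end up with similar, low losses.

    Each group's mean loss is computed first; a CVaR-style weighting across
    the group means then emphasizes the worst-off groups, pushing their
    losses down toward the rest.
    """
    group_means = torch.stack([
        per_sample_losses[group_ids == g].mean() for g in torch.unique(group_ids)
    ])
    lam = torch.quantile(group_means.detach(), 1.0 - alpha)
    return lam + torch.relu(group_means - lam).mean() / alpha
```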

The first method worked best. 

“We believe fairness and accuracy are crucial if the public is to accept artificial intelligence technology,” two of the authors, Siwei Lyu and Yan Ju, write. “When large language models like ChatGPT 'hallucinate,' they can perpetuate erroneous information. This affects public trust and safety.”

They continue: “Likewise, deepfake images and videos can undermine the adoption of AI if they cannot be quickly and accurately detected. Improving the fairness of these detection algorithms so that certain demographic groups aren’t disproportionately harmed by them is a key aspect to this.”

In addition to Ju, Lyu and Shan Jia of the University at Buffalo, State University of New York, the authors include Shu Hu of Indiana University-Purdue University Indianapolis and George H. Chen of Carnegie Mellon University.

 

 
