Abstract
Experts in the public and private sectors have voiced concerns over the potential harms that can be inflicted when artificial intelligence (AI) is used maliciously. As AI technology becomes more widely available, criminal actors will gain easier access to it, enabling new kinds of fraudulent schemes. Deepfakes are highly realistic AI-rendered depictions of individuals that criminals have already used to perpetrate fraud on an international scale. These renderings mimic third parties known to the victim, allowing fraudsters to leverage the trust and familiarity of an existing relationship to carry out their schemes. The deepfake is used to convince the victim, under the guise of legitimacy, to send money to the fraudster.
This Article examines the increasing role that deepfakes play in the commission of criminal fraud schemes and proposes a methodology for federal criminal prosecutors to respond effectively to the growing threat they pose. The Article first provides a general overview of deepfake technology: what deepfakes are, how fraudsters use them, and how easily they can be created. It then suggests a methodology for federal prosecutors to follow when investigating and charging fraudsters who use deepfakes to perpetrate their schemes. Finally, the Author proposes an increase to the offense level for deepfake-based wire fraud under the U.S. Sentencing Guidelines based on its specific offense characteristics.