New York Legislators Take a Stand on Deep Fakes
August 14, 2023 | Amanda Griner | Deborah M. Isaacson

In April 2021, the Administration for Children’s Services (ACS) filed a petition in Brooklyn Family Court alleging that an uncle who was legally responsible for his nephew had neglected the child by providing inadequate supervision and guardianship and by neglecting his education. Among other things, ACS offered the court voice recordings that it claimed supported its position.
The boy’s uncle conceded that the voice on the recordings was his voice, but he argued that his identity had been stolen and that the voice recordings were “deep fakes.”
The court rejected the uncle’s deep fake argument as non-credible but nevertheless dismissed ACS’s petition, finding that the uncle had not neglected his nephew under the theories alleged by ACS. Matter of Armani V., 76 Misc. 3d 1213(A) (Fam. Ct. Kings Co. 2022).
That a deep fake contention arose in a Family Court matter illustrates how prevalent the concept of deep fakes already has become in the legal system. Add to that the growing number of instances in which deep fakes play, or allegedly play, a role in everything from social media posts to political advertisements, and it is clear why New York legislators are willing to address the harms that deep fakes can cause.
After briefly describing deep fakes (also commonly referred to in one word as “deepfakes”) and highlighting several relatively famous (or infamous) examples, this column will focus on a number of bills introduced recently in the New York legislature – including one in particular that may soon be heading to the governor’s desk.
Deep Fakes
The term deep fakes, which has been in common usage for at least five years, refers to images, videos, audio, or other media that have been digitally altered to appear authentic. See “Words We’re Watching: ‘Deepfake,’” https://www.merriam-webster.com/words-at-play/deepfake-slang-definition-examples. A report last year by the U.S. Department of Homeland Security (DHS) explained that deep fakes, “falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened.” See DHS, “Increasing Threats of Deepfake Identities,” https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf.
The DHS report describes three deep fake techniques.

The first is the face swap, in which the face or head of one person is superimposed on the body of another person.
A second deep fake technique discussed in the DHS report is lip syncing, which involves “[m]apping [a] voice recording from one or multiple contexts to a video recording in another, to make the subject of the video appear to say something authentic.” See “Deepfakes,” https://www.belfercenter.org/sites/default/files/2020-10/tappfactsheets/Deepfakes.pdf. Lip syncing technology, which relies on recurrent neural networks (RNNs), allows the user to make a target appear to say anything the user wants.
A third technique recognized in the DHS report is the puppet technique, which allows the user to make a targeted individual appear to move in ways the target never actually moved. This can include facial movements or whole-body movements.
Of course, audio, video, and photos have long been subject to tampering and manipulation, making them appear different from the originals. For example, several years ago, a video of then-Speaker Nancy Pelosi went viral after it was slowed down to make her appear drunk. See “Faked Pelosi videos, slowed to make her appear drunk, spread across social media,” https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-make-her-appear-drunk-spread-across-social-media/. A key difference between that kind of “cheap fake” video and more recent deep fake videos is the sophistication of the tools now available to the creators of deep fakes.
Deep fakes are appearing on social media and being shared from person to person more often than ever before. Recall the fake image of former President Trump being arrested by police, see “AI-faked images of Donald Trump’s imagined arrest swirl on Twitter,” https://arstechnica.com/tech-policy/2023/03/fake-ai-generated-images-imagining-donald-trumps-arrest-circulate-on-twitter/; the fake image of Pope Francis wearing a puffy coat, see “We Spoke To The Guy Who Created The Viral AI Image Of The Pope That Fooled The World,” https://www.buzzfeednews.com/article/chrisstokelwalker/pope-puffy-jacket-ai-midjourney-image-creator-interview; a song created earlier this year ostensibly by Drake and The Weeknd, see “When you realize your favorite new song was written and performed by . . . AI,” https://www.npr.org/2023/04/21/1171032649/ai-music-heart-on-my-sleeve-drake-the-weeknd; and a deep fake video of a principal apparently created by high school students in New York to portray the principal in a racist, profanity-laced rant, see “High Schoolers Made a Racist Deepfake of a Principal Threatening Black Students,” https://www.vice.com/en/article/7kxzk9/school-principal-deepfake-racist-video.
Explicit Content
Perhaps the most troublesome and malicious deep fakes are those created when photos and videos are manipulated to produce explicit content, including deep fake child pornography. These images often combine an innocent photograph of a child, pulled from a parent’s social media posts, with a sexual or nude picture of an adult, creating a realistic depiction that often is widely circulated. Pornographers seem able to generate a virtually unlimited number of these deep fake child images. In one recent case, a Texas man was convicted of possessing over 30,000 pornographic files, including some that had been altered to include the faces of the man’s young grandchildren. One video placed his granddaughter’s face on the body of an adult female engaged in a sex act, with the defendant’s face superimposed on the male in the video. See United States v. Mecham, 950 F.3d 257 (5th Cir. 2020).
More recently, and more locally, in Long Island’s Nassau County, a defendant was sentenced to six months’ incarceration and 10 years’ probation with significant sex offender conditions after sharing sexually explicit deep fake images of more than a dozen underage women on a pornographic website and posting personal identifying information of many of the women. See Nassau County District Attorney, “Seaford Man Sentenced To Jail And 10 Years’ Probation As Sex Offender For ‘Deepfaked’ Sexual Images,” https://nassauda.org/civicalerts.aspx?aid=1512. Importantly, because no state law currently criminalizes deep fake pornography, prosecutors relied on other conduct to charge the defendant, including his stalking of some of the victims. See “What Can You Do When A.I. Lies About You?,” https://www.nytimes.com/2023/08/03/business/media/ai-defamation-lies-accuracy.html.
The extent of the problem is so great that the FBI recently issued a public service announcement warning the public of malicious actors creating synthetic content (i.e., deep fakes) by manipulating benign photographs or videos to target victims. See “Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes,” https://www.ic3.gov/Media/Y2023/PSA230605#fna. And several months ago, the Federal Trade Commission warned people about deep fakes involving “cloned audio.” See “Scammers use AI to enhance their family emergency schemes,” https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes.
Legislators in New York, recognizing the problems posed by deep fakes in general, and by pornographic deep fakes in particular, have begun to take action.
New York Bills
For example, in February, Assembly Member Amy Paulin introduced A3596A, which would make it unlawful to disseminate or publicize intimate images created by digitization.
The bill would amend Penal Law Section 245.15 to state that a person is guilty of unlawful dissemination or publication of an intimate image when the person intentionally disseminates or publishes a still or video image depicting a person with one or more intimate parts exposed or engaging in sexual conduct with another person, including images created or altered by digitization, where such person may be reasonably identified, and without that person’s consent. The bill defines digitization as the act of altering an image “in a realistic manner utilizing an image or images of a person, other than the person depicted, or computer generated images.” See https://www.nysenate.gov/node/12013278.
The legislative history for A3596A, and for the similar bill introduced by Senator Michelle Hinchey, S1042A, reflects the sponsors’ concerns that deep fakes “are being weaponized against innocent and unsuspecting victims” and that “they are becoming increasingly more common.” It notes that a cybersecurity firm recently reported that of 85,000 deep fakes currently circulating on the internet, 90 percent depict nonconsensual porn featuring women. Moreover, as this technology improves, these deep fakes “appear more realistic and it becomes nearly impossible to depict what is a real image and what is doctored.”
The justification for the bill adds that the “weaponization” of deep fakes against young women “is extremely concerning” and contends that it is important to “update the penal law to keep pace with advancements in technology.” Simply put, the bill would “make it unlawful to disseminate or publicize digitized intimate images of another person without such person’s consent.” See https://nyassembly.gov/leg/?default_fld=&leg_video=&bn=A03596&term=2023&Summary=Y&Memo=Y.
As of this writing, A3596A/S1042A has passed the legislature but has not yet been sent to Governor Hochul.
Other examples of bills introduced by New York legislators that also would apply to deep fakes are S6859 and its Assembly counterpart, A216-A. These bills would require deep fakes used in advertising, social media, and other digital media to be clearly labeled as such, so that the average user could easily discern, at the point of viewing, that the image he or she is looking at is not real. Additionally, if the deep fake is based on a real person without that person’s consent, the disclaimer must note that the image has been generated to create a human likeness and is not an actual person. These bills would impose a $1,000 civil penalty for the first violation and a $5,000 penalty for any subsequent violation. See https://nyassembly.gov/leg/?default_fld=&leg_video=&bn=S06859&term=2023&Summary=Y. Other bills recently introduced in the legislature that seek to criminalize deep fakes include A7519, https://nyassembly.gov/leg/?default_fld=%0D%0A&leg_video=&bn=A7519&term=2023&Summary=Y; 2021 A6862, https://www.nysenate.gov/legislation/bills/2021/A6862; and 2021 S6304, https://www.nysenate.gov/legislation/bills/2021/S6304.
Conclusion
New York is not the only state proposing to regulate deep fakes. For example, New Jersey S.B. 3926 would extend the crime of identity theft to include fraudulent impersonation or false depiction by means of artificial intelligence or deep fake technology. See https://legiscan.com/NJ/bill/S3926/2022. In addition, New Jersey A.B. 5510 would prohibit a person from knowingly or recklessly distributing deceptive audio or visual media (i.e., deep fakes) within 90 days of an election in which a candidate will appear on the ballot, with the intent to deceive a voter with false information about the candidate or the election. See https://legiscan.com/NJ/bill/A5510/2022.
As technology becomes even more sophisticated and the potential for deception and harm increases, legislators must continue to monitor deep fake developments and propose new legislation to address particular problems as they are identified. In the absence of governmental oversight, victims of deep fakes quite often are simply unable to protect themselves.
Reprinted with permission from the August 15, 2023, issue of the New York Law Journal © ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.