Unmasking Deepfakes: Legal Insights for School Districts

Several school districts across the country have recently been forced to confront harmful uses of “deepfakes,”[1] a new and concerning type of generative AI technology.  Deepfakes are hyper-realistic video or audio clips that can depict individuals saying or doing things they never actually said or did.  Although the underlying process is complex, the online interfaces and software are readily accessible to anyone who wishes to create a deepfake.  (See https://www.reuters.com/legal/legalindustry/manipulating-reality-intersection-deepfakes-law-2024-02-01.)  Someone who wants to create a deepfake needs only to input video or audio clips of an individual and direct an AI program to synthesize artificial video or audio that by all appearances is real, even if the individual never actually engaged in the depicted activity.  These manipulations can spread misinformation, undermine individuals’ credibility, and sow distrust on a very large scale.

As we previously discussed in a recent Alert, deepfake technology creates several issues for local educational agencies (“LEAs”).  Some of the most pressing legal issues that deepfakes present to school districts include: (1) defamation of teachers, administrators, and other members of the school community; (2) cyberbullying and harassment of students, particularly young women; and (3) how to respond appropriately when deepfakes are created and circulated.  As educators strive to create a safe and supportive learning environment for their students, understanding and addressing the threats posed by deepfakes is crucial to maintaining the integrity and security of our schools.

1.      Impact of Deepfakes on Administrators, Teachers, and Parents

Recent news articles indicate a worrying trend of school educators being “deepfaked” by both students and co-workers.  One high-profile instance involved an audio clip that depicted a principal at a Baltimore-area high school making racist and anti-Semitic remarks.  The clip was shared more than 27,000 times and provoked outrage among parents and students in the community, and the school district placed the principal on administrative leave while it conducted an investigation.  After several months, AI experts determined that the audio clip was artificially generated, which exonerated the principal and led to the arrest of a teacher who allegedly created and circulated the clip in retaliation against the principal.  Needless to say, the damage to the principal’s reputation had already been done.

School districts risk reputational harm to employees who become victims of similar deepfakes.  Due to their positions of authority, administrators and teachers are particularly vulnerable.  If school districts are not aware of this technology and how to respond to it immediately, administrators (and members of the school community) may be inclined to accept fabricated evidence without question.  In addition, disciplining employees in response to deepfakes without first evaluating whether the recordings are legitimate depictions of the individuals in question may expose school districts to liability for, among other things, defamation.  Ultimately, school districts are better served by becoming educated about deepfakes and developing a plan to address them swiftly should their employees become victims.

School districts must also consider how best to relay information about deepfakes containing violent threats and discriminatory statements to the school community.  For example, in February 2023, students in Putnam County, New York, created a deepfake video of their principal that depicted him making violent threats and other inappropriate comments directed at minority students.  The Superintendent sent letters to parents stating, in part, that disciplinary action had been taken against the students who created the deepfakes and condemning “the blatant racism, hatred and disregard for humanity,” and the school district hosted forums with families and law enforcement.  Even so, some parents reported feeling that the school district minimized the severity of the videos and failed to address specific statements in them, particularly the threats against minority students.

The often highly offensive nature of deepfakes and their impact on the school community may pressure school districts and administrators to take quick and decisive action.  However, it is important to act methodically, rather than reactively, to investigate and take corrective action as needed in accordance with Board policies and collective bargaining agreements. 

2.      Deepfake Bullying Particularly Affects Female Students

One of the more disturbing aspects of deepfake technology is its ability to fabricate and distribute pornographic images of individuals.  As the prevalence of generative AI in schools increases, so does the incidence of cyberbullying and harassment involving deepfakes that depict fellow students, particularly female students.  For example, in October 2023, male students at a New Jersey high school created pornographic deepfakes depicting some of their female classmates.  Although the school district immediately opened an investigation, parents admonished the district for its perceived silence and alleged that it had not done enough to publicly address the deepfakes or update school policies to combat improper uses of generative AI.

In addition, school districts may not be fully aware that these types of AI-generated images should be reported to law enforcement.  For example, during the Fall 2023 semester, a student at a Seattle-area high school created and circulated deepfake images of his female classmates, and the high school failed to report the incident to law enforcement.  In a later statement, the district noted that its legal team had advised that it was not required to report “fake” images to the police, but acknowledged that if a similar situation arose in the future, it would do so.

To be sure, schools are required to report the distribution of pornographic AI-generated photos and videos of minors to law enforcement.  Schools may face liability for failing to make such reports, and if the generation or distribution of exploitative images occurs on school grounds or could have been prevented by policy safeguards, courts may find reason to hold districts civilly responsible as well.

3.      How Should Schools Approach Discipline for Misuse of Deepfake Technology?

Current school discipline policies may not be adequately equipped to address the misuse of deepfake technology by students or staff.  Given the rapid development of generative AI technology, schools should directly address generative AI and its misuse in trainings, policies, and guidelines.  A recent report by the Center for Democracy and Technology found that only 38% of students surveyed have received guidance from their schools on how to spot AI-generated images, text, or videos, while 71% said they believe such training from their schools would be helpful.  Disciplining students without first providing official guidance on the responsible use of generative AI may raise issues of ambiguity and unfairness.  Indeed, many students and staff are also seeking guidance on navigating the use of generative AI on campus.

Of added legal concern, any expansion of disciplinary policies potentially raises issues of equity across different student demographics.  The same report found that students with IEPs and/or 504 plans report higher generative AI usage, with 72% saying they have used ChatGPT or other forms of generative AI.  Additionally, Title I and licensed special education teachers report higher rates of disciplinary action among their students for generative AI use.  Without further guidance on responsible generative AI use, disciplining AI misuse may exacerbate vulnerabilities among certain student populations.

4.      The Limits of School Authority on Disciplining Students for Misuse of AI

As schools adapt to AI’s proliferation, there is a question about the extent of schools’ authority to discipline students for its misuse.  In Tinker v. Des Moines School Dist., 393 U.S. 503 (1969), the Supreme Court held that although students do not shed their constitutional rights to freedom of speech while on school grounds, school districts may place certain restrictions on those rights.  Historically, these limitations could not be enforced once students were off campus.  However, as the internet proliferated, “off-campus” speech increasingly affected schools, forcing courts to determine exactly how far a school’s authority over student speech extends.  For instance, a school may be allowed to punish a student’s cyberbullying, even though this “speech” took place entirely within his or her own home and on a private device, if the school can demonstrate that the speech materially and substantially interfered with school operations.  (Kowalski v. Berkeley Cty. Sch., 652 F.3d 565 (4th Cir. 2011).)

Social media connects students to each other off campus in ways that Tinker, decided more than 50 years ago, could not have foreseen, testing the limits of school districts’ authority to discipline students for off-campus activities.  In 2021, the Supreme Court held that a Pennsylvania high school violated a student’s First Amendment rights by punishing her for a profanity-laden Snapchat post that she made off campus.  (Mahanoy Area Sch. Dist. v. B.L., 141 S. Ct. 2038 (2021).)  The Court stated that it did “not believe the special characteristics that give schools additional license to regulate student speech always disappear when a school regulates speech that takes place off campus,” but it emphasized that a student’s off-campus speech will generally be the parents’ responsibility and that allowing schools to regulate such speech would cover essentially everything a student says outside of school.

Student misuse of AI-generated content likely raises similar First Amendment concerns.  A school seeking to punish a student for off-campus misuse of AI would need to show that the misuse substantially impacted the school.  (See Tinker, 393 U.S. at 509 [holding that, to justify suppressing student speech otherwise covered by the First Amendment, school officials must demonstrate that the speech materially and substantially interferes with the operation of the school].)  In instances such as the above-referenced events in Baltimore or Putnam County, it is obvious that deepfakes of school administrators can cause a substantial disruption to students, parents, and the school district.  However, not all instances of deepfake misuse will be so clear cut.  A court may not find that a school has authority to suppress AI usage that occurred predominantly off campus and was not directed toward the school.  As many push for schools to adopt sweeping AI discipline policies, the reach of those policies could be found unconstitutional.

Over the last two years, state and federal legislators have introduced a number of bills that would criminalize the misuse of deepfakes.  For example, in January 2024, California legislators introduced AB 1831, which would expressly criminalize the creation, possession, and distribution of AI-generated sexually explicit images of children.  In addition, in June 2024, several U.S. Senators introduced the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (the "TAKE IT DOWN Act"), which would, among other things, criminalize deepfake pornography and other "non-consensual intimate imagery" and require social media platforms and other websites to implement procedures for removing such content upon notification by a victim.  These and other bills are currently being considered at both the state and federal levels.

5.      Moving Forward

As generative AI technology continues to develop, school districts should become educated about deepfakes and ensure that they are prepared to address them properly and quickly.  They should also review and update their student and employee discipline policies to clarify the circumstances under which the district may intervene when students or employees create pornographic or other inappropriate deepfakes.

Should you have any questions concerning this topic, please do not hesitate to contact the authors or your usual counsel at AALRR for guidance.

[1] The term “deepfake” is a combination of the terms “deep learning” and “fake.” 

This AALRR post is intended for informational purposes only and should not be relied upon in reaching a conclusion in a particular area of law. The applicability of the legal principles discussed may differ substantially in individual situations. Receipt of this or any other AALRR publication does not create an attorney-client relationship. The Firm is not responsible for inadvertent errors that may occur in the publishing process. 

  © 2024 Atkinson, Andelson, Loya, Ruud & Romo
