Several school districts across the country have recently been forced to confront negative uses of “deepfakes,”[1] a new and concerning type of generative AI technology. Deepfakes are hyper-realistic video or audio clips that can depict individuals saying or doing things they never actually said or did. Although the underlying process is complex, the online interfaces and software are readily accessible to anyone who wishes to create a deepfake. (See https://www.reuters.com/legal/legalindustry/manipulating-reality-intersection-deepfakes-law-2024-02-01.) Someone who wants to create a deepfake need only input video or audio clips of an individual and direct an AI program to synthesize artificial video or audio of that individual that by all appearances is real, even if the individual never engaged in the depicted activity. These manipulations can spread misinformation, undermine individuals’ credibility, and sow distrust on a very large scale.
As we discussed in a recent Alert, deepfake technology creates several issues for local educational agencies (“LEAs”). Some of the most pressing legal issues that deepfakes present to school districts include the following: (1) defamation of teachers, administrators, and other members of the school community; (2) cyberbullying and harassment of students, particularly young women; and (3) how to appropriately respond to the inappropriate creation and circulation of deepfakes. As educators strive to create a safe and supportive learning environment for their students, understanding and addressing the threats posed by deepfakes is crucial to maintaining the integrity and security of our schools.
1. Impact of Deepfakes on Administrators, Teachers, and Parents
Recent news articles indicate a worrying trend of school educators being “deepfaked” by both students and co-workers. One high-profile instance involved an audio clip of a principal at a Baltimore-area high school that depicted him making racist and anti-Semitic remarks. The audio clip was shared more than 27,000 times and prompted an outpouring of anger from parents and students in the community, and the school district placed the principal on administrative leave while it conducted an investigation. After several months, AI experts determined that the audio clip was artificially generated, which exonerated the principal and led to the arrest of a teacher who allegedly created and circulated the clip in retaliation against the principal. Needless to say, the damage to the principal’s reputation had already been done.
School districts risk reputational harm to employees who become victims of similar deepfakes. Due to their positions of authority, administrators and teachers are particularly vulnerable. If school districts are not aware of this technology and how to respond to it immediately, administrators (and members of the school community) may be inclined to believe such evidence without question. In addition, disciplining employees in response to deepfakes without fully evaluating whether they are legitimate depictions of the individuals in question may expose school districts to liability for, among other things, defamation. Ultimately, school districts are better served by becoming educated about deepfakes and developing a plan for addressing them swiftly should their employees become victims.
School districts must also consider how best to relay information about deepfakes containing violent threats and discriminatory statements to the school community. For example, in February 2023, students in Putnam County, New York created a deepfake video of their principal that depicted him making violent threats and other inappropriate comments directed at minority students. The Superintendent sent letters to parents stating in part that disciplinary action had been taken against the students who created the deepfakes and condemning “the blatant racism, hatred and disregard for humanity,” and the school district hosted forums with families and law enforcement. Even so, some parents reported feeling that the school district minimized the severity of the videos and failed to address specific statements in them, particularly threats against minority students.
The often highly offensive nature of deepfakes and their impact on the school community may pressure school districts and administrators to take quick and decisive action. However, it is important to act methodically, rather than reactively, to investigate and take corrective action as needed in accordance with Board policies and collective bargaining agreements.
2. Deepfake Bullying Particularly Affects Female Students
One of the more disturbing aspects of deepfake technology is the ability to fabricate and distribute pornographic images of individuals. As the prevalence of generative AI in schools increases, so do instances of cyberbullying and harassment involving deepfakes depicting fellow students, particularly female students. For example, in October 2023, male students at a New Jersey high school created pornographic deepfakes depicting some of their female classmates. Although the school district immediately opened an investigation, parents admonished the district for its perceived silence and alleged that it had not done enough to publicly address the deepfakes or update school policies to combat improper uses of generative AI.
In addition, school districts may not be fully aware that these types of AI-generated images should be reported to law enforcement. For example, during the Fall 2023 semester, a student at a Seattle-area high school created and circulated deepfake images of his female classmates. The high school failed to report the incident to law enforcement. In a later statement, the district noted that its legal team had advised that it was not required to report “fake” images to the police, but acknowledged that if a similar situation arose in the future, it would do so.
To be sure, schools are required to report the distribution of pornographic AI-generated photos and videos of minors to law enforcement. Not only might schools face liability for failing to report pornographic deepfakes to law enforcement, but if the generation or distribution of exploitative images occurs on school grounds or could have been prevented by policy safeguards, it is possible that courts may find reason to hold districts civilly responsible.
3. How Should Schools Approach Discipline for Misuse of Deepfake Technology?
Current school discipline policies may not be adequately equipped to address the misuse of deepfake technology by students or staff. Given the rapid development of generative AI, schools should directly address generative AI and its misuse in trainings, policies, and guidelines. A recent report by the Center for Democracy and Technology found that only 38% of students surveyed have received guidance from their schools on how to spot AI-generated images, text, or videos. Importantly, 71% said they believe it would be helpful if their schools provided such training. Disciplining students without first providing official guidance on the responsible use of generative AI raises issues of ambiguity and unfairness. Indeed, many students and staff are actively seeking guidance on navigating the use of generative AI on campus.
Of added legal concern, any expansion of disciplinary policies raises issues of equity across different student demographics. The same report found that students with IEPs and/or 504 plans report higher generative AI usage, with 72% saying they have used ChatGPT or other forms of generative AI. Additionally, Title I and licensed special education teachers report higher rates of disciplinary action among their students for generative AI use. Without further guidance on responsible generative AI use, disciplining AI misuse may exacerbate vulnerabilities in certain student populations.
4. The Limits of School Authority on Disciplining Students for Misuse of AI
As schools adapt to AI’s proliferation, there is a question about the extent to which schools have authority to discipline students for its misuse. In Tinker v. Des Moines School Dist., 393 U.S. 503 (1969), the Supreme Court held that although students do not shed their constitutional rights to freedom of speech while on school grounds, school districts may place certain restrictions on these rights. These limitations historically could not be enforced once students were off campus. However, as the internet proliferated, “off-campus” speech increasingly affected schools, forcing courts to determine how far a school’s authority to restrict student speech extends. For instance, a school may be allowed to punish a student’s cyberbullying even though the “speech” took place entirely within his or her own home and on a private device, if it can demonstrate that the speech substantially or materially interfered with school operations. (Kowalski v. Berkeley Cty. Sch., 652 F.3d 565 (4th Cir. 2011).)
Social media connects students to each other off campus in ways that Tinker, decided more than 50 years ago, could not have foreseen, testing the limits of school districts’ authority to discipline students for off-campus activities. In 2021, the Supreme Court held in Mahanoy Area School District v. B.L. that a Pennsylvania high school violated a student’s First Amendment rights by punishing her for a profanity-laden Snapchat post she made off campus. At the same time, the Court stated that it did not believe “the special characteristics that give schools additional license to regulate student speech always disappear when a school regulates speech that takes place off campus.” The opinion observed that a student’s off-campus speech will generally be the parents’ responsibility and that if schools were allowed to regulate all such speech, their reach would cover essentially everything a student says outside of school.
Student misuse of AI-generated content likely raises similar First Amendment concerns. A school seeking to punish a student for off-campus misuse of AI would need to show that the misuse substantially impacted the school. (See Tinker, 393 U.S. at 509 [holding that to justify suppressing student speech that is otherwise covered by the First Amendment, school officials must demonstrate that the speech materially and substantially interferes with the operation of the school].) In instances such as the above-referenced events in Baltimore or Putnam County, it is clear that deepfakes of school administrators can cause a substantial disruption to students, parents, and the school district. However, not all instances of deepfake misuse will be so clear cut. A court may not find that a school has authority to suppress AI usage that occurred predominantly off campus and was not directed at the school. As many push for schools to develop robust AI discipline policies, the reach of those policies could be found unconstitutional.
Over the last two years, state legislators have made a number of attempts to introduce legislation that criminalizes the misuse of deepfakes. For example, in January 2024, California legislators introduced AB 1831, which would expressly criminalize the creation, possession, and distribution of AI-generated sexually explicit images of children. In addition, in June 2024, several U.S. Senators introduced the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (the "TAKE IT DOWN Act"), which would, among other things, criminalize deepfake pornography and other "non-consensual intimate imagery" and require social media and other websites to implement procedures for removing such content upon notification by a victim. These and other bills are currently being considered at both the state and federal levels.
5. Moving Forward
As the use of generative AI technology continues to develop, school districts should become educated about deepfakes and ensure that they are prepared to properly and quickly address them. They should also review and update their student and employee discipline policies to clarify the parameters under which the school may intervene when students or employees create pornographic and other inappropriate deepfakes.
Should you have any questions concerning this topic, please do not hesitate to contact the authors or your usual counsel at AALRR for guidance.
[1] The term “deepfake” is a combination of the terms “deep learning” and “fake.”
This AALRR post is intended for informational purposes only and should not be relied upon in reaching a conclusion in a particular area of law. The applicability of the legal principles discussed may differ substantially in individual situations. Receipt of this or any other AALRR publication does not create an attorney-client relationship. The Firm is not responsible for inadvertent errors that may occur in the publishing process.
© 2024 Atkinson, Andelson, Loya, Ruud & Romo
Alex Lozada, Senior Counsel
Alex Lozada is a seasoned attorney who provides legal counsel to school districts, community college districts, and county offices of education. With an extensive background in litigation, Mr. Lozada brings a wealth of experience ...
Paul McGlocklin, Partner
Paul McGlocklin represents school districts, community college districts, and county offices of education, focusing on classified and certificated employment matters and other labor and employment issues. He also handles ...
Tien Le, Senior Associate
Tien Le represents California school districts in education law matters. His experience in the field ranges from investigating complaints alleging violations of unlawful discrimination, harassment and intimidation, and ...