When AI Crosses the Line
What every parent and caregiver needs to know.
AI technology has made it possible for anyone to create convincing, explicit “deepfake” images, even of children and teens, without their knowledge or consent. A recent GovTech article highlights just how quickly this problem is growing in schools, where students have already been caught creating and sharing fake nude images of classmates.
This is not harmless “digital pranking.” It is abuse, and it leaves survivors feeling exposed, humiliated, and powerless. Worse, many schools and states are still scrambling to catch up, leaving parents, caregivers, and students with more questions than answers.
A Patchwork of State Laws on AI Deepfake Images
Right now, there is no single federal law that fully protects kids from AI-generated sexual images. Some states have acted, but many haven’t. Here’s a snapshot of what’s happening:
- Arizona: Enacted laws on AI deepfakes and political advertising.
- California: Has multiple laws covering digital identity theft and political deepfakes.
- Florida: Requires disclaimers on political ads and has criminal/civil penalties for related violations.
- Iowa: Enacted a bill addressing the sexual exploitation of minors through deepfake technology.
- Louisiana: Created a crime for the unlawful dissemination or sale of AI-created images of individuals.
- Minnesota: Prohibits disseminating manipulated election-related media within 90 days before an election when it is made without the depicted person’s consent and is intended to harm a candidate or influence the result.
- New Jersey: Made it a third-degree crime to create deepfake audio, video, or images with malicious intent.
- Texas: Makes it a criminal offense to create a deepfake video with the intent to harm a candidate or influence an election outcome.
47 states have passed some form of law criminalizing deepfakes, but that still leaves gaps. Survivors in states without clear protections may have to rely on general harassment, defamation, or bullying laws, which can be harder to enforce. We need one universal federal law.
What Parents and Caregivers Can Do
Even while the laws catch up, prevention and quick action are key:
- Talk Early and Often – Teach kids that creating or sharing explicit deepfakes is a serious violation of consent and can be illegal.
- Secure Devices and Accounts – Use strong passwords, enable two-factor authentication, and limit app permissions.
- Check Your School’s Policies – Find out if “AI-generated images” are explicitly covered under harassment or bullying policies.
If You or Your Child Is Targeted by AI Deepfake Images
If a deepfake has been created or shared, act quickly:
- Document Everything – Screenshot images, links, and dates.
- Report Immediately – Use the platform’s reporting system to flag the image as non-consensual sexual content.
- Demand Takedowns – Under the new TAKE IT DOWN Act, platforms must remove reported non-consensual intimate deepfake images within 48 hours.
- Contact Legal Support – Our firm helps survivors pursue civil action to have images removed, stop further sharing, and seek damages for the harm caused.
- Get Emotional Support – This is traumatic. Connect with a counselor or support group to process the emotional impact.
Why Civil Lawsuits Matter
Criminal prosecutions are important, but civil lawsuits often create the biggest change. Civil cases can:
- Force platforms and perpetrators to remove the images.
- Provide financial compensation for emotional harm.
- Establish legal precedent that strengthens future cases.
- Pressure lawmakers to close loopholes and strengthen protections.
This is how we move from outrage to systemic change: holding wrongdoers accountable in court and making it costly for institutions to ignore these harms.
Deepfake abuse is one of the fastest-growing threats facing kids today, and the law is still catching up. Survivors deserve better than patchwork protections and slow responses.
At Andreozzi + Foote, we fight to hold wrongdoers, and the systems that enable them, accountable. If your child has been targeted with a deepfake image, you are not powerless. We can help you take action to have those images removed, file a civil suit, and push for the policy changes needed to protect others. Call 1-866-753-5458.