A Florida woman has admitted fabricating a sexual-assault report after using an AI-generated image of a homeless man to support her claim, in a case authorities say was inspired by a viral TikTok challenge. According to the original report, Brooke Schinault, 32, called police to her St Petersburg home in October, alleging a man had broken in and sexually assaulted her; officers found no evidence of a crime but were shown an image she said depicted the intruder. [1][7]

Police charging documents reviewed by media outlets said the image was later identified as AI-generated and had been created days before the alleged incident. A detective noted the picture was in a deleted folder and recognised it as part of a social-media trend that inserts a homeless man into photos of people’s homes. The officer wrote that the challenge was called the "AI homeless man challenge" and said they "found [photos of] the same man the female claimed she took." [1]

When questioned, Schinault initially insisted she was telling the truth, claiming she had used AI only to enhance an existing photo; after being taken into custody she admitted creating the picture and told investigators she was struggling with depression and "wanted attention." Court records show she pleaded no contest to falsely reporting a crime, was placed on probation and ordered to pay a fine. [1][7]

Local detectives said the false report triggered a large emergency response. An account of the callout obtained by reporters described officers, rescue personnel and forensic technicians being dispatched to the scene, underscoring how a single hoax can consume significant resources. Authorities emphasised the operational and safety risks created when officers respond at speed to what appears to be an active intruder. [1][7]

Police departments across the United States have issued warnings about the same viral trend, saying it wastes emergency resources and can create dangerous situations. Yonkers Police Department in New York posted a public service announcement stating: "The 'AI Homeless Man' Prank isn't funny - it's dangerous". The department warned that officers respond "FAST using lights-and-sirens" and that the prank is "a real safety risk for officers who are responding and for the family members who are home." It urged parents to talk to their children about misusing AI. [1][2][5]

Advocacy groups and commentators have also condemned the prank for dehumanising people experiencing homelessness and for causing unnecessary fear and distress. Industry and law-enforcement commentaries published this month note that, beyond the legal consequences for pranksters, the trend risks normalising the use of manipulated imagery to provoke panic and divert real-world resources. [3][4][6]

The case in St Petersburg highlights wider legal and ethical questions as AI tools make realistic fabricated images easier to create and circulate. According to coverage of the incident and the related warnings, authorities are urging social-media users to stop participating in challenges that manufacture false emergencies and to consider the real-world consequences before sharing AI-generated material. [1][3][5]

## Reference Map:

  • [1] (Daily Mail) - Paragraphs 1, 2, 3, 4, 5, 7
  • [2] (Fox News) - Paragraph 5
  • [3] (Forbes) - Paragraphs 6, 7
  • [4] (Boston.com) - Paragraph 6
  • [5] (Good Morning America) - Paragraphs 5, 7
  • [6] (WBAY/Shawano County) - Paragraph 6
  • [7] (WDEF) - Paragraphs 1, 3, 4

Source: Noah Wire Services