A new wave of remarkably realistic AI-generated videos is rapidly gaining traction on social media, driven by the release of Sora 2, an advanced text-to-video generator from OpenAI, the maker of ChatGPT. Previously invite-only, Sora 2 has temporarily opened to all users in select countries, including the United States, Canada, Japan, and South Korea. This expanded access lets users create lifelike, cinematic scenes from simple text prompts, with the platform boasting superior visual style and storytelling compared with many rivals. Generated videos can run up to 20 seconds at 1080p resolution and carry optional watermarks denoting their AI origin. However, the free public-use window is limited, and OpenAI has announced plans for custom pricing early next year. Notably, Sora 2 remains unavailable in certain regions, including the UK, EU countries, and Switzerland, reflecting ongoing regulatory and safety considerations.

The flood of AI-assisted content has elicited mixed reactions, blending admiration for its creative potential with serious concerns about authenticity and misuse. Cybersecurity firm DeepStrike reports a steep rise in deepfake files, from 500,000 in 2023 to 8 million in 2025, showing how rapidly the technology and its applications are expanding. For creators and consumers alike, distinguishing genuine footage from AI fabrication is becoming increasingly difficult. Content creator Madeline Salazar, who uses social media to educate audiences about technology, explains that earlier tells, such as abnormal limb counts or overt distortions, have largely disappeared. Instead, subtle visual inconsistencies, like slightly shifting hair strands, rippling foam textures, and minor drifting of stationary objects, now serve as clues to a video's artificial origin. Complex scenes involving repetitive patterns or architectural details often reveal warping or alignment errors. Some AI-generated videos even mimic grainy security-camera footage, deliberately exploiting viewers' lower expectations of such visuals.
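The "drifting stationary object" cue Salazar describes can be approximated programmatically. Below is a minimal Python sketch, not a method from the sources, that uses OpenCV's Farneback optical flow to measure motion inside a region the viewer believes should be static; the file name, region coordinates, and threshold are all illustrative placeholders. It is a rough heuristic, not a deepfake detector: genuine camera shake will also trip it, and a clean result proves nothing.

```python
# Heuristic check for "drift" in a region of a video that should be static
# (a wall, a sign, a parked object). Illustrative only: high flow does NOT
# prove a video is AI-generated, and low flow does not clear it.
# Assumes: pip install opencv-python numpy. File name and ROI are placeholders.
import cv2
import numpy as np

VIDEO_PATH = "clip.mp4"          # hypothetical input file
X, Y, W, H = 100, 100, 120, 120  # region the viewer judges to be static
DRIFT_THRESHOLD = 0.5            # mean flow magnitude (pixels/frame) to flag

cap = cv2.VideoCapture(VIDEO_PATH)
ok, frame = cap.read()
if not ok:
    raise SystemExit(f"could not read {VIDEO_PATH}")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames (Farneback method);
    # positional args: pyr_scale, levels, winsize, iterations, poly_n,
    # poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Mean motion magnitude inside the supposedly static region.
    roi = flow[Y:Y + H, X:X + W]
    drift = np.linalg.norm(roi, axis=2).mean()
    if drift > DRIFT_THRESHOLD:
        print(f"frame {frame_idx}: static region moved ~{drift:.2f} px")
    prev_gray = gray
    frame_idx += 1

cap.release()
```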

Salazar stresses that beyond visual signs, context is paramount when evaluating AI videos. The provenance of content, including the posting account's history and the presence or absence of watermarks, can offer critical insights. For example, an AI-generated image purporting to show trash invading homes in the Outer Banks was debunked through architectural anomalies and the suspicious origin of the post. Such examples underscore the importance of scepticism and critical analysis amid the growing proliferation of AI media.
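Provenance can sometimes be checked mechanically as well. Sora output is reported to carry C2PA "Content Credentials" metadata, and the sketch below (an assumption-laden illustration, not a workflow from the sources) shells out to the open-source c2patool CLI from the Content Authenticity Initiative to see whether a manifest survives in a downloaded file; the file name is a placeholder, and invocation details may vary across c2patool versions. Because re-encoding or screen-recording typically strips such metadata, absence is no proof of authenticity.

```python
# Look for C2PA "Content Credentials" metadata in a downloaded video.
# Presence of a manifest is a strong hint about origin; absence proves
# nothing, since re-uploads commonly strip metadata.
# Assumes the `c2patool` CLI is installed and on PATH (its default
# invocation prints manifest data as JSON); "downloaded.mp4" is a
# placeholder file name.
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(["c2patool", path],
                            capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None  # no manifest found, or the tool reported an error
    return json.loads(result.stdout)

manifest = read_content_credentials("downloaded.mp4")
if manifest is None:
    print("No Content Credentials; fall back to context and visual cues.")
else:
    print("Content Credentials present; inspect the claim generator:")
    print(json.dumps(manifest, indent=2)[:500])
```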

The darker side of the technology is evident in the real-world consequences of AI-driven pranks and hoaxes. In Ohio, fabricated videos depicting homeless intruders have prompted multiple emergency calls, mobilising police responses and diverting resources from genuine incidents. Two juveniles have faced criminal charges over these hoaxes, illustrating the tangible societal harm malicious AI content can cause. Law enforcement agencies and lawmakers are increasingly focused on these emerging threats, with states such as Ohio proposing legislation aimed specifically at curbing deepfake abuses. Ohio Attorney General Dave Yost has voiced strong support for these measures amid skyrocketing AI-facilitated scams and fraud.

At the same time, public advocacy groups such as Public Citizen have condemned OpenAI's release of Sora 2, arguing that it neglected essential safety and ethical protocols in the rush to compete in the AI video generation space. They warn that the unchecked spread of synthetic media risks undermining public trust in authentic visual evidence, disproportionately harming vulnerable populations and complicating democratic discourse. This concern is echoed by academics who describe a "liar's dividend": the mere existence of convincing AI-generated content lets bad actors dismiss genuine evidence as fake, eroding accountability. Although OpenAI has implemented certain restrictions, such as banning the depiction of public figures and embedding watermarks, users have already circumvented these safeguards, raising doubts about the company's ability to police misuse effectively.

The social ramifications extend further into personal privacy and consent. Platforms like Sora now treat AI-generated recreations of individuals as "cameos," notifying users if their likeness is used and allowing video removal requests. However, the viral nature of these clips means that once distributed, control over one's digital image is tenuous at best. This shift from deepfake stigma to social media feature raises complex questions about identity, agency, and the ethics of synthetic media creation.

Despite these challenges, many, including Salazar, emphasise the creative empowerment the technology can offer. The ability for independent artists and smaller production teams to generate high-quality media content at low cost could democratise content creation, opening new avenues for storytelling and artistic expression. She posits that the current surge in AI videos might also trigger a cultural "reset," encouraging viewers to engage more critically and sceptically with digital content, thus refining media literacy in an age of synthetic realities.

OpenAI acknowledges the ongoing concerns and says it is engaging with global stakeholders to improve safeguards and ethical standards. However, the technology's rapid evolution and widespread adoption continue to outpace regulation and societal adjustment, signalling a pivotal moment in the intersection of AI, media, and public trust.

Source: Noah Wire Services