Culture & Influence
Eckart von Hirschhausen releases documentary on deepfake fraud
Eckart von Hirschhausen presents a new documentary titled 'Hirschhausen und die Deepfake-Mafia', investigating a criminal network using AI-generated videos of his face and voice for online fraud. The film, airing on ARD on 4 May, details how scammers target vulnerable consumers with fake health products. Hirschhausen, along with other public figures, highlights the erosion of trust and calls for stricter regulations against tech platforms and commercial entities facilitating this identity theft.
YouTube expands deepfake detection tools to wider group of users
YouTube has expanded its likeness detection tools to a broader group of users, including actors, athletes, creators, and musicians, regardless of whether they have a channel. Previously restricted to select groups such as government officials and journalists, the feature now allows at-risk individuals to upload face scans for cross-checking against potential imposters. In development since September 2024, the technology uses face scans and government IDs to alert users when their likeness appears in others' uploads, enabling them to identify misuse and request content removal as AI-generated misrepresentation becomes more prevalent.
Authenticity and trust become central as AI-generated content alters creator economy demand
The surge in AI-generated content has shifted audience demand towards trust and authenticity, elevating the value of human creators who share personal experiences. Reports indicate that while micro-influencers maintain higher engagement rates than macro-influencers, virtual influencers struggle to build comparable trust. Brands are advised to balance AI production efficiency with human storytelling to maintain credibility. As platforms consider stricter disclosure standards, the market is expected to segment into high-trust human creators and mass-market AI content producers.
Generative AI introduces uncertainty to character copyright protection
The rapid adoption of generative AI creates legal uncertainty regarding character copyright protection. While preexisting characters may remain protectable, their appearance in AI-generated works occupies an ambiguous legal space. Courts have previously denied protection to characters lacking consistent traits, such as the 'Eleanor' Mustang. The Copyright Office has also ruled that AI outputs lacking human authorship are unregistrable, leaving rights holders in a gray area where underlying characters might be protected but the specific AI output is not. Experts advise caution and documentation as courts and regulators seek to clarify how AI interacts with established doctrines.
Google AI overviews cause significant traffic loss for digital publishers
The introduction of AI overviews in search engines has led to a sharp decline in organic traffic for many digital publishers. These generative summaries provide immediate answers, increasing the zero-click rate and pushing traditional organic listings below the fold. Publishers relying on factual content face reduced click-through rates as users satisfy their intent without visiting websites. Experts recommend diversifying traffic sources, building strong brand recognition, and optimizing for complex queries to mitigate this impact.
Unions struggle to protect journalists' rights in the age of AI
A Reuters Institute survey indicates that while AI is increasingly integrated into journalism workflows for tasks like research, transcription, and content generation, job cuts due to AI remain marginal. Approximately two-thirds of media managers surveyed reported no job reductions attributable to AI, though the situation may evolve. The article highlights the challenges unions face in safeguarding journalists' rights amidst these technological shifts.
Major labels and platforms deploy AI tools to combat infringement while legal battles over training data continue
Major record labels are adopting AI systems to detect copyright infringement and monitor synthetic content. Simultaneously, Live Nation forecasts a strong 2026 touring season. YouTube has expanded its likeness detection technology to identify unauthorized use of artists' identities. In the legal sector, Anthropic has filed a motion for summary judgment in a lawsuit regarding the fair use of copyrighted material for AI training. The industry faces ongoing challenges in establishing standardized AI licensing frameworks for music publishing.
Alvarez & Marsal report finds US consumers increasingly open to fully AI-created films
An Alvarez & Marsal report indicates US consumers are shifting towards AI-enabled entertainment, expecting to increase consumption of AI-enabled content by 29% while reducing traditional broadcast viewing by 11%. The study, 'Lights, Camera, AI', reveals that 64% of consumers believe human-AI collaboration yields premium content, and over half are willing to pay standard prices for fully AI-generated films. However, only 51% feel confident distinguishing human from AI content, highlighting a need for transparency in the media and entertainment sector.
Synthetic media challenges authenticity and trust in the digital age
Synthetic media, created by artificial intelligence, allows for the generation of realistic images, videos, audio, and text, disrupting the traditional relationship between media and truth. While offering creative possibilities, deepfakes pose risks of misinformation and reputational damage. The phenomenon also creates a 'liar's dividend,' where genuine evidence is dismissed as fake. Solutions include AI detection systems, digital provenance, and improved media literacy. Ultimately, maintaining trust requires balancing innovation with accountability and verification systems.
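Of the solutions mentioned above, digital provenance is the most mechanical: it typically binds a cryptographic hash (and, in full systems such as C2PA Content Credentials, signed metadata) to a media file at creation, so any later tampering is detectable. A minimal Python sketch of the idea, with illustrative function names and manifest fields that do not follow any formal provenance standard:

```python
import hashlib

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """Record a content hash alongside claimed origin metadata (illustrative fields)."""
    return {
        "creator": creator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the media still matches the hash recorded at creation."""
    return hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"]

original = b"frame-data-of-a-genuine-video"
manifest = make_manifest(original, creator="Newsroom A")

assert verify(original, manifest)                     # untouched file passes
assert not verify(b"deepfaked-frame-data", manifest)  # altered file fails
```

Real provenance schemes additionally sign the manifest so that the metadata itself cannot be forged; the hash alone only proves the file is unchanged since the manifest was made.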
Debevoise Data Blog explores definition and business impact of deepfakes
The Debevoise Data Blog discusses the evolving definition of deepfakes, noting the lack of a single universal legal or technical standard despite increasing global regulatory attention. The article outlines key hallmarks of deepfakes, including deceptive AI-generated content with the potential for harm. It highlights compliance obligations under frameworks such as the EU AI Act and South Korea's AI Basic Act, which require disclosure of synthetic content. The piece also addresses deepfakes as a vector for cybersecurity attacks and insurance fraud, advising businesses to update incident response procedures and implement governance controls to mitigate risks from impersonation and misinformation.
Website owners must adapt strategies for AI-powered search results
Search behaviour is shifting as artificial intelligence becomes central to online information delivery, replacing link lists with direct answers. Traditional SEO is no longer sufficient; website owners must now prioritise content quality, structure, and relevance to ensure visibility in AI-generated responses. Businesses need to understand how AI interprets content to remain competitive in this evolving landscape.
Media Copilot launches AI Quick Start workshop for journalists and PR pros
Media Copilot is hosting an AI Quick Start workshop on Friday, May 8, at 1 p.m. ET for journalists and PR professionals. The one-hour session covers prompting frameworks, deep research tools, custom GPT creation, and AI agent usage. The event aims to help attendees move beyond basic AI usage to practical applications in newsrooms and communications.
AI search optimization tools increase organic traffic by analyzing intent and optimizing content structure
Organic traffic patterns have shifted due to search engines relying on artificial intelligence for query understanding and generating direct answers. Traditional SEO is insufficient as click-through rates decline with the rise of zero-click searches and AI overviews. AI search optimization tools now offer a competitive advantage by analyzing search intent, identifying content gaps, optimizing structure, and monitoring visibility across both traditional results and AI-generated answers. These tools help businesses attract relevant traffic, improve search visibility, and strengthen authority, ensuring content is cited as a trusted source in the evolving AI search era.
Deepfakes require e-discovery counsel to address authenticity upstream
Martin Felsky, Senior Counsel, argues that the emergence of deepfakes necessitates a shift in e-discovery practices regarding audiovisual evidence. The article states that counsel must now address authenticity risks upstream within the discovery process rather than deferring concerns to trial. Recommended actions include early risk identification, preservation of original files and metadata, rigorous documentation of chain of custody, and proactive engagement of forensic experts to ensure evidentiary integrity under Canadian judicial standards.
Media leaders demonstrate practical AI deployments at Programming Everywhere conference
Executives from Sky, Gray Media, AWS, Genna, Luma AI, and A+E Factual Studios presented verified case studies on AI implementation at TVNewsCheck's Programming Everywhere conference in Las Vegas. Panelists detailed operational shifts including automated subtitling with 90-99% accuracy, article-to-video conversion generating revenue, and AI-generated historical imagery. The session highlighted measurable results in production efficiency and audience trust strategies, marking a transition from theoretical discussion to practical application in media workflows.
Emerging AI search platforms transform information retrieval with conversational answers
Generative AI search engines like Google AI Overviews, Microsoft Copilot, Perplexity, and SearchGPT are replacing traditional link lists with direct, synthesized answers. This shift prioritizes user intent and real-time data synthesis, necessitating a move from traffic-focused metrics to measuring brand visibility through citations and mentions. Businesses must adapt by implementing structured data, semantic SEO, and entity recognition to remain visible in AI-generated responses. The rise of zero-click searches challenges traditional advertising models, requiring marketers to focus on building topical authority and direct audience relationships.
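"Implementing structured data" in this context usually means embedding schema.org JSON-LD in a page so that machine readers can unambiguously identify entities such as the article, its author, and its publisher. A minimal sketch in Python, where the headline, author, and publisher values are placeholders:

```python
import json

def article_jsonld(headline: str, author: str, publisher: str) -> str:
    """Build a schema.org NewsArticle JSON-LD block for embedding in a page."""
    data = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "publisher": {"@type": "Organization", "name": publisher},
    }
    # The resulting string is placed inside a
    # <script type="application/ld+json"> tag in the page's HTML.
    return json.dumps(data, indent=2)

print(article_jsonld("Example headline", "Jane Doe", "Example News"))
```

The point of the markup is entity recognition: a crawler or AI answer engine can attribute the content to a named publisher without parsing free-form HTML, which is one precondition for being cited in a generated answer.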
Reality Defender leads top AI deepfake detection tools in 2026
A 2026 review identifies Reality Defender as a leading AI tool for detecting fake images, videos, and synthetic media, highlighting its real-time multimodal detection and API integration. Other tools listed include Sensity AI, Amber Authenticate, Hive Moderation, Illuminarty, Intel FakeCatcher, Deepware Scanner, IdentifAI, CloudSEK, and InVID. These solutions address risks such as financial fraud, political misinformation, and identity theft by analyzing pixel errors, lighting inconsistencies, and biometric signals. Pricing varies from free for journalists to custom enterprise rates.
Copyrightability of AI-generated works may determine future of creative labor
The future of creative labor depends less on lawsuits over AI training data and more on whether AI-generated works can receive copyright protection. Following Thaler v. Perlmutter, which held that autonomously generated AI works are uncopyrightable, major entertainment companies such as Netflix and Hachette maintain human authorship to preserve licensing revenue and combat piracy. Without copyright protection, the financial models for film, music, and publishing could collapse, as AI content could not be monetized as intellectual property. Consequently, industry gatekeepers continue to employ human creators to ensure content remains commercially viable and legally protectable.
Expert outlines seven methods to detect deepfake fraud
Taylor Lei of VidMage details seven practical techniques for identifying synthetic video fraud, including checking micro-expressions, verifying edges, and using out-of-band confirmation. Citing an iProov study showing only 24.5% detection accuracy by the general public, the article warns against relying solely on viral tricks like the three-finger test. Lei advises asking specific questions, checking audio-visual sync, and requesting unscripted physical actions to mitigate the risk of falling victim to deepfakes.
Joaquín Cuenca Abela says AI is revolutionizing Hollywood filmmaking and reducing production costs
Joaquín Cuenca Abela, CEO of Magnific, states that AI is transforming Hollywood by significantly reducing film production costs to one-third of previous levels and enhancing creative efficiency. He notes that while AI automates technical aspects and enables personalized content, it cannot replicate human individuality. Consequently, the demand for skilled storytellers is expected to increase as studios focus more on narrative depth and performance.