RedPeach, a premium content platform, is using facial-recognition checks to bar what it describes as "chatters": human operators and AI tools that impersonate creators in private messages. The company says the aim is to ensure subscribers are speaking to the person they paid to access, rather than a stand-in. According to RedPeach's own face-verification page, creators must authenticate before private messaging, and the system is designed to block agencies, bots and AI from handling conversations.

Marco Cally, the firm's chief executive and co-founder, has cast the measure as a safeguard against what he calls emotional deception online. Speaking to the Daily Star, he said the platform had a zero-tolerance approach to AI bots and insisted that only verified creators could continue chats with subscribers. He said the process requires creators to pass facial recognition on their phones before they are allowed into private conversations.

The move comes against a backdrop of growing legal scrutiny of creator platforms and their use of paid intermediaries. In July 2024, a US class-action complaint accused one major platform and several management firms of letting "chatters" pose as creators. Although the case was later dismissed, it drew attention to an industry practice in which fans may believe they are speaking directly to a performer when they are not. Separate reporting on a High Court case has also exposed how agencies use third-party chat operators to keep engagement, and revenue, flowing.

RedPeach is trying to turn that controversy into a selling point. The company says its verification system is intended to protect paying users from false intimacy and to preserve what it presents as genuine one-to-one contact. In Cally's telling, the platform is positioning itself as a more transparent alternative in a market where trust has become an increasingly valuable commodity.

Source: Noah Wire Services