X Didn’t Fix Grok’s ‘Undressing’ Problem. It Just Makes People Pay for It

Metro Loud
5 Min Read


After creating thousands of “undressing” pictures of women and sexualized imagery of apparent minors, Elon Musk’s X has apparently restricted who can generate images with Grok. However, despite the changes, the chatbot is still being used to create “undressing” sexualized images on the platform.

On Friday morning, the Grok account on X started responding to some users’ requests with a message saying that image generation and editing are “currently limited to paying subscribers.” The message also includes a link pushing people toward the social media platform’s $395 annual subscription tier. In one test of the system requesting that Grok create an image of a tree, the system returned the same message.

The apparent change comes after days of rising outrage against, and scrutiny of, Musk’s X and xAI, the company behind the Grok chatbot. The companies face a growing number of investigations from regulators around the world over the creation of nonconsensual explicit imagery and alleged sexual images of children. British prime minister Keir Starmer has not ruled out banning X in the country and said the actions were “unlawful.”

Neither X nor xAI, the Musk-owned company behind Grok, has confirmed that it has made image generation and editing a paid-only feature. An X spokesperson acknowledged WIRED’s inquiry but did not provide comment ahead of publication. X has previously said it takes “action against illegal content on X,” including instances of child sexual abuse material. While Apple and Google have previously banned apps with similar “nudify” features, X and Grok remain available in their respective app stores. xAI did not immediately respond to WIRED’s request for comment.

For more than a week, users on X have been asking the chatbot to edit pictures of women to remove their clothes, often asking for the image to include a “string” or “clear” bikini. While a public feed of images created by Grok contained far fewer of these “undressing” images on Friday, it still created sexualized images when prompted to by X users with paid-for “verified” accounts.

“We observe the same kind of prompt, we observe the same kind of outcome, just fewer than before,” Paul Bouchaud, lead researcher at Paris-based nonprofit AI Forensics, tells WIRED. “The model can continue to generate bikini [images],” they say.

A WIRED review of some Grok posts on Friday morning identified Grok producing images in response to user requests for images that “put her in latex lingerie” and “put her in a plastic bikini and cover her in donut white glaze.” The images appear behind a “content warning” box saying that adult material is displayed.

On Wednesday, WIRED reported that Grok’s stand-alone website and app, which is separate from the version on X, has also been used in recent months to create highly graphic and sometimes violent sexual videos, including of celebrities and other real people. Bouchaud says it is still possible to use Grok to make these videos. “I was able to generate a video with sexually explicit content without any restriction from an unverified account,” they say.

While WIRED’s test of image generation using Grok on X with a free account did not allow any images to be created, a free account on Grok’s app and website still generated images.

The change on X could immediately limit the amount of sexually explicit and harmful material the platform is creating, experts say. But it has also been criticized as a minimal step that acts as a band-aid for the real harms caused by nonconsensual intimate imagery.

“The recent decision to restrict access to paying subscribers is not only inadequate: it represents the monetization of abuse,” Emma Pickering, head of technology-facilitated abuse at UK domestic abuse charity Refuge, said in a statement. “While limiting AI image generation to paid users may marginally reduce volume and improve traceability, the abuse has not been stopped. It has simply been placed behind a paywall, allowing X to profit from harm.”

