X’s Grok 2 AI deepfake technology sparks misinformation fears ahead of US elections

The release of Grok 2, the latest AI chatbot from X, the social media company formerly known as Twitter, is raising alarm as the 2024 US elections approach.

Grok 2, unveiled in August, has come under fire for generating lifelike deepfakes of politicians, contributing to a growing wave of misinformation online. With limited safeguards, Grok has allowed users to create false images of public figures engaged in scandalous and even illegal activities.

Among the most troubling instances, Al Jazeera recently generated AI deepfakes using Grok that falsely depict Texas Senator Ted Cruz snorting cocaine, Vice President Kamala Harris wielding a knife, and former President Donald Trump shaking hands with white nationalists. These images, while fabricated, appear disturbingly real and highlight the potential for misuse of AI tools to manipulate public perception.

Election officials in multiple states, including Michigan, Minnesota, and Pennsylvania, have already sounded the alarm about the risks of Grok’s output. They recently wrote to X’s CEO Elon Musk, citing the chatbot’s dissemination of false information about state ballot deadlines. In response, X has updated its platform to direct users to Vote.gov for accurate election-related details.

However, when it comes to combating deepfakes, X has yet to implement meaningful controls. Edward Tian, co-founder of GPTZero, a company specializing in detecting AI-generated content, criticized X’s approach. “Common sense safeguards in terms of AI-generated images, particularly of elected officials, would have even been in question for Twitter Trust and Safety teams pre-Elon,” Tian told Al Jazeera.

While other tech companies like OpenAI are developing measures to limit AI misuse, such as preventing the creation of images of specific public figures, X’s relaxed policies have intensified the problem. The political ramifications are evident, as both sides of the aisle use AI-generated imagery to sway voters.

Notably, the now-suspended campaign of Florida Governor Ron DeSantis used doctored images of Trump embracing former COVID-19 adviser Anthony Fauci in an attempt to undermine Trump’s base. Though this incident did not involve Grok, it underscores the broader trend of AI-driven political disinformation.

Even Trump himself has embraced this tactic. He recently shared a deepfake on his Truth Social platform that falsely suggested pop star Taylor Swift endorsed him, aiming to court her vast fanbase. A similarly misleading AI-generated image, posted by Musk, portrayed Harris wearing a communist insignia, part of an ongoing campaign to inaccurately frame her policies as extreme left.

As the use of AI in political messaging intensifies, watchdog groups like Public Citizen are calling for stronger regulations. “What our petition asks [the Federal Election Commission] to do is simply apply a longstanding rule… you basically can’t put out advertisements that lie directly about things your opponents have said or done,” said Lisa Gilbert, Public Citizen co-president. However, with the FEC delaying action until at least September 19, the 2024 election cycle is likely to see continued misuse of AI without stricter oversight.

As the ethics surrounding AI continue to provoke debate, X finds itself at the heart of a growing controversy that could have significant implications for the integrity of elections worldwide.
