- nripage
- 22 Aug 2024 03:26 AM
- Internet & Technology
Elon Musk’s AI chatbot Grok rolled out a new image-generation feature on Tuesday, allowing users to create and post AI-generated images from text prompts on X. The tool was quickly used to create and share fake but realistic images of political figures, including former President Donald Trump, Vice President Kamala Harris, and Musk himself. Some of these images depict the figures in troubling and false scenarios, such as participating in the 9/11 attacks.
Unlike other mainstream AI tools, Grok, developed by Musk’s xAI, has few restrictions. Tests revealed that Grok could easily produce photorealistic images of politicians and candidates that could mislead viewers if seen without context. The tool also produced harmless yet convincing images, such as Musk enjoying a steak in a park.
Users of X have posted images created with Grok that include inappropriate content, such as public figures in drug-related scenarios, violent cartoon characters, and sexualized imagery. One notable image showed Trump firing a rifle from a truck, an image that tests confirmed Grok could produce.
The emergence of Grok raises concerns about the spread of misleading or false information online, particularly ahead of the U.S. presidential election. Lawmakers, civil society groups, and tech leaders have grown increasingly worried that such tools could confuse and mislead voters.
Musk touted Grok as “the most fun AI in the world” in response to a user’s praise for its uncensored nature. In contrast, other major AI companies, like OpenAI, Meta, and Microsoft, have implemented measures to prevent their tools from being used for political misinformation. These companies use technology or labels to help users identify AI-generated content.
Rival social media platforms, including YouTube, TikTok, Instagram, and Facebook, have also introduced methods to label AI-generated content or detect it through technology, or they ask users to identify such content themselves.
X did not immediately respond to queries about whether it has specific policies governing Grok and potentially misleading political images. By Friday, xAI appeared to have imposed some restrictions on Grok, limiting its ability to create images of political figures or copyrighted cartoon characters in violent contexts or alongside hate speech symbols. However, users noted these restrictions appear only partially effective.
X’s policy prohibits the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and cause harm,” but its enforcement is unclear. Musk recently posted a video on X that used AI to falsely portray Harris saying things she did not actually say, with only a laughing emoji indicating the content’s fake nature.
The launch of Grok comes amid criticism of Musk for spreading false claims related to the presidential election on X, including questioning the security of voting machines. This follows a livestreamed conversation Musk hosted with Trump, during which Trump made numerous false claims without challenge from Musk.
Other AI image tools have faced criticism for various issues. Google paused its Gemini AI chatbot’s ability to generate images of people due to complaints about inaccuracies related to race. Meta’s AI image tool was criticized for failing to depict racially diverse couples or friends accurately. TikTok had to withdraw an AI video tool after it was discovered that it could create realistic yet misleading videos, including vaccine misinformation, without proper labels.
Grok does have some limitations; for instance, it refused to generate a nude image and stated that it aims to avoid producing content that promotes harmful stereotypes or misinformation. Despite this, the tool still produced an image of a political figure with a hate speech symbol, suggesting that its restrictions are not consistently applied.