Elon Musk’s Grok AI Faces Global Backlash Over Sexualised Deepfakes

Elon Musk’s artificial intelligence chatbot Grok, developed by his company xAI, is facing mounting global criticism after allegations emerged that it can generate sexualised deepfake images of women and children without consent. Governments and regulators in the United Kingdom, European Union, India, and Malaysia have expressed deep concern, calling the issue a serious threat to privacy, child safety, and ethical AI development.

Grok, which is integrated with the Musk-owned social media platform X (formerly Twitter), was positioned as a “truth-seeking” alternative to existing AI chatbots. However, recent reports suggest that the AI’s image-generation capabilities may be misused to create explicit and non-consensual content, sparking alarm among policymakers and digital rights groups worldwide.

In the United Kingdom, officials have highlighted that the creation and distribution of sexualised deepfake images—especially those involving minors—could violate strict child protection and online safety laws. UK regulators are reportedly examining whether Grok complies with existing regulations, including the Online Safety Act, which places responsibility on platforms to prevent harmful content.

The European Union has also reacted strongly, with experts pointing to potential violations of the EU AI Act and the GDPR. The EU has been at the forefront of AI governance, and regulators are increasingly scrutinising AI systems that pose risks to fundamental rights, particularly when it comes to consent and the misuse of personal data.

In India, where concerns around deepfakes have grown rapidly, authorities have warned that AI-generated sexualised content could lead to severe legal consequences. Indian IT laws and child protection regulations strictly prohibit the creation and sharing of explicit material involving minors. Officials have previously stated that technology companies must ensure robust safeguards to prevent AI misuse.

Similarly, Malaysia has raised red flags, with lawmakers urging stricter oversight of AI tools that can be exploited for harmful purposes. Malaysian authorities emphasised that deepfake technology, if left unchecked, can damage reputations, enable harassment, and undermine public trust in digital platforms.

The controversy has reignited global debates around AI ethics, content moderation, and corporate responsibility. Critics argue that AI developers must implement stronger safety filters and accountability mechanisms, especially as generative AI becomes more accessible to the public. Advocacy groups have also called for clear legal frameworks to hold companies accountable when their tools enable harm.

While Elon Musk and xAI have not yet issued a detailed public response addressing these specific allegations, the backlash underscores the growing pressure on tech leaders to balance innovation with ethical responsibility. As governments worldwide move toward tighter AI regulations, the Grok controversy may become a defining case in how generative AI tools are governed in the future.

The incident serves as a stark reminder that without effective safeguards, powerful AI technologies can be misused, prompting urgent calls for global cooperation, stricter laws, and responsible AI development.

