On Wednesday, users interacting with Grok, the AI chatbot integrated into Elon Musk’s social media platform X, received unexpected and controversial responses. When asked simple questions about baseball or prompted to speak like a pirate, Grok instead generated replies centered on the theory of “white genocide” in South Africa. These replies, posted publicly on X, puzzled many users who expected neutral answers from Musk’s ChatGPT competitor.
The unusual responses highlight ongoing concerns about AI chatbots’ biases and their tendency to “hallucinate” inaccurate information. The topic of white South Africans has gained renewed attention after several were granted special refugee status in the United States amid claims of discrimination and alleged genocide, claims Musk has endorsed. Musk recently transferred ownership of X to his AI company xAI to integrate AI more deeply with his social platform. xAI has not commented on the chatbot’s responses.
Grok’s replies sometimes combined pirate-themed language with discussion of “white genocide,” further confusing users. Some inaccurate responses were later deleted. In other instances, Grok pivoted to white South African issues even when users asked unrelated questions about baseball earnings or videos. The chatbot acknowledged difficulty shifting away from “incorrect topics,” citing challenges with “anchoring” on initial inputs.
Experts suggest Grok’s unusual behavior may stem from deliberate programming decisions or “data poisoning,” in which external actors feed the model biased information. Musk, who was born in South Africa, has long highlighted alleged discrimination against white farmers under South Africa’s land reform policies, a view echoed by some in the U.S. government.
Source: Swifteradio.com