xAI has blamed an “unauthorized modification” for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to “white genocide in South Africa” when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated subjects. The strange replies came from the X account for Grok, which responds to users with AI-generated posts whenever a person tags “@grok.”
According to a post Thursday from xAI’s official X account, a change was made Wednesday morning to the Grok bot’s system prompt, the high-level instructions that guide the bot’s behavior, that directed Grok to provide a “specific response” on a “political topic.” xAI says the change “violated [its] internal policies and core values,” and that the company has “conducted a thorough investigation.”
It’s the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI’s billionaire founder and the owner of X. xAI said at the time that a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that the company reverted the change as soon as users began to point it out.
xAI said Thursday that it will make several changes to prevent similar incidents from occurring in the future.
Starting today, xAI will publish Grok’s system prompts on GitHub along with a changelog. The company says it will also “put in place additional checks and measures” to ensure that xAI employees can’t modify the system prompt without review, and establish a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems.”
Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably cruder than AI like Google’s Gemini and ChatGPT, cursing without much restraint.
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.