Currently, ChatGPT does not repeat these horrific false claims about Holmen in its outputs. Noyb said a recent update apparently fixed the issue, "because ChatGPT now also searches the Internet for information about people when it is asked who they are." But because OpenAI had previously argued that it cannot correct information—only block it—the fabricated story of a child killer is likely still included in ChatGPT's internal data. And unless Holmen can correct it, that remains a violation of the GDPR, noyb claims.
"While the damage may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data," noyb says.
OpenAI may not be able to easily delete the data
Holmen is not the only ChatGPT user who has complained that the chatbot's hallucinations can ruin lives. Months after ChatGPT's launch at the end of 2022, an Australian mayor threatened to sue for defamation after the chatbot falsely claimed he had gone to prison. Around the same time, ChatGPT linked a real law professor to a fabricated sexual harassment scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs describing fake embezzlement charges.
Noyb suggested that OpenAI in some cases filters the model to avoid producing harmful outputs but does not delete the false information from its training data. Noyb data protection lawyer Kleanthi Sardeli alleged that filtering outputs and posting disclaimers is not enough to prevent reputational harm.
"Adding a disclaimer that you do not comply with the law does not make the law go away," said Sardeli. "AI companies cannot just 'hide' false information from users while still processing that false information internally. AI companies should stop acting as if the GDPR does not apply to them when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage."