The judge is not ready to rule on whether AI output is speech
Google and Character Technologies also pushed a defense of the case on First Amendment grounds, arguing that C.AI users have a right to receive chatbot outputs as alleged "speech."
Conway agreed that Character Technologies can assert the First Amendment rights of its users in this context, but the court is "not prepared" to hold that the LLM's output is speech at this stage.
C.AI tried to argue that chatbot outputs should be protected like the dialogue of video game characters, but Conway said the analogy did not hold. The Garcia team pushed back, noting that video game characters' dialogue is written by humans, while chatbot output is produced by an LLM predicting which words come next.
"The defendants fail to explain why words strung together by an LLM are speech," Conway said.
As the case develops, Character Technologies will have the opportunity to refine its First Amendment claims, perhaps by better explaining how chatbot outputs are similar to speech from other non-human speakers.
A C.AI spokesperson provided a statement to Ars suggesting that the ruling has caused some confusion.
"It has long been true that the law takes time to adapt to new technology, and AI is no different," the C.AI spokesperson said. "In its order today, the court made clear that it was not ready to rule on all of the claims at this stage, and we look forward to continuing to defend the case."
C.AI also noted that it now provides a "separate version" of its LLM for under-18 users, as well as "parental insights, filtered characters, time-spent notifications, updated prominent disclaimers, and more."
"Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform," the C.AI spokesperson said.
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.