On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named “Sam” told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model had invented the policy, sparking a wave of complaints and cancellation threats on Hacker News and Reddit.
This is the latest example of an AI confabulation (also called a “hallucination”) causing potential business damage. Confabulations are a type of “creative gap-filling” response in which AI models invent plausible-sounding but incorrect information. Instead of acknowledging uncertainty, AI models often prioritize producing a confident answer, even when that means fabricating information from scratch.
For companies deploying these systems in customer-facing roles without human oversight, the consequences can be swift and costly: frustrated users, damaged trust, and, in Cursor’s case, possibly canceled subscriptions.
How it unfolded
The incident began when a Reddit user named BrokenToasterOven noticed that, while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were unexpectedly terminated.
“Logging into Cursor on one machine immediately invalidates the session on any other machine,” BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. “This is a significant UX regression.”
Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: “Cursor is designed to work with one device per subscription as a core security feature,” read the email response. The answer sounded definitive and official, and the user did not suspect that Sam was not human.
After the initial Reddit post, users took the response as official confirmation of an actual policy change, one that broke habits essential to many programmers’ daily routines. “Multi-device workflows are table stakes for SWEs,” one user wrote.
Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as their reason. “I literally just cancelled my sub,” the original Reddit poster wrote, adding that their workplace was now “purging it completely.” Others joined in: “Yep, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.
“Hey! We have no such policy,” wrote a Cursor representative in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
AI confabulations as a business risk
The Cursor debacle recalls a similar incident from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother’s death, and the airline’s AI agent incorrectly told him he could book a regular-fare flight and apply for bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that “the chatbot is a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.
Rather than disputing responsibility as Air Canada had, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the nonexistent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security, which unintentionally created session invalidation problems for some users.
“Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”
Still, the incident raised lingering questions about disclosure to users, since many people who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.
While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company that sells AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users makes for a particularly awkward self-inflicted wound.
“There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”
This story originally appeared on Ars Technica.