Annoyed ChatGPT users complain about bot’s relentlessly positive tone

Acknowledging the gap, OpenAI writes, “Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines.”

In a February 12, 2025, interview with The Verge, members of OpenAI’s model behavior team said that eliminating AI sycophancy is a priority: future versions of ChatGPT should “give honest feedback rather than empty praise” and act “more like a thoughtful colleague than a people pleaser.”

The problem of trust

These sycophantic tendencies aren’t just annoying, according to a 2024 research paper titled “Flattering to Deceive: The Impact of Sycophantic Behavior on User Trust in Large Language Models” by María Victoria Carro at the University of Buenos Aires.

Carro’s paper suggests that overt sycophancy significantly reduces user trust. In experiments where participants used either a standard model or one designed to be more sycophantic, “participants exposed to sycophantic behavior reported and exhibited lower levels of trust.”

Sycophantic models can also potentially harm users by creating a silo or echo chamber for ideas. In a 2024 paper on sycophancy, AI researcher Lars Malmqvist wrote that “by excessively agreeing with user inputs, LLMs can reinforce and amplify existing biases and stereotypes, potentially exacerbating social inequalities.”

Sycophancy can also exact other costs, such as wasting users’ time or burning through usage limits with unnecessary preamble. And the costs may come quite literally in dollars. OpenAI CEO Sam Altman recently made headlines when he responded to an X user who wrote, “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you.’” Altman replied, “Tens of millions of dollars well spent – you never know.”

Possible workarounds

For users frustrated by the excessive enthusiasm, there are a few workarounds, though none of them are perfect, since the behavior is baked into the GPT-4o model. For example, you can use a custom GPT with specific instructions that tell it to avoid flattery, or you can begin a conversation by explicitly requesting a more neutral tone, such as “Keep your responses brief, stay neutral, and don’t flatter me.”
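
The same idea carries over to the OpenAI API, where the neutral-tone request can go in a system message. Below is a minimal Python sketch assuming the current openai SDK (v1.x); the instruction wording and the example prompt are illustrative only, and in practice this dampens rather than eliminates the flattery, since the tendency is trained into the model itself.

    # Minimal sketch: asking GPT-4o for a neutral, non-flattering tone by
    # placing the request in a system message (openai Python SDK, v1.x).
    # The instruction wording is illustrative; it reduces but does not
    # remove the sycophantic style, which is trained into the model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Keep your responses brief, stay neutral, and don't flatter me. "
                    "Skip compliments and give direct, honest feedback."
                ),
            },
            {"role": "user", "content": "Review this plan and point out its weaknesses."},
        ],
    )

    print(response.choices[0].message.content)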
