Anthropic CEO Dario Amodei believes today’s AI models hallucinate, or make things up and present them as true, at a lower rate than humans do, he said during a press briefing at Code with Claude, Anthropic’s first developer event, held in San Francisco on Thursday.
Amodei said this in the midst of a larger point he was making: that hallucinations are not a limit on Anthropic’s path to AGI — AI systems with human-level intelligence or better.
“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said.
Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, Anthropic’s CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”
“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”
Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to generate citations in a court filing; the AI chatbot hallucinated, getting names and titles wrong.
Amodei’s claim is difficult to verify, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.
However, there is also evidence suggesting that hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-generation reasoning models, and the company doesn’t really understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.
In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went so far as to suggest that Anthropic shouldn’t have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.
Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. By many people’s definition, however, an AI that hallucinates may fall short of AGI.