
Improvements in ‘reasoning’ AI models may slow down soon, analysis finds


An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to keep extracting massive performance gains from reasoning AI models for much longer. According to the report's findings, progress from reasoning models could begin to slow within as little as a year.

Reasoning models, such as OpenAI's o3, have driven substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. These models can apply more computing to problems, which can improve their performance, with the downside that they take longer than conventional models to complete tasks.

Reasoning models are developed by first training a conventional model on a large amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.

So far, according to Epoch, frontier AI labs like OpenAI haven't applied an enormous amount of computing to the reinforcement learning stage of reasoning model training.

That's changing. OpenAI has said it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning, applying far more computing power to it than to initial model training.

But there's still an upper bound on how much computing can be applied to reinforcement learning, per Epoch.

According to an Epoch AI analysis, reasoning model training scaling may slow down. Image Credits: Epoch AI

Josh You, an Epoch analyst and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3-5 months. Progress from reasoning training will "probably converge with the overall frontier by 2026," he writes.
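To make the two growth rates concrete, here is a minimal sketch comparing them over time. The 4x-per-year and 10x-per-3-to-5-months figures come from the article; the two-year horizon and the 4-month midpoint are illustrative assumptions, not Epoch AI's own projection.

```python
def compute_multiplier(months: float, growth_per_period: float, period_months: float) -> float:
    """Multiplier on training compute after `months`, assuming compute grows
    by a factor of `growth_per_period` every `period_months`."""
    return growth_per_period ** (months / period_months)

# Standard pre-training compute: roughly 4x per year (per the analysis).
pretraining = compute_multiplier(24, 4, 12)   # 16x after two years

# Reinforcement learning compute: roughly 10x every 4 months
# (midpoint of the 3-5 month range cited).
rl = compute_multiplier(24, 10, 4)            # 1,000,000x after two years
```

The gap between 16x and 1,000,000x over the same period illustrates why such rapid reinforcement learning scaling cannot continue indefinitely: it quickly approaches the scale of frontier training runs as a whole, which is the convergence the analysis places around 2026.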


Epoch's analysis rests on a number of assumptions, and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove challenging for reasons besides computing, including high overhead costs for research.

"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," You writes. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely."

Any indication that reasoning models may reach some kind of limit in the near future is likely to worry the AI industry, which has invested enormous resources into developing this type of model. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, such as a tendency to hallucinate more than certain conventional models.

