
Microsoft’s most capable new Phi-4 AI model rivals the performance of far larger systems


Microsoft on Wednesday launched several new “open” AI models, the most capable of which is competitive with OpenAI’s o3-mini on at least one benchmark.

As it says on the tin, all of the new permissively licensed models — Phi-4-mini-reasoning, Phi-4-reasoning, and Phi-4-reasoning-plus — are “reasoning” models, meaning they can spend more time fact-checking solutions to complex problems. They expand Microsoft’s Phi “small model” family, which the company launched a year ago to offer a foundation for AI developers building apps at the edge.

Phi-4-mini-reasoning was trained on roughly 1 million synthetic math problems generated by Chinese AI startup DeepSeek’s R1 reasoning model. At around 3.8 billion parameters, Phi-4-mini-reasoning is designed for educational applications, Microsoft says, like “embedded tutoring” on lightweight devices.

Parameters roughly correspond to a model’s problem-solving ability, and models with more parameters generally perform better than those with fewer.
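To put the size gap in perspective, the parameter counts cited in this article can be compared directly. A minimal sketch (the counts are approximate figures taken from the article, not official specifications):

```python
# Approximate parameter counts mentioned in the article.
param_counts = {
    "Phi-4-mini-reasoning": 3.8e9,   # ~3.8 billion
    "Phi-4-reasoning": 14e9,         # ~14 billion
    "DeepSeek R1": 671e9,            # ~671 billion
}

# How many times larger DeepSeek R1 is than Phi-4-reasoning.
ratio = param_counts["DeepSeek R1"] / param_counts["Phi-4-reasoning"]
print(f"DeepSeek R1 is roughly {ratio:.0f}x larger than Phi-4-reasoning")
```

Despite that roughly 48x size difference, Microsoft claims the smaller model approaches R1’s performance on some benchmarks, which is the point of the comparison.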

Phi-4-reasoning, a 14-billion-parameter model, was trained on “high-quality” web data as well as “curated demonstrations” from OpenAI’s aforementioned o3-mini. It’s best suited for math, science, and coding applications, according to Microsoft.

As for Phi-4-reasoning-plus, it’s Microsoft’s previously released Phi-4 model adapted into a reasoning model to achieve better accuracy on particular tasks. Microsoft claims that Phi-4-reasoning-plus approaches the performance of DeepSeek’s R1, a model with significantly more parameters (671 billion). The company’s internal benchmarking also has Phi-4-reasoning-plus matching o3-mini on OmniMath, a test of math skills.

Phi-4-mini-reasoning, Phi-4-reasoning, and Phi-4-reasoning-plus are available on the AI dev platform Hugging Face, accompanied by detailed technical reports.


“Using distillation, reinforcement learning, and high-quality data, these [new] models balance size and performance,” Microsoft wrote in a blog post. “They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.”

