On Thursday, Windsurf, a startup that develops popular AI tools for software engineers, announced the launch of its first family of software engineering models, or SWE-1 for short. The startup says it trained the new family of AI models (SWE-1, SWE-1-lite, and SWE-1-mini) to be optimized for the entire software engineering process, not just coding.
Windsurf's launch of in-house AI models may come as a shock to some, given that OpenAI has reportedly closed a $3 billion deal to acquire Windsurf. However, the model launch suggests Windsurf is trying to expand beyond developing applications to also building the models that power them.
According to Windsurf, SWE-1 performs competitively on internal programming benchmarks with large, capable frontier AI models such as Claude 3.5 Sonnet, GPT-4.1, and Gemini 2.5 Pro. However, SWE-1 appears to fall short of frontier models like Claude 3.7 Sonnet on software engineering tasks.
Windsurf says its SWE-1-lite and SWE-1-mini models will be available to all users on its platform, free or paid. Meanwhile, SWE-1 will be available only to paid users. Windsurf did not immediately announce pricing for its SWE-1 models, but it claims SWE-1 is cheaper to serve than Claude 3.5 Sonnet.
Windsurf is best known for tools that allow software engineers to write and edit code through conversations with an AI chatbot, a practice known as "vibe coding." Other popular vibe-coding startups include Cursor, the largest in the space, as well as Lovable. Most of these startups, including Windsurf, have traditionally relied on AI models from OpenAI, Anthropic, and Google to power their applications.
In a video announcing the SWE models, Windsurf's head of research, Nicholas Moy, laid out the startup's latest efforts to differentiate its approach. "Today's frontier models are optimized for coding, and they've made massive strides over the last couple of years," says Moy. "But they're not enough for us … Coding is not software engineering."
Windsurf notes in a blog post that while other models are good at writing code, they struggle to work across multiple surfaces, as programmers often do, such as terminals, IDEs, and the internet. The startup says SWE-1 was trained using a new data model and a "training recipe that encapsulates incomplete states, long-running tasks, and multiple surfaces."
The startup describes SWE-1 as its "initial proof of concept," suggesting it may release more AI models in the future.