Google added a new, experimental "embedding" model for text, Gemini Embedding, to its Gemini developer API on Friday.
Embedding models translate text inputs like words and phrases into numerical representations, known as embeddings, that capture the semantic meaning of the text. Embeddings are used in a range of applications, such as document retrieval and classification, in part because they can reduce costs while improving latency.
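To make the retrieval use case concrete, here is a minimal sketch of how embeddings can rank documents against a query using Google's google-generativeai Python SDK. The model ID shown (text-embedding-004, the predecessor mentioned later in this article), the placeholder API key, and the helper functions are illustrative assumptions, not details from Google's announcement; check the Gemini API docs for the current model names.

```python
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; supply a real key

# Assumed model ID for illustration; the experimental Gemini Embedding
# model has its own ID in the Gemini developer API.
MODEL = "models/text-embedding-004"

def embed(text: str) -> np.ndarray:
    # embed_content returns a dict containing an "embedding" vector
    result = genai.embed_content(model=MODEL, content=text)
    return np.array(result["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["Quarterly earnings rose 12%.", "The recipe calls for two eggs."]
query = "How did the company's revenue change?"

doc_vecs = [embed(d) for d in docs]
query_vec = embed(query)

# Rank documents by semantic similarity to the query (toy retrieval example)
ranked = sorted(
    zip(docs, doc_vecs),
    key=lambda pair: cosine_similarity(query_vec, pair[1]),
    reverse=True,
)
print(ranked[0][0])
```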
Companies including Amazon, Cohere, and OpenAI offer embedding models through their respective APIs. Google has offered embedding models before, but Gemini Embedding is its first trained on the Gemini family of AI models.
"Trained on the Gemini model itself, this embedding model has inherited Gemini's understanding of language and nuanced context, making it applicable for a wide range of uses," Google said in a blog post. "We've trained our model to be remarkably general, delivering exceptional performance across diverse domains, including finance, science, legal, search, and more."
Google claims that Gemini Embedding surpasses the performance of its previous state-of-the-art embedding model, text-embedding-004, and achieves competitive performance on popular embedding benchmarks. Compared to text-embedding-004, Gemini Embedding can also accept larger chunks of text and code at once, and it supports twice as many languages (over 100).
Google notes that Gemini Embedding is in an "experimental phase" with limited capacity and is subject to change. "[W]e're working towards a stable, generally available release in the months to come," the company wrote in its blog post.