Cohere has released a new embedding model that it says offers superior search and retrieval functions for AI agents.
Embedding models turn complex information, such as text or images, into vectors, or lists of numbers, that AI models can process. In other words, they encode meaning for large language models (LLMs).
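Once text is encoded as vectors, "search" reduces to comparing those vectors, typically with cosine similarity. The sketch below uses tiny hand-written vectors as stand-ins for the output of an embedding model such as Embed 4; the vectors and document names are illustrative, not real model output.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: in practice a model produces these from the raw text.
doc_vectors = {
    "invoice": [0.9, 0.1, 0.0],
    "poem":    [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of a query like "find the Q3 invoice"

# The best match is the document whose vector points in the most
# similar direction to the query's vector.
best = max(doc_vectors, key=lambda k: cosine_similarity(query, doc_vectors[k]))
print(best)  # -> invoice
```

Because similar meanings land near each other in vector space, this comparison finds relevant documents even when they share no keywords with the query.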
Embed 4 is Cohere’s latest embedding model, which the company claims is “the optimal search engine for AI agents.” The model powers intelligent search platform Compass and integrates with the company’s AI enterprise platform North.
Having introduced AI agents to its enterprise customers with North, Cohere is now making them more powerful with enhanced data retrieval. This comes as Cohere competes with other giants in the enterprise AI space, such as OpenAI, Anthropic, and Microsoft—the latter of which sees agents as the next frontier in commercial AI.
Embed 4 allows enterprises to build custom AI apps and agents that can search across more than 100 languages and various document types. With its multimodal functions, it can sift through and pull data from PDFs, images, charts, and code.
RELATED: Did Cohere give Canada its DeepSeek moment?
Elliott Choi, staff product manager at Cohere, told BetaKit that agents will continue to be an important part of enterprise AI adoption and “free up time” for employees.
Seventy-two percent of early AI adopters expect autonomous agents to take over some tasks from their employees by the end of 2025, according to a global study of over 3,300 large companies by Snowflake.
However, a persistent challenge for these agents is generating accurate responses from complex data.
“It’s important that we remain focused on the challenges to AI implementation at scale, like the struggle of retrieving information from complex, unstructured and mixed-modality data sources—which is something Embed 4 helps solve,” Choi said.
The Snowflake report noted that Canadian companies are “earlier in their gen AI journeys” and are more likely to only be pursuing one use case. They were also less likely to say that investments in AI will represent more than 25 percent of tech budgets. The data contrasts with recent pronouncements from the CEO of Canada’s largest tech company, Shopify, affirming that effective use of AI is now a baseline expectation of all employees.
Embed 4 and its predecessor, Embed 3, are built to power retrieval-augmented generation (RAG) systems. RAG is a technique that directs an LLM to specific external data sources at query time rather than retraining the model on them. In theory, it makes retrieving specialized knowledge easier and more reliable.
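In a RAG pipeline, the embedding model handles the retrieval step: it scores each document against the query, and the best matches are prepended to the LLM's prompt as grounding context. This is a generic sketch of that step, with a toy character-frequency `embed` function standing in for a real embedding model; none of the names here are Cohere APIs.

```python
import math

def embed(text):
    # Toy "embedding": a normalized character-frequency vector.
    # A real model like Embed 4 would return a learned dense vector.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(c) for c in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def retrieve(query, documents, k=2):
    """Return the k documents whose embeddings score highest against the query."""
    q = embed(query)
    scored = sorted(
        documents,
        key=lambda d: sum(x * y for x, y in zip(q, embed(d))),
        reverse=True,
    )
    return scored[:k]

docs = [
    "Quarterly revenue grew 12 percent year over year.",
    "The patient was prescribed 50mg daily.",
    "Net income and operating margin both improved.",
]
context = retrieve("financial performance this quarter", docs)

# The retrieved passages become grounding context for the LLM's answer.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The retrieval quality of the embedding model directly bounds the quality of the final answer, which is why an agent's usefulness depends so heavily on this step.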
RELATED: Microsoft thinks AI agents will eat the world
The new model also supports a longer context length, allowing it to search documents up to roughly 200 pages, or 128K tokens, which the company says is helpful for dense legal documents or financial reports. Like Cohere's other models, it can be deployed in the cloud or on-premises to keep data secure. It is equipped with domain-specific expertise for finance, healthcare, and manufacturing.
According to a blog post from Cohere, clients such as Hunt Club have seen a 47-percent relative improvement in performance with Embed 4 over the previous model. Hunt Club uses AI to search professional candidate profiles and match talent with skills, which requires sifting through "messy" data.
Today, Cohere also announced that it now has access to some of Nvidia’s most advanced computing infrastructure through its cloud provider, Livingston, NJ-based CoreWeave. Nvidia’s GB200 NVL72 platform, which leverages more than 100 computer chips in a data centre rack, is designed to deliver significantly faster LLM performance for applications such as agents and reasoning.
Cohere has partnered with CoreWeave to build an AI data centre in Canada with $240 million in backing from the federal government as part of its Canadian Sovereign AI Compute Strategy.
Cohere recently released Command A, its most powerful LLM yet, which the company claimed could outperform leading models from OpenAI and DeepSeek with less computing power.
With Embed 4, Cohere seems to be doubling down on the "max performance, minimal compute" mission. It claims that Embed 4's retrieval accuracy, meaning how reliably it surfaces the right documents for a query, outperforms that of competing models, such as OpenAI's text-embedding-3-large.
Cohere also says that Embed 4 is more efficient in both data storage and energy footprint. By outputting "compressed embeddings," which represent text, images, or other data with numbers that take up less space, the model reduces storage costs.
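One common way to compress embeddings is quantization: storing each dimension as a single signed byte instead of a multi-byte float. The sketch below illustrates that generic technique; it is not Cohere's actual compression scheme, and the values are made up.

```python
def quantize(vector):
    """Map float values in [-1, 1] to signed 8-bit integers, packed as bytes."""
    return bytes((round(x * 127) & 0xFF) for x in vector)

def dequantize(data):
    """Approximately recover the original floats from the packed bytes."""
    signed = [b - 256 if b > 127 else b for b in data]
    return [s / 127 for s in signed]

vec = [0.12, -0.87, 0.45, 0.99]
packed = quantize(vec)

# Each dimension now takes 1 byte instead of 4 (for float32):
# a 4x storage reduction at the cost of a small precision loss.
print(len(packed))             # -> 4 bytes, versus 16 for float32
restored = dequantize(packed)  # close to, but not exactly, the originals
```

The trade-off is a small loss of precision per dimension, which in practice costs little retrieval accuracy while cutting vector-database storage substantially.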
As for inference, or response generation, Embed 4 requires “far less compute” than other models on the market, Choi said.
Feature image courtesy Cohere.