Mistral AI just raised the bar for enterprise AI search by using Merge
Mistral AI, renowned for its frontier large language models (LLMs), recently released several major enhancements to its cutting-edge enterprise AI search product for businesses, including the ability to plug into its customers’ work environments via Merge’s file storage integrations!
The announcement marks a huge milestone for the enterprise AI search space. Customers’ employees can now leverage Mistral AI's leading LLMs, combined with internal information (accessed through file storage integrations), to get quick and accurate answers to their questions in plain text.
Mistral AI chose Merge because of Merge’s speed to market, enterprise-grade security, robust and reliable syncs, and integration maintenance support and observability features.
Here’s more on why Mistral AI decided to partner with Merge and how Merge’s file storage integrations allow them to provide a differentiated enterprise AI search experience.
Why Mistral AI chose Merge
As Mistral AI evaluated different options for implementing and maintaining file storage integrations, the team landed on Merge for a few reasons.
Enterprise-grade secure integrations
According to their head of engineering:
“Merge’s ability to incorporate access control lists (ACLs) across their file storage integrations ensures that sensitive data stays secure and that our customers keep compliant with key data protection regulations, like GDPR.”
Merge also met the rest of Mistral AI’s integration requirements, which included:
- Using Merge’s Direct File Download endpoint to directly access files without storing them on Merge’s servers
- Complying with critical security frameworks, such as GDPR, HIPAA, ISO 27001, and SOC 2 Type II
- Encrypting data both in transit and at rest
- Offering a single-tenant environment to allow users to host data in the AWS region of their choice
- Providing enterprise-level customer success and support
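To make the first requirement concrete, here is a minimal sketch of calling Merge's Direct File Download endpoint, which lets the product fetch file contents without those contents ever being stored on Merge's servers. The base URL, endpoint path, and header names are assumptions drawn from Merge's public documentation, and the IDs and tokens are placeholders; verify everything against the current API reference before relying on it.

```python
# Sketch: building a request for Merge's Direct File Download endpoint.
# The URL, path, and header names are assumptions based on Merge's public
# docs -- check the current API reference before use.
BASE_URL = "https://api.merge.dev/api/filestorage/v1"

def build_download_request(file_id: str, api_key: str, account_token: str) -> dict:
    """Construct the pieces of a direct-download request.

    Returning them as a dict keeps the sketch testable without making
    a live network call.
    """
    return {
        "method": "GET",
        "url": f"{BASE_URL}/files/{file_id}/download",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # Merge API key
            "X-Account-Token": account_token,      # identifies the linked customer account
        },
    }

# Hypothetical IDs for illustration:
req = build_download_request("file_123", "sk_test_key", "acct_token")
# A real call would stream the response, e.g. requests.request(**req, stream=True)
```

Because the response is streamed directly to the caller, the file bytes pass through to the product without being persisted by the integration layer.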
Unrivaled time to market
Merge allowed Mistral AI to launch their integrations several months faster than they could have by building them in-house.
Mistral AI’s head of engineering shares why this was critical:
"We had an aggressive timeline for launching our enterprise AI search product. Merge moved quickly in supporting the file storage integrations we needed, which allowed us to meet our target launch date."
Countless engineering hours saved
Mistral AI wants its engineers focused on its LLMs and on building differentiated features on top of them, and integration-related work could easily get in the way of that.
Their head of engineering shares more context on this predicament and how Merge solves it:
“Building integrations in-house would take our engineers hundreds of hours initially and several more each week for maintenance."
He continues:
"With Merge, we spent just a few hours integrating with their Unified API, and we can now easily add file storage integrations without worrying about ongoing maintenance—saving our engineers significant time.”
How Merge’s file storage integrations support Mistral AI’s enterprise search product
Here’s how the file storage integrations work:
1. An admin for the file storage solution logs into the enterprise AI search product and authenticates a file storage connection via Merge Link—a UI component that’s embedded in Mistral AI’s app.

2. Once the customer’s admin successfully completes the guided authentication steps, Merge traverses the file storage structure and normalizes it according to Merge’s Common Models. This allows Mistral AI to accurately embed the data before adding it to a vector database that Mistral AI operates.
Throughout this process, Merge doesn’t download and store file contents. Merge’s File Storage Common Model serves as a metadata representation of the file object, storing information about the file type, size, location, and which users can access it. In addition, Mistral AI uses Merge’s ACLs to ensure users only receive outputs from the files they have permission to access.
The collected data is refreshed via Merge as frequently as every 5 minutes to keep the file contents in the vector database accurate and complete.
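The ACL check described above can be sketched as a simple permission filter over file metadata. Everything here, the field names, the dataclass, and the in-memory records, is a hypothetical illustration rather than Merge's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    """Hypothetical metadata record, loosely modeled on the idea of a
    file Common Model: the file contents are never stored, only
    information about the file and who can access it."""
    file_id: str
    name: str
    permitted_user_ids: set[str] = field(default_factory=set)

def filter_by_acl(results: list[FileMetadata], user_id: str) -> list[FileMetadata]:
    """Keep only the files the querying user may see, so the chatbot
    never surfaces content from documents outside their permissions."""
    return [f for f in results if user_id in f.permitted_user_ids]

# Toy example with hypothetical users and files:
files = [
    FileMetadata("f1", "roadmap.pdf", {"alice", "bob"}),
    FileMetadata("f2", "salaries.xlsx", {"carol"}),
]
visible = filter_by_acl(files, "alice")
# Only roadmap.pdf remains visible to alice.
```

Applying the filter before retrieval results reach the LLM is what keeps a user's answers scoped to the documents they are entitled to read.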
3. The customer’s employees can input questions in the enterprise AI search product, which prompts the AI chatbot to retrieve the relevant context in the vector database and use it to generate an output.
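Step 3 is a standard retrieval-augmented generation loop: embed the question, find the closest stored chunks, and hand them to the LLM as context. A toy sketch, with a trivial bag-of-words "embedding" standing in for a real embedding model and a plain list standing in for the vector database:

```python
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def similarity(a: Counter, b: Counter) -> int:
    """Overlap score between two bag-of-words vectors."""
    return sum((a & b).values())

def retrieve(question: str, corpus: list[str], top_k: int = 1) -> list[str]:
    """Rank stored chunks by similarity to the question; return the best."""
    q = embed(question)
    ranked = sorted(corpus, key=lambda doc: similarity(q, embed(doc)), reverse=True)
    return ranked[:top_k]

# In production, the retrieved chunks would be passed to the LLM as
# context for answer generation; this sketch covers only retrieval.
corpus = [
    "The quarterly revenue report is stored in the finance drive.",
    "Office dogs are allowed on Fridays.",
]
context = retrieve("Where is the quarterly revenue report?", corpus)
```

A production system would use dense embeddings and approximate nearest-neighbor search, but the shape of the loop, embed, rank, retrieve, generate, is the same.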
This is only the beginning
Merge’s integrations will strengthen Mistral AI’s position as the central hub for company information.
The team at Merge is looking forward to helping Mistral AI—and other cutting-edge companies—power a leading enterprise AI search experience.
{{this-blog-only-cta}}