A leading LLM provider just raised the bar for enterprise AI search by using Merge
One of the most valuable AI startups, renowned for their frontier large language models (LLMs), recently released several major enhancements to their enterprise AI search product, including the ability to plug into customers’ work environments via Merge’s file storage integrations!
The announcement marks a major milestone for the enterprise AI search space. Customers’ employees can now combine the company’s leading LLMs with internal information (accessed through file storage integrations) to get quick, accurate answers to their questions in plain language.
The company chose Merge for its speed to market, enterprise-grade security, robust and reliable syncs, and its integration maintenance and observability features.
Here’s more on why their team decided to partner with Merge and how Merge’s file storage integrations allow them to provide a differentiated enterprise AI search experience.
Why a leading LLM provider chose Merge
As they evaluated different options for implementing and maintaining file storage integrations, they landed on Merge for a few reasons.
Enterprise-grade secure integrations
According to their head of product, “Merge’s ability to incorporate access control lists (ACLs) across their file storage integrations ensures that sensitive data stays secure and that our customers keep compliant with key data protection regulations, like GDPR.”
Merge also met the rest of the LLM provider’s integration requirements, which included:
- Using Merge’s Direct File Download endpoint to access files on demand, without storing them on Merge’s servers (see the sketch after this list)
- Complying with critical security frameworks, such as GDPR, HIPAA, ISO 27001, and SOC 2 Type II
- Encrypting data both in transit and at rest
- Offering a single-tenant environment to allow users to host data in the AWS region of their choice
- Providing enterprise-level customer success and support
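To make the direct-download requirement above concrete, here’s a minimal sketch of what pulling a file through that kind of endpoint could look like. The base URL, endpoint path, and header names are assumptions for illustration rather than confirmed details of this integration; Merge’s API reference is the source of truth.

```python
import requests

MERGE_API_BASE = "https://api.merge.dev/api/filestorage/v1"  # assumed base URL
API_KEY = "YOUR_MERGE_API_KEY"            # placeholder credential
ACCOUNT_TOKEN = "END_USER_ACCOUNT_TOKEN"  # identifies the linked customer account

def download_file(file_id: str, dest_path: str) -> None:
    """Stream a file's contents straight from the source platform.

    The download is proxied rather than persisted, so the raw
    contents never sit on Merge's servers.
    """
    response = requests.get(
        f"{MERGE_API_BASE}/files/{file_id}/download",  # assumed path
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "X-Account-Token": ACCOUNT_TOKEN,
        },
        stream=True,
    )
    response.raise_for_status()
    with open(dest_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```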
Unrivaled time to market
Merge allowed the LLM provider to launch their integrations several months faster than they could have by building them in-house.
Their head of product shares why this was critical:
“We had an aggressive timeline for launching our enterprise AI search product, and we couldn’t meet it by building the file storage integrations ourselves or by using another integration platform. Merge was the only way we could launch our product as fast as we did.”
Countless engineering hours saved
Their team wants their engineers focused on the company’s LLMs and on building differentiated features on top of them, and integration-related work could easily get in the way of that.
Their head of engineering shares more context on this predicament and how Merge solves it:
“Building integrations in-house would take our engineers hundreds of hours initially and several more each week for maintenance. With Merge, we spent just a few hours integrating with their Unified API, and we can now easily add file storage integrations without worrying about ongoing maintenance.”
How Merge’s integrations strengthen the LLM provider’s enterprise AI search product
Here’s how the file storage integrations work:
1. An admin for the customer’s file storage solution logs into the enterprise AI search product and authenticates a file storage connection via Merge Link, a UI component that’s embedded in the LLM provider’s app (sketched after these steps).

2. Once the customer admin completes the guided authentication steps, Merge traverses the customer’s file storage structure and normalizes it according to Merge’s Common Models. This allows the LLM provider to embed the data accurately before it’s added to a vector database that the provider operates.
Throughout this process, Merge doesn’t download and store file contents. Merge’s File Storage Common Model serves as a metadata representation of the file object, storing information about the file type, size, location, and which users can access it. In addition, the LLM provider uses Merge’s ACLs to ensure users only receive outputs from the files they have permission to access.
The collected data is refreshed via Merge as frequently as every 5 minutes to keep the file contents in the vector database accurate and complete.
3. The customer’s employees can then ask questions in the enterprise AI search product, which prompts the AI chatbot to retrieve the relevant context from the vector database and use it to generate an answer (see the permission-aware retrieval sketch after these steps).
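Step 1 hinges on Merge Link’s embedded authentication flow. As a rough illustration of the server-side half of that flow, the sketch below creates a link token that the embedded component would use to initialize a connection. The endpoint path and payload fields are assumptions based on Merge’s public documentation, and the surrounding app code is hypothetical.

```python
import requests

MERGE_API_KEY = "YOUR_MERGE_API_KEY"  # placeholder credential

def create_link_token(org_id: str, org_name: str, admin_email: str) -> str:
    """Request a short-lived token that initializes the embedded
    Merge Link component for a customer's file storage connection.

    Endpoint and field names are assumptions for illustration.
    """
    response = requests.post(
        "https://api.merge.dev/api/integrations/create-link-token",
        headers={"Authorization": f"Bearer {MERGE_API_KEY}"},
        json={
            "end_user_origin_id": org_id,    # your internal ID for the customer org
            "end_user_organization_name": org_name,
            "end_user_email_address": admin_email,
            "categories": ["filestorage"],   # scope Link to file storage integrations
        },
    )
    response.raise_for_status()
    return response.json()["link_token"]
```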
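On the retrieval side (steps 2 and 3), the key pattern is that each embedded chunk carries the ACL snapshot that the file’s Merge metadata provided at sync time, and queries are filtered by the requesting user before ranking. Everything below is a hypothetical sketch of that pattern, not the provider’s actual pipeline; the in-memory chunk store stands in for a real vector database.

```python
from dataclasses import dataclass, field

@dataclass
class IndexedChunk:
    """A chunk of file text in the vector store, carrying the ACL
    snapshot taken from the file's metadata at sync time."""
    file_id: str
    text: str
    embedding: list[float]
    permitted_user_ids: set[str] = field(default_factory=set)

def search(chunks: list[IndexedChunk], query_embedding: list[float],
           user_id: str, top_k: int = 5) -> list[IndexedChunk]:
    """Rank chunks by cosine similarity, but only among files
    the requesting user is permitted to see."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    # Apply the ACL filter first so restricted files never reach ranking
    visible = [c for c in chunks if user_id in c.permitted_user_ids]
    return sorted(visible, key=lambda c: cosine(c.embedding, query_embedding),
                  reverse=True)[:top_k]
```

Filtering before ranking (rather than after) is the safer design choice here: a restricted file can never surface in results, even partially, for a user outside its ACL.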
This is only the beginning
Merge’s partnership with this leading LLM provider is just getting started.
The LLM provider will continue investing in Merge’s integrations to strengthen their position as the central hub for company information. They plan to expand their use of Merge’s file storage integrations and grow into new categories, like ticketing.
The team at Merge is looking forward to helping the LLM provider—and other cutting-edge companies—power a leading enterprise AI search experience.
{{this-blog-only-cta}}