Announcing Single Tenancy at Merge
Still, we understand that many enterprise-level users require more than just best-in-class security: they often require industry-specific compliance standards, or simply corporate peace of mind.
With this enterprise-first focus in mind, we’re excited to announce the launch of our new single-tenant offering. Think this might be the solution for you? Drop us a line at email@example.com!
What is Single Tenancy?
Single tenancy is like renting your own private Airbnb instead of staying in a hotel. The place is all yours. While this architecture pattern can be more resource-intensive to set up and maintain, it provides a few critical benefits to an enterprise:
- Increased flexibility: Resource allocation can be attuned to the exact needs of the customer.
- Custom specifications: Merge single tenancy customers are able to specify database options to comply with industry requirements or internal policies.
From speaking with our customers, it was clear that we would need to offer single tenant architecture to meet these needs. Below are some of the challenges we faced when building single tenancy at Merge.
The Data-Sync Problem
Our Unified API works with two types of data: customer-specific and operational.
Single-tenant architectures are inherently designed to segment customer-specific data, but we still needed a way to let operational data pass through. Consistent operational data (the data that lets our integrations talk to our Unified API) would enable our single-tenant customers to receive integration updates as we release them.
Thus, the data-sync problem was born: how do you share data models in systems designed to be segmented?
We at Merge like to consider ourselves API experts, and our initial approach was to share data the same way our users share theirs: through secure API calls. To the uninitiated, this seems simple enough: expose an endpoint on each single tenant that can generically take in our model data, fire off POST requests whenever our production data changes, and call it a day.
To the initiated, this quickly proves insufficient.
We use relational database models built on Postgres, and preserving foreign key relations across our models requires data to be processed and saved in a specific order. The problem: standard POST requests couldn’t guarantee that order. Furthermore, our production server would need to keep track of the URLs required to sync with each tenant. This pattern was bad enough for scaling purposes, and that was before we started to worry about isolated tenants having downtime and missing messages entirely!
SNS & SQS to the Rescue
Fortunately for us, Amazon Web Services offers two services that are perfect for our problem: Simple Notification Service (SNS) and Simple Queue Service (SQS).
SNS is a push-based notification service: data gets packaged as a “message” and sent to all subscribers of a given “topic.” By using SNS, we’d no longer need to manage a list of remote URLs; we could simply blast out the updated data once and let SNS handle the fan-out for us.
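To make the publish side concrete, here’s a minimal sketch using boto3. The topic ARN, model names, and payload shape are illustrative assumptions, not Merge’s actual schema:

```python
import json


def build_sync_message(model_name: str, row: dict) -> str:
    """Serialize a changed row as an SNS message body (hypothetical format)."""
    return json.dumps({"model": model_name, "data": row})


def publish_model_update(topic_arn: str, model_name: str, row: dict) -> str:
    """Publish a change once; SNS fans it out to every subscribed tenant queue,
    so the publisher never needs a list of tenant URLs."""
    import boto3  # deferred import so the serialization helper stays dependency-free

    sns = boto3.client("sns")
    response = sns.publish(
        TopicArn=topic_arn,
        Message=build_sync_message(model_name, row),
        MessageAttributes={
            "model": {"DataType": "String", "StringValue": model_name},
        },
    )
    return response["MessageId"]
```

Because every tenant subscribes to the same topic, adding a new tenant never touches the publisher.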
SQS, on the other hand, is a message queue. It allows any receiver (in our case, a single tenant) to process messages in an ordered and reliable manner. By allocating a queue to each tenant server and having that queue subscribe to our SNS topic, we could propagate any data change on our main server to isolated tenants in a reliable, scalable fashion.
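The tenant side of that pattern might look like the sketch below. The envelope unwrapping reflects how SNS wraps the original payload in a `Message` field when delivering to SQS; the `apply_update` callback and payload shape are assumptions for illustration:

```python
import json


def process_queue(sqs, queue_url: str, apply_update) -> int:
    """Drain pending sync messages from a tenant's SQS queue.

    Each message is applied via apply_update(model, data) and deleted only
    after success, so a crash mid-batch simply redelivers the message later.
    Returns the number of messages handled.
    """
    handled = 0
    while True:
        batch = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=5,  # long polling avoids hammering the API
        )
        messages = batch.get("Messages", [])
        if not messages:
            return handled
        for msg in messages:
            # SNS wraps the published payload in a "Message" envelope field.
            envelope = json.loads(msg["Body"])
            payload = json.loads(envelope["Message"])
            apply_update(payload["model"], payload["data"])
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=msg["ReceiptHandle"],
            )
            handled += 1
```

Deleting a message only after it’s applied is what makes a tenant outage safe: undelivered messages just wait in the queue until the tenant comes back up.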
After data-sync, there were plenty of additional challenges to solve. How do we best sync large batches of data when a new tenant is spun up? How do we handle data that exceeds SNS/SQS’s 256 KB message size limit? How do we optimally check for and process new messages in our tenants? Countless completed Asana tasks later, we couldn’t be more thrilled with the results.
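For the message-size question, one common approach (not necessarily the one Merge chose) is the “claim check” pattern: oversized payloads are stored out of band, say in S3, and a small pointer message is sent instead. The `upload_to_s3` callable here is hypothetical:

```python
import json

SNS_LIMIT_BYTES = 256 * 1024  # SNS/SQS maximum message size


def maybe_offload(body: str, upload_to_s3) -> str:
    """Claim-check sketch: payloads under the limit pass through unchanged;
    anything larger is uploaded out of band (upload_to_s3 returns a key)
    and replaced by a small pointer message the consumer can dereference."""
    if len(body.encode("utf-8")) <= SNS_LIMIT_BYTES:
        return body
    key = upload_to_s3(body)
    return json.dumps({"s3_pointer": key})
```

The consumer then checks for an `s3_pointer` field before applying the update, fetching the real payload when one is present.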
Solving engineering problems like these is a huge part of what makes it so exciting to be an engineer at Merge. If these seem like interesting problems to you, then great news: Merge is hiring! Why not apply today?