How Merge accelerates AI adoption across engineering without compromising security
As AI tools for engineering proliferate, our security team has faced a tough balancing act: enabling engineers to leverage these tools while keeping our company data safe.
Over the past year, Chris Bailey, our Head of Security, and his team have taken several measures to navigate this balance successfully.
Here’s just a snapshot of what they’ve done.
Upgrade or block developer tools based on their data ownership policies
Before even looking at a developer tool’s technical security posture, our security team reviews their terms of service and data ownership policies.
Many dev tools reserve rights to use or retain customer data for model training. This means that if your team uploads sensitive information, you may unknowingly grant them broad usage rights.
Chris explains why this is so harmful:
“You can expose code repositories, internal libraries, and CI/CD workflows to third parties, effectively compromising your intellectual property and eroding your product’s competitive advantage.”
Whenever our security team sees this during a vendor evaluation, they determine whether there’s a strong enough business case to warrant upgrading to the vendor’s enterprise plan (which typically waives data ownership and training rights).
For dev tools, this involves taking a holistic, quantitative approach to reviewing each tool:
- Estimated time savings for our engineers on a weekly basis
- Expected percentage reduction in coding errors
- Ease of integrating the tool with our existing tech stack and CI/CD pipelines
- Level of responsiveness when issues arise (i.e., the terms in their SLA)
- How much influence our engineers would have on the vendor’s roadmap
Based on the answers to these items, our security team can quickly gauge whether a tool offers a compelling ROI. When it doesn’t, they’ll typically block it from being used and identify the best alternatives.
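To make the rubric more concrete, here’s a minimal sketch of how a weighted review score could be computed. The criteria weights, the approval threshold, and the ToolEvaluation class are hypothetical illustrations, not the actual numbers or tooling Chris’ team uses.

```python
from dataclasses import dataclass

# Hypothetical rubric: weights and threshold are illustrative only.
WEIGHTS = {
    "weekly_hours_saved": 0.35,   # estimated time savings per engineer
    "error_reduction_pct": 0.25,  # expected reduction in coding errors
    "integration_ease": 0.20,     # fit with existing stack and CI/CD pipelines
    "sla_responsiveness": 0.10,   # support terms in the vendor's SLA
    "roadmap_influence": 0.10,    # our ability to shape the vendor's roadmap
}

APPROVAL_THRESHOLD = 0.6  # below this, block the tool and look for alternatives


@dataclass
class ToolEvaluation:
    name: str
    scores: dict  # each criterion scored 0.0-1.0 by the review team

    def weighted_score(self) -> float:
        return sum(WEIGHTS[c] * self.scores.get(c, 0.0) for c in WEIGHTS)

    def recommendation(self) -> str:
        score = self.weighted_score()
        if score >= APPROVAL_THRESHOLD:
            return f"{self.name}: pursue enterprise plan (score {score:.2f})"
        return f"{self.name}: block and identify alternatives (score {score:.2f})"


if __name__ == "__main__":
    example = ToolEvaluation(
        name="example-codegen-tool",
        scores={
            "weekly_hours_saved": 0.8,
            "error_reduction_pct": 0.6,
            "integration_ease": 0.7,
            "sla_responsiveness": 0.5,
            "roadmap_influence": 0.3,
        },
    )
    print(example.recommendation())
```

The point of a weighted score like this isn’t precision; it forces every tool through the same criteria so the block-or-upgrade decision is consistent and easy to explain back to engineering.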
Develop a tiered security review process based on risk level
Chris’ team classifies data into categories, such as end-user data, code, and internal documentation, and evaluates risk based on the type of data a tool processes.
Based on the type of data a tool would collect and/or create, they assign it a level of risk, which informs the security review they go on to perform.
Here’s how this looks for dev tools:
- If it touches intellectual property, like source code, it’ll go through a full security review
- If it helps with productivity (e.g., auto-generating docs) and doesn’t process or access sensitive data, they may fast-track it after confirming there aren’t secondary risks, such as plugins or external APIs
- If it only uses mock data or demo content, they move even faster to help engineering begin testing the tool quickly
This tiered approach helps teams move fast when risk is low and remain cautious when there’s potentially harmful exposure.
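As a rough sketch of how a classification like this could drive the review path, the example below maps data categories to review tiers. The category names, tier labels, and review_tier helper are assumptions for illustration, not the security team’s actual classification scheme.

```python
from enum import Enum


class ReviewTier(Enum):
    FULL_REVIEW = "full security review"
    FAST_TRACK = "fast-track after checking plugins and external APIs"
    EXPEDITED = "expedited approval for testing"


# Hypothetical mapping from the data a tool touches to the review it receives.
DATA_SENSITIVITY = {
    "source_code": ReviewTier.FULL_REVIEW,
    "end_user_data": ReviewTier.FULL_REVIEW,
    "internal_documentation": ReviewTier.FAST_TRACK,
    "mock_or_demo_data": ReviewTier.EXPEDITED,
}


def review_tier(data_types: list[str]) -> ReviewTier:
    """Return the strictest review tier implied by the data a tool handles."""
    order = [ReviewTier.FULL_REVIEW, ReviewTier.FAST_TRACK, ReviewTier.EXPEDITED]
    # Unknown data types default to a full review out of caution.
    tiers = [DATA_SENSITIVITY.get(d, ReviewTier.FULL_REVIEW) for d in data_types]
    return min(tiers, key=order.index)


print(review_tier(["internal_documentation", "mock_or_demo_data"]).value)
# -> "fast-track after checking plugins and external APIs"
```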
Partner with engineering leaders to choose the right tools
Chris and his team work closely with engineering leadership to identify the best tools for specific use cases and purchase enterprise licenses for them.
Chris explains why this is a win-win:
“Giving our developers access to secure, high-performing tools gives them fewer reasons to go rogue with unapproved options.”
For example, for code generation, they evaluated tools based on the frequency and nature of hallucinations, the prevalence of insecure code patterns, and the potential for synthetic vulnerabilities.
After reviewing several code generation platforms, our security and engineering teams found that Windsurf and Claude best met our security requirements and engineering use cases, so they purchased enterprise licenses for both.
Chris has already seen this approach help build goodwill with engineering:
“Our engineers can tell that we’re 100% committed to helping them stay on the bleeding edge of AI, and that’s led them to trust our process and follow our policies.”
Provide timely, relevant communications on AI tools
Chris’ team communicates AI developments (relevant not only to developers but also to the team at large) in highly visible ways, from posting in the "#general" Slack channel to speaking at all-hands meetings.
They time these communications based on the type of AI advancement and its potential impact on our security posture.
For example, if a trending AI product surfaces and they immediately see potential security risks, they notify the team that it hasn’t been vetted yet and shouldn’t be used until further notice.
In addition to providing proactive communications, Chris has found it helpful to remind Mergies of a tool’s security policy right when it’s launched:
“We’ve found that timing communications on AI products as soon as they’re released is extremely effective, as employees may forget about our security policies on a tool by the time it’s formally launched.”
Final thoughts
Our security team has proven that you don’t have to sacrifice security to embrace AI (and vice versa).
By pairing smart data classification with proactive enablement, strategic reviews, and continuous education, they’re empowering our engineers—and the rest of our team—to leverage AI successfully.
“At the end of the day, AI risk management isn’t about reinventing the wheel. It's the same preventative risk assessment we’ve always done, just through a new lens,” says Chris.
When you think about it this way, you should feel confident in helping your team use AI securely.
