Processing the world's Salesforce metadata isn't easy!

Kevin Boyle on November 13th 2015


Gearset works differently from any other deployment or release management solution for Salesforce. Other tools, like the Force.com Migration Tool (ANT), use the Salesforce Metadata API to fetch the metadata as text files, but have no understanding of the actual metadata within those files.

Gearset processes the files downloaded from the Metadata API to build a semantic understanding of the metadata, including the relationships between objects. This allows us to do things like detect missing dependencies, flag API version mismatches in Apex, handle differences in history tracking, and work around quirks in Salesforce deployments that can only be resolved with an understanding of your org's metadata.
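To make the idea concrete, here is a minimal sketch of what "semantic understanding" can mean in practice: parse an object's metadata XML, scan some Apex source for custom-field references, and flag any field the code uses that the object never declares. This is purely illustrative, not Gearset's actual implementation; the object XML, the Apex snippet, and the field names are all made up for the example.

```python
import re
import xml.etree.ElementTree as ET

# Illustrative CustomObject metadata, trimmed to the parts that
# matter for dependency checking (the Metadata API returns far more).
OBJECT_XML = """<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <fields><fullName>Invoice_Total__c</fullName></fields>
</CustomObject>"""

# Illustrative Apex snippet referencing two custom fields.
APEX_SOURCE = "Decimal total = acct.Invoice_Total__c + acct.Discount__c;"

NS = "{http://soap.sforce.com/2006/04/metadata}"

def declared_fields(object_xml):
    """Collect the custom fields declared on an object."""
    root = ET.fromstring(object_xml)
    return {f.findtext(f"{NS}fullName") for f in root.findall(f"{NS}fields")}

def referenced_fields(apex_source):
    """Find custom-field references (the __c suffix) in Apex source."""
    return set(re.findall(r"\b(\w+__c)\b", apex_source))

def missing_dependencies(object_xml, apex_source):
    """Fields the Apex code uses but the object does not declare."""
    return referenced_fields(apex_source) - declared_fields(object_xml)

# Discount__c is referenced in Apex but never declared on the object,
# so a deployment of this code alone would fail.
print(missing_dependencies(OBJECT_XML, APEX_SOURCE))
```

A text-diffing tool sees only two unrelated files here; a tool that parses both can tell you the deployment is doomed before Salesforce does.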

Any deployment service without a rich understanding of the metadata will keep running into these problems, and at best will only ever be thin automation around the Salesforce Metadata API.

No free lunch

As Gearset is a fundamentally new approach to solving the challenges of Salesforce deployments, it poses new challenges for our engineering team. One of these is the amount of compute required to process Salesforce metadata. Another is that as we gain more and more users, the service is exposed to all sorts of new metadata. We've also seen some complex orgs with lots of profiles that tripped up our analysis engine and required far more compute than any of our initial testing had dealt with. Just like Salesforce, we are a multi-tenant system, so new users with these complex orgs caused service degradation for our other users.

We recently suffered two outages totalling seven hours. These are the first noticeable outages we've had since launching in June, so we paused all development work to address the underlying cause. Teams rely on Gearset to do their jobs, and helping them succeed on the Salesforce platform is our number one priority.

We have taken two steps to ensure the service can handle its current workload, with plenty of room to grow.

  1. We have over-provisioned our compute capacity by about 250% over peak load. This means that if new users sign up with orgs more complex than anything we've seen before, we still have enough capacity to handle their requests.
  2. We have analysed our services under load and addressed many of the issues that were causing the spikes in processing power in the first place. The following screenshots from our analysis software show that we've reduced the processing power required by about 75%.
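For clarity on how we're reading those two figures: we take "over-provisioned by 250%" to mean spare capacity beyond peak load, and "reduced by 75%" to mean the drop relative to the original requirement. The units below are illustrative, not our real capacity numbers.

```python
def headroom_pct(capacity, peak_load):
    """Spare capacity beyond peak load, as a percentage of peak."""
    return (capacity - peak_load) / peak_load * 100.0

def reduction_pct(before, after):
    """Drop in compute required, as a percentage of the original."""
    return (before - after) / before * 100.0

# Provisioning 3.5 units of compute per unit of observed peak load
# gives 250% headroom...
print(headroom_pct(3.5, 1.0))   # 250.0
# ...and cutting a job's cost from 8 units to 2 is a 75% reduction.
print(reduction_pct(8.0, 2.0))  # 75.0
```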

Gearset memory usage before optimization

Gearset memory usage after optimization

We shipped these changes earlier this week and have been monitoring the service under load to see how it performs. We haven't seen any issues, and the performance improvements have given us plenty of headroom to add all the features we're cooking up on our roadmap.

To get started with Gearset and try the only Salesforce deployment solution with a true understanding of your org, click here.

Ready to get started with Gearset?

Sign up now to start your completely free 30-day trial