Deploying data between Salesforce orgs can be a difficult, fraught affair. There's a lot to consider and remember, especially when deploying critical data to production. Complex data relationships, with all manner of references between objects, mean that you have to think about dependencies, relationship handling and field mapping before deploying records to your target. All of this can be a time-consuming and error-prone process, to say the least.
Common data deployment errors
How you structure your data in the first place can give rise to some inherent challenges when you come to deploy it later on. Here, as an example, are a few common errors that you may encounter when you deploy your data to another Salesforce org:
- The absence of unique external IDs or a primary key can result in duplicate records when you deploy to your target org, as there's no reliable way to match incoming records against existing ones.
- A duplicate rule in the target org can cause a data deployment to fail if a new record matches the values in one or more fields of an existing record and is rejected as a duplicate.
- Inconsistent metadata between two orgs can cause data deployment errors.
- Missing dependencies can also cause data deployments to fail.
- A new custom validation rule or trigger added to the source and target orgs can cause an error if existing records deployed to the target org fail to meet the new rule.
Five tips for structuring Salesforce data
Perhaps you've run into one of these errors yourself. If so, you'll be pleased to know that there are a few simple strategies for structuring your data in line with good data practices. Ensuring data integrity in this way is important regardless of which data migration tool you use.
1. Use unique IDs (entity integrity)
We recommend using a unique external ID field to identify records. The unique value might be a product code, a contract number or a social security number, depending on the type of record. You could also add a field specifically for the purpose of setting a unique ID. What's important is that you have control over the identifier. Don't rely on Salesforce's Auto Number fields, for example, as the Auto Number sequence can differ between orgs when you come to deploy your data. To prevent the deployment of duplicate records to another org, it's best to use a unique ID as the record's upsert field, i.e. the field that determines whether a record is created, updated or rejected as a duplicate in the target.
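To make this concrete, here's a minimal sketch of an external ID upsert using the open-source simple_salesforce Python library (our choice purely for illustration; the connection details and the Product_Code__c field are hypothetical):

```python
from simple_salesforce import Salesforce

# Hypothetical credentials for the target org
sf = Salesforce(username="user@example.com", password="pw",
                security_token="token")

# Upsert on the external ID Product_Code__c (a hypothetical field):
# Salesforce updates the record if a match exists and creates it
# otherwise, so re-running the deployment can't create duplicates.
status = sf.Product2.upsert(
    "Product_Code__c/PROD-0042",
    {"Name": "Widget", "IsActive": True},
)
print(status)  # 201 when created, 204 when updated
```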
2. Use lookup fields for IDs (referential integrity)
Never use a text field to reference another record by ID; always use a lookup relationship field. With a lookup field, Salesforce keeps the reference valid for you: if the record referenced by the lookup is deleted, Salesforce clears the lookup field, so it never points to a non-existent record. A raw ID stored in a text field gets no such protection.
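Lookups also combine nicely with external IDs: the Salesforce REST API lets you reference the parent record through the relationship name and an external ID, so the same payload works in any org. A rough sketch, where Account_Id_Text__c and Account_Code__c are hypothetical fields:

```python
# sf is the simple_salesforce connection from the earlier sketch.

# Risky: a raw record ID in a plain text field is just an opaque
# string to Salesforce, and it won't resolve in a different org.
sf.Contact.create({"LastName": "Smith",
                   "Account_Id_Text__c": "0015g00000AbCdEAAX"})

# Better: set the lookup through the relationship name and a
# hypothetical external ID, so no hard-coded record IDs are needed.
sf.Contact.create({
    "LastName": "Smith",
    "Account": {"Account_Code__c": "ACC-0042"},
})
```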
3. Keep metadata consistent (structural integrity)
Before deploying data between orgs, always make sure that the metadata is identical in your source and target orgs. Keeping your metadata consistent helps maintain the referential relationships between records when you deploy your data. For example, if you've removed a required relationship in the source org that's still present in the target org, records deployed from the source will lack a value for a relationship the target still requires, so they'll fail to insert. Similarly, if you've added a required relationship in the source org that isn't present in the target, you'll end up deploying two independent records that won't be related to each other in the target org.
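A quick way to catch drift like this is to diff the field lists that each org reports. Here's a rough sketch using the describe call, where source_sf and target_sf are simple_salesforce connections built as in the first sketch:

```python
def field_names(conn, sobject="Contact"):
    """Return the set of field API names an org reports for an object."""
    return {f["name"] for f in getattr(conn, sobject).describe()["fields"]}

# Fields present in the source org but absent from the target
missing = field_names(source_sf) - field_names(target_sf)
if missing:
    print("Deploy metadata first. The target org is missing:", missing)
```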
4. Maintain data relationships (semantic integrity)
When deploying data, always include any related objects or dependencies referenced by the records that you're deploying. This applies in particular to records in a master-detail relationship: the master-detail field on a detail record is always required, so detail records can't be inserted before their master exists in the target org.
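In practice that means deploying parents before children. Here's a sketch with two hypothetical custom objects, Invoice__c (master) and Line_Item__c (detail), each carrying its own external ID:

```python
# Connections as in the earlier sketches.
# 1. Upsert the masters first, keyed on their external ID.
for inv in source_sf.query_all(
        "SELECT Name, Invoice_No__c FROM Invoice__c")["records"]:
    target_sf.Invoice__c.upsert(f"Invoice_No__c/{inv['Invoice_No__c']}",
                                {"Name": inv["Name"]})

# 2. Then the details, resolving the master-detail field through the
# master's external ID instead of a hard-coded record ID.
for item in source_sf.query_all(
        "SELECT Line_No__c, Amount__c, Invoice__r.Invoice_No__c "
        "FROM Line_Item__c")["records"]:
    target_sf.Line_Item__c.upsert(f"Line_No__c/{item['Line_No__c']}", {
        "Amount__c": item["Amount__c"],
        "Invoice__r": {"Invoice_No__c": item["Invoice__r"]["Invoice_No__c"]},
    })
```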
5. Check the validity of your data (user-defined integrity)
When adding new validation rules, check that your existing records pass them. There are different ways of doing this. You could, for example, deploy your records to an empty sandbox org that doesn't have the new validation rule enabled, while applying filters so that only records that would fail the new rule are included. Any invalid records will then be deployed to the sandbox, where you can inspect and fix them before the rule goes live.
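Often you don't even need a deployment to spot them: a filtered query against the source org will do. A sketch, assuming the new rule will require Contact.Email to be populated:

```python
# sf is a simple_salesforce connection, as before.
invalid = sf.query_all("SELECT Id, Name FROM Contact WHERE Email = null")
print(f"{invalid['totalSize']} record(s) would fail the new rule")
for rec in invalid["records"]:
    print(rec["Id"], rec["Name"])
```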
What Gearset’s data loader does for you
Thankfully, Gearset’s data loader has made the data deployment process much simpler, quicker and safer, as it plans and executes complex data deployments for you. Smart relationship handling means that you can automatically migrate entire hierarchies of records, deploy selectable dependencies and upsert records with automatic cross-referencing. You can also deploy records from multiple objects at the same time rather than having to make separate deployment runs. As a consequence, you can make fast, straightforward data deployments while preserving data integrity at the same time.
Does this mean that Gearset has completely eliminated all possible issues that can arise when executing a data deployment? Well, not quite. The following errors are still common when deploying data between orgs (the sketch after this list shows where they surface):
- DUPLICATE_VALUE or DUPLICATE_EXTERNAL_ID
- DUPLICATES_DETECTED
- INVALID_OR_NULL_FOR_RESTRICTED_PICKLIST
- REQUIRED_FIELD_MISSING
- FIELD_CUSTOM_VALIDATION_EXCEPTION
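These are the statusCode values Salesforce attaches to failed rows in API results. For instance, with a bulk upsert via simple_salesforce you might surface them like this (the record data and external ID field are hypothetical):

```python
# sf as in the earlier sketches.
records = [{"Product_Code__c": "PROD-0042", "Name": "Widget"}]
results = sf.bulk.Product2.upsert(records, "Product_Code__c")

for rec, res in zip(records, results):
    if not res["success"]:
        # Each failed row carries error entries with a statusCode such
        # as DUPLICATE_VALUE or REQUIRED_FIELD_MISSING, plus a message.
        for err in res["errors"]:
            print(rec["Product_Code__c"], err["statusCode"], err["message"])
```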
Even with Gearset, these errors can still occur if you don’t structure your data in line with the tips above. That said, a combination of good data practices and Gearset’s data loader makes laborious and error-prone data deployments a thing of the past.
Get started and deploy your data now!
To get started with Gearset and take the stress out of your data deployments, sign up for our free 30-day trial today!