Adopting Salesforce DevOps | DevOps Summit 2022

Description

How do you confidently guide your team into DevOps adoption? Monica Thornton Hill, Senior Technical Consultant at Fusion Risk Management, shares insights from her extensive experience.

Learn how to:

  • Question the effectiveness of your existing Salesforce release management process
  • Avoid common challenges and pitfalls in DevOps adoption
  • Evolve your DevOps workflows and evaluate different options

Transcript

Okay. So today we're going to be talking about adopting Salesforce DevOps. Specifically, we're going to walk through how to evaluate your current state, discuss some tools and process improvements you can focus on to avoid common challenges, and look at what some of these changes to your process might look like.

Alright. So my name is Monica Thornton Hill. I am a senior technical consultant at Fusion Risk Management in Chicago. I started out as a Salesforce administrator in 2017, but after a year or so, I realized I wanted to learn how to code.

In 2019, I became a Salesforce developer, and in 2020, a senior developer. Last October, I joined Fusion's professional services technical architecture team. I've been using Gearset for almost four years now.

So I want to walk through some of the common questions that teams should be thinking about when making changes to their deployment process. These are common questions, but I think they're worth addressing and worth starting some of these discussions over, because the changes that come out of those discussions are going to be really worth it long term. I also say it's worth slowing down over because I've heard teams say, when talking about adopting DevOps, things like, "We've got too much going on right now."

"We're too busy to alter our deployment strategy." And to me, that feels like all the more reason to consider change. This is the right time to think about change, and the right time to think about those benefits.

So in your current state, how long does it take your team to deploy? How much of their day-to-day is spent on moving completed work? Besides delivering value more frequently to our end users and customers, I think the biggest benefit of adopting DevOps best practices is the time you give back to your team. The often tedious and excessive time spent deploying is time that could not only be spent on more development efforts; it's also time that impacts your team's work-life balance.

It's time and effort that can cause burnout. And with the right tools and processes, you can really improve how much time your team is spending on deployments. As an example, this is time spent not only on change sets, which are definitely time consuming, but also time spent manually fixing work that has unintentional impacts in production. It might just be one person who's often staying after hours doing these types of fixes, or one person staying after hours just to click deploy on a change set because there's no way to schedule a deployment.

You know, there's no way to do a rollback and so on.

I also want to stress that this is as much about the tools you have available as it is about your overall process. And actually, depending on your current state, process could be even more crucial.

I think one of the big challenges that teams face today when adopting DevOps is focusing only on the tools, which, yes, are a big part of Salesforce DevOps. But you can have the fanciest tools around, and if you're still only deploying once or twice a month with an active development team, you're going to increase risk and the amount of time and effort it takes to move any completed work to your end users.

So let's talk about a worst case scenario.

This was one of the first projects I ever did as a Salesforce admin. It was a really large project: months of work, around six months' worth.

There were sixty-plus Process Builders and flows, new permissions, and modified shared objects. By shared objects, I mean objects like the Case object, where there were already teams in production actively using them. There were new layouts, custom apps, and so on.

This is not a visualization of that project's metadata specifically, but this image is an example of an org's metadata and all of its connected dependencies. And it's certainly similar to what my source org at the time, a dev sandbox, and the target org, production, looked like. What we had built had dependencies, and what we were updating or impacting also had its own dependencies to consider. And at the time, all we had as a deployment tool was Salesforce standard change sets.

For some more context, I was maybe four to five months into the Salesforce ecosystem. This was my first ever role, and so change sets were actually all that I knew; they were all I was familiar with.

And because of the number of Process Builders and flows, I remember the change set pages taking forever to load. I remember waiting for them to upload, the validation failing, then having to add a dependency I had forgotten, and that effort feeling like completely starting from scratch every time. I'm sure many of you are familiar with this scenario if you've worked with change sets, or if you were new and working with change sets. So this deployment naturally took evenings.

Right? It took way longer than an eight-hour workday. It took an over-the-weekend validation as well, because our business requirements at the time required us to deploy on a Friday for a Monday go-live and do a prod validation over the weekend. And I think change sets would have taken less time here with more prep.

For example, you can build them as you develop features, so they're not something done at the end of a project.

And we could have had more metadata documentation ahead of the deployment, a list of everything that was changed, but both of those approaches also take a lot of time. That's valuable time spent on that type of documentation.

And I think another big part of why it took so long is that I was an inexperienced admin. Right? This was my first role in the space. I didn't have this visualization in mind of all the different dependencies in the Salesforce org, which I think is okay.

This was a learning experience. But with a good tool, inexperience like that doesn't directly result in days lost, in hours spent in the evenings and on the weekends. Right? It doesn't have to be trial and error.

I think that's an avoidable disadvantage. What we really needed was a tool that did problem analysis before going through a full validation, to catch some of those validation errors before we even spent the time waiting for validations to come back from Salesforce. A tool that could identify missing dependencies or surface warnings based on what was or wasn't included. A tool that could remove specific user references that wouldn't match between environments, or permissions that weren't relevant to what was included in the deployment package, and so on. Without these features, unless you've got years of experience in the space and all the quirks of change sets in your tool belt, you're at an avoidable disadvantage, and the time spent on deploying goes up substantially.

So what could have gone differently? We talked about tools, and there are tools out there that provide features like problem analysis and pre-validation checks, that can smartly identify dependencies, and so on. That will definitely save time and effort, especially with larger deployments or deployments that span changes over weeks and months like the one we were working with. But again, your tools can only help so much. A big part of mastering Salesforce DevOps is focusing on process.

We should not have waited months to deploy our work. We should have been deploying smaller and more often as features passed QA and UAT.

Even if we were deploying them dark, meaning they went to production but our end users couldn't actually see them or interact with them yet, which was definitely a new concept for our business owners and product owners at the time. But basically, the more our dev environment became out of sync with production and the longer work stayed in the sandbox, the higher the potential for issues during validation and inaccuracies during testing.

We also should have focused on reaching an MVP, a minimum viable product, at the beginning of our project. Our partners, our business owners, our product owners were very focused on all-or-nothing perfection, which is understandable. They wanted to give our end users, their teams, the best experience we could offer, and they wanted us to get it right. But an MVP is not a test run.

It's not about temporarily settling for something that doesn't provide value. It's about building something that meets the user's needs, that provides immediate value to end users, and that we can then enhance over time. It's about getting value now, which in that scenario would have been six months earlier, and then prioritizing enhancements to improve it iteratively.

The other advantage of an MVP is getting immediate, authentic feedback from end users. It is much easier to pivot or alter a feature, potentially fundamentally, earlier on than months down the road.

One of my mentors since I started working in Salesforce has shown our team this illustration over the years, to remind us about the concept of an MVP, or to use it to communicate and get on the same page with our product owners about an MVP. I'm sure many people have seen this illustration before. But as you can see, the goal here is to get the earliest testable, usable, lovable feature out there, and you'll more likely end up with something better in the end that fulfills the end user's need during each iteration, not just months later. In this example, the user's need is to get from point A to point B, which a skateboard does, but which a wheel, or two wheels on an axle, of course does not.

Alright. So the next set of questions to think about around your current state: what's your plan when something goes wrong? For example, a validation rule goes to prod and it breaks a flow.

Or it breaks code. Permissions were altered, like field-level security was removed. Custom metadata type access was maybe removed unintentionally from a profile, and now users can't access a fundamental part of the system. How does your team currently handle that scenario?

What about when an entire go-live goes wrong, where multiple processes are not working the way they were working in dev, and the fix is bigger than maybe toggling off a new validation rule? Do you have a rollback plan? Does someone update prod directly? And if they do, how do you ensure that fix gets back to the dev environments, so the unfixed version doesn't accidentally get deployed back to prod later? How does your team communicate today with other teams or with your business partners? When something goes wrong, what's your plan for letting them know? And before anything deploys to production, what's the process for communicating a deployment?
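To make that concrete, here's a minimal sketch of the kind of Apex test that could catch a permission regression like this before it ever reaches production. It's an illustration only: the Support Agent profile, the Case.Status field check, and the Routing_Setting__mdt custom metadata type are hypothetical names, not something from the talk.

```apex
// Hypothetical sketch: a regression test that fails if a deployment strips
// field-level security or custom metadata access from a profile the app relies on.
// Profile, field, and custom metadata type names are placeholders.
@isTest
private class ProfileAccessRegressionTest {

    @isTest
    static void supportAgentStillHasRequiredAccess() {
        // Spin up a throwaway user on the profile we care about
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Support Agent' LIMIT 1];
        User u = new User(
            Alias = 'areg',
            LastName = 'AccessRegression',
            Email = 'access.regression@example.com',
            Username = 'access.regression.' + System.currentTimeMillis() + '@example.com',
            ProfileId = p.Id,
            EmailEncodingKey = 'UTF-8',
            LanguageLocaleKey = 'en_US',
            LocaleSidKey = 'en_US',
            TimeZoneSidKey = 'America/Chicago'
        );
        insert u;

        System.runAs(u) {
            // Describe results reflect the running user's permissions,
            // so these assertions fail if the profile lost access.
            Schema.DescribeFieldResult statusField = Schema.SObjectType.Case.fields.Status;
            System.assert(statusField.isUpdateable(),
                'Support Agent lost edit access to Case.Status');

            System.assert(Schema.SObjectType.Routing_Setting__mdt.isAccessible(),
                'Support Agent lost read access to Routing_Setting__mdt');
        }
    }
}
```

A test like this, run on a schedule or as part of every deployment's test run, surfaces that kind of regression as a failed job rather than as a Monday-morning support ticket.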

So some of the changes to consider here, again, are to your tools and process. As far as tools go, I think it's important to look for tools that have rollback capabilities. That's one of my favorite features in Gearset: you're a click away from viewing a snapshot of what your org looked like before the deployment and what it looked like after, so you can do an itemized rollback of the specific things that need to be rolled back, or a complete rollback to the way it was before your deployment.

You can also look for tools and features like change monitoring jobs, which take daily snapshots, or snapshots on a certain cadence, of a scope of metadata; automated testing jobs, so that you're continuously running unit tests against your source environment; continuous integration with unit testing; and then backups of metadata, backups of data, or both.
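As a sketch of what those continuous test runs buy you, here's the kind of small Apex regression test an automated testing or CI job might run on a cadence. The scenario and Case field values are hypothetical; the point is that a newly deployed validation rule or required field makes this fail in the job, not in front of users.

```apex
// Hypothetical sketch: a smoke test a scheduled unit-testing or CI job could run.
// If a newly deployed validation rule or required field blocks this insert,
// the job fails and flags the regression.
@isTest
private class CaseIntakeRegressionTest {

    @isTest
    static void webCaseStillInserts() {
        Case c = new Case(
            Origin = 'Web',
            Subject = 'Automated regression check',
            Description = 'Created by a scheduled test run'
        );

        Test.startTest();
        // allOrNone = false so a failure is returned for inspection instead of throwing
        Database.SaveResult result = Database.insert(c, false);
        Test.stopTest();

        String firstError = result.getErrors().isEmpty()
            ? 'no error details'
            : result.getErrors()[0].getMessage();
        System.assert(result.isSuccess(),
            'Web case insert failed; check for new validation rules or required fields: ' + firstError);
    }
}
```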

Another process improvement, or the way I like to think about this: in my experience, if multiple teams are coming to me saying, "Hey, things are really going wrong, we have a lot of situations where stuff goes to prod and it breaks," it's often the case that there's a communication issue going on, either between different dev teams, or just internally, or with the business partners. And it's also often the case that there's an environment management issue as well. Teams working in a shared production org should know who's changing what and when, ideally before it goes to a shared testing environment, not right before it deploys to prod, or in the worst case after it deploys to prod.

This can be done with collaborative source control tools like GitHub, or it can be as simple as improving your process with a tool like Jira, which also integrates with a tool like Gearset, or Slack, or other chat tools. Another process improvement to prevent worst-case scenarios is accurate testing. Yes, everything should be tested, but everything should be tested by someone who did not build it, and should also be tested in an environment that's otherwise in sync with production and shared across all dev teams. If one team is working in one dev sandbox and another team is working in a different dev sandbox, there's going to be an increased amount of risk if both of those teams try to deploy separately to production without first testing together in the same environment.

So we're gonna talk about what a deployment strategy might look like and the factors that could change depending on your team's preferences and your change management requirements.

So how often do you deploy today? And how and where do you deploy from? Who's responsible for deploying, for testing?

One of the biggest factors in a successful deployment strategy is how often deployments occur. In my experience, this has also been one of the biggest adoption challenges for Salesforce DevOps processes.

Some dev teams don't get full control over when and how often they deploy, because of the perceived or feared potential impact to end users and because of who generally owns the change management process.

I've seen change management processes that have come from leadership that are on the stricter side, where deployments are limited by the business to very specific windows, like twice a month or, worse, once a quarter. Those policies often come from a fear of disruption, maybe a lack of trust from business owners or partners, maybe poor communication, but it's a conundrum: not deploying as often actually increases the risk of disruption. The goal here should be frequency. This very rough chart is an attempt at visualizing the difference between waiting months to deploy a project versus deploying in parts, in smaller features, more frequently.

The longer we wait to deploy, the more out of sync our dev environment becomes and the longer it takes for end users to get the value out of what we're building.

So what might a deployment strategy for your team look like, keeping this goal of frequency in mind? Once you decide what to change about your current process and how to change it, what's a good starting point for your team? Think about DevOps adoption in first steps and then phases. The DevOps adoption process is kind of its own development life cycle.

It also benefits from aiming for an MVP, that minimum viable product. Another big challenge of adopting Salesforce DevOps is a forced timeline put onto your dev team, or onto multiple dev teams. It's not a great idea to outline a new process, potentially with new tools and other learning curves to consider, and expect a dev team to adopt it over a sprint. I think every team's starting point and adoption timeline will look different, but starting simple and iteratively improving over time is a beneficial approach.

These next few slides will show some example strategies, specifically around environment management, example tools and features to use, and the general flow of work. But it really comes back to what your team is comfortable with and the pace of change they're comfortable with. These examples also use Gearset as the deployment tool, just because it covers the features we've described so far and it's what I'm most experienced with.

Okay. So some of these examples are going to look really familiar to those who attended DevOps Dreamin' this year. But these first examples are as simple as it gets without updating prod directly. Changes are first made in an up-to-date sandbox of production.

They're tested by someone who did not build them, and then they're moved to production when ready. This is ideal for a single development team or a group of consultants working on one project iteratively, deploying small and often just like we talked about. On the right, you can see that feature A is live in prod, feature B is still in development, and feature C is on its way to production in the next scheduled deployment.

It's not shown here, but another thing you could use in this type of setup is Gearset CI jobs for continuous integration. There are validation-only jobs where you can continuously check the sandbox against production to make sure what you're moving passes all unit tests and would validate in a deployment.

Okay. So now we're getting more complex. This is just one example of a multi-team or multi-project approach. One of the biggest differences here is that we're still using sandboxes for our individual dev teams or projects, but we now have a dedicated one for integrated testing.

Right? So this is multiple teams working on different projects and deploying first to this QA sandbox, as an example. We also maybe have one for staging, for validating before something goes to production.

Maybe we have business users that want to do active UAT in a sandbox environment before we deploy to prod. That's this dedicated staging sandbox.

We're also using continuous integration, or CI, jobs here to periodically move changes from staging to production automatically.

CI jobs will allow us to run those unit tests and prevent any changes from moving if any of the tests fail. We might even be using Gearset's change monitoring jobs here, which allow us to revert back to a previous version of our production metadata, or one of the sandboxes' metadata, if needed, based on that daily snapshot of our org.

Here's a similar approach to before, but with source control in place and a simple branching strategy, just as an example. With this approach, we now have communication and collaboration truly built into the process itself because of the tool and process choices here. We have teams in multiple up-to-date sandboxes, or working locally with feature branches or scratch orgs.

Our feature branches here were created off of our main branch, which is our source of truth for our production's metadata.

We have two different pull request steps, so approval is required to go to a shared testing environment and then again to deploy to production. In this case, it's peer approval, so someone who did not build the work, or someone from another development team, has to review the changes being proposed at those two steps. And we also have a CI job between main and prod, our main branch and production, to automate some of those changes to be deployed on a cadence automatically, maybe the changes that we know aren't going to directly impact logic in production, like updates to page layouts and list views and things like that, but not updates to, say, a validation rule or a new flow.

But what's really important in this example is that communication becomes a fundamental part of the process.

Now we've got line-by-line tracking of changed metadata because we're using a source control tool. No one is spending their valuable time documenting everything that was changed, and any background on why something was changed should be captured in commit messages or PR descriptions.

And what's nice about tools like Gearset, which integrates with tools like GitHub or Bitbucket, is that anyone unfamiliar or inexperienced with Git can still participate in this process and utilize those tools. I also want to point out that our staging environment is now gone. This is taken from some real-world experience, where I had a team doing something similar to this, and once they built source control into their process, they realized the staging environment was not necessary.

It was an extra step, an extra step of validation that was covered by these PR checks, and it became redundant. So after some team feedback, we decided it wasn't necessary as we moved forward with this process.

So speaking of team feedback: lastly, but most importantly, the feedback loop within the team. Everything we've talked about so far has a ton of documentation out there already. Industry experts have spent time proving that DevOps best practices work, there's evidence for agile development, and so on, but the dynamics and the potential culture shift that happen within a team are not so straightforward.

One of the things I appreciated most in the past, when experiencing changes, especially to deployment processes or team workflows in general, is that our leadership and our peer mentors continuously checked in with everyone as a team. They created an environment where feedback was encouraged, where it was okay to ask questions and share opinions, including critical ones. It's important to check in maybe every week, every two weeks, whatever's best for your team. It's important to have a scheduled, dedicated, sacred time to make sure everyone is on the same page, to hear feedback and concerns, to hear frustrations.

There are more than likely going to be frustrations, especially depending on how big your team is or the level of eagerness for change. There are some admins and developers that actually like change sets, or rather, they like what they know. They like or prefer what they're comfortable with. It's not so much that they like the pain points of change sets, but they understand them.

They've been using them for years, and it's familiar. Moving too much and too fast, without giving them a chance to really understand the benefits, why all this extra effort is worth it, can be really discouraging. It's not likely that we can go from zero to a hundred, and it's not even likely we could go from fifty or sixty to a hundred. It would not be easy to go from change sets to source control over the course of a sprint or two, especially if concepts like branches and Git are new to your admins and developers.

That learning curve requires time and effort.

Also, I say sacred here because that scheduled time wasn't something that got dropped when things got busy. It's important. Work will definitely have to slow down, and that's a good thing; this is worth slowing down over. But as I said, most people prefer what they're already comfortable with, and I think that preference for the familiar is exacerbated by stress. It is much easier to take on something new when you're given the space, the prioritization, and the support to do so.

Making changes to fundamentally improve your deployment process is a commitment, and it is only going to be successful if you have this feedback loop. And finally, once you get that feedback, close the loop: take it into consideration and alter your plan. As an example, that might mean cutting out staging, cutting out UAT, deciding what steps are needed, what steps are taking up too much time, what's best for your team, and what's best for them right now. And don't be afraid to alter the pace of your plan, or take a step back if needed.

Finally, here are some resources about these topics. I am not an expert in this space. Everything I've ever learned about DevOps has been from really great mentors that I've been lucky to have, and also from industry experts who have shared their knowledge with others, like Andrew Davis. I always recommend his book, Mastering Salesforce DevOps; it feels very topical for the summit. And, of course, there's DevOps Launchpad, which was mentioned in the keynote, and the Gearset help docs, which cover a wide range of topics, as well as these Trailheads on Git and source control, which we touched on briefly but could easily be their own session or multiple sessions. So if you want to learn more, definitely check those out.

But thank you to Gearset for having me, and thank you everyone for listening.

Thank you so much, Monica, for such a great session and some fantastic advice on how to start the adoption process. The book that you showed is actually one I do recommend to people preparing for the CTA review board, so it's great to see that. I guess we'll need a few more minutes before our next speakers, so just sit tight for a second.

We do have some questions if you wanna answer them, Monica.

Yeah. It looks like we got some time.

Let's see.

You said you use Gearset CI jobs to deploy changes to an org from a branch after a PR merge. Can I ask if you specifically use delta CI jobs, which deploy only what was changed, or normal CI jobs, which deploy the whole branch?

So yeah. What we used was, in the CI job, you can configure it to deploy on a successful merge. We had a few that ran that way, so it kind of depended on the scope of metadata.

So there was a limited scope within that CI job: this is all the stuff from that branch that, upon a successful merge, should automatically go to the target environment. As an example, the one most commonly used was a CI job between main and prod that just ran whenever. So throughout the week, the weekend, whenever someone had an approved PR and a successful merge, an agreed-upon set of metadata that we thought was safe to go to prod at any time went to prod. I hope that answers the question.

Thank you. We have time for one more question. Is user acceptance testing handled in the QA sandbox?

Yeah. So in that last example, that is where UAT would happen.

For some background on that: instead of doing UAT right before going to production, UAT started to be built into our user stories. Based on the acceptance criteria in our user stories in Jira, that exercise of deciding on acceptance criteria was the UAT. So during QA, the acceptance criteria are what get tested. And any sort of business owner or product owner sign-off that might need to happen, they could also be a part of that discussion and that QA testing.

Thank you. Looks like one more. How frequently do you move changes to prod?

Yeah. So in that last example, those changes were on demand, on the basis of a PR being approved and merged successfully into our main branch. On a busy day, that could be ten deployments during the workday; on a slower day, it would be fewer.

We still reserved changes to code, so Apex changes, some of our heavier flows, and Process Builders, for after hours. Those would be scheduled separately to make sure they were deployed automatically, like 8 PM or 9 PM at the end of the workday, so that there wasn't a potential to disrupt users during the day if something did go wrong, and so that we had time to roll back in the morning if needed.