Gearset Pipelines – From Setup To Success

Description

Overcoming common hurdles with the Pipeline setup.

Transcript

Hello, everyone, and welcome to the Gearset Pipelines Accelerator Series. As a quick introduction, I'm Maroof, and I'm here with Charlie today. We're part of your customer success team here at Gearset. We're also joined by Antonio, one of the onboarding managers at Gearset, and Antonio will be here to answer any questions you might have for us today.

So today we're gonna be talking about Gearset Pipelines. This is functionality available as part of the Gearset automation license. Many of our customers are using Gearset Pipelines to level up their DevOps and automate their source control driven deployments.

In this three-part series, we'll talk you through the entire process so you too can make the most of your automation licenses.

So here's a quick rundown of what to expect today. First, Charlie will kick things off by talking about the benefits of version control. He'll cover some prerequisites and key considerations before you set up a Gearset pipeline. And finally, we'll talk you through how to get a pipeline up and running.

If you have any questions, feel free to drop them into the chat. We'll do our best to answer you either during or after the webinar. And with that, that's enough from me. I'll hand over to Charlie.

Hello, everyone.

Okay. Let me just share my screen. One second.

And we'll go into slideshow. Can you all see that okay?

Yeah. All good, Charlie.

Perfect. So, hi, I'm Charlie. And as Maroof said, this is the first session of a three-part webinar series where I'll be going through Pipelines in depth, covering the setup, maintenance, and tips and tricks along the way. That being said, let's make a start. So this first session is going to be from setup to success, overcoming common hurdles with Pipelines.

The first thing we're going to talk about is why people use version control.

People use version control for a variety of different reasons, but we can put these three on the right-hand side as the most important.

Starting with overwriting: if you're not working in isolated branches, there is often a chance that you'll be making changes to the same objects as other members of your team. If you're not using version control, there's limited visibility into when you're doing this, and it quite often causes overwriting.

However, when you use version control, you're able to work in isolated branches, as it says there, which means that if you do make changes to the same work, a conflict will be found and a decision needs to be made about which version of the changes you want to keep and, in turn, move through the pipeline.

Finally, history of all changes. Oh, no, we'll go through peer reviews first. So whilst using version control, it's easy to set review gates whereby other members of your team can check your changes and your code before they get moved to the subsequent environment.

And then finally, history of all changes. So with version control, you get a history of all your changes and therefore an audit trail of everything that has been changed in the past. So if you need to go digging for whatever reason, it's very easy to go ahead and find those changes.

So now that we've talked about the benefits of version control, we can kinda get into the branching strategy.

These are the main concepts and processes required to implement a branching strategy.

So in the top left, you can see short-lived branches. Within Pipelines, these are called feature branches. Here, you'll split work into smaller deliverables, meaning they contain fewer changes and therefore fewer conflicts, and, as a result, quicker merges.

At the bottom here, you can see code reviews. As I mentioned earlier, version control allows you to gate changes that are being made. And so within Pipelines, this means that issues are caught earlier and are therefore easier to fix.

Additionally, this means that you are reviewing smaller chunks of work more regularly rather than all in one go at pre release.

And then finally, in the top right hand side, the final part of the kind of branching strategy as an overview is continuous integration and continuous delivery.

This means that releases are made more repeatable and reliable through automation, and team members can perform releases more regularly as it is less manual.

Next slide is a bit of terminology that I might use throughout the webinar that will be useful to understand.

So the first one, as you can see here, is repository.

So a repository is the container for your source.

Within your selected version control system, this is where all the history of your changes will be found. A branch is a copy of your source.

It's like making a copy of a document before editing it, as you can see there, so that the source of truth is preserved.

Thirdly is a commit. So a commit is saving changes to one or more files in your branch. Each commit is unique and identifiable.

Next up is last common ancestor. So this is the most recent commit that both branches have in common before they diverge into separate branches. It's the fork in the road where a branch has been cloned from another branch and both start to have their own individual histories and commits.

A pull request is a set of changes you propose to merge into a branch, and a merge is when you combine changes from two separate branches.
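
If you like to see this in practice, git can report the last common ancestor directly. Here's a minimal Python sketch, assuming you're in a local clone and the branch names are just examples:

```python
import subprocess

def last_common_ancestor(branch_a: str, branch_b: str) -> str:
    """Return the SHA of the most recent commit the two branches share."""
    # `git merge-base` walks both histories back to the point where they diverged.
    result = subprocess.run(
        ["git", "merge-base", branch_a, branch_b],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Hypothetical branch names, purely for illustration:
print(last_common_ancestor("feature/my-change", "main"))
```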

And that gives you an overview of the Pipelines terminology.

Next slide.

So this slide is slightly hectic, but this is the branching strategy that we use within Pipelines. It is the expanded branching model.

So as you can see on the diagram, we have a main branch here at the bottom, which is a direct copy of your production environment.

As discussed earlier, you create a feature branch off of main, and then you make a small, deliverable change from your dev sandbox to this feature branch. These blue circles represent commits.

So once you've made these commits, you open a pull request against the QA branch.

This is then followed by any kind of validations, of this pull request and code reviews.

And then, once you're happy with those validations and you get given the green light, it's then merged into the QA branch.

Merging this PR then kicks off a CI job into the QA environment up here, preceded by any static code analysis and problem analyzers as well.

So once this has happened, the feature branch is still open, and the next step is to open a pull request against the UAT branch, as you can see here.

This pull request is then validated as well, and any code reviews happen much like the time before, before it is then merged into the UAT branch.

Once this is merged, any static code analysis and problem analyzers happen during the CI job as it is deployed into the UAT environment.

This is much the same as the time before.

Following this, the final step is, with the feature branch still open, to merge it into the main branch. So the three steps happen exactly the same. You create a pull request against the main branch, you do your validations and any code reviews, and then you merge the PR, which kicks off a CI job into production.

The static code analysis and problem analyzers happen the same.

So what can you see at the top here? This is what happens behind the scenes in Gearset Pipelines, and then this is what you see within Gearset, in the Pipelines viewer.

So the developer boxes, which are striped blue, are used to create feature branches, much the same as here, commit the changes as in steps two and three, and open the first pull request against the next environment, as you can see here.

The static environment boxes, which are usually green, are CI jobs bringing the changes from the environment branches to the related orgs, as you can see here. So number five is similar to the green box here. The numbers on the left and right are PRs which are open against that branch, and each box involves three steps: you create a pull request, you merge it into the branch, and then there's an automated deployment into the actual org.

So I'll kind of run through the process in parallel. So what happens is you make a commit from your dev box here on the far left hand side.

Once you've done that, you will follow the same process and open a pull request here, which you can see against the SIT environment, or, within this diagram, the QA environment.

And then the same process happens, following validations and code reviews, etcetera.

Once it has been merged, it will then kick off the CI job and be deployed into that SIT org.

This process is then continued throughout the pipeline as it is within this branching model. So the changes get promoted from SIT, and then the pull request is opened against UAT. And finally, once it reaches production, the pull request is opened against the main branch. All these validations and code reviews will take place, then the CI job will finally kick off, and your changes will be deployed into production.
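
To make the "same feature branch, one pull request per environment branch" idea concrete, here's a rough sketch of what that promotion looks like if you drove it through the GitHub REST API yourself. Gearset creates these PRs for you from the Pipelines viewer, so this is only an illustration; the repository, token, and branch names are placeholders:

```python
import requests

TOKEN = "ghp_example"           # placeholder personal access token
REPO = "my-org/pipelines-repo"  # placeholder repository

def open_promotion_pr(feature_branch: str, env_branch: str) -> int:
    """Open a PR promoting the feature branch into an environment branch."""
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/pulls",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/vnd.github+json"},
        json={"title": f"Promote my change to {env_branch}",
              "head": feature_branch,
              "base": env_branch},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["number"]

# The same feature branch is promoted one environment at a time; each PR is
# opened only after the previous one has been reviewed, merged, and deployed.
open_promotion_pr("feature/my-change", "qa")   # then "uat", then "main"
```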

If you have any questions about that, just let us know in the chat.

But moving on, there's a few prerequisites that you need to kind of understand and follow outside of Gearset.

So having understood the branching strategy, you kind of need to acknowledge that the branching strategy is something that you want to follow and make sure that you stick to as this is the only way that Gearset Pipelines is going to operate smoothly. This is a decision that the team as a whole has to get behind in order for things to work.

The second question is, have you got your repository set up? So you need a repository set up within your selected version control system. This includes seeding your main branch from production. I'll show you how to do this a little bit later on, but this is key as your main branch needs to be a direct copy of production.

Thirdly is have you got work in progress?

The main thing to think about here is that, hopefully, there is no work in progress. But if you do have work in progress, there might be extra work required to set up the pipeline. I won't go into this now, but there is an article that helps you decide depending on your scenario.

If you have got work in progress, you will need a code freeze. The reason for this ties into the next point: are your environments in sync? Your environments need to be in sync, because if they're not, it could cause issues later down the line. There are a few ways you can sync your environments: through a refresh, by pushing all your work in progress into production, or finally, by managing the work in progress using one of the different scenarios and options that I mentioned earlier.

Next is: have you got webhook permissions from your IT team? You need to make sure that the user you are using to create the pipeline is allowed to create webhooks, as Gearset Pipelines is built on webhooks. This is something you'll need to check with your IT team before you start.
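
If you're not sure whether your user has those rights, you can check ahead of time. As a rough sketch, assuming GitHub is your version control provider (the token and repository name are placeholders), the authenticated user generally needs admin access on the repository to manage its webhooks:

```python
import requests

TOKEN = "ghp_example"           # placeholder token for the user who will create the pipeline
REPO = "my-org/pipelines-repo"  # placeholder repository

resp = requests.get(
    f"https://api.github.com/repos/{REPO}",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

# GitHub reports the authenticated user's rights on the repo; webhook
# management generally requires admin access on that repository.
if resp.json().get("permissions", {}).get("admin"):
    print("This user can manage webhooks on the repo.")
else:
    print("Ask your IT team for admin access, or to create the webhooks for you.")
```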

Next is who will own the pipeline. So this is a decision that needs to be made depending on your team structure. For example, is a release manager going to push the changes through the pipeline? Who's going to manage this process?

So those are all the prerequisites that you have to think about before setting up your Gearset pipeline.

We'll now go on to a few golden rules.

So the first golden rule is that you should deliver changes small and often. This allows for smaller deployments, which could cause fewer issues, meaning fewer conflicts and therefore quicker merges.

Secondly, you should always create the feature branch off of main. This is to make sure that you are always making changes to the most up to date version of your metadata.

Main is the source of truth, which is in sync with production. It does not include untested features or work in progress.
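
In practice, "always branch off main" just means fetching the latest main before you cut the feature branch. Here's a minimal sketch of that habit; the branch name is only an example:

```python
import subprocess

def new_feature_branch(name: str) -> None:
    """Create a feature branch from the latest remote main."""
    # Make sure the local view of main matches what's actually on the remote.
    subprocess.run(["git", "fetch", "origin", "main"], check=True)
    # Cut the feature branch directly from origin/main, not from whatever
    # branch happens to be checked out locally.
    subprocess.run(["git", "checkout", "-b", name, "origin/main"], check=True)

new_feature_branch("feature/my-change")  # hypothetical branch name
```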

Thirdly, there should be no parallel workflows.

Changing the orgs without passing through the branches first, for example a change made directly in the org, is not detected by the pipeline and might be overwritten the next time the CI job runs, or might cause the CI job to fail. And finally, a version-control-first approach. There should be a version-control-first approach for the same reason as above: if you make any changes outside the pipeline, they won't be picked up within the pipeline. It also means you'll get the most benefit out of version control, because your history stays complete.

Those are the golden rules. I'll just press next now.

And then our final slide of the presentation before we get into a little demo is the overview of the setup steps, and this is within Gearset.

So if you have a Teams deployment license, then we recommend using a generic deployment user with admin permissions within Salesforce, then creating a team shared pipeline.

This future-proofs the pipeline. So if someone was to change roles or leave the company, because they might win the lottery, hopefully, fingers crossed, or if the rest of the team makes some changes, then someone else is able to take over the pipeline.

Additionally, you need to decide on what you want to include in your filter. Each CI job within your pipeline needs to be set up with a superset of filters. This needs to be consistent across each job, as this will reduce the risk of different results between one environment and the others. The filter needs to be all-encompassing, considering all the different potential types that your development team are likely to work on.

Thirdly, which I will show you, is populating your main branch. So main, as I've said, has to be a perfect copy of production, whilst the environment branches will start as a clone of main.

And, yep, I'll show you how to do this.

And then create a CI job for each environment, as they will move the changes from the environment branch to the org.

Finally, you should connect your issue tracking system. This allows you to track your changes throughout the pipeline and gives you access to a few extra features, that I'll show you a bit later on.

That said, that's the slides done. Let's get straight into the demo.

So what you can see here is my compare and deploy screen. What we're gonna be doing now is populating, the main branch, the repository, as I spoke about earlier.

So what I'm gonna do is go into my Salesforce orgs. I'm going to find the prod demo, which I had connected previously, then go to my source control and find the pipelines repo that I have got set up and the additional main branch.

I'm going to use a comparison filter with no metadata types being filtered just so, as I said, that it is all encompassing.

I'll press compare now.

This might take a little while. So, Maroof, if we've got any questions in the chat, it's a good time to ask them.

Hi, Charlie. Yeah. Thank you. We've had a couple pop up.

So I guess the first question someone's asked is around, you know, what happens if I forget to include a metadata type in the filter when setting up?

So the goal is to have a perfect copy of production, or of anything that you need to modify through the pipeline. So you just need to repeat this process that we are doing now, again, with the metadata types that you forgot the first time.

Thanks, Antonio. And we have had another one, actually, which is: what if production changes after I have seeded the main branch?

Same principle as before. We cannot, well, we should not set up the pipeline with a main branch that is not a perfect copy of production. So you need to repeat this process and integrate it with what you have already done the first time. So if, after the first seed, you realize that something has changed in production, maybe it was an urgent change that you needed to make, repeat this process and top up what you already have in the branch before moving on to the next step of the implementation.

Awesome. Thanks, Antonio.

Lovely. Perfect timing.

So as you can see here, when I'm seeding the repo, what I'm gonna do is compare, and deploy all the types. So I'm gonna select all here and then press next.

So what's happening now is Gearset is checking the comparison for any problem analyzers.

The thing with seeding the repo is we want to make sure, as I've said before, that it is a direct copy of production.

So when you do this, you may get lots and lots of problem analyzers coming through. I've only got a few, but you're going to want to deselect the problem analyzers just to make sure that this is a direct copy.

I'll then head to the pre-deployment summary.

It's gonna suggest fixes. I'm gonna press continue to summary.

So what this is doing is creating the deployment package, and these are all the metadata types that are gonna be deployed into the main branch. I can write a commit message saying, seeding repo here, and I'm then going to commit those changes.

This might take a little while. So whilst that is happening, I am going to go to continuous integration.

So as you can see here, this is the pipelines viewer. I haven't, created one yet, so I'm going to head to my CI jobs.

Apologies for the busy screen. There's lots of CI jobs going on at the moment in the team. But what I'm gonna do first is add a new deployment job, and this is gonna be the initial steps that I take, when creating the pipeline.

So the job name is going to be the integration job.

The source type is going to be GitHub, and it's being slow.

And I'm going to find that repo that I created before.

So, as you can see here, for the source branch, I'm going to create a new branch from main.

This is going to be the int branch, and I'm gonna create that there.

The target type is then going to be my integration environment as you can see here, the pipeline integration.

Once I've done that, I'm gonna press next.

So, yeah, once the source and target are all configured, I'm going to press next. For deployment action, we're gonna want this to be a deployment job. And then for synchronization type, we're going to want delta CI, which is only changes in the commit. What this means is that it's going to take the difference between the latest commit on your Git source branch and the most recent commit that was successfully deployed by this CI job from that source branch, and that difference is what will be deployed into the environment.
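
Conceptually, that delta is just a git diff between two commits. This isn't Gearset's actual implementation, but as a rough sketch of the idea, assuming you know the SHA of the last successfully deployed commit:

```python
import subprocess

def delta_since_last_deploy(last_deployed_sha: str, head_sha: str = "HEAD") -> list[str]:
    """List files that changed between the last deployed commit and the branch head."""
    result = subprocess.run(
        ["git", "diff", "--name-status", last_deployed_sha, head_sha],
        capture_output=True, text=True, check=True,
    )
    # Each line looks like "M<TAB>path" (A = added, M = modified, D = deleted).
    return result.stdout.splitlines()

# Hypothetical SHA of the commit the CI job last deployed successfully:
for line in delta_since_last_deploy("a1b2c3d"):
    print(line)
```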

Next is run job. So as I said, previously, you're gonna wanna run the job when the source branch is updated, which is essentially when the PR gets merged.

For the pull requests, you can validate pull requests targeting the source branch. You're gonna want to have that selected. And then it's up to you whether you want to send notifications for each pull request and its validations.

As you can see here, these are the difference types you want to deploy within the CI job. So new items are metadata items which are in the source but not in the target. Changed items are ones that are in both but have changed. And with the deleted items, this is where you need to make a bit of a decision.

So firstly, deletions are things that Gearset detects in the target but not in the source. If you tick this box, then destructive changes will be passed through the pipeline as normal. However, if you choose not to tick this box, then you have a decision to make on how you're going to deploy deletions throughout your org workflow. So it adds another thing to think about. For now, I'm going to select deleted items.

Here, you can find Apex unit tests. So you can choose which unit tests you want to run. The first option is the Salesforce default tests. You can choose to run all your tests on this selected CI job, you can specify tests which you want to run, or you can choose to not run any tests. This is entirely up to you and dependent on your workflow and how you want to go about doing your testing.

For now, I'm going to do Salesforce default tests.

Under this is the automated UI testing. So we have UI testing integrations, such as Eggplant, if you do so wish.

I'm not gonna run any tests, and I'm going to press next.

So for the metadata filters, as I said before, this is very important. You need to decide what you want to include in the filter. Each CI job within your pipeline needs to be set with the superset of filters as I described before. So this needs to be consistent across each job as this will reduce the risk of different results between one environment and the others.

The filter needs to be all encompassing, considering all the potential types that your development team are likely to work on.

For now, I'm just gonna select eight metadata types. Yours may be a lot more, but I'm going to press next.
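
One simple way to keep that filter consistent is to treat it as a single shared list and reuse it every time you configure a job, rather than rebuilding it from memory. This is purely a conceptual sketch; the eight types below are examples of standard Salesforce metadata types, not a recommendation:

```python
# One shared, all-encompassing list used when configuring every CI job,
# so no environment ends up filtering on a different set of types.
PIPELINE_METADATA_FILTER = [
    "ApexClass",
    "ApexTrigger",
    "CustomObject",
    "CustomField",
    "Layout",
    "PermissionSet",
    "Profile",
    "Flow",
]

def assert_superset(job_name: str, job_filter: list[str]) -> None:
    """Flag any job whose filter has drifted from the shared superset."""
    missing = set(PIPELINE_METADATA_FILTER) - set(job_filter)
    if missing:
        raise ValueError(f"{job_name} is missing metadata types: {sorted(missing)}")
```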

Here are deployment gates. These are for setting static code analysis rule sets. So here you can choose the standard rule set, or, if you've created your own one, you can also select it here. As you can see, Emily's created her own rule set, so if I wanted to put that on the CI job, then that is how I would select it.

Here are outgoing webhooks.

So if you have any third-party tooling you might be using, this is where you would configure it. I'm not gonna do this for now. But if you do have third parties, this is where you would configure the webhook.

This is the advanced section. So if you wanna create or edit any problem analyzer templates, this is where you would switch to that template.

So, for now, I'm going to apply all fixes to problem analyzers within this CI job.

And then finally is the notifications.

Here, you have a few options to add or remove notifications depending on what kind of settings you want on. As you can see, you have a variety of different options. You can email the results, you can have a Slack integration, Teams, etcetera.

I'm not gonna do any of that for now. I'm going to press save CI job.

As I said before as well, webhooks are key to pipeline functionality, so I'm then going to add this webhook here. As we said before, you need the permissions from your IT team to be able to do this.

I'm going to type in Charlie here and find the integration job.

So what I can do here is now duplicate this job and change the name to, UAT.

Here, I'm then gonna change the source, oh, to GitHub.

The source repository is much the same as last time. I'm gonna find my repo, create a new branch from main, and call it the UAT branch.

The target is then going to be my, pipelines UAT org.

We can just skip through the next steps. It's gonna have the same metadata filters.

I can keep on going through this and save that CI job as well.

Get rid of these things. And then finally, I'm going to create the production job.

So this time, I'm not gonna create a branch. I'm simply going to have the source branch as main and the, target as production and skip through these.

Okay.

Now that I have my CI jobs set up, I'm gonna now implement them into the pipeline.

Firstly, however, I'm going to add a developer sandbox. This, as you saw earlier within the diagrams, is the blue box where you'll be developing and then committing into the pipeline.

So I'm gonna call this dev one.

I'm gonna choose an organization, which is one that I've already connected. I'm not gonna assign it to a team member, and I'm not gonna choose an environment to connect the sandbox to. I'm gonna press save here.

I'm then gonna add another developer sandbox.

It's gonna be dev two.

Dev two. And this is just as an example of what you might see if you have more than one sandbox available.

Now that the developer sandboxes are created, I'm then going to add the static environments, which are the CI jobs that I have just created. So I'm gonna choose from an existing CI job. It is oh, what am I looking at? Sorry about that.

I'm going to add a static CI job, create new environment.

Interesting. For some reason, it's not showing up.

I'm not sure why that's not showing up, Antonio.

There's always gonna be a bit of a hiccup.

I've created the job. Choose from existing.

That's weird.

Let me recreate the pipeline.

I'm gonna add a new pipeline.

Charlie pipeline demo. Sorry about this, everyone. I'm gonna choose the repository type, and it is going to be Charlie Gearset Pipelines.

And save that.

Okay.

Hopefully, this should work. Let's see. Add static environment.

Choose from existing CI job. Okay. There we go. That makes sense now. So there's the integration job.

I'm then going to add the UAT job, add the pipeline production job, and we are also gonna add those two developer environments.

If you do have any questions, let me know whilst I put this all together. I'm not sure if we have any.

If not, I can keep on going.

We're all good for now, Charlie. When you're done, we'll address the question we've had come through.

Good stuff.

And then add another developer sandbox.

Sorry. This took a little bit longer than expected.

Okay.

Now that we have all the, environments within the pipeline, I'm going to press edit environment here.

For the classic pipeline view, this is how you're going to lay them out, and then you're going to create these links here.

I'm gonna save the changes, and this is what it should look like, or, depending on your workflow, a variation of this.

Now that the pipeline's all set up, I'm gonna go into the settings and show you a few little details where you can get a bit more granular control over your pipeline. So you can change the name of your pipeline to whatever you want to call it. The default base branch for new feature branches should always be your main branch, as we discussed earlier.

Create back propagation pull requests and run validations on final environment: we'll talk about these settings in the next episode of the series, so don't worry too much about those at the moment.

So the first of the checkboxes is the auto-delete feature branches option. If you do want to delete them manually, then this is where you can change this setting. With this ticked, any feature branch will be deleted when there are no PRs open against it.

Secondly is pull request descriptions.

So this is another deployment gate that we have that means the PRs need descriptions before they can be promoted to the next environment.

These kind of gates are important for creating security in your deployments.

Thirdly is copy version control reviewers.

So when promoting a PR through the pipeline, through the environments, you can select a reviewer. And with this ticked, you can have the same reviewers throughout the entire pipeline.

As I said at the start, this is a key part of using version control. So to have this configured in the correct way is essential to a working pipeline.

The next option here is only editable by pipeline owner. So if you own the pipeline and don't want other team members to change the settings, you can tick this so that only the pipeline owner can edit the pipeline.

And then the final option on this screen is developer sandbox visibility.

So as you can see in the background here, I have two, dev sandboxes connected.

If I didn't own one of these and I selected team members cannot view other users' assigned sandboxes, then I would only be able to see the sandbox that I have been assigned. But with this option selected, team members can view all dev sandboxes.

Now the final option I can see here is the Jira and Azure work items. So as I said, once you've connected these integrations in Pipelines, this is where you configure both the Jira and Azure work items. You're able to automatically update issue status when a feature is promoted into an environment.

You have to tick the box, then simply choose the environment, choose the project that you've been working on, and then each issue status can be updated as the change gets promoted through the pipeline.

The Azure work items works in exactly the same way. I don't have an Azure DevOps connection, but if you do, this is where to find it.
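
Under the hood, updating an issue's status like this is a Jira workflow transition. Gearset handles it for you once the integration is connected, but as an illustrative sketch of the underlying Jira Cloud REST call, with the site URL, credentials, issue key, and status name all as placeholders:

```python
import requests

JIRA_SITE = "https://example.atlassian.net"   # placeholder Jira Cloud site
AUTH = ("user@example.com", "api-token")      # placeholder email + API token

def transition_issue(issue_key: str, transition_name: str) -> None:
    """Move a Jira issue to a new status, e.g. when it reaches UAT."""
    base = f"{JIRA_SITE}/rest/api/2/issue/{issue_key}/transitions"
    # Look up which transitions are currently available for this issue.
    transitions = requests.get(base, auth=AUTH, timeout=30).json()["transitions"]
    target = next(t for t in transitions if t["name"] == transition_name)
    # Apply the chosen transition to update the issue's status.
    requests.post(base, auth=AUTH, json={"transition": {"id": target["id"]}},
                  timeout=30).raise_for_status()

transition_issue("PROJ-123", "In UAT")  # hypothetical issue key and status
```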

And with that, that's the end of the pipeline setup webinar from me. There were a few hiccups in there, but there always are.

So I'll hand over to Maroof and Antonio in case there are any questions.

Wonderful. Charlie, thank you so much for talking us through that.

We haven't had any new questions pop up, but just in case folks missed it in the Q&A, we did have a question which was: we keep version control in Azure DevOps. All the dev work is committed from a feature branch to a release branch, and a PR is raised to the dev org.

The same PR is then promoted to QA, UAT, and production.

My question is, can we have the same release branch as a source for all the orgs in the pipeline? Antonio, would you like to expand on that?

Yeah. If you look at this diagram and imagine that you collect feature branches into a single release branch, it means that all the pull requests entering the pipeline will be opened from that release branch. So number four, number six, and number eight will be opened from that release branch.

That also means that if you need to exclude features that you don't want to move forward from that pull request, it's a manual process, because the automation will pick up the release branch as it is. So if you collect feature branches and they enter the pipeline as a group, they will go through the entire process as a group.

That's wonderful. Thank you, Antonio.

And, thank you, Charlie, for taking us through the pipeline setup today.

So I can't see any more questions, but by all means, if you folks do have further ones, do not hesitate to reach out to us.

But it looks like that's kind of it for today. So that brings us to the end of episode one in the Pipelines webinar series. I hope you've all found this session useful. And, you know, this session has been recorded, so we'll make it available to you all, along with our other resources and content library.

But we will be back. So Charlie, Antonio, and I are back this time next week, when we will be talking to you about effective pipeline use. We're gonna show you how to start your user story within a Gearset pipeline, how to propagate changes both forwards and backwards across your different environments, and also how to deal with merge conflicts without having to leave Gearset. In the meantime, if you have any additional questions about Pipelines, automation, or anything else within Gearset, you can contact Charlie, myself, and the rest of your customer success team by emailing success at gearset dot com or by dropping us a message in our chat. But until next time, have a great rest of your day. Thanks, and thanks for joining.