Description
Catch up on this session from Dreamforce 2025, and learn how to scale your pipeline from a simple setup to a multi‑team, multi‑region powerhouse — capturing every stage of growth and showcasing features for seamless delivery at any scale and complexity.
Speaker:
Claudia McPhail, Product Manager
Transcript
Hi. Welcome to my talk, and thank you so much for coming along today to hear about the journey that you can go on to evolve your CI/CD pipeline for a team of any size, any complexity, and any region as well.
As we go on this journey together, one of the things I hope that you'll be able to do is to recognize yourself at one or many of the various stages that we talk about, and to see the vision for where you can go as a team as you evolve your CI/CD practice according to the needs of your team and the business. A little bit of an introduction. Nice to meet you. My name is Claudia McPhail.
I'm a product manager at Gearset. And over the six or so years that I've been working at Gearset, I've had the privilege to work with a lot of teams of varying sizes and complexity, all the way from one-man-band startups and early system integrators to really large, complex teams with lots of partners, projects, and expansions into multinational and multi-org situations as well. And over that time, I've had the opportunity to talk to a lot of teams and to think about the questions that many of them are confronted with as they begin to evolve their DevOps practice, and to see how they can expand and evolve what they're doing in a way that's sustainable, that allows them to achieve the outcomes they want, and that doesn't compromise on their ability to grow in the future.
A little bit about Gearset. So Gearset is a leading Salesforce DevOps platform. We work with over three thousand teams across all verticals. We also have, I would like to say, a fantastic customer success team who really do their best to partner with all of the companies we work with, meet them where they are, and grow with them. Working with them and understanding their journeys is very much the source of a lot of what we'll talk about today: meeting people at the point in the journey they're on, understanding what drives them to that next stage, and helping them to mature and grow as they need to.
So the journey. Let's touch on a couple of key points here. We're going to explore, first of all, a good foundation of DevOps: making sure that we establish early on patterns of working which will stand us in really good stead and make sure that we're able to grow from that point onwards. We want to make sure that no one gets left behind, because we're an Ohana, and make sure that we have processes in place to allow real transparency, a feeling of ownership, and a sense of everyone understanding why things are happening in the way that they are.
Accommodating different timelines: looking at how we can be flexible with what we're doing in a way that lets us add and remove project work and implementations of new clouds and products without compromising on our ability to deliver on those BAU tasks as well.
We want to develop sustainably. We want to make sure that everything we do, we can keep doing indefinitely at a greater volume without compromising the psychological safety or the ability of our teams to deliver work to high standards. We want to be consistent as well. We want to interrogate why we're doing what we're doing and make sure that we're consistently applying principles throughout, not taking on products or processes because we feel that we should be, but rather understanding why we're doing it. And also having a clear vision for the future, which I hope is what we'll look at today and maybe what you'll come away from this talk with.
So having said all of that, I'm actually going to get into a demo because I want to take you through a couple of examples of what that would look like for you.
So with that said, here we go. What you're looking at here is a very simple pipeline, in which some of you may recognize yourselves from your early days of establishing core DevOps principles. So here, we recognize that we need to have separated environments.
We can see here that we're developing and testing our changes in the Claudia dev sandbox, separately from our production organization. Now, a lot of teams struggle to make the jump from knowing that you need to separate your environments to a CI/CD pipeline. And often what happens is they use a manual process to the point of breaking, and then make a jump to a much larger pipeline. Actually, I would always recommend starting a lot earlier and establishing a really robust framework as early as possible in this process.
Because by doing that, you then put in place the tools and the process you need to grow. So for example, here, as the solo admin-developer on my team, the solo architect, the solo everything for Salesforce, I recognize that I need to separate out my development and my production environments. I've also recognized I need to start using source control, because it opens the door to a lot of really interesting things that I can do. So in this case, for example, I've got a couple of changes. I've got an open branch here, tech debt cleanup part four. I want to open a pull request to production. Maybe I can summarize my changes or use a template, for example.
What this is going to do is add a record to source control that I've made this new flow. It's part of a tech debt cleanup project I've got, the fourth part, as you can see. I've consolidated a couple of flows, created a new one that's going to be more efficient, and summarized what's happening. And what this lets me do now is go back into my own work, and I can see what's happened.
Obviously, that's part of the point of using source control. But what it also allows me to do is start providing opportunities for automation, because it's not really a good use of my time to put a package together, send it to the target, sit there and wait for it to validate, then run some tests and run something else, and then say, okay, now I can validate it. Actually, it's much better for Gearset to automate all of these things on my behalf.
So you can see these are running through, and here's what I made earlier, tech debt cleanup part three, where Gearset is checking for merge conflicts. Maybe slightly less likely considering it's just me working on all this stuff, but nonetheless something that's important to have, and one we'll come back to in a second. Validation checks: we can see that we've validated the delta of my new changes.
Testing: are there any Apex tests that we've detected that we could be running? Any end-to-end tests we could add in here? Automated code review: this is actually looking at the architecture of our code.
So Gearset will scan to see whether there are any issues with the way it's written that we want to flag. Reviewers and other checks: I'm reviewing my own code at this point, but, again, this will become important later. Post-deployment steps and, of course, any ticketing items I've included as well.
And because I've automated all of the things that, at this point in my maturity level, it's sensible to automate, all that's left for me to do is tick this and promote my change, and that will promote it up to production. In this case, actually, I'm going to promote tech debt cleanup part one, if it makes sense to do that first.
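As a rough sketch of that gate, the idea is that a change is promotable only once every automated check has passed. The check names, the `Change` type, and the data here are invented for illustration; this is not Gearset's actual API:

```python
# Hypothetical sketch of the automated pre-merge checks described above.
# Check names and the Change type are illustrative, not Gearset's API.
from dataclasses import dataclass, field

@dataclass
class Change:
    name: str
    results: dict = field(default_factory=dict)

def merge_conflict_check(change): return True   # no conflicting edits found
def validation_check(change): return True       # delta validated against target
def apex_test_check(change): return True        # detected Apex tests pass
def code_review_check(change): return True      # static analysis raises no blockers

CHECKS = [merge_conflict_check, validation_check, apex_test_check, code_review_check]

def ready_to_promote(change: Change) -> bool:
    """A change may be promoted only when every automated check passes."""
    for check in CHECKS:
        change.results[check.__name__] = check(change)
    return all(change.results.values())

print(ready_to_promote(Change("tech-debt-cleanup-part-4")))  # True: all checks green
```

In practice each check would call out to the relevant service; the point is that the promotion decision becomes a single, fully automated gate rather than a series of manual steps.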
Of course, you're potentially not going to linger that long at this stage, and that's perfectly sensible. In fact, let's go to the next stage here. Now, this is potentially going to look a little bit more realistic to a lot of you, because now someone else has joined my team. Toby's joined. He is a fellow individual contributor.
He is also pushing changes into one of our new environments, QA, up towards production. And this introduces a new issue to the landscape, which we now need to accommodate: before, it was just me sending changes to production and then coming back to my sandbox again. No problem.
But now Toby needs to receive the changes I've worked on. I need to receive the changes that Toby has worked on. How do we do that?
Well, technically, we could refresh our sandboxes, but I think I speak for most people when I say that people don't generally enjoy refreshing sandboxes. So let's interrogate why we refresh them. I think, generally, you would say that you refresh them to keep them up to date with production. But what if there was a way for us to keep them up to date with production, to keep the metadata, and maybe the config data as well, up to date with production, without the administrivia of having to actually refresh them and set up all my integrations and all my users all over again? Well, there is actually a way to do that.
As changes are contributed to production, and here's one being contributed right now, in fact, Gearset checks to see whether those changes need to go back into the sandboxes, and indeed they do.
So here, for example, for my sandbox, I can see that quite a few updates have been pushed to production recently from myself and from Toby.
And what this pipeline setup gives me is a way to leverage a combination of knowledge about what's happening in my org and knowledge about what's happened in source control to provide a list of the issues which have been pushed into production. These are all, of course, linked to Azure DevOps tickets. I can see the changed items and any pre-deployment or post-deployment steps that need to take place, and I can update those now. This also checks for work in progress, which, of course, is really important. We want to make sure that we are updating changes without overwriting anything that could be critical. It would be very upsetting to have overwritten that.
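As an illustrative sketch of that back-propagation check, using invented environment names and pull request IDs, the comparison amounts to asking which production changes each sandbox hasn't received yet:

```python
# Illustrative sketch of back-propagation: compare what production has
# against what each sandbox has, and list the outstanding updates.
# Environment names and PR identifiers are invented for the example.
prod_commits = ["PR-101", "PR-102", "PR-103"]

sandboxes = {
    "claudia-dev": ["PR-101"],
    "toby-dev": ["PR-101", "PR-102"],
}

def outstanding_updates(sandbox_commits, prod_commits):
    """Changes in production that have not yet been propagated back."""
    return [c for c in prod_commits if c not in sandbox_commits]

for name, commits in sandboxes.items():
    print(name, outstanding_updates(commits, prod_commits))
# claudia-dev ['PR-102', 'PR-103']
# toby-dev ['PR-103']
```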
Now, what it also does is make sure that Toby and I always work in environments which are up to date with prod, so we're confident anything we're developing will work in the environment it is destined to end up in. Now, the other element, of course, that's really important here is that we've added a QA environment. This is the first point at which Toby's and my changes meet. Again, for some teams, this can be a point at which, during their scaling journey, they hit a real bit of friction.
And this is also where we can take advantage of an opportunity to introduce automation to soothe that away. Here, we have a merge check stage. Natively, of course, in Git, you have the ability to check whether changes conflict with each other. What Gearset does at this stage, and you'll notice here that we say your VCS may display a merge conflict which can be ignored, is something quite clever. We assess the merge conflict, and we use our knowledge of Salesforce, because, of course, we're a Salesforce DevOps platform, to assess whether this merge conflict actually needs to be resolved by Toby or me. Because if, for example, Toby's updated one field-level security setting on a profile and I've updated a totally different field-level security setting on the same profile, those are not in fact overlapping changes. Gearset understands that and can semantically merge those together. Therefore, that's not a merge conflict that needs to be surfaced to me.
If the two of us change the same element on a flow, for example, that is absolutely a merge conflict that needs to be surfaced, and Gearset will flag it. And what this does is it doesn't only automatically resolve things that would otherwise be noise; it also makes the merge conflict results that we see much more meaningful, because we know that anything that gets surfaced to us is worth looking at. We don't need to just go through and say, yep, choose right all the way through, and accidentally overwrite each other's changes.
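A minimal sketch of that semantic merge idea, using an invented dictionary model of field-level security rather than the real profile XML that a metadata-aware tool like Gearset actually operates on:

```python
# Sketch of the semantic merge idea: two edits to the *same* profile only
# conflict if they touch the same field with different values. The data
# model here is invented for illustration; real profiles are XML metadata.
base = {"Account.Rating": "read", "Account.Tier": "read"}
mine = {"Account.Rating": "edit", "Account.Tier": "read"}   # I changed Rating
theirs = {"Account.Rating": "read", "Account.Tier": "edit"} # Toby changed Tier

def semantic_merge(base, mine, theirs):
    merged, conflicts = {}, []
    for field in base:
        a, b = mine[field], theirs[field]
        if a == b:
            merged[field] = a
        elif a == base[field]:      # only theirs changed it: take theirs
            merged[field] = b
        elif b == base[field]:      # only mine changed it: take mine
            merged[field] = a
        else:                       # both changed it differently: real conflict
            conflicts.append(field)
    return merged, conflicts

merged, conflicts = semantic_merge(base, mine, theirs)
print(merged)     # {'Account.Rating': 'edit', 'Account.Tier': 'edit'}
print(conflicts)  # [] - nothing to surface, even though Git would flag the file
```

Git sees two edits to the same file and reports a textual conflict; a merge that understands the metadata sees two non-overlapping changes and combines them, which is why only genuine conflicts reach Toby or me.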
And, again, by establishing very early on these opportunities for us to utilize things which are actually native to source control, layered onto a Salesforce-specific platform, it removes what would otherwise be a point of friction. The fact that we have an individual contributor whose sandbox needs to be kept up to date: totally resolvable. The fact that we have two people deploying changes which may in fact overlap with each other: again, totally manageable. It's not a stress. And, actually, what that allows us to do is potentially move to the next stage, which is to add more people. Now, as your company begins to grow, you're likely to begin to take on more teams. And in this example, we have a multinational pipeline, because we have our UK team, where Toby, Lawrence, Nicole, and I all work, and we also have our US team: Alastair, Kelly, and Ryan.
And this, again, gives us great visibility. Even though I don't necessarily work directly with the US team, we work in slightly different time zones, and we kind of work a bit asynchronously, I can actually see exactly what they're contributing. I can also see how in or out of date their sandboxes are, and I have a good idea of where we all work relative to each other. I have a really good clear picture of what everyone's doing, so it gives me confidence in what I'm contributing.
Again, we have this automatic merge conflict and quality control aspect happening. We can see here, for example, that a change is blocked from going through because of some issues.
But we also have a certain fluidity, because all of these changes can move independently up the pipeline. This is the other thing that allows us to scale really well: feature independence. Because each of these changes can move independently up the pipeline, what that allows us to do is to say, well, okay, Apex class 456 is blocked because it's completely failed validation, but that shouldn't block task 86 from going forward.
That's my change. I want to promote it. It's passed all of its checks. It has a totally clean bill of health.
It's been reviewed. Maybe, for example, we might have integrated a GitHub Action as well: if I look at my reviewers and other checks, we have a GitHub Action which could have automatically assigned someone to review this pull request. Again, there are opportunities here for automation throughout. And as you begin to scale, integrating those will reduce the friction that is traditionally associated with the number of people multiplying also multiplying the amount of work you face.
So we can go ahead and promote that if we need to.
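That feature independence boils down to evaluating each change on its own check results, so one red change never holds back a green one. A small sketch, with invented change names and statuses:

```python
# Sketch of feature independence: each change carries its own check results,
# and only fully green changes are promotable. A failed change blocks
# nothing but itself. Names and statuses are illustrative.
changes = {
    "Apex class 456": {"validation": "failed", "tests": "passed"},
    "Task 86": {"validation": "passed", "tests": "passed"},
}

def promotable(changes):
    """Return the changes whose checks have all passed."""
    return [name for name, checks in changes.items()
            if all(status == "passed" for status in checks.values())]

print(promotable(changes))  # ['Task 86'] - Apex class 456 stays back alone
```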
Now, what we also have as we begin to scale is more demands on what we need to deliver. It's not just a case of someone needing a new picklist or this permission set needing updating. We're now looking at longer-term projects which may span months, maybe even up to a year, though, fingers crossed, maybe not that long.
They might involve partners. They might involve other parts of the business as stakeholders, so we need to be flexible enough to accommodate those as well. Now, you'll notice the shape of the pipeline has changed as we've added our contributors. We've added our hotfixes.
We've also added a new element as well, which, of course, is our long term project.
Now, this is a Data Cloud implementation, but it could be the implementation of any new Salesforce product. Agentforce, for example, very hot right now.
And we have put it in its own separate long-term project environment because we want to make sure that the changes which are independently moving up the pipeline can continue to do so, that our changes will be back-propagated, and that QA only really contains the changes which are ready to go up into production.
What we often find with teams as they begin to reach this project stage in their maturity is that they might have projects that are hanging around UAT for a really long time, which are affecting the fidelity of people's testing but aren't ready yet to go to production.
Or, for example, the projects are present in several sandboxes, dependencies form on those pieces of project work or that new metadata, and then those pieces of metadata become undeployable. A flow, for example, referencing Data Cloud in a pipeline where we're not ready to deploy Data Cloud becomes undeployable. We really want to avoid that situation. So what we do is we compromise. We have a long-term project pipeline, which I'm going to pop over to here, and this is essentially a pipeline within a pipeline.
We still have this concept of feature independence, where we branch off the base branch, and that allows us to contribute our changes. But instead of branching off main, we're branching off our project branch. So when Data Cloud development wants to contribute a few changes, maybe through DataKit-123, we commit them to a feature branch, we open a pull request, and we merge it into the Data Cloud implementation branch, and we do that over and over again. And while we're doing that, we're building up the sum of that project. The Data Cloud implementation branch is also receiving changes from our main pipeline, back-propagated in.
And this makes sure that the sandboxes where we develop and test our new product or our new project or our new implementation, maybe Revenue Lifecycle Management, or Revenue Cloud Advanced as it now is, are still up to date with production as a whole. Because the sandboxes we develop projects in often become useless if we don't keep them up to date. But, again, refreshing sandboxes: slightly fraught topic. So this, again, is how we balance the needs together. We need to keep the sandbox up to date, but we also need to make sure it is not refreshed until our project is ready to go live, and we need to make sure that our work can be contained before we're ready to take it into our larger pipeline, like so.
And this accommodates all of these things. So when you're assessing how your pipeline is going to grow in the future, taking into account this need for separate timelines, and being able to flexibly add and remove different elements of your pipeline, is going to be really important.
So once you've established all of these things, we've looked, for example, at how we can individually move changes up, assess them automatically, and at all these windows for automation to be introduced, making sure that we're working as effectively and efficiently as possible to smooth over what could be the burden of more and more people joining the team. For example, maybe we even have a huge volume of changes, and what we really want to do now is use continuous delivery rules, where, for example, if we have forward promotions and they've passed every single one of those automated checks, we just want them to move forward.
Whether they pass branch protection rules or they don't: again, it's very flexible. But, again, we just need to make sure that we understand why we're having these problems. It's not necessarily that we're having problems because the structure of how we're working is wrong. It may just be that there are places where work is beginning to multiply and become a time sink that really could be smoothed out through the application of a little bit of automation and some sensible interrogation of why these problems are happening.
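As a sketch of such a continuous delivery rule, with an assumed rule shape rather than Gearset's real configuration format: a forward promotion that has passed every automated check, and satisfies branch protection, moves on without manual sign-off.

```python
# Sketch of a continuous delivery rule: auto-promote a forward promotion
# when every automated check has passed and branch protection allows it.
# The rule shape and field names are assumptions for illustration.
def should_auto_promote(change):
    return (
        change["direction"] == "forward"
        and all(change["checks"].values())
        and change["branch_protection_satisfied"]
    )

change = {
    "direction": "forward",
    "checks": {"validation": True, "tests": True, "code_review": True},
    "branch_protection_satisfied": True,
}
print(should_auto_promote(change))  # True: promote without manual sign-off
```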
So once you've established all of these things, and you might feel like I'm lecturing you here, the world really is yours to take as you wish. For example, if you wanted to go a step even further, we can actually go international, because here we have a set of multiple production orgs, as well as their own UATs, as well as their own contributing sandboxes. Because in this pipeline, what I've done is I've actually created a new production organization on Alibaba Cloud, because I want to do business in China.
And what I've done here is I've modularized my repository into three modules: a core module that contains the metadata that's common to both, so if I update it once, it deploys everywhere; a China module that contains China-specific metadata, so that contains the metadata that's localized to the China market, like integrations with WeChat and Alipay; and a third module for rest-of-world metadata. So those, for example, might be my permission sets or flows or layouts which specifically reference rest-of-world-specific metadata.
And doing this again allows me to be incredibly efficient, because I update my core module when I want changes to go to all of my orgs, and the org-specific modules when I want to divide those up. So there is really nowhere that you can't go with a really well-established pipeline, where you've thought about your processes and why you're working in particular ways. And even if we start all the way back here with pipeline number one, you can hopefully see now how the things that we start doing at this stage grow and build and expand as we come on to that really mature international pipeline right there at the end, if that is the right path for your team to go down.
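That modular setup can be sketched as a mapping from modules to target orgs: the module and org names follow the talk, but the routing code itself is purely illustrative.

```python
# Sketch of the modular-repository idea: map each module to the production
# orgs it should deploy to, then compute deploy targets from changed paths.
# The directory layout and org names are assumptions for the example.
MODULE_TARGETS = {
    "core":          {"china-prod", "rest-of-world-prod"},  # common metadata
    "china":         {"china-prod"},                        # WeChat, Alipay, etc.
    "rest-of-world": {"rest-of-world-prod"},
}

def deploy_targets(changed_paths):
    """Union of target orgs for the modules touched by a set of changes."""
    targets = set()
    for path in changed_paths:
        module = path.split("/", 1)[0]   # top-level directory names the module
        targets |= MODULE_TARGETS.get(module, set())
    return targets

print(sorted(deploy_targets(["core/objects/Account.object"])))
# ['china-prod', 'rest-of-world-prod'] - one core change reaches every org
```

Touching only the China module would instead yield just the China org, which is exactly the "update core once, deploy everywhere; localize where needed" behavior described above.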
So with that said, let's talk about the journey one more time.
We lay a good foundation to start with. We understand why we're doing it. We understand why we're starting, and we're starting early so that we can really bring everyone on that journey with us.
Because, of course, we're an Ohana. No one gets left behind. And making sure that people are included. Actually, you'll have noticed throughout that there was no distinction between an admin and a developer. We're all contributing, and we're contributing in a way that everyone understands; everyone has the same processes, and everyone is kept to the same standards. So we have a reliable and predictable way of working, which allows us to expand as a team without undue friction.
We can accommodate different timelines. We're really flexible, and we understand a nice balance between project work, which needs to be built on iteratively, and the feature independence of our BAU work. So being flexible in the way we work allows us to accommodate the different timelines of the projects that may be required of us in the future.
Developing sustainably, my personal favorite subject: making sure that what you're doing can be done forever without it beginning to have a negative impact, and making sure that you have pathways to expand where you multiply the volume of work you're producing without multiplying the effort it needs. These points of friction, these time sinks that sometimes emerge as teams expand and begin to work on more complex implementations, can really have a negative impact in the long term. So making sure that the way you're working is sustainable, and that you can carry on doing it without making compromises on behalf of your team, is really important. And always keep that assessment in mind when you're thinking about consistency as well: making sure that you're not adopting products or updating processes because you feel like you should, but rather understanding what that means for you and what it unlocks in terms of potential.
And then finally, the kind of thing that ties all of this together is having a clear vision of the future.
Understanding where you're going and where you can go as a team, hopefully something you've gotten out of this talk, but also thinking about why you want to go there, seeing whether you can consolidate down what you're doing or whether you have an opportunity to expand out. Having something that pulls you towards where you should be, where you want to be, is going to be really key for you.
So with all of that said, I hope that you feel you've learned something from this little Dreamforce rerecord. If you have any questions, please don't hesitate to reach out. We clearly love talking about this. And, yeah, thank you so much for your time.