Description
In this 30-minute session, Richard Owen (Group Product Manager) and Iz Wright (Product Manager) will show you how high-performing Salesforce teams break the cycle — building release processes that get stronger as complexity increases. This session explores:
- Why scaling breaks most Salesforce pipelines — and the patterns that cause it
- The practices that help large teams release faster and break less
- How real teams go from release chaos to calm, controlled deployments
- What good looks like at scale, and how to get there without starting from scratch
- The tools that help teams scale confidently, including Gearset Pipelines and more
Transcript
So without further ado, I'm very pleased to introduce Richard and Izz, who will be running today's session for us. So if everyone is ready, Richard and Izz, I shall hand over to you.
Thanks very much, Amy. Thank you very much, everybody, for coming along today. We'll start with some quick introductions. Hi, I'm Richard. As one of Gearset's group product managers, my day job is working closely with Gearset's over three and a half thousand customers around the world, understanding exactly what their teams need to make themselves and their businesses successful. We're really excited to share some of that with you today, and I'm delighted to be joined here today by Izz.
Hello. I'm Izz, and I'm a product manager here at Gearset. I spend a lot of my time chatting with our customers, usually about our Pipelines product, and I'm super happy to be involved in this webinar. Looking forward to what we're talking about.
Thanks.
So here's what we're gonna be talking about today. First, what stops teams from succeeding as they scale up? Then we'll go into some real-world tips and tricks for success, things that we see teams using day in, day out to move faster and more successfully. We're gonna dive into some of the things that we've been working on over the last few months to help teams do just that. And then finally, we're gonna step back and ask a bigger question. Most tools up until now have enforced a feature independence working model to find success. But is there another way that teams can work?
So our goal for the next twenty minutes or so is to give you a bunch of practical, real examples of how teams can and have succeeded in implementing DevOps processes at scale, with the hope that you can walk away with something that you can put into practice tomorrow or next week with your teams. So let's think for a minute about some of those larger setups. What are the problems teams have when they're trying to succeed at scale?
So your company may have multiple Salesforce orgs, perhaps one for Europe, one for North America, another for APAC, or different orgs for different business lines. And when those global Salesforce teams are bogged down by release bottlenecks, what does that mean? Where's the pain for them?
So it means that critical new product information or a pricing structure, which needs to roll out consistently across all regions worldwide, might get delayed in one of them, which creates an inconsistent customer experience.
Or that new compliance feature, which is vital for a specific market, can't go live, and that puts the whole business at risk.
You might have multiple development teams, internal teams, external consultants, maybe a center of excellence, all trying to deploy changes at the same time. And when you've got silos between those groups, all working in their own separate sandboxes, especially in a large shared Salesforce environment, the impact can be really magnified. So a team in Germany might unknowingly override a crucial update, or changes from a team in the US might clash.
Critical fixes can get lost. And suddenly, some of your sales reports are showing incorrect data or a key integration breaks. And that's not just inefficient. It erodes trust in the very platform that you rely on day to day.
And then what about when there's unclear responsibility or a genuine fear of making changes in a really customized org that's been around for years?
So in these big complex environments, the fear is that a small tweak here can bring down a critical business process over there, maybe something that affects customer service across the world.
So then what happens next?
That innovative idea from the sales team, the one that could give you a competitive edge, just gets shelved because the risk of breaking something in such a large and interconnected system is too high.
And the business effectively chooses stagnation over innovation, because the development process feels too fragile to take those big bets with. And this isn't just the way things are in big companies. These are direct drags on your ability to adapt, to serve customers effectively, and for the business to achieve its strategic goals. So that lag you might feel, those system limitations that crop up just when you need to be most agile, often traces back to deep-seated challenges in how Salesforce teams are equipped to deliver.
But it's not just about anecdotes. There's evidence that backs this up from our latest State of Salesforce DevOps report. The data is pretty clear that complexity doesn't just frustrate teams. It holds back growth.
So only twelve percent of the teams that we spoke to who work with disconnected or multi-tool setups can deploy in under an hour. But for teams using connected and consolidated tools, where you've got full visibility and unified workflows, and everybody can see what's going on and is working on the same page, that number jumps to sixty-six percent. And that's the difference between releasing once a week and releasing multiple times a day.
And the pressure's only going up. So more than half of teams told us they adopted DevOps simply to keep pace with their workloads. You've got more users, more data, more integrations; you adopt Agentforce, you start implementing AI-based solutions. There's a lot more going on, and you need DevOps to keep pace with that workload.
Of course, as that AI adoption rises, those cracks show even more.
So for twenty-two percent of teams, security risks and unclear responsibilities were the biggest barriers to using AI today.
In other words, when systems get more complex, confidence erodes and innovation really stalls. That pain of bottlenecks, silos, and fear of change isn't theoretical. It's what we see teams dealing with day to day.
The good news is that, to a large extent, that's also fixable, and that's what we're gonna be focusing on today.
So what do we mean by DevOps? It's not just about tools or automation, although those are important.
At its core, DevOps is a way of working that brings people, processes, and technologies together to improve how teams build and release changes.
And for Salesforce teams, that means adopting practices like version control, continuous integration, continuous delivery, and automated testing. But it also means creating a culture where admins and devs can collaborate seamlessly.
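To make that concrete, here's a minimal sketch of a CI validation gate. It assumes the Salesforce CLI (`sf`) is installed and an org alias is already authenticated; the alias, paths, and test level are illustrative rather than a prescription, and the exact subcommands can vary by CLI version.

```python
# A minimal sketch of a CI validation gate, assuming the Salesforce CLI (`sf`)
# is installed and an org alias "qa" is already authenticated. The alias and
# paths are hypothetical; adjust them for your own pipeline.
import subprocess
import sys

def validate_against_org(source_dir: str, org_alias: str) -> bool:
    """Run a check-only deployment (nothing is saved) with local tests."""
    result = subprocess.run(
        ["sf", "project", "deploy", "validate",
         "--source-dir", source_dir,
         "--target-org", org_alias,
         "--test-level", "RunLocalTests"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    # Fail the CI job if validation fails, so the PR can't merge on red.
    sys.exit(0 if validate_against_org("force-app", "qa") else 1)
```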
So DevOps helps you to move faster and more safely, with greater visibility and confidence, which is crucial when you're dealing with multiple orgs, frequent releases, and complex business systems. When it's done right, it turns Salesforce development from a source of friction into a genuine competitive advantage.
And ultimately, building a high-performing Salesforce team isn't just about better tech.
It's about ensuring that your people, especially those who are building and maintaining your Salesforce orgs, have the processes and tools they need to confidently deliver value every single day. And when they can do that, the whole business reaps the rewards. So we're gonna be focusing on a few particular points today: team culture, helping teams work together on the same page; shifting left in the process, so you can consolidate actions earlier; and unifying your process so that your whole team is aligned. And we'll do that through the lens of some of those real-world setups that we see teams using today.
So let's start with a small team. Let's take five to ten developers with a clear path to production; it might look something like this. This team is relatively easy to manage. Each team member has their own developer sandbox to isolate their changes, and this applies to both devs and admins, so anyone that's making changes on the platform.
All changes are independent, and features can move past each other if needed or be held back if required. They have a partial copy QA org where changes are merged together, and early quality assurance and integration testing is carried out.
UAT is where their business users tend to come in and check that the changes meet their requirements and needs, and then, of course, production is production.
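As a mental model only, that path to production can be written down as an ordered list. The environment names below are hypothetical stand-ins for the sandboxes just described.

```python
# A toy model of the small team's path to production described above.
# Environment names are hypothetical stand-ins, purely for illustration.
PIPELINE = ["dev_sandbox", "qa_partial_copy", "uat", "production"]

def next_environment(current: str) -> str | None:
    """Return where a change goes after passing checks in its current environment."""
    idx = PIPELINE.index(current)
    return PIPELINE[idx + 1] if idx + 1 < len(PIPELINE) else None

assert next_environment("qa_partial_copy") == "uat"
assert next_environment("production") is None
```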
But as teams scale up, requirements end up scaling with them, and you pretty quickly end up with a much more complex setup.
So this second setup is a really common architecture that we see as teams become larger.
The volumes increase to the point that the once unified team needs to break out into independent development teams, parallel streams, and project areas.
So here, we now have two separate development teams working on business-as-usual development. For example, team one works on Service Cloud, and team two works on Sales Cloud. The work from these teams is released once a week to prod.
As this team has scaled, they now have many developers on each team, all with their own developer sandbox for their work, and they've also added a QA environment for each of these teams as well. There's also a project stream, which is working on much longer timelines and releases milestones each month. They need to build on top of each other's work in this area.
They still have a central integration and UAT environment, where sets of teams' changes are tested together as opposed to individually.
You can see that they also have an added staging org, which serves as a replica of production and enables you to test releases before going to prod. So as you can see, this is a much more complex setup.
The number of environments has multiplied out, and we can see how many teams will struggle to keep moving fast as the volume goes up. The number of developers working on features increases, and so does the volume of changes going through.
There may be tens or hundreds of changes waiting for validation on environments, and this can rob the team of momentum. Teams face a ceiling of productivity where they grind to a halt.
But this has been a consistent pain point in Salesforce deployment for years across tools and vendors.
But we're here to tell you that this doesn't have to be the case. We're not focusing just on the frustrations. There are concrete tips, tricks, and techniques that you can start using today to help improve velocity and to make progress.
So let's say your team's growing. You have more developers, more admins, more projects. You want to scale, but you also wanna stay fast and ship to production often. So what separates those elite teams from the ones that keep just adding process and still feel stuck?
The short answer is that it's not headcount. It's habits.
The best teams invest in a culture of small, consistent, high-confidence changes. They automate what's repeatable, and they make feedback loops as tight as possible.
So let's look at what those habits actually look like in practice.
We've learned again and again that teams embracing the core tenets of DevOps and embedding them team-wide see success across the board. These ideas won't surprise anyone from a traditional software background, but applying them in Salesforce is really where the magic happens. So commit little and often. Keep every environment in sync.
And when your changes are approved, just push them: green means go. Each of these principles shortens the distance between idea and delivery. They reduce risk, and they increase your velocity.
And importantly, you can do all of that whilst maintaining feature independence and good governance across the team.
So over the past few months, we've been working on some new functionality across our pipeline solution that helps teams embed those behaviors naturally. And we're gonna talk about some of this now. So let's start with the first one, committing little and often.
So long-lived feature branches are one of the quiet killers of productivity. The longer a feature branch lives, the more it drifts. Merge conflicts multiply, work gets duplicated, and the risk of human error goes up. For many admin-heavy teams, that's made worse by manual tracking.
So once you've built something, you might manually write down the changes that you've made so that you remember what to deploy later. Small, frequent commits can help break that cycle. They keep your main line healthy, your releases cleaner, and your developers happier. But for that to happen, the process has to be effortless, especially for admins.
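To make the habit concrete before we get to the tooling, here's a minimal sketch of the same idea in plain git terms. It assumes a local git repo holding your Salesforce metadata; the branch, path, and field names are hypothetical.

```python
# A sketch of "commit little and often", assuming a local git repo for your
# Salesforce metadata. Branch, path, and field names are hypothetical.
import subprocess

def commit_change(branch: str, paths: list[str], message: str) -> None:
    """Commit one small, self-contained change to a short-lived feature branch."""
    subprocess.run(["git", "checkout", branch], check=True)
    subprocess.run(["git", "add", *paths], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)

# Several small commits beat one sprawling end-of-sprint commit: each one is
# easy to review, easy to revert, and far less likely to conflict.
commit_change(
    "feature/required-checkbox",
    ["force-app/main/default/objects"],
    "Add Required__c checkbox to the object",
)
```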
So that's why we have built the new Gearset browser extension.
And we aim here to meet admins where they already work: inside Salesforce. As you make changes, the extension tracks them automatically against your feature branch. So no more jotting things down and no more guesswork.
When you're ready, you can send exactly those changes, and only those changes, straight through to your comparison in Gearset, commit them, and move on. This aims to level the playing field between admins and developers. Developers already have Git and VS Code, and admins deserve the same visibility and control without needing to context switch. When everyone commits little and often, team performance compounds.
So let's see this in action. I'm gonna do a bit of a demo here. So I will just share my screen, and that should have shared my whole screen. Let's imagine that I have picked up a new ticket in Jira, for example, and my ticket says that I need to create a new checkbox field on one of the objects in our org. And I want to understand the impact of what I'll need to do. So first, I go to Org Intelligence within Gearset. Org Intelligence is a product that we launched last year to assist during the plan phase.
It helps you gain full visibility into hidden dependencies and potential breakpoints so that you can make changes faster, safer, and with confidence.
So now I've had a look at this and I know what I need to build. I can move forwards, and I can see any dependencies, any permissions, the change history, and any documentation that is related.
I can do that by clicking through the tabs, or I can do this with AI on my requirement, using the Gearset agent.
So now I've got to a place where I understand my requirement, and I understand my change in the context of my org. I want to go and use the Gearset extension to track my changes. So first, I need to start the tracking, and I do that by opening the extension here.
And I need to either select a feature branch that already exists, or I can track a new feature. If I wanted to track a new feature, I could attach a Jira ticket to it, or I can just go ahead and create the branch. Once I've created this new branch, I can start tracking my changes here. But I've got one that I made earlier that I'm gonna use, so I will just use this one here.
So now I've done that, I can start building, and I'm gonna build that required checkbox in my org that my ticket asks for. So if I go here, I'm just gonna create a basic checkbox for the purposes of this demo.
So I'll build this, and the extension, once I save it, will track the change I've just built.
So in a usual scenario, you can just carry on building. You don't need to open the extension each time.
You'd trust that it's tracking what you're building. But today, for the demo, we wanna see that it's actually tracking this one particular change that I've just made.
The idea with this is to give you increased visibility of what you've just built. And because we've tracked against your feature branch and you're building as you go, the comparison only shows your tracked changes. So these are ones that I've tracked earlier in the day as I'm building out my object. And then I can go here to compare with Gearset, and we can see that it's tracked the change that I've just made.
So when I open that comparison, I can see the items that I've just created, and these are my items attached to my feature branch. This essentially means that I can follow the usual process now in terms of committing with Gearset; it's just streamlined that comparison process and made it really easy for me to find my changes. And I can move forwards with whatever changes I want to at this point.
Thanks.
I'll pass it back over.
Let me get that shared again.
Okay.
Thanks very much, Izz, for the demo. And, yeah, that is generally available right now. So take a look on the Chrome Web Store, in Chrome or Edge, and you'll be able to start using it straight away.
So, yeah, let's think about story volume now. As your team count grows, so does environment sprawl. Dev sandboxes, integration, UAT, staging: they all start drifting out of alignment.
And if you're working on the wrong baseline, you're much more likely to hit merge conflicts, validation errors, or even worse. That drift erodes confidence. You start hearing, well, it worked in QA, but then it broke in UAT. And what's the point of testing in UAT if you're testing something which won't behave that way in production?
So that's how bugs get shipped, regressions happen, and you end up with production downtime.
And that costs money. It costs far more than it would to catch those issues early. Last year, we launched Updates, a great new way to keep environments up to date with production. But how can we keep the rest of your environments aligned?
So we believe the solution is to reduce moving parts where you can do it safely. Consolidate your changes early and move releases as a single shot. That's what we've done here. Create your release early, before you get to production; deploy it against your staging environment to build confidence; and then promote it forward as a single unit, ready for your production release. And that's what we've released this week.
Then back-propagate it, also as a single unit. So keep your changes in sync, and get back to a good baseline in one shot by deploying what you already deployed to production. It's a synchronization which brings predictability back to the business. So think of that team with multiple team QA environments. This is how those changes can be pulled back without absolutely drowning in a sea of PRs.
So we believe that high performing teams treat a release as a single unit, and they can move it forward or back through environments with confidence.
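Under the hood, that forward-and-back movement maps onto ordinary git merges. Here's a hedged sketch assuming a branch-per-environment layout; the branch names are hypothetical, and Gearset Pipelines automates this rather than you scripting it yourself.

```python
# A sketch of back-propagating a released unit, assuming a branch-per-environment
# git layout. Branch names are hypothetical; the point is one merge per
# environment of exactly what already shipped.
import subprocess

def back_propagate(release_branch: str, environment_branches: list[str]) -> None:
    """Merge the released commit back into each upstream environment branch."""
    for env in environment_branches:
        subprocess.run(["git", "checkout", env], check=True)
        # --no-ff keeps the release visible as a single unit in the history.
        subprocess.run(["git", "merge", "--no-ff", release_branch], check=True)

back_propagate("release/2025-06-01", ["qa-team-1", "qa-team-2", "integration"])
```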
So we've just talked about keeping environments in sync and treating a release as a single unit. But even when teams get that right, there's another painful pattern that we see over and over again. Changes that are ready, but they just sit there.
So a story might pass QA, all the checks are green, but instead of moving on, it hangs around in the pipeline for days or maybe weeks. Why is this? Often, it's because those key integration and testing environments become pinch points.
So there's a big release train already running, someone's waiting to bundle in a few more changes just in case, or the one person that normally presses the button is in back-to-back meetings, so it can't happen. As the volume goes up, those critical environments become bottlenecks. Work piles up behind them, not because it isn't good enough, but because the process around promotion is still manual and fragile, and ultimately, that is expensive.
So you end up with a huge amount of almost-done work sitting in the middle of the pipeline, and that itself introduces drift and reduces confidence in what you're testing. So when we say promote as soon as possible, this is the pain that we're talking about: changes that have already cleared the bar but are stuck waiting in line because your integration and testing environments can't keep up with the volume.
So now you might be thinking, great, we'll just push changes through as soon as they're ready and enjoy the speed boost. But here's the catch: speed without discipline is chaos. If you promote bad changes that need to be rolled back, that velocity can disappear overnight, and teams need confidence that what they're promoting is valid. That means having strong quality gates: automated testing, code quality checks, peer review, and clear approvals.
When every step of your life cycle feeds into those checks, you get quality and speed.
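As a sketch of that "green means go" logic, with entirely hypothetical check names (in Gearset these gates are configured in the pipeline, not hand-coded):

```python
# A sketch of a "green means go" quality gate. Check names are hypothetical;
# the point is that every gate must pass before a change moves forward.
from dataclasses import dataclass

@dataclass
class GateResult:
    validation_passed: bool
    tests_passed: bool
    code_quality_passed: bool
    approvals: int

def ready_to_promote(gate: GateResult, required_approvals: int = 1) -> bool:
    """A change only moves forward when every quality gate is green."""
    return (gate.validation_passed
            and gate.tests_passed
            and gate.code_quality_passed
            and gate.approvals >= required_approvals)

print(ready_to_promote(GateResult(True, True, True, approvals=2)))  # True
```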
So once you've got those quality gates in place, your tests, reviews, and checks, you're finally in a position to automate your confidence, and this is where continuous delivery rules come in. They let you define exactly when changes should move forwards: when validations pass, when tests succeed, and when approvals land. At that point, you shouldn't need to babysit the pipeline, because Gearset will promote the change for you. But there's another challenge that can appear at scale. What happens when multiple changes become ready at the same time?
And this is something that we see constantly, especially in integration and UAT environments specifically.
So teams might end up with four, five, ten pull requests, all green, all waiting to go, but, of course, only one can move first.
In that situation, you might get collisions, merge conflicts, people overwriting each other, and lots of Slack messages flying around asking who's pushing next. And this is why we've introduced PR queuing. Queuing gives you predictable, safe, forward movement in busy pipelines. When a change is ready, it doesn't fight with everything else.
It simply enters the queue. Gearset handles the order, the sequencing, the checks, and the promotions. And when you combine queuing with continuous delivery rules, you get the best of both worlds: automated promotions that never clash, and pipelines that stay consistently unblocked even under heavy load.
So this is what true continuous delivery looks like in a Salesforce context, automated, safe, and scalable even when your team has dozens of concurrent work streams.
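Conceptually, the queue is as simple as it sounds: first in, first out, one promotion at a time. A minimal sketch with hypothetical PR names (Gearset's implementation also re-runs checks and handles sequencing across environments):

```python
# A toy FIFO promotion queue: ready changes line up and are promoted one at a
# time, so parallel merges never collide. PR names are hypothetical.
from collections import deque

class PromotionQueue:
    def __init__(self) -> None:
        self._queue: deque[str] = deque()

    def enqueue(self, pr: str) -> None:
        # A PR joins the queue only once its quality gates are green.
        self._queue.append(pr)

    def promote_next(self) -> str | None:
        """Promote exactly one change; nothing else moves until it lands."""
        return self._queue.popleft() if self._queue else None

q = PromotionQueue()
for pr in ["PR-101", "PR-102", "PR-103"]:
    q.enqueue(pr)
while (pr := q.promote_next()) is not None:
    print(f"Promoting {pr}")  # sequenced, never clashing
```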
And we've seen teams remove thousands of manual promotion steps since this went live, freeing up developers and release managers to focus on higher-value work instead of just watching progress bars.
Thanks, Izz.
So all these steps help to improve throughput and process when you're using feature independence: reduce moving parts, commit little and often, stay in sync, promote quickly when you can, and get those quality gates in early to drive higher confidence in your process. So far, so good. But this isn't the only way that we see successful and growing teams building software. There is another model, one which hasn't been used in Salesforce DevOps tooling before, which we believe can help teams move even faster.
Let's go back to our team structure and look at a third example. On the face of it, yes, there's a similar environment structure here to what we've seen before. But this team has taken on the principles that we've just been talking about: commit little and often, get your environments aligned, and move the quality gates left so you build confidence in what's going to production at an early stage.
And as a result of all that, the way in which that team works can change, and it can get faster.
So this team's got a really collaborative planning process and has shifted those development checks left. They've brought users and the QA team into the development process, validating requirements up front to make sure that they're building the right thing first time.
The dev sandboxes that they're using are constantly in a good state, with robust test data. Developers take much more of a shared-responsibility approach to testing in their sandboxes before shipping their changes to QA, because they know that once a change gets there, it's basically confirmed that it will eventually go to production in the next release. So anything from there should just be a fix forward.
So in this case, instead of working on isolated stories, this team plans their work so that developers can meaningfully build on earlier work that's been done in the sprint. As a result, they can iterate and work on similar areas of change directly from QA, as QA becomes that reliable and tested state.
Then UAT or staging becomes a final test of the release. And everything that makes it through that process gets released in the next production release, with no feature left behind. In some cases, yeah, you might need to fix forward, but reversing changes is very rarely needed because of the fidelity of the planning and testing processes upfront.
So in this case, all those key checks have been shifted left, so the team can work on testing, verifying, and fixing up the release, and they've got really high confidence from an early stage that it's good to go.
Some of you who have worked in wider software development will find this familiar, because it's not a new strategy that we're inventing here. It's tried and tested in mainstream development around the world, and it's called Gitflow.
And we now support this way of working. We've got first-class support for Gitflow in Pipelines, and here's what it looks like. There are three stages to the process: integration, release, and then production. And one key advantage of the model is scalability.
There are fewer moving parts. Teams are iterating more quickly on the same core branch, so there are fewer conflicts, and they're resolved for good more quickly. You've shifted that decision-making process left, so you can move faster and release quicker because the model that you're using is simpler. And the barriers which have long prevented teams with diverse technical backgrounds from working this way, like VCS knowledge and technical capability, aren't really an issue here, because you can work in exactly the same way as you would in traditional pipelines.
So team members can create their feature branches and PRs just like they would in feature-independent pipelines.
This time, it's from the develop branch, checking back into integration. Then when you're ready to go, a release manager can create the release from that develop branch, so it can be cut at the point at which it's needed, which means it's then ready to deploy to those intermediate environments. You can fix forward if you need to, and then the PR to release to production can be raised when it's ready.
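For anyone who hasn't used Gitflow before, here's the branch lifecycle in plain git commands: a sketch only, assuming a repo with the conventional develop and main branches. Gearset layers its environments and promotions on top of this model.

```python
# A sketch of the Gitflow branch lifecycle in plain git, assuming a repo with
# the conventional `develop` and `main` branches. Branch names follow standard
# Gitflow conventions and are illustrative only.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Feature work branches from develop and merges back into it via a PR.
run("git", "checkout", "-b", "feature/new-checkbox", "develop")
# ...commit little and often, then raise the PR back into develop...

# When it's time to ship, the release is cut from develop as a single unit.
run("git", "checkout", "-b", "release/1.4", "develop")
# ...deploy the release branch to UAT/staging and fix forward on it if needed...

# The release then merges to main (production) and back into develop, so both
# branches agree on exactly what shipped.
run("git", "checkout", "main")
run("git", "merge", "--no-ff", "release/1.4")
run("git", "checkout", "develop")
run("git", "merge", "--no-ff", "release/1.4")
```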
Now if your team regularly gets to UAT or staging and you've got stories held back, or you've got frequent drop-ins from external stakeholders, then maybe right now feature independence is the right option. It's definitely not one size fits all. But for teams who can adopt Gitflow, if they can shift more of the process left, we believe that they're gonna move faster, they're gonna encounter fewer conflicts, and they're gonna ship more successfully overall.
That brings us to our conclusion today, and it's a conclusion that tells us that succeeding at scale in Salesforce is about unlocking the power of all of Salesforce's capabilities.
It's about creating a proactive, repeatable, visible process that considers culture, shift-left thinking, and adopting that DevOps mindset end to end across the whole life cycle of pushing changes through. And that's how you make those incremental improvements that, over time, compound into huge impacts and huge benefits for the org.
So with that, thanks so much for coming along today. Happy to open up to the room for questions.
Amazing. Thank you so much, Richard. Thank you, Izz, as well. What a fantastic presentation. And we do have a few questions that have come up, unsurprisingly. We will try our best to answer them all live for you, so let's just dive in.
So we've had our first question come up, which is about the Chrome extension. Nicholas asks, would the extension track changes to profiles and layouts, etcetera, as a result of simply creating the new field?
Yes. It does track those dependencies and it pulls them through into the comparison as well.
Amazing. There you go, Nicholas. You heard it here first.
Fab, we've got a few more questions. This one looks a bit more at the big picture. It's from Satish, who asks, how does a CI/CD tool fit into a world with AI tools like Claude or Cursor, where you can modify code at a metadata level and then commit directly to Git?
Great question. The way that I think about it is that the really important thing is the foundation that you're building this on. These tools are advancing at incredible speed, and their capabilities allow you to do more and more complex things. But if you're basing that on a foundation which isn't solid, then the maintenance burden will very quickly become unmanageable.
So part of our approach to AI and this technology is to treat it like we do other features and other things that we build. We look for the job to be done, and we make sure that we're building it all on a solid foundation, so that you can use that technology with confidence, ensure that it's doing the right thing, and it becomes a partner in the process rather than just a tool that you're using.
Yeah. That's a fab answer, Richard. Satish, we have got a previous webinar on this.
I think it's called Scale AI Safely; it's on our website. I'll try and get you a link. It talks a lot about having guardrails for things like this when you're using AI, so that's probably going to be really helpful for you to check out as well.
And we've probably got time for one last question. This might be a bit of a nuanced question, specific to this particular team, so if we can't answer it today, we'll get back to you. But this one's from Santiago, who asks, what about if a project uses a waterfall methodology and not our standard CI pipeline that we've talked about here? They described that everything would move from dev to integration to QA to UAT and stay there until one huge release at the end. They're asking, can Gearset handle that kind of huge deployment?
Yes. Yes, it can. So for this, we built something a little while back. We haven't focused on it directly here, but we have our long-term projects functionality, which helps deal with that.
With that, you can build on top of existing work and then create releases from it, which can then go through your testing cycle all the way through to production, and you can release a project in slices as well if you need to. We saw this need coming up from teams a while back. Basically, any large team has multiple streams of work which go at different paces. They may have a BAU stream.
They may have multiple other project streams, as in the architecture diagram that I was talking about a little bit earlier with the large teams. And in that case, you can have a part of the pipeline which branches from a different place. You iterate on top of that development, and then when you're ready to release it, you can push it through the cycle to production whilst keeping it up to date with the BAU work through robust back-propagation processes as well.
So definitely reach out and have a conversation with us about it, because we think we have the right solution, and a best-in-class solution, here as well.
Amazing. Yeah. Fab advice. We have one question which I can actually answer: do we have a certification for Gearset?
Gearset has a massive certification platform called DevOps Launchpad. I'll try and get the link in the chat as well before the end. Go check out DevOps Launchpad, because we've got a bunch of CI/CD courses on there, including how that works in Gearset specifically. So I'll try and link that for you before we go.
One last question. I know we've run over by a minute.
This one asks, does using a single Gearset license work well with Gearset's CI/CD?
So we do have teams where a single admin or a single person is pushing changes through, and that works. But what we're talking about here, scaling up the process, generally works best when you've got multiple users collaborating on the platform, because that's where it comes into its own: highlighting those clashes between changes which are happening between different users as your team scales up.
As your team grows, you'll have the best time with each of those users having their own Gearset license, so you can collaborate most effectively on the platform.
Amazing. Thank you.
Thank you so much for all of your wonderful questions.
Thank you once again to Izz and Richard for presenting. We're so grateful to have you on. But that is all we have time for today. I think we answered pretty much everyone's question.
If you have a question that you've thought of after the fact, please don't hesitate to reach out to us at gearset.com, or you can reply to the email I'll be sending out at roughly this time tomorrow. Just reply with any questions. You'll have the webinar recording in there and some bonus resources as well, so keep an eye out for that in your inboxes. Once again, a huge thank you to everyone for attending.
Great questions coming in. We hope you enjoyed today's webinar, and we look forward to seeing you all very soon. Thanks, all.
Thank you. Thanks.