Automate your Salesforce testing and ship quality with confidence

Description

When manual testing takes days and regression still misses bugs, every release becomes a trade-off between speed and quality — and both paths erode trust.

In this webinar, Kendra Von Achen, HLS Delivery Lead for Salesforce Practice at Turnberry Solutions, and Gino Toro Pereira, Senior Product Manager at Gearset, share real-world testing challenges from over 100 Salesforce teams and show how automation removes the trade-off.

What you’ll learn:

Why manual regression testing consumes 20–40% of team time every week — and why that burden falls hardest on teams without dedicated QA resources.

How brittle test scripts and specialist skill requirements cause 31% of teams to abandon or fail to adopt automation, shifting the pain rather than solving the problem.

What good Salesforce testing looks like across three dimensions — reducing friction so anyone on the team can test, shortening feedback loops from weeks to minutes, and catching bugs before users report them.

How Gearset Automated Testing uses human-readable steps resilient to UI changes, so tests survive without constant maintenance.

Why starting small, measuring outcomes like deployment success rate and lead time to production, and integrating tests into your pipeline matters more than maximizing test volume.


Transcript

But without further ado, I'm pleased to introduce Kendra and Gino, who will be joining us as guest speakers on today's session.

So let's do some introductions. Kendra, would you like to go first?

Absolutely. Thank you, Amy. So I'm Kendra Von Achen. I've been a consultant at Turnberry Solutions for the past five and a half years.

And recently, last year, I started a new role as our HLS, or healthcare and life sciences, delivery lead for our Salesforce practice.

I've been in the Salesforce ecosystem for over twelve years and in overall CRM consulting for over twenty years, and I'm very happy to be here today.

Amazing. Thanks, Kendra. Over to you, Gino.

Thank you. So, yeah, I'm Gino. I've been in the tech industry for about two decades now.

I started out as a developer but gradually moved into product management where I worked across different areas like DevOps, cybersecurity, and developer tools. So I was excited to join Gearset a couple of years ago just to see teams do DevOps right.

And since then, I've been focused on helping Salesforce teams improve how they approach testing.

Awesome. Thank you so much, both. So before we dive into today's content, I'm just gonna quickly run through the agenda for today. So firstly, we'll be hearing from Kendra about her experience working with clients who've struggled with testing challenges, and then we'll be handing over to Gino to run us through some more real world examples of how teams are testing.

Then we'll take a quick look at what good testing practice actually looks like and how you can automate your testing, including a quick demo. After that, we'll break down some of the key best practices that you can take away from today's session. And hopefully, we'll have lots of time for some live Q&A at the end. So get your questions ready.

Pop them in the Q&A down below.

Awesome. So to kick us off, as I mentioned, we're joined by Kendra today, the HLS Delivery Lead for Salesforce practice at Turnberry Solutions. So if you're not familiar with Turnberry, they're a consulting firm that helps organizations to really get Salesforce development right. So from implementation through to processes and best practices, they keep everything running smoothly as teams scale. And as such, they're also one of Gearset's partners. So Kendra joins us today with a brilliant perspective on the testing challenges that exist out there for Salesforce teams, especially before those best practices are in place. And that's exactly what we're gonna dive into today.

So I will stop sharing those, and we will get started with some questions.

So, Kendra, thanks again for joining us. Before we get into testing specifically, would you be able to paint a picture for us of the kinds of Salesforce orgs that you're typically walking into?

Sure. So Turnberry's Salesforce practice works with mid-sized and larger enterprise clients in various industries, mostly focused on healthcare, life sciences, manufacturing, financial services, and legal. So we encounter some complex orgs.

We work with clients that are either brand new to Salesforce or existing customers who need new features added, or who need help cleaning up tech debt and finding their path forward. We also provide managed services to clients that just need help on an as-needed basis.

Fantastic.

So I'd love to dig into some of your experiences when you first start working with a client, say, before Turnberry helps implement some of those best practices. What does testing typically look like for teams? Is it manual UAT, spreadsheets, someone, you know, clicking through screens? What does testing often look like when teams first come to Turnberry?

Yeah. Great question. So when clients first come to us, most of their implementation work is manual. Typically, test cases are written in Jira or the tool of their choice as subtasks. UAT depends on client tooling, you know, Jira, Microsoft Test Manager, or other chosen tools.

Some are writing test scripts for UAT and providing them to user groups for sign off and feedback. Because we work with a lot of companies in regulated industries, compliance is key. It means having to really prove what was tested.

We use test cases, screenshots, and documentation to provide an audit trail, oftentimes following the client's specific CAM process. In HLS and financial services, PHI is a common concern, covering things like encrypted fields and masking patient data.

The QA process itself isn't necessarily different, but the evidence trail is nonnegotiable.
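As a purely illustrative aside, masking patient data in test records might look something like the following Python sketch. This is a toy example, not a compliance recommendation; a real program would follow the client's own process and their security team's guidance.

```python
# Toy sketch of masking PHI in test data - illustrative only,
# not a compliance recommendation.
import hashlib

def mask_patient_name(name: str) -> str:
    # Replace the real name with a stable, non-reversible token so
    # test records stay distinguishable without exposing the value.
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()[:8]
    return f"Patient-{digest}"

print(mask_patient_name("Jane Doe"))  # prints a stable token like Patient-<8 hex chars>
```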

Yeah. Orgs can get slightly more complex when compliance comes into it. Right? And having that auditability can be really tricky, especially, as I say, as things get more complex.

So as, the team's org grows, you know, more metadata, there's more users, more complexity, how do teams deal with testing? Does coverage sometimes naturally drop off?

It can. They often do risk-based testing because there just isn't enough time to test everything. Regression suites oftentimes focus on mission-critical items, finding the things that would block the business but not catching everything. For example, page layout changes often get missed. Frequently used fields get accidentally removed from a layout, and it may not be discovered until later, when a user wants to use one and it's not there.

Right. And that's the trade off, isn't it? Right? You're prioritizing what could block the business and delivery, but those quieter changes just sometimes build up.

So let me ask you. One thing we hear a lot from Salesforce teams is that they end up running the same regression tests sprint after sprint, manually, and it really just eats up time. Is that something you see when your clients first come to you as well?

When they're doing manual testing, yes. That said, not every sprint requires regression testing. It really depends on where the changes are happening and what's being deployed to production. But when it is needed, it's been repetitive manual work to date. I asked one of our QA leads about challenges during regression testing, and he said that it's time-consuming. During one of his first projects years ago, it took around three days to complete all the regression tests that the client had.

Granted, he was inexperienced then, but it was still a lot of work. So a full regression didn't happen very often due to the time and cost involved.

That makes total sense. Yeah. And when it takes that long, it's easy to see how it just drops off the priority list for a lot of teams. So if you could give those teams back, you know, the hours they spend on manual regression, what would that actually mean for the teams that you work with?

Yeah. Obviously, it's gonna vary. But, typically, it means more time to focus on UAT prep, more confidence in what is being deployed to production, and potentially could mean fewer QA resources needed on larger projects.

Awesome. That's awesome to hear. So let's talk about what happens when things do slip through. Even with good processes, how often were your clients, when they first came to you, finding bugs that made it to production that just should have been caught earlier?

Yeah. It's the nature of software development. It's hard to catch everything every time. When it does happen, it's usually an edge case that wasn't considered or it's from a deployment issue, like a component didn't get deployed by accident, a flow wasn't activated during or after the deployment, things that are manageable and fixable but still shouldn't have happened.

Exactly. It's never ideal even if it is fixable. Right? So let's talk about what happens when something does get through. What do you see as the real cost for your teams? You know, is it just a quick fix, or does it sometimes cascade?

So mature teams and organizations would usually have a rigorous go-live process in place, with QA, UAT, and smoke testing in production before opening the system up to the users. Rollbacks in that situation would be rare.

However, there are many situations we've seen where there aren't mature processes in place and more ad hoc steps are happening, which can mean more drastic measures are needed to fix issues found in production, including rollbacks and downtime for users.

When we do an implementation project, we almost always provide hypercare to ensure a smooth transition from go-live to full handoff to the client's team, to make sure that risk is minimized.

Right. So I guess, in a way, the fact that that rigorous process and the hypercare you offer need to exist as kind of a standard offering shows that, you know, things will come up. Right?

Yeah. And that's to be expected. Not every feature or bell and whistle can make it to production immediately, due to time and budget. Not every subject matter expert weighs in during the build, and they may come back with a requested change.

And sometimes a use case gets missed, and when it does pop up, it may throw an error for the user that needs to be resolved. So we provide that support to ensure a smooth process for our clients.

Amazing. So we've talked about the repetition, the coverage gaps, the time that can be spent building evidence trails, and such. Given all of that, do you see a role for automation in solving some of these problems that teams are facing with testing?

Absolutely. With the right tooling that doesn't require QA to update the automation each time a page layout or screen changes, I can see how this could save a lot of time and produce more consistency for our clients and our QA teams. It could also ensure regression testing is done more frequently, and just make teams more productive overall.

Yeah. Definitely. It's about being consistent with that productivity. Right? That's great to hear.

So I guess one of my last questions for you. The value seems clear, right, especially for automation. But for the teams who already recognize that automation is something they might need for testing, what do you think are some of the barriers that stop teams actually getting started with automating their testing?

I think finding a tool that's truly point-and-click has been a challenge, especially with custom Lightning components involved.

Tools often require someone who understands locator paths or can write macros, and not every QA resource can do that. There's a question of whether it's maintainable. Is the juice worth the squeeze? Do teams have the budget? Is it still working six months later?

A tool that adjusts QA scripts automatically when screens and layouts change would be a game changer.

Awesome. Yeah. The technical aspect is definitely one I know we'll be talking about in a little bit. Well, thank you so much for helping us explore some of the challenges that you're seeing in teams when it comes to testing. We're now going to hand over to Gino to dive into some real-world examples, what good testing looks like, and how teams can start to tackle some of the challenges we've just discussed. So, Gino, over to you.

Let's go through what good testing looks like.

So one of my favorite values here at Gearset is the job to be done. This gives us permission to ask the question why and to go beyond the surface to the underlying problem. So over the last year, we've had hundreds of conversations with over a hundred Salesforce teams, and we dug deep into the challenges of manual and automated testing.

We also got feedback from teams who are using our product extensively. So what we're sharing is the voice of our customers.

These insights are relevant to you since we spoke to many different types of teams. So let us know in the chat what type of team you fit into. Half of the teams don't have a QA resource.

These teams build and test their own changes. A third of the teams have a single QA resource, usually a QA tester, though the role can be filled by a BA, a developer, or an admin. And finally, seventeen percent of the teams have a dedicated QA team, who generally work across multiple teams or sometimes in other ecosystems outside of Salesforce.

But all of these teams had the same view of what good testing looked like. They had the same goals and the same challenges. The first is around people. They all wanted anyone on their team to be able to test, whether they're a developer, an admin, or a QA. And the challenge here is to remove the friction in the whole testing process.

Secondly, it's about time. How quickly can they find out if something works or something is broken? So the best teams get feedback within minutes or hours rather than days or weeks.

Finally, it's about confidence. Are you confident that you're catching all your bugs before you release into production, where they can cause real impact?

So let's deep dive into these three different areas.

In terms of friction, it depends on how you test. So again, please share in the chat how you do testing.

The largest group, forty-three percent, do manual testing. They find it repetitive, time-consuming, and prone to human error. If we take two testers who need to test the quote generation process, are they both testing in the exact same way? Are they both checking the results accurately against what is expected? And can they be consistent if they're testing this over and over again throughout the year?
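To make the consistency point concrete, here is a minimal sketch of a repeatable check at the API level, using the open-source simple-salesforce Python library. The credentials and field values are placeholders, and a real quote generation test would exercise your own objects and workflow; the point is that an automated check performs the same steps and the same assertions identically on every run.

```python
# Minimal sketch of a repeatable, API-level regression check using
# the simple-salesforce library. Credentials and field values are
# placeholders; adapt to your own org and quote workflow.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="***", security_token="***")

def test_opportunity_stage_is_set():
    # Create a test record (hypothetical field values).
    opp = sf.Opportunity.create({
        "Name": "Regression - quote generation",
        "StageName": "Prospecting",
        "CloseDate": "2025-01-31",
    })
    try:
        # Re-query and assert the record looks exactly as expected,
        # the same way on every run.
        result = sf.query(
            f"SELECT StageName FROM Opportunity WHERE Id = '{opp['id']}'"
        )
        record = result["records"][0]
        assert record["StageName"] == "Prospecting", record
    finally:
        sf.Opportunity.delete(opp["id"])  # clean up the test data
```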

Now thirty-one percent are abandoning or failing to adopt a test automation tool.

One of the pain points here is not having the right skills. Coded platforms like Selenium require programming skills, but a lot of low-code or no-code tools may also need programming skills to customize them for your Salesforce org. And what we learned is that every org is different. You may need those skills to debug and fix a test, and you definitely need them to integrate the tool into your pipeline, so tests run after you promote a change and gate it until those tests pass.
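The gating idea itself is simple. As a rough sketch, a CI step can run the test suite and exit non-zero on failure, which blocks the promotion stage that follows; the runner and suite path here are placeholders for whatever tooling your team actually uses.

```python
# Sketch of gating a promotion on test results: run the suite and
# exit non-zero so the CI stage (and the deployment behind it)
# fails if any test fails. "pytest tests/regression" is a
# placeholder for your actual test runner and suite.
import subprocess
import sys

result = subprocess.run(["pytest", "tests/regression"])
if result.returncode != 0:
    print("Regression suite failed - blocking promotion.")
    sys.exit(result.returncode)
print("All tests green - safe to promote.")
```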

So the upfront investment is pretty high. One team I spoke with spent a month building two tests. They failed to integrate them into their pipeline, and the tests weren't repeatable, so they broke. The team ended up doing manual testing for the rest of the year and abandoned their tool.

Twenty-five percent of the teams do automated testing, but challenges still remain. Since these tools require specialist skills, they can create silos and handovers, which slow teams down. But the biggest pain is the brittleness of tests. When tests fail, you need to fix them, and that's your maintenance problem.

UI changes tend to break these tests. That can happen during Salesforce updates, which come three times a year. It can happen in any of your deployments if you have your own custom components. And it can happen if the data changes in your org.
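As an illustration of that brittleness, here is a hedged Selenium sketch in Python. The URL and selectors are invented rather than taken from a real Salesforce page, but they show the difference between a locator that encodes today's DOM structure and one anchored to what the user actually sees.

```python
# Illustrative only: the URL and selectors below are made up.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.my.salesforce.com/some/record/page")

# Brittle: an absolute path that encodes today's DOM layout.
# One extra wrapper <div> in a seasonal release breaks this.
save = driver.find_element(
    By.XPATH, "/html/body/div[3]/div[2]/form/div[4]/button[2]"
)

# More resilient: locate the button by its visible label, the
# way a human tester would, so markup shuffles don't break it.
save = driver.find_element(
    By.XPATH, "//button[normalize-space()='Save']"
)
save.click()
driver.quit()
```

Recording tests as human-readable steps applies the same principle: anchor to user-visible intent rather than DOM structure.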

So, really, automation promises a lot, but the pain just shifts across.

In terms of the fast feedback loop, we need to know how teams are slowed down.

This comes from manual testing, and all of our teams that do manual testing say they spend around twenty to forty percent of their time testing.

This is one to two days a week, every week. And for teams without any dedicated QA, it means the people building changes are repeatedly doing the same thing rather than adding value.

The other pain is regression.

So regression can take days or weeks depending on how complex your org is. And since you're making continuous changes, it gets left until right before you release.

One team shared that they have a two-week sprint and do regression at the end, so it takes them two weeks to find out something is broken. Another team had a dedicated QA team, so they had more people and it scaled. However, they test one sprint later, so it took them a month to find out if something was broken. Then the challenge is figuring out why it broke and when it broke.

Finally, it's about finding bugs early. Generally, bugs slip through due to poor testing or a lack of testing.

Sixty percent of the teams reported that bugs are being caught in production, generally by the end user. The impact of a bug can differ, but even something small, like a layout issue where a field has disappeared for a salesperson, can have an impact on the team. One team lead shared that the team member's morale went down, because they could easily have logged in as a different user and found it before they released.

In more serious cases, bugs can cause outages, which can stop sales teams from working. It can impact the customer because a quote cannot be generated. And one team shared with us that outages had caused SLAs to be missed, which led to financial and reputational damage. So this isn't just a testing problem. This is really a business problem.

Now every time you need to release, you have a choice.

You can release on time, and if you're doing manual testing, you don't have full test coverage, so you're taking the risk of bugs being found in production. Or you can release with confidence, delaying the release by days or weeks depending on how many cycles you need and how many bugs you find, which means higher quality but at the cost of missed deadlines. Both of these paths mean you lose trust and confidence, within the team and with the business. And the real challenge is, how do you strike this balance?

So let me share how we do it here at Gearset.

So first, let's start where it matters most: right before you deploy to production or to the next environment. If all tests are green, you should be able to promote with confidence. But here, the tests have failed. So let's dig deeper.

You're not able to promote this into the next environment, so let's look into the results. You can see which tests have passed and which have failed.

We can go in deeper to see the steps that were executed, along with the screenshots associated with them, so you can figure out why and when it failed and fix it. You can organize these tests into folders, and those folders can be associated with test jobs, whether a full regression suite or smaller, targeted tests. These can run on a daily or weekly schedule, or you can link them to your pipeline so they run on every promotion, and you can see the history of those tests and spot trends in whether quality is going up or down. Finally, we can build a test where you interact with the browser, and every interaction is translated into human-readable steps. This is what makes it resilient to UI changes. So anyone on the team can build tests and add assertions.

We've made it that simple.

So what is the real value of testing? Firstly, we made it easy to build, organize, integrate, and run these tests. We're removing the friction that teams face when they're doing test automation, while also integrating it into their DevOps pipeline. By running these tests regularly, whether daily, weekly, or as part of your pipeline, you get fast, continuous, and visible feedback.

This means you can catch issues before your users do. As a result, this frees up your team's time, so builders can add value, and QA can build more effective tests or be more strategic instead of repeating the same manual tests over and over again. Ultimately, it comes down to the point that matters most: giving you the confidence to make that go or no-go decision. Knowing that your changes are both high quality and ready to ship is where the real value is.

The first key takeaway is to measure what matters.

So think about your user happiness, your lead time to production, from when a change is built to the point it's in production, and also your deployment success rate. If you release something that causes a bug a user finds, that release counts as a failure.

Knowing these metrics before you adopt an automation tool means you can actually see whether you're getting value from it.
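As a rough sketch of how those two metrics could be computed, assume a hypothetical list of deployment records with made-up field names; in practice you would pull this data from your pipeline or ticketing tool.

```python
# Sketch of the two metrics above, computed from hypothetical
# deployment records. Field names are made up for illustration.
from datetime import datetime

deployments = [
    {"built": datetime(2024, 5, 1), "released": datetime(2024, 5, 9),
     "caused_bug_in_prod": False},
    {"built": datetime(2024, 5, 3), "released": datetime(2024, 5, 20),
     "caused_bug_in_prod": True},
]

# Lead time to production: build-complete to live, averaged.
lead_times = [(d["released"] - d["built"]).days for d in deployments]
avg_lead_time = sum(lead_times) / len(lead_times)

# Deployment success rate: a release that ships a user-found bug
# counts as a failure, per the definition above.
successes = sum(1 for d in deployments if not d["caused_bug_in_prod"])
success_rate = successes / len(deployments)

print(f"Average lead time: {avg_lead_time:.1f} days")
print(f"Deployment success rate: {success_rate:.0%}")
```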

Next is to focus on fewer tests and make them effective.

In the testing pyramid, end-to-end tests sit at the tip, where they cost more and take longer to run. So the more tests you have, the harder they are to maintain and the longer your feedback loop becomes.

So focus on those core workflows. And finally, Salesforce has its nuances, so we need to make sure the tool works for your Salesforce org. Try it out and see if it works.

Thank you.

Over to you, Amy.

Amazing. Thank you so much, Gino.

Awesome. Kendra, is there anything you wanted to add to those best practice and key takeaways?

Yeah. I would say one of the benefits is that you can build more enhanced documentation from what the automated QA produces.

And then you really have the ability to observe the impact on other functionality, which would result in fewer incidents in your production org overall.

Amazing. Yeah. Thank you. Thank you both so much. Gino, if you want to just quickly pop to the next slide, I'll just share with folks.

We have a couple of resources here if anyone wants to learn more about automated testing in general, but also about Gearset's automated testing solution; all the info is there in the QR codes. And Kendra and Gino have also kindly offered for you to reach out. Their info is there as well.

We do have time for some questions, which is awesome. So we have had a lot of questions.

Actually, maybe we'll start with some questions around the demo, Gino, if that's okay. And don't worry: if Gino can't answer your question live or it's a bit too specific, we will get back to you by email. So don't fear. But, Gino, if you're happy, I shall kick off with a question for you.

Go ahead. What are the licensing requirements to implement automated testing?

I think we need to get back to you on that. I know that it's priced separately, so it's best to contact us and we can go through those details.

Amazing. We'll reach out about that, but there'll also be information and a link if you want to chat to us more about that in the email we're sending out tomorrow.

So another one here for you, Gino. If we adopt Gearset, what automated testing frameworks integrate with Gearset, or do you have your own framework or application for end-to-end testing?

So this is our own framework. We're trying to make it easy for you to just record and build those tests. And, as mentioned earlier, we're the ones making sure that this runs consistently and in a repeatable way.

Amazing. Thank you, Gino. Another question on documentation. Is there any documentation available for setup? This person says they are using Gearset and would like to test this, and asks whether it needs additional licenses.

So, yeah, do contact us, and we can get you set up. But there is documentation available on the docs site. It should be at the top level, under automated testing.

Amazing. Just want to confirm to everyone: you will be getting the recording of the session in an email going out around this time tomorrow. So don't fear. You can recap everything we've chatted about. This question is probably a good one for both of you, actually, if that's alright. Kendra, we'll start with you. So what's the biggest mistake you think teams make when they first try to automate Salesforce testing?

Yeah. I would say assuming that you can set it and forget it. Tools that don't automatically update the scripts if fields move, like we talked about before, will fail tests. Another mistake is assuming you need less QA time or fewer resources because it's automated. There's still oversight and planning that needs to occur overall.

Awesome. Gino, did you want to add anything to that? The biggest mistake teams make when they first try to automate?

I think one of the key things I've heard is trying to test too much, or trying to test Salesforce itself. What you want to do is test your end-to-end workflows to make sure they fully work. So start small rather than trying to do too much.

Yeah. Great advice. Okay. Another great question that has just come into the chat. Gino, this one's for you again.

Sorry. Does this tool require programming or technical knowledge? The tester is a nontechnical business analyst.

No, there's no programming or technical knowledge needed. You simply need to record, or you can actually use our AI agent and prompt it with what to test.

Amazing. Thank you, Gino. Unfortunately, folks, we have run out of time today for all your questions. I will make a note of all the questions, and we will reach out after the session via email with answers to everything.

Or if you like, you can reach out to us on gearset.com. We have a chat there, and we're always happy to answer any questions you have around Gearset, around automated testing, around DevOps, whatever you like. As I said, watch out for the email in your inboxes tomorrow; there'll be the recording and ways to reach out to us to look more into the solution Gino's demoed today.

Once again, a huge thank you to today's guests, Kendra and Gino, for presenting. Thank you so much for joining us, and thank you to everyone who has attended. We hope you enjoyed today's session and you found it super helpful.

And we look forward to seeing you on another Gearset webinar in the future. So thank you so much.