Samuel Arroyo – Embedding quality into your Salesforce software development lifecycle

Transcript

Okay. Good morning, everyone.

We're gonna start so that we are right on time.

When I did a dry run yesterday, going full speed, we had maybe three minutes left for Q&A at the end. So I'll try my best. Somebody said, well, just cut the content by one third and speak slowly, but I feel like everything is important, which reminds me of my history teacher who used to tell us what to highlight for the exam, and essentially the whole book was highlighted, so it didn't make any sense. But today, hopefully, if you're here, it's because you care about quality in some way, and the topic for this session is embedding quality into your Salesforce development life cycle.

My name is Samuel Arroyo. I'm one of the VPs of product and technology at Provar. If you don't know what Provar does, we build solutions for companies and teams who want to take care of their software quality.

Before we get into the topic, maybe I need to refresh our minds on what a software development life cycle is. I initially had "what is the software development life cycle," but if you search online, there are so many different variants. I don't think there is one particular SDLC that is the culmination of everyone's thoughts. But I wanted to just go through what it usually is in its most basic terms.

And probably, if you have developed software, you feel that this is a good representation of the process that goes behind the scenes. So first, usually, you have a team of people that need to plan. Why is it gonna be built? Who is gonna do it?

When are we gonna start this project? All this effort, when is it gonna be finished?

How much time? How much money is it gonna cost us?

Next, how are we going to solve these different problems that we want to address? That's why you design: how are you gonna build the solution? Then just do it, implement it, and then you test it. Is it right? Is it what we wanted to do in the first place?

Finally, you release it. You deploy it maybe to production.

And with that comes training your users, telling them what is new, what has changed, how to use it.

And finally, how do you operate that? How do you maintain it? How do you fix what has been built, in case there were some issues you didn't realize were there, and how do you improve on the things that you've built?

And again, this is a cycle, so the wheel keeps turning: from maintenance you go back to planning, design, and so on.

So hopefully, we have a very basic idea of what a software development life cycle is.

And then when we say embed quality into this cycle, what do we mean really?

I think it's both mindset, how do we build a quality mindset into the people, and there's also processes and tools that we could look into.

But mostly this session is gonna be about asking ourselves questions about the things that we do, and maybe giving us an angle on our projects that we haven't thought about: how to involve testers, if we have them, or the role of testing, the role of quality. How does that become part of our thinking?

So let's take the framework of the life cycle and go stage by stage, and think about it. So let's begin with plan.

When we think about planning, usually you have, obviously, the project team. You may have a project manager who is in charge of putting things into a beautiful Gantt chart. You have developers estimating. You have a lot of people involved, but sometimes we don't realize that testers, or people working in the QA team, can also benefit from using a sort of framework, which let's call test plans: the plan for testing.

And if we take a test plan, here I'm gonna go through all the different aspects of it, and maybe as you go through it, you start realizing these are questions that maybe your team wasn't asking before, or that sometimes get skipped. So it's really just to raise awareness of how you can instill this quality mindset into your team. So first, what is the main objective?

What are you trying to achieve by testing this piece of work?

The scope: what will be tested and what will not be tested? Maybe there are aspects of the system or certain changes that just cannot be tested for whatever reason, or it's just too much for the scope and gets delayed for later.

What's the approach your team is gonna take? What types of tests?

If you have developers, they probably have to do their unit tests anyway. But if you have some UI components, some interactions, are you doing UI testing? Are there any integrations in there? Do you need to do API testing?

Is the security team gonna complain if you don't do any security testing or performance testing? There are so many kinds of testing. Decide what types of tests you're gonna use in this plan.

Are you just gonna do manual testing? Do you have automation in place? What's the balance there? When are you doing it manually? When are you automating it?

How are you gonna strike that balance? What testing tools are you gonna use, if any? Even if you're just doing manual testing, are you using spreadsheets? Are you using a notebook? How is the information being shared?
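If API testing is on that list and you're wondering what one of those checks might actually look like, here's a minimal sketch in Python using the requests library. The endpoint, token, and response fields are purely illustrative placeholders, not from any particular integration.

```python
# A minimal sketch of an API-level check, assuming a REST integration
# endpoint and the Python "requests" library. The URL, token, and expected
# fields are placeholders; adapt them to your own integration.
import requests

BASE_URL = "https://example-integration.invalid/api"   # hypothetical endpoint
TOKEN = "..."                                           # supplied by your auth process

def test_order_lookup_returns_expected_fields():
    resp = requests.get(
        f"{BASE_URL}/orders/12345",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    # The integration should respond successfully and include the fields
    # that downstream systems rely on.
    assert resp.status_code == 200
    body = resp.json()
    assert "orderNumber" in body
    assert body["status"] in ("Draft", "Activated")
```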

Do you have any assumptions, maybe that the testers are gonna get their hands on the work at this point in time, or that they will be able to test certain things? What constraints do you have? Do you have Christmas in between, and suddenly there are constraints on time?

Test data strategy. How are the testers gonna get their hands on the test data? Who produces that test data? Where is it?

How can they get it? How soon can they get it? How can they get it and put it into the test system? How is that gonna work?
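As one illustration of what a scripted test data strategy can look like, here's a minimal sketch that seeds a couple of records into a sandbox, assuming the open source simple_salesforce Python library. The credentials, object names, and field values are placeholders only.

```python
# A minimal sketch of scripted test data seeding, assuming the open source
# simple_salesforce Python library and a sandbox login. Object names and
# field values are illustrative only.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="tester@example.com.qa",   # hypothetical sandbox user
    password="...",
    security_token="...",
    domain="test",                      # "test" points the login at a sandbox
)

# Create known, predictable records the test cases can rely on.
account = sf.Account.create({"Name": "QA Baseline Account", "Industry": "Technology"})
sf.Contact.create({
    "LastName": "Tester",
    "Email": "qa.tester@example.com",
    "AccountId": account["id"],
})
print("Seeded account", account["id"])
```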

And finally, test coverage plan.

You probably know about code coverage, but what else are you gonna cover? User stories: do all user stories have to have their own tests?

Are you identifying risks? Do those have tests as well? Do you have a wider gap analysis of all the metadata that is in the org? And then you decide, okay, these flows are not being tested, and therefore they're part of your plan.

And then there's the second aspect of test plans: identifying risks. What can go wrong with testing, and what is your mitigation strategy around it? Testing procedures: what are the acceptance criteria, what is your definition of what good looks like, so that until this has happened, we don't give our green light for things to go live? What are your criteria and your standards?

Defect management. How are you gonna manage defects? What different levels of priority and importance do your defects have?

Is everyone on the same page on those? Are you gonna fix all defects before you go live? Which defects can be postponed and fixed at a later date?

Testing frequency. How often are you gonna test? Are you gonna test, as we saw on the DevOps loop, only after things have been built? Are you testing every time that somebody makes a change on the code base? Is that being tested automatically?

How often are you testing? Are you testing in production all the time or only in sandboxes?

And finally, who are part of the team? Who are the testers? Or if you don't have testers, who is taking on that responsibility?

Who is reviewing their work? And how are you training testers to learn how to use the new tools, the new changes, the new systems? Because otherwise you just give them a user story on a new system and they're like, well, we don't even know how to get our hands on it. So during the planning stage, just take a look at test plans.

If you have testers, maybe you can encourage them. You don't have to fill in a massive document; it's really about asking those questions and trying to have an answer, so that you're planning ahead in terms of what you're gonna do with testing.

Let's move on to design.

Why should testers be involved during the design phase? And here I wanted to highlight the collaboration between testers and all the other groups of people, starting with architects. So why would you involve testers with architects? First, because the sooner they understand the systems that are gonna be tested, the sooner they can start learning how to test them. Maybe they require new tools and new approaches to test those systems.

Also, they get an understanding of the data flows. How is the user gonna move across these systems?

And therefore, if we want to do end-to-end testing, how do we move from one system to another? What tools can we use to do that? And how does the data flow across systems? So if you involve them sooner, during the design phase, and they can have those chats with architects, then they can prepare a lot of work that otherwise would have to happen later down the line.

If you involve testers with business analysts, I think it helps them to understand which parts of the work that is gonna be done are important and therefore will require more testing.

Testers are always looking at acceptance criteria to understand what good looks like, and I think they're a good resource to have at hand when you are defining the acceptance criteria yourself. So if you're a product owner, or maybe a business analyst, and you have to define the acceptance criteria for a user story or some piece of work, you can ask the testers for help, because maybe you have some gaps in there, some angles you're missing, and they are so used to looking at acceptance criteria that they can help you define those as well. And also, if you are defining personas, the different prototypes of people that are gonna use the tool, like salespeople or managers of some kind, that will help them to define their tests from that point of view.

So instead of "as a generic user, I will do this and that," it's "as Samuel, a salesperson in this department, who behaves and thinks this way and is this familiar with the system, I'm gonna do this test case," and maybe manually go through it as if I were that person.

Finally, I think it's good to collaborate with UI UX designers.

I think testers are very familiar with the things that are wrong in the system. Maybe not on the first iteration, but on the second iteration they can bring their defects and say, well, these are pain points that users are currently feeling, and maybe a pain point is related to a defect in the system. And finally, maybe the testers can give input on the beautiful and magnificent designs that sometimes come up, to say, this is great, but there's no way for me to test these interactions because of the way you're asking the user to behave; there's no way for me to simulate that at a later date. So maybe they can influence decisions to make testing easier.

Moving on to implement and test. So usually in the life cycle, you have implement and then test, but my opinion is that they don't have to go one after the other, and we will see that right now. So I put implement and test on the same one. And my main question to you is, well, how should developers and testers work together?

In my experience, there are mainly two camps, two ways of doing it. Model A, let's say: you build first and you test later. And, unfortunately, I think that's mostly the case when I've seen it around.

But here are some thoughts I have about that. The testers get involved once the feature is built and it's handed over to them. So whatever feedback they have is gonna happen late. It's not gonna happen as the thing is being built; it happens late because they only get asked to get involved at a later date.

The developers probably don't have any tests they can run as they're building things. So no automated testing available during development potentially.

Knowledge and responsibilities are isolated and compartmentalized.

Developers know how things have been built. Testers barely do; they only see it as if it were a black box, just on the surface. So knowledge very much stays with the developers or admins, whereas the tester has very little clue of how things happen.

And then, if you've worked in this model, there's a lot of back and forth. The developer finishes and hands it over to the tester. Maybe the tester has to start from scratch asking questions. By the time they're done with testing, the developer has already moved on to another user story, their knowledge of it is gone, and the tester is like, oh, I found all these defects. And then the developer is like, what did I build here?

So there's a lot of context switching for the developer and even for the tester later on, which, just involves delays, lots of back and forth.

Now here's an alternative where you try to build and test in parallel as much as possible.

And, personally, when I've seen this working, it's just so smooth. It's a delight to work in these sorts of teams, where testers and developers are working together from the beginning. They are bouncing ideas off each other, a lot of communication, a lot of feedback about how things are being built. The tester probably has access to the feature branch where the developer is building things, and they have their own environment, and almost in real time, as things are being developed, they're trying them out: okay, this is how it's gonna work.

They are able to create test cases, sometimes even automate them, so that by the time the developer is almost finished, they have something where they can just click play and check that it behaves the right way. So the developer is in a position where they wouldn't hand over something that they can already tell is not working as expected.

Then there's shared knowledge and responsibility: it doesn't feel like the developer develops and the tester tests; both of them are involved in the decisions made around development and testing. And if you ask the tester how it's been built, they can tell you, and the same with the developer about the testing. There are no silos in terms of knowledge.

And what I've found is that testing finishes shortly after development, and there's less wasted time.
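To make the "click play" idea a bit more concrete, here's a minimal sketch of the kind of automated check a tester could build alongside development, assuming Selenium WebDriver in Python. The org URL, field label, and authentication step are illustrative only; real Lightning pages usually need more robust locators or a dedicated framework. Once a check like this passes reliably, it can simply join the regression suite we'll come back to in the deploy stage.

```python
# A minimal sketch of a "click play" check a tester might build alongside
# development, using Selenium WebDriver in Python. The URL and locators
# are placeholders, not a real org or page object.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_new_discount_field_is_visible_on_opportunity():
    driver = webdriver.Chrome()
    try:
        # Assumes the test user is already authenticated, e.g. via a login
        # helper run beforehand (not shown here).
        driver.get("https://yourorg--qa.sandbox.lightning.force.com/lightning/o/Opportunity/new")
        wait = WebDriverWait(driver, 20)
        # The new field the developer is adding should appear on the form.
        discount_field = wait.until(
            EC.visibility_of_element_located((By.XPATH, "//label[text()='Partner Discount']"))
        )
        assert discount_field.is_displayed()
    finally:
        driver.quit()
```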

How much time do I have? Good.

Let's move on to deploy.

Here again, I have a question.

How do we ensure that new changes don't break or degrade the system?

We've implemented our beautiful user stories. We've tested them. We deployed them. How do we know we didn't break something else that we didn't have time to test, because we don't have unlimited resources?

How do we know that the system is not going slower for the user, for example?

So maybe you're familiar with topics like regression testing.

Automated regression testing means that you have a suite of tests, not manual but automated, that you can just run, and it verifies that the system behaves as expected, including with the new changes that you've made. So if you deploy something, you run your test suite, and something doesn't work, it's probably because it just got broken right now.

Maybe you have some way of rolling back your changes or to quickly fix those.

But, yeah, you need to ask yourself: is everything else still working? Are user flows taking longer than usual? And that's something that sometimes we don't measure. We don't know how long it takes for a user to do tasks A, B, and C unless you have some way of measuring that, maybe with a test case which simulates all these tasks through the UI, and it takes one minute.

We deploy changes. Now it takes two minutes. But we were not supposed to influence this user journey at all. Unless you have some sort of automated tests, it's very difficult to understand those baselines and how you're impacting them.
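Here's a minimal sketch of how an automated test could keep track of that kind of baseline, in plain Python with a placeholder standing in for whatever steps actually drive the journey; the sixty-second baseline and the tolerance are just example numbers.

```python
# A minimal sketch of baselining a user journey's duration inside an
# automated test. run_quote_approval_flow() is a placeholder for whatever
# UI or API steps make up the journey, not a real helper.
import time

BASELINE_SECONDS = 60      # what the journey took before the release
TOLERANCE = 1.25           # allow 25% drift before failing the check

def run_quote_approval_flow():
    ...                    # drive the UI or API through the journey here

def test_quote_approval_flow_has_not_slowed_down():
    start = time.perf_counter()
    run_quote_approval_flow()
    elapsed = time.perf_counter() - start
    assert elapsed <= BASELINE_SECONDS * TOLERANCE, (
        f"Journey took {elapsed:.0f}s, baseline is {BASELINE_SECONDS}s"
    )
```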

Can the system handle the pressure? So are you doing any sort of performance or load testing?

Are you testing, maybe on a full sandbox, how the system will behave with something closer to how users will actually use it?

And finally, are you slowing down the system because you introduced code changes that bring new risks, new performance degradation, or security issues?

And for that, obviously, you can use certain tools that will scan your code and tell you maybe you're doing this, which is not efficient or you're introducing these particular security risks.

So hopefully some of these questions ring a bell or relate to you, and you say, yeah, we have that problem, but we've never asked the team how we can solve it. Maybe it wasn't that much of a pain, but maybe it's something worth doing.

Finally, everything has been deployed. It's time to just use it as intended, hopefully, and maintain it. So see what works and what doesn't. Sometimes you thought you were building the right thing and then you realize users are not using it as expected, or they prefer something else, so you need to go back to the drawing board; or sometimes there are just things that don't work as well as they should.

So a few things to think about here as well.

The more you test in terms of breadth, the better, and also test more often. How many of you have tried to deploy something to production only to find unit tests are not passing, production doesn't look like it should, somebody made changes we were not aware of, and the deployment has to be delayed because we cannot just deploy while tests are not passing?

If you were testing production and your environments more often, you would notice things breaking sooner rather than later.

A quite important topic, I think: stakeholders, and that can be not only your product team and your managers but also the end users. How are you now enabling them to share their feedback about the things that you built? To tell you, here's a bug, here's a defect.

They may ask questions like, is this supposed to work this way? And then you realize, no, that's not right. How do they share their feedback?

How are you empowering them? Do you have a form? Do you have any process where you say, if you see something wrong with what we built, please go through this channel so that we understand, and on the next cycle we can take that into consideration? Which brings me to the third question, which maybe you feel yourself in your own teams: how much priority is given to new shiny features versus fixing bugs and paying down your technical debt? Which is not nice, and nobody notices it, and nobody will tell you thank you. So it's very easy to say, hey, this is everything that we delivered, instead of saying, this is all the pain we removed from your day to day.

Or, in theory, we improved the performance, so everything should be faster. And something that we find most of the time as we build products is that users care more about removing pain from their day-to-day work, and about things just going faster, than about new features where the team just had the nicest idea but didn't test the idea. Nobody asked for it; it just seemed like a nice idea. And then they build it, they put all the work and effort into it, only for the users not to use it, or not to use it as much as we thought they would.

So yeah, what sort of conversation do you have in your team when you're planning? Like, well, yeah, we have all these new things we could build, but there are all these bugs that we said we were going to fix after deployment. There's always that tension there. So please pay your debts and fix your bugs eventually.

Maybe you were expecting this talk to be a bit more technical.

Sorry to disappoint you.

I just wanted to raise some questions, make you think more. I think you came to this talk because you take quality seriously or you want to take it more seriously.

So hopefully some of these questions helped you to think about your own team and how it behaves. Maybe you are the tester. Maybe you are the person in charge of quality. How can you raise those questions to the team and say, hey, why don't we discuss planning from the quality side of things?

How can testers get involved sooner rather than later?

But I also wanted to touch on some of the tools, in case, when I was talking about automation, you were thinking, we don't have that, or you think about your own processes and everything feels too manual, too much time spent doing manual tasks. So I just wanted to touch on some tools in case you wanna do a bit more research on how they can help you.

First one, test management. How are you managing your test cases, all of them? How are you structuring those into test suites? How do you relate those to the work that's going on on the release management side of things, like your user stories, your sprints, your releases?

At Provar, we offer Provar Manager, which is an AppExchange tool that can help you with the management side of things. But you also have tools like TestRail, Xray, Zephyr, Tricentis, and Testmo. There are plenty of them. Some of them are more generic; some are more Salesforce-centric.

But, yeah, don't think that because you have Jira your testers are happy and can use it for everything, or that they can just put their test cases in a notepad doc or something. No, they probably need their own tools to manage their own work.

Test automation. Usually, when people think about quality and testing, they just think about test automation, which is not the whole of quality, but it's probably the part with the most return on investment, because you can quantify how many manual hours you are saving by automating things. We provide Provar Automation; you can come downstairs to speak to us and talk more about that.

But you have tools like Selenium, which is more generic and very well known. There's also Playwright and so on. And there's UTAM, which Salesforce provides as well. And you can see the icon there.

That means it's open source and free. So if budget is a constraint for you, maybe you can look into those.

UTAM is obviously Salesforce-centric, whereas Selenium is more generic. Then you have other tools like Tricentis, CRT, which is Copado Robotic Testing, and ACCELQ. Downstairs, you also have Testsigma.

Obviously, I will talk about Provar, but personally, I prefer that you just care more about quality and improve your processes with the tools that benefit you and that work for you. And again, if budget is a constraint, there are open source tools out there as well.

Code quality. If you don't have a way to measure this or you're not thinking about it, your developers are running wild with code, and every time you ask them to change a checkbox it takes a week, maybe it's time to look at the code base and ask why that is.

Salesforce has Code Analyzer, and PMD can also help. SonarQube has a free community edition, and then you have tools like CodeScan, Clayton, Quality Clouds, and others that can help you identify the issues in your code base. Finally, on the DevOps side of things, we have the Salesforce CLI, which Salesforce provides open source, and Salesforce DevOps Center, which is relatively new, an improvement over change sets. We provide a CLI to embed quality into your pipelines, and it's open source.

Oh, there you have Gearset, Copado, Flosum, AutoRABIT, Prodly, Blue Canvas, Salto, and then a couple of mentions for, well, I don't know if you can see it really.

D2X and sfdx-hardis, which are open source tools that you can use in your deployment processes to help you with this.

Nice.

So some time for Q and A.

If you have any questions about what I've just spoken about, or you have any problems in your team, or you want to know how to raise awareness about quality in your team, or you have any issues, now is the time.

Yes.

I think when we talk about a regression suite, it feels like an overwhelming amount of effort to cover the whole system. Ideally, it happens bit by bit. As you develop your user stories, they come with their own tests, and those all become part of the regression test suite. So as you build, that test suite is becoming bigger and bigger and testing your whole system.

I think that question, I would maybe rephrase it.

If you introduce changes to the system, the developers are probably changing the unit tests to reflect those, because otherwise they don't pass. In the same way, you need to ask the testers who maybe automated those test cases to change them. If the underlying process changes, you need to change the test cases that test it.

Every time you introduce changes to the system, there's probably tests that were covering those scenarios that will need to be updated.

There's an inherent maintenance effort, which comes with it.

But I would say by the time you've pushed something to production, you always have a clean, stable regression test suite.

If that breaks, it shouldn't be because it wasn't updated; it should be because something actually broke. So it shouldn't be like a false positive or a false issue.

I know, it just adds more to the list of things that teams have to do. But, yeah, keep your tests up to date. In a way, the tests reflect how the system should behave. If you don't have documentation, you can at least look at the tests and say, well, that's how it should behave. Some teams don't have the documentation. So if the tests are wrong and no longer reflect the system, then you don't know how the system should behave.

Does that answer your question?

You would eventually merge your SAT tests into the wider regression test suite. They will become part of it.

Yeah.

Yeah. Any increment you make that ends up being part of the wider testing of the system.

Yeah. I can see some challenges, of course, but it makes sense.

Thank you. Yes.

Yeah. You have different approaches. Some people say you don't need testers. The developer should be doing that the same way they're doing unit tests.

Yeah. I think there's a stigma. We just want to build things and get them out there, and we don't want to spend time documenting or testing.

Certain tools help you with maintenance, but inevitably, if you change the underlying process, the tests that tested it have to change. They need to be updated. The more you change the system, the more you need to update your test cases.

I would recommend doing it as soon as possible: the sooner things change, the sooner you update your test cases, so that it doesn't become a burden at a later date, where it's like an afterthought and then it feels painful and you don't wanna do it.

If you have a tester, a dedicated person doing the testing, they probably do it faster than the developer or the admin or anyone else who would do it. And if you go for model B, where they work in tandem and in parallel, you will end up seeing that tests are not as out of date as they were before, and the tester doesn't have any problem doing their job. So, yeah, if testing is a burden for the current team, then you probably need somebody who cares about it enough to just be dedicated to it.

Good.

So that's our time for Q&A. Hopefully it wasn't too fast, and hopefully you got some of those questions, some of those ideas, to take back to your team and say, hey, look what we learned. Maybe we need to care about quality a bit more. Thank you.