Description
Watch our Gearset accelerator session where we talk about best practices and game-changing features in Gearset for change monitoring and unit testing.
In this video we walk through:
- Tracking code quality: Gain some insight into how you can use Gearset to standardise your code quality.
- Tracking metadata changes: Learn how to monitor all changes that are taking place across your orgs.
Transcript
So, yeah. Hello, one and all, and welcome to another edition in our Gearset accelerator series.
So we'll jump into a demo very shortly. But just as a quick introduction, Hugh and I here are part of the customer success team over at Gearset. We're here every day to help you all get the most out of the platform.
I'm sure some of you may have spoken with us directly in the past, and perhaps some of you may not have, but it's great to have you here either way. So as a quick recap, in last month's accelerator session, we spent twenty minutes on the platform talking about the compare and deploy functionality within Gearset.
We looked at some of the tips and tricks of what you can do with compare and deploy, and we showed you how to get the most out of it as well. Today, we'll be moving onwards from that, and we'll be looking at some of the foundational automation functionality within Gearset. Many Gearset customers rely heavily on automation to cut down on human error, to free up time by removing unnecessary repetitive tasks from their day to day, and ultimately to take big steps towards maturing their DevOps processes.
In a moment, I'll hand over to Hugh, and we'll start looking at org monitoring, static code analysis, and unit testing, which is typically a great first step in introducing automation into your Salesforce release process.
So we'll do that in just a second. But if you've got any questions throughout, there's no formal process here. Just pop them in the chat, and we'll do our best to answer you, either on the fly or after the demo. So that's enough from me. I'll hand you over to Hugh now for a bit of a demo on the platform.
Brilliant. Thanks, Matt. Let me just start off by sharing my screen, making sure we're all up and running there.
Can we see my screen?
Looks good to me.
Perfect. Brilliant. Well, lovely to meet everyone.
So we'll be starting with the org monitoring aspect of Gearset. I'll just navigate on the side here down to the monitoring section. You're going to see a long list of jobs, and I'm just going to filter it down so it's a bit tidier, with just the jobs I'm concerned with.
So today, we're going to start here with org monitoring. Using native tools to track back an unwanted change in a Salesforce org can be a bit arduous, especially when you're trying to track down specific changes.
And even then, how easily are we able to revert these changes back to previous states or move these changes on? What flexibility do we have to actually work with this information?
What we're going to show you here is how, using Gearset, we can track these changes day by day and restore them if needed. All of this is going to help us build a strong audit trail for our environment, with a few extra options when it comes to using the information in these different snapshots.
I'm going to try and make this as relevant as I can. We've got a few common use cases that we see teams use these monitoring jobs for, and we'll highlight those now with the jobs I've got set up on my screen here.
Starting with this middle one: certain stakeholders may need high visibility into what changes are happening in what might be one of our most important environments, production.
This job here is set with a wide scope of metadata that it's going to capture and track, to keep an eye on when changes happen. As you can see right away, there have been some differences, so there's been some activity within my production org.
On the bottom here, potentially, I care more about the state of the code that exists in this environment.
So I can reduce the scope of what we're capturing here to focus on a few select objects and a few select parts of the Apex code that I want to keep an eye on.
And finally, maybe my team works in a shared environment, or for a certain project I have to spin up one of these shared environments.
That can lead to some issues around overwriting each other or keeping track of what's happening where. So I've got this job set up on a specific environment, my development environment, to keep track of this and allow me to revert these changes if that does happen. And I'll be pointing out how we can do that in a bit.
Just to note, all of these are right now set on different environments, but I could have all three of these jobs on the same environment, just with different purposes. So you're not restricted to one job per environment.
There's a lot of flexibility there that you have.
If I click into this middle job here, you're gonna see we have a few options along the bottom.
So to start with, you're going to have the copy job ID option.
This is more on the troubleshooting side of Gearset. So if we have an issue with a certain job, or we have a question, I can take this job ID, put it in the in-app chat in the bottom right here, and my wonderful colleagues, our customer support engineers, can help dig into any issues with you.
We also have the option to take a snapshot, so we can manually trigger this job. This could be used if we're just about to deploy a significant change to this environment, and we want to have the most up-to-date snapshot beforehand so that any failures allow us to roll back.
And I've used the term snapshot quite a lot. So just in the context of Gearset, a snapshot is essentially us capturing the state of the metadata at that point in time. This is going to allow us to refer back to it for any comparison we need and build up those differences over time, but also allow us to roll back to a certain date.
And if we need to edit this job, we can. You can see here we have options to reduce it in scope, change the settings, or add more stakeholders in the notification settings.
There'll be a bit more about that in a moment when we actually set up one of these jobs live.
Now, we have the static code analysis view here, so I'll just click into this for us.
And here, we're going to get an overall timeline and history of the static code analysis for this particular job.
That static code analysis, in the context of Gearset, is essentially a commentary on the quality of your code base, and it's based on the PMD library. It's very similar to what you might have seen in our last session in compare and deploy, or if you're already familiar with some of these features of Gearset.
So maybe at the moment my team really cares about improving the security of the code that we're writing.
Maybe we have some third party consultants coming in and working in this environment. We just wanna keep track of what they're delivering, what we're delivering, and how that is changing over time.
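To make that a bit more concrete, here's a minimal, hypothetical Apex sketch (made-up class and method names, nothing from the demo org) of the sort of pattern PMD's Apex security rules are designed to flag: dynamic SOQL built by concatenating user input, alongside a safer bind-variable version.

```apex
// Hypothetical example of the pattern PMD's Apex security rules look for.
public with sharing class ContactSearch {

    // Risky: dynamic SOQL assembled by concatenating raw user input,
    // which a SOQL injection rule would flag.
    public static List<Contact> unsafeSearch(String userInput) {
        String query = 'SELECT Id, LastName FROM Contact WHERE LastName = \'' + userInput + '\'';
        List<Contact> results = Database.query(query);
        return results;
    }

    // Safer: static SOQL with a bind variable, so the input is treated as
    // data rather than as part of the query text.
    public static List<Contact> saferSearch(String userInput) {
        return [SELECT Id, LastName FROM Contact WHERE LastName = :userInput];
    }
}
```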
We can get more granular with this, and we're going to take a look at some individual snapshots and how that looks in a moment.
So going into this job here, we'll use this example. We can actually go into the history of this job.
And when we go into more detail, you're going to see a list view of the previous snapshots that have happened in this particular job, and quite a high-level view of what's actually happened within them: the status if they're different, the total differences, what changed, what's new, what's deleted, and, again, what code issues are present there.
We're getting quite a lot of information here, so let's just filter that down to the past week so we have a cleaner view of this.
Now, clicking into, say, the second run here, you can see we have a few different options, similar to the previous screen.
Starting at the right here, you can see we can explore which profiles and permissions exist in this particular state of the org, or roll back to it. Now, for those that were on the last session or are familiar with the comparison process in Gearset, both of these options are going to use that same engine of having a source and a target, allowing you to see the metadata included in that diff viewer and choose what you want to roll back or deploy to that target.
And as promised, we can take a bit more of a granular view with the code analysis here.
So this is a commentary on the quality of the code in this particular snapshot. Perhaps I have a keen focus again on security.
Our team here, we want to improve that. We wanna see where these errors are coming up and where we can quickly identify these issues.
Now I could scroll down here.
I could filter for specific comments on this, or on the left there, I could quickly filter through to just security issues or maybe best practice, or maybe we have a worrying increase in the error prone code that we have coming through.
So all of this is going to allow you to see where these issues are showing up in your code and what the cause might be.
So those are the different views we're going to get: different levels of information, at a high level and at a more detailed level, on these jobs.
Now, if I go back to the monitoring overview here, I'll run through how we can quickly set up one of these jobs.
If we add a new job here, and let's just say this is going to be targeting a particular org, for example, I can give that a friendly job name and choose when I actually want this run to happen. So maybe at the end of the day, when, essentially, all of these changes have gone through. And I can choose the org that I've already connected via Gearset.
Now, with the static code analysis, you're not set on using just what Gearset has preset for you. You can alter this in the account settings and maybe have a more custom ruleset specific to your needs, and you can choose that here as needed.
Now, with notifications, you can send a notification on every single run, but that might create a lot of noise.
So maybe you only want one when there are actually differences detected on that run.
We can email that to multiple different stakeholders within the team, have these texted to us.
And if you're a team that uses Slack or Teams, you can have a particular channel set up to have this all piped through, so you can have that external view of what's happening as well.
Now, again, for those that were at the previous session or are familiar with a lot of Gearset functionality, this metadata filter is going to seem very familiar to you, as this is a core part of how we tell Gearset what metadata we want to capture.
So we can use some default comparisons here.
We can reduce this in scope or increase it, but we can also block out a lot of white noise. Maybe there are a few objects in here that might change from day to day but don't actually have that much of an impact.
So you can refine this as needed, so you're really focusing on what you want to do as a team with this job as well. For this, I'll stick with one of the basic comparison filters and press save there. You can see we have simply set up a job now, and that's pending its first run.
So what Gearset's going to do here, once the job is made, is take an initial snapshot so we have a baseline to work from. From here, we can detect what differences there are day by day and also what we can roll back to as needed.
So that is org monitoring within this session. The next thing we'll be talking about is unit testing.
So the monitoring we just went through, in that specific example, is a bit more focused on production and maybe the right-hand side of our workflow and process.
Now we're going to look at unit testing, which can help us a little bit earlier in our process and can raise visibility into issues before they even get close to production.
We can start shifting our view left, identifying issues earlier, helping us increase the reliability of our system.
Let's say, for example, maybe we as a team are kicking off a more code-heavy section of our development, and we want eyes on this testing as early as possible.
This is also about promoting and encouraging good code maintainability within our team and throughout our process.
And, hopefully, with these automated jobs in Gearset, we can actually achieve this.
It's a very similar dashboard to what we've just seen in org monitoring, with all the jobs I have set up present, as I've just filtered down to those.
You're going to get an overall commentary on the code coverage of the org you're targeting for this job, the current threshold that you've set, the number of times this has run, the last run, and also a bit of detail on whether there's low code coverage here or if you have any failed tests.
On the right here, you can see we can run this job as needed, disable it, and dive into the history in a little bit more detail, which we'll do in a second.
Along the bottom here, you're gonna see, again, familiar settings to allow us to help troubleshoot with the team on this end and also edit these jobs as needed.
If we go into the view history of the UAT one, again, you're going to get a bit of a deeper view into what's actually going on with this particular job.
So taking a look at this job, you can see that we have the number of tests that have actually run.
We can see the total test time, the number of tests that have been skipped, failed, and passed, and, again, a commentary on that code coverage and the coverage change.
And we have a few more actions that we can do here when we actually view these results.
So let's jump into the view results option.
Here, we have two different tabs that we can go through. We have test outcomes.
So it kind of does what it says on the tin there. Here, we can get a view into the tests that have passed, failed, and been skipped.
If you have a lot of tests going on, we can search through these with the global search at the top here, or you can quickly jump to, say, just the few failed tests that you want eyes on, the tests that have been skipped, or just a view into the tests that have passed as well.
Now in the top left, we can look at the code coverage.
So here, we're going to get an insight into the coverage: how many lines are actually covered by tests, and how many uncovered lines there are. Along the bottom, we're going to be able to easily identify where we might be lacking here.
And us as a team, we can go and address this as needed early in the process before this comes up as an issue closer to deployment time.
We want to bring as much of that pain to the left as we can with a lot of this.
Again, we can narrow this down to tests that have been modified in the last seven days and filter through this if there's a lot of information to go through.
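To make those coverage numbers a bit more concrete, here's a minimal, hypothetical Apex sketch (made-up class and test names, not from the demo org): the test method drives execution through the class under test, and only the lines it actually reaches count as covered in a run like this.

```apex
// Hypothetical class under test (its own .cls file in a real org).
public with sharing class DiscountCalculator {
    public static Decimal apply(Decimal amount, Decimal percent) {
        // Every line executed during a test run counts towards covered lines.
        return amount - (amount * percent / 100);
    }
}

// Hypothetical test class. Lines of apply() that never execute under test
// show up as the uncovered lines in the report described above.
@isTest
private class DiscountCalculatorTest {

    @isTest
    static void appliesPercentageDiscount() {
        Test.startTest();
        Decimal result = DiscountCalculator.apply(200, 10);
        Test.stopTest();

        // Assertions keep the test meaningful rather than a pure coverage generator.
        System.assertEquals(180, result.intValue(), 'A 10% discount on 200 should leave 180');
    }
}
```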
Now, similar to our org monitoring jobs, I just want to run through how we can actually set one of these up, and it's going to look very, very familiar to what we just went through.
But here, you can see when we want these runs to actually go, what time.
Again, give this a friendly name as needed, and select the environment that we want to work on and focus on for this particular unit test.
We can set our own minimum code coverage here, and we can also tell Gearset what tests we want to include on top of all of this as well.
And for the notification settings that we just went through before, you can be notified on every run, or only when tests fail or the code coverage drops.
And let Gearset know where you want these notifications to actually go to.
Again, you can set up dedicated channels specific for this unit testing as needed.
And once you've done that and pressed save, that's all set for you. You can carry on as you want, carry on developing that code, ideally putting in those tests that are going to pass, and Gearset is going to do the rest there for you.
So with that, that's an overview from A to B of org monitoring: setting those jobs up, what they can do for you, and the different use cases and views you're going to get, alongside unit testing, with how we can set those unit test jobs up, what information we're going to get from Gearset, and how we can use that.
I think that is everything from me. So we can address any questions that have popped up.
Yeah. Thank you. Thanks, Hugh. We have got a couple of questions in the chat, so let's cover these both off. The question that's come through here is: will the monitoring job track changes in data?
Good question. So with org monitoring in Gearset, it's typically only metadata.
However, we know CPQ can be handled differently.
So with CPQ, you can track the metadata and that data config within these jobs as well. However, you have to have a certain license to do that with your monitoring.
Hope that covers that.
Hopefully. Yeah. Thank you. The other question that came through on chat just a moment ago is, for the unit testing, am I able to run specific tests, or do I have to run all of them?
Good question. I think it's one we get quite a lot. So with this, you can't be selective with the specific tests that you want to run; you'll have to run all of them. However, for tests that aren't included in that by default, you can essentially tell Gearset that you want to run those on top of the tests that are already running. So you can make sure you're getting coverage where you want it and testing what you want to.
In a way, it's kind of all or nothing, but you can add more to that "all".
Great stuff. Alright. We've got a couple of questions that come up quite regularly from customers as well, so maybe we can address those at the same time. One of the questions that we see quite regularly is: do the saved snapshots expire in org monitoring? And if they do, how long do I have to roll back to a previous run of the monitoring job?
Of course. Good question. So to answer it plainly, no, these snapshots do not expire. So if we go to one of these jobs that I've had going for quite a while now and view the history, we can go to an all-time range. That might take a little while to load up with how many I have, but I can jump back as far as I need to, to see what the state of the org was back then.
Your whole purpose might be a specific change you're trying to narrow down, and from there you can identify those changes and bring them forward as needed.
Really nice stuff. Yeah. That's eight hundred and fifty one snapshots you're loading there. That's definitely gonna take a while, isn't it?
Yeah. I think I'll just click off that.
So one of the other questions that we get quite regularly is: why might tests be skipped during the unit testing?
It's another good question. So with these tests that we're telling Gearset to run, the most typical reason we see them being skipped is that they might already be queued to run for that particular org. They might be manually queued or already queued by another Gearset unit testing job. So, essentially, we're not going to run those tests twice for you. We're going to identify what's already queued and run those tests as needed.
Nice. K. Sounds good. Alright. We'll keep our eyes on the chat for any more questions to pop up, but we're ahead of schedule. So, I could probably start to wrap this up a little bit.
So, yeah, that kind of brings us to the end of today's Gearset accelerator episode. I hope today's demo has been useful for everyone. We'll wrap this up here in just a moment. But, as a reminder, you can contact Hugh, me, and the rest of the customer success team over here at Gearset by emailing us at success@gearset.com. So if you've got any final questions, you can drop them in the chat now, or, otherwise, feel free to email us, and we'll definitely be able to help you out. We'll do what we can to help you get the most out of the Gearset platform.
Outside of emailing us, you can also contact us via the in app chat as well. So when you're logged in on the platform, that'll appear in the bottom right hand side for you.
And then finally, I guess, if the Gearset accelerator session today has been useful for you and you want to rewatch this or share it with colleagues, then you can. We'll follow up with links to this recording and last month's in the coming days, so you'll hear from us on that. And then, as far as what's next, we have another webinar coming up very soon. It should be in three or four weeks; I think it's taking place right at the beginning of April, and we'll let you know once that's scheduled so that you can attend. We'll be covering off CI/CD and version control, so we can really start to look at sophisticated automation and maturing those DevOps processes even further.
So that brings us to the end of today's session. But, yeah, thank you all for attending, and we'll see you all very soon.
Bye.