Description
Watch this talk from Francis Pindar (Technical Architect at AdminToArchitect.com) as he walks through 10 key habits of highly successful DevOps engineers.
Francis discusses:
- His DevOps journey
- The importance of DevOps in the current market
- The key habits of successful DevOps engineers
Learn more:
Transcript
Thank you very much. I am gonna leave you in the very capable hands of Mr. Francis Pindar, who's gonna be telling us about the ten habits of highly successful Salesforce DevOps engineers.
Francis, take that away. Hello, everybody.
Great. I'm gonna chuck this down here.
So, yeah, my name is Francis Pindar, and I'm here to talk to you about the ten habits of highly successful DevOps engineers.
Because I'm on a bit of a mission to empower the next generation of Salesforce admins and developers to become successful architects, and I've trained over a hundred and thirty thousand people now, in a hundred and sixty countries, on Salesforce.
And this is really kind of my DevOps journey. I don't know if anybody saw my talk last year.
I started off my DevOps journey quite a while ago, trying to learn it from scratch pretty much. And since then, I've wanted to learn more and more about the Salesforce platform and best practices and everything else, and, very selfishly, I set up the Salesforce Posse podcast so I can get people on, ask them questions, and really pick their brains on stuff. You'll probably recognize a couple of people on here, and somebody on there as well, talking about learning DevOps and best practices and stuff like that. And just recently, we've been talking about the state of Salesforce implementations, and how it's changed, or not, recently.
And there's been quite a shift recently in the market, you know, lots of things happening, layoffs and things like that, but I think DevOps has been a bit of that shining light, right? Because everybody's wanting to do more with less, and that is the wheelhouse of DevOps, right?
And I think it's been a bit of the saving grace of Salesforce, because Salesforce is also getting more complicated, more complex. The honeymoon of Salesforce, I think, is over, right? There are a lot fewer new companies adopting Salesforce; it's just existing orgs that are getting more and more complex.
And so this is where obviously DevOps comes in. Now, let's just reset a little bit and take a look at DevOps, because, and it came up in the previous talk as well, DevOps isn't just an engineering-type role. It has now gone wider. If we think of the infinity loop of monitoring, deploying, planning, verifying, and releasing: all of this doesn't necessarily mean just scripting and building changes, putting in another git repo, or changing our branching strategy. It's actually seeing the value we're giving all the way through the process.
So it's more than just an engineer, really, now.
So, my ten habits of highly successful DevOps admins slash architects slash other.
The first one, I think, is this: eating cake? No, thinking in layers.
It's the approach to how you're actually adding value. I think DevOps can sometimes be quite closed, and it's quite hard to show the value you're giving, right? Unless you're really showing it to the business so they understand it. So it's thinking in this layered approach of, well, how am I gonna describe this and really maximize this investment in DevOps? And how can I demonstrate that across everybody in the organization?
Oh, there you go. Thinking layers. And Salesforce does this as well. Like, you know, who's seen this?
Right? Yeah, exactly. And the reason they show this is because their target audience is the C-level execs, right?
They wanna get buy-in at this level, and everything else. So this is how I do it when I'm trying to demonstrate the value of DevOps. It might not be the right way or the wrong way, but I go, well, actually, what are our organization's goals, the overall goals for the organization?
And it could be these; it could be something different. And off the back of those, there are some business value drivers and benefits that come out of what they're trying to achieve. And a lot of these are what you have in a Salesforce project: are we gonna improve the sales processes and all that kind of jazz? But for me, I wanna really pull out the ones that are relevant to DevOps, and actually show that we're adding value in the same way as making changes in Salesforce, in our DevOps world. So I can really demonstrate that we're on the same team, achieving the same goals.
But then after that, obviously, we come down to the measures and KPIs. In the Salesforce world we have win rates, we have conversion rates and all this, and we're trying to, you know, increase or decrease these. And then finally, we've got the functional capabilities of Salesforce, plus we've got our own DevOps world and the capabilities we can harness to measure, to then show the value. So then everybody can see the benefit that DevOps is having in the organization.
Make sense? Yep.
Brilliant.
So I've kind of left these blank, because obviously the next thing is about being data-driven. And I think this is a bit of a multi-layered thing, right?
The first thing is actually understanding your DevOps value stream.
So, I showed this last year, but this is how I looked at, when that requirement comes into the system, how is it flowing through, which systems are being used, and how many people are going to be working on this piece of work. And you can notice here, like, I had fifteen people signing off a deployment, and it was like, this is crazy.
But by being data-driven and understanding the metrics coming through, you can really understand, okay, where do we need to focus.
And obviously, there are the DORA metrics, right? Has everybody heard of the DORA metrics, I'm hoping? Yeah. Also, I kind of think this DevOps role is getting wider. It's not just looking at how we push change through the DevOps beast and the workflow; it's also asking whether the things that we're doing are actually giving value to the organization as well, that we're doing the right things, and marrying the two up together.
And so yeah.
This is taken from Gearset. Sorry, Gearset, but it's straight from your deck. So, yeah, it's important.
Lead time, adding value, and understanding where the bottlenecks are in the process is, I think, really important, because you need to know where to focus. Okay, my next one. I have no idea how long I'm taking, how quickly I'm going through this.
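As a rough illustration of being data-driven about this, three of the DORA measures (lead time for changes, deployment frequency, change failure rate) can be computed straight from deployment records pulled out of whatever CI/CD tool you use; the fourth, time to restore, needs incident data on top. The record shape and numbers below are made up for the sketch, not any particular tool's API:

```python
from datetime import datetime

# Hypothetical deployment records; in practice these would come from your
# CI/CD or DevOps tool's reporting. Field names are illustrative only.
deployments = [
    {"committed": datetime(2023, 11, 1, 9), "deployed": datetime(2023, 11, 2, 17), "failed": False},
    {"committed": datetime(2023, 11, 3, 10), "deployed": datetime(2023, 11, 3, 15), "failed": True},
    {"committed": datetime(2023, 11, 6, 11), "deployed": datetime(2023, 11, 7, 9), "failed": False},
]

def lead_time_hours(records):
    """Mean hours from commit to running in production (lead time for changes)."""
    total = sum((r["deployed"] - r["committed"]).total_seconds() for r in records)
    return total / len(records) / 3600

def deployment_frequency(records, window_days):
    """Deployments per day over the observation window."""
    return len(records) / window_days

def change_failure_rate(records):
    """Share of deployments that caused a failure in production."""
    return sum(r["failed"] for r in records) / len(records)

print(round(lead_time_hours(deployments), 1))      # mean hours commit -> production
print(round(deployment_frequency(deployments, 7), 2))
print(round(change_failure_rate(deployments), 2))
```

Even a toy report like this makes the bottleneck conversation concrete: a rising lead time with a flat failure rate points at queueing, not quality.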
Should be cool. Right. Okay. Next one.
Asking why, and listening. Right? I think listening is the key talent for a DevSecOps organization, especially when you're confronting, you know, teams that are a little bit challenging, I suppose.
I think of security and security architects in the highly regulated organizations that I've worked in. And it's like, no, we need to lock everything down. But there's a pragmatic way of understanding what risks they are actually trying to control, and also showing them, actually, you know, this is what we can do better, more efficiently, and bringing in the dev teams and the business, and really understanding what's going on. So it's almost that customer-centric mindset of going: these are the developers that are using the system; what are the most crucial elements for you? And being more empathetic, about listening, which I'm terrible at.
So yeah, and asking why a hell of a lot, to understand why we're actually doing this, and can we change it, and can we make it more efficient?
And then it's obviously being a great communicator.
So once you've listened and taken all that in, you can communicate it out. And this is where the layers come in as well: actually communicating at the right level for the audience you're talking to. You know, I'm not talking to a C-level exec about how I'm changing the branching strategy in git, because he probably doesn't care. He does wanna know how those business value drivers are changing the organization, and how DevOps is actually giving value back to that organization, which is a little tricky. But it's also having that big-picture vision and that detail orientation as well, so you can really understand and work on the delivery of that DevOps improvement process, as well as communicating it well across the organization, to the developers as well as operations. Okay, what do you think this one is?
Harry Potter. No.
Well, I suppose the glasses. Yeah.
My one claim to fame is I was in Harry Potter.
But, yeah, I was Fred Weasley's stunt double. That's another story.
Yeah, so, having a learning mindset. Now, I asked this question last year and one person put their hand up, so I'm hoping more people will put their hand up this year.
Who's read this book? Oh, yes. Brilliant.
Is it a good book? Do you like it? Yeah, I love it. Really, for me, this was basically my starting journey.
Well, not with Salesforce: with DevOps. It's basically a storybook, a novel, on DevOps. Right?
And the first hundred pages is just complete disaster after disaster. Right? I think the first page is how this public company's share price has gone down because they've missed their earnings targets again, due to this delayed project, the Phoenix Project, which is supposed to revolutionize the company. And as you're reading it, you're going, oh, yeah.
I know these characters: the Jeff in security,
or whatever it is. So it's really relatable. But it's a really great book, really. And whenever I work on a DevOps project, whatever,
I'm handing this book out to people. Right? Because I think it's brilliant. Now, the one after that is The Unicorn Project.
Anyone read that?
Yeah. Not quite as good as The Phoenix Project, I don't think, but it's still pretty good. And also there's The DevOps Handbook, which is all part of the same series.
But one of the key things that I really learned from that is the silent killer.
Anybody know what the silent killer is in DevOps?
Hopefully this will come up. Oh, no, it's not. The slides are broken. Okay.
WIP. Work in progress is the silent killer in any Salesforce project.
Or any DevOps, really.
It's this build-up of work at different workstations. You know, you want to get that optimal process all the way through, and get work pushed through quickly: tested, automated, and through the system.
And, yeah, The Phoenix Project really originated from this book, The Goal, which was all around manufacturing techniques and stuff like that. And this is really the key tenet of it: improvements to any part of the system other than the constraint will not yield any significant benefit. So for example, this work in progress: somewhere through our DevOps flow, we're having a build-up of work in progress at a constraint. It might be testing, and we're building up a lot of WIP there. If we then improve the development process, well, we're just increasing the work in progress piling up behind testing. It hasn't improved anything. And if we improve everything after testing, well, then they're starved of work, because it's all built up at testing. So making sure we focus down on the actual constraint, the key constraints that are slowing down the whole process, is really important.
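Goldratt's point, that improving anything other than the constraint yields no benefit, shows up even in a toy model where each stage has a processing rate and end-to-end throughput is capped by the slowest stage. The stage names and daily rates below are invented for the illustration:

```python
def throughput(stage_rates):
    """End-to-end flow is capped by the slowest stage -- the constraint."""
    return min(stage_rates.values())

def constraint(stage_rates):
    """The stage with the lowest rate is where WIP will pile up."""
    return min(stage_rates, key=stage_rates.get)

# Work items each stage can process per day (illustrative numbers only).
rates = {"dev": 10, "testing": 3, "uat": 8}
assert constraint(rates) == "testing"
assert throughput(rates) == 3

# Doubling dev capacity changes nothing: WIP just piles up behind testing.
rates_faster_dev = {**rates, "dev": 20}
assert throughput(rates_faster_dev) == 3

# Elevating the constraint (e.g. more automated testing) lifts the whole system.
rates_better_testing = {**rates, "testing": 6}
assert throughput(rates_better_testing) == 6
```

Real pipelines are messier than a min() over rates, but the shape of the argument is the same: spend on the constraint first.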
Does everybody know the theory of constraints? Anybody?
Okay, I'll go through it. So this is all around: okay, I understand how work flows through the system. I know there are different work tasks happening through my process, I've now done the analysis, and I've identified that there are certain constraints that are slowing work down.
And there's essentially a step process. First, just observing that constraint: understanding, well, how are they doing things? What is happening at the constraint? So we can understand it better. The next one is exploiting that constraint: understanding, okay, what can we do better with that constraint so they work more efficiently.
Right?
So they can work better. And it might be that that fails, right? And you're still getting lots of work coming in. And the next one is subordinate, which is basically everybody else in the workflow realigning to support that constraint.
So it might be that the user stories coming through to testing aren't clear, and they keep being bounced back to find out more information before they can go back into testing again to actually be tested properly. And actually, they were just being written wrong in the first place, or they weren't clear enough in the first place. So, therefore, let's focus on this: getting those user stories right, making sure the acceptance criteria are all there, so that we can have a smoother ride through that constraint.
And then if that fails, it's looking at how we elevate that constraint. Do we bring in more automated testing? Do we give them training? Do we put more resources on there? Do we just have more EC2 instances to run our automated testing, or however you're doing it?
And so, you know, this is key for me. Oh, yeah. And then once you've finished, it's what's the next constraint. I think when you initially start, you can probably find the big constraints pretty quickly, right?
But then as you're iterating through and improving and improving, these constraints get harder and harder to find.
But, yeah, we continue to improve everything. Does that make sense?
Cool. How much time I'm whizzing through. Brilliant.
Okay. So, next one, the slam dunk: an iterative automation mindset.
So obviously an automation focus is the key to DevOps, right? That makes sense. Okay, we're trying to automate a lot of the manual processes.
We want to increase efficiency, reduce errors, and free up time for the more complex work.
But I think it's also having that iterative mindset of going, look, let's fix the big things we can fix easily, iterate, and learn from it, and having that continuous feedback on whether it was successful. And, you know, think big, but start small, which I think Sophie said in the earlier session.
So yeah. And it's, I think, you know, all the agile processes and everything else; it all comes part and parcel of that.
And all that. So that that makes sense. All good.
And now the learning tree, which for DevOps I kind of think of as this T-shaped technical knowledge, right?
It's having that proficiency in, or even just that very wide knowledge of, all the tools that are available to you and all the things you could do, so that you can narrow it down to what's actually useful for the organization, but also going deep into areas like scripting or DX packaging and things like this, so that you can actually connect these systems together.
So, you know, CI/CD tools, Gearset, Elements.cloud, all these things that could potentially improve that process: having an understanding of them, but then putting that real-world measure on it of, is this actually gonna benefit our organization if I bring them in?
So, yeah, if we think of the DevOps process, there's a lot of tools out there that could support it. Some have better capabilities than others. Some can support you in, you know, backing up, or obviously Gearset.
And there are all the open-source tools versus, you know, cloud-based tools and paid-for tools. But also there are a number of source tools that are used quite a lot. Does anybody use sfdx-hardis in Visual Studio?
No. Okay. That's what I use. Nebula Logger.
Another one. Oh, yeah. Cool. Brilliant. So that's a great framework for logging Apex, Lightning Web Components, and flows, and for managing errors and messaging that you want in the platform, which I find really useful.
I don't know why that's missing. Is that Oh, no. No. No. It's on that image.
Okay. So, SFDX Git Delta. Anybody use that? So that's giving you a delta of your git repos.
You're only deploying the changes.
And I think it is on here.
Oh, no. It's disappeared. I don't know where that one went. Okay.
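The idea behind a delta deployment, whether via SFDX Git Delta or anything else, is just a set difference over two snapshots of your metadata: deploy only what changed. A minimal sketch, with made-up file paths and content hashes standing in for what a git diff between two commits would report:

```python
def changed_files(base, head):
    """Paths added or modified between two snapshots.
    Each snapshot maps a metadata file path to a hash of its contents,
    mimicking what a git diff between two commits tells you."""
    return sorted(path for path, digest in head.items() if base.get(path) != digest)

# Illustrative snapshots of a Salesforce metadata repo at two commits.
base = {"classes/AccountService.cls": "a1f0", "flows/Onboarding.flow": "b2c3"}
head = {"classes/AccountService.cls": "a1f0",   # unchanged -> skipped
        "flows/Onboarding.flow": "d4e5",        # modified  -> deployed
        "objects/Invoice__c.object": "f6a7"}    # added     -> deployed

print(changed_files(base, head))
```

The real plugin also handles deletions via a destructive-changes manifest; the point here is just why deploying the delta is cheaper than deploying the whole repo.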
But yeah, so there's a whole range of tools out there. Make sure that you've got that wide knowledge of what's available to you, and then pick the right one. But have a deep knowledge in certain areas too, because, you know, as a DevOps engineer, you're either a lone wolf working on the project, or you're handling something else as well, and it all depends on the scale of the project. So having that wide view, and being deep in a certain area of it, is always good.
So now my little Yoda for the next one, being a trusted and vital, adviser.
So I kind of think of this as: when you're starting out, you're very functional. Right? You're being told, oh, we need to improve the way we do DevOps. Okay.
We haven't got a git repo? I'll get a git repo in. Okay, what's the branching strategy?
Okay, I'll go off and find it. Oh, is this okay? Right. And I'm being very functional in doing the work, based on what's requested.
But then that shifts into this trusted advisor role, where actually people ask you: what is the best thing we should be doing next? Where should we be looking? But also, I find that when you're shifting into this world, you're hearing more information that you might not necessarily hear while being functional. And for me, that's really important. Like: actually, the testing team isn't great, and we're looking at outsourcing it. Right?
That's not something they want public knowledge of, right? And then you can advise them on whether that is a good or bad idea.
Being that trusted advisor is, I think, really important, because otherwise you're not gonna be privy to that information, and it's kinda hard to get to. But then after that, it's becoming vital, and this is really good for you, really bad for the company. Right? Because you are the single point of failure, right?
You are vital for the business, and for you, running your DevOps empire, it's brilliant, because you can basically ask for more money. You can have holiday that isn't in the holiday rules, right, whenever you want. Because they see that your value to the organization, or perceived value, is this high, and therefore the rules don't quite apply to you.
So getting to this is brilliant for you, but not so great for the organization. So I'm always thinking: okay, when I'm working for an organization, am I still being thought of as a functional person? Have I moved into this trusted advisor world? Or am I actually now vital, getting all the information from all the different teams?
Being in that hub of collaboration and communication, so I know what's going on because they want my advice on things, which helps me build better solutions, right?
Okay. And this is my... what, how long have I got left?
How long have I got, ten minutes or something?
Sorry. Wait a minute. Oh, god. Loads of time. Okay. Cool.
So then there's this, which is the futurist: designing for change, which I think is always tricky. You're trying to build for the future without knowing what it is, but you can bring in, you know, modularization, loosely coupled integrations within that DevOps pipeline, making sure you're doing it so that if things do change over time, say a company whose tool you're using goes bankrupt, you can swap it out quite easily. Easily-ish. But also, another way of looking at it: why are you even here? Right? You could go into ChatGPT and just ask, what are the ten habits of highly successful DevOps engineers? Right? And you could probably do it for all the other talks here as well.
And I think AI is gonna change the way DevOps works, right? Maybe not here; there's a lot of, you know, hype around it. But it's doing amazing things. Now, this is where I go off the fairway and into the long grass a little bit. Right? So, things like this. It's an iris, right?
But is it male or female? No doctor can tell you, but AI can. It can say it's female. Right? Okay?
For medical research and stuff, there are just crazy things happening where images can be checked. Yeah.
Yeah.
It can work out better ways. But this is the thing I think is most amazing. Look at this; this is from December last year, like a year ago. Okay? Using AI, the NHS has increased the recoverability of stroke victims three times, tripling, you know, your recoverability.
That is bonkers, right?
It's just mind-boggling. I don't get it. And has anybody signed up to Our Future Health?
Yeah? Oh, brilliant. Do it. Sign up. It's gonna be the biggest medical research project on the globe, I think.
And it's basically saying, we share our data anonymously, to allow them to plan and work out all these things, like the recoverability of stroke victims and stuff like this, and make it better. Right? And obviously AI needs a huge amount of data; the more data it has, the better. Hence this program started, which I think is really exciting.
But also, people say, oh my god, AI's gonna transform the world tomorrow, the world's gonna explode and everything else, because it's gonna change so quickly. And I'm like, well, it's not new, right, these innovations and changes. Here's a picture of New York.
And can you see, there, that is a picture of a car. Right? And then you've got horse-drawn carts everywhere else. And this is nineteen hundred.
Okay?
Thirteen years later: all cars, one single horse-drawn cart in New York. Right?
It's just thirteen years.
Right? That whole industry transformed almost overnight. And bringing it back to tech: look, this is the adoption rates of different technologies over the years. And you can see they're getting sharper and sharper and sharper, and AI is gonna be the same, right? I think.
But we are in this world of narrow AI.
Oh, Some of the laptop has gone.
Okay. We're in this world of narrow AI, where everything is very siloed. Okay? So we've got Einstein GPT, for example, on the Salesforce platform.
Great. I've got ChatGPT to do my kind of things in that. Midjourney did all the images in my deck for me, right? From Yoda to the people sitting down in the conference center.
It's bonkers.
Face recognition, you know, shopping preferences in Google: it's all very narrow AI, right? But there's this thing, you know, this superintelligence banner over the top. Oh, here we go.
Actually, even this week, OpenAI has asked for more funding from Microsoft to look into this. Now, is it gonna happen? Who knows? But, you know, bringing this all together: maybe just talking to Alexa and saying, can you create me a presentation on the ten habits of highly successful DevOps engineers, including pictures from this, that, and the other, and do the marketing collateral in my voice, with snippets for TikTok and stuff like this, and it just produces it all for me. Maybe. I don't know.
Oh, yeah.
But then, okay, what's this got to do with Salesforce?
So again, is it gonna happen? I don't know. Who knows? Any ideas? It could be things like:
Well, actually, we understand how users are using Salesforce. We have all the log information. We have all the data of the records changing in Salesforce. We have all the unit tests.
We have our automated tests. We have all this knowledge around how Salesforce is being used. So what if I give the AI a target? We know that having Apex running on different API versions is less performant than having everything running on the latest and greatest, right? So give AI the goal of refactoring all the code to align to one API version.
Based on all the processes and everything else: I don't want you to break it, but refactor it so it all continues to work. Right? Maybe things like that. I don't know.
But it's giving, you know, targeted goals to the AI to do things for us. And so, has anybody seen this? This is basically giving the AI the goal of buying a pizza from Domino's, yeah, a regular pizza. And then it goes off and tries every permutation to figure out how it's going to buy a pizza from Domino's, until it gets to the result: it buys the pizza, right, just from that prompt at the very beginning. But also, I think that change is gonna happen with UIs.
Right? Everything has been built thinking about human interaction: human interaction with Salesforce, human interaction with DevOps, you know, git branches and strategies and things like this. But if AI is making the UI, and we're asking the questions and it's coming up with the graphs based on all this data, is all UI gonna change? Is the way we test systems gonna change in the future? Who knows? Any ideas?
Anybody got any ideas? At all? I don't know exactly. It's kind of like thinking far ahead and trying to prepare for the future without even knowing what's gonna happen. Right?
And so really the last one is: what is yours? Right? What do you think is gonna be the future? And what do you think are the values of a DevOps engineer that can really add value to an organization and excel in it? And that is all. The only other thing is, I do have this scorecard: how do you measure against a successful Salesforce architect? And one of the things within what I think of as a Salesforce architect's role is working with DevOps to create that foundational architecture, making sure that the Salesforce release processes and everything are as streamlined as possible.
So yeah, if you want to try that out, it measures you against different areas, lets you know how you score, and gives you some advice as well. So that's it.
Thank you very much.
Hello. Hello.
Francis, you have a little bit of time left, so we're gonna throw it out for some questions if we have them.
Do we? Don't be shy. Who's got the first question? It's not over until there are questions. There's always questions. I've got the first one: what do you think a talented DevOps engineer, in your opinion, is?
Or what do you think the best values are for a DevOps engineer?
Any ideas?
I don't.
This is just my subjective idea, John.
Hang on. I get your mic. Hang on.
Check.
Hi. Can you all hear me now?
Just wondered what your thoughts are on softer skills, selling the vision, because we might be all deep in the the weeds of technology that we love, but sometimes we have to tell other people what we do. Any advice?
Yeah, it's tricky, and that comes back to the audience you're communicating to, right? And how you describe it. And I find, like, I know this one
enterprise architect who is absolutely brilliant at just creating one-pagers that describe everything, right? And I think it is very much the softer skills, you know, even just presenting, and targeting the right people, and knowing that actually this person is most interested in their world as a developer. Yeah? Stuff comes in, stuff goes out, but what's gonna help them? Target it by being in their shoes, I suppose; the softer skills of that.
Also, just: how do you communicate that on one slide, right, so everybody is aware and aligned to that vision and can get behind it and actually focus on it? Which I think is just a really hard skill to do. But, yeah, and it's continual.
When I do the community groups, it's also presenting, right? People ask, how do you convince stakeholders and different people in different ways? And a lot of it is, you know, standing up and talking to people, even if it's a small group or a large group. And in the community groups, we have the five-minute feature, right? So if you're a bit nervous of turning up and talking to people, well, let's just do it for five minutes, in a group of community members that are very forgiving, first, so you can pitch an idea or some feature. It's only five minutes: if it's a disaster, it doesn't matter; if it's great, brilliant.
But then you can slowly build from there. So it's starting small, I think, with the softer skills, and then slowly building up so you get more confident in doing it.
Any others?
Do we think that Salesforce... oh, yeah. Oh, I'm gonna run, I'm coming. Is the Salesforce UI gonna completely change and break
your automated user tests? We passed that. So, who knows?
Hello. Thank you for that. I just wanted to ask: I'm wondering if you'd include another C on that slide, for coaching.
I've worked with a number of architects over the years. My background's as an admin.
And one of the best that I've worked with actually spent a lot of time coaching a team of devs to think like an architect, that kind of succession planning. Yeah, absolutely. And I think it's that mindset, right, of getting things. And I think, you know, with architects, you can get very lost in the tech, right?
And it's about pulling out of that. And actually, yeah, my team: I remember there was a call that I was on, and we invite our team members onto it, you know. And one of them was a junior, and afterwards she's like, oh my word, that was a light-bulb moment in that call.
Because the customer was asking, we need a validation rule to do this, to do that, and add these fields and stuff like that. But actually, it's about not allowing them to solutionize: get to the root problem, identify what it is, and then realize that none of that is needed. Right? And actually, they could do it in a completely different way which requires no work.
Right?
So I think it is definitely coaching, yeah. Actually, I didn't mention it on the learning tree: the best learners, the people that really wanna learn more, actually teach, because the best way to learn about a subject is to teach it, right, because you're forced to learn every little caveat of it. And if you've got an audience of a load of people asking you questions, then you need to know the subject. Right? I remember at Dreamforce, we did Dreamforce Live, which was basically a room of five hundred people asking us questions about the platform, and it could be anything.
A little bit scary, but you need to know your stuff, right?
But yeah, definitely a good question. Thank you.
Yeah.
Thinking about the future of, I mean, you already were going into UI testing. So, for example, you've got test classes that give you coverage and some stuff, but there's a gap where, you know, when it comes to testing the functionality, UI testing is very brittle. Yeah. Things like Selenium.
Okay. They break as soon as Salesforce updates a tag name or ID or anything. Or if there's a timeout, they just log as a fail rather than... Yeah. And Provar kind of gets around some of those, but not everything.
Right? And how does AI, can AI help make that a lot smarter? I think it's like, AI requires data, right? So maybe, right? It might be that if we record all the interactions of the users and understand how the system is being used, then actually the AI can be smarter: we've got all those end-to-end processes in the AI, right?
And it understands that these things are broken and these haven't. Or when the UI does change, that it's changed for everybody, and what impact that could have, right, and it dynamically updates all the, you know, unit tests, because that one element has changed. Now we've changed it to something else, and it cascades through everything. But I always have a... you know, I think everybody has this love-hate relationship with automated testing, right?
Because it's like, how deep do you go? You don't wanna be refactoring a unit test or your automated testing more than actually doing development; you need to find the balance. Right?
So, yeah, hopefully that's where the future is. And this is it: maybe that whole unit testing kind of disappears. I don't know. You know: this is how we expect it to work in other sandboxes before you push it into production, or you're doing experimentation for a set group of users before it goes global across your Salesforce org, and you can structure it in that way so it can learn and know if something breaks or not. Who knows?
Oh, yeah.
Oh, an online audience question. So, what strategies have you got for addressing that silent... oh, the WIP.
Yeah. Yeah. Yeah.
So it's: what are the strategies around the silent killer, which is WIP? Actually, I always relate it to, you know, when you're doing the Kanban view on opportunities, yeah, and you get all the cards appearing, right? You can kind of sometimes just see it. Right?
Because in Jira, or whatever it is, you can see this backlog of tasks appearing at different stages through the pipeline. And even in the sales process, if it's like, wait a minute, we're getting loads of opportunities stuck at this point, it's very visible to you. And I think for the big problems, it's gonna be evident from that, right?
But then the smaller issues, as you start going down the layers, get harder to detect, and then you've gotta start relying more on the data.
Like, for example, having a load of WIP stuck at a workstation: that's if it's actually stuck at that workstation, but it might be that the workstation's just pinging it back, like, I need more info, I need more info. So it's not evident that it is stuck at that workstation, right?
Because they're just pinging it back to the devs, because they want to push as much work off themselves as they can, because they're completely swamped. So I think this is where it comes down to empathizing, talking to people, and really understanding how they're doing the work, rather than them telling you: oh, yes, we do this.
Oh, yeah, we do the testing. Oh, yeah, absolutely. And then we push it. They're always describing the normal process, the best case, the happy path through the system, when in fact they're struggling, they're having problems, and that's not immediately evident.
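Relying on the data, as Francis suggests, can start as simply as counting tickets per stage from a board export and flagging the deepest queue, while keeping his caveat in mind that a deep queue can also mean items are being bounced back rather than worked on. The ticket IDs and stage names below are invented for the sketch:

```python
from collections import Counter

# Hypothetical board export of (ticket id, current stage). In practice
# this would come from Jira's API or your DevOps tool's reporting.
tickets = [
    ("SF-101", "dev"), ("SF-102", "dev"),
    ("SF-103", "testing"), ("SF-104", "testing"), ("SF-105", "testing"),
    ("SF-106", "testing"), ("SF-107", "testing"),
    ("SF-108", "uat"), ("SF-109", "deploy"),
]

def wip_by_stage(board):
    """How much work-in-progress is sitting at each workstation."""
    return Counter(stage for _, stage in board)

def likely_constraint(board):
    """The deepest queue is the first place to go and observe."""
    return wip_by_stage(board).most_common(1)[0][0]

print(wip_by_stage(tickets))
print(likely_constraint(tickets))   # the stage to investigate first, not a verdict
```

The number tells you where to go and look; the conversation with the people at that workstation tells you whether it's a real constraint or a bounce-back loop.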
Perfect. Francis, thank you so much for your time. Round of applause for Francis, everybody.