Ian Gotts – How to Supercharge DevOps using AI

Description

Using AI we can now ensure the DevOps teams can focus on what matters most and deliver functionality that gets better adoption. This entertaining and challenging session will give you actionable ideas that you can apply in your organization.

Ian Gotts – Founder & CEO, Elements.cloud. Ian has been a Salesforce customer since 2002 and was a speaker at the first UK “Dreamforce” which was Marc, 3 customers and just 120 delegates! Since then he has spoken at Dreamforce and World Tour multiple times and has written 11 books on Salesforce, business analysis, change management and compliance. He is an entertaining speaker and will challenge your current thinking.

Transcript

So first of all, thank you everybody for joining me.

This session is all about how, from a DevOps perspective, you can be twice as productive — and obviously AI can help with that. But actually, being twice as productive has nothing to do with AI; it's about the way that people operate.

Two seconds, Ian Gotts, some of you know me.

Three numbers ten, twenty, thirty.

Ten years as a consultant; written ten books.

Twenty years as a Salesforce customer and a partner. And thirty years standing up doing this, shouting about the importance of business analysis.

So you've probably seen me on LinkedIn — there's a LinkedIn link there, please follow me. I try and write stuff that's not salesy but actually useful and practical.

So this is the reason that you can be twice as productive.

Stop building stuff that nobody uses.

So at Elements.cloud we analyse about one point five billion metadata items a month. So we've got receipts for this. This is data we know is actually happening out there.

So fifty one percent of all custom objects are never used.

Okay? And that means you spent time and money on requirements, emails, Slack discussions, arguing about what you're going to call it and what colour it's going to be. And then you build it, and then you test it and deploy it. Luckily you don't waste any time documenting it, so we haven't wasted any time there.

But it's not just the object, it's all the fields that go with it. So forty three percent of all custom fields on standard objects never get populated. And by the way, we're discounting all the managed packages — these are things people have actually spent time building. That forty three percent goes up to about eighty percent when you start to look at the core objects; I think it's about sixty percent on Account, Contact and so on. Another really scary number: how many fields, on average, do you think we find on Opportunity page layouts?

Seven hundred.

Oh, you're a dark bunch. It's lower than that. There are one hundred and seventy five — not on the object, but actually on the page layout — one hundred and seventy five fields that people can access.

And you know how people fill out that opportunity: you just hit Save, and whatever comes up red you put three dots in, and then we're good to go.
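As an aside on how a stat like that forty three percent can be derived: given per-field populated-row counts, however you collect them, flagging the dead custom fields is a few lines. A minimal sketch — the field names and counts here are invented, not from any real org:

```python
def unused_field_report(field_counts):
    """Split out custom fields (the __c suffix) that have zero populated rows."""
    custom = {f: n for f, n in field_counts.items() if f.endswith("__c")}
    unused = sorted(f for f, n in custom.items() if n == 0)
    share = len(unused) / len(custom) if custom else 0.0
    return unused, share

# Invented example counts for fields on one object
counts = {
    "Amount": 1200,        # standard field, excluded from the stat
    "Region__c": 950,
    "Legacy_Score__c": 0,  # built, tested, deployed... never populated
    "Old_Flag__c": 0,
    "Tier__c": 430,
}
unused, share = unused_field_report(counts)
print(unused, f"{share:.0%}")  # ['Legacy_Score__c', 'Old_Flag__c'] 50%
```

The interesting part is never the code; it's deciding what counts as "populated" for your org.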

So all of this technical debt is a huge issue — but actually, from a DevOps perspective, you're driving all of this through pipelines unnecessarily.

So yes, it's maybe on the business analysts, in terms of asking to build the wrong things — but I also think it's on you for accepting really poor specifications.

So again: can you build that? The answer is yes, I can — but why, and do we understand what the implications are?

And one of the things Salesforce says is "AI knows your org". I'm sorry — that's kind of not true. And if it does know your org, will it be confused or disappointed, based on the lack of documentation?

So I'm afraid AI is not going to solve this problem. This is actually about how we go around the implementation lifecycle.

And the link there is the research series. By the way — sorry, I should have said this earlier — there's a QR code there which gets you to the deck. It's got some useful resources at the end. If you don't want to take a photograph now, that QR code appears at the end as well, so you can download the deck.

So that's the research series, where we've been looking at different dimensions of metadata. Like: what is the ideal number of fields on a page layout? Seven.

Seven fields you can edit makes for good UX — so one hundred and seventy five is quite a long way off. How well used are objects? And then we've got another report around security, in terms of all the overlapping security permissions and profiles we're seeing, just built on top of each other. So, can any of you relate to this?

All of you, yeah?

Those of you who didn't put your hand up are either doing something else or haven't seen it all yet. We had one customer we synced the other day: two hundred and fifty six record types. Yeah, exactly. One hundred and thirty picklist fields, many of them with hundreds of picklist values. And every picklist value has to be mapped to a record type. Yes — there were five million mappings.

How many record types had data?

Just one.
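The arithmetic behind a number like that is worth spelling out. The per-field value count below is my assumption — the story only says "hundreds" — but it shows how quickly the mappings multiply:

```python
# Rough reconstruction of the five-million-mappings story; the average
# picklist-values-per-field figure is an assumption for illustration.
record_types = 256
picklist_fields = 130
avg_values_per_field = 150  # "many of them with hundreds of picklist values"

# Each picklist value on each field gets mapped to each record type.
mappings = record_types * picklist_fields * avg_values_per_field
print(f"{mappings:,}")  # 4,992,000 — roughly five million
```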

And we can all laugh at it and go, that's ridiculous. But it's like those pictures of a car upside down in a shop, where you go: how on earth did they do that? We're kind of like that. We appear after the accident and go: how did you get yourself here?

We had no idea what built up to that. Maybe they were about to acquire a company and added all these things. But what was interesting was that they were about to merge it with a very simple org — this one was War and Peace, that one was a Dilbert strip. We said: look, before you do that, you need to try and clean them up.

So I think even though we feel from a development perspective you're going actually I need to build what I've been told to there is an opportunity for you to push back and go I can't build this, there isn't enough information, this is not well enough specified, I don't understand the implications of building it. If I build it, will anybody use it?

And we're seeing this time and time again. And I think the challenge, if we then move to, say, building agents: you can't build agents on top of sand. You need solid foundations, and this is clearly not solid foundations.

So who here is actually involved in building agents yet?

One, two, four. Okay.

It's going to happen more and more. Agentforce is in its infancy, I think, is probably the best way of putting it. I would suggest that the product works, but the implementation approaches people are using don't work.

We found a way that works, but it's not commonplace yet.

I won't digress into that but again if people have questions I'm happy to come amongst you and we'll answer questions. This is not me standing up here shouting at you.

I've got some slides, but they're just some structure here. So again, please ask questions. In fact, I might just I'm staring straight at some lights here. Oh, that's better.

I can see you all now. Okay. Right. So, that's the issue. Yes?

"You guys start coding, I'll find out what they want." I think what's happened in the last twenty years of Salesforce is that we've gone straight from capture and validate requirements to configure and build.

Define work items, work out the implications of making those changes — that just hasn't happened.

And I know why: Salesforce has said it's easy, just build an object. And yes, that might sell licenses, but it doesn't actually help long term implementations.

So if you think about it, the Business Analyst Certification came out — what, two, three years ago? Probably three. I would suggest about seventy percent of the ecosystem joined before there was even a trail associated with business analysis.

Everything started here.

Every Dreamforce presentation started with someone going: build an object, add a field. None of it was around business analysis, and we've been fighting a rearguard action for the last eight years. You've got to ask questions — people like Jodie Herbeck have got great books, and there are other books now coming out around how to do business analysis better. This becomes even more important when we start thinking about agents. You can't build agents on the current implementations we've got.

So the challenge is we go straight from capture requirements to build, and we don't actually do any of that work in between. And I get the challenge, which is your business users and stakeholders going "we need to see you build this thing", and you've got to be able to push back and go: I will build it, but if I take more time thinking about what I build, we'll get the right thing built first time — rather than building it quickly, it's wrong, and we go around that cycle again and again and again.

We need to think about not time to build but time to adoption.

And it's interesting — back to the agent stuff — we're seeing only about ten percent of all agents that are being built make it to deployment. We're rushing into build without thinking about whether these things are actually going to work. Have we thought about how they're going to work? So we're seeing the same thing replicated in the Agentforce world, but with even greater consequences, because agents are not quite like Flow — they're a little bit more unreliable. We're seeing fewer and fewer agents being put into production.

This making sense?

Yeah?

So the talk said how can AI help you? And I think actually from a business analysis perspective there's a bunch of stuff that we can now do using AI.

By the way, it's not replacing your business analysts, it's supporting them. Okay? So, things like: we've built a business analyst agent which asks you the questions, and you can go through it and it will help you scope out what you really want, and then it will draw a process diagram — which gives you something to engage stakeholders with. That diagram is a really pivotal document. And it's not "we need to do it because I saw Ian say you have to do a process diagram". No, it's really important because that's how you engage the stakeholders. That's how you get them to go: oh no, that's not how it works.

Typically people think about the happy path. It does that, okay — but what are all the other angles? If it doesn't go right, what are the feedback loops? The non-happy paths. They're the things you need to think about, and that diagram helps you do that.

And what's really cool is you can then also generate those diagrams from AI.

You can use a sketch, an image, text, the statement of work from a client — any of those — and it will build you the first-cut diagram. And I think what's interesting when you get the process diagram out is that you've changed the relationship. It used to be that you drew the diagram, if you got around to drawing it — and I know a lot of people don't, because they go "it's all too hard".

Well, we've made it easier and quicker. But the other thing is, now you've got a diagram that's drawn by AI. It's not very good, but it's not the stakeholder going "oh, you should know what we're doing, that's a bad diagram". It's now you and your stakeholder going: how do we fix this together?

We're seeing it change that whole relationship. You're getting that diagram generated relatively quickly.

And that actually is helping move the game on. Suddenly people go oh we should draw diagrams, we now can get engagement.

At the last company I ran, probably ten percent of the Fortune five hundred used our product to draw process diagrams to drive their business. This is not new. I've been doing this for thirty years — in fact, I drew my first process diagram in nineteen eighty six, when I worked for Accenture. Okay, this is not new.

Cloud didn't change it. Agents won't change it.

We all as humans need to go, what is it we do?

And let me show you what a diagram looks like — let me show you a UPN diagram.

Now let's do something different first. So I said AI could build stuff.

You should be able to just generate a diagram by giving it either a description or dropping an image in there, and it will then draw the diagram. It doesn't have to be Elements — there are lots of other ways of doing that. But let's get a script in there, or just drop an image in there, and see if it'll do it for us.

Diagram name, raise opportunity, and raise opportunity what from?

Let's do that. There's a description.

There we go.

So, what does it come up with? We actually had some fun earlier: we were giving it the lyrics from a song to see if it could draw a process diagram and say what the song was. And it's quite interesting.

We gave it R.E.M.'s "It's the End of the World", which is just a stream of lyrics. The band I play in does that song sometimes, and our lead singer went: oh, is that what it's about? Wow, okay.

Interestingly, Taylor Swift's "Love Story" actually is Romeo and Juliet — and when it mapped the process out, it mapped Romeo and Juliet as the resources as it went through. AI is amazing in terms of what it can do, and it's really good at this kind of stuff. There you go.

It's started to draw me a process — a very simple process. But again, we've got started. Let's make it a bit bigger so you can see what it's doing. A bit smaller.

Identify potential opportunity, document the opportunity details.

Yeah — fairly simple, but I didn't give it anything very complicated to do. What's really interesting, though, is that it's already using the UPN format. So when people start mapping and extending it, they'll continue using the same format, not coming up with some wacky format of "I always use this shape, I always use this colour" that nobody else understands. So again, we get some consistency.

It will do that from a sketch, even with my handwriting. I can give it a sketch and it will draw the diagram out. You can take a photograph of a whiteboard. So, there's some really interesting things that you can do with some of this now.

And AI is starting to help us, and it's only going to get better and better and better. So this excuse of "we don't want to validate the requirements by drawing a process diagram" — some of those excuses are disappearing now. You've got to be able to talk someone through it: okay, I understand "meeting with client concluded", but what does "opportunity identified" mean?

Again, in all the years I've run process mapping workshops, we spend more time arguing about the outputs than the activities. My activities start with a verb: identify, document.

But what is an "opportunity identified"? If you're in marketing, it's "I managed to scan them as they walked past my stand".

If you're sales, they're just about to place a purchase order.

That's the discussions that you end up having when you're starting to get into this level of detail with a diagram.

I see so many diagrams on LinkedIn where someone says it's a process, and the boxes don't have lines out and the lines don't have text. That's not a process. That's marketing.

I can't follow that. So I'm getting on my soapbox here, but this is the level of detail you need to get to. These things start with verbs. You need lines between them.

If you haven't got that coming out of your business analyst team, they haven't thought it through enough. "Oh, it's kind of an opportunity, and then, yeah, document the details." Okay — but based on what? What exactly is an opportunity identified?

What status do you want me to put it in?

Okay, who's responsible for doing that? Is it the sales rep?

Who's responsible for documenting that? So again, a different dimension: if someone's responsible for something, okay, what permissions do we need to give people? There's so much more that comes out of the diagram than just going "oh, that's quite pretty, let's move on".

Is this making sense still? Yeah? Question?

Oh, thumbs up. Okay.

Right. Okay. I also said over here that you can now create it from org metadata. Well, in twenty seventeen we first presented the first diagram, and everyone said "I'm assuming the org generates this automatically?" — and we went: no.

How would it do that? That's really hard. Eight years later, and three hundred FTE-years of effort, we can now do that. You can take all the metadata and it will draw a diagram for you.

Let me see if I can show you how that's possible.

Let me make it a bit bigger.

So which org do you want to connect to? So I'm going to connect to that one and then just ask a question.

Now — and it could be anything — how do opportunities work, how do we do commissions, talk to me about a case. Think natural language: what question would you ask of your org?

What are we going to call it? We'll call it "opportunity", and it's now going to try and find which objects it thinks are relevant to that question. So there's a little bit of AI at the front and a ton of algorithms at the back end. We're in the AI bit now.

Opportunity. Okay. It's now looking at all the picklist fields on Opportunity and going: which one would kick it off? And we go — I don't know — Stage.

And it's saying, oh, what active processes are there? And we'll look at it from an opportunity perspective.

It's now going to go and draw a diagram.

So it takes a while sometimes, so what I'll do is I'll show you one it built already.

And this is what it built.

So it's looked at all the objects, all the fields, all the validation rules, all the processes, all the profiles, all the permissions, all the Process Builder workflows, all the flows, all the record types — all that stuff — and then drawn this diagram for you. And you go: that looks horrible. And the answer is: that's how your org works. It's not how you hoped it worked, how you thought it worked, how the consultants told you it worked, how you remembered it worked. It's actually how it works.

And you look at it and all the diagrams that we generate kind of look the same because they've got this funny L shape because all the stuff on the left hand side is all the different ways you could start creating an opportunity.

Now that may not be what you planned but that is how people can create opportunities. We're calling this process configuration mining. You may have heard of process mining.

Process mining is where you look at all the different paths that people actually take — think of it as the busiest paths through a city. What this is doing is showing you every single path that could be taken: all the little back alleys, all the little shortcuts. And then you can overlay on top of that how often people go in certain directions. But look what we discovered here — let me just make it bigger so you can see what's happening.

Make it bigger.

It's saying we create an opportunity with a quick action and there's all the metadata associated with it.

And then there's this one here, through another quick action, and another record type, and another one from a related list — and if you look at this diagram, you start to see lots of interesting things from a DevOps perspective. First of all: are these all the ways we should be adding data? Suddenly you've got a data governance challenge here. It's all very well cleaning all the data up, but then we're giving people seventeen different ways of adding opportunities.
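The could-versus-do distinction behind configuration mining is easy to sketch. Assuming you can enumerate the configured entry points from metadata and pull creation events from logs — both invented below — the gap between the two sets is exactly the governance finding:

```python
from collections import Counter

# Configuration mining: every way a record COULD be created (from metadata).
# These entry-point names are invented for illustration.
possible_entry_points = {
    "quick_action", "related_list_new", "screen_flow",
    "api_integration", "lead_conversion", "record_clone",
}

# Process mining overlay: how records WERE created (invented log sample).
observed = ["api_integration", "api_integration", "record_clone",
            "api_integration", "quick_action"]

usage = Counter(observed)
never_used = sorted(possible_entry_points - set(usage))
print(never_used)            # the paths nobody takes — e.g. that lovely ScreenFlow
print(usage.most_common(1))  # the busiest back alley
```

In this sample the carefully built `screen_flow` is among the paths nobody takes, while the raw API route carries most of the traffic.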

We did it for our own org, and essentially our admin team said: oh no, they can't do that. We said: well, you can. They went: no, you can't. No, no — let me show you, you can.

So they took all the buttons off — well, I didn't, they did — the New button off the related lists, because we've got a ScreenFlow. And then our sales team went: we can't create opportunities — why not?

They said: you should use the ScreenFlow. And everyone went: what's the ScreenFlow? Oh, really? So we'd built this lovely ScreenFlow with all the validation, collecting the right data, and nobody was using it. They were all using other routes.

So you can start seeing how that was just pumping the wrong data into your org. And that, I think, is what's most fascinating — you go: oh, I never realised that happened. The next thing was: this says the marketing team can do this. Well, why can the marketing team do this?

Well, because you've built profiles on top of profiles, and you're going: oh, I really need Amy to have this — she needs the same as Josh, so I'll give her that permission. Oh, she's got some other stuff, but I'm sure she won't use it. You can see how that escalates over time. Suddenly you've potentially got a huge security vulnerability.

Certainly, in the world of agents, if you go "well, I'm going to give an agent this profile because it matches somebody else doing the same job" — what else are you giving it?

So all the things we kind of got away with in Salesforce — yeah, okay, it's a bit of technical debt; yeah, we didn't document it; yeah, the permissions aren't that great — are going to nail all of us when it comes to Agentforce, because the agent will go: I'll work with whatever I've got. And that, I think, is a huge concern. We've got quite a lot of work to do to make sure the areas where we're thinking about agents are clean enough.

So yeah — the first one is data governance. Another is security. And the other one is: with all these different ways, can't we clean this up? And what are the implications of cleaning all those up? If we're going to delete that, we need to take it off a page layout.

We need to get rid of some of these profiles. You can suddenly assess the impact of making those changes. So from a DevOps perspective, when you just get thrown an "oh, it can't take that long" — okay, but have you thought about whether we should add another ten percent of time, because we need to go and clean these things up at the same time? Again, it gives us a chance to build some estimates into how we build it.

The other thing, from a DevOps perspective, is: how complex is this? Is it a quick fix, so we can put it straight from sandbox into production? Or is it really complicated, so it's got to go through integration and UAT all the way into production?

And we've always got to assume everything's the most complex in case it blows up in our faces. So we're missing opportunities to get some quick wins if we don't understand the implications of making changes.

So this is not just a business analysis problem. I think from a DevOps perspective you can look at it and go how does this help me actually build better pipelines and actually allocate work to each of those different pipelines correctly?
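That triage can be made mechanical. A hypothetical routing rule — the thresholds and the function are invented for illustration — where the dependency count from the impact analysis, not gut feel, picks the pipeline:

```python
def route_change(dependency_count, touches_security):
    """Pick a deployment pipeline from a work item's measured blast radius.
    Thresholds are invented; tune them to your own org's risk appetite."""
    if touches_security or dependency_count > 20:
        return "full"      # integration, UAT, then production
    if dependency_count > 5:
        return "standard"  # UAT then production
    return "quick"         # sandbox straight to production

print(route_change(2, False))   # quick
print(route_change(12, False))  # standard
print(route_change(3, True))    # full
```

The point of the sketch is simply that "assume everything is the most complex" stops being necessary once the impact analysis gives you a number to route on.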

The other area is just straight business improvement, which is we're looking at this diagram and going, why is all the validation being done at the back end here? Why aren't we doing the validation at the front?

Why are we actually allowing all these changes to make their way through and then validating them out in step four? So there are lots of different perspectives on this.

And I think another interesting question this raises is: do you need to document your org anymore? I think you need to document fields and how things work, but drawing these diagrams you don't need to do, because you can do them on demand.

If it takes you three minutes to generate it, why wouldn't you just generate it right before you're about to use it? And generate it from the perspective you want. We're looking at the Case object — but I'm looking at it for collecting feedback; oh, I'm using it for collecting customer surveys. You can look at any object from any perspective and get that view on it.

So I think it's going to change the nature of the way we look at orgs.

Will it fix all of the fundamentals of we need to do decent business analysis? No, it won't fix that. Anyone who tells you AI will do this for you automatically is smoking something for the moment.

Give it two, three years — maybe we'll be getting there. But at the moment, what we've just shown you here is five percent AI and a ton of code we wrote.

The spec to do this is sixty pages long. What it does is take the question, then work its way through a very complex structure to build a big table of data, so that AI can then draw the diagram.

So this isn't just, oh, point AI at my org, it'll do it. If anyone tells you that, send them to me.

I'll sort them out.

But again, this presentation was about what you can do with AI. We couldn't have done what we've just shown you without a little bit of AI. It's not everything, but it adds that little bit of analysis, that little bit of acceleration.

When I showed you drawing a diagram from text — we couldn't do that before. We'd been trying to do that for five years, but AI suddenly got good enough. So things which are impossible today may well be possible six months from now.

So again it's interesting how quickly some of these things are moving on.

The other thing AI is getting quite good at is writing. So you could take any one of these boxes and just say — let's go to one of those.

Sorry, that one.

If I change into edit mode, right mouse click, say generate a user story, and it will write me a user story.

And then it will also go back and look at the org: based on this user story, it'll write the acceptance criteria based on your org — and can it find anything you've already built?

Now, it's only as good as the documentation. I mean a lot of technical debt is built up because you don't actually have very good documentation.

You look at a flow and go: I'm not sure how that works, nothing's written about it, I won't use it, I'll create another one — and another one. Again — I'll bring up agents again — say you reuse a flow that's used somewhere in the business, and it's also used by an agent. You don't realise, from a dependency perspective, that it's used by an agent. You make a change for your business.

Will it break the agent? Probably not. The agent will just go, I'll work around it. You may never know.

So this idea of making sure you understand the dependencies and the documentation is becoming increasingly important.
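A where-used check like that is just a graph walk. A sketch under the assumption that you have a where-used map of your metadata — the component names here are invented:

```python
# Invented where-used map: component -> components that depend on it.
used_by = {
    "Field.Tier__c": ["Flow.Discount_Approval"],
    "Flow.Discount_Approval": ["Agent.Sales_Coach", "Page.Opp_Layout"],
}

def downstream(component, graph):
    """Everything that transitively depends on `component`."""
    seen, stack = set(), [component]
    while stack:
        for user in graph.get(stack.pop(), []):
            if user not in seen:
                seen.add(user)
                stack.append(user)
    return seen

impacted = downstream("Field.Tier__c", used_by)
agents_at_risk = sorted(c for c in impacted if c.startswith("Agent."))
print(agents_at_risk)  # ['Agent.Sales_Coach'] — change this field, you touch an agent
```

The walk surfaces exactly the case in the talk: a field change two hops away from an agent that nobody knew was downstream.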

There we go — it's created me a user story.

Again, AI is really good at writing.

Let's just open it so you can see: "Identify potential opportunity" — acceptance criteria.

I'm going to say "Ask Elements", and it will have a look in the org — there we go — so it's suggesting some objects and fields and permission sets and profiles.

Is it right? Probably not. Is it a good start? Absolutely.

If you've documented the stuff you've already got reasonably well, it will look at that and find it. But if you've renamed Case to something else, it's not going to stand a chance — which, again, I think is part of the problem. But you don't need to fix the whole org.

Project by project fix an area and go let's just get this working.

This idea of doing better business analysis: pick a small area, show it works, push back a little bit and go "right, we're not going to do that — I don't think the spec's good enough; let's help you do the business analysis properly and see if we can come up with a better answer". Then use that evidence and apply it to the next project, and the next. I'm not suggesting you all leave here and go "I heard this amazing presentation, we've got to stop doing what we're doing" — that will do none of us any good. You need to pick a few places where, tactically, you can go: this area is not mission critical, I know they want it, but let's just push back — and start measuring how much time it would have taken to go around the loop four times, versus doing a bit of business analysis, getting the right requirements, and then coming back.

I'm conscious we're near the end of our time, so I'll tell you one story.

We take this approach internally: every requirement comes in, we draw the process diagram, we go around this cycle, and at the end we assess how big the change is going to be. Capture requirements, map the processes, look at the data model, look at the dependencies, look at how complex it's going to be — then come back and say: this is how long it's going to take. And then we go back to the business and ask: do you want it? Which is a strange question, because they asked us for it — but based on how much time it's going to take, do you still want it?

And just recently they came back and went: no, actually, there are better things you could spend time on. That's when you realise you've reached a level of maturity — when you do the first bit and then never build it, because it wasn't the right thing to build. The moment you start finding some of those: fantastic. You've got to a point where the business is engaged, because now you can come back and go: this is going to take six weeks of effort to build.

Do you really want it? They went: oh no, I thought it was going to be really easy. In that case, I'd rather you devoted those six weeks of effort to this other thing. Can you do that?

That's how we need to try and shift it. AI will help get some of that business analysis done faster, but it's not going to change the fundamental need for a conversation between you, the dev team, and the business analyst team — which is not "I've given it to you, you build it". It needs to be a lot more collaborative.

We've got a few minutes for questions. If anybody's got questions — I'll go back to the slide with the resources, which you can get from that QR code. Happy to take any questions. I know this has been very short, but thank you for joining me for these thirty minutes. What questions have you got?

None? Really? None?

That picture I showed you where it's looking at the metadata — it can look at anything that's sitting on the platform. If you've documented inside your org that you've got links to third parties, absolutely it'll pull those out, but at the moment it can't go outside. It'll look at Data Cloud, it'll look at Agentforce, it'll look at all the industries stuff, because that's on platform.

But at the moment anything that's not on the platform it's not currently looking at.

But over time there's no reason, I guess, why you couldn't extend it. Now we've got the principle set, it's a lot easier to extend. Getting to here was really hard; the next bit is easier. But again, there's no reason why you shouldn't be documenting your integration points in a metadata dictionary.

Did that answer the question? Thanks.

So we don't currently do Experience Cloud, and the B2C e-commerce stuff is not on the platform. B2B is? B2B is, yes. So it's looking at all the page layouts, it's looking at all the UX stuff.

So it's looking at that metadata to try and work out what the picture looks like. I don't think we've got any clients who've used it for B2B yet. I know B2C isn't on the platform, but the B2B side is.

It's just going into beta this week.

Yeah — but okay, think about it: this is not telling you how people follow the path. This is showing the potential paths. You need to overlay on top of that how people are actually operating. This is just showing you what they could do. Yeah, of course.

Yeah — you go, then the person behind you.

Yeah, absolutely. So — sorry, flip back — obviously we can't look inside a managed package if it's compiled. But if it's a Lightning Web Component, yes, it's looking at Apex, it's looking at code. Absolutely. And again, back to how well documented that is — it has a better chance if it knows what the code is trying to do. And it'll only get better over time.

Sorry?

Industry clouds — again, if they're on the platform, yes. It's looking at OmniScripts that are sitting inside objects, because we consider that metadata.

So yeah, all the industry clouds that are now on the platform — it can deal with that stuff. It's getting quite good, actually. We're really impressed. We did a presentation at TDX in a room for one hundred and fifty people, and eight hundred and forty three people signed up — which was slightly more than the guy presenting was expecting.

So clearly there's a level of interest here. We had one client say: I've just spent five hundred thousand dollars with a consulting firm — I won't name them — and six months, to do what you've just shown me in five minutes. So there's definitely a need here. And for all of you who've got SIs: we're encouraging SIs not to spend time documenting — use something like this, whether it's us or other people — so the money the client's giving you is spent on things that take them forward rather than on documenting. Because a consultant documenting this in Lucidchart, or in our tool, and then leaving it on the shelf for six months — it's instantly out of date; it's pointless.

I think we're changing the paradigm again in terms of documentation.

No, no — it'll draw you a new diagram, but you can compare diagrams. At the moment we're just giving you a new diagram each time. But we do track every change to metadata as you're going through.

So you could look back at a changelog and go: okay, what's changed? Good question. What I talked to our CTO about yesterday was: hang on, we've got all the data — all the data that builds this thing.

Why don't we just build an agent that sits on top of the diagram so you can just ask the diagram questions? It's like, oh, that's interesting.

So instead of having to wade through it, you go: oh, why is this happening? Or: tell me everywhere this profile is used. So I think there are some interesting things we can do — now you've got a lump of data, just sit an agent on top of it. And then what else can you do? Ask the diagram questions.

Yeah. But again, I don't think you need to do that, because you wouldn't update all the diagrams. You only create them when you need them. There's no point updating them all and then not touching them for six months. Just create them the moment you're about to start a project.

That's the way people are thinking about it at the moment: I'm about to work on this area — now let's create the documentation and see how it works, not how it used to work. I don't really care how it used to work; I care how it works now.

Yeah, absolutely, you could do that. Here's where we were, this is where we're trying to get to — this is the change management aspect of it, absolutely.

Hopefully that satisfied some of the questions. I think the one thing to take away is this: whether you're on the DevOps side or the business analyst side, you need to be able to push back, and they need to understand why you're pushing back. Go: let's come up with a better specification, better thought through, and we can get to the right answer more quickly — so we don't end up as one of those statistics.

Thank you so much for joining me.