DevOps Accelerated: Complex Salesforce Package Development and Release made easy


Description

In this video, David Ray, Technical Architect at MCIM by Fulcrum Collaborations, demonstrates how to simplify and accelerate complex Salesforce package development through the integration of the open-source projects D2X and CumulusCI, alongside Gearset and Muselab, to establish a comprehensive DevOps pipeline.


Transcript

Thanks, everybody. I know we're kinda coming to the end, so thanks for sticking in there with us.

I'm actually really excited. That last session feeds perfectly into what we're gonna talk about a little bit.

I'm David Ray.

I was previously with Salesforce for about seven years. And towards the end of my time with Salesforce, I was working on CumulusCI on an internal development tools team, and I was working with Jason.

Yeah. And I'm Jason Lantz. I left Salesforce about two years ago to start Muselab, a company focused on really solving the challenges of ISV-style development and DevOps on the platform. So that's continuing a journey that I was on for eight and a half years at Salesforce.

I got hired at Salesforce.org as one of the first two members on the product team. My job was to figure out DevOps for the managed packages that we built, back in two thousand thirteen, before scratch orgs, before SFDX and all that fun stuff. So that's been my life for the last decade.

And now I'm really excited to be out on the outside working with ISVs, kind of helping them learn from that process. I had the pleasure of having David on my team as well.

Another interesting fact about David, for anybody who's used CumulusCI or is interested in it: David was the first ever real ISV dev user of CumulusCI with scratch orgs, because he had just started on our team. It was a prototype that we were building, and he was willing to be the first user of the prototype.

So I really wanted to use scratch orgs.

I was excited about it. And James, unfortunately, he couldn't make it. He had a scheduling conflict. So it's just the two of us.

So I wanted to cover, kind of, the company I work for now, which is MCIM.

And what is that? It's a mission critical facility management platform.

What that means is we've got some very risk-averse customers that have massive data centers, and a data center needs power and needs cooling, and they wanna make sure that that stuff doesn't fail.

The servers inside those buildings are really important, and we have a lot of customers that are going to have issues if those facilities go down. And the company was started in twenty thirteen as a project with Wells Fargo and grew from there.

And so making sure that our product is stable, and that these customers' data centers are stable, is our primary focus.

And so we have, you know, this is a fun marketing slide, global footprint.

And basically we've collected, curated and connected data for more than one million infrastructure assets across twenty thousand facilities globally.

So really what happened was, about eighteen months ago-ish, Fulcrum was acquired by a PE firm, and that meant, hey, it's time for you to grow. That meant we got some significant investment in building out our product and engineering teams, and of course with that comes expectations for growth: we need to generate value with our product, and that meant that we needed to enable the development team to innovate. Earlier — I think it was yesterday — someone mentioned that at Salesforce we also called it a release train. I really like that analogy. So I would tell our team: my job is to help us build rails that we can move really quickly on, because that was what was important to that scalability and growth. So what that meant is we had to shift our focus from being project focused to being product focused. What that means is our customers would send us a request.

When this technician is doing his rounds, he's making sure that there isn't a problem in the data center and they need to mark that down.

You know, they would have a feature request. And for us to have the ability to add those innovations — we had these inefficiencies: processes requiring human intervention, and bugs and instability due to minimal, manual testing.

Our subscriber environments — we're an ISV; we're building a product that's installed in these customers' orgs. And to test our product fully, we wanted to make sure that we could capture their configuration and test before we did releases. It caused a lot of stress and burnout with our support team, because they were performing this configuration and upgrades for our customers. We're an OEM partner, so we handle a lot of that for our customers. Security and compliance checks were difficult to implement, because we didn't have a standard DevOps pipeline. And some best practices that are starting to come out around segmenting code into packages — unlocked packages; we're doing managed packages — made it really difficult to build on top of existing code bases.

So this is an example of our existing package structure. It's simplified, but we've got a query package that's a first-generation (1GP) managed package, and our core package has a dependency on that.

There's these little modules embedded in it. That's what the little dark blue boxes are. And what we've started to do, with what we've accomplished these last few months, is add extension second-generation (2GP) managed packages.

So the benchmarking, the portal, enhanced scheduling, those are actually separate packages, depending on what the customer needs.

And so this is an example of what the software development life cycle looked like when I, you know, started working with MCIM.

We had — and I'll just add — yeah.

This is the software development life cycle at almost every startup ISV, because it's the easiest way to work, or at least it was before 2GP existed. You spin up a 1GP packaging org and you just start building something and you put it on the AppExchange. It was an incredibly easy process to do wrong.

And so a lot of businesses started this way, and understandably.

And this process closely mirrors what you would see if you weren't building a managed package to be delivered on the AppExchange.

We've got our developer, and he or she would create a scratch org, and they would have a QA person test it. If that passes, that code goes to GitHub, and that developer has to push that code to the packaging org — or if it was a sandbox environment or something that you were developing in, that same developer would then cut a package release. Then our support team would have to take that package and push it to our Trialforce org, which is how we would distribute our product for people that wanna, you know, have a trial of it, to all those sandbox orgs, and to a UAT org. And once things looked good there, it could then also be pushed to production orgs. So a lot of the time, to get through all of our customers and all that configuration, it was about seventy two hours of work — three or four people, full days' worth of work, four or five days. And it was incredibly inefficient, but because the company was relatively small, those manual processes were adequate.

But on this, real quick: there's probably a lot of people in the room that are not ISVs.

Something I wanna point out is that ISVs have been at the forefront of figuring out package-based DevOps because they've had to. There's no choice. They can't, like, dip their toe in the water and see if it's ready for them yet and then choose to go to it. ISVs have been having to figure this out for a long time.

And I think that the ISV challenge kind of stresses the scalability of the process at a couple of different angles that are interesting to think about when you evaluate a DevOps process. So for instance, in this one, as their number of customers increases, that's an additional production org that they are managing through the org development model. I've heard a lot of talk about the struggles of how do I get a DevOps process to deploy to my single production org. But also think about, well, what if you had multiple production orgs and you need the process to be able to work for that?

It's a different angle of scalability, to think about.

Yeah. And the same process, if you're not using packages, you still have to move metadata from an environment to an environment to an environment, whether it's from an integration environment, to a testing environment, to a UAT environment. So the same process is still pretty common.

So that moves us kind of to the next phase is what was the theory behind the transformation that we wanted to make?

Yeah. And I think that was that was one of the themes that I really loved out of David Cannone's last session.

That this is really a cultural, organizational thing — the way that you think about it. Really, the best way that I've come to distill this down is: the problem is a misdefinition of what your product is. That's the problem that ISVs face. And I think if you're not an ISV, you face a similar thing in how you think about what the configuration of your org is, which is ultimately a product.

So, yeah, that title didn't overlap right?

But it's this really fundamental point. A product is more than a package.

And what I mean by that is there are very few AppExchange products or packages — you know, products from ISVs — that you can go install from the AppExchange and log into and immediately start using.

Why is that? Because there's a lot more than just what's in the package that's necessary to build a product experience, to like to actually create the product. You need all the other stuff in the org in addition to just installing the package.

I was thinking about this earlier or yesterday just in thinking about some of the slides.

If I translate the same thing for the org development model, it's that a delivery is more than a deployment.

And I would argue that a delivery is when you've gone through the manual checklist of post deployment steps.

Then it's delivered. Then it's complete. But it's not complete until then. So, same concept here: think about being able to automate and recreate, as consistently as possible, the complete thing that you are building.

So the challenge that ISVs face, and I think a lot of teams face this, you know, when they try to get going with DX — I've come to refer to this as the DX silo effect. Scratch orgs are really only used by developers. The scratch orgs that they develop in have no fidelity with any of the orgs that the rest of the business actually views as the source of truth of the product.

So nobody tests in a dev org or a scratch org or anything like that. The developers do their work, they get it into version control. A release has to be created so that a persistent test org — the one that is the source of truth to everyone else — can get upgraded, like a sandbox.

But the challenge is the dev is working on code that's in source control. That's the package source.

That's the source of truth to the dev in the package development model.

But for the rest of the organization, the source of truth for the product that dev is building is the orgs that the organization experiences it through. So if the devs are not automating everything about what goes into that org, there's a disconnect between their process and the business.

So we played around with this a lot at dot org, and kind of retroactively — I think we created a different model. So we have the org development model and the package development model. In the package development model, your job is to create a package version as an artifact of the development process.

In the product delivery model, your job is to create a recipe that can deliver a complete experience of the product into a new or existing org.

The easiest way to start with this is scratch orgs.

If you can automate, from the repo, everything needed to build out a completely usable scratch org of the thing you're trying to build, it is very easy to then take that automation and get it to run against your sandbox and deploy there. And scratch orgs create a consistent environment. There's never state drift in the org. You can reset it very easily. They're cheap, right?

At least if you don't have to go past the limits and pay for more. But everybody gets a really nice allotment of scratch orgs.

But when you think about it, the job of development is to create an automated product experience that anyone can use to spin up a demo org, to spin up a test org, to do a customer implementation of the product. You're not just creating a package; you've got to create that whole delivery. What often happens is all of this post-development implementation work is thrown over the wall to solutions engineering, thrown over the wall to QA to figure out how a QA environment is supposed to be set up. It's not a collaborative process with development.

So when you think about a product experience recipe, I generally break it down into kind of these five areas on the right. You have to understand the dependencies that you need. This is something where, if you're breaking away from your sandboxes in the org development model and you become responsible for another managed package, you've gotta automate getting that managed package installed as part of your recipe.

Then you got to deploy the main thing you're building, whether it's a package, unmanaged metadata, whatever.

Any kind of post install configuration that you want to do, settings, configuration as data, demo storytelling datasets.

I've had a couple of people propose, and I agree with this argument, I just haven't updated the slide, an additional layer of automated tests on top of this that actually tests the scenario that you're creating the product experience for.

But, I think Michael, you might have been one of the ones that proposed that to me at one point.
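To make those recipe layers concrete, here's a minimal sketch of what a product experience recipe can look like as a flow in a CumulusCI project's cumulusci.yml. The project name, dependency URL, dataset, and flow name are hypothetical; the referenced built-in flows and tasks (dependencies, deploy_unmanaged, deploy_post, load_dataset) come from CumulusCI's standard library, though your project may wire them differently:

# Hypothetical cumulusci.yml excerpt: a "product experience recipe" as a flow.
# The names and the dependency repo URL are illustrative only.
project:
    name: MyProduct
    package:
        name: My Product
        api_version: "59.0"
    dependencies:
        - github: https://github.com/example-org/upstream-package  # upstream managed package this product builds on

flows:
    product_experience:
        description: Build a complete, usable org of the product from the repo
        steps:
            1:
                flow: dependencies       # install upstream managed package dependencies
            2:
                flow: deploy_unmanaged   # deploy the main thing you're building
            3:
                task: deploy_post        # post-install configuration (unpackaged metadata layered on top)
            4:
                task: load_dataset       # demo / storytelling dataset (configuration as data)

Pointed at a fresh scratch org (for example, cci flow run product_experience --org dev), the same recipe can then be reused against a sandbox or a Trialforce org, which is what keeps those environments from drifting apart.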

So really, if you have these recipes and this is the focus of what you're building, now it unlocks this process where — I love this — David and I were having a conversation last night about the concept of a digital twin. And I walked in and I saw Charlie's keynote earlier and all the emphasis on this notion of a digital twin. So the easiest way to think about it to me is: if you wanted to start a car company now, like today with modern technology, you would not build the physical prototype of the car and then reverse engineer the AutoCAD diagram from it.

Would you? I mean, maybe. I don't know. Maybe you prefer working with clay and you don't wanna but like, generally that's not the right approach to do it.

The way you would do it is you design the AutoCAD model. You can simulate that model throughout the design process, test it throughout the design process, then you create the actual thing from it. It's a much more flexible model. It allows you to play around with different scenarios, and that's why Charlie was kind of talking about that this morning — that this is the key to the future of business, that flexibility, or adaptability, as the architects would tell us.

And if you have a lot of persistent orgs, if you are using a lot of sandboxes: now we have to go do this, okay, I gotta copy this data, and now I gotta load that, and that can take a day. It could take more.

And and so even if you're not doing package development necessarily, you still have the ability to automate and and control that configuration. And if you're doing testing, maybe you have several different scenarios that might require different configuration to test each scenario and so you can deploy those at different points and do it efficiently.

Yeah. I think, you know, the other thing to point out about these: these recipes really are the core of the product delivery model, and in thinking about the product delivery model, you have to think about these recipes as part of the development process.

It's not an afterthought. Like, I had so many arguments when — actually, David was one of the first developers on one of our teams at dot org that started really trying to focus not on building features in the package, but on building delivery automation to be able to deliver the package more easily to customers.

And I had many debates back and forth over, you know, with product management and generally it would be something along the lines of I need my devs building features, not building automation.

And I think that, you know, the product delivery model, the whole definition is to say, nope, sorry.

The automation to deliver it is a feature.

So that's, I think, the biggest concept of this. So I got engaged with MCIM.

What was it? November of last year, I think, is when we when we started.

Shortly after they had hired David, I thought it was pretty amazing to get to go work with a member of my former team and kind of help, consult alongside.

So generally, the work that I do is, kind of six month DevOps transformation engagements, mostly with ISVs, but I'm open to others. I'm not here to really pitch that. But I just wanna point out what the process is. It's generally understanding what the current challenges are, kind of designing the prototype of here's what the build process looks like with these open source tools that implement this model.

And then you roll out the process to devs for their daily work. So feature work starts happening in those orgs, then you roll out the process for the release train automation.

And then finally, you're really focused on building out that full integration recipe where, like, you can spin up the equivalent of your Trialforce org that has all the configuration of your product immediately usable — you've got that whole recipe defined.

And with that. Oh wait. No, this is still me.

So we we started the engagement and the timing of it actually worked out really well because it was just coming off my last engagement.

Is Jeff Kranz in the room? I don't know. Riskonnect? Yes, he's over there.

And during that project, they had — well, this was a portion of their product suite — eighteen managed packages, a mix of 1GP and 2GP, with extension package hierarchies four or five layers deep.

And most everybody that they talked to was like, that's crazy. Why would you ever do that? And I heard that and I'm like, hey, that sounds fun.

But I quickly realized that I was gonna wind up duplicating a lot of logic across all the repositories as I was building it out. And I kind of went to them early on and I said, hey, if we can pause this discovery for about two weeks, let me go kind of build this open source framework so we can roll out each of your projects a lot more easily.

That's what we did. So the goals of D2X, short for the development-to-delivery experience, are to make it easy to launch new projects, efficient to run projects — and many different projects — and to be extensible: bundle all the best tools so you can do whatever you need to do with it.

D2X is really no magic.

It's just three things. It's a Docker image that has all the tooling pre-installed and has scripts to handle environment variables for authenticating to orgs.

And then give me a sec.

It has reusable GitHub Actions workflows. So what you actually put in your repo for your GitHub Actions with D2X has no logic in it. It's like thirteen lines, and it's just mapping secrets values.

You don't have to duplicate a bunch of logic across your repositories if you centralize.

And then it also has a configuration for dev containers. These are dockerized dev environments, so you can actually have all your developers working in the exact same tooling versions that your CI builds are running in. It's really cool.

But on that last point about reusable workflows: I've seen a lot of examples in the ecosystem of, like, here's this three-hundred-line GitHub Actions workflow — you can just drop this YAML in your repo and it does everything that you need in order to build and test. Just copy it around.

Don't do that.

We as an ecosystem need to be better about collaborating with each other and figuring out, like, hey, this process works, this is the best way to plug this in, and let's build shared reusable workflows — because the maintenance nightmare of hundreds of lines of code across eighteen different repositories is real. Like, you could tell yourself you'll go back and refactor it, but you never will.

So apply those same engineering principles that you apply to code to how you think about your DevOps scripts — or really, your DevOps product.
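As a rough illustration of that thirteen-lines-of-secret-mapping idea — this is a sketch of the pattern, not the literal D2X interface; the reusable workflow path, trigger, and secret names are assumptions — a caller workflow in a project repo might look something like this:

# Hypothetical .github/workflows/feature-test.yml in a project repo.
# All build logic lives in a shared, reusable workflow maintained in one place;
# this file only wires up the trigger and maps this repo's secrets into it.
name: Feature Test
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  feature-test:
    uses: example-org/shared-workflows/.github/workflows/feature-test.yml@main  # illustrative path
    secrets:
      dev-hub-auth-url: ${{ secrets.DEV_HUB_AUTH_URL }}
      gh-token: ${{ secrets.GITHUB_TOKEN }}

The payoff is that when the shared workflow changes, every repo that calls it picks up the fix, instead of you touching eighteen copies of the same YAML.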

Yeah.

Yeah. So this shows basically where we ended up. It didn't take that long to get here with the D2X platform.

It's still similar — if you remember that first slide that showed the old SDLC — with this new one, we have the dev creating a scratch org, doing their development, passing code back and forth to the repo. They'd submit code for a pull request.

Then once their ticket gets set to ready for review, the QA person can create their own scratch org, because what that developer has done is created everything needed to carry that new feature: the configuration. Maybe it's not part of the package; it's part of that unmanaged metadata layer that sits on top. But then the next person in line can just spin up a new scratch org. That configuration goes right along with it, any data maybe, and they can do the QA. And if that looks good, then it passes, and the GitHub Actions start to kick in.

So once that pull request is approved, in GitHub Actions it automatically kicks off a feature test in a new scratch org that's using the same recipe that the dev used and the QA person used. And it's gonna run that — it's gonna run some feature tests and anything else that you need to verify. In our case, we maybe do some linting or some code scanning.

We've talked about how it was hard to put some compliance checks or or security checks in there. This is a perfect place to do that.

And once that passes, it automatically, let me take a step back. To actually have that pull request merged, the feature test has to pass. So I got ahead of myself. So feature test, if that passes, if that's green, then that pull request can actually be merged.

Once the pull request is merged, it kicks off the beta test. And in our case, since we're creating a managed package, we use those betas. And if you're doing unlocked package development, the same thing can apply for you. You create a beta package and you can install it into a scratch org and do your tests.

Because in our case our core package is a 1GP, we still have to deploy that metadata to the packaging org, and CumulusCI under the hood will tell it to create that beta. That beta will be installed into a new scratch org and tested again. And so if that beta passes successfully and that's green, we know that our production release has probably a high-nineties percentage chance of being successful.
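For reference, that post-merge stage maps fairly closely onto CumulusCI's built-in flows. Here's a hedged sketch of the job, assuming CumulusCI is installed and the packaging org and Dev Hub are already authenticated (for example, via a D2X-style image); the org aliases and step layout are illustrative:

# Hypothetical GitHub Actions job for the post-merge "beta test" stage.
# The flow names (ci_master, release_beta, ci_beta) are CumulusCI built-ins;
# everything else here is illustrative.
jobs:
  beta-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cci flow run ci_master --org packaging     # deploy merged source to the 1GP packaging org
      - run: cci flow run release_beta --org packaging  # upload a new beta package version and tag it in GitHub
      - run: cci flow run ci_beta --org beta            # install the beta into a fresh scratch org and run tests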

The developers were very happy when I told them that they didn't have to do deployments anymore. You don't have to stay up till two in the morning because it took four hours to even try to upload and to see a failure.

And so by shifting that testing further left, we could guarantee that our production releases were gonna be successful at a much higher rate. And then, of course, on the far side, all those human steps were able to be removed, so those people's time could be spent supporting customers and innovating, and that seventy two hours of time was cut down to two to three hours, because now we just have a single script to do the installation of the new package. And sometimes that new package might require its own additional configuration.

Maybe we put a new field in our package and that field needs to be put into a layout.

So with all that, even those small changes getting updated and being updated through automation, it just saves a lot of time.

Yeah. So one thing to point out — it's sort of a technical component of the tooling, but I think this is a weird trick that we did. It's kind of baked into CumulusCI.

CumulusCI, when it creates 2GP packages, doesn't use the Salesforce CLI to do it. It directly interacts with the objects in the Tooling API. And the reason we did that is because we wanted to create a different packaging experience. My first inspiration for that: I'm philosophically opposed to the idea that I have to create a new commit in order to record the fact that I just created a new release on my main branch. I find that silly.

Maybe it's silly that I have that objection, but that was kind of the first thing. I was just like, no, I don't like that I have to edit this file, because then am I supposed to run builds on my main branch again?

And, you know, do I build the next beta of that?

But beyond that, there's a bunch of other reasons why having that stored in the file limits you in ways that this approach unlocks. So rather than storing release information about package versions that were just created in an sfdx-project.json file, CumulusCI stores them in the GitHub repository. So it creates a release. It creates a tag.

It actually puts JSON information in the description of the tag, as you can see here, that has the 04t information — the package version ID — so you can install that package. So the build process creates a digital twin version of every commit and puts a commit status with the package version. So everybody who builds a QA org is building a packaged QA org, not the unmanaged source. They're building it with a namespace.

And that's a huge difference. And it enables can you go to the next slide?

It enables this notion of 2GP feature test packages, which as of today I'm gonna rename digital twin packages, because that's essentially what this is. And if you think about it, you need to be able to have digital twin packages. I wanna quickly be able to create a package version without having to go through the whole process — especially for 1GP — of getting it into my packaging org and deploying it and all that.

If you create unverified 2GP packages, they upload in like a few seconds, so there's very little impact to creating them. And then what happens is the build process is the one that's doing this, so devs don't need to have access to cut package versions. Your CI system has that credential.

Devs can push up to GitHub in order to get a build run by GitHub Actions that creates the tag, and then CumulusCI's dependency system, as it goes to look up and resolve things, can go look at that information in the repo and kind of automatically determine things.

Automatically determine things for you.

Yeah. So composable dependency management.

If you remember the package model I showed earlier, there were five or six packages in there. Each of those is its own separate repo in GitHub and its own separate CumulusCI project.

And this is a simple example, but you can see, one of our packages, I can look here.

If I have my portal package, and it has a dependency on the core package, and that core package has a dependency on the query package, I don't need to list all those dependencies in my top-level portal package, because CumulusCI is going to crawl that chain for me, collect those automatically, and install them in sequence while it's building test orgs or whatever it's doing. So if it's setting up an environment to develop in or to test in, I have an environment that looks very much like my customer's environment, and it happens quickly. And that's one of the friction points of doing package development, even if you're just trying to break your code apart into smaller composable packages for maintenance reasons — we're starting to see that as a best practice even if you're not creating a managed package to be delivered.

It just makes that easier, and it removes that friction of figuring out what the dependencies are and how to install them. It's just all done automatically.
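A sketch of what that looks like in the extension package's cumulusci.yml — the names and repo URL here are made up, but the point is that only the direct dependency gets declared:

# Hypothetical cumulusci.yml for the "portal" extension package.
# Only the direct dependency (core) is listed; CumulusCI reads the core repo's
# own cumulusci.yml, finds its dependency on the query package, and installs
# the whole chain in order when it builds a dev, QA, or test org.
project:
    name: Portal
    package:
        name: MCIM Portal   # illustrative package name
    dependencies:
        - github: https://github.com/example-org/mcim-core   # illustrative repo URL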

So what we're doing is we're creating our first-generation packages — the beta packages and production releases — through automation. So typically before, you would see there's a checklist of things that you had to do to get there, and now it can just be a button.

And all those checks are done through automation.

And the entire log history of that is recorded for audit purposes. Yeah. I think that's something you mentioned, you know, briefly earlier, but I think overall, if you get your entire development process running through GitHub, compliance and audits become incredibly easy, because every auditor out there is familiar with how to configure GitHub to be audit compliant. And if GitHub is really your source of truth, compliance becomes a breeze.

I did the audits for our fifty managed packages at dot org, and we were able to, in one session or two sessions, get all fifty packages audited and approved, because we were able to show that they were following a consistent process with a consistent code base. That was really cool. Yeah.

So — and I think somebody asked a question that was related to this in the last session. When you're doing package development, sometimes it'll happen that we've got this new feature we wanna create. So we've got this new little package we're gonna create to isolate the code for this new feature. And it's gonna have a dependency on the core package.

But what that means is maybe we need to change the core package, and the new feature is gonna depend on that code. So normally, what you would do is create a package, create a new version so you can release that code, and then install it in the org you're gonna do your development in so you can reference that new code. But with the digital twin packages and some fancy — what do you call it — branch name matching, if the core package has a branch named feature/new-thing and my new package has that same branch, the digital twin package from the commit status on that core package branch will be installed in the scratch org for that new feature.

So you can do multiple package developments in parallel without having to release anything, and then the tests can also follow that same process. Or, like in the instance of Riskonnect that I mentioned earlier, where you have like eighteen packages, five layers deep, in extension hierarchies.

If you have to go one layer at a time and do a complete release each time, that's a really expensive process.
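CumulusCI exposes that branch matching through its dependency resolution strategies. The snippet below is a hedged sketch of the idea — the exact keys and strategy names vary by CumulusCI version, so treat it as a pointer to the docs rather than copy-paste configuration:

# Hypothetical excerpt from an extension package's cumulusci.yml.
# With a commit-status style resolution strategy, a build on branch
# feature/new-thing looks for the same branch in the dependency's repo and
# installs the unvalidated 2GP "digital twin" package recorded on that commit,
# instead of waiting for a full release of the dependency.
project:
    dependency_resolutions:
        preproduction: commit_status   # feature/beta builds: branch-matched feature test packages
        production: latest_release     # production releases: released dependency versions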

So, yeah. So I've heard it even this week that doing package development has a lot of friction. And we learned at dot org that we can remove all that friction by having a tool in place. And that's really what CumulusCI was built to do for the Nonprofit Success Pack originally, because it has a pretty complex dependency hierarchy.

I did have someone point out to me that there is actually pain in packaging. Yeah.

It's true.

So what we did is we had a pretty complicated setup. We've got several customers whose environments we wanna replicate to do our testing on. And so there's a couple of different tools that we used. Of course, one of the things that really helped some of our teammates in support, or solution engineers, that maybe aren't comfortable using the command line: they would use Gearset. And so we talked about creating those recipes.

Those teammates were the best people because they know the customer, and they could pull up Gearset and look to see what metadata they wanna grab. And so we could build what we call high-fidelity customer environments. And so as a part of our feature rollout, in our automated test suite, we can run several tests in parallel, and our goal is to shift that even further left, so that when the devs commit code, that feature test that we saw earlier can also run against an environment that looks real.

And so these tools — Gearset specifically, which we used — can help you to create those recipes and get that metadata together, and CumulusCI, with the help of D2X, can just put it all together and follow that through.

So we had some pretty significant efficiency increases. If you remember, between the two different slides, the seventy two hours down to three hours — that's a twenty four hundred percent increase in efficiency.

And that meant that for a week of work to do installations and configuration, one person could just start a script, monitor it, and then see the results. And it meant that we could deliver more quickly if we felt like we needed to.

And one of the cool things that we added towards the end, as we're talking about this: typically, because of our code base, it would take — on the high end, depending on the day and the speed of the platform at that minute — about forty five minutes to create a scratch org.

Up up to forty five.

Up to forty five minutes.

It was generally about twenty five minutes.

But you really feel it when it's forty five. Yeah.

So one of the new features — it's still in beta, but the scratch org snapshot feature — allowed us to create a fully configured scratch org, create a snapshot of it that you reference by name or alias, and it reduced that forty five minutes, for this one particular developer, for every scratch org, to four minutes. And so that's another one of those pain points: scratch orgs take a while, if I don't have the metadata it takes too long, I gotta wait for everything to install.

Now, you know, it's another big number: eleven hundred and twenty five percent efficiency gain with scratch orgs.

I was shocked by this number, by the way. I was a skeptic when snapshots first came out and didn't think that they were gonna be that much faster. To be, what, like five to nine times faster in the time that it takes to get a new development environment?

And I think that's even faster than sandbox spin-ups probably are. I guess maybe not with a Developer Pro sandbox, but it's getting up there. It's getting to the point where you can have a fully configured, usable scratch org environment much faster, and that removes one of the major objections.

I haven't heard many people playing around with it yet, but you should.

Yeah, once it became available — because for our QA team, that was one of the pain points: I'm gonna start this process, go do this other thing, and come back. They were kind of ping-ponging, and the context switching, we all know, can be a big time waster. So that was another huge efficiency gain.

So how do we get started?

Yeah. So, one of the things that I did in building this out: D2X is open sourced out on GitHub, but starting a project from GitHub is a little bit challenging, so I tried to make that easier. I built a web app called D2X Launchpad. It's at launchpad.muselab.com. Totally free tool. Like, I'm not gonna charge anybody for this or whatever.

But I just wanted to make it easy for you to spin up a new GitHub repository with all the D2X build process and all that configuration ready to go, so that you can experience what it's like to go build something in a greenfield environment, in scratch orgs, on GitHub, using all this pre-built automation.

Super easy to get started. This is how my clients spin up new projects internally, when they go, hey, we have a new solution pack we want to build, or a new product.

And go to the next slide.

And then well, yeah I guess I didn't cover everything. I'll go back.

Hi, Matt.

So you need the D2X project and workflow. But then really what you need to start doing is thinking about how you organize the different types of configurations that are important to you.

I would highly encourage you to try to keep the number of configurations you wanna maintain to a minimum.

Everybody tries to think like, well, I need an internal training org and an external training org and I need this. You know, like, can you get those teams to collaborate together and agree on one common source of truth of what the configuration of the product is?

The fewer of those recipes you have to maintain, the better. Thinking this way means taking the source of truth of your configuration — whatever the metadata is that comprises your Salesforce implementation —

and putting that into source control, because what's probably common is you don't know the difference between one environment and the next sometimes. And if those aren't identical twins, sometimes that testing is invalidated from one environment to the next. So lift that out — I think that's kind of the fourth step there: just make sure that you've got that configuration and you understand it. Once it's in source control, it makes it a lot easier to really know what changed.

So, I think we're running out of time. So, key takeaways: package development isn't just for ISVs.

Or, like I just said, organize that metadata into logical packages and open up those efficiencies, because you too can see huge gains. You don't have to reinvent the wheel. There are vendors here that have cool tools, other open source tools that exist. You can engage with somebody like Jason who has experience implementing these things.

Like we talked about, prepare that metadata if it's that layer above.

And don't be afraid to try it. It's open source. It's free to use, so you can have fun.

So, thanks everybody for your time. And, oh, yeah. And if you have any questions, I think we got a minute or two. I don't know. I don't think anybody's after us.

So I don't think so.

Any questions? No questions. Okay.

Oh, yeah. Yeah.

Oh, yeah. So if you do have a question, raise your hand. We'll get the microphone to you since these are recorded.

One question. Looks like you've got a couple different things in there talking about 1GP specifically.

Yeah.

For somebody doing net new, brand new, just now trying to get started — Don't do 1GP.

Don't do it. Okay. The way I like to — and hey, I'm on the outside now; this created a little bit of friction for me at times. Like, CumulusCI was built before SFDX existed, to create essentially the DX process that SFDX was built to provide with 2GP.

But we had to do it with 1GP.

So, at the scale of what we were building, we knew that we had to build that whole scaffold.

So we automate everything necessary to know that, if the pipeline passes for the packaging org, what's in the package that we're about to cut is exactly what's in the repo.

That's the responsibility of the pipeline. There's a bunch of automation in that pipeline to make sure of that.

So we can make 1GP behave almost like 2GP from a developer experience perspective, and we've been able to do that for a long time. So to me, 2GP was kind of cool, but it was like, I don't know what it gets me, because it doesn't necessarily get me a better developer experience. Now there are things that it gets you, and there's a bunch of reasons. Like, don't start a new project on 1GP today.

But you can mix. Like, you can do 2GP extensions of a 1GP package.

That's also a pattern. Like, if you're starting a new extension package, do it as a 2GP, not a 1GP.

They just can't share the same namespace.

Yeah. It can't be in the same namespace as the 1GP package.

But your 2GP extension packages can share a namespace.

You can bifurcate release lines, which we did at one point. That's horrible.

Yeah.

So, like, is there a particular question about 2GP in that sense?

Or No.

No. Just, I know other people — it seems like package-based development is growing a lot. I use it personally on several things. I really like working with 2GP. I don't actually have much experience with 1GP.

And so I just wasn't sure if there's still a spot for it in the ecosystem, or for it to still have a use case.

There's a spot for 1GP in the ecosystem because there's a ton of ISVs that are still using it and stuff. Sure. Yeah. Yeah. And the conversion tool is coming. It's been coming for many years, but I think we're getting really close to finally having it, which is exciting.

But not all of those ISVs are gonna jump and immediately go through the conversion process. And going through that conversion process is gonna require them to set up a source-driven development environment for the package.

So I think that, as an ecosystem — you know, even if you're a customer, you're probably relying on a bunch of ISVs for functionality in your org — you're still gonna be using 1GP packages. The impact of the 1GP developer experience on the ISVs that you rely on impacts you, right? If they can't innovate as quickly, or they can't test the quality as well as they should be able to, that potentially impacts you. So I see solving ISV DevOps as an ecosystem-wide problem, even though it's this weird little niche that most people are like, that's its own world.

And, so Yeah.

And the power of what we were talking about — those digital twin packages that we're creating to test from a developer's commit — is that, even though the released package is a first-generation package, those feature twin packages are actually second generation. They're just generated without validation, and it's done to quickly iterate on that and to use those as dependencies of other extension packages if you need to. So we're still basically operating in a hybrid sense: our managed package is still 1GP, but we're still able to take advantage of some of the second-generation lifecycle improvements.

And like all the stuff about being able to develop — like, if you have a packaging stack of extension packages five layers deep, each in a different repo. Cool.

And we'll be hanging out. We'll be around at happy hour, keeping you from it. So thanks.