When Salesforce is life!

Tag: Git

📣DevOps Center is now Generally Available!

Finally this amazing tool is GA!

DevOps Center is, IMHO, one of the most anticipated tools that we, the community of Salesforce professionals, have been awaiting for ages 👴

This gap has been filled over the years by many amazing products like Copado, Flosum, Gearset, AutoRABIT, Blue Canvas, Prodly or Opsera, to name a few, but finally a Salesforce-branded tool has been born to overcome many of the difficulties of Change Sets.

DevOps Center is a valid alternative for organizing your work, tracking changes automatically, integrating seamlessly with GitHub (other Git providers coming soon), and deploying updates easily with clicks. Developers who are used to working on Git can carry on as before, since DevOps Center automatically updates its UI based on Git activity, while admins can still participate in tracking changes on Git using clicks rather than the command line.

DevOps Center is available in any production org with Professional, Enterprise, or Unlimited Edition, or a Developer Edition org…so you can get your hands dirty!

Take a look at Salesforce Developers official blog for more links on how to learn!

Setting up SFDX Continuous Integration using Bitbucket Pipelines with Docker image

Ivano Guerini is a Salesforce Senior Developer at Webresults, part of Engineering Group, since 2015.
He started his career on Salesforce during his university studies and based his final thesis on it.
He’s passionate about technology and development; in his spare time he enjoys developing applications, mainly on Node.js.


In this article, I’m going to walk you through the steps to set up CI with Salesforce DX.

For this, I decided to take advantage of Bitbucket and its integrated tool, Bitbucket Pipelines.

This choice was not made after a comparison of the various version control systems and CI tools; it was driven by some business needs for which we decided to fully embrace cloud solutions, in particular the Atlassian suite, of which Bitbucket is part.

What is Continuous Integration?

In software engineering, continuous integration (often abbreviated to CI) is a practice applied in contexts where software development takes place through a versioning system. It consists of frequently aligning the developers’ work environments with a shared environment.

In particular, it is generally assumed that automated tests have been prepared, which developers can execute immediately before releasing their contributions to the shared environment, so as to ensure that the changes do not introduce errors into the existing software.

Let’s apply this concept to our Salesforce development process using sfdx.

First of all, we have a production org where we want to deploy and maintain the application; then, typically, we have one or more sandboxes, such as for UAT, integration testing and development.

With sfdx, we also have the concept of the scratch org: disposable and preconfigured organizations where we, as developers, can deploy and test our work before pushing it into the deployment process.

In the image below you can see an approach to CI with Salesforce DX. Once a developer has finished a feature, they can push it to the main developer branch; from there, CI takes place, creating a scratch org to run automated tests, such as Apex unit tests or even Selenium-like test automation. If there are no errors, the dev can create a pull request, moving forward in the deployment process.

In this article, I’ll show you how to set up all the required tools; as an example, we will only set up an auto-deploy to our Salesforce org on every git push operation.

Toolbox

Let’s start with a brief description of the tools we’re going to use:

  • Git – a version control system for tracking changes in files and coordinating work on those files across the team. All metadata items, whether modified on the server or locally, are tracked via Git. This provides us with a version history as well as traceability.
  • Bitbucket – a cloud-based Git server from Atlassian used for hosting our repository. It provides a UI to navigate the Git repository and has many additional features, like pull requests, which are used for approving and merging changes.
  • Docker – provides a way to run applications securely, packaged with all their dependencies and libraries. We will be using it to create an environment for running sfdx commands.
  • Bitbucket Pipelines – an add-on for Bitbucket Cloud that allows us to kick off deployments and validations when updates are made to the branches in Bitbucket.

If you have always worked in Salesforce, then it’s quite possible that Docker containers sound alien to you. So what is Docker? In simple terms, Docker can be thought of as a virtual machine in the cloud: it provides an environment in the cloud where applications can run. Bitbucket Pipelines supports Docker images for running Continuous Integration scripts. So, instead of installing sfdx on your local system, you specify that it be installed in your Docker image, so that your CI scripts can run.

Create a developer Org and enable the DevHub

We made a brief introduction to what CI is and the tools we’re going to use; now it’s time to get to the heart of it and start configuring our tools, starting from our Salesforce org.

We are going to enable the Dev Hub to be able to work with sfdx, and we are going to set up a connected app that allows us to handle the login process inside our Docker container.

For this article, I created a dedicated developer Org in order to have a clean environment.

We can do this simply by filling out the form on the Salesforce site: https://developer.salesforce.com/signup and completing the registration process.

In this way, we will obtain a new environment on which to perform all the tests we want.

Let’s go right ahead and enable the Dev Hub: Setup → Development → Dev Hub, then click on the Enable Dev Hub toggle.

Once enabled, it can’t be disabled, but this is a requirement for working with SFDX.

Now you can install the sfdx CLI tool on your computer.
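
If you already have Node.js and npm on your machine, one quick way to install it (the platform installers from the Salesforce site work just as well) is via npm, the same approach we’ll use later inside the Docker image:

npm install sfdx-cli --global
sfdx --version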

Create a connected app

Now that we have our new org and the sfdx CLI installed, we can run sfdx commands that make it easy to manage the entire application development life cycle from the command line, including creating scripts that facilitate automation.

However, our CI will run in a separate environment over which we don’t have direct control, for instance over the login process. So we will need a way to manage the authorization process inside the Docker container when the CI automation job runs.

To do this we’ll use the OAuth JSON Web Token (JWT) bearer flow that’s supported by the Salesforce CLI. This OAuth flow gives you the ability to authenticate using the CLI without having to log in interactively. This headless flow is perfect for automated builds and scripting.

Create a Self-Signed SSL Certificate and Private Key

For a CI solution to work, you’ll generate a private key for signing the JWT bearer token payload, and you’ll create a connected app in the Dev Hub org that contains a certificate generated from that private key.

To create an SSL certificate you need a private key and a certificate signing request. You can generate these files using OpenSSL CLI with a few simple commands.

If you use a Unix-based system, you can install the OpenSSL CLI from the official OpenSSL website.

If you use Windows instead, you can download an installer from Shining Light Productions, although there are plenty of alternatives.

We will follow some specific commands to create a certificate for our needs; if you want to better understand how OpenSSL works, you can find a handy guide in this article.


  1. Create a folder on your PC to store the generated files, and move into it
    mkdir certificates && cd certificates
  2. Generate an RSA private key
    openssl genrsa -des3 -passout pass:<password> -out server.pass.key 2048
  3. Create a key file from the server.pass.key file using the same password from before:
    openssl rsa -passin pass:<password> -in server.pass.key -out server.key
  4. Delete the server.pass.key:
    rm server.pass.key
  5. Request and generate the certificate, when prompted for the challenge password press enter to skip the step:
    openssl req -new -key server.key -out server.csr
  6. Generate the SSL certificate:
    openssl x509 -req -sha256 -days 365 -in server.csr -signkey server.key -out server.crt

The self-signed SSL certificate is generated from the server.key private key and the server.csr certificate signing request.
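
If you want to double-check what you just generated (optional, and only one of several ways to do it), OpenSSL can print the certificate’s subject and validity window:

openssl x509 -in server.crt -noout -subject -dates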

Create the Connected App

The next step is to create a connected app on Salesforce that includes the certificate we just created.

  1. From Setup, enter App Manager in the Quick Find box, then select App Manager.
  2. Click New Connected App.
  3. Enter the connected app name and your email address:
    • Connected App Name: sfdx ci
    • Contact Email: <your email address>
  4. Select Enable OAuth Settings.
  5. Enter the callback URL: http://localhost:1717/OauthRedirect
  6. Select Use digital signatures.
  7. To upload your server.crt file, click Choose File.
  8. For OAuth scopes, add:
    • Access and manage your data (api)
    • Perform requests on your behalf at any time (refresh_token, offline_access)
    • Provide access to your data via the Web (web)
  9. Click Save.

Edit Policies to avoid authorization step

After you’ve saved your connected app, edit the policies to enable the connected app to circumvent the manual login process.

  1. Click Manage.
  2. Click Edit Policies.
  3. In the OAuth policies section, for Permitted Users select Admin approved users are pre-authorized, then click OK.
  4. Click Save.

Create a Permission Set

Lastly, create a permission set and assign pre-authorized users for this connected app.

  1. From Setup, enter Permission in the Quick Find box, then select Permission Sets.
  2. Click New.
  3. For the Label, enter: sfdx ci
  4. Click Save.
  5. Click sfdx ci | Manage Assignments | Add Assignments.
  6. Select the checkbox next to your Dev Hub username, then click Assign | Done.
  7. Go back to your connected app.
    1. From Setup, enter App Manager in the Quick Find box, then select App Manager.
    2. Next to sfdx ci, click the list item drop-down arrow, then select Manage.
    3. In the Permission Sets section, click Manage Permission Sets.
    4. Select the checkbox next to sfdx ci, then click Save.

Test the JWT Auth Flow

Open your Dev Hub org.

  • If you already authorized the Dev Hub, open it:
    sfdx force:org:open -u DevHub
  • If you haven’t yet logged in to your Dev Hub org:
    sfdx force:auth:web:login -d -a DevHub

Adding the -d flag sets this org as the default Dev Hub. To set an alias for the org, use the -a flag with an argument.
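
Either way, a quick sanity check (optional, just to confirm the CLI knows about your Dev Hub) is to list your authorized orgs:

sfdx force:org:list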

To test the JWT auth flow you’ll use some of the information that we asked you to save previously. We’ll use the consumer key that was generated when you created the connected app (CONSUMER_KEY), the absolute path to the location where you generated your OpenSSL server.key file (JWT_KEY_FILE) and the username for the Dev Hub (HUB_USERNAME).

  1. On the command line, create these three session-based environment variables:
    export CONSUMER_KEY=<connected app consumer key>
    export JWT_KEY_FILE=../certificates/server.key
    export HUB_USERNAME=<your Dev Hub username>


    These environment variables facilitate running the JWT auth command.
  2. Enter the following command as-is on a single line:
    sfdx force:auth:jwt:grant --clientid ${CONSUMER_KEY} --username ${HUB_USERNAME} --jwtkeyfile ${JWT_KEY_FILE} --setdefaultdevhubusername

This command logs in to the Dev Hub using only the consumer key (client ID), the username, and the JWT key file. And best of all, it doesn’t require you to interactively log in, which is important when you want your scripts to run automatically.

Congratulations, you’ve created your connected app and you are able to log in using it with the SFDX CLI.

Set up your development environment

In this section we will configure our local environment, creating a remote repository in Bitbucket and linking it to our local sfdx project folder.

If you are already familiar with these steps, you can skip ahead to the next section.

Create a Git Repository on Bitbucket

If you don’t have a Bitbucket account, you can create a free one by registering at the following link: https://bitbucket.org/account/signup/

Just insert your email and follow the registration procedure.

Once logged in you will be able to create a new git repository from the plus button on the right menu.

You will be presented with a window like the following; just insert a name for the repository (in my case I’ll name it sfdx-ci), leaving Git selected as the version control system.

We’re in, but our repo is totally empty. Bitbucket provides some quick commands to initialize our repo; select the clone command:

git clone https://[email protected]/username/sfdx-ci.git

Open your command line tool, move to a convenient folder (your desktop works), then paste and execute the git clone command. This command will create a folder named like the Bitbucket repository, already linked to it as a remote.

Initialize SFDX project

Without moving from our position, execute the sfdx project creation command:
sfdx force:project:create -n sfdx-ci

Use the -n parameter with the same name as the folder we just cloned from git.

Try deploy commands

Before we move on to configuring our CI operations, let’s try the deploy commands in our local environment, using the sfdx project we just created.

The general sfdx deployment flow into a sandbox or production org is:

  1. Convert from source form to metadata api form
    sfdx force:source:convert -d <target directory>
  2. Use the metadata api to deploy
    sfdx force:mdapi:deploy -d <same directory as step 1> -u <username or alias>

These commands are the same ones we are going to use inside our Bitbucket Pipelines; you can try them in your local environment to see how they work.
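
As a side note (my own suggestion, not something we’ll wire into the pipeline below), the deploy command also supports a check-only mode that validates the package and runs tests without actually deploying, which is handy to try locally first:

sfdx force:mdapi:deploy -d mdapi -u <username or alias> -c -l RunLocalTests -w 10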

Set up Continuous Integration

In the previous sections, we talked mostly about common Salesforce project procedures. In the next ones, we’ll go deeper into the CI world, starting with a brief introduction to Docker and Bitbucket Pipelines.

Lastly, we’ll see how to create a Docker image with SFDX CLI installed and how to use it in our pipeline to run sfdx deploy commands.

Docker

Wikipedia defines Docker as

an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.

In simpler words, Docker is a tool that allows developers, sysadmins, etc. to easily deploy their applications in sandboxes (called containers) that run on the host operating system, i.e. Linux. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development.

Docker Terminology

Before we go further, let me clarify some terminology that is used frequently in the Docker ecosystem.

  • Images – The blueprints of our application which form the basis of containers.
  • Containers – Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run.
  • Docker Daemon – The background service running on the host that manages building, running and distributing Docker containers. The daemon is the process, running in the operating system, that clients talk to.
  • Docker Client – The command line tool that allows the user to interact with the daemon.
  • Docker Hub – A registry of Docker images. You can think of the registry as a directory of all available Docker images.
  • Dockerfile – A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It’s a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands.

Build our personal Docker Image with SFDX CLI installed

Most Dockerfiles start from a parent image. If you need to completely control the contents of your image, you might need to create a base image instead. A parent image is an image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image.

Most Dockerfiles start from a parent image rather than a base image, and that will be our case: we will start from a Node parent image.

Create a folder on your machine, create a file named Dockerfile in it, and paste the following code:

FROM node:lts-alpine
RUN apk add --update --no-cache git openssh ca-certificates openssl curl
RUN npm install sfdx-cli --global
RUN sfdx --version
USER node

Let’s explain what this code means, in order:

  1. We use a Node parent image, which comes with Node.js and npm preinstalled. This is the official Node.js Docker image, and the lts-alpine tag picks an Alpine-based build of the current LTS version (Alpine is what provides the apk package manager used in the next step);
  2. Next, with the apk add command we install some additional utility tools, mainly git and openssl, to handle the sfdx login using certificates;
  3. Then, using npm, we install the SFDX CLI;
  4. Just a check of the installed version;
  5. And finally, the USER instruction sets the user name to use when running the image.

Now we have to build our image and publish it to Docker Hub, so it’s ready to use in our Pipelines.

  1. Create an account on Docker Hub.
  2. Download and install Docker Desktop. If on Linux, install Docker Engine – Community.
  3. Log in to Docker Hub with your credentials:
    docker login --username=yourhubusername --password=yourpassword
  4. Build your Docker image (note the trailing dot, which sets the current folder as the build context):
    docker build -t <your_username>/sfdxci .
  5. Test your Docker image locally:
    docker run <your_username>/sfdxci
  6. Push your Docker image to your Docker Hub repository:
    docker push <your_username>/sfdxci

Pushing a Docker image to Docker Hub makes it available for use in Bitbucket Pipelines.

Bitbucket Pipelines

Now that we have a working Docker image with sfdx installed, we can continue configuring the pipeline, which is the core of our CI procedure.

Bitbucket Pipelines is an integrated CI/CD service, built into Bitbucket. It allows you to automatically build, test and even deploy your code, based on a configuration file in your repository. Essentially, it creates containers in the cloud for you.

Inside these containers, you can run commands (like you might on a local machine) but with all the advantages of a fresh system, custom configured for your needs.

To set up Pipelines you need to create and configure the bitbucket-pipelines.yml file in the root directory of your repository. If you are working with branches, this file must be present in the root directory of each branch in order to be executed.

A bitbucket-pipelines.yml file looks like the following:

image: atlassian/default-image:2

pipelines:
  default:
    - step:
        script:
          - echo "Hello world default"
  branches:
    features/*:
      - step:
          script:
            - echo "Hello world feature branch"

There is a lot you can configure in the bitbucket-pipelines.yml file, but at its most basic the required keywords are:

  • image – the Docker image that will be used to create the Docker container. You can use the default image (atlassian/default-image:latest), but a personal one is preferred to avoid wasting time installing the required tools (e.g. the SFDX CLI) on every run. To specify an image, use image: <your_dockerHub_account/repository_details>:<tag>
  • pipelines – contains all your pipeline definitions.
  • default – contains the steps that run on every push, unless they match one of the other sections.
  • branches – specifies the name of a branch on which to run the defined steps, or a glob pattern (to learn more about glob patterns, refer to the Bitbucket official guide).
  • step – each step starts a new Docker container with a clone of your repository, then runs the contents of your script section.
  • script – a list of cli commands that are executed in sequence.

Other than default and branches there are more keywords that identify which steps must run, such as pull-requests, but I leave those to the official documentation; we are going to use only these two.

Keep in mind that each step in your pipeline runs in a separate Docker container, and the script runs the commands you provide in this environment, with the repository folder available.

Configure SFDX deployment Pipelines

Before configuring our pipeline, let’s review for a moment the steps needed to deploy to a production org using sfdx cli.

First of all, we need to log in to our SF org. To do so we created a Salesforce connected app that allows us to log in without any manual operation, simply using the following command:

sfdx force:auth:jwt:grant --clientid <CONSUMER_KEY> --username <SFDC_PROD_USER> --jwtkeyfile keys/server.key --setdefaultdevhubusername --setalias sfdx-ci --instanceurl <SFDC_PROD_URL>

As you can see there are three parameters that we have to set in this command line:

  • CONSUMER_KEY
  • SFDC_PROD_USER
  • SFDC_PROD_URL

Bitbucket offers a way to store variables that can be used in our pipelines, in order to avoid hard-coded values.

Under the Bitbucket repository Settings → Pipelines → Repository Variables, create the three variables and fill them in with the data at your disposal.

Another parameter required by this command is the server.key file; in this case I simply added it to my repository under the keys folder.

This is not a good practice and I will move it to a more secure location, but for this demonstration it’s enough.
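
If you’d rather keep the key out of the repository entirely, one possible approach (just a sketch; the SERVER_KEY_B64 variable name is my own invention) is to store the key base64-encoded in a secured repository variable and recreate the file at build time, with a line like this at the start of the pipeline script:

mkdir -p keys && echo $SERVER_KEY_B64 | base64 -d > keys/server.key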

Once you are logged in, you need only two sfdx commands to deploy your metadata: one to convert your project to Metadata API format, and one to deploy it to the SF org:
sfdx force:source:convert -d mdapi
sfdx force:mdapi:deploy -d mdapi -u <SFDC_PROD_USER>

As with the login command, we are going to use a pipeline variable for the target org username in the -u parameter.

OK, now that we know how to deploy an SFDX project, we can put all this into our pipeline.

Move to the root of your sfdx project, create the bitbucket-pipelines.yml file, and paste the following code (replacing the image name with your own Docker image):

image: ivanoguerini/sfdx:latest

pipelines:
  default:
    - step:
        script:
          - echo $SFDC_PROD_URL
          - echo $SFDC_PROD_USER
          - sfdx force:auth:jwt:grant --clientid $CONSUMER_KEY --username $SFDC_PROD_USER --jwtkeyfile keys/server.key --setdefaultdevhubusername --setalias sfdx-ci --instanceurl $SFDC_PROD_URL
          - sfdx force:source:convert -d mdapi
          - sfdx force:mdapi:deploy -d mdapi -u $SFDC_PROD_USER

Commit and push these changes to the git repository.

Test the CI

OK we have our CI up and running, let’s do a quick test.

In your project create a new apex class and put some code in it. Then commit and push your changes.

git add .
git commit -am "Test CI"
git push

As we said, the pipeline will run on every push to the remote repository; you can check the running status under the Pipelines menu. You will see something like this:

As you know, the mdapi:deploy command is asynchronous, so to check whether there were errors during the deploy you have to run the mdapi:deploy:report command, specifying the jobId, or, if you prefer, you can check the deploy directly in the Salesforce org under the Deployments section.
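
For example (the jobId is printed when the deploy command queues the deployment, and the target username is the same one used for the deploy):

sfdx force:mdapi:deploy:report -i <jobId> -u <SFDC_PROD_USER>

Alternatively, adding the -w <minutes> flag to the mdapi:deploy command makes it wait for the outcome, so the pipeline log itself shows the result.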

Conclusions

With this article I wanted to give you the knowledge necessary to start configuring CI using Bitbucket Pipelines.

Obviously, what I showed you is not enough for a CI process usable in an enterprise project; there is still a lot to do.

Here are some starting points to improve what we have seen:

  1. Store the server.key in a safe place so that it is not directly accessible from your repository.
  2. Manage the CI in the various sandbox environments used.
  3. For the developer branch, consider automating the creation of a scratch org and running Apex unit tests (see the sketch below).
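
A minimal sketch of point 3 (assuming the default scratch org definition file under config/, which the project creation command generates; these are plain CLI commands, not a full pipeline step):

sfdx force:org:create -f config/project-scratch-def.json -a ci-scratch -s -d 1
sfdx force:source:push -u ci-scratch
sfdx force:apex:test:run -u ci-scratch -r human -w 10
sfdx force:org:delete -u ci-scratch -p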

But, I leave this to you.

[Salesforce / VCS ] The team factor (or How a business analyst can affect the overall delivery speed)

In the previous post, we outlined a simple process, which did not solely focus on development but instead considered the path a feature takes from definition to production deployment. A, well, release process.

And although to us Salesforce developers it seems like tons of unnecessary overhead, there is a reason why multiple parties are involved: it’s a system of checks and balances to make sure that features are stable and meet end-user expectations.

Imagine you shop online for a TV and a simple 23” monitor gets delivered. It’s kind of similar, but not what you wanted. And although you can watch a movie on it, it will not fit the use case you had in mind when ordering a kick-ass UHD supersmart ludicrously large television.

A process introduces the necessary structure for defining, developing, testing and delivering a feature so you can watch the world cup with your friends (no jokes about Italy please…).

We also introduced Copado as our release management platform in the last post, and we have not ditched it. It allows us to unify and drive collaboration between teams, and there are specific areas where I think Copado helps teams do their job.

But I want to take a further step back, because in order to know how a tool can make a process more efficient, we need to understand how each team in a project impacts overall delivery time. So instead of talking about technology, I want to focus on people in this post: describe the roles typically involved in a Salesforce implementation, their tasks, and what each team member can do to improve the flow.

Excuse me, but: what exactly are you doing?

Let’s recap the process focusing on the tasks per team member:

Define a feature:

Working in an agile way, a feature is usually defined by client business stakeholders, ideally by the product owner. As there are a lot of features to be defined and also tested, the project setup includes one or more business analysts to support the product owner in story definition, documentation and follow-up. Toughened in uncountable meetings and armed with knowledge about the business process, they tend to become advocates for the business side and bounce ideas off developers to get a feature done in a way the business will like. We could spend a whole post on that topic, but for now: time for action.

Develop a feature:

Well, do magic. But stay on time and in scope. Oh, and please, make it as scalable and maintainable as possible. You know, the usual.

Peer or Lead Dev review and rework:

Regardless of whether you just do a face-to-face explanation or use a more structured approach, such as a pull request, having your development reviewed has too many benefits to skip it. It prevents you from introducing a bad design that cripples the project in the long run for short-term success. Insights are shared within the team so there are no surprises. And you become a better developer through feedback.

If you work with Git (which is surprisingly easy with Copado), a pull request can prevent you from introducing bad items into your feature branch, which in the end will result in easier deployments.

Deploy to QA:

Well, this can be easy or an endless pain. If you work with Git, follow the golden rules and you should avoid most of the issues:
– Review your commits and never include other people’s changes as part of your branch. (See why the pull request comes in handy?)
– Fewer references → fewer potential deployment errors or conflicts.

Test a feature:

There are elements in this world, which are natural opponents bound in eternal struggle. Water and fire for instance, or cats and dogs. And of course:


(copyright: https://www.monkeyuser.com/2018/the-struggle/)

Yet, without testers, our features would be less robust and might not meet business requirements. Or even worse, we could break existing functionality. And believe me, you’d rather get a tester’s feedback than explain to your project manager and business sponsor why you broke the internet. On larger implementations, you are more likely to have a QA team focusing on writing and reviewing tests for current developments and automating regression testing.

Finally, the approval of a feature, at least in agile, can only be performed by product owners or delegated business users.

They ordered a TV, so only they are entitled to approve they received one.

Teamwork in software development is a real thing

Great, now we know the main roles involved in a project. But how does that help us improve our releases? I mean, those business analysts, they don’t do development or any Git stuff; just talking, meetings and powerpoints. How can they possibly impact delivery speed?

A lot.

Analysts: Define and conquer

User stories are an easy way to capture requirements. Right? Well, yes and no.

Yes.
Because the basic format is easy. For instance: as a user, I want to see how much an account is worth, so that I always know what to focus on during sales and service.

No.
Because it misses tons of details. How do you calculate an account’s worth? Where should you see it? At what moment? Do we need information from other systems? If so, do we need it in real time? This simple story could result in fetching the latest orders from the ERP system on account page load.

Here is where the business analyst would spend time with the client, guide them towards a reasonable solution and future iterations and document it as part of the acceptance criteria.

Regardless of your project framework and methodology, good acceptance criteria fulfill at least three requirements:

  • Provide process context.
  • Provide enough information for the developers to know what to do, so they can outline a design and estimate the effort.
  • Be precise enough to be testable. Who needs to log in? Where should you navigate? What action do you need to perform to get a result?

Ultimately, the time invested in documenting decent acceptance criteria results in multiple benefits for other teams: less guesswork for developers during estimations, for instance, and test scripts that can be written in parallel with development.

As we use Copado as our release management solution, the User Story created by analysts will be used by developers to scope and deploy the changes. How neat is that? Definition, estimation and deployment all aligned.

Developers: A shortcut for you but a long run for the team

After a feature is defined (estimated, prioritized and included in the sprint backlog) it is time to develop it. Done.
But is there anything a developer can do to increase process efficiency?

Just as the business analyst can impact overall speed by investing time in preparation, a developer can impact delivery to production by investing time in working according to best practices and reviewing his or her work.

  • Documentation, for example, is something most devs dislike; however, it is crucial for others to understand the overall design and make informed decisions when moving the feature towards higher environments.
  • Write org-independent Apex tests which don’t rely on existing data. Just remember: every time you write @isTest(SeeAllData=true), a kitten dies, puppies cry and unicorns get sad.
  • Do not hardcode references to a specific org. Do that and the unicorn will fade away.
  • Check your feature branch if you work with version control (Git). Always make sure that whatever is committed in your feature branch includes your feature only. A peer or pull request review process can help catch those wrongly committed items (e.g. a reference to a field you don’t expect in a layout or a report type).
  • Make reviewing easy. How? Well, what about pasting the URL for the feature definition and documentation into the pull request description? It takes less than a minute, but saves the reviewer minutes of searching.
  • Perform a validation deployment against the target org, including running the test classes.
  • Write down pre- or post-deployment tasks in an understandable way, because less tech-savvy team members might need to perform them.

So that’s it? Well, not entirely.

With the right solutions in place, such as Copado, you can even go further, and there is a good reason to do so: there’s more than one deployment.

In our setup, there are two deployments required to release a feature. One to QA and another to Prod.

But what if your pipeline has further orgs? Staging, for example, or a hotfix org and multiple dev sandboxes? In such a scenario, each manual step needs to be executed multiple times for different orgs.

Luckily Copado has some fancy logic under the hood, which can increase the level of automation considerably, so that you don’t even need to worry about hard coded references (yes, indeed).

In the next post we will take a closer look on how this can be achieved, but just as a teaser, the magic words are: Deployment Tasks and Environment Variables. And of course: “Please”.

Test team: Heavy lifting – easy testing

In an ideal agile world, business testers would just check the functionality against the acceptance criteria and say yes or no, but reality paints a different picture. Client stakeholders are caught between user story definition workshops, their ongoing non-project work, internal stakeholder management and testing. Therefore a dedicated QA team can be of great value to drive the testing effort, where better preparation results immediately in shorter test times.

As soon as a feature is in a QA environment the clock starts ticking, so the core objective of the QA team is to make sure that everything is ready for client business stakeholders to test and approve the functionality (or reject it (-_-) ). Here are a few things the QA team can do to prevent a story getting stuck in testing:

  • Ensure you have a test script indicating the steps to follow to achieve a certain result. If acceptance criteria are well documented they will come in handy to work on them in parallel to story development.
  • Get the test script steps approved by a peer, who ideally dry-runs them on the dev org if the feature is available.
  • Make sure testers have access to the org with the appropriate permissions to execute the test. Just in case, because who would miss something that obvious, right?
  • Dry run tests in QA org, to avoid any surprise. Not required, but highly recommended.
  • Identify a test owner who will perform the test, with the help of business analysts and/or product owners.
  • Chase and support business for and during testing. Make sure they are notified and co-ride tests to support their efforts.
  • Chase and support developers if issues are found and a fix is required.

If you think “That sounds like a lot of documentation, tracking, monitoring, and coordination”, you are completely right, and tooling can help here; in our case it is an easy thing. We defined the user story in Copado with all the required agile information, then developers used it to commit and deploy their changes to QA. The test team can use the same User Story record to define their scripts and track execution. Nice.

Release Managers: Connect the dots to align

Analysts document, developers work according to best practices, testers prepare. So, what can release managers do to move features faster to prod? Apart from the obvious tasks this role implies (e.g. helping teams to fix errors and resolve conflicts), release managers are the owners of the overall release process and the technology enabling it, and as a result they need to run continuous improvement efforts. Sounds like lean management? Absolutely. Reducing errors, increasing automation and avoiding waste (including time) are key lean principles – and still valid, although we produce software instead of cars (or TVs).

Ok, let’s say I’m a release manager. What do I need to do?

  • Increase tooling knowledge. Regardless of which tool you use, you can only apply what you know, and being aware of what a tool can do is your foundation.
  • Monitor and analyze the process. What are the steps done by the team? How long do they usually take? Is there anything you might have missed?
  • Analyze challenges. Some deployments are easy, while others may take ages. List them down and investigate the root cause.

Once you have a good overview you can start applying your knowledge to tackle the issues:

  • Identify and implement automation. Even smaller improvements add up. For instance, we could fire a message to the assigned tester as soon as a story is in QA and the script is approved. With Copado this can be done easily with a process builder.
  • Document solutions. Usually deploying Salesforce metadata is easy. Select, deploy, hope, done. Yet some metadata types, such as profiles, standard picklist values or processes have their own tweaks to them. A good documentation of best practices and how to handle specific situations will enable teams to avoid pitfalls.
  • Listen to the team. Yes, people complain – often with a valid reason. So although nobody likes to get complaints, resolving them will lead to improvements.

Because in Copado your overall flow is aligned to User Stories, and you have the power of the force.com platform at your fingertips, you can tinker with your Copado installation to provide the information you require. With Process Builder, you can set up timestamps on the User Story object to analyze the time spent per step and identify bottlenecks. You can even make dashboards and share them with your team.

And if you don’t have a dedicated place to store process documentation, just create a new object to document solutions, and another one to hand in suggestions (which you can later take as input for User Stories).

Technology does not implement technology. People implement technology.

With all the innovation, features and whatnot being released each week, we sometimes forget that at the end of the day you work with people. You might like one more than another; however, everyone who participates in an implementation is bound to a common goal. There is real value in working together, communicating, helping each other at the cost of being nice. Even to testers.

And once you’ve got your team mojo going, the next logical question is: how do we make it faster?

Well, this is the moment when tooling is back on the main stage. Copado has some nice features for automating steps in your process, and in the next post we will take a detailed look at how to set them up so you spend less time clicking around and more time with the team.

And watching the FIFA world cup on a kick ass UHD super smart ludicrously large television.

[Salesforce / VCS] Develop VS Deliver Features in Salesforce

A dev’s life could be so easy…

Developing a feature in Salesforce is easy, right?

  • Log into your org
  • Use (mostly) some point & click methods to enhance logic, user interface or the data model
  • Done

Sounds like it is developed.

But it is not delivered.

Although the feature is technically done, it is not available to end users in the production environment. Also, nobody has tested whether it fits the business requirements or breaks existing functionality.
In addition, maybe someone should take a look at your feature to check whether it is aligned with the overall solution design.
Oh, and in order to minimize business impact, deployments to production may be restricted to specific time windows per week.

Sounds like we as a team should follow a process to release features in a controlled way.

This is how we roll

There is a variety of release management processes out there, as each team is individual, but usually they structure a series of quality gates in a flow.
Taking the example from above, the high-level process would probably look like this:

So far so good, but now we have a challenge.

Production deployments can only be done at certain moments, so what happens if one feature is tested and ready to go, another one is still being reworked, and both share components such as an Account layout?
Oh, and we want to have a backup of our metadata (not only classes) to be able to roll back in case we have an issue after deployment.

It would be great if we could work in a way that tracks changes over time and allows us to release specific versions of our metadata.

Git for the save.

As described above, developing on the Force.com platform can be very straightforward. But apart from Flows and Process Builder, old versions are lost once you save your changes, e.g. to Classes, Formulas, Validation Rules or Layouts.
To avoid that, you can store local copies of your metadata by retrieving it (e.g. through Workbench or the ANT Migration Tool). You can also deploy the retrieved items, so we could use that to account for our prod deployment, but that sounds like a lot of effort to manage those local files and versions.

Here is where a Version Control System (VCS) comes in handy. And given the frequency with which VCS is mentioned, it has become an important pillar for working with Salesforce. There are several solutions out there (SVN, Mercurial), but as of now Git can be considered the industry standard.

So instead of storing retrieved items on our hard drive, using different names and folders for tracking versions, we can simply store them in our Git repository, which will track changes. This will allow us to go back to an earlier moment to roll back changes, or to deploy a specific version from the past.
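
As a minimal sketch of the idea (the folder name and commit message are made up for illustration):

git init sf-metadata && cd sf-metadata
# copy the metadata retrieved via Workbench or the ANT Migration Tool here, then:
git add .
git commit -m "Baseline: production metadata"
# every later retrieve is just another commit, and the history stays queryable:
git log --oneline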

That escalated quickly. Can it be easy again?

Let’s take a step back.

What started out as an easy way to build valuable business features suddenly sounds somewhat complex. Being able to roll back and having quality gates in place are all valid points, but now, as a developer, apart from creating functionality and working on peer and QA feedback, you also need to do something with ANT or Workbench, then store it in Git, and then deploy it?
Is there an easy way to do this?

Yes, Copado.

To get started, you need to download it from the AppExchange or the Copa.do website. Also, as the goal is to work with version control, get a free Git repo from GitHub or Atlassian/Bitbucket.
Next you need to connect Copado to your Salesforce environments (Dev, QA and Prod in this case) and set it up with the Git repository. There is a quick-start guide you can follow, with links to additional documentation. While you set up Copado, you’ll notice that it is natively built on the force.com platform, so your knowledge of Salesforce is all you need to modify it (this will be important later, so keep it in mind).

Once the setup is done, the process described above, using Git version control as the source of truth, would look like this with Copado:

Define feature in Copado

Assuming most Salesforce implementations are done in some form of Agile, feature definition can be done directly in Copado, including all the required information, such as Sprints, Epics, Acceptance Criteria or Story Points (click here for more details).

Scrum Masters and Analysts can use the Work Manager and Kanban Board to manage stories, roadmaps and sprints.

Develop feature in your environment

Let’s get to the part we like: get creative on how to solve the business issue in Salesforce. This one is easy indeed.

Perform a peer review in your environment

This is done between developers; however, we would like to document the result with a flag marking the story as “Review Passed”.

Here is when the catchphrase “native force.com” turns into a benefit.
Just create a checkbox on the Copado User Story object called “Peer Review Passed”, make it available to the required user profiles and put it on the User Story layout. Done*.

*: Wait, you work in Production directly? You can use Copado to deploy this modification.

Deploy to QA

So far so good, let’s go ahead and deploy. Scared?

Just click on “Commit Changes” on the Copado User Story, select your items (use column search and sorting to make your life easy), provide a message and finish your commit.

Back on the User Story page, check “Promote & Deploy”* and the following will be done by Copado**:

  • Create a feature branch
  • Retrieve the items you selected
  • Commit the items you selected on the feature branch
  • Create an intermediate Promotion Branch and merge your feature branch onto it (more info on the branching strategy can be found here)
  • Perform the deployment using Git as source
  • You can review your selections on the story, and click on the “View in Git” links to quickly navigate to your repository.

**: bonus points if you click on “Validate” to make sure you can deploy

Test user story

Once the story is deployed to the next environment, it will be visible on the User Story page, and we can change the status to “Ready for testing” and notify the test team through Chatter.

If you are thinking “Wait, this is just a record update in Salesforce and it could be automated easily”, you are completely right! Wait for the upcoming blog posts.

As soon as the test team approves the story, they can set the status to “Complete”.

Deploy to Production

Testing is done, and we can move to Prod. But wasn’t there something about other stories modifying the same component and not being ready?
Well, this is the beauty of version control: Copado will pick the feature branch contents for deployment, and those did not change. Your story is independent and you can work in a truly Agile way.

Check “Promote & Deploy” again.

Done.

That’s it. That’s all?

Well, not exactly. The tool offers tons of functionality which can make your life easy, such as the way profiles are trimmed and deployed with Git, an engine to remove (or replace) unwanted tags from XML files, modules for recording and automating testing, and the easiest way to handle Salesforce DX you have ever seen. You can even launch internal Copado logic through Process Builder!

Check out their demos or browse the documentation a little to get an overview of what is possible.

We, however, will leave technical feature descriptions aside and focus on improving our process, as there are elements which will need to be tackled to get your team closer to smooth releases.

  • You’ll never work alone, so how can we improve releases by working as a team?
  • Deploying with a simple click is maybe too easy. Can we implement quality gates?
  • Those are too many clicks. Can we automate this?

Look out for the next post, where we will take a closer look at the involved team members and how business analysts can play a key role in reducing the time required to release a feature.

[Salesforce / Git] git commit -m “Salesforce”

Why Salesforce developers and admins should use Git and some of the best tools to help you do so.

I’m opening this (hopefully) wonderful 2017 with a guest post about a subject I really love, that is Version Control in Salesforce.

Alex Brausewetter of Blue Canvas contributed this guest post. He is a founder of Blue Canvas – a company that makes version control and CI solutions for Salesforce developers and admins. Prior to starting Blue Canvas Alex built the Salesforce integration for Cloud9 IDE.

A Brief History of Git

Software development changed forever on a humble weekend in April 2005. Linus Torvalds, creator of Linux, was getting annoyed with the version control system he was using to work on the Linux kernel: a proprietary source control management (SCM) system called BitKeeper. Legend has it that the notoriously caustic Torvalds and the commercial company that owned BitKeeper started feuding that spring. Eventually, Torvalds decided he could do a better job himself and went off for a marathon coding session in which he built an entirely new version control system. He named the tool Git after himself (“git” is apparently British slang for an unpleasant person).

Git didn’t catch on right away, though. It took the founding of GitHub in 2008 before it really took off. GitHub provided added value on top of Git because it created a hosted service and user interface that made Git much more accessible. One year after the company was founded, over 46,000 public repos were hosted on GitHub.

Why Developers Love Git

Today, Git isn’t just for open source projects: major enterprises use it regularly.

Software developers in large and small companies love Git because it’s a unique and simple source control system.

Git is very fast – it was written in C, a low-level language that can move extremely quickly. Unlike previous version control systems, Git also allows you to work offline. It’s also fully distributed, so no single server hosts the code.

Git and Salesforce

Probably the most popular thing about Git, though, is its collaboration tools. And it is these tools which make Git the ideal version control system for Salesforce.

Salesforce is one of the world’s great software development platforms. There is so much you can do with the platform. And you don’t have to be a traditional developer to make great applications with Salesforce. So many people can be involved with a Salesforce project: developers, Awesome Admins, business analysts, product managers, sales ops managers and so many other diverse types of roles can work together to create great applications with Force.com.

Git is a great tool for helping all of these diverse groups collaborate on a code base. It prevents developers from overwriting each other’s work and handles merge conflicts. With Git you can do code reviews and have a picture of who has changed what at all times. You can even use “git blame” to see the specific ways in which a file has changed over time: who wrote this line of code, and when? It’s extremely useful for debugging.

Git also unlocks the power of CI for Salesforce developers. Git facilitates the kind of best practices that make you feel comfortable pushing code to production more frequently. This is good for users because they are getting new features more quickly. It’s also great for hiring developers because developers love seeing their work live in production as soon as possible.

Tools for Using Git

That said, Git can be challenging to use. Here are some of the best tools that make it easier to use.

SourceTree

SourceTree is a free tool from Atlassian that acts as a graphical user interface (GUI) for Git. Most of the time developers use Git on the command line. But many Git commands can be cumbersome and repetitive or even unintuitive. SourceTree cuts through all of that by providing a simple interface for Git commands. It allows you to push, pull, merge, fetch, clone, rebase and so many other Git commands through a simple, well-designed interface. Oh and did we mention it’s free? Many Salesforce developers are already leveraging SourceTree today to make their Git experience smoother.

GitHub, GitLab and Bitbucket: Hosted Git Services

Today it’s not uncommon to hear about GitHub before you even hear about Git itself. That’s because hosted Git solutions provide such an intuitive and wonderful way to leverage the collaborative benefits of Git. GitHub, Bitbucket and GitLab are all great tools. All provide added features like Pull Requests and commenting, as well as save you the trouble of having to host and maintain your own Git server. Which service you prefer is a matter of personal preference but all are worth looking at. GitLab and Bitbucket both also offer CI services which can be useful for Salesforce teams looking to automate their deployment pipeline.

SCM Breeze

SCM Breeze is a lesser-known tool, but it’s really nice. SCM Breeze is essentially a series of command line aliases for Git which make typing commands much faster and simpler. Instead of typing “git commit -m ‘my commit message’” you can simply type “gc -m ‘my commit message’”.

Even better, when you are adding files you can simply type “gs” for “git status” and see a numbered list of all the files which have changed since your last commit. And instead of typing “git add <file>” for each file, you can, for example, type “ga 1-10” and it will stage all ten of your files for commit. You can even cherry-pick files by typing, for example, “ga 1-4”, “ga 6” and “ga 9”.

Blue Canvas

Finally, there is Blue Canvas: a hosted Git implementation designed specifically for the Salesforce platform. Blue Canvas is version control for Salesforce: it automatically picks up changes that are made in your orgs and commits them to Git. It will even pick up declarative changes, so no one needs to learn Git on the command line if they don’t want to. Everything is synced in Git in real time. Developers can access their code base and refresh their local environments using “git pull”. To learn more check out: https://bluecanvas.io/.
