For this new how-to post, welcome Akashdeep Arora, Salesforce Evangelist/Team Lead at HyTechPro. He started his Salesforce journey in 2015. 4X Salesforce Certified Professional, 4X Trailhead Ranger, 5X Trailhead Academy Certified. Founder of the #BeASalesforceChamp campaign.
It often happens that developers who create Communities in Salesforce are somehow unable to see the Manage or Workspaces link, like below:
Let’s demystify the mystery without any delay.
Before creating communities, it’s mandatory to check the “Enable Communities” checkbox in Communities Settings (search for it in the Quick Find box).
Afterwards, search for All Communities in the Quick Find box and you will see something like this:
Click on the New Community button and you will be redirected to choose a template for creating the community.
Choose the template as per your requirement and give it a name.
Then, you would be able to see your community like this:
Now, here is a twist in the story: as you can see, the Workspaces and Builder links under Action are missing.
You might be wondering what the issue could be. Well, I’ll help you out.
The main problem is that your profile is not listed as a member of the community, so you need to add yourself as a Community Member. If you aren’t a member of a community, you can’t access Community Management to update administration settings.
Now, the question would be: how to add yourself as a Community Member?
You need the Network ID of the community that you created and a Profile or Permission Set ID.
STEP 1: From Setup, enter All Communities in the Quick Find box, select All Communities, and then right-click the community URL and select Inspect. The data-networkId provides your NetworkId.
This ID should start with “0DB”.
STEP 2: From Setup, enter Profiles in the Quick Find box, then select Profiles. Click on the profile that you want to add. The ProfileId is the last part of the browser URL after the last “/” character in classic or after the %2F character in Lightning Experience (/lightning/setup/Profiles/page?address=%2F00e58000000n15c): this ID should start with “00e”.
STEP 3: Create a .csv file with two columns: NetworkId and ParentId.
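For illustration, the file contents could look like this, using the NetworkId from Step 1 and the ProfileId from Step 2 (both IDs below are placeholders):
NetworkId,ParentId
0DB000000000001AAA,00e58000000n15c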
STEP 4: Open Data Loader and select Network Member Group Object. Make sure that you check the “Show all Salesforce objects” checkbox.
Browse to the .csv file that you created earlier, map the fields on the Network Member Group object, start the insert call, and you are all set!
Now, go to “All Communities” in Salesforce and you will be able to see the required links:
Bravo! You have successfully added yourself as a member of the Community and now, as a member, you are able to access Community Management using the Workspaces link.
It doesn’t matter how slowly you go as long as you don’t stop.
– Salesforce Developer at Johnson & Johnson EMEA Development Centre (EDC)
– Started his Salesforce journey in 2014
– Has passed 10 certifications, including Certified Application Architect
– Holds a Physics degree
– Leader of the Limerick, Ireland Developer Group
– Married since 1994, has a daughter
– Loves watching sci-fi movies and good comedies
– Lives in Limerick, Ireland
Introduction to a familiar situation
Let’s face it with honesty and courage: writing test methods for our Apex code is (very often) boring and time-consuming, usually because of the difficulty regarding the creation of test data. In most implementations we can see a mediocre result, which could be described as a class (e.g. DataFactory) including static methods that generate and insert SObject records. The following code describes this inefficient pattern:
public class DataFactory {
    public static List<Account> createAccounts(Integer numRecords) {
        List<Account> results = new List<Account>();
        for (Integer counter = 1; counter <= numRecords; counter++) {
            Account record = new Account();
            // define fields
            record.Name = ...
            ...
            results.add(record);
        }
        return results;
    }
    // other methods
}
Using this logic, we can execute our Data Factory method with:
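List<Account> accounts = DataFactory.createAccounts(100);
insert accounts;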
to create and insert 100 accounts. As you can see, one issue here is how we generate data for other fields in our test method. But things get worse when we create related SObjects, for example contacts with accounts. Let’s examine a new method createContacts, based on the previous pattern:
public static List<Contact> createContacts(Integer numRecords) {
    List<Account> accounts = createAccounts(10);
    // optionally process accounts and manually add/modify Account fields
    insert accounts;
    List<Contact> results = new List<Contact>();
    for (Integer counter = 1; counter <= numRecords; counter++) {
        Contact record = new Contact();
        // define fields
        record.LastName = ...
        record.AccountId = ... // get the ID from accounts list
        ...
        results.add(record);
    }
    return results;
}
When we call the above method from our test method, e.g. with
List<Contact> contacts = DataFactory.createContacts(100);
// optionally process contacts and manually add/modify Contact fields
insert contacts;
we certainly insert 10 accounts and 100 contacts related to these accounts. But what if we need to modify the generated accounts, or we need to set additional Account fields? This pattern doesn’t allow us to do this. In more complex scenarios, we may have to insert many more SObjects. The final result is a Data Factory class with methods that create test data BUT without the ability to easily modify the created records.
I can finally hear your question: Do you propose a better approach? Is there a more flexible and easier way to do it? And the answer is YES!
Making a Data Factory Library with Forceea Templates
Forceea Data Factory framework is an open source GitHub project, with the following capabilities:
creates records for standard or custom objects, for any standard or custom field
automatically defines the required fields
creates static or random data for fields of any data type: Integer, Currency, Double, Date, Datetime, Time, Boolean, String, TextArea, Percent, Reference, Email, Phone, URL, Base64 (BLOB), Picklist and MultiPicklist
creates real random first and last names
creates real random addresses with street, zip code, city, region/state and country
creates serial data for date, datetime, integer, decimal, currency and percent
can copy data from another field of the same record or a lookup record
can create the same random data, using a pseudo-random number generator
handles record types and field dependencies (dependent picklists)
supports record groups for inserting and deleting records
validates the definitions based on the field data type
provides many methods to get/insert the created records, add/delete field definitions, get the errors, configure the amount of information returned during run-time (debug log) and more
includes an extended error messaging system
and will be our main tool to build a powerful and flexible DataFactory class (our Data Factory Library). This class will include static methods, our Templates, which actually will not insert any data at all! What these Templates do is instruct Forceea how to generate the data.
Let’s meet our first Template:
public class DataFactory {
    // returns definitions for: Accounts
    public static FObject getDefAccounts() {
        FObject result = new FObject('Account');
        result.setDefinition('Name', 'static value(Company)');
        result.setDefinition('Name', 'serial type(number) from(1) step(1) scale(0)');
        result.setDefinition('Phone', 'random type(phone) format("(30) DDD dD-00-DD")');
        result.setDefinition('Industry', 'random type(picklist)');
        result.setDefinition('Site', 'random type(url)');
        return result;
    }
}
Obviously the method getDefAccounts returns an FObject – the class instance for generating data with Forceea. Reading the code you can see that we define accounts with random values for all required fields. So, these are our guidelines so far:
Create a DataFactory class
Create a master Template for each SObject, with the name getDef<SObjectApiName>s, e.g. getDefCases
Use the above pattern for each master Template, defining all common required fields (the fields required by any record type)
For the Template field definitions, use definitions which generate:
– random values for picklist fields
– random values for fields with date/datetime, checkbox, email, phone, currency, percent, address, and text area data types
– serial values for the Name field (notice how we did it in the getDefAccounts method)
Even though it’s not obvious from the above code, Forceea (by default) will find and insert field definitions for any required fields we haven’t defined, but it’s a best practice to define all these required fields in our master Template.
The setDefinition method sets the expected values for each field, using a descriptive data generation language called Dadela. For example, the definition random type(picklist) except(Hot) for the Rating field generates random values from the picklist field’s values, excluding the value “Hot”.
Now, for every record type of each SObject create a new Template, for example:
// returns definitions for: Accounts with MediumAccount record type
public static FObject getDefMediumAccounts() {
    FObject result = getDefAccounts();
    result.setDefinition('RecordTypeId', 'static value(MediumAccount)');
    result.setDefinition('NumberOfEmployees', 'random type(number) from(10) to(100) scale(-1)');
    result.setDefinition('AnnualRevenue', 'random type(number) from(1000000) to(10000000) scale(3)');
    return result;
}
This new Template builds on the master Template getDefAccounts, defining only the record type and the additional fields which are related to this specific record type (NumberOfEmployees and AnnualRevenue). All other defined fields from the master Template are used as they are, so we don’t duplicate any field definitions. Our additional guideline:
Create a record type Template for each SObject’s record type, with the name getDef<RecordTypeDescription><SObjectApiName>s, e.g. getDefServiceCases
This is it! This is what we need to do for SObjects which don’t include a Lookup or Master-detail relationship. But how do we create Templates for those SObjects that do include a relationship? Let’s see how, with our second SObject:
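A minimal sketch of what this Template could look like (assuming the Contact needs just a LastName definition and the AccountId lookup; the exact field definitions may differ):

// returns definitions for: Accounts - Contacts
public static Map<String, FObject> getDefContactsAndAccounts() {
    Map<String, FObject> fobjectsByName = new Map<String, FObject>();
    // Account: reuse the record type Template
    String objName = 'Account';
    FObject objAccount = getDefMediumAccounts();
    fobjectsByName.put(objName, objAccount);
    // Contact: define the fields and the lookup to the previously inserted accounts
    objName = 'Contact';
    FObject objContact = new FObject(objName);
    objContact.setDefinition('LastName', 'random type(lastname)');
    objContact.setDefinition('AccountId', 'random lookup(Account) source(forceea)');
    fobjectsByName.put(objName, objContact);
    return fobjectsByName;
}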
I think you’ll agree that this Template is more interesting. Its first lines use the previously created record type Template getDefMediumAccounts to define the Account fields. We could also
insert one or more new field definitions using objAccount.setDefinition('FieldApiName', '<Field Definition>') before fobjectsByName.put(..) or
modify an existing field definition – for example, to define a static value for the (existing) NumberOfEmployees field, we can use:
// delete previous field definitions
objAccount.deleteFieldDefinitions('NumberOfEmployees');
// add new definition
objAccount.setDefinition('NumberOfEmployees', 'static value(100)');
Finally, we insert the FObject for the Account into the fobjectsByName map and we proceed to the field definitions for contacts.
If you noticed the definition objContact.setDefinition('AccountId', 'random lookup(Account) source(forceea)') and asked yourself what source(forceea) is: this is a way to request that the framework gets the related account IDs from the previously inserted accounts. There are a lot of lookup field definitions that will certainly need your attention if you start developing with the framework, but for the moment let’s not go any deeper.
In many implementations we have a set of dependent SObjects. Let’s say that in order to create records for the Case, we have to create records for Account and Contact (and perhaps records for 4 more SObjects) using a Template like getDefCasesAndAccountsContacts. This is a quite complex data factory process, which can be handled by Forceea very smoothly – you just add the following pattern for each required SObject:
objName = '<SObjectApiName>';
FObject objMySObject = new FObject(objName);
objMySObject.setDefinition('<FieldApiName>', '<Field Definition>');
// other required field definitions
fobjectsByName.put(objName, objMySObject);
Our last guidelines:
Document the SObjects that are returned by any Template, with the correct order, e.g. // returns definitions for: Accounts - Contacts - Cases
Use the format getDef<SObjectName>And<RelatedSObjects> for Templates with related SObjects, e.g. getDefCasesAndAccountsContacts
Finally, insert the following method in your DataFactory class:
public static void insertRecords(Map<String, FObject> fobjects) {
    for (FObject obj : fobjects.values()) {
        obj.insertRecords(true);
    }
}
The test method
After you have created your Templates, let’s see how you can take full advantage of them. We’ll use getDefContactsAndAccounts as an example. In your test method, the first step is to define a map:
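Map<String, FObject> fobjectsByName = DataFactory.getDefContactsAndAccounts();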
The second step is to modify any SObject definitions, if it’s needed. For our example here, we’ll make things a little more difficult with the following requirements:
For the Account: we need to insert 10 records, with a) random values for the Description field and b) any picklist value except “Hot” for the Rating field.
For the Contact: we need to insert 100 records, with random values from 1/1/1960 to 31/12/2000 for the Birthdate field.
All other fields will get the default definitions from their Templates.
// initialize
Map<String, FObject> fobjectsByName = DataFactory.getDefContactsAndAccounts();
FObject objAccount = fobjectsByName.get('Account');
FObject objContact = fobjectsByName.get('Contact');
// define number of records
objAccount.records = 10;
objContact.records = 100;
// optionally modify an existing definition
objAccount.deleteFieldDefinitions('Rating');
// optionally define new field definitions
objAccount.setDefinition('Description', 'random type(text) minlength(10) maxlength(40)');
objAccount.setDefinition('Rating', 'random type(picklist) except(Hot)');
objContact.setDefinition('Birthdate', 'random type(date) from(1960-1-1) to(2000-12-31)');
// insert records
DataFactory.insertRecords(fobjectsByName);
Using the above pattern, it’s easy for everyone to understand what changes have been made in comparison to the getDefContactsAndAccounts Template.
Did you say that we need the inserted contacts for our System.assert? No problem at all! Just use:
List<Contact> contacts = objContact.getInsertedRecords();
OR
List<Contact> contacts = FObject.getInsertedRecords('Contact');
Conclusion
Forceea Templates are easy to implement, and they are powerful enough to help you write your Apex test methods faster. Most importantly:
your test methods will be more understandable to any other developer, and
new test methods will require less effort to develop
The best way to see if this solution is suitable for you is to start working with it and create a pilot version of your new DataFactory class. If you’re not satisfied with your existing Data Factory (or if you don’t have a Data Factory at all), why don’t you give it a try?
I’ve recently published a post on Mason Frank’s blog, where I wrote about some Salesforce Developer hacks. Here’s a quick summary below and link to the full article, I hope you enjoy!
I’m lazy. Most developers are! This is not necessarily a bad thing, and Bill Gates summarizes this concept easily by saying “I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it”.
Don’t misunderstand this statement—laziness is not staying on your couch the whole day and watching all of Game of Thrones in one sitting. It’s a different kind of being lazy.
The lazy developer is the one who, in order to avoid doing anything more than once, tries to automate it or knows exactly where that line of code is stored (they may actually not be able to write it themselves and thus have to Google for it).
That’s exactly what I saw in my own 12 years of work experience: the best developer is not the one who knows exactly which Apex function has which parameters (if you can, well… congratulations!), but the one who quickly knows how to solve a problem and where to look to maximize productivity (and happiness for the project manager or the customer).
Let’s start talking about Salesforce Blockchain, with our guest blogger Priscilla Sharon, Salesforce Business Solution Executive for DemandBlue.
DemandBlue is in the business of helping its customers maximize their Salesforce investment through predictable outcomes. As we thrive in an era of cloud-based Infrastructure, Platform and Software services, DemandBlue has pioneered “Service-as-a-Service” through a value-based On Demand Service model that drives bottom-line results. They foster innovation through “Continuous Engagement and On Demand Execution” that offers their customers Speed, Value and Success to achieve their current and future business objectives.
Demystifying Salesforce Blockchain
Salesforce is a company that is always looking ahead to the next big thing in technology, whether it is mobile, social, IoT or Artificial Intelligence. The world’s leading CRM company recently took the wraps off Salesforce Blockchain, its new low-code platform enabling organizations to share verified and distributed data sets across a trusted network of partners and third parties. With the new launch announced at TrailheaDX, the fourth annual developer conference, Salesforce is bringing the combined capabilities of the world’s #1 CRM and a low-code blockchain platform to enable organizations to create blockchain networks, workflows and apps that have the potential to deliver holistic customer experiences.
Leading global brands are already taking advantage of this new platform. Arizona State University uses Blockchain for Salesforce to design and create an educational network that allows universities to securely verify and share information. IQVIA, a global leader in advanced analytics, technology solutions and contract research services has partnered with Salesforce to explore an array of possible blockchain technology initiatives including regulatory information management and drug label processing. S&P Global Ratings, a key provider of ratings, benchmarks, analytics and data for the global capital and commodity markets is leveraging the platform to review and approve new business bank accounts with improved agility. And more leading brands are exploring the infinite possibilities of delivering seamless customer experiences using the Salesforce Blockchain platform.
Take a deeper dive into the intricacies and the nuances of the brand-new Salesforce Blockchain platform that packs unique and incredible features, and promises to be the first low-code Blockchain platform for the CRM.
Salesforce Blockchain – The Technology
Blockchain Technology for Salesforce is built on Hyperledger Sawtooth, an open source modular platform for designing, deploying, and running distributed ledgers, and it’s customized for Salesforce Lightning, Salesforce’s front-end framework for app development. It consists of three components:
Salesforce Blockchain Builder – a developer toolset for building blockchain applications
Blockchain Connect – integrates blockchain actions with Salesforce apps
Blockchain Engage – enables customers to invite parties to blockchain apps created within Salesforce
Businesses can take advantage of the platform to build and manage blockchain networks, apps and smart contracts using Salesforce’s powerful low-code capabilities. Customers can even create and share a blockchain object using the same process as they already do for any CRM data object in Salesforce without the need for writing code.
We help companies build for the future by making breakthrough technology accessible and easy to use — today we are doing just that with Salesforce Blockchain. Now, companies will be able to create new ecosystems and achieve new levels of interconnectivity through trusted partner networks
Bret Taylor, President and Chief Product Officer of Salesforce at TrailheaDX
3 undeniable Reasons to Love it
Salesforce Blockchain is deeply customized for Salesforce Lightning and is uniquely designed to lower the barrier for creating trusted partner networks. It enables companies to slickly bring together authenticated, distributed data and CRM processes. With Salesforce Blockchain integration you can:
Build Blockchain Networks with Clicks, not Code—the Salesforce platform is well received by developers for its unique low-code capabilities. With the newly launched Salesforce Blockchain platform, you can slickly build and maintain blockchain networks, apps and smart contracts, using just clicks, not code
Create Actionable Blockchain Data—Make blockchain data actionable through native integration with Salesforce. Layer complex blockchain data along with existing sales, service and marketing workflows like search queries and process automation. Also, you can now run Einstein-powered artificial intelligence algorithms that integrate blockchain data into sales forecasts, predictions and much more
Lower Barrier to Entry for Partners—Engage partners, distributors and intermediaries easily to leverage Salesforce Blockchain. Even more, companies can now pull in APIs, pre-built apps and integrate any existing blockchains with Salesforce. With an intuitive engagement layer, organizations can also easily interact with and add third parties to their blockchain with a few clicks and simple authentication thereby creating trust networks
Salesforce Blockchain is currently available to select design partners and will be generally available in 2020.
Ivano Guerini is a Salesforce Senior Developer at Webresults, part of Engineering Group, since 2015. He started his career on Salesforce during his university studies and based his final thesis on it. He’s passionate about technology and development; in his spare time he enjoys developing applications, mainly on Node.js.
In this article, I’m going to walk you through the steps to set up CI with Salesforce DX. For this, I decided to take advantage of Bitbucket and its integrated tool Bitbucket Pipelines. This choice was not made after a comparison between the various version control systems and CI tools, but was driven by some business needs for which we decided to fully embrace cloud solutions, in particular the Atlassian suite, of which Bitbucket is part.
What is Continuous Integration?
In software engineering, continuous integration (often abbreviated to CI) is a practice applied in contexts where software development takes place through a versioning system. It consists of frequently aligning the developers’ work environments with a shared environment.

In particular, it is generally assumed that automatic tests have been prepared, which developers can execute immediately before releasing their contributions to the shared environment, so as to ensure that the changes do not introduce errors into the existing software.
Let’s apply this concept to our Salesforce development process using sfdx.

First of all, we have a production org where we want to deploy and maintain the application; then, typically, we have one or more sandboxes, such as for UAT, Integration Test and development.

With sfdx we also have the concept of the scratch org: a disposable and preconfigured organization where we, as developers, can deploy and test our work before pushing it into the deployment process.
In the image below you can see an approach to CI with Salesforce DX. Once a developer has finished a feature, they can push it into the main Developer branch; from this, the CI takes place, creating a scratch org to run automated tests, such as Apex unit tests or even Selenium-like test automations. If there are no errors, the dev can create a pull request, moving forward in the deployment process.
In this article, I’ll show you how to set up all the required tools and as an example, we will only set up an auto-deploy to our Salesforce org on every git push operation.
Toolbox
Let’s start with a brief description of the tools we’re going to use:
Git – a version control system for tracking changes in files and coordinating work on those files across the team. All metadata items, whether modified on the server or locally, are tracked via Git. This provides us with a version history as well as traceability.
Bitbucket – a cloud-based Git server from Atlassian used for hosting our repository. It provides a UI to navigate the Git repository and has many additional features, like pull requests, which are used for approving and merging changes.
Docker – provides a way to run applications securely, packaged with all their dependencies and libraries. We will be using it to create an environment for running sfdx commands.
Bitbucket Pipelines – an add-on for Bitbucket Cloud that will allow us to kick off deployments and validations when updates are made to the branches in Bitbucket.
If you have always worked in Salesforce, it’s quite possible that Docker containers sound alien to you. So what is Docker? In simple terms, Docker can be thought of as a virtual machine in the cloud: it provides an environment in the cloud where applications can run. Bitbucket Pipelines supports Docker images for running Continuous Integration scripts. So, instead of installing sfdx on your local system, you specify that it should be installed in your Docker image, so that our CI scripts can run.
Create a developer Org and enable the DevHub
We made a brief introduction about what CI is and the tools we’re going to use; now it’s time to get to the heart of it and start configuring our tools, starting from our Salesforce org.
We are going to enable the DevHub, to be able to work with sfdx, and we are going to set up a connected app that allows us to handle the login process inside our Docker container.
For this article, I created a dedicated developer org in order to have a clean environment. In this way, we obtain a new environment on which to perform all the tests we want.
Let’s go immediately to enable the DevHub: Setup → Development → DevHub, then click on the Enable DevHub toggle.
Once enabled, it can’t be disabled, but this is a requirement to be able to work with SFDX.
Now you can install the sfdx CLI tool on your computer.
Create a connected app
Now that we have our new org and the sfdx CLI installed, we can run the sfdx commands that make it easy for us to manage the entire application development life cycle from the command line, including creating scripts that facilitate automation.
However, our CI will run in a separate environment over which we don’t have direct control, such as for the login. So we will need a way to manage the authorization process inside the Docker container when your CI automation job runs.
To do this we’ll use the OAuth JSON Web Token (JWT) bearer flow that’s supported in the Salesforce CLI. This OAuth flow gives you the ability to authenticate using the CLI without having to interactively log in. This headless flow is perfect for automated builds and scripting.
Create a Self-Signed SSL Certificate and Private Key
For a CI solution to work, you’ll generate a private key for signing the JWT bearer token payload, and you’ll create a connected app in the Dev Hub org that contains a certificate generated from that private key.
To create an SSL certificate you need a private key and a certificate signing request. You can generate these files using the OpenSSL CLI with a few simple commands.
If you use a Unix-based system, you can install the OpenSSL CLI from the official OpenSSL website.
If you use Windows instead, you can download an installer from Shining Light Productions, although there are plenty of alternatives.
We will follow some specific commands to create a certificate for our needs; if you want to better understand how OpenSSL works, you can find a handy guide in this article.
Create a folder on your PC to store the generated files: mkdir certificates
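Generate a private key protected with a password (this is the password referenced in the next step); a typical command, assuming a 2048-bit key, is: openssl genrsa -des3 -passout pass:<password> -out server.pass.key 2048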
Create a key file from the server.pass.key file using the same password from before: openssl rsa -passin pass:<password> -in server.pass.key -out server.key
Delete the server.pass.key: rm server.pass.key
Request and generate the certificate; when prompted for the challenge password, press enter to skip the step: openssl req -new -key server.key -out server.csr
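Generate the self-signed certificate from the request and the private key (365 days of validity is a reasonable default): openssl x509 -req -sha256 -days 365 -in server.csr -signkey server.key -out server.crt
With the certificate ready, create the connected app that will hold it: from Setup, open the App Manager, click New Connected App, enable the OAuth settings and check Use digital signatures.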
To upload your server.crt file, click Choose File.
For OAuth scopes, add:
Access and manage your data (api)
Perform requests on your behalf at any time (refresh_token, offline_access)
Provide access to your data via the Web (web)
Click Save
Edit Policies to avoid authorization step
After you’ve saved your connected app, edit the policies to enable the connected app to circumvent the manual login process.
Click Manage.
Click Edit Policies.
In the OAuth policies section, for Permitted Users select Admin approved users are pre-authorized, then click OK.
Click Save.
Create a Permission Set
Lastly, create a permission set and assign pre-authorized users for this connected app.
From Setup, enter Permission in the Quick Find box, then select Permission Sets.
Click New.
For the Label, enter: sfdx ci
Click Save.
Click sfdx ci | Manage Assignments | Add Assignments.
Select the checkbox next to your Dev Hub username, then click Assign | Done.
Go back to your connected app.
From Setup, enter App Manager in the Quick Find box, then select App Manager.
Next to sfdx ci, click the list item drop-down arrow, then select Manage.
In the Permission Sets section, click Manage Permission Sets.
Select the checkbox next to sfdx ci, then click Save.
Test the JWT Auth Flow
Open your Dev Hub org.
If you already authorized the Dev Hub, open it: sfdx force:org:open -u DevHub
If you haven’t yet logged in to your Dev Hub org: sfdx force:auth:web:login -d -a DevHub
Adding the -d flag sets this org as the default Dev Hub. To set an alias for the org, use the -a flag with the alias as its argument.
To test the JWT auth flow you’ll use some of the information that we asked you to save previously. We’ll use the consumer key that was generated when you created the connected app (CONSUMER_KEY), the absolute path to the location where you generated your OpenSSL server.key file (JWT_KEY_FILE) and the username for the Dev Hub (HUB_USERNAME).
On the command line, create these three session-based environment variables:
export CONSUMER_KEY=<connected app consumer key>
export JWT_KEY_FILE=../certificates/server.key
export HUB_USERNAME=<your Dev Hub username>
These environment variables facilitate running the JWT auth command.
Enter the following command as-is on a single line: sfdx force:auth:jwt:grant --clientid ${CONSUMER_KEY} --username ${HUB_USERNAME} --jwtkeyfile ${JWT_KEY_FILE} --setdefaultdevhubusername
This command logs in to the Dev Hub using only the consumer key (client ID), the username, and the JWT key file. And best of all, it doesn’t require you to interactively log in, which is important when you want your scripts to run automatically.
Congratulations, you’ve created your connected app and you are able to login using it with the SFDX CLI.
Set up your development environment
In this section we will configure our local environment, creating a remote repository in Bitbucket and linking it to our local sfdx project folder.
If you are already familiar with these steps you can skip and pass directly to the next section.
Just insert your email and follow the initial registration procedure.
Once logged in, you will be able to create a new git repository from the plus button on the right menu.
You will be prompted with a window like the following; just insert a name for the repository (in my case I’ll name it sfdx-ci), leaving Git selected as the Version Control System.
We’re in, but our repo is totally empty. Bitbucket provides some quick commands to initialize our repo; select the clone command:
Move to your desktop, open the command line tool, then paste and execute the git clone command. This command will create a folder named like the Bitbucket repository, already linked to it as a remote.
Initialize SFDX project
Without moving from our position, execute the sfdx project creation command, using the -n parameter with the same name as the folder we just cloned from git: sfdx force:project:create -n sfdx-ci
Try deploy commands
Before we pass to configuring our CI operations, let’s try them in our local environment. First of all, we must have created our sfdx project.
The general sfdx deployment flow into a sandbox or production org is:
Convert from source form to Metadata API form: sfdx force:source:convert -d <target directory>
Use the Metadata API to deploy: sfdx force:mdapi:deploy -d <same directory as step 1> -u <username or alias>
These commands are the same ones we are going to use inside our Bitbucket Pipelines. You can try them in your local environment to see how they work.
Set up Continuous Integration
In the previous sections, we talked mostly about common Salesforce project procedures. In the next ones, we are going deeper into the CI world, starting with a brief introduction to Docker and Bitbucket Pipelines.
Lastly, we’ll see how to create a Docker image with SFDX CLI installed and how to use it in our pipeline to run sfdx deploy commands.
Docker
Wikipedia defines Docker as:

an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.
In simpler words, Docker is a tool that allows developers, sys-admins, etc. to easily deploy their applications in a sandbox (called containers) to run on the host operating system i.e. Linux. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development.
Docker Terminology
Before we go further, let me clarify some terminology that is used frequently in the Docker ecosystem.
Images – The blueprints of our application which form the basis of containers.
Containers – Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run.
Docker Daemon – The background service running on the host that manages building, running and distributing Docker containers. The daemon is the process, running in the operating system, which clients talk to.
Docker Client – The command line tool that allows the user to interact with the daemon.
Docker Hub – A registry of Docker images. You can think of the registry as a directory of all available Docker images.
Dockerfile – A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It’s a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands.
Build our personal Docker Image with SFDX CLI installed
Most Dockerfiles start from a parent image. If you need to completely control the contents of your image, you might need to create a base image instead. A parent image is an image that your image is based on; it refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image.

Most Dockerfiles start from a parent image rather than a base image, and this will be our case: we will start from a Node image.
Create a folder on your machine, create a file named Dockerfile, and paste the following code:
# Alpine-based Node image, so the apk package manager used below is available
FROM node:lts-alpine
RUN apk add --update --no-cache git openssh ca-certificates openssl curl
RUN npm install sfdx-cli --global
RUN sfdx --version
USER node
Let’s explain what this code means, in order:
We use a Node base image; this image comes with Node.js and NPM preinstalled. This is the official Node.js Docker image, and the lts-alpine tag indicates an Alpine-based build of the latest LTS version (an Alpine variant is needed because apk, used in the next line, is Alpine’s package manager);
Next, with the apk add command, we install some additional utility tools, mainly git and openssl, to handle the sfdx login using certificates;
Then, using the npm command, we install the SFDX CLI tools;
Just a check for the installed version;
And finally the USER instruction sets the user name to use when running the image.
Now we have to build our image and publish it to Docker Hub, so it’s ready to use in our Pipelines.
Log in to Docker Hub with your credentials: docker login --username=yourhubusername --password=yourpassword
Build your Docker image (the trailing dot indicates the current directory as the build context): docker build -t <your_username>/sfdxci .
Test your docker image locally: docker run <your_username>/sfdxci
Push your Docker image to your Docker Hub repository docker push <your_username>/sfdxci
Pushing a Docker image to Docker Hub makes it available for use in Bitbucket Pipelines.
Bitbucket Pipelines
Now that we have a working Docker image with sfdx installed, we can continue configuring the pipeline, which is the core of our CI procedure.
Bitbucket Pipelines is an integrated CI/CD service, built into Bitbucket. It allows you to automatically build, test and even deploy your code, based on a configuration file in your repository. Essentially, it creates containers in the cloud for you.
Inside these containers, you can run commands (like you might on a local machine) but with all the advantages of a fresh system, custom configured for your needs.
To set up Pipelines, you need to create and configure the bitbucket-pipelines.yml file in the root directory of your repository. If you are working with branches, this file must be present in the root directory of each branch in order to be executed.
A bitbucket-pipelines.yml file looks like the following:
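Here is a minimal sketch (the image and the script command are just placeholders):

image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        script:
          - echo "Hello, Pipelines!"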
There is a lot you can configure in the bitbucket-pipelines.yml file, but at its most basic the required keywords are:
image – the Docker image that will be used to create the Docker container. You can use the default image (atlassian/default-image:latest), but a personal one is preferred, to avoid wasting time installing the required tools (e.g. the SFDX CLI) on every run. To specify an image, use image: <your_dockerHub_account/repository_details>:<tag>
pipelines – contains all your pipeline definitions.
default – contains the steps that run on every push, unless they match one of the other sections.
branches – specifies the name of a branch on which the defined steps run, or use a glob pattern (to learn more about glob patterns, refer to the official BitBucket guide).
step – each step starts a new Docker container with a clone of your repository, then runs the contents of your script section.
script – a list of cli commands that are executed in sequence.
Other than default and branches, there are more keywords that identify which steps must run, such as pull-requests, but I’ll leave you to the official documentation; we are going to use only these two.
Keep in mind that each step in your pipeline runs in a separate Docker container, and that the script runs the commands you provide in this environment, with the repository folder available.
Configure SFDX deployment Pipelines
Before configuring our
pipeline, let’s review for a moment the steps needed to deploy to a production
org using sfdx cli.
First of all we need to login into our SF org, to do so we have created a Salesforce Connected App to allow us logging in without any manual operation, simply using the following command:
As you can see, there are three parameters that we have to set in this command line:
CONSUMER_KEY
SFDC_PROD_USER
SFDC_PROD_URL
Bitbucket offers a way to store variables that can be used in our pipelines, in order to avoid hard-coded values.
Under Bitbucket repository Settings → Pipelines → Repository Variables, create three variables and fill them in with the data at your disposal.
Another parameter required by this command is the server.key file; in this case I simply added it to my repository under the keys folder.
It’s not a good practice, and I will move it to a more secure position, but for this demonstration it’s enough.
Now that you are logged in, you only need two sfdx commands to deploy your metadata: one to convert your project into Metadata API format, and one to deploy to the Salesforce org:
sfdx force:source:convert -d mdapi
sfdx force:mdapi:deploy -d mdapi -u <SFDC_PROD_USER>
Like in the login command, we are going to use a Pipeline Variable to indicate the target org username with the -u parameter.
OK, now that we know how to deploy an SFDX project, we can put all this into our pipeline.
Move to the root of our sfdx project, create the bitbucket-pipelines.yml file and paste the following code (replace the image name with your own Docker image):
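As a sketch, assuming the Docker image built earlier and the repository variables defined above, the file could look like this:

image: <your_username>/sfdxci:latest
pipelines:
  default:
    - step:
        script:
          - sfdx force:auth:jwt:grant --clientid $CONSUMER_KEY --username $SFDC_PROD_USER --instanceurl $SFDC_PROD_URL --jwtkeyfile keys/server.key
          - sfdx force:source:convert -d mdapi
          - sfdx force:mdapi:deploy -d mdapi -u $SFDC_PROD_USER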
Commit and push these changes to the git repository.
Test the CI
OK, we have our CI up and running; let’s do a quick test.
In your project create a new apex class and put some code in it. Then commit and push your changes.
git add .
git commit -am "Test CI"
git push
As we said the pipeline will run on every push into the remote repository, you can check the running status under the Pipelines menu. You will see something like this:
As you know, the mdapi:deploy command is asynchronous, so to check whether there were errors during the deploy you have to run the mdapi:deploy:report command, specifying the jobId; or, if you prefer, you can check the deploy directly in the Salesforce org, under the Deployment section.
Conclusions
With this article I wanted to provide you with the necessary knowledge to start configuring a CI process using Bitbucket Pipelines.
Obviously, what I showed you is not enough for a CI process that can be used in an enterprise project; there is still a lot to do.
Here are some starting points to improve what we have seen:
Store the server.key in a safe place so that it is not directly accessible from your repository.
Manage the CI in the various sandbox environments used
For the developer branch, consider automating the creation of a scratch org and running Apex Unit Tests.
I recently joined other Salesforce influencers in contributing to Mason Frank’s ‘Ask The Experts’ series, where I wrote about my ten best tips to become an amazing Salesforce Developer. Here’s a quick summary below and link to the full article, I hope you enjoy!
10 signs you’re an amazing Salesforce Developer
“Am I the best Salesforce Developer I can be?”
This is a question all Salesforce Developers should be asking themselves. If you said “Yes”, well… you don’t need to read this post as you may be in the “Olympus” of coders.
If your answer is “No”, welcome my friend, keep reading this post. I have some tips for you, based on my experiences, that may lead you to the right trail.
I’ve always felt like I’ve never achieved anything to the top level, and I guess this drove me to overcome my limits and achieve a lot in my personal and professional life.
If you are in the circle of developers that believe they can empower their skills day after day, you are using a mental process that I call “Continuous Self-Improvement” (CSI, isn’t it cool? I guess I’ve not invented anything, but I love giving names to stuff). I even call it the “Jon Snow syndrome”, because your student mentality means you’re a coder who feels like they “know nothing”.
Our week’s trailblazer is Claudio Marzorati, who will be listing some of his favorite Summer ’19 Salesforce platform release features.
Claudio is a Senior Salesforce Developer @ Sintegra s.r.l. (Milan). He has worked with different retailers, which allowed him to grow his technical background in several areas. From analysis, to development, to the direct relationship with the customer, nothing is left to chance. His other passions are running and traveling!
Summer ’19 has finally arrived, and all its changes are going to be applied in our orgs.
Here I summarize some of the most important features that can impact your org.
Lightning URL parameters have been namespaced
Finally, they have released the functionality that forces URL parameters to be namespaced. So if you add ?foo=bar to the URL, it will get auto-stripped, but if you add ?c__foo=bar to the URL, it will persist.
Keep Record Context When Switching from Salesforce Classic to Lightning Experience
When you switch from Salesforce Classic to Lightning Experience, you land on the same page in Lightning Experience, if it exists. If the same page doesn’t exist in Lightning Experience, you are redirected to your default landing page, which is determined by the org default or your customizations.
Choose from Two Record View Options
Now you have two record page view default options. Choose between the current view—now called Grouped view—and the new Full view. In Setup, enter Record Page Settings in the Quick Find box, and select Record Page Settings.
Search Picklist Fields in List Views
You don’t have to manually pick through your list views to find the picklist values you’re looking for. List view search now includes picklists in your query results. Dependent picklists and picklists with translated values aren’t searchable.
Continuation
Finally, we can use the Continuation pattern from an Aura component or a Lightning web component. The Continuation class in Apex is used to make a long-running request to an external web service and process the response in a callback method. An asynchronous callout made with a continuation doesn’t count toward the Apex limit of 10 synchronous requests that last longer than five seconds; therefore, you can make more long-running callouts and integrate your component with a complex back-end API. In Lightning Web Components, we can now use @salesforce/apexContinuation to get access to an Apex method that uses a Continuation.
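A minimal Apex sketch of the pattern (the class name and endpoint are placeholders):

public with sharing class ContinuationDemoController {
    @AuraEnabled(continuation=true cacheable=true)
    public static Object startRequest() {
        // continuation with a 40-second timeout
        Continuation con = new Continuation(40);
        con.continuationMethod = 'processResponse';
        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        req.setEndpoint('https://example.com/slow-service'); // placeholder endpoint
        // save the request label so the callback can retrieve the response
        con.state = con.addHttpRequest(req);
        return con;
    }

    // callback invoked when the external service responds
    @AuraEnabled(cacheable=true)
    public static Object processResponse(List<String> labels, Object state) {
        HttpResponse response = Continuation.getResponse((String) state);
        return response.getBody();
    }
}

In the Lightning web component, the method is then imported with import startRequest from '@salesforce/apexContinuation/ContinuationDemoController.startRequest';.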
Aura Components
There are a lot of improvements, especially in LWC, and below I report the ones I use most in my development.
lightning:recordEditForm
– density option
Sets the arrangement style of fields and labels in the form. Accepted values are compact, comfy, and auto. The default is auto, which lets the component dynamically set the density according to the user’s Display Density setting and the width of the form.
– onerror handler event changed
You can now return the error details when a required field is missing using event.getParam("output").fieldErrors. To display the error message automatically on the form, include lightning:messages immediately before or after the lightning:inputField components.
lightning:inputField
– reset function added
Resets the form fields to their initial values.
There are a few deprecated components in the force and ui namespaces: force:recordEdit, force:recordView, ui:input (all types), ui:button, ui:menu (all types), ui:output (all types), ui:spinner.
Web Component
Salesforce is spending a lot of time and resources on improving these new components. A lot of new functionality has been added, and below I report the most significant.
lightning-input
– autocomplete function added
Controls autofilling of the field. This attribute is supported for email, search, tel, text, and url input types.
– date-style (or time-style) function added
The display style of the date when type='date' or type='datetime'. Valid values are short, medium, and long. The default value is medium. The format of each style is specific to the locale. This attribute has no effect on mobile devices.
lightning-input-field
– reset function added
Resets the form fields to their initial values.
lightning-record-edit-form
– density option
Sets the arrangement style of fields and labels in the form. Accepted values are compact, comfy, and auto. The default is auto, which lets the component dynamically set the density according to the user’s Display Density setting and the width of the form.
– onerror handler event changed
You can now return the error details when a required field is missing using event.getParam("output").fieldErrors. To display the error message automatically on the form, include lightning:messages immediately before or after the lightning:inputField components.
Minor UX Improvement
They have changed the UX in several places. Some examples, for recent records and for related lists, are reported below.
One of the easiest JavaScript libraries for encryption that I usually adopt is CryptoJS: quick setup and good support for most algorithms.

But I got a headache trying to make it talk with Salesforce; this was due to my relatively limited training on encryption topics, but also to the specific way Salesforce handles encryption.

I was surprised that no one had ever had the same need before.

I’m not going to explain how I came up with this solution (one of the reasons is that I already forgot it… as I always say, my brain is a cool CPU but with a low amount of storage), but I’ll just give you the way I solved encrypted data exchange between a JavaScript script (whether it is client- or server-side) and Salesforce.
In Apex encrypting and decrypting a string is quite easy:
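A sketch of the managed-IV variant, which is the one this post deals with:

Blob key = Crypto.generateAesKey(256);
Blob data = Blob.valueOf('my secret message');
// Salesforce generates a random 16-byte IV and prepends it to the cipher text
Blob encrypted = Crypto.encryptWithManagedIV('AES256', key, data);
Blob decrypted = Crypto.decryptWithManagedIV('AES256', key, encrypted);
String result = decrypted.toString();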
These are all industry standard Advanced Encryption Standard (AES) algorithms with different size keys. They use cipher block chaining (CBC) and PKCS5 padding.
– Salesforce Help
PKCS5 padding is a subset of the more general PKCS7, which is supported by CryptoJS, so it still works.
The only thing that is not clearly stated here (at least for my low-storage brain) is that this method uses an Initialization Vector (IV, which is used together with the private key to generate the proper encryption iterations) with a fixed length of 16 bytes.
Also, the IV is included within the encrypted string: this is the key point.
To encrypt and decrypt using this method, CryptoJS must be aware of the first 16 bytes of the IV, appending it to the encrypted string (if we are encrypting from JS to Salesforce) or extracting it from the encrypted string (if we are decrypting in JS a string encrypted by Salesforce).
This is what I came up with after a bit of research (you have to deal with binary data when encrypting, that’s why we use Base64 to exchange keys and encrypted strings).
//from https://gist.github.com/darmie/e39373ee0a0f62715f3d2381bc1f0974
var base64ToArrayBuffer = function(base64) {
var binary_string = atob(base64);
var len = binary_string.length;
var bytes = new Uint8Array( len );
for (var i = 0; i < len; i++) {
bytes[i] = binary_string.charCodeAt(i);
}
return bytes.buffer;
};
//from https://gist.github.com/72lions/4528834
var appendBuffer = function(buffer1, buffer2) {
var tmp = new Uint8Array(buffer1.byteLength + buffer2.byteLength);
tmp.set(new Uint8Array(buffer1), 0);
tmp.set(new Uint8Array(buffer2), buffer1.byteLength);
return tmp.buffer;
};
//from https://stackoverflow.com/questions/9267899/arraybuffer-to-base64-encoded-string
var arrayBufferToBase64 = function( arrayBuffer ) {
return btoa(
new Uint8Array(arrayBuffer)
.reduce(function(data, byte){
return data + String.fromCharCode(byte)
},
'')
);
};
//Encrypts the message with the given secret (Base64 encoded)
var encryptForSalesforce = function(msg, base64Secret){
var iv = CryptoJS.lib.WordArray.random(16);
var aes_options = {
mode: CryptoJS.mode.CBC,
padding: CryptoJS.pad.Pkcs7,
iv: iv
};
var encryptionObj = CryptoJS.AES.encrypt(
msg,
CryptoJS.enc.Base64.parse(base64Secret),
aes_options);
//create a unique base64 string with "IV+EncryptedString"
var encryptedBuffer = base64ToArrayBuffer(encryptionObj.toString());
var ivBuffer = base64ToArrayBuffer((encryptionObj.iv.toString(CryptoJS.enc.Base64)));
var finalBuffer = appendBuffer(ivBuffer, encryptedBuffer);
return arrayBufferToBase64(finalBuffer);
};
//Decrypts the string with the given secret (both params are Base64 encoded)
var decryptFromSalesforce = function(encryptedBase64, base64Secret){
//gets the IV from the encrypted string
var arrayBuffer = base64ToArrayBuffer(encryptedBase64);
var iv = CryptoJS.enc.Base64.parse(arrayBufferToBase64(arrayBuffer.slice(0,16)));
var encryptedStr = arrayBufferToBase64(arrayBuffer.slice(16, arrayBuffer.byteLength));
var aes_options = {
iv: iv,
mode: CryptoJS.mode.CBC
};
var decryptObj = CryptoJS.AES.decrypt(
encryptedStr,
CryptoJS.enc.Base64.parse(base64Secret),
aes_options
);
return decryptObj.toString(CryptoJS.enc.Utf8);
};
By sharing the Base64 of the Salesforce-generated secret (created with the method Crypto.generateAesKey(256)) between your JS client and Salesforce, you can store and exchange encrypted data in the blink of an eye.
Let’s talk about a great new addition of the Spring’19 platform release to the Salesforce Dev world, the Lightning Web Components framework, with our guest blogger Priscilla Sharon, Salesforce Business Solution Executive for DemandBlue.
DemandBlue is in the business of helping its customers maximize their Salesforce investment through predictable outcomes. As we thrive in an era of cloud-based Infrastructure, Platform and Software services, DemandBlue has pioneered “Service-as-a-Service” through a value-based On Demand Service model that drives bottom-line results. They foster innovation through “Continuous Engagement and On Demand Execution” that offers their customers Speed, Value and Success to achieve their current and future business objectives.
Salesforce DX Setup – Since inception, one of Salesforce’s core philosophies and the Big Idea has been to make building easy. Software should not be complex to install, set up, or customize. In fact, you shouldn’t have to even install software – it should be available to you at the click of a button. This declarative approach of Salesforce brought an end to complex, traditional methods of software development, so that even non-tech executives, including business analysts and managers, could slickly build line-of-business applications in a few clicks. However, while Salesforce was democratizing application development through the clicks-not-code approach and ushering in the era of the citizen programmer, there were other players who were strengthening their appeal to the traditional developer. With nuanced business requirements, modeling complex domains requires more flexibility than clicks-not-code affords. Traditional methods of development weren’t dead after all.
As a result, Salesforce’s marketing and development efforts wanted to cater to the traditional developer with the introduction of Salesforce DX, a revolutionary product in the Salesforce App Cloud that allows users to develop and manage Salesforce apps throughout the entire platform in a more direct and efficient way. Used primarily by developers, Salesforce DX setup enables users to have true version control that allows them to have a better control over collaboration, auditing, disaster control and more.
Take a deeper dive into the comprehensive blog that gives you in-depth insights on how you can enable a Salesforce DX environment and truly maximize its unique benefits.
Your 12 Step Salesforce DX Setup Guide
1. Set up your project
Salesforce DX introduces a new project structure for your org’s metadata (code and configuration), your org templates, your sample data, and all your team’s tests. Store these items in a version control system (VCS) to bring consistency to your team’s development processes. Retrieve the contents of your team’s repository when you’re ready to develop a new feature.
2. Salesforce DX Setup – Authorize the Developer Hub org for the project
During Salesforce DX setup, the Dev Hub org enables you to create, delete, and manage your Salesforce scratch orgs. After you set up your project on your local machine, you authorize with the Dev Hub org before you create a scratch org.
For this, you need to log in to your Dev/Sandbox org from the CLI. Run the force:auth:web:login CLI command in a directory where the code to deploy with sfdx will be available.
NOTE: Login must be a valid login to your Dev/Sandbox Org and with Admin permissions.
3. Configure your local project
The project configuration file sfdx-project.json indicates that the directory is a Salesforce DX project. The configuration file contains project information and facilitates the authentication of scratch orgs and the creation of second-generation packages. It also tells the Salesforce CLI where to put files when syncing between the project and scratch org.
4. Create and open a scratch org
After you create the scratch org definition file, you can easily spin up a scratch org and open it directly from the command line.
a) Create the scratch org
Create a scratch org for development using a scratch org definition file. The scratch org definition defines the org edition, features, org preferences, and some other options.
Specify scratch org definition values on the command line using key=value pairs
Create a scratch org with an alias
Create a scratch org for user acceptance testing or to test installations of packages
Indicate that this scratch org is the default
Specify the scratch org’s duration, which indicates when the scratch org expires (in days)
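Combining the options above, a creation command could look like this (the alias, duration and definition file path are illustrative):
sfdx force:org:create -f config/project-scratch-def.json -a MyScratchOrg -s -d 7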
b) Open the org
To open the scratch org: sfdx force:org:open -u <username/alias>
To open the scratch org in Lightning Experience or open a Visualforce page, use the --path parameter: sfdx force:org:open --path lightning
c) Set default user
Copy the username and enter the following command to set the defaultusername:
sfdx force:config:set defaultusername={SET THIS TO NEW SCRATCH ORG’S USERNAME FROM THE ABOVE COMMAND}
d) Display All Orgs
Run the following command to confirm the default Dev Hub [marked with (D)] and Active Scratch Org [marked with (U)]:
sfdx force:org:list --all
5. Push the source from your project to the scratch org
To push changed source to your default scratch org:
sfdx force:source:push
To push changed source to a scratch org that’s not the default, you can indicate it by its username or alias:
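sfdx force:source:push -u <username/alias>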
Selecting Files to Ignore During Push. It’s likely that you have some files that you don’t want to sync between the project and scratch org. You can have the push command ignore the files you indicate in .forceignore.
If Push Detects Warnings. If conflicts have been detected and you want to override them, here’s how you use the power of the force (overwrite) to push the source to a scratch org.
sfdx force:source:push --forceoverwrite
6. Salesforce DX Setup – Develop the app
a. Create Source Files from the CLI
To add source files from the Salesforce CLI, make sure that you are working in an appropriate directory.
7. Pull the source to keep your project and scratch org in sync
After you do an initial push, Salesforce DX tracks the changes between your local file system and your scratch org. If you change your scratch org, you usually want to pull those changes to your local project to keep both in sync.

During development, you change files locally in your file system and change the scratch org using the builders and editors that Salesforce supplies. Usually, these changes don’t cause a conflict and involve unique files.

By default, only changed source is synced back to your project.
To pull changed source from the scratch org to the project:
sfdx force:source:pull
To pull source to the project if a conflict has been detected (read more):
sfdx force:source:pull --forceoverwrite
8. Salesforce DX Setup – Run tests
When you’re ready to test changes to your Salesforce app source code, you can run Apex tests from the Salesforce DX CLI. Apex tests are run in your scratch org.
You can also execute the CLI command for running Apex tests (force:apex:test:run) from within third-party continuous integration tools, such as Jenkins.
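For example (the result format and wait time are illustrative): sfdx force:apex:test:run --resultformat human --wait 10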
9. Export The Package.xml
Export the package.xml file into a temporary directory. Type the commands below in the root folder of your Salesforce DX project:
11. Track Changes Between the Project and Scratch Org
To view the status of local or remote files:
sfdx force:source:status
12. Salesforce DX Setup – Sync up
Sync the local version with the version deployed to Scratch Org for every change and test the changes on the Scratch Org by repeating the above steps. Once the testing is completed, we need to convert the source from Salesforce DX format to the Metadata API format. This is done by running the following command:
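A typical conversion command (the output directory name is illustrative): sfdx force:source:convert -d mdapi_output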
Copy the modified metadata files from this output location to the actual source location where the metadata files were downloaded from the Dev/Sandbox org, in order to deploy the files to the server.
For this new Back To Basics post, welcome Akashdeep Arora, Salesforce Evangelist/Consultant at HyTechPro. He started his Salesforce journey in 2015. 3X Salesforce Certified Professional, 4X Trailhead Ranger, 5X Trailhead Academy Certified. Founder of the #BeASalesforceChamp campaign.
Well, Astro turned 5 recently. So, what’s better than writing something related to Astro? As we all know, when you need a guide, Astro’s there for you.
#Astroturns5 #AppyBirthday #BeASalesforceChamp
Although it sounds easy, many Developers/Admins get stuck when they want to make a field required based on one value selected from a picklist field. Now, you must be thinking about the way to achieve it.
We have different ways to make a field required:
Required Checkbox while field creation
Page Layout
Validation Rule
Using custom code (Visualforce Page, Lightning component, Apex Trigger to say a few)
But our scenario is a little bit different, as we want to make the field required based on criteria, i.e. the selected picklist value must be Astro.
Yay, let’s begin the fun without any delay.
The easiest way to achieve it is to use a validation rule. We have two fields:
Salesforce Character (a picklist field with values Appy, Astro, Codey, Cloudy and Einstein)
Astro Mom (a text field).
Here, we go.
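A validation rule along these lines does the job (the API names Salesforce_Character__c and Astro_Mom__c are assumptions based on the field labels above):

AND(
    ISPICKVAL(Salesforce_Character__c, "Astro"),
    ISBLANK(Astro_Mom__c)
)

with “Please enter Astro Mom” as the error message.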
After saving the rule, it will look like below:
Well, it’s time for testing. Testing is very necessary for anything. (Wink)
Let’s create a record without giving a value in the Astro Mom text field, and select “Astro” from the Salesforce Character picklist field, like below:
As soon as you click on the Save button, it will give you an error “Please enter Astro Mom“.
Wohoooo, it seems our validation rule is perfect. Now, let’s provide the name of Astro Mom in the text field and click on the Save button.
Hurrayyy, the record is saved this time. This is how you can make any field required based on the selection of a picklist field value.