When Salesforce is life!


10 signs you’re an amazing Salesforce Developer

I recently joined other Salesforce influencers in contributing to Mason Frank's 'Ask The Experts' series, where I wrote about my ten best tips to become an amazing Salesforce Developer. Here's a quick summary below, with a link to the full article. I hope you enjoy it!

10 signs you’re an amazing Salesforce Developer

“Am I the best Salesforce Developer I can be?”

This is a question all Salesforce Developers should be asking themselves. If you said “Yes”, well… you don’t need to read this post as you may be in the “Olympus” of coders.

If your answer is “No”, welcome my friend, keep reading this post. I have some tips for you, based on my experiences, that may lead you to the right trail.

I’ve always felt like I’ve never achieved anything to the top level, and I guess this drove me to overcome my limits and achieve a lot in my personal and professional life.

If you are in the circle of developers who believe they can improve their skills day after day, you are using a mental process that I call "Continuous Self-Improvement" (CSI, isn't it cool? I guess I've not invented anything, but I love giving names to stuff). I even call it the "Jon Snow syndrome", because your student mentality means you're a coder who feels like they "know nothing".

Keep reading on the Mason Frank blog…

[Salesforce] Handle encryption and decryption with Apex Crypto class and CryptoJS

One of the easiest JavaScript encryption libraries, and the one I usually adopt, is CryptoJS: quick to set up and with good support for most algorithms.

But I got a headache trying to make it talk to Salesforce, partly because of my relatively limited training on encryption topics, but also because of the specific way Salesforce handles encryption.

I was surprised that no one seemed to have had the same need before.

I'm not going to explain how I came up with this solution (one of the reasons is that I've already forgotten it… as I always say, my brain is a cool CPU but with a low amount of storage), but I'll just show you how I solved encrypted data exchange between a JavaScript script (whether client or server side) and Salesforce.

In Apex encrypting and decrypting a string is quite easy:

//encrypt
String algorithmName = 'AES256';
Blob privateKey = Crypto.generateAesKey(256);
Blob clearText = Blob.valueOf('Encrypt this!');
Blob encr = Crypto.encryptWithManagedIV(algorithmName, privateKey, clearText);
system.debug('## ' + EncodingUtil.base64encode(encr));
//decrypt
Blob decr = Crypto.decryptWithManagedIV(algorithmName, privateKey, encr );
System.debug('## ' + decr.toString());

This could be an example of the output:

## Lg0eJXbDvxNfLcFMwJm6CkFtxy4pWgkmanTvKLcTttQ=
## Encrypt this!

For encryption noobs out there, the encrypted string changes every time you run the script, because a new random initialization vector is generated on each run.

The string is first encrypted with the AES256 algorithm and then decrypted using the same secret key (generated automatically by Salesforce).

Everything is done through the Crypto class's methods:
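For reference, these are roughly the signatures of the Crypto methods used above, in a minimal anonymous-Apex sketch:

//the Crypto class lives in the System namespace
Blob key = Crypto.generateAesKey(256);                                     //128, 192 or 256 bit AES key
Blob enc = Crypto.encryptWithManagedIV('AES256', key, Blob.valueOf('x'));  //a random IV is generated and prepended to the result
Blob dec = Crypto.decryptWithManagedIV('AES256', key, enc);                //expects the IV in the first 16 bytes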


Valid values for algorithmName are:

– AES128
– AES192
– AES256

These are all industry standard Advanced Encryption Standard (AES) algorithms with different size keys. They use cipher block chaining (CBC) and PKCS5 padding.

Salesforce HELP

PKCS5 padding is a subset of the more general PKCS7 padding, which is supported by CryptoJS, so it still works.

The only thing that is not clearly stated here (at least for my low-storage brain) is that this method uses an Initialization Vector (IV, used together with the private key to drive the encryption) with a fixed length of 16 bytes.

Also, the IV is prepended to the encrypted payload: this is the key point.

To talk to this method, CryptoJS must prepend the 16-byte IV to the cipher text (if we are encrypting from JS to Salesforce) or extract it from the first 16 bytes (if we are decrypting in JS a string encrypted by Salesforce).

This is what I came up with after a bit of research (you have to deal with binary data when encrypting, that’s why we use Base64 to exchange keys and encrypted strings).

//from https://gist.github.com/darmie/e39373ee0a0f62715f3d2381bc1f0974
var base64ToArrayBuffer = function(base64) {
    var binary_string =  atob(base64);
    var len = binary_string.length;
    var bytes = new Uint8Array( len );
    for (var i = 0; i < len; i++)        {
        bytes[i] = binary_string.charCodeAt(i);
    }
    return bytes.buffer;
};
//from https://gist.github.com/72lions/4528834
var appendBuffer = function(buffer1, buffer2) {
    var tmp = new Uint8Array(buffer1.byteLength + buffer2.byteLength);
    tmp.set(new Uint8Array(buffer1), 0);
    tmp.set(new Uint8Array(buffer2), buffer1.byteLength);
    return tmp.buffer;
};
//from https://stackoverflow.com/questions/9267899/arraybuffer-to-base64-encoded-string
var arrayBufferToBase64 = function( arrayBuffer ) {
    return btoa(
        new Uint8Array(arrayBuffer)
            .reduce(function(data, byte){
                 return data + String.fromCharCode(byte)
            }, 
        '')
    );
};
//Encrypts the message with the given secret (Base64 encoded)
var encryptForSalesforce = function(msg, base64Secret){
    var iv = CryptoJS.lib.WordArray.random(16);
    var aes_options = { 
        mode: CryptoJS.mode.CBC,
        padding: CryptoJS.pad.Pkcs7,
        iv: iv
    };
    var encryptionObj  = CryptoJS.AES.encrypt(
        msg,
        CryptoJS.enc.Base64.parse(base64Secret),
        aes_options);
    //created a unique base64 string with  "IV+EncryptedString"
    var encryptedBuffer = base64ToArrayBuffer(encryptionObj.toString());
    var ivBuffer = base64ToArrayBuffer((encryptionObj.iv.toString(CryptoJS.enc.Base64)));
    var finalBuffer = appendBuffer(ivBuffer, encryptedBuffer);
    return arrayBufferToBase64(finalBuffer);
};
//Decrypts the string with the given secret (both params are Base64 encoded)
var decryptFromSalesforce = function(encryptedBase64, base64Secret){
    //gets the IV from the encrypted string
    var arrayBuffer = base64ToArrayBuffer(encryptedBase64);
    var iv = CryptoJS.enc.Base64.parse(arrayBufferToBase64(arrayBuffer.slice(0,16)));
    var encryptedStr = arrayBufferToBase64(arrayBuffer.slice(16, arrayBuffer.byteLength));

    var aes_options = { 
        iv: iv,
        mode: CryptoJS.mode.CBC
    };

    var decryptObj  = CryptoJS.AES.decrypt(
        encryptedStr,
        CryptoJS.enc.Base64.parse(base64Secret),
        aes_options
    );

    return decryptObj.toString(CryptoJS.enc.Utf8);
};

By sharing the Base64 of the Salesforce-generated secret (created with Crypto.generateAesKey(256)) between your JS client and Salesforce, you can store and exchange encrypted data in the blink of an eye.
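On the Apex side, nothing special is needed to read what the JS helper produced: since encryptForSalesforce() prepends the IV, the resulting Base64 string can be decoded and fed straight into Crypto.decryptWithManagedIV. Here is a minimal sketch (the two Base64 variables are placeholders for the shared key and the payload received from JS):

//decrypt in Apex a string produced by the JS encryptForSalesforce() helper
String base64Secret = '...';    //shared key: EncodingUtil.base64Encode(Crypto.generateAesKey(256))
String base64FromJs = '...';    //"IV + cipher text", Base64 encoded, as returned by encryptForSalesforce()
Blob privateKey = EncodingUtil.base64Decode(base64Secret);
Blob ivAndCipherText = EncodingUtil.base64Decode(base64FromJs);
//decryptWithManagedIV expects the first 16 bytes of the Blob to be the IV,
//which is exactly the layout the JS helper builds
Blob clearText = Crypto.decryptWithManagedIV('AES256', privateKey, ivAndCipherText);
System.debug(clearText.toString());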

[ORGanizer] Giraffe release is live: a few steps closer to release 1.0!

More than 3 months after the last Reindeer Release, say hello to the ORGanizer for Salesforce Giraffe Release (0.6.8.4).

Why a Giraffe, you ask?

Just like a giraffe points its head up to the sky, the Giraffe Release points toward release 1.0, when we'll finally go out of beta, closing an almost 3-year path that started with the first 0.1 release in September 2016.

I've worked a lot on stability and bug fixing in these months, reviewing dozens of issues and suggestions provided by my beloved ORGanusers, who support my day-by-day work.

A brand new sponsor

It's also a pleasure to introduce our new sponsor for the coming months, NativeVideo, starting from the current release!

Founded in London in 2018, NativeVideo is on a mission to bring businesses and people closer together with the power of Video.

NativeVideo is the platform that, once installed from the AppExchange, enables video recording and browsing as a native functionality inside Salesforce.

The company has already released two "extension packages" that customise the solution for two specific use cases:

  • LeadGenVideo: demand generation and deal nurturing through video messages, with both classic webcam recording and screen recording
  • TalentVideo: designed for companies that use Salesforce for recruitment, adding video interviews to the process, with well-designed workflow and collaboration features.

NativeVideo customers have also customised the platform and the use of video for other use cases, like Service (screen recordings sent by the service representative to answer questions and solve bugs), CPQ (a walkthrough screen recording explaining the offer when it is sent to the customer), customer feedback and testimonials (inviting customers to answer a few questions on video about the service and results they are receiving), and many more.

Jump to the NativeVideo landing page to say hello and thank them for helping the ORGanizer keep the hard work going!

What’s new with the Giraffe?

First we have new consolidated limits for logins storage:

As we approach release 1.0, the number of logins that can be stored with the free edition of the ORGanizer will gradually decrease. Only the number of logins will be limited in the free edition; all the other features will always remain free.

The Pro version can be purchased from the Chrome Web Store, and now also activated using Promo Codes (only available on the Chrome version as of now):

A promo code is strictly tied to the user's email address, has an expiration date, and conveys the same enhanced limits as the Pro version in-app purchase.

Why a promo code?

To allow companies to mass purchase ORGanizer licenses or for promotions or free trials.

New permissions required

The following permissions are now required:

  • Know your email address: needed to get your email address for Promo Code verification (your email address is never sent to anyone but only used to validate your codes, if any)
  • Read and change data on a number of websites:
    • force.com, salesforce.com, visualforce.com, documentforce.com, salesforce-communities.com: main Salesforce domains
    • organizer-api.enree.co: Promo Code verification endpoint. This endpoint is called only after Promo code validation (if any)

And many more enhancements and bug fixes

Read the change log for the whole list of what’s inside this new release, and see you in the next release!


[Salesforce / Apex] Handling constants on classes

A few days ago I was thinking about optimizing the use of constants (usually of String type) inside projects, to avoid the proliferation of public static final String declarations across various classes (with limited control over duplicates) while at the same time giving developers a way to increase the readability of constants in their code.

The reason for this post is that I want to know your opinion on this strategy, which to my eyes appears elegant and clear but may bring some drawbacks. The class is basically shaped as follows:

public class Constants{ 
	private static ObjectName_Constants objectNameConstants; 

	public static ObjectName_Constants ObjectName{  
		get { 
			if(objectNameConstants == null){ 
				objectNameConstants = new ObjectName_Constants(); 
			} 
			return objectNameConstants; 
		} 
	} 

	public class ObjectName_Constants{ 
		public String CustomField_AValue  { get { return 'aValue'; } } 
		public String RecordType_ADevName  { get { return 'aDevName'; } } 
	} 
} 

This brings a cool-looking usage:

String myDevName = Constants.ObjectName.RecordType_ADevName;
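For contrast, this is the kind of scattered declaration the pattern is meant to replace (a purely illustrative snippet; the class and constant names are made up):

//constants spread across service classes, duplicated and all upper case
public class OpportunityService {
    public static final String RECORDTYPE_A_DEV_NAME = 'aDevName';
    public static final String CUSTOMFIELD_A_VALUE = 'aValue';
}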

This way we have the following pros:

  • Clear hierarchy for constants
  • More readable constant names (they are all getters but are used as constants, so no need for upper case)
  • Heap space is allocated for constants only if they are actually used
  • Centralized place for common constants

And these are the cons:

  • More Apex code needed to declare each constant

I'm curious to get some feedback.

Small Business Solutions for Protecting Against Cybercrime

This article has been put together by Lindsey Weiss, who shares some suggestions for keeping an eye on security.

Lindsey enjoys marketing and promoting one’s brand. She believes that to move your market, you must know your market. She loves writing articles on helping people build buzz around their brand and boosting their online presence.


For small business owners, fraud and data breaches are a nightmare. Not only can those issues bring work to a standstill, but they can also mean lost consumer confidence and even the closure of a business. It's crucial to guard against threats, and if you should fall victim to one, expediting your response is the best chance for a sound recovery.

Are You in Their Bullseye?

Big businesses often make the news when they become victims of cybercrime. However, it’s important for small business owners to recognize their own vulnerability. Gone are the days when it was safe to fly under the radar of cyber scoundrels; in fact, they are catching the eyes of criminals more than ever. Some statistics indicate small businesses are being attacked more each year, with average losses ranging from $84,000 to $148,000. Most of those companies go under within six months of being attacked, and according to studies cited by IBM, for each stolen record, you can expect a loss of nearly $150. 

Take a Careful Inventory

When it comes to evaluating your company's vulnerability, the easiest place to start is with a careful look at your hardware and software. Making solid choices means you have a wall of defense in every direction. Start with a thorough evaluation using a checklist. Data should be backed up to a remote location routinely, and all computers and devices should have antivirus software installed. If you aren't using a firewall, that is another must-have.

Examine Your Equipment

Research whether the electronics you’re using are known for being secure, and if not, invest in better equipment. For instance, shimming is an unfortunate but growing trend that threatens many small businesses. Data protection ultimately protects your customer base since a breach means lost confidence on the part of consumers. Consider investing in a more secure payment system with features such as safeguards against fraud and real-time data security. 

Where Is Your Data?

If you haven’t already done so, now is a perfect time to start using the cloud. It protects your data by saving it offsite while also freeing up some of your overhead, thereby reducing the amount of time and money your company has to spend updating software and saving files to external drives. It also means your business can operate more freely. Instead of being tethered to the office, you and your staff can do more work on the fly. Better flexibility can mean increased productivity and a better bottom line. Think through what your particulars are, such as how many devices your business requires and how much storage you need, and check reviews to find the right cloud storage option for your situation. 

Add Encryption

If your company handles sensitive data, encryption is another must-have in your line of defense. Basically, encryption uses a cipher to turn your clean data into gobbledygook, keeping would-be criminals at bay. As Business News Daily points out, the law requires encryption if you handle sensitive data such as health records, credit card numbers, or Social Security numbers, but even if you don't handle that kind of information, it's a worthwhile layer of protection against cybercrime. In fact, some operating systems have built-in encryption options, and there are plenty of encryption software packages available.

Other Negative Influences

Once you shore up your hardware and software defenses, it’s time to examine the human element. As part of the equation where you have the least control, staying abreast of the people handling your data can be especially challenging for small business owners. Disgruntled or dishonest employees can worm their way into your confidence and your systems, leaving you vulnerable to fraud. With that in mind, make sure you’re hiring people based on their talents and integrity, and mesh your quality staff with top-notch bookkeeping software so you can keep your finger on the pulse of your accounts. 

A strong defense is your key to protecting your business against fraud and data breaches, so ensure your systems are well-protected with carefully thought out choices. When a cybercriminal has your company in his sights, you’ll be ready. 

[ORGanizer] Reindeer release: cool new features and a special gift from The Welkin Suite

The ORGanizer for Salesforce Reindeer Release is finally live!

Thanks to the guys of The Welkin Suite we have a special gift for all ORGanusers: an amazing 40% discount if you start a subscription from within the TWS ORGanizer’s banner! 

To discover how to get it, follow this post!


The Welkin Suite 40% discount

Before discovering all the new features of the Reindeer Release v0.6.8.0 let’s see how to get the 40% discount for The Welkin Suite subscription.

Make sure you have the latest ORGanizer for Salesforce version (right click on the ORGanizer icon > Manage Extensions):

Now click on the ORGanizer for Salesforce icon to show the Popup page and identify The Welkin Suite banner:

Click it and the promo code will be automatically added to your basket on the Welkin Suite site!

If you have purchased ORGanizer PRO, drop me a line using the support form, I’ll send you a dedicated promo code.

Reindeer Release new features

New PRO license limits

If you purchased a PRO license (available only on Chrome version) you’ll now get increased storage limits:

That is, you can store up to 2000 logins (synced or not) and up to 1000 synced logins; the Beta limits are 200/150, but the free tier limit will get lower once the ORGanizer gets out of Beta!

Profiles Chamber

Want to massively change profiles? With the Profiles Chamber plugin you can!

In this release the plugin only supports mass updates of Login Hours, with different options and the chance to create templates for a given Org.

You can choose multiple profiles and days to update and then apply changes (by deploying in your ORG): only the Login Hours data is actually pushed.

For detailed info about the Profiles Chamber follow this link.

Replace API names refactoring

The Replace API Names plugin has been completely rewritten to give consistency between Classic and LEX:

The API name is now shown directly next to the real label: a few bugs have also been fixed (e.g. misbehavior with fields with the same label).

[Salesforce / Interview Tips] Preparing for a job interview as a Salesforce Administrator

 
Becoming a Salesforce Administrator is often the entry route into the world’s number one CRM technology, but this doesn’t make the job interview process any easier for prospective admins.

As the most prominent role in Salesforce, the competition for a job as a Salesforce Administrator is particularly high. In Mason Frank’s 2018/19 independent Salesforce salary survey, 70% of respondents reported being a Certified Salesforce Administrator, far higher than any other certification. With other candidates waiting in the wings, you need to be sure your interview goes perfectly to guarantee the job offer, and that comes down to preparation.

A job interview for a Salesforce Administrator role can take many forms, and so you’ll need to be prepared for several different lines of questioning. Your interviewer won’t just be interested in your technical experience as an administrator, they’ll also want to know how you see CRM as part of a larger business, and use this to test how much you’ve researched their organisation. In addition, they’ll also want to get to know you as a person.

Read on for a series of tips on how to prepare for your next job interview as a Salesforce Administrator.

Testing your technical knowledge

Ultimately your prospective employer will want to learn how skilled you are on the Salesforce platform, and so you should expect to be asked technical interview questions. A Salesforce Administrator is quite a varied role, and so the technical questions you may be asked can be quite broad. You could be asked something very functional such as 'what is a roll-up summary field', or perhaps something a little more scenario-based, such as 'how do you share a record and in what circumstances would that be expected?'

One thing that you need to be aware of going into the job interview is that your interviewer may have no experience using Salesforce, or alternatively they may be a Certified Technical Architect.

With this in mind, it’s not enough to simply have a good working knowledge of Salesforce, you need to be prepared to explain technical concepts in plain language so that a non-expert will understand you. Having technical knowledge is one thing, but being able to communicate your knowledge to a layman is another thing entirely, so practice this before the interview.

Testing your experience

While knowledge is valuable, application is power.

Salesforce Trailhead is a fantastic education portal and is responsible for launching the career of thousands of Salesforce professionals, but it won’t provide you with that all-important practical experience that employers are looking for. This is why experience is incredibly valuable, and so you should be prepared to discuss the projects you’ve worked on and what you learned from them.

If you’ve worked as part of an implementation team, be ready to discuss the technical elements as well as the challenges you faced and how you overcame them. If you’ve ever experienced data loss or a data breach, be ready to discuss how you discovered the event and how it was resolved. Don’t be afraid to discuss challenges and mistakes made—this is what experience is all about, and will set you apart from the other candidates.

Something else that employers value highly is your ability to work on a collaborative project. As an Admin it’s unlikely that you will be working completely independently, so be prepared to talk about your communication skills, requirements gathering, and ability to work within the confines of a project timeline, using examples from your previous experience.

Testing your cultural fit

It’s essential you have the skills and experience to perform the job you’re being interviewed for, but your prospective employer will also want to get an idea of who you are as a person. After all, they’ll likely be spending around 40 hours a week in your presence, so it’s important they employ someone who they’ll enjoy working with—you should also be confident that you’ll enjoy working with them as well!

Given that your technical knowledge and experience come with the territory of being a Salesforce professional, getting your personal character across can often be the most nerve-racking element of a job interview, but this shouldn’t be the case. Just be yourself and communicate your goals and ambitions clearly.

It’s always a good idea to think about why you entered Salesforce technology and where you eventually want your career to take you, as long as you can relate this to why you want the job you’re interviewing for and how this will help you achieve your goals.

Being a successful Salesforce Administrator is about more than just doing the job, it’s about finding ways to maximise the value of Salesforce in an organisation, and making yourself indispensable as a result.

[Salesforce / Interview Tips] Preparing for a job interview as a Salesforce Developer

 
Salesforce Developers possess a strong working knowledge of the platform and can add serious value to a business. But given the competitive talent market, you must be prepared to set yourself apart from your peers to really stand out during a job interview.

While job interviews can take many weird and wonderful formats, you should be prepared for two different types of questions during the interview. One will test your technical competency for the role, and the other will measure your experience and knowledge of CRM development in a commercial environment.

Technical questions during the interview

Firstly, the interviewer will want to get a grasp of your technical understanding of the Salesforce platform, so be prepared to answer questions around the architecture or processes involved in Salesforce development.

This can be slightly daunting, as you’re essentially being tested to see whether you really have the skills that are listed on your resume, but there’s no reason to be intimidated. If you’ve worked as a Salesforce Developer in the past, particularly if you’re certified, none of these questions should be outside your realm of understanding.

If you do struggle with a question, there’s nothing wrong with admitting you’ve never worked with that particular tool or concept before—the interviewer will appreciate your honesty, as some people would try to bluff it in this situation (and make themselves look silly in the process). You could even ask them about it, which would show you’re always looking to learn.

If you’re nervous about the kind of questions you may be asked during the interview, we have a resource that details technical interview questions for Salesforce Developers, based on our experience as a specialist Salesforce recruiter. It’s unlikely the interviewer will ask you something incredibly technical, but it’s nice to be prepared just in case.

Experience-driven questions during the interview

As well as what you know, the interviewer will also want to find out what you’ve done so far in your career, and how your experience makes you the perfect candidate for their job. How did you become a Salesforce Developer? What kind of projects have you worked on in the past? What has been the biggest challenge in your career so far and how did you overcome it?

The best way to prepare for this line of questioning is to revisit your portfolio and map out your entire learning journey. A small project you worked on three years ago could be incredibly useful for the task at hand, and so you should be prepared to recall what you did and why.

This is especially useful if you can map it against your education journey—how much did you know at that stage of your career and what would you do now that’s different? Base this around what you’ve learned, the training you’ve undertaken, and the certifications you’ve gained since then.

There’s also merit in talking about mistakes that have been made on projects in the past and how they impacted development. The fact you’ve identified these mistakes and now know better is a testament to the knowledge and experience of your role. Remember that experience is not inherently nominal—a developer with three years of experience who has worked on complex projects will be more valuable than a developer with five years of experience who hasn’t.

Five quick tips for interview preparation

To interview successfully, it isn’t all about having an answer for whatever question is thrown at you. There’s also an onus on you to do your research and find out exactly what will help you stand out in the context of the position you’re applying for. Consider the following:

  • Find out which Salesforce product/edition/instance the company is using — if you don’t know which product you’ll be working with, how can you convince the interviewer that you’re experienced enough to develop on it?
  • Find out what format your interview will take — some interviews are relatively informal chats, whereas some involve practical exercises such as development tasks or challenges. Clarify this before the interview to avoid being blindsided.
  • Focus on how taking this job would benefit both parties — as a Salesforce professional, you’re always looking to improve your career standing. If you can identify what it is about this particular company/role that will help you achieve your long-term career goals, telling this to the interviewer will showcase your ambition and drive.
  • Avoid using overly technical language — in some cases, your interviewer won’t actually have a strong knowledge of the Salesforce platform, and so speaking in technical terms won’t demonstrate your point the way you’d like to. Without being patronising, be prepared to communicate complicated concepts in simple language. This will also demonstrate your comprehensive understanding of the Salesforce platform.
  • Identify the company’s revenue streams and demonstrate how you can optimise them — while a company will value a lot of things, the bottom line is turnover. If you can demonstrate, based on experience, the value you can add to the business and the potential return on investment in you, the interviewer will start to see hiring you as an essential financial decision—you’ve become indispensable before you even sign the contract!

Preparation is key, but nobody is immune to a bad interview—sometimes you and the company simply won't be a good fit, and this is fine. Whether successful or not, it will be valuable experience that you can take forward in your Salesforce journey, so don't be discouraged if things don't go to plan. Just remember to approach it with a level head and confidence in your ability. You have skills that this company needs, otherwise they wouldn't have invited you in to interview in the first place!

[Salesforce] Dealing with the running User on Einstein Bot dialogs

 
As part of the Salesforce Solutions Team at WebResults, I spend some time training myself on new products and trying to build POCs (proofs of concept) to add value to the training and spread the knowledge inside my company.

I was recently playing with Einstein Bots and got stuck when I needed to identify the running user while dealing with queries to get actual user info from the CRM.

I thought it was enough to call the UserInfo.getUserId() method to find out which user was calling the bot, but unfortunately the running user of an Apex InvocableMethod is the Automated Process user, which you can monitor from Setup > Debug Logs > User Trace Flags [New] as shown below:

I believe that Salesforce will add this feature in a future release, but in the meantime I had to find a workaround.

The Workaround

I started a thread on Stackoverflow, where I got some ideas but no solution at all.

After a while I came up with a dirty solution that allowed me to know the exact user logged into the community. I won't explain all the trial and error, but just the whole solution, with some code samples.

Here is the list of actions:

  1. Enable Pre-Chat
  2. Override Pre-Chat with a Lightning Component
  3. The Pre-Chat Lightning component calls its Apex controller to check whether there is a valid running user and get back all the data needed for the pre-chat
  4. The Lightning component compiles all the needed inputs for the pre-chat (e.g. name, email and username) and submits them to start the chat: if the info is found, the component hides the pre-chat fields (which would otherwise be filled in by a non-authenticated user)
  5. The submitted info is written on the Transcript object
  6. An Einstein Bot dialog examines the Transcript object using an InvocableMethod of an Apex class and determines the running user
  7. The bot can now change its behavior depending on whether the user is logged in or not

To get all of this working you need 2 custom fields:

  • A custom field on the Contact/Lead record (we’ll call it Username__c) that is used to configure the prechat on the Snap-In
  • A custom field on the LiveChatTranscript object (we’ll call it Username__c as well) used to store the info got from the Pre-Chat component

Once the fields are created, we need to set up the Pre-Chat page in the Snap-in settings related to the bot:

And set up the Pre-Chat page:

Now let’s create the Lightning component that will replace the Snap-in chat (configured in the picture above):

prechat.cmp

For the what & whys, please refer to the official documentation at this link.

The component uses lightningsnapin:prechatAPI, which is needed to trigger the chat once the info is filled in.

<aura:component
    implements="lightningsnapin:prechatUI"
    controller="Bot_PreChatCmpCnt">
     
    <aura:attribute name="userId" access="PRIVATE" type="string" default="-"/>
    <aura:attribute name="firstName" access="PRIVATE" type="string" />
    <aura:attribute name="lastName" access="PRIVATE" type="string" />
    <aura:attribute name="email" access="PRIVATE" type="string" />
 
    <!-- Contains methods for getting pre-chat fields, starting a chat, and validating fields -->
    <lightningsnapin:prechatAPI aura:id="prechatAPI"/>
     
 
    <!-- On initialization, call the controller's doInit function -->
    <aura:handler name="init" value="{!this}" action="{!c.doInit}"/>
     
    <aura:renderIf isTrue="{!empty(v.userId)}">
        <lightning:input type="text" value="{!v.firstName}" label="Name *"/>
        <lightning:input type="text" value="{!v.lastName}" label="Lastname *"/>
        <lightning:input type="text" value="{!v.email}" label="Email *"/>
         
        <lightning:button label="Start chat!" onclick="{!c.onStartButtonClick}"/>
    </aura:renderIf>
</aura:component>

The list of input fields is only shown when the v.userId attribute is blank (note that it is initialized with "-", and the Lightning controller blanks it only when no logged-in user is found).

prechatController.js

The controller calls the Apex controller to get the main user info: if they are found, the chat is started.

({
doInit: function(component, event, helper) {
        var action = component.get("c.getCurrentUser");
        action.setCallback(this, function(response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                var result = JSON.parse(response.getReturnValue());
                console.log(result, embedded_svc);
                component.set('v.userId', result.userId);
                if(result.userId){
                    component.set('v.firstName', result.firstName);
                    component.set('v.lastName', result.lastName);
                    component.set('v.email', result.email);
                    helper.startChat(component, event, helper);
                }
            }else if (state === "ERROR") {
                var errors = response.getError();
                if (errors) {
                    if (errors[0] && errors[0].message) {
                        console.log("Error message: " +
                                 errors[0].message);
                    }
                } else {
                    console.log("Unknown error");
                }
            }
        });
        $A.enqueueAction(action);
    },
    onStartButtonClick: function(component, event, helper) {
        //handling errors
        if(!component.get('v.firstName')
          || !component.get('v.lastName')
           || !component.get('v.email')) return alert('Missing fields.');
        helper.startChat(component, event, helper);
    }
});

prechatHelper.js

The startChat() method compiles the array of fields to be passed to the chat plugin, that will start the chat with the Bot.

({
     
    startChat: function(component, event, helper){
        var fields = [
            {
                label: 'FirstName',
                name: 'FirstName',
                value: component.get('v.firstName')
            } ,
            {
                label: 'LastName',
                name: 'LastName',
                value: component.get('v.lastName')
            }  ,
            {
                label: 'Email',
                name: 'Email',
                value: component.get('v.email')
            },{
                label: 'Username',
                name: 'Username__c',
                value: component.get('v.userId'),
            }
        ];
        if(component.find("prechatAPI").validateFields(fields).valid) {
            component.find("prechatAPI").startChat(fields);
        }
    }
});

Bot_PreChatCmpCnt.cls

The Apex controller simply queries the User object and considers the user logged in if the ContactId field is not null (but you can use your own conditions):

public class Bot_PreChatCmpCnt {
    @auraenabled
    public static String getCurrentUser(){
        Map<String,Object> output = new Map<String,Object>();
        User u = [Select Username, FirstName, LastName, Email, contactId From User Where Id = :UserInfo.getUserId()];
        if(u.ContactId != null){
            output.put('userId', u.UserName);
            output.put('firstName', u.FirstName);
            output.put('lastName', u.LastName);
            output.put('email', u.Email);
        }else{
            output.put('userId', '');
        }
        return JSON.serialize(output);
    }
}

That said, we need to instruct the Pre-Chat plugin to write the fields to the LiveChatTranscript object, which is the only way the Bot's dialogs can be aware of the running context. Create a new Static Resource with the snippet below and remember to set the Cache Control to Public.

embedded_svc.snippetSettingsFile.extraPrechatFormDetails = [
  { 
    "label":"Username", "transcriptFields":[ "Username__c" ],
  },{
    "label":"Cognome", "transcriptFields": ["LastName__c"]
  },{
   "label":"Nome", "transcriptFields": ["FirstName__c"]
  },{
   "label":"Email", "transcriptFields": ["Email__c"]
}];

This Static Resource must be configured in the Snap-in Component on the Community builder:

Einstein Bot Dialog Apex action

The last step before configuring the bot is to create a new Apex class that will retrieve the user's information from the current transcript:

public without sharing class Bot_GetSnapInsPreChatData {
    public class PrechatOutput{
        @InvocableVariable
        public String sFirstName;
        @InvocableVariable
        public String sLastName;
        @InvocableVariable
        public String sEmail;
        @InvocableVariable
        public String sContactID;
        @InvocableVariable
        public String sLoggedUser;
    }
    public class PrechatInput{
        @InvocableVariable
        public String sChatKey;
    }
    @InvocableMethod(label='Get SnapIns Prechat Data')
    public static List<PrechatOutput> getSnapInsPrechatData(List<PrechatInput> inputParameters)
    {
        System.debug('######## Input Parameters: '+inputParameters);
         
        String sChatKey = inputParameters[0].sChatKey;
        String sContactId = null;
        List<PrechatOutput> outputParameters = new List<PrechatOutput>();
        PrechatOutput outputParameter = new PrechatOutput();
        if (sChatKey != null && sChatKey != '')
        {
            List<LiveChatTranscript> transcripts = [SELECT Id, CaseId,
                                                    ContactId, Username__c
                                                    FROM LiveChatTranscript WHERE ChatKey = :sChatKey];
            if (transcripts.size()>0)
            {
                sContactId = transcripts[0].ContactId;
                outputParameter.sLoggedUser = transcripts[0].Username__c;
            }
        }
         
        if (sContactId != null && sContactId != '')
        {
            List<Contact> contacts = [SELECT Id, FirstName, LastName, Email, Username__c
                                      FROM Contact WHERE Id = :sContactId];
            if (contacts.size()>0)
            {
                outputParameter.sFirstName = contacts[0].FirstName;
                outputParameter.sLastName = contacts[0].LastName;
                outputParameter.sEmail = contacts[0].Email;
                outputParameter.sContactId = contacts[0].Id;
            }
        }
        outputParameters.add(outputParameter);
        return outputParameters;
    }
}

This class reads the incoming sChatKey value to query the LiveChatTranscript object, which stores all the information coming from the Pre-Chat.

The LiveChatTranscript object can be further customized to add more and more data for your context.
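If you want to sanity-check the action outside the bot, a quick anonymous Apex run could look like this (the chat key value is a placeholder; use the ChatKey of a real LiveChatTranscript record):

//illustrative smoke test of the invocable action
Bot_GetSnapInsPreChatData.PrechatInput input = new Bot_GetSnapInsPreChatData.PrechatInput();
input.sChatKey = 'your-chat-key-here';
List<Bot_GetSnapInsPreChatData.PrechatOutput> results =
    Bot_GetSnapInsPreChatData.getSnapInsPrechatData(
        new List<Bot_GetSnapInsPreChatData.PrechatInput>{ input });
//sLoggedUser is empty when the visitor was not authenticated
System.debug(results[0].sLoggedUser);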

You must also define a hidden Bot variable containing the Live chat transcript key, that must be called LiveAgentSessionId (and is populated by the Live Agent engine automatically):

Now you can safely configure your dialog, knowing whether the user is logged in or not, and change the Bot's behavior accordingly:

[Salesforce / Release Management] YOU SHALL NOT PASS! (Unless you pass the Quality Gates)

 

What happens if, once you learned how to walk, you start to run?

Well, you might stumble and fall.

Depending on how well you react you might get a bruise or a bloody nose.

What happens if you have a tool that can increase the speed of your releases?

Well, you might run into some issues upon releases.

Depending on if you are using version control, you might recover your work or not.

But from a project management perspective the main question is:
How do I avoid stumbling, falling and having a discussion about delivery capabilities with project stakeholders (aka “those who pay the bills”)?

Starting from the running example, we probably stumble because we do not control the movement at some point. Maybe, because we did not practice walking long enough or because the floor was uneven and “surprised” us. In any case, it is because we lost control.

Believe it or not, it’s similar in software development: If you lose control over what happens in your process ( = movement) you can damage the product you deliver.

We learned how to "walk" with Copado in the first post of this series, outlined how all team members contribute to an efficient movement in the second one, and started to "run" in the third post.

So before we even come close to falling, let’s talk about how to maintain control and achieve reproducible and predictable release outcomes.

Our final goal is to achieve that German thing everyone once in a while mentions: Quality.
(Hehe, World Cup 2018, that will stick for a while)

Quality gates are not there to annoy – they are a safety net

Working with Copado and having a seamless integration with Git in the background, we avoid a worst case scenario (fall on your face and damp the hit with your forehead) as we always have a way to roll back to a stable version.

But the process we introduced also contains two review points, which is something no release process should miss:

  • A review of your developments from a technical perspective by a peer or a lead developer. This will ideally prevent devs from introducing sketchy solutions and you get to keep your job.
  • A review of your developments from an end user perspective (Test). This will make sure end users don’t complain, and you get to keep your job.

There is a reason both steps are manual:

First of all, as of now, no AI will be able to understand how and why you developed your features. If a robot could review your code, then admit it, a robot can write your code and a lot of us would be out of business.

Also it is to ensure personal accountability for approvals, so people take it seriously.

If you work in an agile project environment the technical and business review might be detailed as a Definition of Done (DoD, aka “the list of things you need to accomplish to burn the story points”). A sample DoD can look like the following list:

  • Document your feature
  • Get peer approval
  • Ensure business test script is available
  • Deploy to QA
  • Test and approve story by business

So wouldn’t it be nice to tie the technical capability to move a feature to the next environment and release it based on the completion of your DoD/Quality Gates?

Validation rules & history tracking to save the day

Before we jump into any config work, let’s get an overview of what we need and what elements (fields, automations) Copado offers already:

  • Document your feature. Required: a field to indicate the document location. Available: yes, but not in the way we would like it. To configure: create a hyperlink field and validate that the field is not empty.
  • Get peer approval. Required: a field to indicate approval. Available: yes, but we don't jump into pull requests (maybe later, start small for now). To configure: create a checkbox and validate that the field is checked.
  • Ensure business test is available. Required: an indicator for an approved test script. Available: yes, but I would like to automate the checkbox, and I don't like the current name. To configure: create a checkbox on the User Story to indicate approved scripts, update the field on an approved test script, and validate that the field is checked.
  • Deploy to QA. Required: an indicator for the current org of a feature. Available: automated by Copado. To configure: nothing to do.
  • Test approved by business. Required: an indicator for approved tests. Available: yes, but the automation needs to be configured. To configure: create a checkbox on the User Story to indicate a successful test, update the field upon a successful test execution, and validate that the field is checked.

To sum up, in order to implement our quality gates we need:

  • 4 fields
  • 4 validation rules
  • 2 processes

And to make sure we can track who changed the checkboxes, we will set Feed Tracking on all of them, so that we see a nice history of DoD on the chatter feed related to the User Story. Also we need to modify the layout to make sure fields are displayed.

Create the required fields

Starting with the fields, we create the following custom fields on the Copado User Story object:

  • PeerReviewPassed__c, Checkbox
  • TestScriptReady__c, Checkbox
  • TestScriptPassed__c, Checkbox
  • DocumentationLocation__c, URL

For those who need a deeper explanation, click here.

Ensure accountability through history tracking

Next, we will enable those fields in Feed Tracking (Setup → Customize → Chatter → Feed Tracking → User Story). Enable the Object Field tracking if required, and select the fields we just created. Add more fields, if you consider them worth tracking. Status is always a good one.

Enforce process adherence with validation rules

Next, we need to tackle the validation rules.

As explained in the first post, a user story is deployed to the next environment when the "Promote & Deploy" checkbox is checked. Also, it can be selected for a manual promotion if you check the "Promote Change" checkbox. So our validation rules should fire with the following logic (a rough formula sketch follows the list):

  • If the story is still in the Dev environment, fire when "Promote & Deploy" or "Promote Change" is checked and any of the following is true:

    • Documentation Location is empty
    • Peer Review Passed is unchecked
    • Test Script Ready is unchecked

  • If the story is in the UAT environment, fire when "Promote & Deploy" or "Promote Change" is checked and:

    • Test Script Passed is unchecked
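As a rough sketch, the Dev-stage rule could look something like the formula below. The custom field API names are the ones created above; the copado__Promote_and_Deploy__c and copado__Promote_Change__c API names and the environment check are assumptions, so verify the actual field names in your Copado org:

/* Hypothetical validation rule on the Copado User Story object (Dev stage only).
   Add a condition on the story's current environment, whose field name
   depends on your Copado setup. */
AND(
    OR( copado__Promote_and_Deploy__c, copado__Promote_Change__c ),
    OR(
        ISBLANK( DocumentationLocation__c ),
        NOT( PeerReviewPassed__c ),
        NOT( TestScriptReady__c )
    )
)

The UAT-stage rule would follow the same structure, checking only TestScriptPassed__c.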

Modify the layout to make it easy for users to follow

We added a lot of fields, so let's make things easier for the end user to manage and modify the layout. Copado already has some fields which we do not use in our case, such as Documentation Complete, because they don't entirely fit it (also, it's about showing what can be done, right?).

However, there is a field called "Apex Tests Passed". It's a checkbox and it is set automatically if you hit the "Run Apex Tests" button on the story and the classes in your User Story have sufficient coverage. Also, there is already a section called "Definition of Done". We will just remove the unwanted fields and add our new ones.

Looks nice, and has a certain logic. Let’s check the validation rule:

Perfect.

We are done here, so the only thing left is to set up the process builder automations and go for an after work drink with colleagues and/or friends.

Reduce redundant clicks with process builder

The key here is to fit into the Copado approach a little to get our automations started. Our favorite release management tool assumes that a test script might not necessarily be written by a single developer, but possibly by a larger team, where a test script review process can be part of test script creation. (Yes, those testers again…)

To indicate that a test script is ready, its status needs to be marked as "Complete". And that's all the detail we need in order to create a Process Builder:

  • Object: Test Script (copado__Test_Script__c)
  • Event: On record creation or update
  • Conditions: Status equals Complete

As an immediate action, we want to update a record related to the record, which triggered the process. The object we want to update is the User Story, and the field we want to set is the Test Script Ready field, which should be checked ( = TRUE).

What we also could do is to set the status of the story accordingly (e.g. “Ready for Testing”), but that is not required for now.

To automate the "Test Script Passed" checkbox, we will create a different but similar process on the Test Run object.

  • Object: Test Run (copado__Test_Run__c)
  • Event: On record creation or update
  • Conditions: Status equals “Passed” or “Passed with Comments”

The immediate action, in this case, would need to update the Test Script Passed field on the related story. So it is more or less the same as for the process created before, just the target field changes to “Test Script Passed” = TRUE.

Done. Well, you should test it, of course 🙂

Try that with Jenkins. Or Bamboo. Or TeamCity.

Gosh, I love this tool.

That’s all nice, but what else could you do?

Although this is a rather simple scenario, it will already help any user follow the process, even if they haven't studied it intensively beforehand, simply by setting up the tool correctly to support what we outlined with boxes and arrows and by providing information on how to proceed.

We can now easily check on DoD status of stories with list views or reports to find gaps. If someone asks “Who approved this?” we just need to check the list of chatter posts generated by Salesforce. If we now start to discuss User Stories using the Chatter Feed, your team will be best buddies with any internal project audit initiative.

Going a step further, validation rules can help cover more complex scenarios, e.g. allowing a story to be validated, but not deployed, against a specific org.

And if your project has issues with code quality, a code analysis tool can be an enabler for your code review. Copado has built-in support for an open source framework to analyze Apex (PMD), but if your team is using CodeScan or Checkmarx for analyzing code, I have been told that CodeScan will be available in the next version (Copado v12) and Checkmarx integration is on the roadmap.

For those who want a more structured technical review process, Copado can be set up to mirror pull requests as records linked to the User Story, so with a checkbox, a process builder and a validation rule you would be able to prevent deploying stories, if a PR has not been passed yet.

I am aware that following a release process sometimes is annoying for developers, but there are good reasons for certain checks and why to perform them, even though the change seems small.

But thanks to tools like Copado, following the required steps is a breeze, and by automating repetitive field updates or notifications you can ensure frictionless releases.

